
HPE increases ‘The Machine’ memory pool 20-fold

The Machine is HPE’s proof-of-concept next-generation hardware architecture that aims to overcome the limits of today’s IT by using large memory arrays

Hewlett Packard Enterprise (HPE) has developed what it claims is the world’s largest single-memory computer, comprising 160TB of memory, as part of The Machine research project.

The Machine, which is the largest research and development programme in the history of the company, is aimed at delivering a new paradigm called memory-driven computing – an architecture custom-built for the big data era.

The idea is to reduce the workarounds application programmers need to use to overcome the limitations set by current computer architectures.

At HPE Discover in London last November, Kirk Bresniker, chief architect at HPE, told Computer Weekly: “What we have done in the past is have elaborate schemes using large pools of disk or flash-based block devices that take a very long time to access. To speed this up, we pull in large chunks at a time and use a cache.”

This can increase the complexity of the application and, if data is not stored in high-speed cache memory, it can also affect the application’s performance. Tweaking databases to improve application performance is among the key tasks database administrators need to do to ensure the data that is most frequently accessed is available with the least amount of latency.
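The chunked block-caching workaround Bresniker describes can be sketched in a few lines. This is an illustrative example only, not HPE code; the chunk size, the `ChunkCache` class and the `read_chunk` device interface are all assumptions made for the sketch.

```python
CHUNK_SIZE = 4096  # bytes pulled from slow storage per miss (assumed)

class ChunkCache:
    """Read-through cache: pull large chunks from a slow block
    device and serve later reads from fast memory."""

    def __init__(self, block_device):
        self.device = block_device  # any object with read_chunk(offset, size)
        self.cache = {}             # chunk index -> bytes held in memory

    def read(self, offset, length):
        """Return `length` bytes starting at `offset`, caching whole chunks."""
        result = bytearray()
        while length > 0:
            chunk_idx = offset // CHUNK_SIZE
            if chunk_idx not in self.cache:  # cache miss: slow device access
                self.cache[chunk_idx] = self.device.read_chunk(
                    chunk_idx * CHUNK_SIZE, CHUNK_SIZE)
            chunk = self.cache[chunk_idx]
            start = offset % CHUNK_SIZE
            take = min(length, CHUNK_SIZE - start)
            result += chunk[start:start + take]
            offset += take
            length -= take
        return bytes(result)
```

The application-level complexity is visible even in this toy version: the caller’s simple byte-range read is translated into chunk bookkeeping, and any read that misses the cache pays the full device latency — exactly the overhead a single large memory pool is meant to remove.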

Speaking about the latest development, HPE CEO Meg Whitman said: “The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day. To realise this promise, we can’t rely on the technologies of the past; we need a computer built for the big data era.”


According to HPE, the prototype hardware is configured with 160TB of memory – capable of simultaneously working with the data held in every book in the Library of Congress five times over.

Based on the current prototype, HPE said the architecture could scale to an exabyte of memory within a single system and, beyond that, to an almost limitless pool of 4,096 yottabytes – 250,000 times the size of the entire digital universe today.
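The scale of those claims can be checked with back-of-the-envelope arithmetic (decimal units assumed, as is conventional for storage capacities):

```python
# Capacities quoted in the article, in decimal (SI) units.
TB = 10**12
EB = 10**18
YB = 10**24

prototype = 160 * TB        # current 160TB prototype
single_system = 1 * EB      # claimed single-system ceiling
memory_pool = 4096 * YB     # claimed near-limitless pool

# An exabyte system would hold 6,250 times the prototype's memory,
# and the 4,096-yottabyte pool is over four billion exabytes.
print(single_system // prototype)   # 6250
print(memory_pool // single_system) # 4096000000
```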

HPE said such a pool of memory would make it possible to work across every digital health record of every person on earth simultaneously, while also processing every piece of data from Facebook along with every trip of Google’s autonomous vehicles and every dataset from space exploration.
