Hewlett Packard Enterprise (HPE) has developed what it claims is the world’s largest single-memory computer, comprising 160TB of memory, as part of its research project, The Machine.
The Machine, which is the largest research and development programme in the history of the company, is aimed at delivering a new paradigm called memory-driven computing – an architecture custom-built for the big data era.
The idea is to reduce the workarounds application programmers need to use to overcome the limitations set by current computer architectures.
At HPE Discover in London last November, Kirk Bresniker, chief architect at HPE, told Computer Weekly: “What we have done in the past is have elaborate schemes using large pools of disk or flash-based block devices that take a very long time to access. To speed this up, we pull in large chunks at a time and use a cache.”
This can increase the complexity of the application and, if data is not stored in high-speed cache memory, it can also affect the application’s performance. Tweaking databases to improve application performance is among the key tasks database administrators need to do to ensure the data that is most frequently accessed is available with the least amount of latency.
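The workaround Bresniker describes – pulling large chunks from a slow block device into a fast in-memory cache – can be sketched as a simple read-through cache. This is an illustrative sketch, not HPE code; the names (`SlowBlockDevice`, `CachedReader`) and the chunk size are assumptions made for the example.

```python
CHUNK_SIZE = 4  # records fetched per slow read (illustrative value)

class SlowBlockDevice:
    """Stands in for a high-latency disk or flash-based block device."""
    def __init__(self, records):
        self.records = records
        self.reads = 0  # counts how many slow chunk fetches occur

    def read_chunk(self, start):
        self.reads += 1
        return self.records[start:start + CHUNK_SIZE]

class CachedReader:
    """Read-through cache: on a miss, fetch a whole chunk into memory;
    serve subsequent requests for nearby records from the cache."""
    def __init__(self, device):
        self.device = device
        self.cache = {}

    def get(self, index):
        if index not in self.cache:
            base = (index // CHUNK_SIZE) * CHUNK_SIZE
            for offset, value in enumerate(self.device.read_chunk(base)):
                self.cache[base + offset] = value
        return self.cache[index]

device = SlowBlockDevice(list(range(100)))
reader = CachedReader(device)
values = [reader.get(i) for i in range(8)]  # 8 records, only 2 slow reads
print(values, device.reads)
```

The extra bookkeeping in `CachedReader` is exactly the kind of application complexity memory-driven computing aims to remove: with all data in a single large memory pool, the caching layer becomes unnecessary.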
Speaking about the latest development, HPE CEO Meg Whitman said: “The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day. To realise this promise, we can’t rely on the technologies of the past; we need a computer built for the big data era.”
Read more about The Machine
- HPE has developed a concept computing architecture which it says will power future generations of applications. We find out how it will change IT.
- 3D NAND piles flash cells on top of each other to tackle some of the scalability problems of flash storage, and is set to help achieve the all-flash datacentre.
- HPE aims to push the limits of computing by moving the memory bottleneck it claims is limiting the performance of application software.
According to HPE, the prototype hardware is configured with 160TB of memory – capable of simultaneously working with the data held in every book in the Library of Congress five times over.
Based on the current prototype, HPE said the architecture could scale to an exabyte of memory within a single system and, beyond that, to a nearly limitless pool of memory – 4,096 yottabytes, or 250,000 times the size of the entire digital universe today.
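A quick back-of-envelope sketch puts the quoted figures in perspective (decimal prefixes assumed: 1TB = 10^12 bytes, 1EB = 10^18 bytes, 1YB = 10^24 bytes):

```python
TERABYTE = 10**12
EXABYTE = 10**18
YOTTABYTE = 10**24

prototype = 160 * TERABYTE          # the 160TB prototype
single_system = EXABYTE             # an exabyte within a single system
memory_pool = 4096 * YOTTABYTE      # the nearly limitless pool quoted by HPE

# How many 160TB prototypes fit in one exabyte-scale system
print(single_system // prototype)   # → 6250

# How many exabyte-scale systems the 4,096-yottabyte pool represents
print(memory_pool // EXABYTE)       # → 4096000000
```

In other words, the exabyte-scale system is a 6,250-fold step up from today’s prototype, and the 4,096-yottabyte pool is roughly four billion times larger again.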
HPE said such a pool of memory would make it possible to work across every digital health record of every person on earth simultaneously, while also processing every piece of data from Facebook along with every trip of Google’s autonomous vehicles and every dataset from space exploration.