
HPE demos memory-driven architecture for next-generation IT

HPE aims to push the limits of computing by removing the memory bottleneck it claims is limiting the performance of application software


HPE has given the first demonstration of its next-generation server hardware infrastructure, which it claims will reinvent the economics of computing.

Traditionally, computers follow a model developed by John von Neumann in the 1940s. According to HPE, the cost of memory is the limiting factor in the von Neumann architecture.

Speaking at the start of HPE Discover in London, Kirk Bresniker, CTO for HPE’s enterprise group, said: “There are billions of devices coming online. There has to be a better way to think about an architecture than one that is six decades old. Moore’s Law is starting to slow down, and at the same time we are seeing a massive rise in the amount of data.”

Bresniker said HPE had set out to investigate how to design a computer system to cope with the coming growth in data.

“Rather than isolating data and the CPU, we want to make insight to data central and allow algorithms to run in a way that is not possible today,” he said.

HPE proposes that a modern computing architecture should no longer be processor-centric, but memory-driven.

The architecture has been developed as part of The Machine, HPE’s research programme, and its proof-of-concept prototype is what the company claims is a major milestone in its efforts to transform the fundamental architecture on which all computers have been built for the past 60 years.

“We have achieved a major milestone with The Machine research project, one of the largest and most complex research projects in our company’s history,” said Antonio Neri, executive vice-president of HPE’s enterprise group.

“With this prototype, we have demonstrated the potential of memory-driven computing and also opened the door to immediate innovation. Our customers and the industry as a whole can expect to benefit from these advancements as we continue our pursuit of game-changing technologies.”

Read more about next-generation hardware

  • JP Morgan has gone live with a supercomputer for fixed income trading operations. The investment bank is using a Maxeler dataflow supercomputer to analyse and profile intra-day trading risk.
  • The Blazegraph database runs on graphics processing units to speed graph traversals. Machine learning in the form of Google TensorFlow is also a GPU target.

Bresniker added: “Memory is distributed throughout the system.”

This memory is connected with photonics to reduce latency. The idea is that applications are engineered to run in-memory. The processing part can run on a CPU, a system on a chip, an application-specific integrated circuit (Asic) or, in the future, field-programmable gate arrays or a yet-to-be-invented quantum computer.

HPE is collaborating with SanDisk, now owned by Western Digital, to provide low-cost, highly scalable resistive memory that offers the capacity of flash storage while running at the speed of computer memory (dynamic RAM).

Through the collaboration, the pair hope to develop memory at the same price as storage, which effectively means applications could be engineered to use vast amounts of memory on tap. In doing so, applications gain the performance boost of direct memory access, rather than relying on far slower block storage protocols typically used in flash storage products such as solid-state drives (SSDs).
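To illustrate the difference in programming model, the minimal Python sketch below (not HPE code) contrasts block-style access through seek and read calls with byte-addressable access through a memory mapping, which is closer to how an application would address a large pool of memory. The file name, offset and value are hypothetical stand-ins.

```python
import mmap
import struct

PATH = "dataset.bin"   # hypothetical file standing in for a large dataset
OFFSET = 4096          # hypothetical location of an 8-byte integer

# Create a small stand-in data file so the sketch is self-contained.
with open(PATH, "wb") as f:
    f.write(b"\x00" * OFFSET)
    f.write(struct.pack("<q", 42))

# Block-style access: go through seek() and read() on the storage stack.
with open(PATH, "rb") as f:
    f.seek(OFFSET)
    block_value = struct.unpack("<q", f.read(8))[0]

# Memory-style access: map the file and address its bytes directly,
# much as an application would address a byte-addressable memory pool.
with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
    mapped_value = struct.unpack_from("<q", mem, OFFSET)[0]

print(block_value, mapped_value)  # both print 42
```

The point of the sketch is the access pattern, not performance: the second path addresses data by offset in memory rather than issuing block reads through a storage protocol.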

The proof-of-concept blade server uses 1TB of memory. HPE claims it can accelerate big data processing on Apache Spark 15-fold and run large-scale graph queries 100 times faster.
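The Spark figure turns on keeping working data in memory rather than re-reading it from storage. As a rough, generic illustration of that idea (not HPE’s benchmark), the PySpark sketch below pins a hypothetical dataset in memory so that repeated queries are served from RAM; the column names and data are invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark import StorageLevel

# Generic PySpark sketch: keep a working dataset in memory so repeated
# queries reuse the cached copy instead of re-reading from storage.
spark = SparkSession.builder.appName("in-memory-sketch").getOrCreate()

# Hypothetical dataset standing in for a large volume of event data.
events = spark.createDataFrame(
    [(i, i % 10) for i in range(100_000)],
    ["event_id", "category"],
)

# Pin the dataset in memory; subsequent actions reuse the in-memory copy.
events.persist(StorageLevel.MEMORY_ONLY)

# Two passes over the same data: the second benefits from the cached copy.
print(events.count())
print(events.groupBy("category").count().collect())

spark.stop()
```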
