Executive interview: Kirk Bresniker, chief architect, HPE

HPE has developed a concept computing architecture which it says will power future generations of applications. We find out how it will change IT

In November 2016, HPE demonstrated a prototype of its next-generation server hardware architecture, which it claims will reinvent the economics of computing.

Kirk Bresniker, chief architect at HPE, says memory has become the bottleneck of modern computing.

Today’s computers are generally derived from the architecture defined by mathematician John von Neumann in 1945 in which a central processing unit (CPU) fetches and runs instructions that are stored in memory.

But the von Neumann architecture has a memory bottleneck, which has led computer architects to create ever more sophisticated ways to optimise processing. “In the von Neumann architecture, instructions, program and data are all sent down one pipe,” says Bresniker.

Memory is considered an expensive commodity, which is why operating systems and large applications tend to use block storage devices such as hard disks as virtual memory. The operating system copies blocks of memory into and out of the storage device and the computer’s main memory (such as dynamic RAM) in order to run large applications and process vast datasets that exceed the amount of physical memory installed.
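
To make that block-swapping concrete, the minimal C sketch below shows the mechanism the paragraph describes: a file-backed mapping lets the operating system page data in from a block device on demand and evict it again under memory pressure. The file name and sizes are illustrative assumptions, not anything specific to HPE’s work.

```c
/* Minimal sketch: the OS pages a file-backed mapping in and out of DRAM
 * on demand, so a program can address a dataset larger than physical RAM.
 * The file name and size here are purely illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "huge_dataset.bin";   /* assumed example file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; pages are only loaded when first touched,
     * and the kernel evicts them again when memory runs short. */
    unsigned char *data = mmap(NULL, st.st_size, PROT_READ,
                               MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching a byte may trigger a page fault and a slow disk read. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += data[i];

    printf("checksum of first byte of each page: %lu\n", sum);
    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```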

Bresniker says: “What we have done in the past is have elaborate schemes using large pools of disk or flash-based block devices that take a very long time to access. To speed this up, we pull in large chunks at a time and use a cache.”

The cache effectively stores frequently accessed data in fast memory. Bresniker says this is only useful if the application is able to hit the cache a very large proportion of the time. “If the application does not fit this model, we will change the way we approach the problem to make it fit,” he says.
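
As a rough illustration of why the cache hit rate matters so much, the C sketch below sums the same matrix twice: row by row, which reuses each cache line, and column by column, which misses on almost every access. The matrix size is an arbitrary illustrative value, not a figure from HPE.

```c
/* Minimal sketch of why cache hit rate dominates performance: summing a
 * matrix row by row reuses each cache line, while summing it column by
 * column touches a new line on almost every access.  The matrix size is
 * an arbitrary illustrative value. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROWS 8192
#define COLS 8192

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int *m = malloc((size_t)ROWS * COLS * sizeof *m);
    if (!m) return 1;
    for (long i = 0; i < (long)ROWS * COLS; i++) m[i] = 1;

    double t0 = seconds();
    long row_sum = 0;
    for (long r = 0; r < ROWS; r++)          /* row-major: cache-friendly */
        for (long c = 0; c < COLS; c++)
            row_sum += m[r * COLS + c];
    double t1 = seconds();

    long col_sum = 0;
    for (long c = 0; c < COLS; c++)          /* column-major: mostly misses */
        for (long r = 0; r < ROWS; r++)
            col_sum += m[r * COLS + c];
    double t2 = seconds();

    printf("row-major %.3fs, column-major %.3fs (sums %ld, %ld)\n",
           t1 - t0, t2 - t1, row_sum, col_sum);
    free(m);
    return 0;
}
```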

With next-generation memory such as 3D NAND and memristor technology, Bresniker says it is possible to change the way developers approach application optimisation, because they would no longer need to ensure that the vast majority of data accesses hit the cache to avoid loading data from slow block storage devices.

“We want to change the basic economics of having relatively scarce amounts of memory,” he says.

HPE is collaborating with SanDisk, now owned by Western Digital, to provide low-cost, highly scalable resistive memory that offers the capacity of flash storage at the speed of computer memory (dynamic RAM).

Breaking Moore’s Law

A number of trends are driving modern computing. Bresniker says new classes of memory that change how memory scales are becoming available just as Moore’s Law, which has pushed computing speeds up until now, is starting to slow down.

As Moore’s Law becomes harder to sustain, hardware architects are looking at new designs, leading to the use of graphics processing units (GPUs) and field programmable gate arrays (FPGAs) to solve highly specific computational problems.

“We have a different type of computing scalability,” says Bresniker. “We can scale compute using specialised designs and we have memory that can scale up in capacity economically.”

HPE’s vision is a concept hardware architecture called The Machine, in which specialised processors work alongside a memory fabric built on next-generation memory to support new classes of applications.

Different types of computational engine can connect to this shared pool of memory. HPE’s concept is of a computational pipeline comprising CPUs, GPUs, FPGAs and application-specific integrated circuits (ASICs), each tasked with working on the same pool of memory.

“People doing massive computations and network function virtualisation love this system,” says Bresniker. Instead of data being copied and moved, one packet at a time, to different functional units for further processing, the data never moves. “You ingest data at one end, output at the other, but every piece of computational work is done in place,” he says.
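
The C sketch below is one way to picture that compute-in-place idea: three worker stages, standing in for heterogeneous engines attached to one memory fabric, transform a single shared buffer in turn instead of copying records between per-device buffers. The stage functions and sizes are illustrative assumptions, not HPE code.

```c
/* Minimal sketch of the "data never moves" pipeline: three worker stages
 * (stand-ins for a CPU, GPU and FPGA attached to one memory fabric) each
 * transform the same shared buffer in place instead of copying records
 * between per-device buffers.  Stage functions and sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define RECORDS 1000000

typedef void (*stage_fn)(int *pool, long n);

/* Stage 1: "ingest" writes raw values into the shared pool. */
static void ingest(int *pool, long n)
{
    for (long i = 0; i < n; i++) pool[i] = (int)(i % 97);
}

/* Stage 2: a filter-style transform, done in place. */
static void filter(int *pool, long n)
{
    for (long i = 0; i < n; i++)
        if (pool[i] < 10) pool[i] = 0;
}

/* Stage 3: an aggregation pass over the same memory. */
static void aggregate(int *pool, long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++) sum += pool[i];
    printf("aggregate over shared pool: %ld\n", sum);
}

int main(void)
{
    /* One allocation stands in for the fabric-attached memory pool. */
    int *pool = malloc((size_t)RECORDS * sizeof *pool);
    if (!pool) return 1;

    /* Each "engine" is handed the same pointer; nothing is copied. */
    stage_fn pipeline[] = { ingest, filter, aggregate };
    for (size_t s = 0; s < sizeof pipeline / sizeof pipeline[0]; s++)
        pipeline[s](pool, RECORDS);

    free(pool);
    return 0;
}
```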

The Hana insight

Building a relational database management system is well understood. “We have a certain amount of memory, a large amount of disk storage and we use some of this memory to buffer the storage,” says Bresniker. “And we can tell you, to many significant figures, exactly how many transactions per second a given amount of disk, memory and compute will achieve.”

According to Bresniker, an in-memory database such as SAP Hana is an example of a company taking the first step towards direct memory architectures. “What the Hana team did was to restructure how they access data, so that data is only held in memory,” he says. “This is a great example of someone taking a known approach and doing it differently, but they are limited by the type of system they can use. They are all in and they are committed to in-memory processing. How big can you make a Hana system? It is as big as the biggest memory system you can make.”
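
In code, the in-memory pattern Bresniker describes boils down to loading the working set into RAM once and answering every query with a direct memory scan, with no buffer pool paging blocks in from disk. The C sketch below is a minimal illustration under that assumption; the file name and column size are hypothetical and say nothing about SAP Hana’s internals.

```c
/* Minimal sketch of the in-memory approach: the whole working set (a
 * single integer column here) is loaded into RAM once, and every query
 * is answered by scanning memory directly, with no buffer pool paging
 * blocks in from disk.  The file name and column size are illustrative
 * assumptions, not SAP Hana internals. */
#include <stdio.h>
#include <stdlib.h>

#define ROWS 10000000

/* Load the column from storage exactly once, at startup. */
static int *load_column(const char *path, long rows)
{
    int *col = malloc((size_t)rows * sizeof *col);
    if (!col) return NULL;
    FILE *f = fopen(path, "rb");
    if (!f || fread(col, sizeof *col, rows, f) != (size_t)rows) {
        if (f) fclose(f);
        free(col);
        return NULL;
    }
    fclose(f);
    return col;
}

/* Every query is a pure memory scan: no disk I/O on the query path. */
static long count_equal(const int *col, long rows, int value)
{
    long hits = 0;
    for (long i = 0; i < rows; i++)
        if (col[i] == value) hits++;
    return hits;
}

int main(void)
{
    int *col = load_column("orders_status.col", ROWS); /* assumed file */
    if (!col) { fprintf(stderr, "could not load column\n"); return 1; }

    printf("rows with status 3: %ld\n", count_equal(col, ROWS, 3));
    free(col);
    return 0;
}
```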

Today, says Bresniker, systems such as HPE’s Superdome X have high memory capacity but also a relatively high cost. With a machine like the Superdome X, he says, a customer is likely to spend more on memory than on anything else.

“What we are trying to address with The Machine is how to create much larger memory arrays that I can afford,” he says.

More affordable memory

HPE is working to make memory more affordable, says Bresniker. This includes everything from the cost of manufacturing to the price the end customer pays and the cost of application development. “If I want an algorithm like the SAP team’s, adapted for random access to memory, the memory array needs to provide relatively uniform performance, because that allows the algorithm writer to be more carefree about how they access it,” he says.

In other words, uniform memory access means that, unlike a cache, there is no performance hit on the application when non-cached data is accessed. It also means, according to Bresniker, that the developer would be able to build an array that could hold an entire computational problem without having to worry about moving data in and out of slow block storage systems.
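
The kind of workload that benefits is one with no access pattern a cache or prefetcher can exploit. The C sketch below chases a scrambled chain of links through a pool far larger than any CPU cache; with relatively uniform access latency across a large memory array, a developer could write this sort of code naively rather than reorganising the data around blocks. The sizes are illustrative assumptions.

```c
/* Minimal sketch of the access pattern uniform memory favours: chasing a
 * scrambled chain of links through a pool far larger than any CPU cache,
 * where no cache or prefetcher can predict the next hop.  Sizes are
 * illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define NODES 50000000L   /* ~400 MB of links: far bigger than any cache */
#define HOPS  10000000L

int main(void)
{
    long *next = malloc(NODES * sizeof *next);
    if (!next) return 1;

    /* Sattolo's shuffle builds one long scrambled cycle, so every hop
     * lands somewhere unpredictable in the pool. */
    for (long i = 0; i < NODES; i++) next[i] = i;
    srand(42);
    for (long i = NODES - 1; i > 0; i--) {
        long j = rand() % i;              /* 0 .. i-1 */
        long tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }

    /* Each access depends on the previous one; locality is useless here. */
    long pos = 0, checksum = 0;
    for (long hop = 0; hop < HOPS; hop++) {
        pos = next[pos];
        checksum += pos;
    }

    printf("checksum after %ld hops: %ld\n", HOPS, checksum);
    free(next);
    return 0;
}
```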

Photonics is the other aspect of The Machine that excites Bresniker. Photons are lossless, so, unlike with electrical signalling, there is no need to boost the signal every 20cm, he says. It also means that, whether two systems connected by photonics are a few centimetres or several kilometres apart, the energy required to transmit the data remains constant.

“I am able to design ways to connect up memory with photonics, running and twisting the optical fibre in three-dimensional space, which is something you just cannot do with electronics,” he says.

The Machine is a concept architecture, but what comes out of the research and development indicates where the industry is taking next-generation computing platforms. For Bresniker, an architecture based around vast amounts of low-cost memory will rewrite the economics of computing and fundamentally alter the way applications are built.
