Interview: Mark Potter, CTO, HPE

The head of Hewlett Packard Labs speaks to Computer Weekly about a new era of computing, where memory is no longer a constrained resource

The Machine is the largest research and development programme in HPE’s history. Its goal is to deliver memory-driven computing.

Memory-driven computing puts memory, not the processor, at the centre of the computing architecture. Technologies coming out of the research programme are expected to be deployed in future HPE servers.

The elevator pitch is that because memory used to be expensive, IT systems were engineered to cache frequently used data and store older data on disk – but with memory being so much cheaper today, perhaps all data could be stored in-memory rather than on disk.

By eliminating the inefficiencies of how memory, storage and processors currently interact in traditional systems, HPE believes memory-driven computing can reduce the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds, to deliver real-time intelligence.

In an interview with Computer Weekly, Mark Potter, chief technology officer (CTO) at HPE and director of Hewlett Packard Labs, describes The Machine as a wholly new computing paradigm.

“Over the past three months we have scaled the system 20 times,” he says. The Machine is now running with 160TB of memory installed in a single system.

Superfast data processing

Fast communication between the memory array and the processor cores is key to The Machine’s performance. “We can optically connect 40 nodes over 400 cores, all communicating data at over 1Tbps,” says Potter.

He claims the current system can scale to petabytes of memory using the same architecture. Optical networking technology, such as splitting light into multiple wavelengths, could be used in the future to further increase the speed of communications between memory and processor.

Modern computer systems are engineered in a highly distributed fashion, with vast arrays of CPU cores. But, while we have taken advantage of increased processing power, Potter says data bandwidth has not grown as quickly.

As such, computational performance is now limited by how fast data can be read into the computer’s memory and fed to the CPU cores.

“We believe memory-driven computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society,” says Potter. “The architecture we have unveiled can be applied to every computing category – from intelligent edge devices to supercomputers.”

Compute power beyond compare

One area of interest is how the technology could be applied to high-performance computing (HPC), such as building an exaflop-scale supercomputer.

Such a system could match the compute power of all the Top 500 supercomputers combined, he says, while using far less electrical power.

“An exaflop system would achieve the equivalent compute power of all the top 500 supercomputers today, which consume 650MW of power,” says Potter. “Our goal is an exaflop system that can achieve the same compute power as the top 500 supercomputers while consuming 30 times less power.”
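
To put those figures in context, the sketch below works through the arithmetic in Python. It assumes only the two numbers quoted above, the 650MW combined draw and the 30-times reduction goal; the resulting figure of roughly 22MW is a back-of-the-envelope calculation, not a number from HPE.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above:
# 650MW for today's Top 500 combined, and HPE's stated 30x reduction goal.
top500_power_mw = 650      # combined power draw of the Top 500, per Potter
reduction_factor = 30      # HPE's stated efficiency target

target_power_mw = top500_power_mw / reduction_factor
print(f"Implied exaflop power budget: ~{target_power_mw:.1f} MW")  # ~21.7 MW
```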

Potter believes it is this combination of incredibly high performance by today’s standards and a fraction of the electrical power of a modern supercomputer that will be needed to support the next wave of internet of things (IoT) applications.

“We are digitising our analogue world. The amount of data continues to double every year. We will not be able to process all the IoT data being generated in a datacentre, because decisions and processing must happen in real time,” he says.

For Potter, this means putting high-performance computing out at the so-called “edge” – beyond the confines of any physical datacentre. Instead, he says, much of the processing required for IoT data will need to be done remotely, at the point where data is collected.

“The Machine’s architecture lends itself to the intelligent edge,” he says.

One of the trends in computing is that high-end technology eventually ends up in commodity products. A smartphone probably has more computational power than a vintage supercomputer. So Potter believes it is entirely feasible for supercomputer-class performance to be deployed in IoT systems to process sensor data locally.

Consider machine learning and real-time processing in safety-critical applications. “As we get into machine learning, we will need to build core datacentre systems that can be pushed out to the edge [of the IoT network],” he says.

It would be dangerous and unacceptable to experience any kind of delay when computing safety-critical decisions in real time, such as for processing sensor data from an autonomous vehicle. “Today’s supercomputer-level systems will run autonomous vehicles,” says Potter. 

Near-term deliverables

Technology from The Machine is being fed into HPE’s range of servers. Potter says HPE has run large-scale graph analytics on the architecture and is speaking to financial institutions about how the technology could be used in financial simulations, such as Monte Carlo simulations, for understanding the impact of risk.

According to Potter, these can run 1,000 times faster than today’s simulations. In healthcare, he says HPE is looking at degenerative diseases, where 1TB of data needs to be processed every three minutes. It is also investigating how to transition whole chunks of the medical application’s architecture to The Machine to accelerate data processing.
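
To give a sense of what such a workload looks like, the sketch below is a generic Monte Carlo value-at-risk calculation in Python. The parameters (portfolio value, return model, confidence level) are invented for illustration and are not HPE’s or any bank’s actual model; the point is that each trial is independent, which is why these simulations benefit from large pools of fast memory and many cores.

```python
# Generic Monte Carlo value-at-risk sketch, illustrative only.
# All parameters (portfolio value, return model, confidence level) are
# invented for this example and do not come from HPE.
import random

def simulate_annual_return(mean=0.05, volatility=0.2):
    """Draw one annual portfolio return from a simple normal model."""
    return random.gauss(mean, volatility)

def value_at_risk(initial_value=1_000_000, trials=100_000, confidence=0.95):
    """Estimate the loss that is not exceeded with the given confidence."""
    losses = sorted(
        -initial_value * simulate_annual_return() for _ in range(trials)
    )
    return losses[int(confidence * trials) - 1]

if __name__ == "__main__":
    print(f"95% one-year VaR: ~{value_at_risk():,.0f}")
```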

Read more about HPE’s The Machine

  • HPE has developed a concept computing architecture which it says will power future generations of applications. We find out how it will change IT.
  • HPE aims to push the limits of computing by moving the memory bottleneck it claims is limiting the performance of application software.

From a product perspective, Potter says HPE is accelerating its roadmap and plans to roll out more emulation systems over the next year. He says HPE has also worked with Microsoft to optimise SQL Server for in-memory computing, in a bid to reduce latency.

Some of the technology from The Machine is also finding its way into HPE’s high-end server range. “We have built optical technology into our Synergy servers, and will evolve it over time,” he adds.

Today, organisations build massive scale-out systems that pass data in and out of memory, which is not efficient. “The Machine will replace many of these systems and deliver greater scalability in a more energy-efficient way,” concludes Potter. 
