
How HPE is climbing the supercomputer league to reach exascale performance

Hewlett Packard Enterprise recently joined a project to develop exascale computing for the US Department of Energy. We assess the implications

Hewlett Packard Enterprise (HPE) has quickly become one of the major suppliers of supercomputer-class computing, thanks in part to its purchase of one of the big names on the Top500 list.

In November 2016, HPE acquired high-performance computing (HPC) company SGI, pushing it up the Top500 supercomputer league.

In fact, 26 of the 145 HPE systems ranked in the June 2017 Top500 list came from the SGI acquisition, with several of them placed in the top 50.

Analyst IDC estimated that the global HPC market would be worth $15.3bn by 2019 – and HPE is set to have a major share.

In January, IDC published a whitepaper that suggested the SGI acquisition would inject supercomputer skills into HPE’s business. IDC noted that the SGI employees who transferred across to HPE were experienced at designing hardware-software systems that compete at the bleeding-edge performance levels that typify leadership-class supercomputers.

HPE has now set itself the goal of achieving exascale computing, a level of processing power roughly equivalent to all of today’s Top500 supercomputers combined.

In June, HPE was awarded a research grant from the US Department of Energy (DOE) to develop a reference design for an exascale supercomputer. The overall goal of the programme is to achieve exascale performance in 2022-23. To reach this goal, high-performance computers will need to be 10 times faster and more energy efficient than today’s fastest supercomputers.

Lower supercomputing energy consumption

Mike Vildibill, vice-president of the advanced technology group at HPE, leads the company’s exascale activity. Speaking to Computer Weekly about the challenges of reaching exascale computing, he said: “If you look at the Top500 – added together, all 500 systems would be equivalent to one exaflop.”

In other words, a single exaflop is equivalent to all 500 of the world’s fastest supercomputer systems combined. In total, these systems consume about 600MW (megawatts) of power, which Vildibill said approaches the output of a nuclear reactor. “The DOE wants to develop a system that only consumes 20MW of power.”
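
Taken at face value, those round numbers imply a jump in energy efficiency of roughly 30 times – from about 1.7 gigaflops per watt for the Top500 in aggregate to 50 gigaflops per watt for a 20MW exascale machine. The back-of-envelope sketch below, in Python, works through the arithmetic using the article’s approximate figures rather than measured data:

    # Round numbers quoted above (approximations, not measured data):
    # the Top500 aggregate is taken as ~1 exaflops drawing ~600MW, and the
    # DOE target is 1 exaflops within a 20MW power envelope.
    top500_flops = 1e18        # ~1 exaflops across all 500 systems
    top500_watts = 600e6       # ~600MW combined power draw
    exascale_flops = 1e18      # a single exascale system
    exascale_watts = 20e6      # DOE's 20MW budget

    current = top500_flops / top500_watts      # flops per watt today
    target = exascale_flops / exascale_watts   # flops per watt required

    print(f"Today's aggregate: {current / 1e9:.1f} gigaflops per watt")
    print(f"Exascale target:   {target / 1e9:.0f} gigaflops per watt")
    print(f"Efficiency gain needed: {target / current:.0f}x")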

Vildibill does not believe future innovations in chip architecture will be enough to tackle the immense amount of energy needed for exascale computing. “Just waiting for technological advancement will not come close to achieving the energy requirements,” he said, predicting that the power to move data will exceed the power to process it. “By 2022, we will consume more power moving data than processing data.”


Rather than moving data between storage, memory and processor, HPE’s architecture relies on the concept of memory-driven computing, where data is held in a large pool of main memory so that it no longer needs to be copied from one location to another for the processor to work on it.

Memory-driven computing is at the heart of HPE’s exascale reference design. The work on exascale and memory-driven computing is derived from The Machine research programme at Hewlett Packard Labs.
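
As a very loose, single-machine analogy for that idea – not HPE’s design – the short Python sketch below keeps data in one anonymous memory-mapped pool that successive processing steps write and update in place, rather than copying it from step to step; the pool size and the “hello” payload are purely illustrative:

    # Illustrative analogy only: successive steps work on one shared,
    # memory-mapped pool in place instead of copying data between steps.
    import mmap

    POOL_SIZE = 1024 * 1024                  # hypothetical 1MiB pool

    with mmap.mmap(-1, POOL_SIZE) as pool:   # anonymous in-memory mapping
        pool[:5] = b"hello"                  # one step writes directly into the pool
        pool[0:1] = b"H"                     # a later step updates the same bytes in place
        print(pool[:5].decode())             # "Hello" -- both steps shared one pool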

HPE said fundamental technologies that will be instrumental in the exascale project include a new memory fabric and low-energy photonics interconnects. The company said it would be looking at non-volatile memory options that could attach to the memory fabric, significantly increasing the reliability and efficiency of exascale systems.

“HPE is taking some of the fundamental parts of The Machine and dramatically reducing the power of these systems,” said Vildibill.

Benefits beyond science

Supercomputers were previously used mainly by the largest organisations and government institutes for science-based applications. HPE sees a need for HPC systems beyond the scientific community. “Exascale is about crossing the frontier to make computing more accessible and affordable,” said Vildibill.

The research and development needed to create such a system is a bit like the effort at Nasa following president John F Kennedy’s speech announcing the Apollo Moon programme, which gave the world not only the Saturn V rocket, but also space blankets, solar panels and advances in the use of quartz for timekeeping.

When asked why there would be a demand for exascale computing, Vildibill said: “The human genome comprises three billion base pairs and has created an entire industry based on computer analytics. The physical world is phenomenally complex. The common wheat has five to six times as many genomes, so there is an insatiable amount of computing that could be done.”

Vildibill believes HPE’s in-memory exascale computing research will ultimately benefit society, just as Nasa’s Apollo moon programme did in the 1960s. “We see an explosion of data such as artificial intelligence and machine learning, which create a lot of data. In the past, we had to hire a room full of developers to run HPC applications. In the future you would throw a bunch of raw data at the AI [artificial intelligence], and it starts postulating the questions that could be asked of that data.”

Another application Vildibill sees for exascale computing is the so-called digital twin. “There is an insatiable amount of compute we could do to model the physical world. The only barrier is the cost and resources we need.”

HPE has been investing in developing storage-class memory technology. Vildibill said that, unlike very expensive dynamic RAM, this non-volatile random access memory (RAM) could be used to build extremely large memory arrays for high-capacity systems. It is also more power-efficient, according to Vildibill.

He said HPE is also developing software, firmware, advanced packaging and liquid cooling. All of these pieces will need to come together to achieve exascale computing.


Join the conversation

1 comment


SGI purchased Cray Research in 1996 and spun the Cray business unit off in 2000. That business unit was acquired by Seattle-based Tera Computer Company and the combined Tera + Cray entity was renamed Cray Inc.

SGI and Cray have charted very different paths since then and none of the Cray Top500 systems on the current list have any affiliation with SGI. That also means that none of those Cray systems have any affiliation with HPE and the statement "this means HPE now has four of the 10 fastest supercomputers in the Top500 list" is patently incorrect.