Hewlett Packard Enterprise (HPE) has quickly become one of the major suppliers of supercomputer-class computing, thanks in part to its purchase of one of the big names on the Top500 list.
In November 2016, HPE acquired high-performance computing (HPC) company SGI, pushing it up the Top500 supercomputer league table.
In fact, 26 of the 145 supercomputers HPE has ranked in the June 2017 Top500 list come from the SGI acquisition, with several of these systems ranked in the Top 50.
Analyst IDC estimated that the global HPC market would be worth $15.3bn by 2019 – and HPE is set to have a major share.
In January, IDC published a whitepaper that suggested the SGI acquisition would inject supercomputer skills into HPE’s business. IDC noted that the SGI employees who transferred across to HPE were experienced at designing hardware-software systems that compete at the bleeding-edge performance levels that typify leadership-class supercomputers.
HPE has now set itself a goal of achieving exascale computing, a level of computer processing power equivalent to the sum of all of today’s Top500 supercomputers combined.
In June, HPE was awarded a research grant from the US Department of Energy (DOE) to develop a reference design for an exascale supercomputer. The overall goal of the programme is to achieve exascale performance in 2022-23. To reach this goal, high-performance computers will need to be 10 times faster and more energy efficient than today’s fastest supercomputers.
Lower supercomputing energy consumption
Mike Vildibill, vice-president of advanced technology group at HPE, leads the company’s exascale activity. Speaking to Computer Weekly about the challenges of meeting exascale computing, he said: “If you look at the Top500 – added together, all 500 systems would be equivalent to one exaflop.”
In other words, a single exaflop is equivalent to all 500 of the world’s fastest supercomputer systems combined. In total, these systems consume 600MW (megawatts) of power, which Vildibill said approaches the output of a nuclear reactor. “The DOE wants to develop a system that only consumes 20MW of power.”
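A back-of-the-envelope sketch puts those figures in perspective. The 600MW and 20MW numbers come from the article; the exaflop conversion and efficiency arithmetic are our own illustration.

```python
# Rough efficiency arithmetic based on the figures quoted above.
# The power numbers are from the article; the calculation is illustrative.

EXAFLOP = 1e18                      # floating-point operations per second

top500_power_w = 600e6              # all Top500 systems combined (article)
exascale_target_w = 20e6            # DOE target for one exascale system

today_gflops_per_w = EXAFLOP / top500_power_w / 1e9
target_gflops_per_w = EXAFLOP / exascale_target_w / 1e9

print(round(today_gflops_per_w, 2))   # roughly 1.67 GFLOPS per watt
print(round(target_gflops_per_w, 1))  # 50.0 GFLOPS per watt
print(round(target_gflops_per_w / today_gflops_per_w))  # about a 30x gain
```

In other words, hitting one exaflop within a 20MW envelope implies roughly a 30-fold improvement in energy efficiency over today’s combined Top500, which is why Vildibill argues incremental chip advances alone will not get there.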
Vildibill does not believe future innovations in chip architecture will be enough to tackle the immense amount of energy needed for exascale computing. “Just waiting for technological advancement will not come close to achieving the energy requirements,” he said, predicting that the power to move data will exceed the power to process it. “By 2022, we will consume more power moving data than processing data.”
Read more about HPE high-performance computing
- The head of Hewlett-Packard Labs speaks to Computer Weekly about a new era of computing, where memory is no longer a constrained resource.
- The Machine is HPE’s proof-of-concept next-generation hardware architecture that aims to overcome the limits of today’s IT by using large memory arrays.
Rather than moving data between main memory and the processor, HPE’s architecture relies on the concept of memory-driven computing, where data is stored in main memory so that it no longer needs to be copied from one location to another for the computer’s processor to work on it.
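The idea can be illustrated in miniature. The sketch below is purely conceptual, not HPE’s implementation: an anonymous memory map stands in for the shared memory pool, and a zero-copy view stands in for a processor working on data in place.

```python
import mmap

# Conceptual illustration of memory-driven computing, not HPE's design:
# data lives in one shared pool and is worked on in place, rather than
# being copied between locations for each processor.

pool = mmap.mmap(-1, 1024)       # stand-in for a shared memory pool
pool[0:5] = b"hello"             # a "producer" writes data in place

view = memoryview(pool)          # a "consumer" gets a zero-copy view
assert view[0:5].tobytes() == b"hello"

# By contrast, bytes(pool[0:5]) would duplicate the data -- the kind of
# movement that memory-driven computing is designed to avoid.
```

The point of the sketch is simply that every consumer sees the same bytes without a copy step; at exascale, eliminating those copies is where the energy savings are claimed to come from.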
Memory-driven computing is at the heart of HPE’s exascale reference design. The work on exascale computing and memory-driven computing is derived from HPE Labs’ The Machine research programme.
HPE said fundamental technologies that will be instrumental in the exascale project include a new memory fabric and low-energy photonics interconnects. The company said it would be looking at non-volatile memory options that could attach to the memory fabric, significantly increasing the reliability and efficiency of exascale systems.
“HPE is taking some of the fundamental parts of The Machine and dramatically reducing the power of these systems,” said Vildibill.
Benefits beyond science
Supercomputers were previously used mainly by the largest organisations and government institutes for science-based applications. HPE sees a need for HPC systems beyond the scientific community. “Exascale is about crossing the frontier to make computing more accessible and affordable,” said Vildibill.
The research and development needed to create such a system is comparable to what went on at Nasa following president John F Kennedy’s speech announcing the Apollo Moon programme, which not only gave the world the Saturn V rocket, but also space blankets, solar panels and advances in the use of quartz for timing.
When asked why there would be a demand for exascale computing, Vildibill said: “The human genome comprises three billion base pairs and has created an entire industry based on computer analytics. The physical world is phenomenally complex. Common wheat has a genome five to six times as large, so there is an insatiable amount of computing that could be done.”
Vildibill believes HPE’s in-memory exascale computing research will ultimately benefit society, just as Nasa’s Apollo Moon programme did in the 1960s. “We see an explosion of data from areas such as artificial intelligence and machine learning, which create a lot of data. In the past, we had to hire a room full of developers to run HPC applications. In the future you would throw a bunch of raw data at the AI [artificial intelligence], and it starts postulating the questions that could be asked of that data.”
Another application Vildibill sees for exascale computing is the so-called digital twin. “There is an insatiable amount of compute we could do to model the physical world. The only barrier is the cost and resources we need.”
HPE has been investing in developing storage-class memory technology. Vildibill said that, unlike very expensive dynamic RAM, this non-volatile random access memory could be used to build extremely large memory arrays for large-capacity systems. It is also more power-efficient, according to Vildibill.
He said HPE is also developing software, firmware, advanced packaging and liquid cooling. All of these pieces will need to come together to achieve exascale computing.