The Stephen Hawking Centre for Theoretical Cosmology (Cosmos) is set to use HPE’s in-memory technology to further its research into the early universe and black holes.
Cosmos is the first customer for HPE’s latest modular in-memory platform, Superdome Flex. The new supercomputer supports the faculty’s work, combining an HPE Superdome Flex with an HPE Apollo system based on Intel Xeon Phi processors.
The architecture borrows many of the technologies from The Machine, HPE’s concept for the future of computing.
According to HPE, the hardware will enable Cosmos to confront cosmological theory with data from the known universe and incorporate data from new sources, such as gravitational waves, the cosmic microwave background, and the distribution of stars and galaxies.
The computational power will help the group search for tiny signatures in huge data sets that could unlock the secrets of the universe, said HPE.
“Our Cosmos group is working to understand how space and time work, from before the first trillion trillionth of a second after the Big Bang up to today,” said Stephen Hawking, the Tsui Wong-Avery director of research in the University of Cambridge’s department of applied mathematics and theoretical physics.
“The recent discovery of gravitational waves offers amazing insights about black holes and the whole universe. With exciting new data like this, we need flexible and powerful computer systems to keep ahead so we can test our theories and explore new concepts in fundamental physics.”
Paul Shellard, professor of cosmology at the Stephen Hawking Centre for Theoretical Cosmology (CTC) at the University of Cambridge, said: “Our purpose is to test our mathematical theories. We create mini Big Bangs on the computers and make predictions about the universe, and we look at the real data to test theories on the origins of the universe.”
Describing how these simulations are used, Shellard said: “We collide black holes together to try to determine gravitational waveforms. Datasets are getting bigger. There is a flood of new data and new types of data. We have to analyse larger data sets and understand them using our mathematical models.”
Shellard said the task is challenging both computationally and in terms of the software development needed to support new data types.
Read more about HPE's HPC strategy
- Hewlett Packard Enterprise recently joined a project to develop exascale computing for the US Department of Energy. We assess the implications.
- A new server architecture based around memory is still a work in progress, but IT pros and leaders at HPE see its value today coming from incremental advancements.
“We have only just got enough people to sustain the programming. An HPC system is very complicated with vectors, threads, nodes and lots of memory hierarchies,” added Shellard.
Discussing how Superdome Flex’s in-memory approach allows scientists to test their theories more quickly, Shellard said: “We can ingest data into memory and work on it. It is very easy to program, and you can develop your data analytical pipeline very quickly.”
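Shellard’s point about in-memory pipelines is that once a dataset fits in RAM, each analysis stage consumes the previous stage’s output directly, with no round trips through intermediate files. A minimal sketch of that style, with hypothetical stage names (not from the Cosmos codebase):

```python
import statistics

def load(raw):
    """Ingest once: the whole dataset stays resident in memory."""
    return [float(x) for x in raw]

def clean(data):
    """Stage 1: drop bad samples, operating on the in-memory list."""
    return [x for x in data if x == x]  # NaN != NaN, so this filters NaNs

def summarise(data):
    """Stage 2: reduce to summary statistics, again without touching disk."""
    return {"mean": statistics.mean(data), "stdev": statistics.pstdev(data)}

# Each stage hands its result straight to the next — no intermediate files,
# which is what makes iterating on a pipeline like this fast.
result = summarise(clean(load(["1.0", "2.0", "3.0"])))
print(result["mean"])  # 2.0
```

The speed-up Shellard describes comes from removing the disk I/O between stages, not from the stages themselves being clever.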
He said the cosmologists are able to expand their research from correlating two points in the sky to three points, which requires a thousand times more computational power.
But with help from Intel, Shellard said, the researchers have been able to optimise the problem so that it now requires only a hundred times the computational power needed to analyse two points in the sky.
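The scaling behind these figures can be illustrated with a toy work estimate: a brute-force two-point correlation over N sky points examines every pair, O(N²), while a three-point correlation examines every triple, O(N³), so the extra work grows roughly as N/3 — reaching the thousand-fold jump Shellard mentions at realistic point counts. The function names below are illustrative, not the Cosmos group’s actual estimators.

```python
from math import comb

def two_point_work(n):
    """Pairs examined by a naive two-point correlation over n points."""
    return comb(n, 2)  # O(N^2)

def three_point_work(n):
    """Triples examined by a naive three-point correlation over n points."""
    return comb(n, 3)  # O(N^3)

n = 1000
print(two_point_work(n))    # 499500 pairs
print(three_point_work(n))  # 166167000 triples
print(three_point_work(n) / two_point_work(n))  # ratio ≈ (n - 2) / 3 ≈ 332.7
```

Optimisations of the kind Shellard credits to Intel attack the O(N³) term — pruning which triples are evaluated — rather than changing the underlying statistic.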
Memory-Driven Computing is the architecture central to HPE’s vision for the future of computing, based on a pool of memory accessed by compute resources over a high-speed data interconnect.
The shared memory and single system design of HPE Superdome Flex enables researchers to solve complex, data-intensive problems holistically and reduces the burden on code developers, enabling users to find answers more quickly, HPE said.