
International Space Station runs HPE Apollo supercomputer

The first off-the-shelf supercomputer to run in space, Apollo 40, will be used for local number crunching and artificial intelligence applications

Following a 12-month trial, HPE has opened a high-performance computing (HPC) capability on the International Space Station (ISS).

HPE and Nasa said the Spaceborne Computer had successfully completed the one-year mission, proving it can withstand the harsh conditions of space – such as zero gravity, unscheduled power outages and unpredictable levels of radiation.

Following the trial, HPE said its supercomputing capabilities were ready for use aboard the ISS. These “above-the-cloud” services will allow space explorers and experimenters to run analyses directly in space instead of transmitting data to and from Earth, said HPE.

Eng Lim Goh, chief technology officer and vice-president for high-performance computing and artificial intelligence at HPE, said: “After gaining significant learnings from our first successful experiment with the Spaceborne Computer, we are continuing to test its potential by opening up above-the-cloud HPC capabilities to ISS researchers, empowering them to take space exploration to a new level.”

Experiments on the ISS capture high volumes of data, including high-resolution images and videos. In the majority of cases, however, researchers only need to view specific parts of the data. Enabling the dissection and processing of data onboard the ISS avoids latency and drives greater efficiency and speed.
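HPE has not described the onboard processing pipeline in detail, but the bandwidth argument is easy to illustrate. The sketch below is purely illustrative (the function, frame sizes and region of interest are invented): cropping a frame to the region a researcher actually needs shrinks what has to be downlinked by orders of magnitude.

```python
# Hypothetical sketch: why dissecting data onboard saves downlink bandwidth.
# All names and numbers are illustrative, not HPE's actual pipeline.

def crop_region(frame, row_range, col_range):
    """Return only the sub-grid of a 2D frame that the researcher needs."""
    r0, r1 = row_range
    c0, c1 = col_range
    return [row[c0:c1] for row in frame[r0:r1]]

# A 4K-class frame represented as a plain 2D list of pixel values.
full_frame = [[0] * 3840 for _ in range(2160)]
roi = crop_region(full_frame, (1000, 1100), (2000, 2200))

full_pixels = len(full_frame) * len(full_frame[0])
roi_pixels = len(roi) * len(roi[0])
print(f"downlink shrinks from {full_pixels} to {roi_pixels} pixels")
```

Here the 8.3 million-pixel frame reduces to a 20,000-pixel crop, which is the kind of saving that makes local processing pay off when the link to Earth is slow or intermittent.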

In a blog post published 11 months into the trial, Mark Fernandez, Americas HPC technology officer at HPE, noted that software for space needs to work very differently to Earthbound code. Specifically, he said that in space, the idea of a consistent network connection cannot be assumed.

“Assuming a consistent AOS (acquisition of signal) was an Earthly bias that crept into our software design. In the future, we plan to design our spacebound (or remote) software stack differently to account for the much more frequent network anomalies,” he wrote.
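Fernandez does not publish the redesigned stack, but the pattern he describes amounts to store-and-forward: persist outbound data locally, and drain the backlog whenever signal is reacquired. A minimal sketch, assuming a transmit callable that raises `ConnectionError` when the link is down (the class and its interface are invented for illustration):

```python
# Hypothetical store-and-forward uplink, assuming the link can drop
# without warning (no "consistent AOS"). Not HPE's actual software stack.
from collections import deque

class StoreAndForwardLink:
    def __init__(self, transmit):
        self.transmit = transmit   # callable; may raise ConnectionError
        self.backlog = deque()     # data held locally while signal is lost

    def send(self, packet):
        self.backlog.append(packet)
        self.flush()

    def flush(self):
        # Drain the backlog in order; stop at the first failure and keep
        # the remaining packets for the next acquisition of signal.
        while self.backlog:
            try:
                self.transmit(self.backlog[0])
            except ConnectionError:
                return False       # link down; retry on next AOS
            self.backlog.popleft()
        return True
```

Calling `flush()` again on each new acquisition of signal eventually delivers everything in order, which is the behaviour an Earthbound design that assumes an always-on network never has to provide.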

Another challenge for the astronauts, according to Fernandez, is that they are not IT experts.

“We’re used to writing instructions for customer replaceable units (CRUs) to enable IT-savvy customers to be able to resolve issues by using a provided replacement part. Astronauts are experts in a lot of things, but IT isn’t always one, especially when working in zero gravity. 

“These CRU guidelines are woefully inadequate to hand to astronauts. In a fairly extensive process, we developed detailed instructions for customers that aren’t trained in IT and tailored them for the space environment,” he explained in the post.

The final area of concern is that radiation in space will affect delicate computer equipment. “Since we can’t expect what the radiation environment in space will look like minute-by-minute, we’ve taken an upside-down approach to monitor all of the components,” said Fernandez.

“If we suspect a component is out of parameters, we hunker down into a safe mode. We stay in that safe idle configuration to make it through that time period. Once that event has passed, we execute a health check to ensure everything is performing well before resuming operation.”
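The "hunker down" strategy Fernandez describes is, in essence, a small state machine: monitor readings, idle in safe mode when any parameter goes out of bounds, and health-check before resuming. The sketch below is an assumption-laden illustration of that logic only; the parameter names and thresholds are invented, not HPE's actual telemetry.

```python
# Hypothetical sketch of the safe-mode strategy described in the article.
# Monitored parameters and thresholds are invented for illustration.

SAFE_RANGES = {"cpu_temp_c": (5, 85), "ecc_errors_per_min": (0, 10)}

class SafeModeMonitor:
    def __init__(self):
        self.state = "RUNNING"

    def check(self, readings):
        out_of_range = any(
            not (lo <= readings[name] <= hi)
            for name, (lo, hi) in SAFE_RANGES.items()
        )
        if out_of_range:
            self.state = "SAFE_MODE"   # hunker down until the event passes
        elif self.state == "SAFE_MODE":
            # Event appears over: run a health check before resuming.
            if self.health_check(readings):
                self.state = "RUNNING"
        return self.state

    def health_check(self, readings):
        # Placeholder check: re-verify every reading is back in range.
        return all(lo <= readings[n] <= hi for n, (lo, hi) in SAFE_RANGES.items())
```

A burst of corrected-memory errors, say, would drop the system into `SAFE_MODE`; only once readings return to normal and the health check passes does it transition back to `RUNNING`.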

The supercomputer will not only support local processing of vast datasets from experiments conducted on the ISS, but also pave the way for meeting the computing requirements of a manned mission to Mars.

HPE and Nasa said they aim to further improve independence for space explorers by enabling artificial intelligence and machine learning applications that can unlock new discoveries. Faster local insight, they said, will accelerate scientific findings, not only in space, but also in understanding Earth and its surrounding environment.
