
Enterprise HPC: Why HPE is buying Cray

HPE’s decision to acquire supercomputing pioneer Cray for $1.3bn serves to highlight the growing importance of high-performance computing (HPC) deployments in the enterprise market

HPE raised a few eyebrows with the news of its proposed acquisition of supercomputing pioneer Cray in May 2019, with the move serving to highlight the growing importance of high-performance computing (HPC) in many areas of business computing.

The agreement will see Cray acquired by HPE in a transaction valued at approximately $1.3bn. The deal is expected to close by the first quarter of HPE’s fiscal year 2020, which ends in January.

HPE is the lead supplier in the global HPC market in terms of market share and revenue, but this is made up of a large volume of sales of relatively modest hardware deployments. Cray, on the other hand, plays at the top end of the market, deriving much of its revenue from selling into customers with the most demanding requirements, such as government research laboratories.

In fact, Cray’s revenue is two-thirds government and one-third commercial, while for HPE the converse is true, with one-third coming from government work and two-thirds commercial revenue.

From this point of view, the acquisition makes sense, as the two companies complement each other in terms of their market position and technologies. This view is echoed by Tim Zimmerman, vice-president of network services and infrastructure at analyst Gartner.

“The two organisations have sales synergy in university research and government markets or any opportunities that are focused on data-intensive artificial intelligence (AI) projects that require supercomputing capabilities. The Cray product line will also create a combined offering that extends HPE’s current compute, storage, interconnect and software product line,” he says.

Mutually beneficial

The move means HPE has effectively bought itself some high-end market share, as Cray recently secured contracts for several new advanced supercomputer systems, including one to be installed at the US Department of Energy’s Argonne National Laboratory. This deployment is expected to be the first true exascale supercomputer, capable of processing one exaflops – a billion billion calculations per second – when it becomes operational.
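To put that exascale figure in context, a rough back-of-the-envelope calculation (the comparison machine here is an illustrative assumption, not from the article) shows how long one second of exascale work would take a gigaflops-class desktop core:

```python
# Illustrative arithmetic only: comparing one second of exascale compute
# against a hypothetical 1-gigaflops desktop core.
EXAFLOPS = 10**18   # floating-point operations per second at exascale
GIGAFLOPS = 10**9   # a modest single core, assumed for comparison

ops = EXAFLOPS  # the workload: one second of exascale computation
seconds_on_desktop = ops / GIGAFLOPS
years_on_desktop = seconds_on_desktop / (365 * 24 * 3600)

# One second of exascale work is roughly a billion seconds, or about
# three decades, on the assumed gigaflops core.
print(f"{seconds_on_desktop:.0e} seconds, about {years_on_desktop:.0f} years")
```

In other words, the gap between exascale systems and commodity hardware is around nine orders of magnitude, which is why such machines remain the preserve of national laboratories.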

Additionally, the acquisition should give Cray greater financial security, following several years in which the company saw its revenue shrink as customers decided to sweat their existing supercomputing assets and postpone investment in newer, faster systems.

“In the short term, this does offer Cray customers greater stability, since Cray will be able to leverage the supply chain and manufacturing efficiencies of HPE as well as share in operational expense synergies. This will allow Cray the resources to continue to focus on delivering supercomputer solutions that extend the HPE product family,” says Zimmerman. 

Meanwhile, customers can also expect to see the benefit of HPE gaining Cray’s expertise in a number of key areas, such as where more traditional HPC workloads are now converging with AI and big data analytics. These are starting to feed into mainstream business applications already, and are likely to play an increasingly important role in future, claims Steve Conway, senior research vice-president at Hyperion Research.

“People associate HPC with arcane scientific research, but it entered the computing mainstream a long time ago,” he says. “Lately, HPC has been moving more into new environments, crossing over into enterprise IT to support sales and marketing, for example.”

This is only to be expected in the era of big data, where ever greater volumes of data are being stored and retained by organisations in the hope that they will yield key business intelligence insights or otherwise deliver extra value, perhaps leading to new business models.

Conway adds that in a recent survey of CIOs, datacentre managers and IT directors, 36% of those interviewed said they were already using HPC in applications for business operations. This enables them to address more complicated questions than before, such as analysing sales data to find out not just who the top salespeople are in a particular territory, but how those sales were made and which products were involved, so as to maximise future revenue.

Previously, it might have taken weeks to do all the calculations, but with today’s systems you can crunch through the data half a dozen times a day, says Conway. Because of use cases such as this, the overall HPC market is projected to be worth about $39bn by 2023, he adds.

Misgivings over HPE-Cray M&A

Some in the industry have a few misgivings regarding the pending acquisition. HPE has not always been successful in assimilating the companies and technologies it buys. As was the case with PDA maker Palm some years ago, some of its acquisitions effectively just drop off the face of the Earth.


“HPE does not have a good reputation for acquiring companies and making them work. It has a reputation for acquiring technology and companies and then they simply disappear and it kills the brand,” says Ovum distinguished analyst Roy Illsley.

However, it is hoped the Cray brand carries enough weight that HPE will allow it to operate more or less autonomously within the parent company, at least when it comes to the research and development of technologies for the high-end supercomputer systems for which it has established a global reputation.

“We anticipate that Cray will be managed as its own business unit as part of the Hybrid IT organisation. In the longer term, we see Cray working with HPE Labs and the rest of the Hybrid IT team to deliver high-performance compute, storage, software and services,” says Gartner’s Zimmerman.

Merging HPC portfolios

HPE has also hinted that it plans to take some of the advanced technology from Cray’s high-end systems and cross-pollinate it into its existing HPC portfolio. The latter comprises the HPE Apollo and HPE SGI product families, with the SGI line-up being a legacy of HPE’s acquisition of that company in 2016.

“This portfolio will be further strengthened by leveraging Cray’s foundational technologies and adding complementary solutions,” claimed HPE president and CEO Antonio Neri, during his speech announcing the deal.

What might those technologies be? Supercomputers today are largely based on technology that will be familiar to the IT staff running a corporate datacentre, being made up of a large number of compute nodes that are often running x86 processors. The difference is in their sheer scale, and the way the nodes are linked, typically using complex topologies and high-speed interconnects such as InfiniBand or Intel’s Omni-Path.

Cray’s Shasta architecture, used in the new supercomputers the firm is building, is designed to deliver high performance through flexibility, enabling a system to be built using a mix of processor types, including x86, ARM, GPU, or field programmable gate array (FPGA) chips. This is an important development, since many of the demanding new workloads, such as those using machine learning (ML), benefit greatly from acceleration using specialised hardware such as GPUs.

A key part of this architecture is a new Cray-designed interconnect called Slingshot. This is intended to serve as a high-performance backbone with each link operating at 200Gbps. Cray has made Slingshot interoperable with Ethernet, which means Slingshot switches can connect directly to third-party Ethernet-based storage systems and to standard datacentre Ethernet networks. HPE says this is one technology it will be looking to capitalise on.

“With Moore’s Law, internal collaboration and the need for additional computing requirements for data-intensive AI/ML applications or HPCaaS, the opportunity does exist with technology such as the Slingshot interconnect,” agrees Zimmerman.

HPCaaS is high-performance computing as a service, which in this case is a reference to GreenLake, HPE’s pay-as-you-go consumption model for on-premises IT and hybrid cloud, which the company says it is looking to expand with the Cray purchase.

Slingshot may even have helped clinch the HPE-Cray deal after Nvidia acquired Mellanox earlier this year. Mellanox is a leading maker of high-performance networking hardware, and Nvidia has taken a significant slice of the AI and ML market with products based around its GPUs, including its own purpose-built DGX systems. The threat posed by this merger may have spurred HPE to secure its own high-speed interconnect technology.


The signals from HPE and Cray appear to suggest that Cray will be allowed to continue to operate as it does now in the high-end supercomputing arena, while gaining the benefit of HPE’s greater scale and global distribution. Meanwhile, customers should eventually start to see the benefit of some of Cray’s advanced technology being infused into other HPE products.

Cray’s president and CEO, Peter Ungaro, indicated as much in a blog posting about the acquisition, saying: “We are excited about integrating the Shasta system architecture, software, programming environment, Slingshot and ClusterStor technologies across HPE’s broad product portfolio and advanced research programmes.”

In other words, the acquisition looks set to bring together Cray’s advanced technology with HPE’s broader product portfolio, providing enterprise customers with integrated solutions that meet the growing requirements of emerging data-intensive applications and services.
