Dedicated HPC infrastructure still rules, says Huawei

A Huawei executive makes the case for dedicated high-performance computing (HPC) infrastructure, despite the growth of public cloud services that have democratised access to HPC and AI capabilities

This article can also be found in the Premium Editorial Download: CW ASEAN: Preparing for 5G

Major cloud suppliers such as Google, Microsoft and Amazon may have made high-performance computing (HPC) and artificial intelligence (AI) more accessible to organisations, but dedicated HPC infrastructure still rules when crunching heavy workloads.

In an interview with Computer Weekly, Francis Lam, director of HPC product management at Huawei, said HPC retains significant business and operational value for organisations with consistently high computing demands. These include workloads such as molecular bonding simulations and modelling drug interactions with microorganisms and human cells.

“While lighter workloads can be processed on cloud-like architectures, organisations that run a higher workload will require something more extreme in scale, absolute performance and customisable hardware requirements,” he said.

Lam also pointed out that public cloud resources for AI projects are predominantly used in “bursting scenarios” – when workloads are inconsistent and resources are needed only for relatively short periods to handle demand above peak capacity.

“Users with higher computing demands who are currently running dedicated HPC infrastructure will see little benefit in decommissioning their systems in favour of dedicated cloud resources, as their workloads are vastly different,” he said.

Lam said many of these users also have private cloud resources, keeping their HPC data secure in private networks rather than relying on a public host for important data.

“Hence, there is still relevance for HPC, despite the continued growth of commoditised public infrastructure for such HPC applications,” he added.

Still, Lam acknowledged that public cloud-based HPC workloads, while representing a small portion of HPC consumption today, are expected to grow.

“The increasing adoption of cloud for HPC in many cases comes not from replacing on-premise deployments, but from attracting new users who cannot afford dedicated HPC infrastructure,” he said.

HPC is a growing segment of enterprise computing with more than $35bn in worldwide spending in 2016, and is forecast to grow to nearly $44bn in 2021, according to market research firm Intersect360 Research.

With AI expected to be a key driver of this growth, suppliers such as Huawei and Lenovo have intensified their efforts to bring products and services to market in recent years.

Huawei, for example, has built the Atlas platform targeted at AI and HPC workloads, while Lenovo has taken a more consultative stance, helping enterprises to identify the benefits of AI and HPC through testbed projects.

Although the US is still the leader in HPC, China – home to two of the world’s most powerful supercomputers and widely expected to be the first country to field a supercomputer capable of crunching one quintillion calculations per second – is fast catching up.

“Chinese HPC companies have come a long way, and are now recognised as industry leaders in both providing HPC technologies, as well as for their noteworthy supercomputer and hyperscale deployments,” Lam said.

“There are still many pressing scientific and commercial problems to be solved; future decades will present greater opportunities for the HPC community worldwide,” he added.

Read more about HPC and AI in APAC

  • The growing adoption of AI is driving more organisations in Asia to turn to HPC, according to a senior Lenovo executive.
  • Australian researchers are using Amazon’s Lambda serverless computing service to solve pressing health problems.
  • A bottom-up approach towards data modelling is needed to address the shortcomings of physical and theoretical models in artificial intelligence.
  • Australia’s CSIRO has upgraded its high performance computing infrastructure to keep pace with global research.
