Processors re-imagined: Will cloud and AI mean all change in the datacentre?

We look at the changes cloud-native platforms and artificial intelligence could bring about in the datacentre.

Intel has long dominated the datacentre, with its x86 server chips accounting for upwards of 90% of the market for the past decade or longer. But the datacentre is now undergoing changes, as enterprise workloads start to incorporate big data analytics and machine learning, while cloud-native deployment models such as containers and serverless computing are on the rise.

“Cloud provides opportunities for any new processor system to make headway, as the total platform can be a mix of so many things,” says independent analyst Clive Longbottom.

“Historic platforms had to be pretty much uniform (hence x86) but virtualisation and containerisation have made this less of an issue. Plus, the acceptance of things like GPUs [graphics processing units], FPGAs [field programmable gate arrays] and ASICs [application-specific integrated circuits] has meant that workloads can be targeted at defined areas of a platform as and when needed,” he adds.

Intel is well aware of the changing nature of workloads, as demonstrated by some of the features introduced with the Cascade Lake Xeon server processors earlier this year. These include the Vector Neural Network Instructions (VNNI), extensions to the existing AVX-512 vector processing instructions that are intended to accelerate the low-precision multiply-accumulate calculations at the heart of deep learning inference.
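
To make that concrete, here is a minimal sketch, assuming a compiler and CPU with AVX512-VNNI support, of the kind of int8 dot-product kernel these instructions target (the function name and array sizes are illustrative):

```c
/* Minimal sketch of the int8 dot-product kernel VNNI accelerates.
 * One _mm512_dpbusd_epi32 instruction replaces the three-instruction
 * multiply-accumulate sequence (maddubs/madd/add) that earlier AVX-512
 * chips needed. Build with e.g. gcc -O2 -mavx512f -mavx512vnni. */
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product of n unsigned-int8 activations against signed-int8
 * weights, accumulated in 32 bits; n must be a multiple of 64 here. */
static int32_t dot_u8s8(const uint8_t *a, const int8_t *w, size_t n) {
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512(a + i);
        __m512i vw = _mm512_loadu_si512(w + i);
        /* 64 u8*s8 products, summed in groups of four into sixteen
         * 32-bit accumulators, in a single instruction. */
        acc = _mm512_dpbusd_epi32(acc, va, vw);
    }
    return _mm512_reduce_add_epi32(acc);
}

int main(void) {
    uint8_t a[64];
    int8_t w[64];
    for (int i = 0; i < 64; i++) { a[i] = 1; w[i] = 2; }
    printf("dot = %d\n", dot_u8s8(a, w, 64));  /* expect 128 */
    return 0;
}
```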

Cascade Lake also added support for Intel’s Optane DC Persistent Memory, which can be fitted into DIMM slots to expand the overall memory capacity of a server without filling it with expensive DRAM. This could prove useful for in-memory database processing and as datasets used in analytics grow ever larger.
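
In this mode, software addresses the modules through ordinary loads and stores rather than block I/O. Below is a minimal sketch using PMDK's libpmem, assuming an Optane module exposed as a DAX-mounted filesystem at /mnt/pmem (the path and sizes are illustrative):

```c
/* Minimal sketch of app-direct persistent memory access via PMDK's
 * libpmem (link with -lpmem). Assumes a persistent memory module
 * exposed as a DAX-mounted filesystem at /mnt/pmem. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    size_t mapped_len;
    int is_pmem;
    /* Map (creating if necessary) a 4 KiB file on the DAX filesystem. */
    char *addr = pmem_map_file("/mnt/pmem/demo", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) { perror("pmem_map_file"); return 1; }

    strcpy(addr, "survives a power cycle");
    if (is_pmem)
        pmem_persist(addr, mapped_len);   /* flush CPU caches to media */
    else
        pmem_msync(addr, mapped_len);     /* fall back to msync */

    printf("wrote: %s (%zu bytes mapped)\n", addr, mapped_len);
    pmem_unmap(addr, mapped_len);
    return 0;
}
```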

Next up on Intel’s roadmap is Cooper Lake. This was supposed to have been released this year, but is now due in 2020. This will be followed by Ice Lake, which is set to be Intel’s first server chip family manufactured using a 10nm (nanometre) process. Ice Lake features a redesigned core with wider and deeper instruction pipelines, and will also implement more instructions targeting artificial intelligence (AI) and machine learning.

Coming in 2021 is a major overhaul called Sapphire Rapids, set to be used in the Aurora A21 exascale supercomputer Intel is building. Aurora will also feature Intel’s Ponte Vecchio Xe GPU accelerator. In fact, Intel says what it calls an XPU approach, combining central processing units (CPUs), GPUs, FPGAs and other accelerators, is key to future workloads.

AMD’s EPYC return

Meanwhile, AMD has been playing the comeback kid with its Epyc processors. The second generation, codenamed Rome, launched in August, with up to 64 CPU cores and eight memory channels per socket. The cores themselves have enhancements such as wider floating point units, an improved branch predictor and better instruction pre-fetching, all of which lead to a claimed 29% improvement in instructions per clock over the first Epyc chips.

Looking to the future, AMD expects to deliver a third generation of Epyc in 2020, codenamed Milan, which will offer further tweaks to the CPU cores. This will be followed in 2021 by a fourth generation codenamed Genoa. Little is currently known about this, but it may be manufactured using a 5nm process and support DDR5 memory.

With the successful introduction of the second-generation Epyc, AMD seems to have gained the confidence of partners and customers, with major suppliers such as Dell EMC, HPE and Lenovo already launching systems based on it. The fact that AMD’s chips offer comparable performance to Intel’s Xeon at a lower price has also helped.

Then there is ARM, which does not manufacture its own chips, instead leaving that to its licensee partners. The ARM ecosystem has had a number of false starts in the server market: Calxeda, Qualcomm, Broadcom and even AMD have all brought ARM-based server chips to market before dropping out for one reason or another.

The current crop of hopefuls comprises Marvell with the ThunderX line it inherited via its acquisition of Cavium; Fujitsu with the A64FX chip that it is using to build supercomputers; and Ampere Computing, founded by former Intel president Renée James, with the eMAG range of processors.

ARM itself has drawn up an ambitious roadmap for future core designs aimed at the datacentre under the Neoverse project. The first of these, Ares, was unveiled in February 2019 and is designed to scale up to 128 cores. This will be followed in 2020 by Zeus with enhancements to make it 30% faster than Ares. Further out, Poseidon is expected to be optimised for a 5nm production process.

The problem ARM licensees have often faced is that the ecosystem supporting ARM-based servers is much less mature than that for x86. This is despite ARM support in key software stacks such as Red Hat Enterprise Linux and Ubuntu Server, as well as in most of the big open source projects.

“ARM should have done better than it has. It makes a great low-power edge chip or discrete workload server – but ARM does not seem to have managed to get the partners together to make this a real play,” says Longbottom.

Perhaps tellingly, most of the current crop of ARM server chips are found either in supercomputers or in hyperscale environments, where the lower power consumption of ARM chips is a key advantage. Fujitsu’s A64FX chip powers Japan’s Fugaku supercomputer, while Marvell’s ThunderX2 is also found in supercomputers and deployed (for internal use only) in Microsoft’s Azure datacentres. Ampere’s chips power some server instances at bare-metal cloud provider Packet.

Power to the people

IBM has been treading its own path with its Power systems, targeting demanding enterprise workloads and focusing on Linux and other open source tools. The current Power9 processors are designed to offer more bang per buck than Intel’s chips, with up to 24 cores running four or eight hardware threads each, and support for up to 8TB of memory.

Power9 was also the first chip to support PCIe 4.0, enabling high-speed connections to accelerators. In addition, it sports BlueLink ports running Nvidia’s NVLink 2.0 protocol, allowing GPU accelerators to be connected at a higher speed than even PCIe 4.0.
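
To put rough numbers on that (these are vendor headline rates per direction, not measured throughput, and assume the common Power9 configuration of three NVLink bricks per GPU):

$$\text{PCIe 4.0 x16: } \frac{16\,\mathrm{GT/s} \times 16\ \text{lanes} \times \tfrac{128}{130}}{8\ \text{bits per byte}} \approx 31.5\ \mathrm{GB/s}\ \text{per direction}$$

$$\text{NVLink 2.0: } 25\,\mathrm{GB/s}\ \text{per brick} \times 3\ \text{bricks} = 75\ \mathrm{GB/s}\ \text{per direction}$$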

On the roadmap, Power10 is slated for 2020 and may double the number of cores to 48, as well as introducing PCIe 5.0 I/O.

The Power architecture has previously not been considered a real challenger to Intel and x86 in the mainstream, but IBM’s recent acquisition of Red Hat may change that.

“Power is still a great architecture, with hardware virtualisation built in, great workload balancing and so on,” says Longbottom. “It has a part to play – provided IBM plays it properly, and Red Hat gives it a great way to play it.”

IBM is pushing a hybrid and multicloud vision based on technologies such as Linux and Kubernetes. Under this vision, Red Hat Enterprise Linux becomes the default operating system for Power systems, and IBM has wasted no time in repackaging many of its key software products, such as DB2 and WebSphere, to run on Red Hat’s OpenShift container platform.

The latter brings us back to the march of cloud-native approaches to developing applications, which emphasise open source tools such as Linux, Docker-style containers and Kubernetes. None of these technologies is tied to any single processor architecture, which makes the choice of server architecture less important – or so you might think.
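
The portability point is easy to illustrate: portable source code compiles unchanged for x86, ARM or Power, and a multi-architecture container image can then carry one binary per architecture under a single tag. A trivial C sketch:

```c
/* Trivial sketch: this source builds unchanged on x86_64 (Xeon/Epyc),
 * aarch64 (ThunderX2, eMAG) or ppc64le (Power9). A multi-architecture
 * container image bundles one binary per architecture under a single
 * tag, and the runtime pulls whichever matches the node it lands on. */
#include <stdio.h>
#include <sys/utsname.h>

int main(void) {
    struct utsname u;
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }
    /* Prints e.g. "x86_64", "aarch64" or "ppc64le" */
    printf("running on %s (%s)\n", u.machine, u.sysname);
    return 0;
}
```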

Meanwhile, serverless computing has also been garnering attention. This takes abstraction to a new level, enabling the customer (in theory) simply to run their code without having to worry about provisioning and managing the underlying infrastructure that runs it. Serverless computing is generally cloud-hosted, AWS Lambda being the best-known example, although on-premise platforms also exist.
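
What that abstraction hides is an event loop like the one sketched below in C with libcurl, against AWS Lambda’s published custom-runtime HTTP API. Managed runtimes run this loop on the customer’s behalf, which is precisely the point; error handling is trimmed and the echo “handler” is illustrative.

```c
/* Compressed sketch of the event loop behind a serverless function,
 * using AWS Lambda's custom-runtime HTTP API (link with -lcurl). */
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

static char request_id[128];

/* Pull the invocation id out of the response headers. */
static size_t on_header(char *buf, size_t size, size_t nitems, void *ud) {
    (void)ud;
    static const char key[] = "Lambda-Runtime-Aws-Request-Id:";
    size_t len = size * nitems, klen = sizeof key - 1;
    if (len > klen && strncasecmp(buf, key, klen) == 0) {
        char tmp[160];
        size_t vlen = len - klen < sizeof tmp - 1 ? len - klen
                                                  : sizeof tmp - 1;
        memcpy(tmp, buf + klen, vlen);
        tmp[vlen] = '\0';
        sscanf(tmp, " %127[^\r\n]", request_id);
    }
    return len;
}

static size_t on_body(char *buf, size_t size, size_t nitems, void *ud) {
    fwrite(buf, size, nitems, (FILE *)ud);  /* accumulate event payload */
    return size * nitems;
}

int main(void) {
    const char *api = getenv("AWS_LAMBDA_RUNTIME_API");
    if (api == NULL) {
        fprintf(stderr, "not running inside a Lambda environment\n");
        return 1;
    }
    curl_global_init(CURL_GLOBAL_DEFAULT);

    for (;;) {  /* one iteration per invocation */
        char *event = NULL;
        size_t event_len = 0;
        FILE *body = open_memstream(&event, &event_len);
        char url[512];

        /* 1. Long-poll the Runtime API for the next event. */
        CURL *c = curl_easy_init();
        snprintf(url, sizeof url,
                 "http://%s/2018-06-01/runtime/invocation/next", api);
        curl_easy_setopt(c, CURLOPT_URL, url);
        curl_easy_setopt(c, CURLOPT_HEADERFUNCTION, on_header);
        curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, on_body);
        curl_easy_setopt(c, CURLOPT_WRITEDATA, body);
        curl_easy_perform(c);
        curl_easy_cleanup(c);
        fclose(body);

        /* 2. "Handle" the event: here we simply echo it back. */

        /* 3. POST the result against the captured request id. */
        c = curl_easy_init();
        snprintf(url, sizeof url,
                 "http://%s/2018-06-01/runtime/invocation/%s/response",
                 api, request_id);
        curl_easy_setopt(c, CURLOPT_URL, url);
        curl_easy_setopt(c, CURLOPT_POSTFIELDS, event);
        curl_easy_perform(c);
        curl_easy_cleanup(c);
        free(event);
    }
}
```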

In reality, while new-build applications and services may be cloud-native, there are still a great many legacy workloads in everyday operation in enterprises, and these may be tied to a particular platform, typically x86 servers. Organisations may eventually refactor or replace these with cloud-native versions, but for now most are likely to stick with x86 servers on-premise for this reason.

Overall, Intel is likely to continue to dominate the server market for the near future at least. AMD is likely to grab some share of this, if Epyc can continue to beat Intel on price/performance.

Beyond x86, ARM is making inroads into supercomputing and hyperscale environments, but whether ARM servers will be seen in any numbers in the enterprise is questionable. Power is trickier to gauge, but there are plenty of Red Hat enterprise customers that may be tempted by IBM’s claims that Power systems can handle demanding workloads at a lower overall cost than x86 servers.
