Intel Cascade Lake vs AMD Rome: Who will rule the datacentre roost in 2019?
Intel and AMD are both making a renewed push into the enterprise datacentre market in 2019, with both set to release new processors. We take a look at what they have to offer
The enterprise server market has started getting interesting again over the course of the past year, as AMD returned to the fray with its EPYC platform to go head-to-head with Intel’s Xeon chips. Now, both firms have detailed new processors that are coming in 2019, which display their respective thinking about the trends that are shaping the modern datacentre.
Things have changed somewhat since the EPYC launch in the middle of 2017. AMD now has its chips in enterprise systems from Dell EMC, HPE, Cisco and Supermicro, and is steadily gaining market share.
Meanwhile, Intel appears to have hit a few bumps in the road, with reports that it is having difficulty supplying enough chips to meet demand, while the introduction of its 10nm chip technology has been delayed yet again until sometime in 2019.
As might be expected, both companies have opted to increase the number of processor cores per socket to boost performance. However, each has its own specific improvements: AMD delivers more instructions per clock (IPC) through a new microarchitecture, while Intel adds new instructions aimed at accelerating deep learning workloads, plus support for its Optane memory technology in DIMM slots.
Intel’s offerings, codenamed Cascade Lake, represent the next generation of the Xeon Scalable family. These are due for an official launch in early 2019, but Intel has disclosed details of one of the top-end parts, Cascade Lake Advanced Performance (AP), which will boast up to 48 cores and feature 12 DDR4 memory channels, allowing for double the memory capacity of existing Xeons.
Meanwhile, AMD is also readying the next generation of EPYC, codenamed Rome, for launch in 2019, based on updated Zen 2 cores. This tops out with an impressive 64 cores per socket, double that of the existing EPYC family, but retains the eight DDR4 memory channels and 128 lanes of PCIe I/O so the new chips will fit in the same motherboard sockets as the first generation.
However, PCIe support has been upgraded to the PCIe 4.0 standard, which offers twice the bandwidth per lane of PCIe 3.0 and should deliver faster throughput with devices such as NVMe SSDs and Ethernet adapters that are compatible with PCIe 4.0.
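As a rough illustration of that doubling, per-lane bandwidth can be estimated from the published PCI-SIG link rates and line encoding (8 GT/s for PCIe 3.0, 16 GT/s for PCIe 4.0, both using 128b/130b encoding); the helper function below is our own sketch, not vendor code:

```python
# Approximate usable bandwidth per PCIe lane, per direction.
# PCIe 3.0: 8 GT/s; PCIe 4.0: 16 GT/s; both use 128b/130b line encoding.

def lane_bandwidth_gbps(transfer_rate_gt, payload_bits=128, line_bits=130):
    """Effective GB/s per lane after line-code overhead (1 GT/s = 1 Gbit/s raw)."""
    return transfer_rate_gt * (payload_bits / line_bits) / 8  # bits -> bytes

gen3 = lane_bandwidth_gbps(8)    # ~0.985 GB/s per lane
gen4 = lane_bandwidth_gbps(16)   # ~1.969 GB/s per lane

print(f"PCIe 3.0 x16: {gen3 * 16:.1f} GB/s per direction")
print(f"PCIe 4.0 x16: {gen4 * 16:.1f} GB/s per direction")
```

For an x16 slot that works out to roughly 15.8 GB/s versus 31.5 GB/s per direction, before protocol overheads.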
Under the hood
There are some surprises when you look at what is inside the chip package of these new processors. Both Cascade Lake AP and the Rome EPYC are multi-chip packages (MCPs), meaning they are made up of more than one silicon chip, with internal connections tying them together.
In Intel’s case, Cascade Lake AP is effectively two 24-core chips joined together, since other Cascade Lake SKUs are set to have six DDR4 memory channels, the same as the current Skylake generation.
The chips are connected using one Ultra Path Interconnect (UPI) link from each, with the same type of link used to connect between sockets externally (Cascade Lake AP supports one or two sockets).
AMD’s existing EPYC chips are made up of four separate “chiplets”, cross-linked with the Infinity Fabric high-speed interconnect. The upcoming Rome EPYC chips, however, are radically different, comprising a single I/O and memory controller chip that is surrounded by up to eight chiplets, each of which carries eight Zen 2 cores.
This separation means the I/O and memory controller can be manufactured using the same 14nm process used for the first EPYC chips, while the new Zen 2 chiplets are made using a newer 7nm process.
Those Zen 2 cores also boast some architectural enhancements, with the width of the floating point units doubled to 256 bits, plus an improved branch predictor and better instruction pre-fetching.
As a result, the new EPYC chips boast a 29% improvement in IPC over the first generation. AMD also claims it has halved the power consumption per operation in Zen 2.
As Cascade Lake is staying put on Intel’s 14nm production process, it might seem there are few differences between the upcoming chips and the existing Xeon Scalable family.
However, Intel has added some unspecified hardware mitigations to combat the Spectre and Meltdown vulnerabilities, as well as new Vector Neural Network Instructions (VNNI) to accelerate deep learning tasks, and support for Intel’s Optane DC Persistent Memory DIMMs.
With VNNI, also known as Intel Deep Learning Boost, the firm is claiming a performance improvement of up to 17x over the Skylake family, thanks to the ability to handle INT8 convolutions in a single instruction rather than the three separate AVX-512 instructions previously required. However, this obviously requires application code to be optimised for Cascade Lake.
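In essence, a VNNI dot-product instruction multiplies unsigned 8-bit values by signed 8-bit values and accumulates the results into 32-bit integer lanes in one step. The pure-Python sketch below is our own illustrative model of that per-lane behaviour, not Intel code:

```python
def vnni_dot_accumulate(acc, a_u8, b_s8):
    """Toy model of an INT8 dot-product-accumulate step.

    Each 32-bit accumulator lane absorbs the dot product of four
    unsigned 8-bit values with four signed 8-bit values -- the work
    that previously took three separate AVX-512 instructions.
    """
    assert len(a_u8) == len(b_s8) == 4 * len(acc)
    out = []
    for i, a in enumerate(acc):
        group = range(4 * i, 4 * i + 4)
        out.append(a + sum(a_u8[j] * b_s8[j] for j in group))
    return out

# One accumulator lane, four 8-bit activation/weight pairs:
print(vnni_dot_accumulate([10], [1, 2, 3, 4], [5, -6, 7, 8]))  # [56]
```

Collapsing the multiply, widen and add into one operation is where the claimed speed-up for INT8 inference comes from, which is why existing AVX-512 code sees no benefit without recompilation.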
Meanwhile, Intel’s Optane DC Persistent Memory offers a new tier in the memory hierarchy between DRAM and storage, owing to its higher latency but greater capacity. It comes in a DIMM form factor in 128GB, 256GB and 512GB capacities, whereas the largest DRAM DIMMs currently available, only recently launched, top out at 256GB.
Intel envisages Cascade Lake servers fitted with a combination of DRAM and Optane. Two modes are supported: App Direct Mode and Memory Mode. The first is intended for software that is Optane-aware and can choose whether data should go into DRAM or the larger, persistent Optane memory.
In Memory Mode, DRAM acts as a cache for the Optane memory, with the processor’s memory controller ensuring the most frequently accessed data is in DRAM.
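Memory Mode thus behaves much like a hardware-managed cache. As a loose analogy only (our own toy model, not how Intel’s memory controller is actually implemented), a small fast tier fronting a large slow tier with least-recently-used eviction might look like this:

```python
from collections import OrderedDict

class MemoryModeCache:
    """Toy analogy for Memory Mode: a small DRAM-like cache in front
    of a large Optane-like backing tier, evicting least-recently-used data."""

    def __init__(self, dram_lines):
        self.dram = OrderedDict()   # fast tier, acting as a cache
        self.optane = {}            # slow, large backing tier
        self.capacity = dram_lines

    def write(self, addr, value):
        self.optane[addr] = value   # backing tier always holds the data
        self._fill(addr, value)

    def read(self, addr):
        if addr in self.dram:       # hit: served at DRAM speed
            self.dram.move_to_end(addr)
            return self.dram[addr]
        value = self.optane[addr]   # miss: fetched from the Optane tier
        self._fill(addr, value)
        return value

    def _fill(self, addr, value):
        self.dram[addr] = value
        self.dram.move_to_end(addr)
        if len(self.dram) > self.capacity:
            self.dram.popitem(last=False)  # evict least recently used

cache = MemoryModeCache(dram_lines=2)
cache.write(0x10, "hot")
cache.write(0x20, "warm")
cache.write(0x30, "cold")           # pushes 0x10 out of the DRAM tier
print(0x10 in cache.dram)           # False: now served from the slow tier
```

The point of the analogy is that software sees one large pool of memory; the controller, like the toy cache, decides transparently which data sits in the fast tier.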
Intel therefore seems to be following a strategy of adding optimisations to accelerate specific workloads, such as the deep learning instructions and Optane memory, the latter of which could prove useful for in-memory databases or analytics. However, these typically require code to be written specifically to take advantage of them.
AMD, on the other hand, is pushing the price/performance proposition by offering a large number of cores at a low price point – as it did with the first generation EPYC – as well as better performance per watt.
It should be noted that neither Intel nor AMD has yet detailed pricing for the new chips, but Intel’s top-end Skylake parts were priced in excess of $10,000, several times the price of the top-end EPYC, and it seems likely this difference will persist.