
AMD fleshes out EPYC datacentre server processor strategy

AMD is preparing to go head-to-head once more with Intel in the datacentre, with the release of its EPYC server processor family

Chipmaker AMD is returning to the datacentre arena with the launch of the EPYC server processor family, which is pitched at stealing market share from Intel by offering better price/performance and innovative features such as encrypted memory.

The AMD EPYC 7000 series is available immediately, and is set to feature from the third quarter of 2017 in servers from leading enterprise suppliers HPE, Dell EMC and Lenovo, plus others that target hyperscale and HPC customers such as Supermicro, Sugon and H3C.

EPYC is a new design based on the Zen processor core introduced in the firm’s Ryzen consumer chips earlier this year. It will compete not just against Intel’s current Xeon offerings, but also against the yet-to-be-released Xeon Processor Scalable chips based on the Skylake architecture.

AMD is hoping to draw customers to systems based on EPYC rather than Intel’s rival Xeon chips by offering a product that is built from scratch with enterprise requirements in mind, according to the firm.

“We asked, ‘How do we provide the performance that is needed not only for today’s workloads but also for where the market is headed?’” said Scott Aylor, AMD’s corporate vice-president of Enterprise Solutions.

The answer was to try to outdo Intel’s chips in several key areas. The EPYC family offers up to 32 of AMD’s Zen processor cores, while Intel’s current Xeon parts max out at 24 cores. Each EPYC processor also has eight memory channels – double the number on Intel’s current Xeons – enabling servers to be configured with larger memory capacities of up to 2TB per socket when using 128GB DIMMs.
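The 2TB-per-socket figure follows directly from the channel count. A back-of-the-envelope sketch, assuming two DIMM slots per channel (an assumption implied by AMD’s figure, not stated in the article):

```python
# Sanity check of EPYC's quoted per-socket memory ceiling.
channels_per_socket = 8    # eight memory channels per EPYC socket
dimms_per_channel = 2      # assumed: two DIMM slots per channel
dimm_capacity_gb = 128     # largest DIMM size quoted by AMD

capacity_gb = channels_per_socket * dimms_per_channel * dimm_capacity_gb
print(capacity_gb)  # 2048GB, i.e. 2TB per socket
```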

In terms of performance, the top-end EPYC 7601 chip can deliver up to 47% higher integer performance compared with Intel’s Xeon E5-2699A V4, and up to 75% higher floating point performance, according to AMD benchmarks.

But it isn’t just about raw performance. AMD said it has carefully balanced performance with energy consumption through features such as workload-aware power optimisation and the ability to vary the supply voltage on a per-core basis. The result, according to AMD, is that an EPYC chip consumes up to 55% less energy per unit of computation than an Intel Xeon.

Locking down the chip

Security is another area where AMD is making a play, with EPYC featuring a dedicated security subsystem centred on an integrated Secure Processor. This provides key generation and management for data encryption, and also serves as a hardware root of trust that validates the system at boot-up.

The security features also extend to Secure Memory Encryption (SME) and Secure Encrypted Virtualisation (SEV). These use hardware circuitry to encrypt some or all of the memory on the fly, with SEV using encryption to isolate virtual machines from each other and from the host system. The two features ensure that data remains protected from malware even if malicious code gains access to an application’s memory space.

With EPYC, AMD is taking aim squarely at the largest segment of the server market: the two-socket machines that account for around 80% of x86 servers.

AMD believes it can compete with Intel at all price points here, but also thinks it can win some of that market with one-socket configurations instead, thanks to the larger memory capacity and the fact that even single-socket versions offer up to 32 cores.

This is another area where AMD aims to differentiate itself from Intel: by offering the full set of features and capabilities across the product line-up. Intel’s current crop of Xeons is divided into E3, E5 and E7 lines, with increasing levels of functionality.


“We have the philosophy of making every EPYC processor unrestrained,” said Aylor. “Every EPYC processor in the stack, whether it is the eight-core variant at the opening price point all the way up to the highest performing 7601 model, all have an equivalent feature set. They have eight memory channels and 128 lanes of PCIe.”

The EPYC line-up is differentiated largely on the number of cores, split across three tiers with 32-, 24- and 16-core versions. All have base clock speeds above 2GHz and can boost up to a maximum of 3.2GHz when required.

Internally, the new EPYC processors differ not just from Intel’s Xeons but also from AMD’s earlier Opteron products in that each one is actually four separate silicon dies linked together. The reason for this is that it is much cheaper to produce several smaller dies than one large monolithic die.

Each die has two memory channels and up to eight cores, and these dies are cross-linked using a new interconnect called Infinity Fabric. Every EPYC chip can therefore be thought of as a four-node compute cluster inside a single chip package.
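The package-level totals quoted elsewhere in this article fall straight out of that four-die layout; a minimal sketch of the arithmetic:

```python
# EPYC package totals derived from the four-die layout described above.
dies_per_package = 4
cores_per_die = 8          # up to eight cores per die
channels_per_die = 2       # two memory channels per die

total_cores = dies_per_package * cores_per_die        # 32 cores
total_channels = dies_per_package * channels_per_die  # 8 memory channels
print(total_cores, total_channels)
```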

Infinity Fabric is the key to making this work. It is described as a souped-up successor to the HyperTransport links used with AMD’s Opteron chips, offering bandwidth of 42GB per second for each bi-directional link. In a two-socket EPYC configuration, four Infinity Fabric links are also used externally to connect the two processors, so that there are no more than two hops between any two dies in the entire system.
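The “no more than two hops” claim can be sanity-checked with a small graph model. The exact link topology here is an assumption: each die is taken to be directly linked to the other three dies in its own package, and to its counterpart die in the other socket via one of the four external Infinity Fabric links.

```python
from collections import deque

# Assumed two-socket EPYC topology: dies 0-3 in socket A, dies 4-7 in
# socket B. Within a socket every die links to every other die; across
# sockets each die links only to its counterpart (four links in total).
links = {die: set() for die in range(8)}
for socket in (range(0, 4), range(4, 8)):
    for a in socket:
        for b in socket:
            if a != b:
                links[a].add(b)
for die in range(4):
    links[die].add(die + 4)
    links[die + 4].add(die)

def hops(src, dst):
    """Breadth-first search for the minimum number of link hops."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))

worst = max(hops(a, b) for a in range(8) for b in range(8))
print(worst)  # 2 - no pair of dies is more than two hops apart
```

Under this assumed wiring, the worst case is indeed two hops: one intra-socket hop to reach the die that has the right cross-socket link, then one hop across.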

Looking ahead, AMD said it is already developing a successor to these EPYC chips, which will be based on a new Zen 2 core design and produced on a 7nm manufacturing process. A further generation based on a Zen 3 core is also in the works.

However, for the moment AMD believes it has delivered what it set out to do, according to senior vice president and chief technology officer Mark Papermaster.

“This is just the beginning to get ourselves back into the server market and high performance, and you have our commitment that we are back to stay.”
