Intel vs AMD: Server rivals battling it out for edge datacentre supremacy

Chipmakers Intel and AMD are targeting edge computing environments with their latest embedded chips. What does each firm's proposition have to offer?

Chipmakers Intel and AMD are long-time rivals in the datacentre market, regularly going head to head with their respective server platforms.

That rivalry has picked up again recently, with the competition shifting to new ground with the release of embedded products from both firms that target applications in edge computing, software-defined networks and storage, among others.

In the middle of 2017, AMD heralded its return to the server market with a new portfolio of EPYC processors based on the Zen microarchitecture, followed just a month later by Intel’s launch of its Xeon Scalable family, based on the Skylake microarchitecture.

The two line-ups differ in some features and capabilities, but both are aimed squarely at the datacentre, with dozens of CPU cores and support for large amounts of memory.

Fast forward to this year, and February saw Intel announce the Xeon D-2100, a series of system-on-chip (SoC) processors based on the same Skylake cores, but “architected to address the needs of edge applications and other datacentre or network applications constrained by space and power,” the firm claims.

Just weeks later, AMD unveiled the EPYC Embedded 3000 Series, based on its server chips, plus the Ryzen Embedded V1000 Series. The latter are built on AMD’s Ryzen desktop processors, which combine both CPU and GPU cores on the same chip.

In a similar vein to Intel, AMD touts these new processors as delivering “transformative performance from the core to the edge”.

Embedded in edge computing

Both companies sense a growing opportunity for embedded processors, especially in the current trend towards edge computing. This can be defined as the need, in some situations, to have systems with significant processing power close to the point of action – such as on a factory floor or even inside an autonomous vehicle – so that data can be processed locally rather than sent back to a remote cloud datacentre.

Embedded processors traditionally find their way into hardware appliances, and at this end of the performance spectrum, this might mean serving as the controller for networking kit or storage hardware in a datacentre.

Therefore, both Intel and AMD have adapted their respective server processors with extra on-chip capabilities such as built-in Ethernet ports or configurable input/output (I/O) ports.

Intel’s Xeon D-2100 family is split into three groups: server and cloud; networking and enterprise storage; and parts featuring Intel QuickAssist Technology. The latter group includes on-chip hardware acceleration for functions such as encryption, with a throughput of up to 100Gbps, targeting encrypted traffic in secure networks and boosting compression in high-performance storage applications.

The top-end chip is the Xeon D-2191, a “server and cloud” part with 18 cores, compared with up to 28 in the mainstream Xeon Scalable family. It is also unique within the family in having no built-in network support; all the other chips feature four 10Gbps Ethernet ports.

All of the chips have four memory channels, enabling a maximum of 512GB of DDR4, plus 32 lanes of PCI Express (PCIe) 3.0. Intel has also equipped every Xeon D-2100 chip with 20 configurable high-speed I/O (HSIO) lanes, which can be configured in software as a further 20 PCIe lanes, up to 14 Sata ports or up to four USB 3.0 ports.
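
The HSIO capability described above is essentially a lane-budgeting exercise: 20 lanes shared between PCIe, Sata and USB, each interface type with its own cap. Here is a minimal sketch of that constraint in Python; the one-lane-per-port cost is an illustrative assumption for the example, not Intel's specification:

```python
# Toy model of the Xeon D-2100's 20 configurable HSIO lanes, using the
# per-type limits quoted above (20 extra PCIe lanes, 14 Sata ports,
# 4 USB 3.0 ports). Assumes, for illustration only, that each port
# consumes exactly one HSIO lane.

TOTAL_HSIO_LANES = 20
PER_TYPE_LIMITS = {"pcie": 20, "sata": 14, "usb3": 4}

def validate_hsio_config(config):
    """Check a proposed HSIO allocation; return the number of spare lanes.

    config maps interface type to the number of ports requested,
    e.g. {"pcie": 2, "sata": 14, "usb3": 4}.
    """
    for iface, count in config.items():
        if iface not in PER_TYPE_LIMITS:
            raise ValueError(f"unknown interface type: {iface}")
        if count > PER_TYPE_LIMITS[iface]:
            raise ValueError(f"{iface}: {count} exceeds limit of "
                             f"{PER_TYPE_LIMITS[iface]}")
    used = sum(config.values())
    if used > TOTAL_HSIO_LANES:
        raise ValueError(f"{used} lanes requested, only "
                         f"{TOTAL_HSIO_LANES} available")
    return TOTAL_HSIO_LANES - used

# A storage-appliance style split: 14 Sata, 4 USB, 2 extra PCIe lanes.
print(validate_hsio_config({"pcie": 2, "sata": 14, "usb3": 4}))  # 0 spare
```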

The Xeon D-2100 Series integrates the Platform Controller Hub (PCH) inside the processor package, whereas it is a separate chip in mainstream Xeon servers; AMD’s EPYC family has had this integration across the board from the start. Integrating the PCH means fewer chips are needed to build a system.

AMD’s EPYC 3000 family is mostly differentiated by cores, with four-core and eight-core chips supporting two DDR4 memory channels and 32 PCIe lanes, while the 12-core and 16-core chips feature four DDR4 channels and up to 64 PCIe lanes.

While AMD’s chips top out with fewer cores than Intel, the 12-core and 16-core models can support double the memory – up to 1TB – plus up to sixteen Sata ports, as well as up to eight 10Gbps Ethernet ports. This latter capability is almost certain to make the EPYC 3000 chips an attractive proposition to hardware suppliers building network appliances.

Meanwhile, the Ryzen Embedded V1000 Series features just two or four CPU cores, but is integrated with AMD’s Vega GPU, featuring up to 11 compute units. This means it is more likely to find a home in applications that call for a device with a display, such as medical imaging or industrial control, though an on-board GPU could prove useful for a variety of other applications.

Both Intel and AMD are pushing software compatibility with their server processors as a key feature of their new embedded chips. This is especially relevant to edge computing, where the compute infrastructure may end up more akin to a mini-datacentre than to a traditional embedded deployment.

Demonstrating the next generation

At its launch event in London, AMD gave a demonstration of how workloads such as a virtual router or firewall appliance could be migrated from the network core out to the edge and back again, depending on circumstances such as the level of traffic at the time.

The demo used a network appliance based on an EPYC 3000 chip, while an HPE DL385 server based on the EPYC 7000 processor represented the network core. Both were running an environment based on the OpenStack framework.
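
The decision logic behind such a demo is straightforward to express: run the workload at the edge while it fits, and fall back to the core when traffic exceeds what the edge appliance can handle. A minimal sketch of that placement policy follows; the node names, threshold and capacity figure are invented for illustration and are not taken from AMD's demo:

```python
# Illustrative placement policy for a virtual network function (VNF),
# of the kind shown in AMD's demo: migrate between an edge appliance
# and a core server depending on current traffic. All figures here are
# assumptions for the example, not measurements from the demo.

EDGE = "edge-epyc3000"   # hypothetical EPYC 3000-based appliance
CORE = "core-epyc7000"   # hypothetical EPYC 7000-based server

def place_vnf(traffic_gbps, edge_capacity_gbps=10.0):
    """Keep the VNF at the edge while traffic fits, else move it to core."""
    return EDGE if traffic_gbps <= edge_capacity_gbps else CORE

print(place_vnf(4.0))    # light traffic: workload stays at the edge
print(place_vnf(25.0))   # heavy traffic: workload migrates to the core
```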

Intel also foresees its new embedded processors being used in a similar fashion. For example, the firm has previously detailed how the demanding capabilities being specified for next-generation 5G wireless networks will require an overhaul of the network infrastructure used by mobile operators.

Specifically, network functions will have to be implemented in software, using techniques such as network function virtualisation (NFV) to make the whole network more adaptable and dynamic. This will lead to cellular base stations becoming more like miniature datacentres, and Intel sees the Xeon D-2100 series as the ideal candidate for the job.
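
To make the NFV idea concrete: a network function that would once have lived in fixed-function hardware becomes ordinary software that can be deployed, scaled or moved like any other application. The toy packet filter below illustrates the principle; the packet format and rule set are invented for the example:

```python
# Minimal illustration of a "network function in software": a firewall
# rule expressed as a plain function rather than fixed-function hardware.
# The packet representation and blocked ports are example assumptions.

BLOCKED_PORTS = {23, 445}  # e.g. telnet and SMB

def firewall(packet):
    """Return True if the packet should be forwarded."""
    return packet["dst_port"] not in BLOCKED_PORTS

packets = [
    {"src": "10.0.0.5", "dst_port": 443},  # HTTPS: allowed
    {"src": "10.0.0.9", "dst_port": 23},   # telnet: dropped
]
print([firewall(p) for p in packets])  # [True, False]
```

Because the function is just software, an operator can run it on a base-station-class box such as a Xeon D-2100 system and update the rules without touching hardware.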

Both companies are also touting security features of their respective chips, which makes sense if they are expected to operate in remote environments, as may be the case in edge computing or IoT deployments, and may be handling sensitive data.

In addition to the QuickAssist Technology mentioned earlier, Intel’s Xeon D-2100 family inherits the AVX-512 instructions for accelerating floating-point processing, which enable faster data transfers and faster encryption processing, according to the firm.

Meanwhile, EPYC 3000 inherits all the security features of the datacentre EPYC chips, including its secure root of trust and support for encrypted memory and encrypted virtual machines (VMs). The secure root of trust is based on the Secure Processor built into each EPYC chip, and is designed to provide a secure boot process.

Secure Memory Encryption (SME) enables individual memory pages to be encrypted on the fly, transparently to applications. This is designed to protect against any malware that may gain access to system memory, since the data will appear encrypted to every application except the one that owns a specific memory page.
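
The principle behind SME can be sketched in a few lines: each page is encrypted with its own key, so anything reading a page without the owner's key sees only ciphertext. The model below is purely conceptual – a simple XOR stream stands in for the AES engine that sits in the real memory controller, and none of it reflects AMD's actual implementation:

```python
# Conceptual model of per-page memory encryption (NOT AMD's SME design).
# Each page gets its own random key; reading without that key yields
# ciphertext. XOR with the key stands in for hardware AES.

import os

PAGE_SIZE = 16  # toy page size in bytes

class EncryptedMemory:
    def __init__(self):
        self.pages = {}  # page number -> ciphertext
        self.keys = {}   # page number -> per-page key

    def write(self, page, data):
        key = self.keys.setdefault(page, os.urandom(PAGE_SIZE))
        self.pages[page] = bytes(d ^ k for d, k in zip(data, key))

    def read(self, page, key=None):
        """With the owner's key, plaintext comes back; otherwise garbage."""
        key = key if key is not None else self.keys[page]
        return bytes(c ^ k for c, k in zip(self.pages[page], key))

mem = EncryptedMemory()
mem.write(0, b"secret data 1234")
print(mem.read(0))                        # owner sees the plaintext
print(mem.read(0, key=bytes(PAGE_SIZE)))  # wrong key: only ciphertext
```

SEV, described next, applies the same idea at the granularity of a whole virtual machine rather than a page.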

Secure Encrypted Virtualisation (SEV) applies the same protection, but to entire VMs running on an EPYC server, protecting their memory areas against other virtual machines and in case the hypervisor is compromised.

Cost as the common denominator

Overall, both Intel and AMD appear to be aiming at the same market and offering broadly similar capabilities; the latest embedded processors sport about half as many cores as the datacentre chips they are derived from, but add built-in networking and other features to support their embedded role.

However, one way that AMD is seeking to differentiate itself is on cost. Its top-end EPYC 3451 chip with 16 cores is priced at $880, while Intel’s top-end Xeon D-2191 with 18 cores comes in at $2,407.
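
Working those list prices through as cost per core makes the gap plain – roughly $55 per core for the AMD part against about $134 for the Intel one:

```python
# Cost-per-core comparison using the list prices quoted above.

chips = {
    "EPYC 3451":   {"cores": 16, "price_usd": 880},
    "Xeon D-2191": {"cores": 18, "price_usd": 2407},
}

for name, chip in chips.items():
    per_core = chip["price_usd"] / chip["cores"]
    print(f"{name}: ${per_core:.0f} per core")
# prints "EPYC 3451: $55 per core" then "Xeon D-2191: $134 per core"
```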

This could prove a deciding factor for many applications. AMD had backing from Seagate, which is evaluating both the EPYC 3000 and EPYC 7000 processors for use in its enterprise and datacentre storage products, according to Mohamad El-Batal, Seagate’s chief technical officer for Cloud Systems.

“The key message here is it’s not only about what you can enable, it’s also about being able to enable it and the end user being able to afford it,” he says.

This was last published in March 2018
