
Choosing chips for next-generation datacentres

Why your next datacentre refresh should encompass alternative chip architectures.

Imagine a datacentre architecture where resources can be allocated dynamically. Within a decade, such orchestration could become a reality.

Today, servers can be optimised for general-purpose workloads – usually based on Intel Xeon processors.

According to Gartner’s second-quarter server shipment figures for 2014, x86 server revenue increased by 12.7% in Europe, while Risc/Itanium Unix revenue declined by 23.6%.

"Xeon is pretty good at everything because of the speed of development in Xeon technology," says Gartner analyst Errol Rasit.

But no one can really predict what future computational workloads will look like. Rasit believes Xeon’s status quo could eventually be disrupted by new workloads demanding capabilities not yet considered.

Specialised processors

While the commodity x86 server architecture goes from strength to strength, specialism may be the way forward for future datacentre designs. This has already happened in mainframe computing, with IBM’s System z integrated information processor (zIIP) and the System z application assist processor (zAAP).

There is also growing interest in using graphics processing units (GPUs) for certain computationally intensive tasks. The next stage in automated datacentre orchestration is to identify these workloads so they can be run optimally on the most appropriate hardware resource. Now imagine if the hardware itself could be configured dynamically based on the I/O, computing, floating point, network and storage requirements of a given workload. Gartner describes such an architecture as a computing fabric.

According to datacentre server company Unisys, by adopting a fabric-based architecture, organisations can gain the agility and flexibility of shared resource pools for compute, network and storage. This helps to avoid the datacentre sprawl that results when each mission-critical application has its own dedicated physical server, and it helps to speed deployments and reduce costs. Unisys sees this computing fabric being based on x86 server chips, but there is no reason why other chip architectures cannot form part of the fabric.

The only major hurdle is the need to move from electrical to optical connectivity. Server motherboards rely on printed circuit boards, Rasit says, and because electrical signals degrade over distance, memory needs to be as close to the processor as possible. With optical transmission, he says, signal degradation is minimal.

“Using silicon photonics, it is entirely possible to plug in different switchable optimised computing resources,” Rasit adds.

In a photonic-based computing environment, it will be possible to mix and match processor, memory, storage and network components in a single cabinet, in a row of cabinets, or even across datacentres. “Rather than it being pre-defined using virtual machine software, you will define the system logically based on the work requirement,” says Rasit.
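To make the idea concrete, the following is a minimal Python sketch of how a fabric controller might match a workload’s resource profile to the best-fitting pool of hardware. The pool names, profiles and selection rule are illustrative assumptions for this article, not any vendor’s actual API.

# Hypothetical sketch: matching a workload's resource profile to the
# best-fitting pool in a composable "computing fabric". All names and
# rules here are illustrative, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Profile:
    cores: int        # general-purpose compute
    memory_gb: int    # RAM close to the processor
    gpu: bool         # floating-point acceleration
    io_gbps: int      # network/storage bandwidth

# Pools of pre-configured resources the fabric could compose
POOLS = {
    "xeon-general":    Profile(cores=32, memory_gb=256, gpu=False, io_gbps=20),
    "gpu-accelerated": Profile(cores=16, memory_gb=128, gpu=True,  io_gbps=20),
    "arm-microserver": Profile(cores=8,  memory_gb=16,  gpu=False, io_gbps=20),
}

def place(workload: Profile) -> str:
    """Return the smallest (lowest-energy) pool that satisfies the workload."""
    def fits(pool: Profile) -> bool:
        return (pool.cores >= workload.cores
                and pool.memory_gb >= workload.memory_gb
                and pool.io_gbps >= workload.io_gbps
                and (pool.gpu or not workload.gpu))

    candidates = [name for name, pool in POOLS.items() if fits(pool)]
    if not candidates:
        raise ValueError("no pool satisfies this workload")
    return min(candidates, key=lambda name: POOLS[name].cores)

print(place(Profile(cores=4, memory_gb=8, gpu=False, io_gbps=10)))   # arm-microserver
print(place(Profile(cores=8, memory_gb=64, gpu=True, io_gbps=10)))   # gpu-accelerated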

Power drives new design criteria

One of the biggest areas of investment in recent years has been the development of low-energy server computing. Processors such as Intel’s Atom, originally designed for laptops, and, more recently, the ARM chips that power smartphones, offer server manufacturers significantly lower energy footprints than Xeon server chips.

According to Gartner’s forecast, by 2017, 4% of the x86 server market will shift to extremely low-energy servers. But servers powered by ARM and Atom are not general-purpose machines, which makes them unsuitable as Xeon replacements.

The architectural profile of low-powered servers can benefit certain workloads. System designers balance memory, compute and I/O in a way that works best for computationally simple tasks in areas such as big data, web serving and supercomputing, where the application can be segmented into discrete tasks that run simultaneously on massively parallel hardware comprising hundreds of low-powered processors.
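As a rough illustration of the kind of workload that suits this design, the Python sketch below splits a simple word-counting job into independent chunks and farms them out across all available cores. The chunk data and the task itself are stand-ins for a real big data or web workload.

# Minimal sketch of a workload that suits many low-powered cores:
# the job splits into discrete, independent tasks that run in parallel.
# Counting words in text chunks stands in for a real big data job.

from multiprocessing import Pool

def count_words(chunk: str) -> int:
    """One discrete task: simple compute that runs fine on a slow core."""
    return len(chunk.split())

if __name__ == "__main__":
    # In practice the chunks would be files or storage blocks on each node
    chunks = ["the quick brown fox jumps over the lazy dog"] * 1000

    with Pool() as pool:                    # one worker per available core
        partial_counts = pool.map(count_words, chunks)

    print(sum(partial_counts))              # combine the partial results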

Even a machine such as IBM’s Sequoia Blue Gene/Q supercomputer – the third most powerful supercomputer in the world, according to the Top500 list – is built from relatively modest compute elements. Each node comprises a 16-core PowerPC A2 processor running at 1.6GHz and 16GB of DDR3 memory. As a result, its energy footprint is relatively low for the level of performance it achieves.

HP CEO Meg Whitman has high hopes for the company’s Moonshot low-energy server family as a differentiator in the commodity server market. Moonshot is based on Intel Atom and AMD Opteron system-on-a-chip (SoC) processors, optimised for desktop virtualisation and web content delivery applications. These servers can run Windows Server 2012 R2 or Red Hat, Canonical or Suse Linux distributions.

But manufacturers are also starting to sell ARM64 SoC-based servers, which require an ARM-specific development environment and Linux distribution. Linaro is the main initiative driving software standardisation for ARM-based servers, and the Ubuntu and Red Hat Linux distributions are available for the ARM server architecture.

ARM64 microservers

Karl Freund, vice-president of product marketing at AMD Servers, says there is huge interest in ARM-based alternatives, but the technology is at least six months away.

“Production silicon is just starting to become available, but it will take time to do benchmarks and it will take time for the server market to mature,” he says.

AMD has just shipped its development platform, which can be used by equipment manufacturers, operating system developers, datacentre operators, Java developers and the open-source software community. From a chipmaker’s perspective, he says, ARM is the equivalent of open-source hardware, which lowers chip development costs considerably.

Semiconductor companies Cavium and Applied Micro are taking two different approaches to the ARM microserver market. Cavium is specialising in low-powered cores, while Applied Micro is taking a high-performance computing (HPC) approach.

AMD is building its chips based on the ARM Cortex-A57 core. Freund says this allows for lots of differentiation. “We believe we can optimise for specific workloads, such as to build a system with a large number of slow cores, or provide enhanced acceleration technology by using FPGA or GPU, or an encryption processor,” he says.


This differentiation has the potential to create a competitive market for specialist ARM servers, which Freund believes could lead to lower costs.

While the target market for such servers will primarily be web-scale companies with huge server estates, lower costs could drive adoption in the enterprise. Indirectly, businesses could benefit from lower-cost cloud computing, while ARM may also power server appliances, where the actual hardware is irrelevant to the buyer.

“Nobody asks what chipset [Oracle’s] Exadata uses. Companies will come up with a data appliance that uses an ARM processor,” says Freund. In fact, he expects ARM-based hardware could power distributed object storage for applications such as Hadoop, where the cost would be much lower than traditional network attached storage.

This was the design principle behind AMD’s Seattle ARM Cortex-A57-based SoC. “We have a lot of I/O in terms of Sata and 2*10Gbps plus an eight-core processor,” says Freund.

The computing fabric

In the old days of data processing, massive server rooms housed separate pieces of hardware for processing, storage and memory. The commodity blade server integrates much of this technology, apart from storage, where shared network-connected storage has been regarded as the most efficient configuration for x86-based infrastructure.

But certain applications, such as Hadoop, work best when each server processes data from local storage. And the industry is beginning to develop low-powered computing architectures based on laptop and smartphone processors, where Atom and ARM64 chips power hyperscale microservers.

Datacentre 2025

Servers with AMD’s Seattle ARM-based chip are not expected to ship until mid-2015. But given that datacentres are designed to last 15 or more years, chances are that datacentres built in the next few years will need to take into account ARM and other architectures.

Products such as SAP Hana and Oracle Exadata show that there is a market for specialist appliances. So there is no reason why future datacentres cannot be designed in a way where pools of servers are optimised for certain types of work.

A key requirement will be to shift application architectures to make them more componentised, with a standard way to access their application programming interfaces. The industry has already invented this approach, in the form of service-oriented architecture and the enterprise service bus.
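As a hedged illustration of what such a componentised service could look like, the short Python sketch below exposes a single business capability behind a plain HTTP/JSON endpoint using only the standard library. The endpoint path and payload are hypothetical; the point is that a service bus or orchestration layer only needs the API contract, not knowledge of the hardware underneath.

# Hypothetical sketch of a componentised service: one capability behind a
# standard HTTP/JSON interface, so a bus or orchestration layer can route
# requests to it whether it runs on a Xeon box, a GPU node or an ARM
# microserver. Endpoint and payload are illustrative only.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingComponent(BaseHTTPRequestHandler):
    """A single business capability with a standard, discoverable endpoint."""

    def do_GET(self):
        if self.path == "/api/v1/price":
            body = json.dumps({"sku": "example", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # The bus only needs the address and the API contract, not the chip inside
    HTTPServer(("0.0.0.0", 8080), PricingComponent).serve_forever()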

Now imagine a software-defined datacentre, where the connectivity is an enterprise service bus, all physical hardware is virtualised and the infrastructure extends onto the public cloud. Speciality servers, such as low-powered Atom and ARM-based microservers, will have a role to play in such a datacentre, hosting workloads that need to run at the lowest cost with the smallest possible energy footprint.

