
Data centre design: Using engineered racks, pods and containers

IT pros must adopt a new approach to data centre design and consider using engineered racks, pods and containerised systems for better scalability, capacity and efficiency.

IT pros who find the conventional approach to scaling out – buying bare racks and populating them with servers – inefficient should consider engineered racks, pods and containers in their data centre design.

For some years now, using a large number of relatively commoditised server components to create a high-performance platform has been the norm for data centre managers. Buying bare 19-inch racks and populating them with 1U to 4U servers has provided a reasonably easy way of increasing scale for specific workloads. But such an approach to data centre design is not as easy as it first seems.

Limitations of bare racks
As vendors have increased power density in their equipment, simply piling more of it into a rack can lead to unexpected problems. For example, today’s high-speed CPUs run hot, and so data centre pros need to plan for adequate cooling to avoid failure.  

Populating a rack without thinking about how best to place power systems and server components can lead to hot spots that are difficult to cool, and ultimately to the premature failure of the equipment.

With the increasing use of virtualisation, IT pros are moving away from completely separate systems for server, network and storage. Instead, they are placing these systems in close proximity to each other, generally within the same rack or chassis, to gain the best performance through the use of high-speed interconnects.

Amid all this, putting together a complete rack in a way that allows adequate cooling is becoming increasingly difficult. Data centre infrastructure management (DCIM) tools such as nlyte or Emerson Network Power's Trellis are needed to play out the "what if?" scenarios and to model future states using computational fluid dynamics (CFD), indicating where hot spots are likely to occur.
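To see why hot spots creep up on rack designs, it helps to run the basic heat-balance arithmetic that such tools automate. The Python sketch below is a back-of-the-envelope check only, using assumed per-server power draws and a simple intake-to-exhaust temperature rise; it is no substitute for the full CFD modelling a DCIM tool performs.

```python
# Rough rack heat-load and airflow check (illustrative figures only, not CFD).
AIR_DENSITY = 1.2          # kg/m^3, air at roughly 20 degrees C
AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg*K)

def required_airflow_m3_per_s(heat_load_kw: float, delta_t_k: float) -> float:
    """Airflow needed to remove heat_load_kw with a delta_t_k rise between
    cold-aisle intake and hot-aisle exhaust (P = rho * cp * flow * dT)."""
    return heat_load_kw / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

# Assumed example: 20 x 1U servers drawing roughly 500 W each in one rack.
rack_load_kw = 20 * 0.5
airflow = required_airflow_m3_per_s(rack_load_kw, delta_t_k=12.0)
print(f"Rack heat load: {rack_load_kw:.1f} kW")
print(f"Cooling air required: {airflow:.2f} m^3/s (about {airflow * 3600:.0f} m^3/h)")
# If the room or in-row cooling cannot deliver that volume of cold air to this
# particular rack, a hot spot is likely -- exactly what the "what if?" CFD
# modelling is there to expose before the kit is installed.
```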

But there are newer approaches to data centre design that can help IT pros overcome cooling issues. A growing number of vendors are pre-populating racks, engineering a complete row or pair of rows as a module, or creating a complete stand-alone "data centre in a box" – a standard road container or similar that contains everything required to run a set of workloads.

Using engineered racks in data centre design and construction
At the basic level is the engineered rack. This need not be just a collection of a vendor's equipment bolted into a standard 19-inch rack. It could be a highly specialised chassis with built-in high-speed buses and interconnects that provide the optimal speed for data interchange between storage and CPU, as well as dedicated network systems that are more closely integrated.

These network switches – referred to as top-of-rack switches – are driving a new approach to networking using a “network fabric”, with a flatter hierarchy and lower data latency across systems, leading to much better overall performance.
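As a rough illustration of why a flatter fabric lowers latency, the sketch below compares worst-case switch hop counts for a traditional three-tier network against a two-tier leaf-spine fabric built from top-of-rack switches. The hop counts are generic topology figures and the per-hop latency is an assumed value, not a measurement of any particular product.

```python
# Illustrative hop-count comparison: three-tier hierarchy vs leaf-spine fabric.
# The hop counts are generic topology figures; the per-hop latency is an
# assumed value for illustration, not a measurement of any product.

THREE_TIER_HOPS = 5   # access (ToR) -> aggregation -> core -> aggregation -> access
LEAF_SPINE_HOPS = 3   # leaf (ToR) -> spine -> leaf

PER_HOP_LATENCY_US = 1.5  # assumed average switching latency per hop, in microseconds

for name, hops in (("Three-tier", THREE_TIER_HOPS),
                   ("Leaf-spine fabric", LEAF_SPINE_HOPS)):
    print(f"{name}: {hops} switch hops between servers in different racks, "
          f"roughly {hops * PER_HOP_LATENCY_US:.1f} us of switching latency")
```

Fewer hops also means more uniform path lengths between any two racks, which is why fabric designs tend to give more predictable server-to-server performance.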

However, engineered racks only take a data centre so far. Expanding and scaling out beyond a single rack can be relatively complex, as it involves integrating one rack with another, and systems management tasks are still left to a higher level.

Building a modular/containerised data centre
But there’s a way around this. Many vendors are providing a more complete system, based around a modular approach, sometimes called a data centre “pod.”  Here, the system provided is engineered as a complete row or pair of rows.  

Cisco pioneered this approach with its Unified Computing System (UCS), followed by VCE – the joint venture between VMware, Cisco and EMC – with its Vblock architecture. Since then, many vendors have come out with similar offerings.

These data centre modules provide a complete, stand-alone multi-workload capability, complete with in-built virtualisation, systems management, storage and networking, along with power distribution and cooling. For single row systems, the cooling tends to be in-row; for paired rows, it is often done as a hot aisle/cold aisle system.  

However, one big problem with such a modular system is that expansion involves either a massive step up through the addition of another module, or the addition of smaller incremental systems such as an engineered rack. In both cases, the design of the existing data centre may not make this easy.

Finally, there is the containerised data centre. Originally, vendors saw this as a specialised system only for use in specific cases – such as where a small data centre is required for a short period of time (a large civil engineering building project, for example), or where a permanent data centre is needed but no facility can be provided to house it.

Containerised systems can simply be dropped into a space – a car park or a field, for example. As long as sufficient power is available (and sometimes water for forced cooling), the container can be operated.
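The main site-planning question is whether the available feed can actually support the container. Below is a minimal sizing sketch using assumed rack counts, loads and PUE; a real deployment would work from the vendor's specification sheet.

```python
import math

# Rough utility-feed sizing for a containerised data centre.
# All figures are assumptions for illustration; a real deployment would work
# from the vendor's specification sheet.
racks = 10                  # racks inside the container (assumed)
it_load_per_rack_kw = 15.0  # average IT load per rack, in kW (assumed)
pue = 1.3                   # power usage effectiveness of the container (assumed)

it_load_kw = racks * it_load_per_rack_kw
total_feed_kw = it_load_kw * pue

print(f"IT load: {it_load_kw:.0f} kW")
print(f"Utility feed required at PUE {pue}: {total_feed_kw:.0f} kW")

# Approximate three-phase current at 400 V (ignoring power factor):
amps = total_feed_kw * 1000 / (400 * math.sqrt(3))
print(f"Roughly {amps:.0f} A across a 400 V three-phase supply")
```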

Lately, organisations are realising that a containerised system can be used as part of their overall data centre design.

Engineered racks are fine for small requirements, and a modular approach provides some flexibility, but the modules still need to be built on site. A container is delivered on the back of a lorry – and is then offloaded, put in place, plugged in and started up.

Vendors such as Microsoft have taken a combined modular/containerised approach in their latest data centres, using containers to get standard workloads up and running quickly and modules where greater flexibility is needed.

Upgrading modules and containers
The biggest issue with containerised systems is that the engineering tends to be too inflexible.  If the IT team wants to make any changes, it will have to completely strip down and rebuild the infrastructure using new components.  

As containers come in specific sizes, much of the equipment used is specialised, and data centre managers may find it cheaper simply to replace the container with a brand new system rather than adapt an existing one.

But vendors are realising this and are putting approaches in place to deal with it. For example, some vendors are essentially renting out a data centre capability – at the end of the agreed lifetime of the containerised data centre, the vendor simply replaces it with the latest equivalent system and takes the old one away to be recycled as far as possible.

Intel is working on the concept of a sealed high-temperature container. If a containerised system can be run with minimal cooling, it will be highly energy efficient, but will suffer increased equipment failure due to the higher temperatures involved.  

Individual component manufacturers are improving the hot-running capabilities of their kit, but Intel is looking into how a 50°C container operates. Accepting that there will be increased component failure, the idea is to over-engineer the complete system by, say, 50% – at the power supply, server, storage, network and other levels. The container is then completely sealed – there is no end-user access whatsoever.

The container operates over an agreed period of time, and as equipment fails, the over-engineering allows for this. By the end of its predetermined lifespan, the container should still be running at the originally agreed levels. The vendor then replaces the container with a new one, takes the old one away, breaks it open and recycles what it can.
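A rough sanity check of that trade-off can be done with simple failure arithmetic. The sketch below assumes an annualised failure rate and a 50% over-provisioning level purely for illustration – neither figure comes from Intel – and estimates whether the sealed container would still meet its contracted capacity at the end of its life.

```python
# Illustrative check of the "over-engineer and seal" idea. The failure rate,
# lifetime and server counts are assumptions, not Intel figures.

servers_contracted = 200     # capacity the customer needs throughout (assumed)
overprovision = 0.50         # 50% extra hardware installed at sealing time
annual_failure_rate = 0.07   # assumed AFR at an elevated (~50 C) operating temperature
lifetime_years = 5

installed = int(servers_contracted * (1 + overprovision))
expected_survivors = installed * (1 - annual_failure_rate) ** lifetime_years

print(f"Servers installed when the container is sealed: {installed}")
print(f"Expected survivors after {lifetime_years} years: {expected_survivors:.0f}")
print("Contracted capacity still met" if expected_survivors >= servers_contracted
      else "Over-provisioning insufficient for these assumptions")
```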

Engineered racks, modules and containers all have a part to play in the future of the modern data centre. The age of the self-designed, self-built, rack-based system is passing rapidly. We at Quocirca, a data centre research and analysis firm, recommend that all data centre managers consider the wide range of pre-built modular data centre systems when planning their infrastructure design, to overcome cooling as well as capacity issues.

Clive Longbottom is a service director at UK analyst Quocirca Ltd. and a contributor to SearchVirtualDataCentre.co.uk.
