Feature

Get datacentre cooling under control

Whether the aim is to cut datacentre running costs or to support the business's 'go green' strategy, energy efficiency is a top priority for many datacentre managers. And reviewing existing datacentre cooling designs and exploring new ones for efficiency may not be as hard as you might think.

There was a time when it was common to fit a distributed computing datacentre with computer room air conditioning (CRAC) systems to maintain IT systems’ temperature. But CRAC units have since come to be seen as the bad boys of the datacentre because of their high energy usage.


A commonly used (but in some ways badly flawed) measure of how well a datacentre operates is its power usage effectiveness (PUE) score – the ratio between the total amount of energy used by a datacentre and the amount used by the IT equipment itself. When PUE first came to the fore, many datacentres were running at a score of more than 3, with some even running above 5. Even now, the overall global average has been reported as 2.9 in research carried out by Digital Realty Trust.

A different study, carried out by the Uptime Institute, came up with a figure of 1.85. Either way, the point stands: for pretty much every watt used to power the IT equipment, another watt is being used by the datacentre facility itself.

Think about it – the IT equipment is all the servers, storage and network equipment in the datacentre. All the rest of the power goes to what are called ‘peripheral systems’ – lighting, losses in uninterruptible power supply (UPS) systems and cooling. Even a major datacentre will struggle to use much of its power on lighting, and modern UPS systems should be able to run at above 98% energy efficiency. The biggest peripheral user of power is cooling.
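To make the ratio concrete, here is a minimal sketch in Python using a hypothetical facility power breakdown. The figures are illustrative, not measurements from any real site.

```python
# Rough PUE calculation from a hypothetical facility power breakdown (all in kW).
it_load = 500          # servers, storage and network equipment
cooling = 420          # CRAC units, chillers, pumps and fans
ups_losses = 10        # a modern UPS running at ~98% efficiency
lighting = 5           # lighting is a small fraction of the total
other = 65             # switchgear losses, office space and so on

facility_total = it_load + cooling + ups_losses + lighting + other
pue = facility_total / it_load
print(f"PUE = {pue:.2f}")   # 2.00 for these example figures
```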

It is time to get datacentre cooling under control – and it may not be as hard as IT professionals think.

Datacentres don’t have to be freezing cold

First, times have changed. The guideline standards for IT equipment temperatures have moved. No longer are we looking at a need for 17-19°C air in the datacentre: the latest ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) guidelines allow for temperatures well into the 20s°C, and in some cases through to the mid-30s or even into the 40s. Simply running warmer could cut your energy bills considerably.
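As a rough illustration of how those limits might be applied, the sketch below checks server inlet temperatures against the commonly quoted ASHRAE A1-A4 allowable ranges. Treat the figures as assumptions – check the current guidelines and your hardware warranties before raising any set points.

```python
# Sketch: flag server inlet temperatures against ASHRAE allowable ranges (°C).
# The ranges below are the commonly quoted A1-A4 allowable limits; verify them
# against the current ASHRAE guidelines and your equipment warranties.
ASHRAE_ALLOWABLE = {
    "A1": (15, 32),
    "A2": (10, 35),
    "A3": (5, 40),
    "A4": (5, 45),
}

def within_allowable(inlet_temp_c: float, equipment_class: str = "A2") -> bool:
    low, high = ASHRAE_ALLOWABLE[equipment_class]
    return low <= inlet_temp_c <= high

print(within_allowable(27))   # True  - no need to chill the room to 18°C
print(within_allowable(38))   # False - above the A2 allowable limit
```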

But this is not all that can be done. There is a whole raft of cooling approaches that can be looked at to produce a fast return on investment.

Focus cooling to where it is needed most

Cooling the entire volume of air in a datacentre in order to cool the comparatively small volume of IT equipment is wasteful. Datacentre managers should look to hot- and cold-aisle containment to minimise the amount of cold air required. This can be done with purpose-built containment products, or simply by covering the space between rows with fire-resistant material and fitting fire-retardant polypropylene or other door systems at each end. Racks will need to be checked to ensure the cold air pushed through the cold aisle moves over equipment hotspots and does not bypass them, but this can be monitored reasonably simply.
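One simple way to do that monitoring is to compare per-rack inlet readings with the cold-aisle supply temperature. The sketch below assumes an inlet and outlet probe per rack; the rack names, readings and thresholds are all hypothetical.

```python
# Minimal sketch of cold-aisle monitoring: compare per-rack inlet temperatures with
# the supply temperature to spot hotspots (hot-air recirculation) and very small
# inlet-to-outlet deltas (cold air bypassing the equipment). Thresholds are illustrative.
SUPPLY_TEMP = 22.0        # °C delivered into the cold aisle
HOTSPOT_MARGIN = 5.0      # inlet this far above supply suggests recirculation
MIN_DELTA_T = 3.0         # tiny inlet/outlet delta suggests bypassing air

racks = {                 # hypothetical probe readings: (inlet °C, outlet °C)
    "rack-01": (23.0, 34.0),
    "rack-02": (29.5, 38.0),   # inlet well above supply -> hotspot
    "rack-03": (22.5, 24.0),   # barely any delta -> cold air bypassing the rack
}

for name, (inlet, outlet) in racks.items():
    if inlet - SUPPLY_TEMP > HOTSPOT_MARGIN:
        print(f"{name}: possible hotspot (inlet {inlet}°C)")
    if outlet - inlet < MIN_DELTA_T:
        print(f"{name}: cold air may be bypassing the equipment")
```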

Containerised datacentres

For those who are reviewing their infrastructure, moving to highly contained racks with integral cooling could be a good move. 

Using sealed racks, such as the Chatsworth Tower system, contains cooling within the rack itself, with cold air introduced at the bottom of the stack and hot air exiting at the top into a contained space.  

Systems from suppliers such as Emerson and APC provide more engineered solutions for rack- and row-based cooling, with individual CRAC systems held within the row for more targeted cooling.

Liquid cooling and free air cooling

Liquid-based cooling systems are also making a comeback. Rear-door water coolers are a retro-fit solution that cover the back of a rack, removing extra heat from the cooling air. Other systems act directly on the IT equipment, using a negative pressure system that sucks the water through pipes, so that if a leak occurs, air is sucked into the system rather than water being pushed out. IBM's Aquasar and Chilldyne provide such systems – but these are intrusive in that they use specific finned heat exchangers that need to be connected to the chips being cooled. 

Other systems, such as Iceotope and LiquidCool, offer fully immersed IT modules that use a dielectric liquid to remove heat effectively. Green Revolution provides dielectric baths into which equipment can be immersed, allowing existing equipment to continue to be used. Each of these systems enables the heat removed to be reused elsewhere, for example for heating water to be used in other parts of an organisation.

Free air cooling is something else to consider. In most northern climates, the outside air temperature will exceed the temperature needed to cool IT equipment for only a few days a year. Systems such as the Kyoto wheel allow external air to be used at minimal power cost.
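A back-of-the-envelope way to see how far free air cooling could go is to count the hours in a year when the ambient temperature sits at or below the target supply temperature. The sketch below uses invented hourly data and an illustrative set point.

```python
# Hypothetical sketch: estimate how often outside air alone could cool the datacentre,
# given hourly ambient temperatures (°C) and a target supply temperature.
TARGET_SUPPLY = 24.0   # illustrative set point within the ASHRAE allowable range

def free_cooling_fraction(hourly_ambient_temps):
    usable = sum(1 for t in hourly_ambient_temps if t <= TARGET_SUPPLY)
    return usable / len(hourly_ambient_temps)

# In a northern climate most hours fall below the set point; only a few do not.
sample_year = [12.0] * 8000 + [26.0] * 760   # invented data for one year (8,760 hours)
print(f"{free_cooling_fraction(sample_year):.0%} of hours on free air")  # ~91%
```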

Adiabatic cooling, which makes the most of the physics of how an evaporating liquid cools its surroundings, can be used in climates where the outside temperature tends to exceed the ASHRAE guidelines, or to bolster systems in more temperate climes on days when the outside air temperature edges above what is needed. Companies such as EcoCooling, Munters, Keysource and Coolerado provide adiabatic systems for datacentre use.
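The cooling an adiabatic system can deliver is bounded by the ambient wet-bulb temperature. The sketch below uses the standard direct evaporative cooling relationship with an assumed pad effectiveness of 0.9; both the effectiveness and the example temperatures are illustrative.

```python
# Illustrative sketch of direct evaporative (adiabatic) cooling: the achievable supply
# temperature depends on how close the cooler gets to the ambient wet-bulb temperature.
# An effectiveness of ~0.9 is assumed here; real equipment will vary.
def evaporative_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.9):
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A hot, dry day: 35°C dry bulb with a 20°C wet bulb cools to roughly 21.5°C supply air.
print(evaporative_supply_temp(35.0, 20.0))
```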

The aim is to minimise the amount of energy used to keep a datacentre’s IT equipment operating within its designed thermal parameters. By using approaches such as those mentioned above, companies like 4D Data Centres have managed to build and operate a datacentre with a PUE of 1.14; Datum's datacentre is operating at a PUE of 1.25; and Google's is running at around 1.12.

This means that for a largish datacentre with an existing power requirement of 1MW that is now running at a PUE of 2, 500kW will be going to the IT equipment and 500kW to the datacentre facility. Reviewing the cooling could bring this down to a PUE of less than 1.2 – for the same 500kW IT equipment load, only 100kW would be used by the facility. That is a saving of 400kW – in many cases achieved through low-cost, low-maintenance solutions.
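The arithmetic behind that saving is simple enough to sketch, and extending it with an assumed electricity price gives a rough annual cost figure. The price below is a placeholder, not a quoted tariff.

```python
# The arithmetic from the paragraph above, plus a rough annual cost figure.
# The electricity price is a hypothetical placeholder - substitute your own tariff.
IT_LOAD_KW = 500
facility_before = IT_LOAD_KW * 2.0 - IT_LOAD_KW    # PUE 2.0 -> 500 kW overhead
facility_after = IT_LOAD_KW * 1.2 - IT_LOAD_KW     # PUE 1.2 -> 100 kW overhead
saved_kw = facility_before - facility_after        # 400 kW

PRICE_PER_KWH = 0.10                               # assumed price per kWh
annual_saving = saved_kw * 24 * 365 * PRICE_PER_KWH
print(f"{saved_kw:.0f} kW saved, roughly {annual_saving:,.0f} per year at {PRICE_PER_KWH}/kWh")
```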

Time to review your datacentre cooling? Definitely.



This was first published in April 2014

 
