
Configure datacentres with green tech to save costs

The drive for a green, sustainable datacentre may appear to have subsided compared with the push a few years ago to be seen doing everything possible to save the planet, but many organisations are now implementing green policies without wrapping them in a sustainability message.

The current driver for green technology is to save money through optimising energy use, rather than a desire to be seen to be green.

One of the main reasons behind the need to cut back on energy consumption is an obvious one – cost. Energy prices are unlikely to come down in the foreseeable future – indeed, they are trending ever upwards – and datacentres use a lot of energy. The other driver that is beginning to have an effect in the UK is the Carbon Reduction Commitment Energy Efficiency Scheme, commonly known as the CRC.

The CRC was originally conceived as a net cash-neutral scheme, through which those organisations that could demonstrate their energy use was optimised would gain money, while those which showed little to no improvement would lose money. However, the CRC has changed to become a straight tax on all organisations, related to how much energy they use. Currently, the CRC applies only to those organisations which use large amounts of electricity, but as the government searches for new tax revenues, it is likely to be expanded in scope to cover more organisations over time.  

The drive for energy efficiency in the datacentre should therefore be even stronger.

Energy-efficient cooling in the datacentre

Many datacentres are being run against old-style environmental designs, where the approach to cooling is based around ensuring that input cooling air is at such a low temperature that outlet air does not exceed a set temperature. In many cases, the aim has been to keep the average volumetric temperature in the datacentre around 20°C or lower, with some running at between 15°C and 17°C.  

A datacentre with a floor area of 1,000m² and a ceiling-to-floor height of three metres will require the cooling of 3,000m³ of air. To ensure that the average temperature remains within limits, the air will need to be flowing, and this leads to a measure called air change rate (ACR). Many datacentres work at an ACR of around 100-200 per hour, requiring up to 600,000m³ of air to be cooled per hour.
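
As a rough check on those numbers, the arithmetic can be reproduced in a few lines of Python. This is a back-of-the-envelope sketch using the illustrative figures above, not a cooling design tool:

    # Back-of-the-envelope estimate of the air volume the example datacentre must cool per hour.
    # The figures simply mirror the worked example above and are illustrative only.

    floor_area_m2 = 1_000            # floor area of the example datacentre
    ceiling_height_m = 3             # floor-to-ceiling height
    air_change_rate_per_hour = 200   # upper end of the 100-200 ACR range quoted above

    room_volume_m3 = floor_area_m2 * ceiling_height_m
    cooled_air_per_hour_m3 = room_volume_m3 * air_change_rate_per_hour

    print(f"Room volume: {room_volume_m3:,} m³")                   # 3,000 m³
    print(f"Air to cool per hour: {cooled_air_per_hour_m3:,} m³")  # 600,000 m³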

The cost of cooling so much air to well below standard air temperature can be enormous – but is it really required?

The first step to finding out is an assessment of the existing datacentre. The best way to do this is to implement temperature monitoring systems around the datacentre. This is not simply a case of using strategically placed thermometers – the use of infrared heat cameras will help in identifying where the datacentre has existing hotspots that need addressing.
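
As a minimal illustration of what such monitoring feeds into, the Python sketch below flags spot temperature readings above a chosen threshold as potential hotspots. The locations, readings and the 27°C limit are assumptions for the example, not measurements or recommendations:

    # Minimal hotspot check over spot temperature readings.
    # The readings and the threshold are illustrative assumptions only.

    readings = [
        ("rack A3 inlet", 21.5),
        ("rack B7 inlet", 29.0),
        ("rack C1 outlet", 33.5),
    ]

    HOTSPOT_THRESHOLD_C = 27.0  # illustrative limit, not an Ashrae recommendation

    for location, temp_c in readings:
        if temp_c > HOTSPOT_THRESHOLD_C:
            print(f"Potential hotspot: {location} at {temp_c}°C")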

Once the existing environment is mapped out and heat issues identified, it is possible to start replanning the datacentre. It may be that a rack has been filled with 1U or 2U servers and the density of hot central processing units (CPUs) is leading to hotspots. It may be possible to spread these servers over two or more racks, or to mix the servers with items that generate less heat, so fewer hotspots are created.

It may become apparent that certain items of equipment generate a disproportionate amount of heat. In most cases, this will be because the equipment is “old” – more than three years old – and it will often be cost effective to replace it with a more modern equivalent. Thanks to improvements in design and engineering, newer equipment will be more energy efficient and run cooler.

Optimising airflow for maximum cooling

The next step is to look at how cooling can be best implemented. In the example of the 1,000m² datacentre, a lot of air is being cooled that is doing little in the way of cooling the IT equipment. By containing the cooling airflow and directing it where it is most needed, much higher efficiencies can be obtained. The use of hot and cold aisles makes this possible, and it does not necessarily require large investment in technology. In many cases, it is possible to take existing racks and use polycarbonate sheeting over the aisles with plastic doors at the end of the rows to create a sufficiently contained environment to reduce the volume that must be temperature-controlled by an order of magnitude.

If the racks have been redesigned to ensure that there are fewer hotspots, the use of blanking plates and flow-direction plates can ensure that cooling air is directed exactly where it is most needed, lowering the ACR and saving further energy.

This is where computational fluid dynamics (CFD) comes in useful. Being able to map airflows and to carry out “what-if?” scenarios enables airflows to be optimised to ensure that cooling air hits the hotter areas of IT equipment effectively, and that cooling air is not wasted by flowing over other areas. CFD software is increasingly included in datacentre infrastructure management (DCIM) suites from the likes of Nlyte, Emerson Network Power, Eaton and others.

Increasing a datacentre’s thermal envelope

Further savings can be made by reassessing the thermal envelope in which the datacentre works. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (Ashrae) provides guidelines on best practice for datacentre thermal operation, and these guidelines have changed in recent years.

Ashrae’s original 2004 guidelines aimed for a maximum allowable temperature of around 25°C. By 2008, this had risen to around 27°C. Ashrae’s 2011 guidelines created a range of different approaches, depending on the type of datacentre and acceptable equipment failure rates, but moved the upper acceptable temperature as high as 45°C for less controlled environments, with more controlled datacentre environments being able to run at 35°C.

At these temperatures, the amount of air mechanically cooled through the use of computer room air-conditioning (CRAC) units is massively reduced. In many cases, the need for CRAC units is completely obviated, and free air cooling can be used instead.

The cost of running CRAC units should not be underestimated. A measure of the overall energy effectiveness of a datacentre is power usage effectiveness (PUE). This number is derived by taking the total energy used by the datacentre and dividing it by the energy used to do useful work – that is, the energy provided to the IT equipment within the datacentre.

For many existing datacentres, the PUE will be between 2 and 2.5: for every watt of energy provided to the IT equipment, between 1W and 1.5W is used by peripheral equipment – lighting, uninterruptible power supplies (UPSs) and cooling. Lighting is a small part of this, and a modern datacentre should be run lights-out anyway, while a modern UPS should be more than 95% energy efficient (many are now 98%+ efficient). That leaves the cooling system as the biggest contributor to the energy overhead that PUE measures.

If the CRAC units can be decommissioned and replaced with free air cooling, or with low-energy systems such as adiabatic cooling, the datacentre’s PUE will improve dramatically – and the overall energy requirement of the datacentre will fall.

For example, if an existing facility has a PUE of 2.5 and through the use of advanced cooling design it can drop to 1.5, 40% of the total energy used by the datacentre can be saved.
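
To see where the 40% figure comes from, the sums can be checked with a short Python sketch; the IT load used here is an arbitrary illustration, not a figure from the article:

    # Illustrative check of the PUE arithmetic above.
    # PUE = total facility energy / energy delivered to the IT equipment.

    it_load_kw = 100      # arbitrary illustrative IT load
    pue_before = 2.5      # typical existing facility, as above
    pue_after = 1.5       # after improved cooling design

    total_before_kw = it_load_kw * pue_before   # 250 kW
    total_after_kw = it_load_kw * pue_after     # 150 kW

    saving = (total_before_kw - total_after_kw) / total_before_kw
    print(f"Total energy before: {total_before_kw} kW, after: {total_after_kw} kW")
    print(f"Saving on total datacentre energy: {saving:.0%}")  # 40%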

Although a green datacentre is probably not at the top of anyone’s priority list at the moment, energy optimisation almost certainly is. A step change in datacentre energy usage can be achieved through a few relatively simple steps, such as those detailed above.
 

Saving tens of percentage points on the datacentre’s energy cost can only benefit the business. Not only will it be able to show this saving directly against the bottom line, but it will also be able to use the saving to put a tick in the “green” and “sustainable” boxes in its corporate responsibility statement.


Clive Longbottom is an analyst at Quocirca.


This was first published in February 2013

 
