Within a data centre, it is often assumed that the IT estate (i.e. servers, storage and network equipment) is the energy hog. Many organisations therefore believe they can cut their energy costs by consolidating down to a leaner server, storage and network estate, and so embark on just such an energy efficiency action plan.
The Green Grid’s Power Usage Effectiveness (PUE) score compares the total energy used by a data centre facility against the energy used by servers, storage and networking. The Green Grid finds that the majority of data centres run at a PUE greater than 2: for every unit of energy used within the server, storage and network equipment, at least another unit is used by “peripheral” equipment, which includes lighting, cooling and backup power supplies.
Organisations rationalise the IT estate through extensive use of virtualisation, then find that their PUE has worsened and wonder why.
Assume the original data centre had a PUE of 2.0, with half the energy used by the IT estate and half by peripheral equipment. If virtualisation halves the IT estate’s energy use, the overall saving is 25% of the facility’s total energy (half of the IT estate’s share).
If nothing has been done about the peripheral equipment, then PUE has suddenly risen to 3.0.
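The arithmetic behind this apparent paradox can be sketched as follows; the energy figures are the illustrative units from the example above, not measured values.

```python
# PUE = total facility energy / IT equipment energy.
def pue(it_energy, peripheral_energy):
    """Power Usage Effectiveness for given IT and peripheral energy use."""
    return (it_energy + peripheral_energy) / it_energy

before = pue(1.0, 1.0)           # original facility: 1 unit IT + 1 unit peripheral
after = pue(0.5, 1.0)            # IT load halved by virtualisation, peripherals unchanged
saving = 1 - (0.5 + 1.0) / 2.0   # fraction of total facility energy saved
```

Running the numbers shows total energy falling by a quarter even as the PUE score climbs from 2.0 to 3.0, because PUE measures the ratio of facility overhead to IT load, not absolute consumption.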
Simply shutting down CRAC units is not a solution
It is tempting for an organisation to simply shut down some of its computer room air conditioning (CRAC) units, but this can introduce new problems. Most organisations install more CRAC units than strictly necessary so that there is a degree of resilience should any one unit fail, and so that any unit can be taken out of service for planned maintenance. Removing CRAC units from the facility based purely on a theoretical need for less cooling can therefore reduce data centre resilience as well.
Cooling is an energy hog and there are many approaches that can help minimise the amount of cooling required within a data centre. For instance, you can run the data centre at a higher temperature and use highly targeted means of cooling such as hot and cold aisles.
Energy efficiency action plan
Further improvements can be made by changing the way the chiller units themselves work. Most existing CRAC units are fixed-speed. Once on, they tend to run at 100% continuously, turning off only when control systems can guarantee there is enough cold air available to maintain the data centre’s temperature for a period of time. In most cases this means the CRAC units run almost non-stop, wasting energy, and need more frequent maintenance than they otherwise would.
Pros and cons of variable speed CRAC systems
Newer CRAC units use variable speed capabilities – which also means variable power. The unit uses a feedback loop to ensure that the amount of chilled air the CRAC unit provides is just sufficient to meet the data centre’s needs. If the data centre is already within its thermal envelope, the CRAC units can lower their output and therefore become more energy efficient.
As long as the feedback loop is effective, the units will run far closer to a theoretical optimised limit and energy savings will be highly worthwhile.
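The feedback loop described above can be sketched as a simple proportional controller; the setpoint, gain and minimum fan speed here are illustrative assumptions, not figures from any particular CRAC product.

```python
# Minimal sketch of a variable-speed CRAC feedback loop (proportional control).
# Setpoint, gain and speed limits are assumed values for illustration only.
def crac_speed(temp_c, setpoint_c=24.0, gain=0.25, min_speed=0.2):
    """Return a fan speed fraction (0.0-1.0) from the temperature error."""
    error = temp_c - setpoint_c          # how far the room is above setpoint
    speed = gain * error                 # proportional response to the error
    return max(min_speed, min(1.0, speed))
```

When the room is already within its thermal envelope the controller idles at its minimum speed, drawing far less power than a fixed-speed unit running flat out; a real unit would add integral terms and safety interlocks on top of this.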
But there are a couple of downsides to moving to variable speed CRAC systems. The first is the obvious one: it means replacing older units, so there is a capital cost. The second is that even modern variable speed CRAC units carry energy overheads when running at reduced loads. The most energy-efficient CRAC unit is one that is not running at all, whereas the most cooling-efficient one is running at 100%.
These two contradictory aspects can be harnessed if there is a means of storing the output from the CRAC units. For example, the cooling output from the units can be passed through to an intermediary substance, such as water or another liquid, rather than being transferred directly to air.
Indeed, newer systems from the likes of IBM are looking at using a phase change substance, where the cold energy is used to “freeze” a liquid. Then, on thawing, it releases the cold back to the air as required. Using an intermediary store means fixed-speed CRAC units can be run at 100% when they are needed and then turned off completely when a sufficient store of cold has been created. As that store is depleted, the units can be turned back on again – at 100%.
You’ll need to carefully calculate the volume of store to ensure that the overall efficiencies work out. When combined with higher data centre operating temperatures and highly constrained, targeted cooling architectures, this can be more cost effective for many existing data centres than a complete move to variable-speed CRAC units.
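The run-flat-out-or-off duty cycle described above amounts to a hysteresis (bang-bang) controller against the cold store. A minimal sketch, assuming illustrative store thresholds and per-interval energy figures:

```python
# Bang-bang control of fixed-speed CRAC units against an intermediary cold
# store: units run at 100% until the store is nearly full, then shut off
# completely until it is nearly depleted. All capacities are assumed values.
def step(store_kwh, crac_on, demand_kwh, crac_output_kwh,
         store_max_kwh, low_frac=0.2, high_frac=0.9):
    """Advance one control interval; returns (new store level, CRAC state)."""
    if store_kwh <= low_frac * store_max_kwh:
        crac_on = True                    # store depleted: run at 100%
    elif store_kwh >= high_frac * store_max_kwh:
        crac_on = False                   # store full: turn off completely
    store_kwh += (crac_output_kwh if crac_on else 0.0) - demand_kwh
    return max(0.0, min(store_kwh, store_max_kwh)), crac_on
```

Sizing the store then comes down to choosing `store_max_kwh` so that the off periods are long enough to repay the cost of the store itself, which is the calculation the paragraph above refers to.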
An intermediary store also makes it easier to manage the failure of a CRAC unit, because stored cooling remains available for a known period of time. In addition, in climates where ambient temperatures are adequate for direct data centre cooling for even part of the year, the CRAC systems can be turned off completely, with external air needing only to be filtered and its humidity adjusted before being used in the data centre.
Don’t become a CRAC addict
Although CRAC systems are an important part of a facility, there is no need to become a CRAC addict.
A sound data centre energy efficiency action plan involves a little forethought and making sure the facility is matched to the needs of the IT estate. This goes a long way towards ensuring that a fully optimised environment provides suitable levels of overall availability while minimising energy use and ongoing maintenance costs.
Clive Longbottom is a service director at UK analyst Quocirca Ltd. and a contributor to SearchVirtualDataCentre.co.uk.