Identifying truths and myths of datacentre efficiency

There is no one-size-fits-all solution to optimising datacentre efficiency, so CIOs should consider all available options. Archana Venkatraman reports


A power-hungry datacentre is one of an IT executive’s biggest concerns. Growing data volumes and expanding storage hardware increase a datacentre’s energy requirements, while tighter budgets and stricter carbon emissions regulations restrict electricity usage.

But despite the pressure, many existing datacentres remain inefficient, with servers sitting idle 90% of the time, says David Flynn, chief executive of Fusion-io.

Cooling accounts for a significant portion of a typical datacentre’s energy consumption and is among the biggest contributors to an enterprise’s carbon emissions.

Virtualisation should improve utilisation, but far too few servers are virtualised, according to Clive Longbottom, managing director of datacentre consultancy firm Quocirca.

“A majority of servers are still not virtualised and are running at lower than 10% utilisation rate, with storage being at around 30%. Unless utilisation rates are dealt with, datacentres cannot be highly energy-efficient,” he says.

And while server processors are very powerful, an enterprise can only use 20-30% of that power because storage systems cannot feed data to the CPU fast enough.

Industry efforts towards datacentre efficiency

Large businesses now face regulatory pressure to make their datacentres greener. For instance, prime minister David Cameron has made it mandatory for FTSE-listed companies to report their carbon emissions from April 2013, making them accountable and transparent about their environmental impact.


In addition, the Carbon Reduction Commitment (CRC) Energy Efficiency Scheme requires enterprises, both private and public, to reduce their carbon emissions or face hefty penalties. The mandatory scheme, aimed at improving energy efficiency and cutting emissions, applies to organisations using more than 6,000MWh of electricity a year – equal to an annual electricity bill of approximately £500,000, according to Mark Allingham, a datacentre design professional.

There are between 4,000 and 5,000 such organisations in the UK, collectively accounting for around 10% of the UK’s total emissions, he says.

But cost is still the single biggest driver of datacentre efficiency. An enterprise running an inefficient datacentre risks overspending on IT equipment, then on the associated licences for operating systems, application servers and databases, as well as on maintenance and the extra systems administrators needed to manage the equipment, says Longbottom.

At the moment, it is only worth doing something if it saves money – no-one is really investing in “green”, he says. “But these cost-saving activities are washed with a green message and are added to the CSR [corporate social responsibility] statement.”

To understand datacentre energy efficiency, CIOs should consider looking at metrics that measure electricity consumed for given workloads. “A datacentre’s energy efficiency can be measured in terms of how much data can be processed per watt, and how much data can be stored per watt,” says Flynn.
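Flynn’s work-per-watt framing can be sketched as a simple ratio. The figures below are illustrative assumptions, not measured numbers:

```python
# Illustrative sketch of a work-per-watt comparison between two facilities.
# All figures are assumed for illustration, not real measurements.

def throughput_per_watt(gb_processed: float, watts: float) -> float:
    """Data processed (GB) per watt of power drawn."""
    return gb_processed / watts

# Two hypothetical sites drawing the same power but doing different work
site_a = throughput_per_watt(gb_processed=50_000, watts=200_000)
site_b = throughput_per_watt(gb_processed=80_000, watts=200_000)

print(f"Site A: {site_a:.2f} GB/W, Site B: {site_b:.2f} GB/W")
# prints: Site A: 0.25 GB/W, Site B: 0.40 GB/W
```

On this measure, the second site does 60% more work for the same power, which is the kind of gap better utilisation is meant to close.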

“While different people define datacentre efficiency differently, for Fusion-io it is using the resources you have to the fullest with the least amount of waste,” he adds. Fusion-io has developed ioDrives, which it says put idle CPUs back to work and deliver efficiency across the datacentre, so its customers do not have to sprawl out their servers and storage to meet the data demands they face.

Architectural considerations

Many companies are changing their architectural designs and moving to a “cloud-like” structure to increase utilisation – motivated by the need for agility and energy efficiency, says Ian Brooks, European head of innovation and sustainable computing at Hewlett-Packard.

A datacentre that is very disk-centric can waste energy powering mechanically spinning disks, as well as cooling the building to remove the heat those drives create. Innovative new-age datacentres built by companies such as Facebook, which use flash memory in place of disk, consume less energy.

As flash memory becomes more affordable and energy costs keep rising, more companies are likely to switch to it, not just to improve application performance, but also because flash generally runs cooler than hard disks, says Fusion-io’s Flynn.

HP has designed an energy-efficient datacentre that uses solar power, aiming to cut power costs as well as minimise environmental impact. Google will use wind energy to power its Oklahoma datacentre as part of its green strategy. Rackspace is working with the Open Compute Project on energy efficiency best practices. “We are working with a community of peers to constantly rethink what we must have and what we can do without to improve efficiency,” says Melissa Gray, director of sustainability at Rackspace.

Hot air

Cooling techniques such as air cooling, water-based cooling and evaporative cooling, the use of renewable energy such as wind power, and containerised datacentres are all seen as effective ways of controlling IT’s energy use. But each of these approaches has limitations.

For instance, chiller-less datacentres – which rely on outside air for infrastructure cooling rather than expensive, energy-guzzling air-conditioning units – significantly reduce operating and capital expenses, but they can only be built in climates that avoid high temperatures and humidity.

Industry bodies such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommend the use of such datacentres.

A Dell study on climate data in the US, Europe and Asia found that chiller-less datacentres require IT equipment that can withstand short-term excursions of up to 45°C. But much available IT equipment can withstand only 35°C. Aiming to overcome this, Dell built its next-generation servers to withstand temperatures of up to 45°C.

The image above shows that the geography suitable for a chiller-less, fresh-air cooled datacentre corresponds to the dark blue region in Alaska and northernmost Canada. Source: Dell.

The image above shows an air-side free cooling map of Japan. Under the assumed conditions of 27°C maximum dry bulb temperature and a maximum dew point of 15°C, none of the islands of Japan are suitable for chiller-less fresh air cooling. Source: Dell.
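The suitability test implied by the Dell study can be sketched roughly as follows; the temperature readings and equipment limits below are illustrative assumptions:

```python
# Illustrative sketch of the suitability test implied by the Dell study:
# a site qualifies for chiller-less fresh-air cooling only if short-term
# temperature excursions stay within what the IT equipment can withstand.
# The readings and limits below are assumptions for illustration.

def suitable_for_free_cooling(hourly_peaks_c, equipment_limit_c):
    """True if every recorded peak temperature stays at or below the
    equipment's tolerated maximum."""
    return max(hourly_peaks_c) <= equipment_limit_c

# A hypothetical run of summer peak readings (degrees Celsius)
summer_peaks = [31, 33, 36, 38, 41, 39, 34]

print(suitable_for_free_cooling(summer_peaks, 35))  # standard kit (35°C limit): False
print(suitable_for_free_cooling(summer_peaks, 45))  # hardened kit (45°C limit): True
```

The same climate rules a site out for standard equipment but not for the hardened servers Dell describes.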

Busting the energy efficiency myths

Metrics used to measure datacentre efficiency provide crucial insights into what an IT executive can do to improve it. Power Usage Effectiveness (PUE), the ratio of a facility’s total power draw to the power delivered to IT equipment, is the most common such metric.

A PUE of 1.0 is accepted as the ideal, but is it achievable? Longbottom says it is not. A single 2W LED light used for emergency maintenance in a lights-out, free-air-cooled, non-UPS-based datacentre means that not all the energy is being used for IT purposes, even if that light is only turned on once a year, he says.

“Even a 99% efficient UPS means that there is a 1% loss, so the datacentre can only have a nominal PUE of 1.01,” adds Longbottom. Typically, a 10-12-year-old datacentre might have a PUE of 2.8, which is not great, but not awful, says HP’s Brooks. HP’s audits showed that its clients are achieving PUEs in the low twos – say 2.3 to 2.6 – which can be further improved using close-coupled air-conditioning units, he says.
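As a minimal sketch, PUE is total facility energy divided by the energy that reaches the IT equipment; the figures below are illustrative:

```python
# Minimal sketch of the PUE calculation: total facility energy divided by
# the energy reaching the IT equipment. Figures are illustrative only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 means every watt goes to IT."""
    return total_facility_kwh / it_equipment_kwh

# An older facility: 2,800 kWh drawn, 1,000 kWh reaching the IT kit
print(pue(2800, 1000))  # 2.8, typical of a 10-12-year-old datacentre

# After cutting cooling and distribution overhead
print(pue(1600, 1000))  # 1.6
```

Everything above 1.0 is energy spent on cooling, power distribution and other overhead rather than computing.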

HP says its EcoPOD products are designed to help enterprises achieve a PUE of 1.05. Meanwhile, according to Fusion-io’s Flynn, while disk-spinning infrastructure cannot deliver an optimal PUE, flash memory platforms can not only help businesses achieve better PUE but can also lead to consolidation of servers.

But PUE is a pretty crude measurement in many datacentre instances, experts insist. “PUE does not measure the efficiency of the compute or the emissions from the power consumed or other aspects of the datacentre,” says Rackspace’s Gray.

The more you know about the computing workload, the cooling needs and the climate, the better you can fine-tune the datacentre’s energy use, she says. IT executives must look at effective PUE (ePUE), which takes into account the utilisation levels of IT equipment and is more meaningful, says Longbottom.

 ePUE = total energy / (utilisation rate x IT energy)
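A quick sketch shows how Longbottom’s formula penalises idle kit. The energy figures are illustrative assumptions, but the 10% utilisation rate echoes the figure quoted earlier in the article:

```python
# Sketch of Longbottom's effective PUE, which penalises idle IT equipment:
#   ePUE = total energy / (utilisation rate x IT energy)
# Energy figures are illustrative assumptions.

def epue(total_kwh: float, it_kwh: float, utilisation: float) -> float:
    return total_kwh / (utilisation * it_kwh)

total_kwh, it_kwh = 1500.0, 1000.0

print(total_kwh / it_kwh)             # nominal PUE: 1.5

# At the sub-10% utilisation quoted earlier, the effective figure is far worse
print(epue(total_kwh, it_kwh, 0.10))  # ePUE: 15.0

# Raising utilisation to 50% through virtualisation brings it down sharply
print(epue(total_kwh, it_kwh, 0.50))  # ePUE: 3.0
```

A facility can thus post a respectable nominal PUE while wasting most of its energy on servers doing nothing.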

Steps towards energy efficiency

So what can organisations do to improve energy use in their datacentres? A good starting point is the Green Grid organisation. But there is no one-size-fits-all solution, so CIOs should consider all available options, such as cloud computing, server virtualisation and IT consolidation.

“An infrastructure running in an empty office space that has been repurposed and has a lot of redundant servers will perform poorly compared with a datacentre purpose-built for the cloud,” says Gray.

Replacing four- to five-year-old servers with highly energy-efficient ones can achieve an energy-saving payback within a year, according to HP. “But a lot of the choice comes down to the environmental conditions of where the datacentre is located,” says Longbottom.
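A simple payback calculation illustrates the kind of sum behind HP’s claim. The refresh cost, power draws and electricity tariff below are assumptions for illustration, not HP’s figures:

```python
# Hedged sketch of a simple payback calculation for a server refresh.
# The refresh cost, power draws and tariff are assumptions for illustration,
# not HP's figures.

def payback_years(capex: float, old_kw: float, new_kw: float,
                  price_per_kwh: float = 0.12) -> float:
    """Years for annual electricity savings to repay the hardware cost."""
    annual_saving_kwh = (old_kw - new_kw) * 24 * 365
    return capex / (annual_saving_kwh * price_per_kwh)

# Replacing kit drawing 10kW with kit drawing 6kW, at an assumed £4,000 cost
years = payback_years(capex=4_000, old_kw=10.0, new_kw=6.0)
print(f"Payback in {years:.2f} years")  # just under a year
```

Note this counts electricity alone; cooling savings and reduced licence or maintenance costs would shorten the payback further.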

For example, evaporative cooling may sound good for a warm climate, but if the prevailing conditions are 99% humidity – as would be the case in Singapore and Malaysia – evaporation will not work, he adds.

Meanwhile, air cooling, if implemented well, can be used even in relatively warm areas – but will need backing up with standard computer room air conditioning (CRAC) systems or other cooling methods to cover times when the external conditions are warmer.

Energy efficiency will remain a key focus for datacentre managers. But improving it does not necessarily mean massive changes to the facility and the equipment housed within it; simple, small, yet strategic steps can deliver results.
