Power Usage Effectiveness, or PUE, is the phrase du jour for measuring just how energy efficient an organisation’s data centre is. But PUE remains a pretty crude measurement in many data centres, and data centre managers should look instead at an effective PUE, or ePUE.
At its simplest level, PUE is the ratio of the total energy used across the whole of a data centre to the energy used to power the IT equipment. The basic equation can be shown as follows:

PUE = Total energy / IT energy
Therefore, energy used in powering cooling systems, uninterruptable power supplies (UPSes), lighting and so on causes the PUE to be a higher number. Theoretical perfection is a PUE of 1, where all power is put into the IT equipment, with none being put into the support environment.
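As a minimal sketch of the calculation, using entirely illustrative wattages (none of these figures come from a real facility):

```python
# Hypothetical figures, in kW, for illustration only.
it_energy = 100.0   # servers, storage and networking
cooling = 80.0      # CRAC units and chillers
ups_losses = 15.0   # losses in the UPS
lighting = 5.0      # lighting and other overheads

# PUE = total facility energy divided by IT energy.
total_energy = it_energy + cooling + ups_losses + lighting
pue = total_energy / it_energy
print(f"PUE = {pue:.1f}")  # 200 / 100 -> PUE = 2.0
```

Every watt of overhead added to the numerator pushes the figure further above the theoretical ideal of 1.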
The majority of existing data centres are running at PUEs of around 2.4, with large, multi-tenanted systems running at around 1.8 or less.
Many different approaches have been brought to the fore to improve PUE. These include free air cooling, variable rate computer room air conditioning (CRAC) units, lights-out operation, and modular computing using hot and cold aisles or even containerised systems to better control how energy is used.
Let’s take a look at a couple of data centre examples where PUE doesn’t work:
A data centre manager is set the task of improving the utilisation of an existing IT estate of servers. It is apparent that virtualisation is an easy way to do this. By bringing down the number of servers in use from, say, 1,000 to 500, the data centre manager can save on energy costs as there are fewer hardware resources. In addition to lower power costs, virtualisation helps the business save money on the hardware itself, on licensing, maintenance and so on.
This has to be great news, right?
Not exactly. The problem is that the cost of re-engineering the data centre facility tends to militate against making matching changes to the cooling systems and UPSes. At first glance, this does not seem to be a problem -- if the cooling and UPS managed to support 1,000 servers, they will easily serve 500, so the facilities management professionals leave them as they are. It is more cost effective for the business.
Let’s assume a simple model. The old data centre had a PUE of 2: For every watt of energy going to the servers (and storage and networking equipment), a further watt was being used for cooling and UPS and so on. Now, the use of virtualisation has cut the energy being used by the servers by a half, but has left the cooling and UPS as they were. Therefore, the PUE has gone from 2 to 3, a horrendous figure that would frighten any IT manager looking to improve the organisation’s green credentials.
A move from a PUE of 2 to 3 is a hard message for data centre managers to digest when, in fact, overall energy bills are down by a quarter. PUE fails in this situation.
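The arithmetic behind this scenario can be sketched as follows, normalising the pre-virtualisation IT load to one watt as the text does:

```python
# Before virtualisation: PUE of 2 -- one watt of overhead
# per watt of IT load (illustrative, normalised figures).
it_before = 1.0
overhead = 1.0                      # cooling, UPS, etc., left unchanged
total_before = it_before + overhead # 2.0

# After virtualisation: IT load halves, overhead stays put.
it_after = it_before / 2            # 0.5
total_after = it_after + overhead   # 1.5

pue_after = total_after / it_after          # 1.5 / 0.5 -> 3.0: PUE worsens
saving = 1 - total_after / total_before     # 1 - 1.5/2.0 -> 0.25: bills fall 25%
```

The metric worsens by 50% while the actual energy bill falls by a quarter, which is exactly the perverse outcome the article describes.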
Let’s take another, totally different, example. A company owns a data centre with quite a lot of existing space in it. The business also happens to be cash rich. It does not want to change the way IT already runs the servers, storage or network in the data centre. This is because the IT pros think it is functioning fine. But, they operate in a market where sustainability is an important message. What can they do about it?
They could go out and buy a load of old servers and storage off eBay and install them in their data centre. If they then turn the equipment on, but don’t use it, PUE allows them to count the energy being pushed into this kit as “useful” energy in the denominator of the PUE equation. As long as the company manages its cooling effectively and is prepared to sacrifice all the unwanted equipment in the case of a power failure, the company's PUE is improved. Crazy? You bet.
It would be easy to turn PUE into a measure that makes either of these scenarios difficult. If the utilisation levels of the IT equipment are measured as well and brought into the equation, then a more meaningful measure can be made -- an effective PUE, or ePUE.
Therefore, existing PUE numbers would be pushed up, and the new equation would look like this:
ePUE = Total energy / (Utilisation rate x IT energy)
If, in the first case, existing utilisation rates were running at 10%, the ePUE goes from 2 to 20. However, if utilisation rates are driven up through the use of virtualisation to 50%, then the new ePUE after virtualisation moves to 6 (total energy = 3, IT energy = 1 and utilisation rate = 0.5) – a very good improvement, rather than a rise in PUE.
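The two ePUE figures quoted above can be reproduced with a small helper, again using the article's normalised, illustrative numbers (the function name is mine, not an established metric API):

```python
def epue(total_energy, it_energy, utilisation):
    """Effective PUE as described in the text:
    total energy over (utilisation rate x IT energy)."""
    return total_energy / (utilisation * it_energy)

# Before virtualisation: PUE of 2, utilisation of 10%.
before = epue(total_energy=2.0, it_energy=1.0, utilisation=0.10)

# After virtualisation: PUE of 3, utilisation of 50%
# (IT energy renormalised to 1, so total energy is 3).
after = epue(total_energy=3.0, it_energy=1.0, utilisation=0.50)

print(before, after)  # ePUE falls from 20 to 6
```

Unlike the raw PUE, which rose from 2 to 3, the ePUE improves sharply, rewarding the higher utilisation that virtualisation delivers.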
In the second case, it would still be possible for the organisation to apply false loads against the servers and to load up storage with useless files. But this would result in massively increased heat profiles which could then lead to a need for better cooling systems – and that would push the ePUE back up again.
Anything that helps an organisation position itself against others in its space when it comes to energy use and data centre effectiveness has to be welcomed. However, a flawed approach such as PUE can lead organisations and their customers to the wrong conclusion.
The use of a more rounded ePUE approach instead of PUE makes comparing data centres and energy use a more level playing field and puts the focus where it needs to be -- on the efficiency and utilisation rates of the IT assets in the data centre.
Clive Longbottom is a service director at UK analyst Quocirca Ltd. and a contributor to SearchVirtualDataCentre.co.uk.