IT directors are being forced to focus on the management and structure of their datacentres as it becomes clear that the supply of energy is not limitless. Plugging yet another server into the datacentre is no longer an option in Canary Wharf, where companies face major difficulties securing extra power.
Increasing the efficiency of a datacentre has become a necessity for many CIOs, and the most effective measures often involve housekeeping and optimisation to see change in the short term.
Although some companies are currently unaffected by power shortages, the cost of powering and cooling a datacentre is becoming a pressing concern. A study published last year by consultancy BroadGroup found that the average energy bill to run a corporate datacentre in the UK is about £5.3m per year. The report predicted that this would more than double to £11m over five years and that the UK would become the most expensive place in Europe to run a datacentre.
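As a rough sanity check on those figures, the annual growth rate implied by the BroadGroup projection can be sketched in a few lines of Python (the function name is illustrative, not from the study):

```python
# Sketch: the annual growth rate implied by a £5.3m energy bill
# reaching £11m in five years (figures from the BroadGroup study).
def implied_annual_growth(start, end, years):
    """Compound annual growth rate from start to end over years."""
    return (end / start) ** (1 / years) - 1

rate = implied_annual_growth(5.3, 11.0, 5)
print(f"{rate:.1%}")  # roughly 15.7% a year
```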
For the datacentre "super-user", unlike smaller businesses, access to energy is an issue. Google and Microsoft are building datacentres the size of two football pitches in the north-west of America, where hydroelectric power is plentiful. But Tikiri Wanduragala, senior server consultant at IBM, says, "Every large UK account is also talking about power consumption of datacentres. It is their top priority."
Mike West, managing director of datacentre specialist Keysource, says, "Medium-sized businesses are also having problems removing heat with their legacy cooling systems."
Organisations have arrived at this juncture partly because manufacturers and end-users have been working at cross-purposes. "For a long time manufacturers have been obsessed with occupying less physical space and producing ever-smaller chips and boxes," says David Elwen, a director at IT consultancy DMW Group.
One consequence of this is that organisations are faced with having to find ways to power and cool denser and hotter components crammed into the same amount of space.
Datacentre designers have also been guilty of "comfort cooling" for people rather than computers. Google published research in February 2007 that found disc drives can tolerate temperatures of 38 degrees Celsius without suffering a higher failure rate. The findings, based on five years' study of Google's own datacentres, contradict established thinking that equipment must be kept cool to function.
Compounding these design follies is a lack of communication between facilities managers and IT managers. "The facilities manager is a very different animal to the IT manager. He knows about power loads and distribution of airflows, but nothing about the IT equipment that sits in the cabinets he is looking after," says Elwen.
Hassan Moezzi, director of business development at datacentre design and operations specialist Future Facilities, says, "IT staff are ignorant of the physics of cooling, and IT departments are shifting kit in and out of datacentres like a yo-yo".
The problem is exacerbated by the fact that the two key aspects of the datacentre - its management and its usage - are often outsourced to different parties, so responsibility for them remains split.
The separation between energy consumption on one hand, and IT procurement and usage on the other, was further confirmed last month by a Green Technology Initiative survey. It found that only 20% of user organisations surveyed considered both IT requirements and energy consumption when purchasing equipment.
Dan Sutherland, founder of the Green Technology Initiative, believes that making IT directors accountable for the infrastructure they deploy and manage would make an immediate difference to carbon emissions.
"If IT directors were targeted and given budgets and bonuses according to how much they spent on power, they would look a lot more carefully at the kit they use in the datacentre. Utility bills have for so long been merely treated as an accepted annoyance," says Sutherland.
Getting people from all corners of the IT operation around one table is the new challenge, says Wanduragala. "It is a big challenge and an even bigger struggle. It requires the communications manager, the storage and the server guys and the procurement people to all sit down together."
Wanduragala says he is witnessing a shift in datacentre management away from software and server teams towards the datacentre manager. "Now any procurement decision must be based on all parameters, and the biggest of those is raw efficiency," he says.
The smart action for the IT director is to stop the problem getting worse, says Wanduragala. This could entail a veto of new hardware purchases until existing capacity has been exploited. "You may have to make people jump through hoops before they get to buy a new server," he says.
Such a course of action would probably mean virtualising every server, which, in turn, means persuading business units to share servers. Wanduragala says he is observing an alternative trend of organisations replacing multiple servers with one higher-performance - and more efficient - server.
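The arithmetic behind that consolidation trend can be illustrated with a short sketch. The wattage figures below are hypothetical, chosen only to show the shape of the calculation:

```python
# Illustrative sketch (hypothetical figures): energy saved by replacing
# several lightly loaded servers with one higher-performance host.
def consolidation_saving(n_servers, watts_each, host_watts):
    """Annual kWh saved by consolidating n_servers onto a single host."""
    saved_watts = n_servers * watts_each - host_watts
    return saved_watts * 24 * 365 / 1000  # watts -> kWh per year

# e.g. ten 400W servers replaced by one 800W host
print(consolidation_saving(10, 400, 800))  # 28032.0 kWh a year
```

Any real estimate would also need to account for the extra cooling load each watt of IT power brings with it.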
Virtualisation is one formula for squeezing every ounce of performance out of existing infrastructure. Another option that would work for power-intensive applications is grid computing. Ian Osborne, project manager at Grid Computing Now, says, "The datacentre of the future will look like a much more flexible infrastructure."
Osborne says that grid middleware could allow the IT manager to distribute a processing task across available resources. With idle servers still consuming as much as 85% of their peak power, grid computing is an attractive option, although realistically only for CIOs managing huge scientific and financial workloads.
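The scheduling idea behind such middleware can be sketched very simply. The following is a toy least-loaded scheduler, not any specific grid product: each task goes to whichever node currently has the smallest total load, so idle capacity is soaked up rather than left burning power:

```python
import heapq

# Toy sketch of grid-style scheduling: assign each task to the
# least-loaded node, tracked with a min-heap of (load, node id).
def distribute(task_costs, n_nodes):
    nodes = [(0, i) for i in range(n_nodes)]
    heapq.heapify(nodes)
    assignment = {i: [] for i in range(n_nodes)}
    for cost in task_costs:
        load, i = heapq.heappop(nodes)       # node with least work so far
        assignment[i].append(cost)
        heapq.heappush(nodes, (load + cost, i))
    return assignment

print(distribute([5, 4, 3, 2, 1], 2))  # {0: [5, 2, 1], 1: [4, 3]}
```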
Turning applications off when they are no longer needed constitutes a big, and more immediately accessible, saving. An audit of the applications of a blue chip company by DMW Group revealed that 20% were never used, yet remained switched on.
"The problem is that when replacement systems are introduced no one goes around switching legacy applications off," says Elwen. He recommends putting redundant applications into a "graveyard" area, where they are eventually archived and switched off.
Virtualisation, grid computing and better housekeeping are all options within the IT manager's domain, but simply knowing what you consume is a good start.
West says, "IT managers are surprised when we do an audit comparing the consumption of the computer room with the rest of the building. It is often three or four times greater than expected and can account for 25% of the building's total consumption."
Powering and cooling your datacentre
How significant is the cost of powering and cooling a datacentre?
Ten years ago, 70% of the total cost of acquisition and management of servers was in the cost of the hardware. Today, about 70% of the cost of managing servers revolves around power and cooling, according to research from analyst firm IDC.
Unlike the US, where super-consumers are drawn to states with the most attractive power supplies and local laws, UK users are more or less constrained to the National Grid.
Purchasing alternative sources of energy has debatable value, says David Elwen, director of DMW. "Frankly, you can forget about renewable energy in the UK. It is tokenism."
However appealing wind or other alternative energy sources may be, the electricity they generate is simply poured into the grid and could end up at any destination, Elwen says.
"The only way to use truly renewable energy is to produce it yourself and then sell any surplus back to the grid. That is the reverse of the usual datacentre that runs on mains and generates its own back-up," says Elwen.
Some hospitals produce their own power, but the exacting demands of generating at a constant load, together with strict planning permission requirements, make it an unappealing option for datacentres.
What are computer manufacturers doing to lessen power consumption?
Choice of power supply may be largely out of the hands of UK IT directors. Likewise, the physical architecture of chips and servers, and its consequent power draw, is largely a given, although manufacturers are making efforts to improve it.
Earlier this year, IBM announced a breakthrough in the design of chips from the traditional 2D design to a 3D stack. IBM's Semiconductor Research and Development Center has found that stacking multiple chips together uses less power and generates less heat.
Is cooling datacentres a big issue?
"Every megawatt required to power hardware takes another 1.5MW to cool it," says Hassan Moezzi, director of business development at Future Facilities. Choosing better cooling methods is very much within the scope of the IT manager, and it makes a big difference to carbon emissions.
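Moezzi's ratio is easy to turn into a back-of-the-envelope figure: at 1.5MW of cooling per megawatt of IT load, total draw is 2.5 times the IT load. A minimal sketch, using only the ratio quoted above:

```python
# Sketch of the ratio quoted above: every megawatt of IT load needs
# roughly another 1.5MW for cooling, so total draw is 2.5x the IT load.
COOLING_RATIO = 1.5  # MW of cooling per MW of IT load, per the quote

def total_power(it_load_mw):
    """Total site draw in MW for a given IT load."""
    return it_load_mw * (1 + COOLING_RATIO)

print(total_power(1.0))  # 2.5 MW for a 1MW computer room
```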
Laying out racks and cabinets haphazardly can also interrupt complex cooling airflows and create hotspots.
Future Facilities advocates that, instead of aiming for a blanket temperature across a room, IT directors consider the datacentre as a giant waterbed. "If you press one end, the other will be affected," says Moezzi.
Software that simulates the airflows around a datacentre is available to help the IT manager better plan the layout of kit.
Is there alternative cooling?
Some firms are selecting under-floor refrigeration methods instead of air conditioning. This allows for more localised cooling.
Other companies are seeking alternatives to refrigeration, exploring options such as ground water and fresh air to cool racks and cabinets of computing gear.
What is your take on Helen Beckett's opinion? E-mail email@example.com