Demand for ever-greater computing capacity is leading to spiralling - and costly - power consumption. Danny Bradbury looks at the best ways to cut costs and boost efficiency
Speak to most IT managers about power consumption and they will give you a blank look. After all, it is generally the facilities department that pays the energy bill. But electricity is becoming an increasingly visible issue for IT departments. One way or another, they are being forced to deal with it.
There are three main drivers for reducing power consumption: environmental, financial and performance-related. In the datacentre, performance is a critical factor, says Brent Kirby, product marketing manager for Opteron processors at chip maker AMD. "The noise we keep hearing from customers is that datacentres are having problems increasing computing capability within the confines of their power and cooling capabilities."
As companies try to squeeze more performance from their computing infrastructures, they run into density and electricity issues. Many server rooms were built at a time when people didn't envisage packing as many servers into a small space as they do now. Simply getting the electricity to power such dense computing platforms into the server room is difficult enough. Then there is the air conditioning, which also requires a significant amount of energy, says Tikiri Wanduragala, EMEA eServer and storage consultant at IBM.
One way around the problem is to switch from traditional rack-based servers to blade servers, says Wanduragala, because blades integrate components such as switching, fans and storage, cutting down on consumption.
"What you can do is virtualise it and reduce power by spreading the load over more CPUs," says Wanduragala. By running an application across multiple servers within a blade chassis, you make each processor work less hard. That may be inefficient from a processing perspective, but it reduces the power each processor consumes - and therefore the heat it emits - while still keeping computing density relatively high.
But what processors will go on those blades? Chip designers are finally starting to realise the importance of power consumption within the processor itself. Transmeta was one of the first to drive this point home with its Crusoe and Efficeon processors, but others have taken up the gauntlet. AMD has released low-power versions of its Opteron processor, and also has a technology called PowerNow, which steps down power consumption based on application performance needs.
Intel has its own focus on power consumption at the processor level. Having previously addressed the issue only in its mobile Centrino line, it has vowed to tackle it at desktop and server level when it ships its next-generation microarchitecture next year. This will merge its NetBurst processor platform with the Banias platform that underpinned the Pentium M, a mobile processor lauded for its low power consumption.
The other big development, which most processor designers of any note are embracing, is multicore processing: putting more than one processor core on a single piece of silicon. On the server, this divides up the work of multithreaded applications; on the desktop, it makes the multitasking of single-threaded applications more effective.
"We are getting nearly twice the computing power within the same thermal envelope as our single-core part," says AMD's Kirby. "We did not have to increase the ceiling of our parts for our dual-core chips to exist."
The other way to reduce datacentre power consumption is to use virtualisation software to cram more applications onto one processor. Companies such as VMware offer this software, which Dave Thornley, service and support manager at Sheffield Hallam University, recently used to help freeze the rapidly rising power drain in his datacentre. "The power draw on the datacentre was increasing, as was the cost of paying for it," he says, adding that power consumption presented both a financial and a capacity problem. "We run a large uninterruptible power supply in the machine room and that was approaching capacity, to the point where we would not be able to add any more equipment."
The university ran a huge variety of servers, ranging from 1U rack units through to large clustered database systems. It was adding 20 to 30 servers a year on average, says Thornley, but each server's processor ran at only about 15% utilisation.
It used VMware's ESX software running on two four-way Xeon-powered Hewlett-Packard DL580 servers to create roughly 35 virtual machines running different applications. It has been out of the pilot phase for six months. "We have yet to move on the consolidation, in terms of pulling machines out and throwing them away, but every new server is going onto VMware," says Thornley.
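As a sanity check on the consolidation argument, Thornley's own figures can be combined in a couple of lines. Treating utilisation as simply additive is an illustrative simplification, not the university's sizing method:

```python
# If each standalone server idles at ~15% utilisation, the useful work of
# many machines fits into far less hardware once virtualised.
vms = 35            # virtual machines created (article figure)
utilisation = 0.15  # typical per-server utilisation (article figure)

busy_equivalents = vms * utilisation  # aggregate load, in "fully busy servers"
print(f"{busy_equivalents:.2f} fully-loaded servers' worth of work")  # 5.25
```

In other words, the aggregate load of all 35 machines amounts to barely more than five fully busy servers, which is why it fits comfortably on two four-way hosts.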
Power for the two DL580 servers costs about £1,600 a year; the 25 DL360 servers they replace would cost £12,025 a year, representing a saving of £10,425, according to Thornley's figures - which do not include the cost of cooling. However, the software licence cost must be factored into the equation.
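Thornley's savings arithmetic is easy to reproduce. The figures below are his; the per-server division is a back-of-envelope step of our own:

```python
# Annual power-cost comparison, using the figures Thornley reports.
cost_dl580_pair = 1600    # £/year to power the two DL580 hosts
cost_dl360_fleet = 12025  # £/year to power 25 standalone DL360s

saving = cost_dl360_fleet - cost_dl580_pair
print(f"Annual saving: £{saving}")                      # Annual saving: £10425

# Implied power cost per standalone DL360
print(f"Per DL360: £{cost_dl360_fleet / 25:.0f}/year")  # Per DL360: £481/year
```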
But while people like Thornley concentrate on the datacentre, Sumir Karayi, CEO of 1E, is more worried about desktop machines. Whereas finance is sometimes a secondary consideration to capacity in the datacentre, money is everything on the desktop. Karayi's company sells NightWatchman software, which is designed to manage power consumption on desktop computers.
Sleep mode and power management facilities in corporate PCs are often disabled, says Karayi. "IT departments can always tell facilities that they don't care how much power costs because they have to update PCs with virus patches because it is a security risk to the business," he says. "So a lot of companies we go to today do not turn off computers at night and weekends."
NightWatchman allows IT departments to turn PCs on and off remotely to save costs. A December 2004 report on household computing from research firm Intertek suggests that a consumer PC uses about 300 kilowatt hours (kWh) a year. Even if managing power consumption on a PC saves only a few tens of pounds a year, a large company could begin to realise significant savings.
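Scaled across a fleet, the Intertek figure makes the case quickly. In the sketch below, the 300kWh figure is from the report; the tariff, fleet size and savings fraction are illustrative assumptions:

```python
# Back-of-envelope fleet savings from PC power management.
kwh_per_pc = 300          # Intertek figure quoted in the article
tariff_gbp_per_kwh = 0.07 # assumed commercial electricity tariff
fleet_size = 5000         # assumed number of corporate desktops

annual_cost_per_pc = kwh_per_pc * tariff_gbp_per_kwh  # ~= £21 a year

# Suppose shutdown policies cut out-of-hours waste by half:
saving_fraction = 0.5
fleet_saving = fleet_size * annual_cost_per_pc * saving_fraction
print(f"£{fleet_saving:,.0f} a year")  # £52,500 under these assumptions
```

A pound or two saved per machine is invisible; tens of thousands of pounds across a large estate is a budget line.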
The alternative is to go with a thin client that uses fewer power-hungry components. Richard Barrington, head of government affairs for computer supplier Sun Microsystems, promotes the Sun Ray thin client. "We share one processor between 25 users and that sits in the datacentre somewhere," he says. "The overall energy consumption of a Sun Ray is 85%-90% below that of a PC."
Both Karayi and Barrington are disappointed with the government-funded Carbon Trust, which advises UK businesses on cutting energy consumption with a view to reducing the UK's carbon emissions. The trust recently turned down Karayi's application to have energy-saving technology for PCs added to the Energy Technology List (ETL), and has also told Sun that ICT equipment is a low priority, says Barrington.
Companies purchasing technologies and products listed on the ETL can benefit from a 100% tax write-off on their capital expenditure in the first year, as opposed to the conventional staggered reimbursement, which occurs in yearly 25% increments. But the Carbon Trust, which reviews the ETL each year and makes recommendations to the UK government, has steadfastly refused to include ICT equipment on the list. Why?
It is a question of volatility and practicality, says Garry Feldgate, director of delivery and external relations for the Carbon Trust. Once a technology category has been added to the ETL, manufacturers can apply to have individual product models added to the accompanying product list. These products generally stay on the list for years at a time, but the volatile nature of the IT business means products become obsolete very quickly, making list management a problem, says Feldgate. "I have an IBM laptop, which I was issued with four months ago. It was the latest IBM laptop, and it is now discontinued," he laments. He adds that the savings from energy-efficient computer equipment dwarf what a company would receive from a tax break on that kit.
But there must be some answer, and the UK government doesn't seem particularly bothered about finding one. It has not joined the voluntary European initiative, called the Group for Energy Efficient Appliances (GEEA), which promotes the use of energy-efficient office equipment. DEFRA refuses to comment.
Clearly, if companies want to conserve power on the desktop, then the same rules apply to the datacentre - do it yourself. You will have to find your own case to put to the finance director. Those IT managers who care about the environment will have to sneak it in, hidden beneath a balance sheet.
Case study: Drug company gets injection of extra power
If there is one sector that requires more than its fair share of number-crunching, it is life sciences. Inpharmatica, a company that carries out drug discovery computing on a contract basis for large pharmaceutical firms, found its existing hosted computing infrastructure bursting at the seams.
It had about 2,500 CPUs running in Sun and IBM 1U rack servers located in its own offices and a hosted datacentre. It needed to move to a higher-density computing infrastructure to cope with the work it does calculating the relationships between proteins.
Steve Tringham, IT manager at Inpharmatica, moved to IBM blade servers running 400 3.5GHz Intel Xeon processors with 64-bit extensions. These used more electricity than AMD's rival Opteron chips, but Tringham wanted as much processing power as he could find.
The new solution beefed up Inpharmatica's number-crunching capability, but it came at a cost. An average rack uses 3kW of electricity, says Tringham, whereas the specialised racks used for the IBM blade servers need 18kW each. Inpharmatica's existing hosting company could not provide that much power, and it would also stretch his own server room. Energy capacity was a potential showstopper.
"People like IBM and other suppliers pitch high-density computing, which is great because you use less space," he says. "But if the computer rooms in your building can only cope with 18kW in total, it is still an empty computing room. Until you can spend the money to put a load more power and air conditioning into the same space, you have not got a winner."
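The maths behind Tringham's "empty computing room" point is stark. Using his rack figures, and his own example of a room limited to 18kW in total:

```python
# Why high-density racks can leave a computer room "empty":
# power, not floor space, becomes the limiting factor.
standard_rack_kw = 3   # Tringham's figure for an average rack
blade_rack_kw = 18     # Tringham's figure for the IBM blade racks
room_capacity_kw = 18  # the room power budget from his example

print(room_capacity_kw // standard_rack_kw, "standard racks")  # 6 standard racks
print(room_capacity_kw // blade_rack_kw, "blade rack")         # 1 blade rack
```

The same room that holds six conventional racks maxes out on a single blade rack, leaving most of the floor space unusable without a power and cooling upgrade.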
The company moved its whole infrastructure to Globix, a company that provides hosting facilities in the City of London. According to director Christian Eckley, Globix built out very high-capacity datacentres just before the dotcom bubble burst, leaving it all dressed up with nowhere to go, and with a nasty period spent in Chapter 11 bankruptcy. Nevertheless, Globix bounced back, and now uses its enhanced power capacity to provide high-density hosting for customers such as Inpharmatica.
Now, Tringham's server infrastructure is consolidated and, while the energy part of its hosting cost is likely to be rather large, at least he does not have to worry about finding enough power to make the computers - and of course the air conditioning - work.