Feature

Blade datacentres demand cooling and power distribution rethink

Insufficient power and excessive heat are the two biggest issues IT directors face when running a datacentre, according to a survey of 112 users by analyst firm Gartner.

Discussing his experience of deploying blade servers in a datacentre, Owen Williams, head of IT at property firm Knight Frank, said, "When installing blade servers, watch out for blade cabinets, which draw more power than the older cabinets, and the blades throw out a lot of heat - you may need to upgrade your air conditioning."

He said this was because older racks draw cold air in from the bottom and blow hot air out from the top. Blade racks, on the other hand, draw air in from the front and blow it out of the back.

"You need grilled floor tiles in front of the racks to blow cold air in the right location and you need to arrange your racks back to back so that hot air from the back of each rack is drawn up and chilled by your air conditioning units," Williams said. Such a set-up avoids hot air from one rack being blown into the front of the rack behind.

Gartner analyst Andy Butler said, "The chief cooling challenge of blades so far has been the ability to cram the technology in such a way that the density promises can be achieved."

He said suppliers routinely had to leave "fresh air" inside rack configurations to allow enough space in the cabinet for the technology to work reliably.

"I also see suppliers putting great effort into designing racks and cabinets in a way that will pass the air through the blades and fans more effectively," Butler added.

Even if it were possible to cram a full complement of blades into a single rack, the power demands of such a configuration would be significant, Butler said.

Tim Dougherty, IBM eServer BladeCenter worldwide manager, said IBM offers a thermal design in its blade servers called calibrated vectored cooling, which uses energy-efficient blowers to move air from the front of the system to the back while protecting components inside the server.

John King, enterprise server manager at Hewlett-Packard, said raising the floor in datacentres could reduce heat by 20%.

King said a single rack of blade servers would need to be split into two or more racks to overcome power distribution issues. This reduces the amount of power required for each rack, but adds to demands on space.

HP's Systems Insight Manager server management utility can be used with servers featuring Intel SpeedStep technology to reduce the speed of the processor, which lowers both power consumption and radiated heat. "The next step is to control thermal dynamics automatically so the processor can be slowed down based on temperature," said King.
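The idea King describes - slowing the processor when temperature rises - can be sketched in outline. The following is a minimal sketch, not HP's or Intel's implementation: it assumes a Linux host exposing the standard cpufreq and thermal sysfs interfaces, and the 75°C threshold and 10-second poll are arbitrary choices for illustration.

```python
#!/usr/bin/env python3
"""Illustrative sketch of temperature-driven throttling, in the spirit of
the approach King describes. Not HP's implementation: the paths assume
Linux's cpufreq/thermal sysfs interfaces, the governor names assume the
kernel provides them, and the threshold is an arbitrary assumption."""
import glob
import time

THERMAL_ZONE = "/sys/class/thermal/thermal_zone0/temp"  # millidegrees C
HOT_THRESHOLD_C = 75.0  # assumed trip point
POLL_SECONDS = 10

def read_temp_c() -> float:
    with open(THERMAL_ZONE) as f:
        return int(f.read().strip()) / 1000.0

def set_governor(name: str) -> None:
    # Apply the governor to every CPU; writing to sysfs requires root.
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(name)

while True:
    # Switch to the low-power governor when hot; restore full speed once cooled.
    set_governor("powersave" if read_temp_c() >= HOT_THRESHOLD_C else "performance")
    time.sleep(POLL_SECONDS)
```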

The company is also working with software partners to model the thermal dynamics of datacentres, King added.

HP has seen growing interest among UK IT directors in the thermal dynamics of blades and power consumption. In February, a group of HP engineers presented papers on these topics at three seminars: two in London and one in Manchester.

 

The impact of cooling demands on datacentre design   

According to Gartner, evidence is growing that datacentres are being over-designed for electrical capacity because of concerns about meeting the incremental power and cooling demands of modern server equipment such as blade servers.

The analyst firm said sufficient capacity should be provided in patch panels, conduits and intermediate distribution feeds, with special attention paid to equipment densities relative to initial and longer-term electrical power capacity.

The company said the single greatest issue in the contemporary datacentre is maintaining adequate cooling and air movement, given the intense heat gain of modern blade server and direct-access storage device equipment. 

Modern blade servers can be packed into a single-rack enclosure, resulting in power demands of 18 to 20 kilowatts per rack, according to Gartner.  
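To put those figures in context, a quick back-of-envelope conversion shows the supply current such a rack implies. The 230 V single-phase supply in the sketch below is an assumption for illustration; in practice, racks at this density are usually fed three-phase.

```python
# Back-of-envelope: supply current implied by Gartner's 18-20 kW racks.
# The 230 V single-phase supply is an assumption for illustration;
# racks at this density are usually fed three-phase in practice.
SUPPLY_VOLTAGE = 230  # volts (assumed)

for rack_kw in (18, 20):
    amps = rack_kw * 1000 / SUPPLY_VOLTAGE
    print(f"{rack_kw} kW rack -> {amps:.0f} A at {SUPPLY_VOLTAGE} V")
# 18 kW -> 78 A; 20 kW -> 87 A: several times a typical 16 A feed
```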

The primary issue with packing the layout densely with high-capacity servers is that, although efficient in terms of space, it creates serious heat problems that demand incremental cooling, as well as significant incremental electricity costs.

As a general rule, Gartner advised users to plan for the datacentre to scale from 50 to 100 watts per square foot of raised-floor area; to increase capacity on a modular basis; and to assess the trade-offs between space and power in the total cost of the new facility. It said users should also leave space between racks for air circulation.

Gartner said computer equipment typically requires electrical capacity of 30 to 70 watts per square foot. However, additional power is required for air conditioning, humidification, lighting, and for uninterruptible power supply and transformer losses.

This additional demand could add one to one-and-a-half times more wattage to the electrical load, depending on equipment spacing and air-handling efficiencies.
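Putting the sidebar's figures together gives a sense of the totals involved. The sketch below is a worked example only: the floor area is hypothetical, and the 1.25 overhead multiplier is an assumed mid-point of the one-to-one-and-a-half-times range quoted above.

```python
# Worked example of Gartner's sizing arithmetic. The floor area is
# hypothetical and the overhead multiplier is an assumed mid-point of
# the quoted one to one-and-a-half times range.
compute_w_per_sqft = 70      # top of Gartner's 30-70 W/sq ft range
overhead_multiplier = 1.25   # cooling, humidification, lighting, UPS losses
raised_floor_sqft = 10_000   # hypothetical raised-floor area

it_load_kw = compute_w_per_sqft * raised_floor_sqft / 1000   # 700 kW
overhead_kw = it_load_kw * overhead_multiplier               # 875 kW
total_kw = it_load_kw + overhead_kw                          # 1,575 kW

print(f"IT load:  {it_load_kw:,.0f} kW")
print(f"Overhead: {overhead_kw:,.0f} kW")
print(f"Total:    {total_kw:,.0f} kW")
```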



This was first published in May 2005

 
