Heat problems are severely restricting the capacity of modern datacentres, and users need to address the problem urgently, an analyst has warned.
IDC senior analyst Daniel Fleischer said, "Datacentres have been built to cope with older technology." Modern blade servers can be packed more densely into datacentre racks. They provide a cost-effective way of increasing computational capacity in a datacentre without requiring the extra expense of additional floor space.
However, in Fleischer's experience, a rack can accommodate only 40% of the blades it could potentially hold because of power and cooling constraints. This means that only about 19 blades can be fitted in a rack, instead of the full quota of 48.
A fully configured rack containing 48 blades could produce about 20kW of heat, which must be removed from the datacentre.
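The figures above imply a simple back-of-the-envelope calculation: if 48 blades give off about 20kW, each blade contributes roughly 0.42kW, and the cooling budget per rack caps how many blades can be installed. The sketch below is a hypothetical illustration of that arithmetic (the 8kW budget is inferred from the 40% utilisation figure, not a number quoted in the article):

```python
# Rough capacity arithmetic from the figures quoted above (an
# illustrative helper, not a vendor tool): how many blades a rack can
# hold within a cooling budget, assuming heat scales linearly per blade.

FULL_RACK_BLADES = 48
FULL_RACK_HEAT_KW = 20.0  # quoted heat output of a fully populated rack

HEAT_PER_BLADE_KW = FULL_RACK_HEAT_KW / FULL_RACK_BLADES  # ~0.42 kW

def blades_supported(cooling_budget_kw: float) -> int:
    """Blades installable before the rack exceeds its cooling budget."""
    return min(FULL_RACK_BLADES, int(cooling_budget_kw / HEAT_PER_BLADE_KW))

# A budget of roughly 8 kW per rack reproduces the ~19-blade (40%)
# limit Fleischer describes; a full rack needs the whole 20 kW removed.
print(blades_supported(8.0))   # 19
print(blades_supported(20.0))  # 48
```

On these assumptions, the 60kW-per-rack figure APC quotes for Infrastruxure would comfortably cover even a fully populated rack.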
Case study: Inpharmatica cools down with the help of Infrastruxure
Manufacturer American Power Conversion (APC) has extended its Infrastruxure cooling units for racks to address the problem of overheating in blade server datacentres.
The cooling system works by directing hot air expelled from the rear of a server into an air-conditioning unit, which returns the cooled air to the front of the server.
Cooling units fitted to either side of the rack provide further cooling. APC says Infrastruxure can cool up to 60kW per rack.
APC uses software to produce simulations of airflow within the datacentre. This allows different designs to be evaluated prior to installing the equipment.
One organisation that has benefited from this approach to cooling is bioinformatics company Inpharmatica, which runs blade racks in a co-location centre operated by Globix. The firm was looking to bring six fully populated blade server racks into the Globix datacentre.
Steve Tringham, IT director at Inpharmatica, said, "We are using about 400 processors in a Linux cluster. The big issue is that cooling becomes a significant factor."
The challenge, said Globix, was that the blade racks needed by Inpharmatica required 15kW of power and cooling. Globix used APC's Infrastruxure architecture to support the cooling requirements of Inpharmatica's racks.
Dave Brooks, European IDC and facilities manager at Globix, said, "APC's air-conditioning units are in-row to bring the cooling much closer to the load, and by sealing the hot aisle through adding a ceiling and doors, the high-density racks are completely temperature neutral in the datacentre."