Yahoo preassembles New York data centre

Scott Noteboom compares the developments in the discipline of datacentre design in recent years to the manufacturing revolution that Henry Ford sparked in the motor industry of the early 20th century with the introduction of the moving assembly line.

Noteboom, Yahoo's global head of datacentre operations, says building cars was slow and expensive in a pre-assembly line world: "The automobile used to be entirely hand-built in a coachwork factory, that's the way the datacentre business has been for a long time. In the past, I saw datacentres as high cost [and] low efficiency, like a coachwork factory." In modern datacentres, however, there is pressure for quick deployment and high efficiency because of the commoditisation of the "production of bits". Noteboom's team at Yahoo – one of the world's largest providers of search engine, email and enterprise services – has been perfecting the art of quick construction of efficient mission-critical facilities to accommodate today's bit movement and storage requirements.

With the company's new datacentre in Lockport, New York, Yahoo has been able to get the time-to-build down to less than six months per 9MW phase. What's more, the project is on track to come in within the $150m budget originally allocated, and the facility is turning out to be one of the most energy-efficient datacentres in the US, according to the company.

The $150m figure includes everything: building, network equipment, servers and datacentre infrastructure. The datacentre portion - including electrical and mechanical systems - accounts for about one-third of the total cost.

Design and other influences

Yahoo held a grand opening for the first phase of its Lockport facility in September, at the same time as the datacentre went into production. Construction of the second phase began in June and is set for completion in December. The company is already looking for real estate in the region to expand beyond this site.

The first phase measures about 155,000 sq ft in total, which includes an administrative building and multiple datacentre halls. The second phase only includes datacentre space, which "hooks" on to the other side of the administrative building. Noteboom says the concept was inspired by a design commonly seen in prison buildings. The two phases have the same serving capacity, delivering about 9MW of critical load each, and are built to the same design.

The facility's annualised average Power Usage Effectiveness (PUE) is 1.08. Because of air-side economisation, the efficiency is worse during summer months and better during the winter. Much of the facility's energy efficiency comes from the design of its thermal-management system. The design is patent-pending on an international basis, Noteboom says. Besides prisons, other sources of inspiration for the design have been modern chicken coops and old aluminium and steel plants (which are common in the area).
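PUE is simply total facility energy divided by the energy that reaches the IT equipment, so an annualised 1.08 means only 8% of overhead on top of the IT load. The sketch below shows how such an annualised figure is derived; the monthly readings are invented for illustration and are not Yahoo's operating data.

```python
# Illustrative annualised PUE calculation - the monthly figures are
# invented for this example and are not Yahoo's operating data.
# PUE = total facility energy / energy delivered to IT equipment.

monthly_kwh = [
    # (total facility kWh, IT equipment kWh), one entry per month
    (6_700_000, 6_400_000),  # winter month: free cooling, little overhead
    (7_100_000, 6_400_000),  # summer month: evaporative cooling adds load
    # ... remaining months omitted for brevity
]

facility_total = sum(total for total, _ in monthly_kwh)
it_total = sum(it for _, it in monthly_kwh)
print(f"Annualised PUE: {facility_total / it_total:.2f}")  # -> 1.08 here
```

Annualising matters precisely because of the seasonal swing Noteboom describes: a summer-only or winter-only measurement would misstate the facility's true overhead.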

Both chicken coops and metal plants have to deal with high heat densities and were designed to draw outside air. The region's air-movement trends were taken into account when designing and positioning the datacentre. The facility was built as one large air handler, designed to capture winds blowing from the Great Lakes - five freshwater seas on the Canada-US border - the largest group of freshwater lakes on the planet. When outside air alone is not enough to cool the datacentre, an evaporative-cooling system - developed by Yahoo's engineers - picks up the slack. "We only require this evaporative cooling on average 200 hours a year," Noteboom says. "The rest of the year we don't have any kind of [mechanical] cooling."

Evaporative cooling systems are becoming increasingly popular in modern datacentre designs. The high-profile Facebook datacentre being built in Oregon, for example, is employing evaporative cooling. In such a system, air that needs to be cooled is pushed through a moistened surface and becomes cooler as the moisture on the cooling medium evaporates. Noteboom's team also conducted a number of studies to re-evaluate the allowable humidity and temperature windows for servers and widened these so they exceed guidelines specified by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), but still fall within the requirements specified by original equipment manufacturers to be covered by warranty.
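A direct evaporative cooler is commonly modelled by its saturation effectiveness: supply air approaches the outside air's wet-bulb temperature. The snippet below is a textbook approximation, not a description of Yahoo's patent-pending system, and the 0.85 effectiveness is an assumed, typical media value.

```python
def evaporative_supply_temp(dry_bulb_c: float, wet_bulb_c: float,
                            effectiveness: float = 0.85) -> float:
    """Estimate supply-air temperature from a direct evaporative cooler.

    Textbook approximation: T_supply = T_dry - eff * (T_dry - T_wet).
    The 0.85 default is a typical media effectiveness, assumed here.
    """
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A warm, fairly dry afternoon: 32C dry bulb, 21C wet bulb
print(f"Supply air: {evaporative_supply_temp(32.0, 21.0):.1f} C")  # ~22.7 C
```

The drier the outside air, the larger the gap between dry-bulb and wet-bulb temperature, and the more cooling the moistened medium can deliver.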

Yahoo has been studying temperature and humidification requirements since 2006 and has data on servers over their entire lifecycle. While allowable air temperature in the Yahoo datacentre exceeds the ASHRAE recommended 78F, it does so only for an average of 34 hours a year. Hot aisles within the facility are contained, with hot air rising into mixing plenums at the top. Some of the hot air may be mixed with outside air while in the plenum, if necessary, and heat that is not needed is ejected from the facility. There are no air handlers in the building, which uses the collective power of server fans to draw outside air in. There is, however, a small number of fans for air re-direction. Noteboom says one of the studies Yahoo has undertaken found that a typical datacentre has about 300% more fans than required.
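The fan finding comes down to airflow arithmetic. A widely used rule of thumb for sensible cooling puts required airflow at roughly 3.16 × watts ÷ temperature rise (°F); comparing that requirement with the aggregate capacity of installed fans exposes any oversizing. The figures below are hypothetical, not from Yahoo's study.

```python
# Rule-of-thumb airflow check - all numbers are hypothetical, not from
# Yahoo's study. Sensible cooling: CFM ~= 3.16 * watts / delta_T (deg F).

def required_cfm(it_watts: float, delta_t_f: float) -> float:
    """Airflow needed to carry away a sensible heat load at a given air temperature rise."""
    return 3.16 * it_watts / delta_t_f

rack_watts = 8_000       # assumed IT load in one rack
delta_t_f = 25.0         # assumed temperature rise across the servers, deg F
installed_cfm = 4_000    # assumed aggregate capacity of the rack's fans

need = required_cfm(rack_watts, delta_t_f)
print(f"Required: {need:.0f} CFM, installed: {installed_cfm} CFM "
      f"({installed_cfm / need:.0%} of requirement)")
```

With these assumed figures the rack has roughly four times the airflow capacity it needs - the order of oversizing Noteboom's 300% figure implies.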

Yahoo's emphasis on redundancy

The first phase and the forthcoming second phase are split into numerous data halls, or wings, as Noteboom calls them. Each wing has a separate electrical set-up, served by its own transformer and backed by either one or two generators. Not every wing has redundant infrastructure: some have N+1 redundancy and some have only N, depending on the function each wing serves. There are no separate electrical rooms in the design; the entire support infrastructure, including UPS systems and electrical switchgear, is housed in the same space as the IT equipment, further reducing the facility's cooling requirements. The cooling system itself is also backed up by UPS systems.
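In redundancy shorthand, N is the number of units needed to carry the load and N+1 adds one spare. A toy model of how per-wing generator counts might fall out of such a policy - every rating and load below is invented:

```python
import math

# Toy model of per-wing generator provisioning - every figure here is
# invented. "N" means just enough generators for the load; "N+1" adds a spare.

GENERATOR_KW = 2_000  # assumed rating of one back-up generator

wings = [
    # (wing, critical load in kW, redundancy policy)
    ("customer-serving wing", 1_800, "N+1"),
    ("batch-processing wing", 1_500, "N"),
]

for name, load_kw, policy in wings:
    n = math.ceil(load_kw / GENERATOR_KW)
    generators = n + 1 if policy == "N+1" else n
    print(f"{name}: {generators} generator(s) for {load_kw} kW at {policy}")
```

Matching the redundancy level to the workload, rather than building every wing to the highest tier, is what keeps the capital cost down.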

According to Rudy Bergthold, chief technology officer and senior vice-president at Cupertino Electric, Yahoo's electrical contractor for the Lockport project, the facility uses line-interactive UPS systems and a flywheel-based energy storage system. The property has an on-site utility substation, and the datacentre has an elaborate power-metering scheme, with measurements taken at multiple points in the power infrastructure: incoming utility power, transformer outputs, UPS outputs, critical-bus outputs and individual circuits at power-strip level. Generator power is also metered whenever the generators run. There are numerous environmental monitoring points throughout each wing.
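Metering every stage of the chain makes stage-by-stage losses visible rather than lumped into a single overhead figure. A minimal sketch of the idea, with invented readings mirroring the metering points described above:

```python
# Stage-by-stage loss accounting along a metered power chain. Readings (kW)
# are invented; the metering points mirror those described in the article.

chain = [
    ("utility feed",       9_400.0),
    ("transformer output", 9_250.0),
    ("UPS output",         9_120.0),
    ("critical bus",       9_070.0),
]

for (stage_a, kw_a), (stage_b, kw_b) in zip(chain, chain[1:]):
    loss = kw_a - kw_b
    print(f"{stage_a} -> {stage_b}: {loss:.0f} kW lost ({loss / kw_a:.2%})")
```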

Mixed power densities

The infrastructure is designed to provide power densities of about 150 watts per sq ft, but the challenge of designing for high densities is a subject Noteboom thinks has been inflated in datacentre industry discourse, to the point of overshadowing the often equally demanding low-density designs.

"It's challenging [because] when you're doing your capacity planning and such, you have to take either of the two approaches," he explains. "Data storage, [for example], ends up being a low-power density challenge." The challenge is coming up with an efficient "average" density that will accommodate both high-density compute resources and comparatively low-density storage arrays. "We are constantly working with data storage providers to create higher density storage arrays because watts per sq ft in storage infrastructure is nowhere close to computing infrastructure," Noteboom says. One example of low-density storage applications is Yahoo's archive - part of the company's email offering - which provides users with limited storage. A portion of this archive is not routinely accessed, which makes it a good candidate for low-density storage infrastructure. While there are no mixed densities in the first phase, there will be areas with different power densities within phase two. Yahoo is not committed to any particular IT vendor, using equipment from a variety of suppliers. "We're very competitive on the server front, so we use a multitude of server and storage manufacturers," Noteboom says. Environments in the company's datacentres are virtualised. "Virtualisation and utility computing is key to us."

This article first appeared in Datacenter Dynamics Focus. A presentation on the Yahoo datacentre is running at the Datacenter Dynamics conference, 9-11 November, Lancaster Hotel, London.



This was first published in October 2010