The new future of datacentres

When Kevin Timmons, general manager of Microsoft's Data Center Services, arrived in New York on the night before his keynote address to the DatacenterDynamics New York conference, he stepped out for a bite to eat. In a typical New York deli, he decided on a pastrami sandwich. Having checked the options, and as it was late, he was advised that perhaps a single - as opposed to a double or a triple - was the sensible order. He went for the single. When it arrived it was three inches thick. Telling that story and showing a picture of said supper to the conference, he said: "I slept on that." Whether he was using his late-night snack as a metaphor for what he was about to say, only he knows.


What he proposed was that new datacentres, regardless of their scale, would be built from premanufactured, preassembled components, and that the old way of doing things is now defunct.

He said that in future all datacentres would be delivered through a manufacturing supply chain; that, unlike in any other kind of commercial construction, the mechanical and electrical plant is where the money goes; and that current practices are unsustainable. What Timmons and Microsoft then proposed was nothing short of a revolution in datacentre delivery, one that questions every aspect of past and current methods.

The answer lies in IT preassembled components (ITPACs). These are not containers in the (already) traditional sense, but preconfigured, manufactured components preassembled and delivered to any site around the world.

The idea behind ITPACs is not standardised configurations that are simply replicated, but applying different components to suit everything from the physical environment to the robustness and fault tolerance required by the application. "When we talk about our latest generation of datacentre, it is preassembled components, manufactured off-site, brought in and plugged into our datacentre infrastructure.

"So this is the holy grail for us - a paradigm shift to ultra-modularity - which there is lots of chat about in the industry. What does it really mean? It allows us to modularise and premanufacture parts for the datacentre, where it makes sense in the world, which in turn lowers our total cost of ownership.

"I can't stress this enough - this [total cost of ownership] is the single most important metric, and is what drives all my decisions.

"We've got quite a sophisticated model, which takes between 50 and 70 different factors into account. It spits out a number at the end called 'dollar per kilowatt month'. And we modelled all our decisions in that way. What I'm proposing is to view your data centers as more of a traditional manufacturing supply chain instead of a monolithic build."
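The "dollar per kilowatt month" metric Timmons describes amortises total cost of ownership over capacity and time. A minimal sketch of the idea follows; the real Microsoft model weighs 50 to 70 factors, so the single-input version and the example figures below are purely illustrative assumptions.

```python
def dollar_per_kw_month(total_cost_usd: float,
                        critical_load_kw: float,
                        lifetime_months: int) -> float:
    """Amortise total cost of ownership over critical IT capacity and lifetime.

    This collapses the 50-70 factors Timmons mentions (land, construction,
    power, cooling, staffing, etc.) into one hypothetical lump sum.
    """
    return total_cost_usd / (critical_load_kw * lifetime_months)

# Illustrative only: a $120m facility with 10 MW of critical load over 15 years
print(round(dollar_per_kw_month(120_000_000, 10_000, 180), 2))  # → 66.67
```

Expressed this way, two design options of very different scale can be compared on the same per-kilowatt basis, which is what makes the metric useful for modular versus monolithic decisions.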

The advantage for the user, he believes, is a much easier decision-timing model to take to your chief financial officer: you spend a little now on land, connect fibre and put up a substation to get conditioned power on site, then scale the centre as demand dictates. This, Timmons says, avoids having to model demand over many years and commit $400m or $600m in initial capital expenditure in the hope that the business demand will show up.

"There are a number of centres around the world that are testaments to that kind of slip-up, where they have two of four floors open because they did the entire development and the demand didn't show up.

"The key to ITPACs is that you can deploy one to 10,000 servers at a time. I can't overstate this enough. With the variability in our demand projections, it is incredibly important for me to send out one new server - and it is as important for me to order 40-50,000 servers and plug them in at one time.

"You can call them containers, but they are not really containers in the traditional sense - they are really integrated air handling and IT - those are prepackaged off-site and plugged into the spine with power and network resources.

"Our overall goal is to reduce the time to market by 50% and significantly reduce the cost. I have a goal to reduce the cost in next-generation builds by 50%, and we're on track to actually better that. I can't be any happier about that.

"I aim to deliver outstanding power usage effectiveness (PUE) and outstanding efficiency, all while using more renewable materials - heavy emphasis on using steel, aluminium - minimising uses of concrete and minimising disturbance to land. It is shaping up to be somewhat of a pole barn."

The ITPAC that was shown is still only at proof of concept and is nowhere near the final version. But Timmons was not proposing a unit that would act as overspill capacity, or that would suit military or rugged environments as an addition to the traditional concrete and steel structures we call datacentres. Instead, he proposed a system of manufactured datacentres that can be shipped around the world and deployed twice as fast, at 50% of the cost of traditional methods. It just means finding the right places to put them. Timmons has a team of globetrotters seeking the best location for Microsoft's first next-generation datacentre.

He said he was showing a proof of concept that was already developing rapidly, and that the final first-generation ITPAC would look considerably different. He said that in the coming months Microsoft would announce a new datacentre where ITPACs will be deployed. While Timmons didn't say as much, don't be surprised if it turns out to be in South-East Asia.


Questions from the floor

Naturally, the audience had some questions, one of which was: "Is there a limit to the server density that ITPACs can handle, especially as there is only so much that can be cooled by air?"

Timmons explained that the answer is partly not to get hung up on space. Finding the right land (acreage can be quite cheap) means not having to pack the racks tightly and push kilowatts of power into them. Timmons conceded that on the IT side there are still plenty of people who "see a bright shiny object such as blade servers that need 600W per sq ft and promise return on investment in three months, but with no idea what it is going to do to the actual data center", he said. "In Microsoft we are making headway - we can demonstrate what is the proper choice and what is not - from an IT perspective."

Another question was raised about the factors that drove the physical considerations for the ITPAC.

There was a nod to shipping methods, with guidance taken from shipping container dimensions, and to largely internal research within Microsoft into the average scale-unit size requested to meet business needs. The optimum size typically fell at around 400kW-600kW.

Design PUE ratings come in at around 1.16, with the worst peaking at around 1.35, depending on location.
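PUE (power usage effectiveness) is simply total facility energy divided by the energy delivered to the IT equipment, so a 1.16 design rating means only 16% overhead on top of the IT load. A minimal sketch, with illustrative numbers assumed for the example:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (zero cooling/distribution overhead);
    Microsoft's quoted design figure of 1.16 implies 16% overhead.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative: 1,160 kW drawn by the whole facility for 1,000 kW of IT load
print(round(pue(1160, 1000), 2))  # → 1.16
```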

Late Supper

The message from Timmons, not lost on the New York audience, is that whether it is datacentres or sandwiches, the super-size option is no longer tenable. It will cost more, take longer to make, be more difficult to digest, and before you get halfway through it you'll realise you don't need the rest of it. No one asked about the sandwich.


The ITPAC-based next generation datacentre

Microsoft's first generation of datacentre consists of a single server, or rack of servers.

Second-generation facilities are pre-rack deployments - integrated off-site and brought in as one or more single racks - consistent with the firm's Quincy, Washington and San Antonio, Texas centres.

Third-generation facilities are Chicago and, to a degree, Dublin, which have a mix of container and pre-rack deployments.

The next generation will consist of ITPACs. The ITPAC is essentially an air-handling unit, built mostly of aluminium with some steel, with IT racks deployed inside.

The flexibility of design goes right down to whether or not you need a centralised UPS, or a rack-mounted or in-server UPS. "We've already moved on to other designs that have two rows - working with manufacturers to have them make similar structures to come into our environment."

ITPACs cool using an adiabatic system - which can be supplied with or without the louvres and filters - and, for deployments in controlled internal climates, with or without the skins.

One of the biggest concerns is not temperature, but humidity. In places such as South-East Asia, it is hot and wet, and it is the humidity that is the focus of much of the research into ITPACs. Microsoft, like many others, is pushing the inlet temperatures to 90°F-95°F (32°C-35°C).

One area of innovation that Timmons said is exciting for him and his team is the building of a sensor-based, highly resilient proprietary mesh network that allows configuration, control and monitoring, allowing new applications to be explored around asset management.

"When the resilient mesh network is deployed in the centers, and we've got hundreds of thousands of assets we can control, we want to be able to check the movements of every single IT asset in those facilities in very short order using this technology." More than a bite-sized metaphor?
