
CW@50: The changing face of the datacentre over the past 50 years

As Computer Weekly prepares to celebrate its 50th anniversary, we take a look back at how the design and role of the datacentre have changed over the past five decades

If the digital economy were a living, breathing thing, the datacentre would undoubtedly fulfil the role of its nervous system. Every time a user interacts with a device or app, news of this action invariably passes to a datacentre to bring about a timely and appropriate response.

For example, whenever someone logs into online banking, scans their Oyster card or simply reacts to something a friend has written on Facebook, a datacentre will be actively involved.

Each one of these interactions is transmitted, along a neuron-like high-speed networking cable, to a server in a datacentre somewhere, where it is swiftly processed so the initiator can quickly check their bank balance, use the public transport network, cultivate their online social life or – in short – get on with the rest of their day.

News of the server’s response, meanwhile, will pass to the storage part – or the brain – of the datacentre equation, ensuring details of this brief, yet essential, process have been logged, bringing an end to a chain of events that has taken milliseconds to perform.

Billions, if not trillions, of these types of transactions occur across the globe each day, as our reliance on internet-connected devices and cloud services rises.

In line with this, the world’s datacentre footprint continues to grow, with technology firms embarking on new builds or expansions of their existing facilities to ensure users’ performance demands and expectations are met.

Running an efficient and resilient facility is of utmost importance to every 21st century datacentre operator.

But, were it not for the experimentation and research efforts of a slew of techies over the past five decades, many of the design concepts and technologies now considered part and parcel of running a modern datacentre might not exist today.

Step back in time

Among that group is Uptime Institute executive director Pitt Turner, who – over the course of his 40-plus years in the IT industry – has played an instrumental role in standardising the way datacentres are designed and built, through his contribution to the creation of the Tier Standards: Topology classification system.

For the uninitiated, the Tier Standards: Topology was published by the Uptime Institute at the end of the 1990s and – through the use of a Tier I to Tier IV marking system – is used to denote the availability of a datacentre’s physical infrastructure.

Operators can simply build their facilities to meet the required Tier level or – if the business using the site requires it – have the datacentre officially certified to confirm it makes the Uptime Institute grade.

Since its initial introduction, the documentation has been subject to regular reviews, downloaded in more than 200 countries, and drawn upon to inform the design of datacentres around the world.

One of the reasons it has proven so successful, according to Turner, is that it ensures that when a business specifies the type of datacentre it needs, the design team responsible for building it understands the requirements.

“What it did was create a common language so that people could convey, from owner to designer, what kind of operational functionality was expected,” Turner tells Computer Weekly.

“Before, people would say, ‘I want a datacentre,’ and someone would turn around and say, ‘I will build a datacentre for you,’ but there was not much telling if they were talking about the same kind of facility.”

The standards were initially drawn up at the request of an Uptime Institute client who needed help clarifying how – after a run of mergers and acquisitions – its newly acquired datacentre portfolio met the requirements of the business, recalls Turner.

“They came to us and asked if we could help them develop a vernacular that would allow them to differentiate this computer room from that computer room, and the result became the Tier Standards,” he says.

Nowadays, the document is commonly used in operators’ marketing materials to succinctly describe how resilient their facilities are.

“Executives can now use the standards to denote the level of functionality they want, give it to the implementation teams and come up with a design that meets any of the four levels of the Tier Standards,” says Turner.

“Whether or not they go all the way through to the certification is a business requirement choice. So is working out what tier to aim for because the best tier is not the one with the biggest number.

“The best tier is the one that responds to the business requirements of the enterprises you are going to be running in this facility, and there’s no sense in paying for more than what you need,” he adds.
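To put those levels in context, the sketch below (not taken from the article) maps each tier to the availability figure commonly associated with it in Uptime Institute and TIA literature, and converts that into hours of permitted downtime a year; the percentages are widely quoted reference values rather than definitions from the Tier Standard itself.

```python
# A minimal sketch of the availability figures commonly associated with each
# tier, and what they imply in hours of downtime a year. The percentages are
# illustrative reference values, not text from the Tier Standard or the article.

HOURS_PER_YEAR = 24 * 365

# Commonly cited availability targets per tier (assumption for illustration)
tier_availability = {
    "Tier I":   0.99671,
    "Tier II":  0.99741,
    "Tier III": 0.99982,
    "Tier IV":  0.99995,
}

for tier, availability in tier_availability.items():
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{tier}: {availability:.3%} availability "
          f"= roughly {downtime_hours:.1f} hours of downtime a year")
```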

Improving energy efficiency

The latter sentiment is one that many modern-day server farm operators abide by, as finding ways to improve efficiency is essential when running a facility as costly and power-hungry as a datacentre.

To give them a helping hand, The Green Grid introduced the Power Usage Effectiveness (PUE) metric in 2007 for operators to use as a means of benchmarking the energy efficiency of their sites internally.

Since then, a datacentre’s PUE score has – despite the metric being intended for internal benchmarking rather than site-to-site comparison – become another mainstay of most providers’ marketing materials.
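For reference, PUE is defined by The Green Grid as total facility energy divided by the energy delivered to the IT equipment, so a score of 1.0 is the theoretical ideal. The short sketch below illustrates the calculation with hypothetical figures; the numbers are not from the article.

```python
# PUE as defined by The Green Grid: total facility energy divided by the
# energy delivered to the IT equipment. The figures below are illustrative
# only; they are not taken from the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

# Example: a site drawing 1,500 kW in total to serve 1,000 kW of IT load
print(pue(1500, 1000))  # 1.5 -> 0.5 kW of overhead per kW of IT load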

Over the past decade, many facilities have adopted designs that incorporate hot and cold aisle containment within their data halls, whereby server racks are lined up in alternating rows.

The cold air used to keep the equipment cool is fed into one aisle, while the hot exhaust air given off by the servers is contained in the next, ready for disposal.

Industry adoption of this approach has been gathering steam since 2009-2010, following the publication of the thermal guidelines from Ashrae Technical Committee 9.9 (TC 9.9).

“For every kilowatt of power a server used to use 10 years ago, you needed at least another kilowatt of thermal power to cool it down,” explains Paul Finch, a 25-year veteran of the datacentre industry, who currently works as the senior technical director at colocation provider Digital Realty.

“Because of the work Ashrae has done, for the same kilowatt of IT power, we can cool it with 100W rather than 1,000W. That represents a huge increase in energy efficiency, not to mention a huge reduction in carbon emissions.”

The document made a compelling case for letting datacentre IT equipment run at higher temperatures because it would reduce the need for mechanical cooling and, in turn, the amount of power required.

“In 2004, a 20°C to 22°C server inlet temperature was the norm in datacentres, but – on the back of Ashrae’s work – the recommended temperature range is now somewhere between 18°C and 27°C,” says Finch.

“By making it that much broader, you don’t need to have the engineering infrastructure in place to maintain these lower temperatures any more, and that has improved energy efficiency dramatically.”
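A rough back-of-the-envelope calculation shows what Finch’s figures mean for the cooling component of a facility’s overheads. The sketch below is illustrative only: it considers just IT and cooling load, ignoring other overheads such as power distribution losses.

```python
# A rough illustration of Finch's figures: cooling overhead per kilowatt of
# IT load falling from ~1 kW a decade ago to ~0.1 kW today, and the effect
# on the cooling contribution to PUE (other overheads ignored for simplicity).

def cooling_partial_pue(it_kw: float, cooling_kw: float) -> float:
    """PUE considering only IT load plus cooling load (a simplifying assumption)."""
    return (it_kw + cooling_kw) / it_kw

print(cooling_partial_pue(1.0, 1.0))  # ~2.0: the old 1 kW-of-cooling-per-kW regime
print(cooling_partial_pue(1.0, 0.1))  # ~1.1: roughly 100W of cooling per kW of IT
```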

Standardising designs

Up to this point, operators had been wary of letting their facilities run too hot, for fear that doing so could increase the risk of datacentre hardware failure. But adopting hot and cold aisle containment allowed them to protect against this and manage their sites more efficiently.

In years gone by, the only way to guard against overheating was to pump cold air under the floors and out from between the cabinets, so the whole data hall would feel the benefit, says Finch. The introduction of hot and cold aisle containment has reduced, and in some cases eliminated, the need for mechanical cooling.

The introduction of TC 9.9 has had a profound effect on the industry, adds Finch, and will continue to do so for many years to come, as hardware manufacturers look to adapt the design of their technologies in line with its guidance.

“There will be a point at which they can’t go any higher on the temperature front, and that’ll be limited, perhaps, by the processor technology within the server or some other components inside,” he says.

“But in the foreseeable future I can see datacentres continuing to run hotter, primarily because it drives sustainability and energy efficiency, but also has a dramatic impact on operating costs.”

Back in the day

In the early days of the datacentre, before facilities were even referred to as such, the focus was on creating sites with high levels of redundancy and resiliency that relied on large amounts of manual labour to keep ticking over, meaning the need for efficiency often took a back seat.

Even so, trying to keep an early mainframe up and running for a lengthy period of time was often a tall order, recalls Uptime Institute’s Turner, based on his experience of seeing one at work on his college campus in the 1960s.

“The most common sign that was posted in the computer centre when I was there read: ‘The computer is down today’,” he laughs.

“In the old days, if a mainframe operator could keep the system running for an entire shift, they were considered a local hero,” he adds.

But the world operated at a different pace back then, as another of Turner’s recollections of watching a datacentre at work in the 1960s neatly highlights.

“There was a building where I grew up that looked like a three-storey concrete brick – it was a data processing centre for one of the big regional banks. It had no network to speak of, so in the evening, all these trucks would arrive from branch banks in the region, carrying paper,” he says.

“Through the night that paper would be processed, the data would be crunched, new printouts would be created and then they would send the documents back out to the branch banks so they could open in the morning. I never went in, but I understood how it worked.”

Andrew Lawrence, vice-president of 451 Research, says Turner’s mainframe experience was shared by most mid-size to large organisations that opted to own and operate their own IT hardware during the 1970s and 1980s, in that the technology was often difficult and labour-intensive to run.

“Often, the ‘machine room’, the antecedent to today’s datacentre, would house just a few big machines,” he says.

“In the 1980s, many businesses wanted to opt out of operating computers, but this was before colocation. They either paid others to host their applications, or they outsourced all their big computing requirements.”

Meanwhile, as the stability of mainframe technology improved, users were encouraged to achieve utilisation rates for the hardware in the region of 90%, and failure to do so could result in disciplinary action from on high, Turner claims.

However, with the arrival of the PC and server era of computing, that concept seemed to go out of the window, creating a whole new set of problems for the IT world.

“Organisations wanted to downsize their IT, but the revolution in minicomputers and PCs really meant they were fighting a losing battle. Whatever they did, expenditure on in-house IT rose and rose,” says Lawrence.

A lot of the best practice IT departments subscribed to during the mainframe era fell by the wayside during this time, Turner adds.

“As more and more business applications shifted over to servers from mainframes, they escaped the bureaucracy of IT and the long lead times associated with deploying new applications,” says Turner.

“A large portion of the mainframe was allocated to maintaining data purity and preventing corruption in the information being stored. All that was left behind in the server world.”

Turner claims it was around the time of the first Uptime Institute Symposium in 2006 that IT users, having been stung by the twin evils of server hardware failures and rising IT costs, started to pay close attention once more to getting more out of their investments.

As such, it was around this time that the concept of virtualising servers to increase hardware utilisation, and thereby allow organisations to take steps to downsize their datacentre footprint, began to catch on.

“We would say that’s exactly the same as the mainframe used to do, except we called it logical partitions instead of virtual instances, but it’s really the same thing,” says Turner.

“Through proper application of virtualisation we have seen case studies where the power, space and cooling requirements for multiple datacentres were reduced by as much as 80%.”
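As a purely hypothetical illustration of the consolidation arithmetic behind such case studies, the sketch below shows how moving lightly used physical servers onto virtualised hosts run at higher utilisation shrinks the physical footprint; the server counts and utilisation figures are assumptions, not data from the article.

```python
# A back-of-the-envelope consolidation sketch (hypothetical figures, not from
# the article): lightly used physical servers are virtualised onto hosts run
# at much higher utilisation, so the physical footprint shrinks accordingly.

physical_servers = 500          # assumed legacy estate
avg_utilisation_before = 0.10   # assumed: 10% average utilisation per server
target_utilisation = 0.60       # assumed: 60% utilisation on virtualised hosts

# Total useful work stays the same; only the number of physical hosts changes
hosts_needed = physical_servers * avg_utilisation_before / target_utilisation
reduction = 1 - hosts_needed / physical_servers

print(f"Hosts needed after consolidation: {hosts_needed:.0f}")
print(f"Reduction in physical footprint: {reduction:.0%}")  # roughly 80%+
```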

Raising datacentre standards

While it’s fair to say the shift away from mainframes was not without its problems, the emergence of standard racks and servers during the 1990s had a transformative effect on datacentre design.

“What it meant was that all datacentres around the world could be designed and operated in the same way, and that suppliers could save huge costs by standardising on the ‘U’-sized rack. This also helped in connectivity and air flow,” says Turner.

“The rapid development of the commercial internet in the 1990s was another big step, as this enabled any enterprise to connect to any customer, employee or partner without laying down expensive connections. This, in turn, enabled colocation and hosting companies to take over enterprise IT operations much more easily.”

Dotcom disaster zone

The start of the 21st century is noted as being a particularly dark time in the history of the internet, as a number of online firms went out of business in the wake of the dotcom bubble bursting in 2000.

As the number of UK households with access to the internet grew, a trend alluded to by 451 Research’s Lawrence, a slew of online-only retailers and service providers emerged, all looking to cash in.

A great number of them were either acquired for huge sums of money or embarked on initial public offerings that attracted vast amounts of investment, despite never turning a profit or generating the revenue needed to stay afloat long term.

Inevitably, many of these firms, because they were unsustainable, went out of business or got sold on for much less than their initial valuations in the early 2000s. The datacentre market suffered a kicking as a result.

To keep up with the demand for online services, huge amounts were invested in building out datacentre capacity to deliver them. But, as internet firms began to fold, many sites remained dormant, recalls Andrew Fray, UK managing director of colocation provider Interxion.

“Many people built datacentres and then nobody came. These really were the dark days for the datacentre industry across Europe. They were expecting significant deployment of carrier and inter-related infrastructure but, at the time, it was like sagebrush blowing across the datacentre because they were powered, but empty.”

Steve Wallage, managing director of analyst house Broadgroup, paints a similarly bleak picture of what life was like for datacentre operators in the wake of the dotcom boom and bust of the early noughties.  

“In 2000, there were 27 pan-European datacentre players, and 17 of them went out of business, so the sector was really badly affected. The market didn’t really get back on its feet again until around 2007 or 2008,” he says.

Which brings us to the global financial crisis: a time when enterprises looked to cut costs by outsourcing their IT requirements, which played into the hands of the colocation players.

The glut of datacentre space that existed as a by-product of the dotcom bubble bursting actually came into its own around this time, with internet service providers and financial services firms using colocation to meet their capacity needs.

“A lot of the IT industry, and server shipments are a good example of this, go up and down with the economy, whereas datacentre outsourcing actually increases with recession,” explains Wallage.

“If you look at the datacentre and colocation market now, it’s still growing at around 10-12% a year, whereas IT services as a whole is growing at around 2-4%.”

As time has gone on, companies operating in a wider range of vertical markets have moved to make colocation part of their IT strategy, positioning the datacentre sector as an increasingly appealing investment prospect in its own right.

With low customer churn rates, predictable revenues and soaring demand for capacity, Wallage says the datacentre market has emerged as a popular investment area in recent years.

“Five to six years ago, datacentres were thought of as the garbage cans of the IT industry. But several weeks ago I held a meeting with an investor who loves dealing in datacentres because – in his words – it is such a sexy area,” says Wallage.

Some of this is down to the fact the market has proved to be recession-proof in the past, says Wallage, while the emergence of trends like cloud computing and the internet of things (IoT) should ensure interest in datacentre space remains high for a long time to come.

Also, with colocation providers changing hands for huge sums of money at the moment, a lot of early investors in the market are seeing a good return on their initial investments.

“The success of some of the companies in the market has been instrumental in that because investors have seen very good returns on their investments. Telecity, for example, at its low point was worth around £4m, and then got sold to Equinix for £2bn,” Wallage points out.

Where we are

The mid-noughties also marked the start of Facebook’s ascent to social networking dominance, as well as the emergence of what is now the world’s biggest provider of cloud infrastructure services, Amazon Web Services (AWS).

Google and Microsoft also began setting out their public cloud strategies, prompting interest from private datacentre operators in how these firms – whose services are used by millions of people – keep up with demand.

In the case of AWS, Google, Facebook and the like, this is often achieved by building their own hardware, taking a more software-led approach to infrastructure management, and adopting a DevOps-friendly approach to new code production and releases.

All of this serves to ensure the trend towards greater automation of datacentre processes, as previously highlighted by the Uptime Institute’s Turner, continues apace.

While some enterprises have made moves to follow in the footsteps of the web-scale giants, when it comes to their datacentre technology strategies, others have started taking steps to wind down their investments in this area and move more of their IT infrastructure to the cloud.

It’s a trend that has been gathering pace for several years, according to Gavin Jackson, UK and Ireland managing director of AWS, and looks set to shape enterprise datacentre investments for many years to come.

“In the fullness of time, we think customers will run fewer datacentres, and there will be very few workloads and application types that will still be in their own datacentres. This will have a profound effect on the datacentre space that customers own,” says Jackson.

“We see that as a continuing and accelerating trend, as people reduce their datacentre footprint and start using cloud services from Amazon Web Services, and others in the future.”

451 Research’s Lawrence is not quite so sure, and thinks – over the course of at least the next decade – the datacentre will continue to have a role to play in the enterprise.

“All the evidence suggests that in the next five to seven years, most enterprises will mix their IT up, with some cloud, some colocation and some in-house IT. After that, it is less clear,” he says.

“It is already becoming clear there is going to be a lot of computing done at the edge – on-site and near the point of use, whether for technical, latency, legal and economic reasons.

“But it remains to be seen how the ownership at the edge of the network will be organised, managed and owned,” he concludes. 
