
Evolution in action: The changing face of the datacentre

The role datacentres play in the wider enterprise IT landscape is changing, fuelling the demand for a mix of different types of facilities that are capable of meeting the growing variety of enterprise data-processing requirements

The datacentre is going through a period of rapid flux, the likes of which have not been seen since the expansion boom of the dot-com era in the late 1990s.

Not only are trends such as cloud services and mobile devices causing an exponential growth in the volume of data being stored and processed, but new applications and services using analytics and machine learning are demanding greater performance, leading to a reshaping of servers, storage and networking and the way they work together.

In fact, one Gartner analyst claimed last year that the datacentre is dead, stating that the role of traditional datacentres is being relegated to that of a “legacy holding area”, dedicated to specific services that cannot be supported elsewhere, or to hosting systems that are more economical to keep on-premise.

Not everyone agrees with this sweeping statement, not least because the biggest factor affecting datacentres is the rise of the cloud. And the cloud providers themselves are the biggest operators of datacentres at the moment.

However, it is certainly true that enterprises are re-evaluating their datacentre strategies. Gartner claimed that 80% of enterprises will have shut down their traditional datacentres by 2025, versus 10% today.

For enterprises, the cloud is a boon, because it means they do not necessarily have to plan and build datacentres the way they used to, even if they still intend to keep a substantial amount of their IT on-premise.

The changing role of the datacentre in enterprise IT

In the past, datacentres would be built around the organisation’s current needs, but they would also be expected to last for perhaps 25 years.

As such, the plan for them had to include the ability to scale as the business grew, according to Ovum distinguished analyst Roy Illsley. Now, enterprises can focus on just the next six months, and if their requirements suddenly change they can procure extra IT resources from the cloud while they work out how to adjust their strategy for the longer term.

“That’s the driving factor for most CIOs now – working out what their mix should be of on-premise versus co-location versus public cloud,” says Illsley.

But if you are one of the big cloud companies or a hosting provider, you have to meet this growing demand for compute resources, and your datacentres are likely to keep expanding.

They are also becoming more complex to manage, because they have accumulated a variety of infrastructure over the years, from typical rack-mount pizza-box servers and storage arrays to newer options such as hyperconverged infrastructure (HCI).

The larger datacentres are beginning to see compartmentalisation, or modularisation, with infrastructure built in zones that might be used to operate different kinds of services or serve different customers. But this can be problematic, because many datacentres are still cabled for what they were built to do 15-20 years ago, according to Illsley.

Expansions at the edge

One solution to this has been to expand by building new, smaller-scale datacentres rather than enlarging existing ones. This also fits a growing trend for workloads to become more distributed, driven by data sovereignty requirements and by new data-intensive services that call for low latency. For those services, siting a small datacentre closer to your customers may be a better option than backhauling traffic to a giant regional facility, as the rough latency arithmetic below illustrates.
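
As a back-of-the-envelope illustration of that latency argument, the short Python sketch below estimates the minimum network round trip imposed by distance alone, assuming signals travel through optical fibre at roughly two-thirds the speed of light (about 200km per millisecond). The site distances are hypothetical examples, and real-world figures would be higher once switching and queueing delays are added.

```python
# Back-of-the-envelope latency sketch: why siting capacity closer to users matters.
# Assumes propagation through optical fibre at roughly 200 km per millisecond and
# ignores switching and queueing delays, so these are lower bounds.
FIBRE_KM_PER_MS = 200  # approximate one-way propagation speed in fibre

def round_trip_ms(distance_km: float) -> float:
    """Minimum network round-trip time over the given one-way fibre distance."""
    return 2 * distance_km / FIBRE_KM_PER_MS

for site, km in [("edge site, 50 km away", 50), ("regional datacentre, 1,000 km away", 1000)]:
    print(f"{site}: at least {round_trip_ms(km):.1f} ms round trip")
# edge site, 50 km away: at least 0.5 ms round trip
# regional datacentre, 1,000 km away: at least 10.0 ms round trip
```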

So the industry seems to be splitting into mega datacentres operated by the hyperscale businesses at one end, and smaller, localised datacentres at the other.

“The hyperscalers are building mega-big, multi-megawatt datacentres, [but] the smaller more dedicated datacentres seem to be where we find significant interest,” says Illsley. “I think the bit that’s going to lose out in the long term is those middle ground datacentres, once they reach end of life.”


The small datacentres are typically being built relatively quickly, according to Ovum, often using pre-configured racks of hyperconverged infrastructure that can just be slotted into place and connected up. HCI is based on appliance-like nodes that deliver compute and storage in a single box, with their resources pooled across a cluster of nodes, rather than using separate server and storage kit.

This pooling of resources in HCI calls for a sophisticated management layer that automates many processes, reducing the need for IT staff to maintain it – at least in theory.
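
To show the pooling idea at the heart of HCI in the abstract, here is a minimal, purely illustrative Python sketch of how a cluster might aggregate the compute and storage of identical nodes into one shared pool of capacity. The class names and figures are assumptions for illustration, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class HCINode:
    """A single hyperconverged appliance node: compute and storage in one box."""
    name: str
    cpu_cores: int
    ram_gb: int
    storage_tb: float

class HCICluster:
    """Pools the resources of all nodes so workloads see one shared capacity."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    @property
    def pooled_capacity(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

    def add_node(self, node):
        """Scaling out is a matter of slotting in another identical node."""
        self.nodes.append(node)

# Example: a three-node cluster, expanded by one further node
cluster = HCICluster(HCINode(f"node-{i}", 32, 512, 20.0) for i in range(3))
cluster.add_node(HCINode("node-3", 32, 512, 20.0))
print(cluster.pooled_capacity)  # {'cpu_cores': 128, 'ram_gb': 2048, 'storage_tb': 80.0}
```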

Illsley says that a good example of the new approach to building datacentres is the one employed by cloud firm OVH, which uses water-cooled servers that can be shipped out in pre-built racks to sites such as disused factories. So long as there is sufficient power and network connectivity, a new datacentre can be quickly spun-up because it does not need a specially engineered building that is outfitted with complex air conditioning.

Meanwhile, some enterprises are returning to the approach of having an on-site server room inside their office buildings. This is possible because you can now pack much more compute power into a smaller space than would have been the case in the past.

Introducing AIOps to the datacentre

Another development that is enabling these smaller datacentres is AIOps, or artificial intelligence for IT operations. This is an extension of the automation capabilities already seen in HCI, but it uses a combination of analytics and machine learning to spot developing issues with the IT infrastructure and either take action or alert an administrator.

The idea is that AIOps will allow IT teams to proactively manage infrastructure problems before they spread system-wide. It should also allow today’s smaller IT teams to manage an entire datacentre, provided it is not too large and complex.
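
At its simplest, the analytics side of AIOps amounts to watching metric streams for readings that drift outside the recent norm. The Python sketch below is a minimal, assumed example – a rolling statistical check on a utilisation metric – rather than how any particular AIOps product works; the threshold, window size and metric values are illustrative.

```python
import statistics
from collections import deque

def detect_anomalies(samples, window=30, z_threshold=3.0):
    """Flag metric readings that drift well outside the recent norm.

    samples: iterable of (timestamp, value) pairs, e.g. CPU utilisation %.
    Returns the readings whose deviation from a rolling window exceeds the
    threshold - the point at which an AIOps tool would raise an alert or
    trigger an automated remediation.
    """
    history = deque(maxlen=window)
    alerts = []
    for ts, value in samples:
        if len(history) >= 10:  # need a baseline before judging new readings
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > z_threshold:
                alerts.append((ts, value))
        history.append(value)
    return alerts

# Example: utilisation hovering around 40-42%, then a sudden spike to 95%
metrics = [(t, 40 + (t % 3)) for t in range(60)] + [(60, 95)]
print(detect_anomalies(metrics))  # [(60, 95)]
```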

The infrastructure inside datacentres is also changing in various ways. Workloads are becoming more complex and datasets are getting larger, while some applications are starting to blend AI and analytics into mainstream business processes such as sales and marketing.

In the high performance computing (HPC) arena, it has been common practice for several years to use hardware accelerators for workloads such as simulations, analytics and machine learning. GPUs, in particular, have ridden this wave, thanks to their massively parallel architecture, but more exotic hardware such as field programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) is also being used.

However, such hardware can be very costly to implement, and so few enterprises are currently investing heavily in it unless they operate in a sector where it plays a major part in driving their business.

Once again, the cloud means that anyone who wants access to GPU acceleration can buy this as a service from Amazon Web Services (AWS) or Microsoft Azure, for example, and thus it is the hyperscale companies (or those who need to operate dedicated HPC clusters) that are deploying such hardware.
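
As a hedged example of what consuming GPU acceleration as a service can look like in practice, the Python sketch below uses the AWS boto3 SDK to launch a GPU-backed instance on demand. The AMI ID and key pair name are placeholders, and the instance type is simply one of the GPU-equipped families AWS offers; substitute values appropriate to your own account and region.

```python
# Minimal sketch of renting GPU acceleration on demand from AWS with boto3,
# rather than buying the hardware outright. Placeholders are marked below.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical GPU-ready machine image
    InstanceType="p3.2xlarge",         # an NVIDIA GPU-backed instance family
    KeyName="my-key-pair",             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched GPU instance {instance_id}; terminate it when the job is done")
ec2.terminate_instances(InstanceIds=[instance_id])  # stop paying once finished
```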

Opening up to open source hardware

The hyperscale companies have also pioneered changes in servers and racks through initiatives such as the Open Compute Project (OCP), started by Facebook, and the Open19 Project, founded by LinkedIn. Both were set up to develop open specifications for datacentre equipment, with the aim not just of driving down costs but also of improving power efficiency.

Both have introduced changes like stripping out the power supply from individual servers and instead distributing power from a central power distribution unit in each rack. Open19 also specifies a backplane that connects all the servers to a rack-mounted network switch. The overall effect is somewhat like the blade server concept, but scaled up to the level of the rack.

So far, OCP and Open19 racks and servers have mostly been used by the hyperscale companies in order to have equipment built for them at a competitive price by ODMs such as Wistron, but the specifications of both projects are open to anyone, and some enterprises and finance companies have adopted such hardware for their datacentres.

In fact, the Open19 Foundation is now pushing its hardware platform for edge computing environments. The organisation claims its standardised server sled designs make such sites more efficient to maintain, because a technician can turn up and remedy a failing node in minutes by swapping the sled for an identical replacement.

General Electric is one firm that has backed Open19 from the start, and uses the infrastructure for deploying its Predix edge platform for the industrial IoT (IIoT) into remote sites.

Overall, the datacentre sector presents a changing face as enterprises and service providers adapt to a shifting environment, with cloud and other online services playing a major part. We can expect to see the hyperscale datacentres get ever bigger, while at the same time more and more mini datacentres pop up to meet localised needs. Flexibility and the ability to adapt quickly to changing circumstances are becoming important factors.

But this is just a brief overview of the kind of changes affecting datacentres. Inside them, there are also big changes happening with servers, storage and networking technology, as well as innovations in cooling and energy efficiency.
