How the datacentre market has evolved in 12 months

Feature

It has been an interesting year for the datacentre. The increasing interest in co-location, hosting and cloud has led to murmurings around the death of the in-house datacentre, while hardware and software suppliers have tried to adapt to a world that is no longer as predictable as it was.

Certainly, the co-location suppliers, such as Interxion, Equinix and Telecity, have been having a good time, yet even they realise that they cannot rest on their laurels.


Differentiating themselves from the crowd is key, and the co-location companies are looking at how to become "cloud brokers", helping their customers identify who else in a given facility may be able to provide functions and services at internal datacentre interconnect speeds, rather than at wide area network (WAN) interconnect speeds.

Such services will help to define a new market during 2014, and the winners will become centres of gravity for new software players that see such a model as fitting alongside their standard on-premise and/or cloud offerings.

This could then lead the likes of Rackspace, Savvis and other hosting companies, along with the more proprietary cloud players such as AWS and Google, to increase their efforts in providing marketplaces where customers can gain access to self-service functionality and applications in the same manner – or at least in a way that minimises latency and optimises response across their multiple datacentres.

Certainly, throughout 2014 and beyond, the market for these new service brokers will increase. However, the datacentre owned and managed by the organisation is nowhere near dead. The way that a datacentre needs to be built, provisioned and managed is certainly changing, though.

Planning for shrinkage

In the past, datacentre planning was mainly about when it would need to be expanded. However, many datacentres are now shrinking as more workloads move into external facilities and virtualisation and increasing equipment densities push down capacity requirements.

This introduces a lot of issues, particularly around the equipment that is crucial to a datacentre’s operation but tends to be overlooked by IT. Facilities management may have control of equipment such as uninterruptible power supplies (UPSs), cooling systems and auxiliary generators, and these have tended to be implemented as monolithic systems. This may be fine when the only future is expansion, as growth can come from the incremental addition of smaller units. When shrinkage is required, however, it means either running with excess capacity on these systems (and so ruining any power usage effectiveness – PUE – scores) or a complete, expensive replacement of the systems to better reflect the reduced requirements.
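
To make the PUE point concrete, here is a minimal illustrative sketch in Python (the figures are invented, not drawn from any real facility) of how the fixed overhead of oversized UPS and cooling plant inflates PUE as the IT load shrinks:

    # Illustrative only: how fixed facilities overhead inflates PUE as IT load shrinks.
    # PUE = total facility power / IT equipment power (lower is better; 1.0 is ideal).

    def pue(it_load_kw, overhead_kw):
        """Power usage effectiveness for a given IT load and facilities overhead."""
        return (it_load_kw + overhead_kw) / it_load_kw

    # A monolithic UPS/cooling plant sized for a 500 kW IT estate keeps drawing
    # roughly the same overhead even after workloads move out of the facility.
    fixed_overhead_kw = 300

    for it_load_kw in (500, 350, 200):
        print(f"IT load {it_load_kw} kW -> PUE {pue(it_load_kw, fixed_overhead_kw):.2f}")

    # Prints PUE values of 1.60, 1.86 and 2.50 - the same plant looks progressively
    # worse as the datacentre shrinks, which is the case for modular replacement.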

Therefore, we have seen a move to replace these systems with more modular ones as they come up for renewal. Indeed, many organisations are identifying the “sweet spot” where the benefit of continuing to “sweat” an item of equipment is outweighed by the cost of running it inefficiently, and are replacing these systems with more efficient ones.
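
As a rough, hypothetical illustration of that sweet-spot calculation (all figures are invented), the following sketch compares the annual energy cost of sweating an oversized unit against a right-sized, more efficient replacement:

    # Hypothetical figures only: a back-of-envelope "sweat vs replace" comparison.

    def annual_energy_cost(load_kw, efficiency, price_per_kwh=0.12, hours=8760):
        """Yearly energy cost of delivering load_kw through plant of a given efficiency."""
        return load_kw / efficiency * hours * price_per_kwh

    keep_cost = annual_energy_cost(load_kw=200, efficiency=0.70)       # old, oversized unit
    replace_cost = annual_energy_cost(load_kw=200, efficiency=0.92)    # right-sized modular unit
    replacement_capex = 150_000

    annual_saving = keep_cost - replace_cost
    print(f"Annual saving from replacing: {annual_saving:,.0f}")
    print(f"Payback period: {replacement_capex / annual_saving:.1f} years")

    # With these invented numbers the replacement pays for itself in roughly two years;
    # beyond that point, continuing to sweat the old unit costs more than replacing it.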

This is also carrying through to the IT equipment, where IT teams are realising that trying to plan an IT platform over many years may no longer be an effective way of meeting the needs of the organisation. Many more organisations now take a lifecycle management approach to their hardware, replacing items on a schedule that aims to maintain the most cost-effective platform for the business, rather than simply the cheapest one. Here, organisations such as Bell Microsystems provide full services from acquisition through implementation and management to swap-out and secure disposal. The disposal of old equipment has also matured as legal requirements such as the Waste Electrical and Electronic Equipment (WEEE) regulations tighten. As well as Bell, EcoSystems IT provides a complete secure disposal service, recycling as much as it possibly can – generally at zero net cost to the organisation requiring the service.

Leasing and finance

New datacentre equipment has also tended to move away from self-provisioned racks towards a more converged infrastructure of pre-engineered compute, storage and network systems, such as Cisco’s UCS, IBM’s Pure Data and Dell’s VRTX and Active Systems platforms. This may be seen as a heavy expense for organisations struggling to raise finance in today’s economic climate.

For those struggling to raise the funds to move their existing datacentre towards a modern platform architecture, options from the likes of BNP Paribas Leasing Solutions allow IT budgets to be aggregated across many years and made available to cover the hardware and software acquisition costs of larger projects, with repayments spread over a longer period. This avoids the risks often associated with tying IT expenditure to bank or other finance loans, where the wider business may have to be put at risk against the repayments; BNP Paribas only takes a lien against the equipment itself, minimising the business risk.
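
Purely as an illustration of how such repayments spread the cost (the rate, term and amount below are invented, not BNP Paribas figures), a standard fixed-payment calculation looks like this:

    # Illustrative only: a generic fully amortising lease repayment calculation.
    # Not an actual BNP Paribas Leasing Solutions product, rate or term.

    def quarterly_payment(principal, annual_rate, years):
        """Fixed quarterly repayment that pays off the principal over the term."""
        r = annual_rate / 4          # periodic interest rate
        n = years * 4                # number of quarterly repayments
        return principal * r / (1 - (1 + r) ** -n)

    payment = quarterly_payment(principal=500_000, annual_rate=0.06, years=4)
    print(f"Quarterly repayment: {payment:,.0f}")   # roughly 35,400 per quarter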

One of the biggest changes in the datacentre, however, has been driven by the increased use of virtualisation. Virtualisation has now become mainstream, with many organisations having at least a good proportion of their servers fully virtualised. Making effective use of a virtualised platform has led to a need for more complex and effective software to manage the whole environment – and for an abstraction of the management from the underlying hardware wherever possible.

To start with, 2013 saw the emergence of management software that concentrated on the virtual layer, but neglected the dependencies between the virtual and physical worlds. Now, Quocirca sees emerging management systems from the likes of CA, IBM and Dell providing a more holistic view of what is happening, ensuring that issues at the physical layer do not have an overly adverse impact on the virtual systems.
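
A minimal sketch of the idea (the hosts, virtual machines and alert below are invented, and this is not how the CA, IBM or Dell products model it) is simply to map each virtual machine to the physical host it depends on, so a physical-layer fault can be traced to the virtual workloads it threatens:

    # Illustrative sketch: tracing a physical-layer issue through to the virtual
    # machines that depend on it. Hosts, VMs and the alert are invented.

    host_to_vms = {
        "host-01": ["erp-app", "erp-db"],
        "host-02": ["web-frontend", "mail"],
    }

    def impacted_vms(alert_host, mapping):
        """Return the virtual workloads running on the host that raised an alert."""
        return mapping.get(alert_host, [])

    # A hardware or facilities alert on host-01 (say, a failed power supply) is
    # translated into the list of VMs that should be migrated or flagged.
    print(impacted_vms("host-01", host_to_vms))   # ['erp-app', 'erp-db']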

The software-defined world

This has then led to the emergence of the “software defined” world. Starting with software defined networking, it was realised that, in a fully virtualised world where functions and processes would need to cross over different platforms and systems, a dependence on proprietary systems would become an increasing hindrance to how well an IT platform could serve the business.

Now, alongside software defined networking (SDN), we have seen software defined computing (SDC), software defined storage (SDS) and several other SDx variants emerge. EMC has coined the term software defined datacentre (SDDC) to try to show how all the SDx layers will need to be brought together through an overarching management capability.
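
In a much-simplified, invented form, the "software defined" idea boils down to declaring the desired state of compute, storage and network as data, and letting supplier-specific drivers translate it into device-level actions:

    # Much-simplified illustration of the SDx idea: desired state is declared as data
    # and applied through pluggable drivers rather than configured device by device.
    # The settings and the driver are invented for illustration.

    desired_state = {
        "network": {"vlan": 120, "qos": "gold"},
        "storage": {"tier": "ssd", "replicas": 2},
        "compute": {"vcpus": 8, "memory_gb": 32},
    }

    def apply_state(domain, settings):
        """Stand-in for a supplier-specific SDx driver applying the declared state."""
        print(f"[{domain}] applying {settings}")

    for domain, settings in desired_state.items():
        apply_state(domain, settings)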

For datacentre managers, 2014 will therefore need to see the emergence of the software defined facility (SDF), bringing in the areas that facilities management tends to control. Here, look to the datacentre infrastructure management (DCIM) suppliers such as nlyte, Emerson Network Power, CA and Raritan to push inclusive systems that embrace and work with existing SDx software from other suppliers.
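
A speculative sketch of what an SDF-style policy could look like (the metrics, thresholds and actions are invented, not features of the DCIM products named above): facilities telemetry such as rack inlet temperature and power draw feeds the same automation loop as the IT-layer SDx tools.

    # Speculative illustration of a software defined facility (SDF) policy loop.
    # Metrics, thresholds and actions are invented for illustration.

    rack_telemetry = {"rack-07": {"inlet_temp_c": 27.5, "power_kw": 11.2}}

    def sdf_policy(rack, metrics, max_temp_c=27.0, max_power_kw=12.0):
        """Return the actions an orchestration layer should take for one rack."""
        actions = []
        if metrics["inlet_temp_c"] > max_temp_c:
            actions.append(f"raise cooling or migrate workloads off {rack}")
        if metrics["power_kw"] > max_power_kw:
            actions.append(f"cap power draw on {rack}")
        return actions

    for rack, metrics in rack_telemetry.items():
        print(sdf_policy(rack, metrics))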

All told, 2013 has been a period of dynamic change for datacentre managers. However, it has really only laid the foundations for 2014, and Quocirca expects the continued move to a hybrid mix of existing physical platforms alongside private and public clouds, spread across multiple datacentre facilities, to drive further innovation in the market.



This was first published in November 2013

 
