Businesses see a clear need for more use of virtualisation amid growing volumes of data and data centre equipment.
Research from business and IT analysis company Quocirca examined what was driving investments in data centres in Europe and the Middle East. The research, carried out as two cycles in May and November 2011, found that consolidation was the main driver for data centre investment, followed by limitations in current facilities. As the prospects of a double-dip recession set in, the need to support business growth dropped dramatically, as did the need to move to a new technical architecture.
Figure 1: Survey respondents stated that consolidation drove their data centre investments. [Source: Quocirca]
The drop-off in investments in new technical architectures coincides with an increased adoption of virtualisation during that period, as demonstrated by other research. Companies that have carried out a more complete adoption of virtualisation may feel that they have already changed their platform and won’t be looking to move to a new platform. Even if these organisations see themselves adopting more virtualisation during 2012, they will not see it as a change of platform.
Organisations, however, continue to face IT infrastructure management problems as the volume of data centre equipment grows, so further virtualisation adoption is expected.
Developing an IT platform and data centre infrastructure strategy
Businesses are adopting virtualisation because it allows them to run more workloads on the same amount of IT equipment. It also gives them an opportunity to lower expenses (by saving on new physical servers and infrastructure).
Being able to move from a 5% to 10% server utilisation rate to a 40% to 50% rate provides savings not only in hardware, licensing and support, but also in data centre energy costs, which continue to trend upwards.
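The scale of those savings is easy to sketch. The following is an illustrative calculation only: the workload count and the 5% and 45% load figures are assumed for the example, and `servers_needed` is a hypothetical helper, not part of any product.

```python
import math

def servers_needed(workloads: int, load_pct: int, target_pct: int) -> int:
    """Physical servers required when each workload consumes load_pct of a
    server's capacity and hosts are packed up to target_pct utilisation."""
    per_host = target_pct // load_pct       # workloads that fit on one host
    return math.ceil(workloads / per_host)  # hosts needed, rounded up

# 100 workloads, one per server at ~5% utilisation: 100 servers.
print(servers_needed(100, 5, 5))   # 100
# The same workloads consolidated to ~45% average utilisation: 12 servers.
print(servers_needed(100, 5, 45))  # 12
```

Even before energy is counted, cutting the estate from 100 hosts to 12 reduces hardware, licensing and support costs in roughly the same proportion.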
Beyond that, one of the main reasons for virtualisation adoption is to gain a new, flexible and dynamic IT platform -- one which can respond more rapidly to changes in the business's strategic needs. A dynamic IT platform can “borrow” compute, network and storage resources on which to try things out, rather than having to allocate discrete physical equipment. This means businesses can experiment with new ideas at little cost.
When to say no
But one of the biggest errors organisations make when developing an infrastructure strategy is to use virtualisation merely to move an existing environment onto one that is more efficient – not necessarily onto one that is more effective.
In many implementations, virtualisation allows existing systems just to run more efficiently through the use of shared resources, rather than enabling a massive change in platform in and of itself. In that scenario, virtualisation is really just an evolution of clustering, and the applications are still constrained by a physical grouping of specific servers. In addition, workloads are not shared amongst the available resources. It is still a one-application-per-environment system, rather than a shared-everything approach.
To correctly implement virtualisation, you must put in place the right platform for an elastic cloud so it can share available resources amongst multiple workloads in an automated and transparent manner. Cloud computing’s definition specifies that resources must be able to be applied “elastically” and be able to be provisioned and de-provisioned at will against workloads.
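That elastic behaviour can be illustrated with a minimal sketch. `ElasticPool`, its `rebalance` method and the capacity figures are hypothetical names invented for this example; real platforms expose elasticity through their own provisioning APIs.

```python
import math

class ElasticPool:
    """Toy model of elastic capacity: nodes are provisioned and
    de-provisioned automatically as workload demand changes."""

    def __init__(self, capacity_per_node: int = 10):
        self.capacity_per_node = capacity_per_node
        self.nodes = 0

    def rebalance(self, demand: int) -> int:
        """Match the node count to current demand, scaling out or in."""
        self.nodes = math.ceil(demand / self.capacity_per_node) if demand > 0 else 0
        return self.nodes

pool = ElasticPool()
print(pool.rebalance(35))  # demand spike: scales out to 4 nodes
print(pool.rebalance(8))   # demand falls: scales back in to 1 node
print(pool.rebalance(0))   # idle: all capacity released
```

The point of the sketch is the contrast with clustering: capacity follows the workload automatically in both directions, rather than being pinned to a fixed physical grouping of servers.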
A good cloud implementation should also allow for composite applications, where a business process is supported through the aggregation of different technical services in an on-demand manner. This may require using a mix of technical services from a cloud system owned and operated by the organisation itself, ones that are run on behalf of the organisation by a third party and ones which are freely available in public cloud services.
Mistakes to avoid while developing an IT platform and infrastructure strategy
Organisations and IT departments need to adopt a new approach to virtualisation and cloud computing to ensure their data centre infrastructure is responsive to changing business needs. They should stop thinking along the lines of, “We are having problems with our customers, and we better buy a customer relationship management application” or “Inventory is causing us issues, so let’s put in a different enterprise resource planning package.”
Instead they should be thinking, “We have problems with the way we are attracting customers. We need to run business processes that are correct for today’s needs, and we need suitable technical services to ensure that the processes run correctly today and can change tomorrow.”
It is the IT department’s responsibility to ensure that its infrastructure strategy, along with its mix of cloud environments (both private and public cloud platforms), can support the organisation's changing business needs.
A private cloud held in a private data centre must be flexible. As equipment densities continue to increase, a business must ensure that power distribution, cooling and floor strengths are sufficient to support needs over a significant (5+ years) period of time. Similarly, when choosing an external colocation or cloud hosting environment, the same due diligence is required to ensure that the facilities will support the business for a similar period of time – and that the provider has plans in place to ensure support well beyond that time frame.
A private cloud in a private facility is also unlikely to be just a pure cloud environment. There will remain certain functions and applications which – for whatever reason – an organisation chooses to continue running on a physical server, on a cluster or on a dedicated virtual platform.
IT must factor in this heterogeneous mix of physical and virtual IT platforms while configuring and supporting a data centre facility, as well as use it to inform the organisation’s systems integration and management processes. IT professionals must also allow for integration and management across the whole chain of private and public systems.
Finally, users must accept that “the next big thing” always supersedes “the next best thing.” Cloud will certainly be important and will lay the foundation for the future. However, there will be continuous changes in how functional services are provisioned and used, causing further adjustments to the underlying platform.
Developing a modular approach
Users, therefore, should not opt for a prescriptive or a proscriptive platform. Instead, they should ensure that the IT platform adheres to industry standards and that the data centre facility itself is as flexible as possible.
This may mean that a modular approach to the data centre makes more sense, rather than a build based on filling bare racks. Engineered compute blocks, consisting of pre-configured server, storage and network components, can be more easily provisioned and maintained within a facility and can be swapped out more effectively as required. Alongside this, power distribution and cooling needs are more easily met and there is less need for the facility to be continuously altered to meet changes at a granular level.
A change in IT platform is what the majority of organisations need to consider when developing a data centre infrastructure strategy, because existing application and physical resource approaches are no longer sufficient. Providing a flexible environment helps support the business in the most effective manner, now and for the foreseeable future. Virtualisation and cloud deliver that, but they must be implemented correctly to make good on their promises.
Clive Longbottom is a service director at UK analyst Quocirca Ltd. and a contributor to SearchVirtualDataCentre.co.UK.
This was first published in March 2012