Expert Advice

The end of the physical server is nigh

When it comes to virtualisation, server virtualisation sits at the top of the adoption list. The results of the National Computing Centre survey on virtualisation demonstrated this, with 62% of organisations already using it and a further 36% either planning or evaluating it.

Are you up for the challenge?
To date, most organisations have taken the easy route, going for the low-hanging fruit: non-business-critical server virtualisation that gives them quick results and a high volume of workloads transitioned into the virtual environment. Now it is time to tackle the higher-risk, more business-critical functions as well. The tools and processes companies use to approach virtualisation strategies today will shape the future of their IT departments.

We need to get over the server-hugging mentality and remove ourselves from physical hardware.

Paul Casey, Data Centre Platforms Practice Leader, Computacenter UK

The customers I've spoken with know that they have to deal with higher-value and potentially more challenging workloads, but many don't seem ready to make that leap yet.

One reason for this is an understandable fear of the unknown. We have seen that application technology owners within large organisations are pushing back on some areas of virtualisation because it makes them nervous about the applications they look after. They are responsible for supporting that application, and allowing it to be virtualised takes them out of their comfort zone.

The scope for virtualising servers is limited if the technology is resisted internally, even when the vendor supports the application in a virtual state. However, CFOs are switched on to the cost-saving opportunities, and we believe they are the ones who will be pushing for more.

Then there is the fact that the majority of businesses haven't driven consolidation as far as they could (or should) have, because they don't always understand capacity management. Many technologists, especially those from the Windows environment, are not used to capacity planning for consolidation platforms.

This discipline is better understood in the mainframe and UNIX worlds, and the lack of understanding and tools can affect both the return on investment (ROI) and the number of workloads virtualised, raising questions such as: how do you plan for new workloads when you don't know what you are currently using? What impact will new workloads have on the environment? And when will existing capacity run out?
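To make those questions concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it is a hypothetical placeholder -- in practice the inputs would come from your own monitoring and capacity tools -- but it shows the shape of the calculation: measure what you use, trend the growth, and derive the runway before existing capacity runs out.

    # Back-of-envelope capacity check for a consolidation platform.
    # All figures are illustrative assumptions, not measured data.

    def months_until_full(capacity, used, monthly_growth):
        """Estimate months remaining before existing capacity runs out."""
        if monthly_growth <= 0:
            return float("inf")  # flat or shrinking demand: no exhaustion date
        return (capacity - used) / monthly_growth

    def new_workload_fits(capacity, used, workload, buffer=0.2):
        """Check whether a new workload fits while keeping a safety buffer."""
        return used + workload <= capacity * (1 - buffer)

    cluster_capacity = 400.0  # aggregate CPU capacity, GHz (hypothetical)
    current_usage = 260.0     # measured peak usage, GHz (hypothetical)
    growth = 12.0             # monthly growth trend, GHz (hypothetical)

    print(f"Capacity runway: {months_until_full(cluster_capacity, current_usage, growth):.1f} months")
    print("New 30 GHz workload fits:", new_workload_fits(cluster_capacity, current_usage, 30.0))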

What do we propose?
To take virtualisation to the next level, I suggest categorising workloads based on measures that fit the business requirements -- business criticality, availability and cost, among others -- and having a virtual platform that maps on to those requirements.
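As a sketch of what that categorisation might look like, the Python below maps workloads on to hypothetical platform tiers. The tier names, thresholds and fields are assumptions for illustration, not a standard model; the point is that requirements, not applications, drive the placement.

    # Illustrative workload categorisation. Tiers, thresholds and field
    # names are hypothetical; substitute measures that fit your business.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        criticality: int            # 1 (low) to 5 (mission critical)
        availability_target: float  # e.g. 0.999 for "three nines"
        monthly_budget: int         # cost ceiling in GBP; would constrain the choice in practice

    def platform_tier(w: Workload) -> str:
        """Map a workload's requirements on to a virtual platform tier."""
        if w.criticality >= 4 or w.availability_target >= 0.9999:
            return "tier 1: isolated cluster, replicated storage"
        if w.criticality >= 2 or w.availability_target >= 0.999:
            return "tier 2: shared cluster with HA failover"
        return "tier 3: consolidated commodity cluster"

    for w in [Workload("payroll", 5, 0.9999, 8000),
              Workload("intranet", 2, 0.999, 1500),
              Workload("test rig", 1, 0.95, 300)]:
        print(f"{w.name}: {platform_tier(w)}")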

Once that is in place, you will be better placed to manage business-critical systems in isolation from lower-priority workloads and to provide the levels of availability and recoverability required. This can often mean better availability and faster recoverability than is possible in a non-virtual state, and it is important the business recognises that going virtual can deliver a better service at a lower cost than the traditional approach.

But to deliver this type of service, you need the tools and processes for effective management of the virtual estate, and that is where a gap exists in most virtual environments today. These elements also need to be addressed before you can bring business-critical systems into the environment with confidence.

Another issue is that budget holders in the business tend to be a bit more emotional about "physical stuff". They want to see the big servers they have paid for buzzing away in the back room, running their applications. It can be difficult to break out of such an established mindset, but you need to overcome it if you are going to maximise the reach and benefits of virtualisation in the data centre.

Another challenge for IT is that virtualisation continues to evolve at pace, with the technology taking leaps forward every 12 to 18 months. Often, by the time organisations have evaluated and then started to implement virtualisation, a new, more compelling and efficient version is available, making migration into the environment even more challenging.

This is especially true for larger organisations with a high number of servers to virtualise, as these projects can take years to complete. Smaller companies with fewer than 300 servers can complete the activity reasonably quickly and then find it easier to exploit the improvements the latest generation of hypervisors has to offer.

In addition, once you have virtualised your estate, changes are easier to deal with at the infrastructure level -- upgrading or changing hardware technology becomes a much simpler task than trying to move 300 Windows-based applications from one vendor's server platform to another.

The crystal ball
I want to suggest two things here. The first is to follow a virtual-first policy for all new workloads: we need to get over the server-hugging mentality and remove ourselves from physical hardware. The second is not to get obsessed by individual applications. Many of them are not mission critical. Look objectively at your requirements, not at the number of applications you have, or think you should have.

Be forward thinking and focus more on your eventual target than what's immediately on your plate.

Paul Casey, Data Centre Platforms Practice Leader, Computacenter UK

I believe that 2010 will be the year companies start to become more ambitious when it comes to server virtualisation. Companies are already aware of the power savings, agility and reduced floor space that virtualisation offers, and those who have already virtualised will be looking to further reduce costs and improve consistency of service delivery and change through data centre automation. I see this growing massively in 2010 as organisations try to exploit cloud-type models.

The models for the future lie in virtualisation on-demand, cloud computing and self-service. And companies are already closer to these models than they might think.

When it comes to market adoption, we have found that all sectors, from manufacturing to retail, have taken to virtualisation at more or less the same rate. However, companies doing it on the cheap find that the adoption rate is slow and the risks are high. It is better to invest in the right tools and services in order to do the job right for your own needs.

Avoid pitfalls along the way
A typical hurdle with virtualisation is failing to achieve the expected ROI. People tend to oversimplify the challenges of moving to a virtual platform -- business resistance, complexities with candidate systems, technology management challenges and so on. They often set their objectives too high and underestimate the tools and investment they need.

From research results, we've seen that companies do get an ROI, but not as much as they wanted or expected. Be realistic when you forecast the cost and timescale of a virtualisation implementation -- don't just rely on vendors' ROI claims.
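A toy calculation makes the point. Every number below is a hypothetical placeholder, but it shows how an optimistic, hardware-only view of ROI shrinks once migration effort, tooling and training are counted in.

    # Toy three-year ROI forecast; all figures are hypothetical placeholders.

    def simple_roi(savings, cost):
        """Return on investment as a fraction of cost."""
        return (savings - cost) / cost

    hardware_savings = 450_000  # power, floor space, avoided server refresh
    licences_tools = 120_000    # hypervisor, management and capacity tools
    migration = 150_000         # planning, testing, out-of-hours moves
    training_ops = 60_000       # new skills and revised processes

    optimistic = simple_roi(hardware_savings, licences_tools)
    realistic = simple_roi(hardware_savings, licences_tools + migration + training_ops)

    print(f"Hardware-only view: {optimistic:.0%}")  # 275%
    print(f"Fuller view:        {realistic:.0%}")   # 36%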

My advice for companies is simple: Be forward thinking and focus more on your eventual target than what's immediately on your plate. Set a vision for yourself and then work out a plan on how you will get there.

Paul Casey is the Data Centre Platforms Practice Leader at Computacenter UK.


This was first published in August 2010

 
