
Virtualisation rolls on, but still less than 50% of workloads are virtualised

Survey finds server virtualisation is top way to upgrade datacentres, while public and private cloud storage gains serious momentum

Server virtualisation is still in full swing, and is the most common way for organisations to upgrade their datacentres. At the same time, upgrades to disaster recovery and storage are being dragged along in the wake of server virtualisation.

But surprisingly, less than half of all datacentre workloads are virtualised. Meanwhile, use of cloud storage – private and public – is gaining serious momentum.

Those are the findings of a survey – virtualisation backup software provider Veeam’s Availability Report – that canvassed 1,140 senior IT department decision-makers in late 2015. Most of those questioned work in organisations with more than 1,000 employees, and around one-third work in 5,000-strong businesses.

The survey also found that organisations suffer, on average, 15 unplanned downtimes a year, which cost them an average of $80,000 per hour.

The survey’s respondents are in 24 countries across the globe and work in a spread of industry sectors. Here we give the findings for the global sample, but also highlight UK figures where they are of interest.

Server virtualisation steams on

Server virtualisation is still the most common way that organisations upgrade their datacentres, with 63% currently investing in such projects. OS upgrades are the next most popular (54%), followed by disaster recovery (53%), storage upgrades (47%) and virtual desktops (44%).

Private cloud infrastructure is an investment target for 42% of organisations, while public cloud (software-as-a-service) is gaining the attention of 38%, with the same percentage putting money towards infrastructure-as-a-service and 30% targeting disaster-recovery-as-a-service.

The most common driver for datacentre modernisation is to give the customer a 24/7 experience, a reason given by 68% of respondents. Lowering operational costs is cited as a driver by 65%, while 60% say investment is driven by a desire to strengthen security and control.

Most organisations say their requirements have become more stringent to minimise downtime (61%) or to guarantee access to data (61%).

Less than half of workloads virtualised

A key figure revealed by the survey is how much of organisations’ workloads is virtualised. The average is just over 44.5%, and only 1% of organisations are 100% virtualised. Most (71%) are between 20% and 70% virtualised. Those figures are slightly higher in the UK, where the average is 48% and 72% of those questioned are between 20% and 70% virtualised.

When asked what portion of workloads they expected to be virtualised in two years’ time, the figure rose to 57% (UK 62%).

The average percentage of organisations’ workloads that are mission-critical is 47%, which rises to 52% for workloads expected to be mission-critical in two years.

Mission-critical workload backup

Only 13% of respondents have a fault-tolerant regime where mission-critical data is protected in real time or near real time. The average time between backups is four hours, but most (54%) back up mission-critical apps at a frequency of between five minutes and one hour.

The average recovery point objective (RPO) works out at four hours, with 59% between five minutes and one hour. Only 13% protect data in real time or near it.

The average recovery time objective (RTO) for mission-critical apps is three hours, with most (62%) having RTOs of up to two hours.

When it comes to recovery, the average amount of time to recover mission-critical apps is two hours. Those with fault-tolerant regimes – the 13% mentioned above – fell short of their plans, with only 9% recovering immediately from an outage. 
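To make these figures concrete, here is a minimal illustrative sketch (not drawn from the survey itself) of how backup frequency maps to a worst-case data-loss window, and how a measured recovery time compares with an RTO target. The numbers are the survey's averages; the function names are hypothetical.

```python
# Illustrative sketch only: how backup frequency relates to worst-case data loss
# (the RPO) and how a measured recovery time compares with an RTO target.
# Figures below are the survey's averages; function names are hypothetical.

def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst case, an outage strikes just before the next backup runs,
    so the data-loss window equals the full backup interval."""
    return backup_interval_hours

def meets_rto(measured_recovery_hours: float, rto_target_hours: float) -> bool:
    """True if the app was restored within its recovery time objective."""
    return measured_recovery_hours <= rto_target_hours

if __name__ == "__main__":
    # Survey averages: backups every four hours, a three-hour RTO,
    # and two hours actually taken to recover mission-critical apps.
    print(worst_case_data_loss_hours(4.0))   # -> 4.0 hours of potential data loss
    print(meets_rto(2.0, 3.0))               # -> True
```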

Unplanned downtimes

The average number of unplanned downtimes a year is 15. For mission-critical apps, the average length of that downtime is two hours, and for non-mission-critical apps it is five hours.

The average cost of that downtime is $79,510 per hour for mission-critical apps and $59,254 for non-mission-critical. Those figures are higher and lower, respectively, for the UK ($100,266 and $40,376).
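As a rough, back-of-the-envelope illustration of what those averages imply over a year, the sketch below simply multiplies the survey's figures together. It assumes, purely for illustration, that all 15 unplanned outages hit mission-critical apps, so it should be read as an upper bound rather than a survey finding.

```python
# Back-of-the-envelope annual downtime cost using the survey's averages.
# Assumption (not from the survey): every one of the 15 unplanned outages
# hits mission-critical apps, so treat this as an upper-bound illustration.

OUTAGES_PER_YEAR = 15            # average unplanned downtime events per year
HOURS_PER_OUTAGE = 2             # average length for mission-critical apps
COST_PER_HOUR_USD = 79_510       # average hourly cost, mission-critical apps

annual_cost = OUTAGES_PER_YEAR * HOURS_PER_OUTAGE * COST_PER_HOUR_USD
print(f"Estimated annual downtime cost: ${annual_cost:,}")  # $2,385,300
```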

A key cause of downtime is patching and upgrades. The largest number of respondents (38%) said they experience downtime during patching and upgrades some of the time, 10% said it happens most times and 15% said it happens half the time.

How often to test backups

A key task in ensuring you can recover from an outage is to test your backups. The survey asked respondents how often they test that backups can be restored to production, and the average was every seven days (four for the UK).

The largest proportion test weekly (39%), but here the UK lags behind, with the most common period between tests being monthly or quarterly (26% and 28%).

But how deep does that testing go? The survey asked those that test their backups what percentage they test in a quarter. Most test less than 5% of backups (UK 7%).

A staggering number of backups fail to recover upon testing. The global average is 18.3%, with the UK failing to recover 9% of backups.
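One low-tech way to make such testing routine is to script a restore into a staging area and verify what comes back. The sketch below is a generic illustration only: the restore command, paths and manifest are hypothetical placeholders, and it stands in for whatever backup tooling an organisation actually uses.

```python
# Minimal sketch of an automated restore test: restore a backup to a staging
# directory, then verify file checksums against a known-good manifest. The
# restore command and paths are placeholders; real backup tooling is not shown.

import hashlib
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum of a restored file, for comparison with the source manifest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_and_verify(restore_cmd: list[str], staging: Path,
                       manifest: dict[str, str]) -> bool:
    """Run the (placeholder) restore command, then check every restored file
    against its expected checksum. Returns True only if all files match."""
    subprocess.run(restore_cmd, check=True)
    return all(sha256(staging / name) == digest
               for name, digest in manifest.items())

# Example usage (hypothetical command, path and checksum):
# ok = restore_and_verify(["restore-tool", "--target", "/srv/staging"],
#                         Path("/srv/staging"),
#                         {"db.dump": "expected-sha256-digest-here"})
```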

Disaster recovery

So how do organisations protect their data? The survey found the most popular method is off-site backup to disk, tape or cloud (66%), followed by local backup (63%). Then came local replication (40%) and off-site replication (36%).

Storage snapshots on the array are used by 36%, snapshots to secondary on-site storage are used by 29%, and 18% use snapshot to secondary off-site storage.

The cloud

The survey asked how organisations use the cloud. Most (62%) use the cloud to store backups off-site for disaster recovery, and archiving in the cloud is also quite common (60%). Remarkably, 39% say they use the cloud for replication and high availability.

Only 11% globally do not use the cloud for data protection, but that figure leaps to 31% for the UK. The UK also lags on use of the cloud for off-site backup retention (41%) and cloud archiving (41%).

