Many organisations look to implement server virtualisation to reduce capital and operational expenditures. While server virtualisation also provides storage managers with a variety of new features, these features can lead to additional complexity, particularly with data backup. Not only does virtualisation introduce new components that require backing up, it also provides more ways to back up an organisation's infrastructure.
Let's take a look at some key factors relating to data backup that organisations should consider when embarking on any server virtualisation project:
What should you back up?
In a typical physical server environment, organisations only need to consider backing up the operating system (OS) and data. In virtual environments, the OS and data are still just as important. However, three new elements that are critical to the infrastructure are introduced: the hypervisor, virtual machine configuration information and management tools.
With server virtualisation, virtual machines (VMs) are essentially stored as image files that can be backed up like any other file, even while the VM continues to run. These image-level backups have cut system restoration times to minutes.
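At its simplest, an image-level backup amounts to copying the VM's disk image file to a backup location. The sketch below is a minimal stand-in in Python: the hypervisor snapshot that would freeze the image so it can be copied while the VM runs is hypervisor-specific and omitted here, and all paths and file names are hypothetical.

```python
import pathlib
import shutil
import tempfile

def image_backup(image_path, backup_dir):
    """Copy a VM's disk image file to the backup location.

    In production, a hypervisor snapshot would first freeze the image so it
    can be copied safely while the VM keeps running; that step is
    hypervisor-specific and omitted from this sketch.
    """
    dest = pathlib.Path(backup_dir) / pathlib.Path(image_path).name
    shutil.copy2(image_path, dest)
    return dest

# Usage with a throwaway file standing in for a VM disk image.
with tempfile.TemporaryDirectory() as tmp:
    src_dir = pathlib.Path(tmp) / "datastore"
    dst_dir = pathlib.Path(tmp) / "backups"
    src_dir.mkdir()
    dst_dir.mkdir()
    img = src_dir / "vm01.img"
    img.write_bytes(b"\0" * 1024)
    dest = image_backup(img, dst_dir)
    print(dest.name)  # vm01.img
```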
However, the ability to easily back up and restore entire VMs creates a large amount of data traversing the system and requires additional storage. Organisations may want to consider data deduplication products when performing image-level backups because many of the backups will contain multiple copies of OS files.
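To see why deduplication pays off here, consider a minimal fixed-block dedup sketch: each image is split into equal-sized blocks, and a block shared by several images (such as common OS files) is stored only once. The block size and the sample "images" below are illustrative, not taken from any real product.

```python
import hashlib

def dedup_store(images, block_size=4096):
    """Store fixed-size blocks from several VM image backups,
    keeping only one copy of each unique block."""
    store = {}      # block hash -> block bytes (each unique block stored once)
    manifests = {}  # image name -> ordered list of block hashes
    for name, data in images.items():
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # no-op if block already stored
            hashes.append(digest)
        manifests[name] = hashes
    return store, manifests

# Two "VM images" sharing the same OS blocks but with different data blocks.
os_blocks = b"A" * 4096 + b"B" * 4096
vm1 = os_blocks + b"X" * 4096
vm2 = os_blocks + b"Y" * 4096
store, manifests = dedup_store({"vm1": vm1, "vm2": vm2})

# Raw backups hold 6 blocks; the deduplicated store holds only 4 unique ones.
print(len(store))  # 4
```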
Network versus storage
Many organisations that run virtual infrastructures continue to use traditional data backup methods across the local-area network (LAN), with agents installed at the OS level. This approach avoids spending on additional technologies, but backup windows can be an issue, and in the event of a system failure, recovery of the OS and applications can still take hours, if not days.
The shared resource model inherent within server virtualisation can also have a significant impact on this data backup method. If multiple VMs try to run backups at the same time, they will often contend for the same CPU, memory and network resources, which will result in poor backup performance.
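One common mitigation is to throttle how many VM backups run concurrently rather than letting every VM start at once. The sketch below assumes a simple host-level limit of two simultaneous backup jobs; the limit and VM names are hypothetical.

```python
import threading
import time

MAX_CONCURRENT_BACKUPS = 2  # assumed limit; tune to host CPU, memory and network capacity
slots = threading.BoundedSemaphore(MAX_CONCURRENT_BACKUPS)
completed = []

def backup_vm(name):
    # The semaphore ensures at most two backup jobs run at any one time,
    # so jobs queue instead of contending for the same shared resources.
    with slots:
        time.sleep(0.05)  # stand-in for the actual backup work
        completed.append(name)

threads = [threading.Thread(target=backup_vm, args=(f"vm{i}",)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(completed))  # 6
```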
By moving backup to the storage layer (LAN-free backup), where backups are taken at the array level via snapshots, many of the issues detailed above can be avoided. Backup windows can shrink to seconds and restores to minutes because data no longer needs to travel across the network. To take advantage of LAN-free backup, organisations will generally have to implement products that leverage virtualisation vendors' technologies. Depending on the technologies already in place, this can mean additional investment and configuration changes.
The shared resource model can also create challenges when performing backups at the array level. As each volume generally houses several virtual machines, if a snapshot of that volume is taken, then all VMs are backed up. This is great for the backup, but can cause issues when you wish to recover a single VM, especially if that VM has virtual disks spanning multiple volumes. Data consistency can also be a problem when performing array-based snapshotting because the arrays aren't typically aware of the VMs or the applications residing in them. By carefully planning the locations of the virtual disk files and taking advantage of the advanced automation software tools provided by some storage vendors, the array-based backup method is an excellent option.
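Part of that planning can be as simple as mapping each VM's virtual disk files to the volumes that hold them and flagging any VM that spans more than one volume, since a snapshot of a single volume cannot restore such a VM consistently. The layout below is hypothetical.

```python
# Hypothetical layout: VM name -> volumes holding its virtual disk files.
vm_disks = {
    "web01": ["vol1"],
    "db01":  ["vol1", "vol2"],  # virtual disks span two volumes
    "app01": ["vol2"],
}

def spanning_vms(vm_disks):
    """Return VMs whose virtual disks span multiple volumes -- a snapshot
    of either volume alone cannot restore them to a consistent state."""
    return sorted(vm for vm, vols in vm_disks.items() if len(set(vols)) > 1)

print(spanning_vms(vm_disks))  # ['db01']
```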
Backing up virtual infrastructures can be relatively simple or highly complex depending on an organisation's budget, acceptance of new technologies, and recovery point objectives (RPO) and recovery time objectives (RTO). To successfully back up a virtual infrastructure, organisations should address the following four points:
- Understand exactly what needs to be backed up
- Choose the right backup method(s) and research their limitations
- Carefully select a vendor and test their product
- Introduce backup tiering based on service criticality
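The tiering point above can be sketched as a simple mapping from service criticality to a backup method and RPO/RTO targets. The tier names, methods and figures below are illustrative assumptions, not recommendations.

```python
# Hypothetical tier definitions; real values come from the business's RPO/RTO targets.
TIERS = {
    "critical": {"method": "array snapshot", "rpo_minutes": 15,   "rto_minutes": 30},
    "standard": {"method": "image-level",    "rpo_minutes": 240,  "rto_minutes": 240},
    "low":      {"method": "agent over LAN", "rpo_minutes": 1440, "rto_minutes": 2880},
}

def backup_plan(services):
    """Assign each service the backup tier matching its criticality."""
    return {name: TIERS[criticality] for name, criticality in services.items()}

plan = backup_plan({"erp": "critical", "intranet": "standard", "test-lab": "low"})
print(plan["erp"]["method"])  # array snapshot
```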
This was first published in October 2009