Backup is a key component of any good data protection strategy. But does the advent of virtualisation mean traditional backup can be replaced by new data protection methods, such as replication, snapshots and live migrations?
Backup is the process of making a secondary copy of data that can be restored to use if the primary copy becomes lost or unusable. Backups usually comprise a point-in-time copy of primary data taken on a repeated cycle – daily, monthly or weekly.
Sometimes backups can be used to roll back a virtual server environment to a previous point in time as part of the maintenance or upgrade process. Backups can also be used as a virtual machine cloning tool.
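To make the point-in-time idea concrete, here is a minimal sketch of a daily backup cycle with retention. The function name `take_backup` and the directory layout are illustrative assumptions, not a specific product's behaviour.

```python
import shutil
from datetime import datetime
from pathlib import Path

def take_backup(source: str, backup_root: str, keep: int = 7) -> Path:
    """Create a timestamped point-in-time copy of `source` and
    prune the oldest copies beyond the `keep` most recent."""
    dest = Path(backup_root) / datetime.now().strftime("%Y-%m-%d_%H%M%S")
    shutil.copytree(source, dest)
    # Retention: timestamped names sort chronologically, so
    # everything except the newest `keep` directories is deleted.
    copies = sorted(Path(backup_root).iterdir())
    for old in copies[:-keep]:
        shutil.rmtree(old)
    return dest
```

Run on a daily schedule, this keeps a week of restorable point-in-time copies; real backup software adds incremental copies, cataloguing and verification on top of the same basic cycle.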
Backup may be required in the following scenarios:
- Logical corruption – Data can become corrupted through application software bugs, storage software bugs or hardware failure, such as a server crash.
- User error – An end user may delete a file or directory, a set of emails or even records from an application and subsequently need the data again.
- Hardware failure – Failure scenarios can include hard disk drive (HDD) or flash drive failure (multiple failures can cause data loss even when RAID is used), server failure or storage array failure.
- Hardware loss – Possibly the worst scenario is an event such as fire that renders hardware inoperable and permanently unrecoverable.
Backup service levels
Recovery point objective (RPO) defines the historical point in time to which data must be restored, in other words, how much recent data can be lost. Some applications require an RPO of zero, but a period of minutes or hours is acceptable in many other scenarios. Occasionally, it is necessary to restore data from much further back in time, perhaps days or weeks previously.
Recovery time objective (RTO) defines the amount of time allowed to restore backed-up data to the primary system. This could be measured in minutes or hours, but is typically required to be as short as possible.
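The RPO arithmetic is simple: the data lost in a failure is the gap between the last good backup and the moment of failure, and that gap must not exceed the objective. A brief sketch (the function name `meets_rpo` is illustrative):

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, failure: datetime, rpo: timedelta) -> bool:
    """The achieved RPO is the window of updates lost between the last
    good backup and the failure; it must not exceed the objective."""
    return failure - last_backup <= rpo

# A nightly backup measured against a 24-hour RPO:
last = datetime(2024, 5, 1, 2, 0)    # backup completed at 02:00
fail = datetime(2024, 5, 1, 18, 30)  # failure at 18:30 -> 16.5 hours of updates lost
within_objective = meets_rpo(last, fail, timedelta(hours=24))
```

The same failure would breach a 12-hour RPO, which is why backup frequency must be driven by the RPO, not the other way around.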
Backup vs replication
Remote data replication is sometimes assumed to be equivalent to backup, but this is not the case.
Replication solutions can be either synchronous or asynchronous, meaning transfer of data to a remote copy is achieved either immediately or with a short time delay. Both methods create a secondary copy of data identical to the primary copy, with synchronous solutions achieving this in real time.
This means that any data corruption or user file deletion is immediately (or very quickly) replicated to the secondary copy, therefore making it ineffective as a backup method.
Another point to remember with replication is that only one copy of the data is kept at the secondary location. This means that the replicated copy doesn’t include historical versions of data from preceding days, weeks and months, unlike a backup.
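A toy model makes the distinction obvious: because synchronous replication mirrors every change immediately, a deletion reaches the replica too, while an earlier point-in-time backup is unaffected. The dictionaries below stand in for primary storage, its replica and a backup copy.

```python
# Minimal model: the replica tracks the primary in real time,
# the backup is a point-in-time copy taken before the incident.
primary = {"report.doc": "v1"}
replica = dict(primary)   # synchronous copy, identical to primary
backup = dict(primary)    # point-in-time copy taken earlier

def delete(key):
    """Every change to the primary, including deletions, is
    replicated immediately to the secondary copy."""
    del primary[key]
    del replica[key]

delete("report.doc")
assert "report.doc" not in replica   # the replica cannot restore the file
assert backup["report.doc"] == "v1"  # the backup still can
```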
Snapshots supplement backup
A snapshot is a point-in-time copy of data created from a set of markers, or pointers, to stored data, and is effectively a form of backup. Snapshots offer a variety of approaches that can supplement backup and provide rapidly accessible copies to which it is possible to roll back.
So what are the key snapshot variants? They include:
- Copy-on-write snapshot – Most snapshot implementations use a technique called copy-on-write, which creates an initial snapshot almost instantly and then preserves the original version of each block of data as it is subsequently changed. Restoration to a specific point in time is possible as long as all iterations of the data have been kept. For that reason, snapshots can protect against data corruption, unlike replication.
- Clone/split-mirror snapshot – Another common snapshot variant is the clone, or split-mirror, in which a complete copy of a mirrored set of drives, a file system or a LUN is made every time a snapshot is taken. Clones take longer to create than copy-on-write snapshots because all data is physically copied at creation time. There is also the risk of some impact on production performance, because the copy process has to access the primary data at the same time as the host.
- Continuous data protection (CDP) – CDP is a method of snapshotting that tracks and stores all updates to data as they occur. Theoretically, this means CDP solutions can roll back to any point in time, down to the smallest granularity of update. But there is a price to pay with CDP in terms of the cost of storage needed to keep every changed block copy and the performance impact of storing the data. As a result, some vendors implement what they call near-CDP, taking snapshots of changed data at set times and consolidating changes over a longer time period. This means heavily updated data doesn’t overwhelm the capacity of the CDP system. In virtual environments, APIs such as vSphere’s VADP enable CDP solutions to be implemented by third-party software vendors.
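The copy-on-write mechanism described above can be sketched in a few lines. This is a simplified model, not any vendor's implementation: taking a snapshot costs almost nothing up front, and the original value of a block is copied aside only the first time it is overwritten after the snapshot.

```python
class CopyOnWriteVolume:
    """Toy copy-on-write snapshot: a snapshot stores no data when taken;
    a block's original value is preserved only when first overwritten."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []  # each snapshot: {block_index: preserved_value}

    def snapshot(self):
        # Near-instant: just an empty map of preserved blocks.
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, index, value):
        # Preserve the pre-change block once per existing snapshot.
        for snap in self.snapshots:
            if index not in snap:
                snap[index] = self.blocks[index]
        self.blocks[index] = value

    def read_snapshot(self, snap_id):
        # A snapshot view: preserved blocks where changed, live blocks elsewhere.
        snap = self.snapshots[snap_id]
        return [snap.get(i, block) for i, block in enumerate(self.blocks)]
```

For example, snapshotting a volume of blocks `["a", "b", "c"]` and then overwriting block 1 leaves the live volume changed while the snapshot still reads back the original contents, which is exactly the rollback property the article describes.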
A word about live migration
The live migration functionality that comes with virtualisation hypervisor platforms is undoubtedly useful. It allows users to move virtual machines, and the data they store, between physical locations without disruption.
But despite the claims of some in the IT industry, it isn’t really a backup solution, as the primary data is simply moved to another location. The ability to transparently move a VM to another location provides for some degree of disaster recovery, although the actual process of migrating really needs to happen before disaster strikes, making it more of a disaster-avoidance solution.
Best practice – combine backup and other methods
A good data protection strategy combines a number of the methods described above.
Short-term snapshots are great for dealing with user errors and some data corruption scenarios. More importantly, they are very fast (data can be reverted or restored in seconds), usually very space-efficient and in many cases, restores can be performed by the user, taking the workload off the backup administrator.
CDP takes things a step further with more flexible recovery scenarios that trade off backup capacity and performance against restore granularity. This means CDP is great in environments where granular rollback is required and where that feature isn’t provided through the application or database.
Finally, traditional backup offers a solid backstop should a major hardware or site disaster occur. Although traditional backups don’t necessarily provide the flexibility or efficiency of other methods, they offer a better long-term solution for data retention, especially where backup policies dictate multiple backup copies in geographically dispersed locations.
An efficient data protection strategy will make use of a combination of all of these solutions, applying them to different classes of data as necessary.