Backup testing: The why, what, when and how
We look at backup testing – why you should do it, what to test, when to test and how – and at the ways forthcoming developments in backup software will make it easier
Pretty much everyone knows why backups are important. More so than ever in an age of heightened ransomware threats, it’s critical to have a clean copy of data to roll back to in case of emergency.
However, you shouldn’t be waiting until an emergency to find out if recovery from backups will work. In other words, you need backups you can rely on – and the only way to ensure that is by testing.
In this article, we look at the key elements of backup testing, including what to test, when to test and how, and the management framework you require around backup testing.
Why backup testing?
Every organisation has data it cannot afford to lose. But organisations differ – as do the datasets within them – in how much data they can afford to lose and for how long it can be unavailable.
Those concepts are boiled down to the idea of RPO and RTO. RPO, or recovery point objective, relates to how much data you can afford to lose, as measured back to the last good copy. RTO, or recovery time objective, is how long it takes to restore that data.
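As an illustration, the outcome of a restore test can be scored against these two objectives. This is a minimal sketch – the function name and the figures are hypothetical, not taken from any particular backup product:

```python
from datetime import datetime, timedelta

def meets_objectives(last_good_backup: datetime,
                     failure_time: datetime,
                     restore_finished: datetime,
                     rpo: timedelta,
                     rto: timedelta) -> dict:
    """Score a restore test against RPO/RTO targets.

    data_lost: the window between the last good copy and the failure,
    which must fit within the RPO.
    downtime: how long the restore took, which must fit within the RTO.
    """
    data_lost = failure_time - last_good_backup
    downtime = restore_finished - failure_time
    return {
        "rpo_met": data_lost <= rpo,
        "rto_met": downtime <= rto,
        "data_lost": data_lost,
        "downtime": downtime,
    }

# Hypothetical scenario: nightly backup at 02:00, failure mid-afternoon
result = meets_objectives(
    last_good_backup=datetime(2024, 1, 1, 2, 0),
    failure_time=datetime(2024, 1, 1, 14, 30),
    restore_finished=datetime(2024, 1, 1, 16, 0),
    rpo=timedelta(hours=24),
    rto=timedelta(hours=2),
)
print(result["rpo_met"], result["rto_met"])  # True True
```

A test run that restores the data but misses either figure still counts as a failure against the organisation's requirements.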
How stringent those objectives are for your organisation – as measured by the effect on reputation, potential commercial loss, compliance and the threat of fines, and so on – will be a good guide to what is important in backup testing.
That’s because backups are worthless unless you can restore to the requirements of your organisation.
What to test
The short answer to this question is the ability to recover. But there will be many different datasets with varying levels of criticality in terms of RPO and RTO.
For example, data held by an organisation can range from long-term archives to transactional production data. One can sit there for years not being accessed, while the other is in use right now and losing seconds of it could cost big money.
What’s needed before anything else is a backup audit, which begins with an audit of all the organisation’s data and applications, its location (including all sites and cloud hosting), and its importance as measured in RPO and RTO terms.
Further to this, the audit should cover how these datasets are protected in terms of backup, but also other methods such as snapshots, which are a way of protecting data since the last backup.
Finally, the backup audit should be updated on a regular basis to take account of new application deployments, with backup requirements for applications built into the development process.
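One way to keep such an audit usable is to hold it as structured data, so it can be sorted and queried when planning tests. A minimal sketch in Python – the record fields and the example datasets are purely illustrative:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DatasetRecord:
    """One row of a hypothetical backup audit."""
    name: str
    location: str    # site, array or cloud service holding the data
    protection: str  # backup method, plus snapshots and so on
    rpo: timedelta
    rto: timedelta

audit = [
    DatasetRecord("orders-db", "dc1/prod-array",
                  "nightly backup + hourly snapshots",
                  rpo=timedelta(hours=1), rto=timedelta(hours=2)),
    DatasetRecord("hr-archive", "cloud/object-store",
                  "monthly backup",
                  rpo=timedelta(days=30), rto=timedelta(days=2)),
]

# Sort so the most critical datasets (tightest RPO) come first
by_criticality = sorted(audit, key=lambda r: r.rpo)
for rec in by_criticality:
    print(rec.name, rec.rpo)
```

Holding the audit this way also makes it easier to update as new applications are deployed, rather than letting a static document go stale.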
As can be seen, the “what” to back up question is potentially huge and constantly changing, but essential to get to grips with so you can assuredly restore data when required. Some suppliers – such as Veritas – are working on so-called autonomic backup, whereby this job is taken on by the backup software itself, but this is not widely productised yet.
How to test backups
The aim of all testing is to ensure you can recover data. Those recoveries might be of individual files, volumes, particular datasets – associated with an application, for example – or even an entire site, or several.
So, testing has to happen at differing levels of granularity to be effective. That means the differing levels of file, volume, site, and so on, as above. But it also means by workload and system type, such as archive, database, application, virtual machine or discrete systems.
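At file level, the most basic check is that restored files match the originals byte for byte. A minimal sketch of such a comparison, assuming a backup has been restored to a separate directory – the helper names are hypothetical and not tied to any backup product:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list:
    """Return relative paths that are missing or differ after a restore."""
    failures = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256_of(src) != sha256_of(restored):
            failures.append(str(rel))
    return failures
```

Higher levels of granularity – an application or a whole site – need more than checksums, such as bringing the restored system up and exercising it, but the principle of comparing against a known-good state is the same.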
At the same time, the backup landscape in an organisation is subject to constant change, as new applications are brought online, and as the location of data changes. This is more the case than ever with the use of the cloud, as applications are developed in increasingly rapid cycles, and by novel methods of deployment such as containers.
When to test backups
All this means testing must be as comprehensive as possible, while recognising that testing at some levels – the entire organisation, for example – will be impractical on a very frequent basis.
So, it’s likely that testing will take place at different levels of the organisation on a schedule that balances practicality with necessity and importance. Meanwhile, that testing must consider the constantly changing backup landscape.
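Such a schedule can be as simple as a mapping from test level to frequency, checked against the date each level was last tested. An illustrative sketch only – the levels and frequencies here are invented and should come from your own audit:

```python
from datetime import date, timedelta

# Hypothetical schedule balancing practicality with criticality:
# frequent file-level tests, rare full-site failovers
test_schedule = {
    "file-level restore": timedelta(days=7),
    "application/VM recovery": timedelta(days=30),
    "full site failover": timedelta(days=365),
}

def is_due(level: str, last_tested: date, today: date) -> bool:
    """True if this level of test is overdue under the schedule."""
    return today - last_tested >= test_schedule[level]

print(is_due("file-level restore", date(2024, 1, 1), date(2024, 1, 10)))  # True
```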
As mentioned above, that could mean backup testing is built into the application development and deployment process, so that as applications are tested, the ability to recover their data is tested too.
Document the backup testing plan
When all facets of backup testing – what must be tested, when and how – are taken into account, the results need to be recorded and planned for, so your backup testing plan should be documented.
It should record:
- Regular audits of your key systems and applications, with their RPOs and RTOs stated.
- Backup systems in place.
- A testing schedule applicable to all levels of potential recovery, from file level to site-wide.
- The ways in which these elements will be tested and recovery targets to be achieved.
- When backup testing documentation will be updated and what will trigger updates.
Read more about backup
- Backup maintenance – make sure backups can deliver: Deploying backup but overlooking the need to make sure it works is a common error. We look at the why and how of backup maintenance to help ensure you can recover from your backups.
- Backup failure – four key areas where backups go wrong: We look at the key ways that backups can fail – via software issues, hardware problems, trouble in the infrastructure and good old human error – and suggest ways to mitigate them.