
Backup maintenance: Make sure backups can deliver

Deploying backup but overlooking the need to make sure it works is a common error. We look at the why and how of backup maintenance to help ensure you can recover from your backups

Backup is essential to ensure a business can survive all manner of incidents and mishaps, ranging from the minor – such as a deleted file or corrupted application – to full-scale disasters such as a fire, flood or ransomware attack.

But backup or disaster recovery plans can never be effective if backups are not checked or maintained.

Why maintain backups?

Backup maintenance comprises two key elements: ensuring the technical integrity of backup files, and ensuring that the right data is backed up at the right time.

Organisations sometimes fall into the trap of setting a backup policy, deploying backup tools and then forgetting about them. This is risky: organisations need to test that they can recover files, and restore data and applications.

To recover applications might mean a bare-metal restore to new hardware – after a fire or flood, for example – or recovery of a virtual machine after a systems failure.

Testing could comprise making sure backup files are accessible, not corrupt, and can be restored to the correct systems within the required timescale.
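As a minimal sketch of that first check, a script can confirm that each backup file still exists and matches the checksum recorded when it was written. The catalogue path and format below are hypothetical, and real backup suites offer their own verification commands:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical catalogue: maps each backup file name to the SHA-256
# hash recorded when the backup was written.
CATALOGUE = Path("/var/backups/catalogue.json")

def sha256(path: Path) -> str:
    """Stream the file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backups() -> list[str]:
    """Return a list of problems found; an empty list means all files pass."""
    problems = []
    for name, expected in json.loads(CATALOGUE.read_text()).items():
        path = CATALOGUE.parent / name
        if not path.exists():
            problems.append(f"missing: {name}")
        elif sha256(path) != expected:
            problems.append(f"corrupt: {name}")
    return problems

if __name__ == "__main__":
    for problem in verify_backups():
        print(problem)
```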

Recovery testing is especially important with tape and other physical backup media because where media is stored offsite – recommended as an anti-ransomware measure – there is the added risk of physical damage or deterioration. But logical backups to on-premises arrays and to the cloud also need to be tested.

As Tony Lock at analyst house Freeform Dynamics points out, backup software needs updating and patching. Bugs can corrupt backups or create security flaws, but software updates also need to be tested to ensure they work as planned.

Backup, and backup right

CIOs also need to ensure backups work at the business level. Backup policies and technologies need to reflect what is happening in the business, and backup plans need to be kept under review. This includes adapting to the use of new applications, growing datasets, and changes to recovery time and recovery point objectives (RTOs and RPOs).

“The big issue here is that the needs of the business can change rapidly, making the data protection policies and tools set yesterday for any data or application not a good fit for today,” says Tony Lock.

These requirements will usually be driven by the business through a need to reduce downtime and protect data integrity. But they can also be the result of external factors, such as regulatory or shareholder requirements. Firms also need to guard against external threats to their data, from natural disasters to the possibility of cyber attack, especially ransomware.

As the business changes, these risks will change too. This could be because the company operates in territories more prone to cyber crime, in verticals such as IT or finance that are common targets, or in critical national infrastructure.

Organisations need to look at how quickly they can back up their data and how quickly they need to recover it. Enterprises, especially in regulated industries, face ever-shorter backup and recovery windows, and customers and stakeholders have become less tolerant of downtime.

A key test of any backup maintenance plan is whether there is enough time to back up files, and then to recover in line with the RTO in the business continuity plan. Changes such as greater data volume can derail this.
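The underlying arithmetic is simple enough to sanity-check in a few lines. The figures below are purely illustrative assumptions, not recommendations:

```python
# Illustrative check that data growth still fits the backup window and RTO.
data_tb = 40                     # data to protect, in terabytes (assumed)
backup_gbps = 0.8                # effective backup throughput, GB/s (assumed)
restore_gbps = 0.5               # effective restore throughput, GB/s (assumed)
window_hours = 8                 # nightly backup window
rto_hours = 12                   # recovery time objective

backup_hours = data_tb * 1024 / backup_gbps / 3600
restore_hours = data_tb * 1024 / restore_gbps / 3600

print(f"Backup takes {backup_hours:.1f}h against a {window_hours}h window")
print(f"Restore takes {restore_hours:.1f}h against a {rto_hours}h RTO")
if backup_hours > window_hours or restore_hours > rto_hours:
    print("Plan no longer fits: revisit throughput, tiering or replication")
```

With these assumed figures, the backup run takes roughly 14 hours against an eight-hour window, which is exactly the kind of drift that data growth causes and that regular review should catch.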

If organisations fail to do these tests, recovery can fail. To counter this, business continuity plans often now include data replication, redundant mirrored systems, and backup to the cloud as ways to minimise downtime. In some cases, organisations might opt to recover applications and data to a cloud environment, even if only temporarily.

Technical backup maintenance

At the technical level, IT teams need to maintain and patch backup software, test backups regularly for integrity, and check that they can be restored.

According to Stephen Young, a director at AssureStor – which provides disaster recovery and backup technologies to IT resellers – this means verifying that backup sets protect the correct data in line with the correct retention policies. Backups then need to be monitored, and restores need to be tested. He also advocates “time tests” to check how long backup and recovery takes in real-world conditions.
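A monitoring script can automate part of that verification. The sketch below assumes hypothetical backup-set records, as might be pulled from a backup tool’s API or database, and flags sets whose last run is stale or whose retained history falls short of policy:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical backup-set records; a real script would pull these
# from the backup tool's API or database.
backup_sets = [
    {"name": "finance-db", "last_success": "2024-05-01T01:00:00+00:00",
     "retention_days": 35, "oldest_copy_days": 36},
    {"name": "file-share", "last_success": "2024-04-20T01:00:00+00:00",
     "retention_days": 90, "oldest_copy_days": 40},
]

def audit(sets, max_age_hours=24, now=None):
    """Flag sets whose last run is stale or whose retained history
    is shorter than the policy demands."""
    now = now or datetime.now(timezone.utc)
    for s in sets:
        age = now - datetime.fromisoformat(s["last_success"])
        if age > timedelta(hours=max_age_hours):
            print(f"{s['name']}: last success {age.days} days ago")
        if s["oldest_copy_days"] < s["retention_days"]:
            print(f"{s['name']}: history shorter than "
                  f"{s['retention_days']}-day retention policy")

# A fixed "now" keeps the example's output deterministic.
audit(backup_sets, now=datetime.fromisoformat("2024-05-01T09:00:00+00:00"))
```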

Organisations should also monitor their logs to identify issues with backups and potential failures. They should set a media refresh policy, including for hard drives, and make sure staff are trained in how backup software works and how to recover data.
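Log monitoring lends itself to the same treatment. Assuming a hypothetical log format in which each job reports a status line, a scan for failed jobs might look like this:

```python
import re
from pathlib import Path

# Hypothetical log line format:
# "2024-05-01 01:05:12 JOB=nightly-fileshare STATUS=FAILED ..."
LOG = Path("/var/log/backup/backup.log")
PATTERN = re.compile(r"JOB=(?P<job>\S+)\s+STATUS=(?P<status>\S+)")

def failed_jobs(log_path: Path) -> set[str]:
    """Scan the log and return the names of jobs that reported a failure."""
    failures = set()
    for line in log_path.read_text().splitlines():
        match = PATTERN.search(line)
        if match and match["status"] in {"FAILED", "PARTIAL"}:
            failures.add(match["job"])
    return failures

if __name__ == "__main__":
    for job in sorted(failed_jobs(LOG)):
        print(f"Investigate: {job}")
```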

Firms should also consider how the cloud affects availability and integrity of data, as well as how the cloud provides additional backup options.

Cloud service providers and software-as-a-service (SaaS) vendors provide a level of resilience and a service-level guarantee as part of their package. However, organisations need to be aware that while the cloud service will protect its own operations, that is not the same as backup for customer data.

This typically falls to the customer to arrange, either as an added-cost service or through a third party. Backing up cloud application data could mean copying data to on-premises file stores or to another cloud (cloud-to-cloud backup).
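For object storage, that copy can be scripted with the provider’s SDK. The sketch below uses boto3, the AWS SDK for Python, with hypothetical bucket names; the same pattern works for any S3-compatible store:

```python
import boto3

# Hypothetical buckets: exported SaaS data in one, the backup copy
# in another (which could sit in a different cloud or on-premises
# S3-compatible store).
SOURCE_BUCKET = "saas-export-bucket"
TARGET_BUCKET = "backup-copy-bucket"

s3 = boto3.client("s3")

def copy_bucket(source: str, target: str, prefix: str = "") -> int:
    """Copy every object under a prefix to the backup bucket; returns count."""
    copied = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.copy({"Bucket": source, "Key": obj["Key"]}, target, obj["Key"])
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {copy_bucket(SOURCE_BUCKET, TARGET_BUCKET)} objects")
```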

“Putting your data in the cloud doesn’t make it somebody else’s problem. Accountability remains that of the application owner,” says Patrick Smith, EMEA field CTO at storage company Pure.

Organisations need to include cloud-based data, including data generated by SaaS applications, in their backup plan. As with on-premises backups, they need to ensure those copies are workable backups that can be restored.

Backup tools and automation

Fortunately, enterprise backup tools now include functionality to manage and maintain backup copies. Backup software is increasingly vendor- and platform-agnostic, and able to handle multiple types of storage device and cloud backup targets.

IT teams can either adopt one primary backup tool to monitor all their devices and backup services, or use each system’s logs to monitor performance and spot potential failures.

Organisations could, for example, build dashboards to monitor all their backup systems and scripts to test that they work and that restore processes are effective. And they can tie that into centralised monitoring.
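Such a restore test can be as simple as recovering one known file to a scratch directory and comparing checksums. The sample path and the backup-tool command below are hypothetical placeholders for whatever CLI the organisation’s backup software provides:

```python
import hashlib
import subprocess
import tempfile
from pathlib import Path

# Hypothetical sample file; substitute a known file covered by backups.
SOURCE = Path("/data/shared/reference-document.pdf")

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test() -> bool:
    """Restore a sample file to a scratch directory and verify it matches."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / SOURCE.name
        # Placeholder: invoke your backup tool's own restore command here.
        subprocess.run(
            ["backup-tool", "restore", str(SOURCE), "--to", str(restored)],
            check=True,
        )
        return sha256(restored) == sha256(SOURCE)

if __name__ == "__main__":
    print("restore OK" if restore_test() else "restore MISMATCH")
```

Run on a schedule and wired into centralised monitoring, a script like this turns restore testing from an occasional exercise into a routine check.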

But these steps support, rather than replace, human oversight of backups.

“It is ‘process, process, process’,” says analyst Tony Lock. “Put in place processes to ensure you maintain an overview of what’s actually going on in your data protection systems and ensure they fit the needs of the organisation as options and business requirements change.”

In addition, organisations should make use of new features in their backup tools. Features such as greater automation can save significant time, but they will only bring benefits if backup plans are updated to reflect the software’s new capabilities.

Automation will increasingly play a part in maintaining backups. A number of vendors are working on software that not only manages backups, but also discovers new applications and datasets that need to be included.

In the future, backup software may be smart enough to work out which tiers of storage to use based on access frequency, and to apply techniques that reduce data volumes, cut costs and potentially speed up recovery times.

Automation is likely to lead to increasing use of cloud-based backup providers and backup as a service. Cloud services can scale up or down as applications change and provide redundancy for critical data, but CIOs and IT directors still need to ensure they regularly check backups and test that they work.
