Tape management will be easier, and your manager will be happy, as they won't have to fork out for tens of tapes per month and the off-site storage costs will naturally fall. The tape operators are also happy at the thought of ejecting a fifth of the tapes, which will leave more time to enjoy that freshly brewed cup of tea.
So everyone's happy – utopia.
Management have bought into the idea, the software and licences have arrived on site, and on Tuesday a consultant is going to arrive to install and configure the solution. A week later, the archiving is working great: all of the files and emails earmarked for archiving have been moved to the new target over the course of the week, and from now on archiving will run only once a week. The server and storage guys are rubbing their hands with delight.
On the other side of the partition, the backup administrators are scratching their heads in confusion, as something is defying logic – the file servers are now taking longer to back up. There is less data to back up, yet it's taking longer. They look at the backup throughputs for the file servers and can see that throughput has dropped since the archiving was implemented. The cause? The thousands of files in the order of tens of megabytes (MB) have been replaced by stub files in the order of kilobytes (KB) – and we know well how crippling small files are to backup throughput.
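The throughput collapse can be shown with some back-of-the-envelope arithmetic. The sketch below is illustrative only – the device speed, file sizes and per-file overhead are assumed figures, not measurements from this scenario:

```python
def effective_throughput(avg_file_bytes, stream_bps, per_file_overhead_s):
    """Effective throughput once the fixed per-file costs (file open/close,
    catalog updates, tape marks) are accounted for alongside raw transfer."""
    time_per_file = avg_file_bytes / stream_bps + per_file_overhead_s
    return avg_file_bytes / time_per_file

STREAM = 50 * 1024**2   # assumed raw device speed: 50 MB/s
OVERHEAD = 0.05         # assumed fixed cost per file, in seconds

# Before archiving: files averaging 20 MB. After: 4 KB stubs.
before = effective_throughput(20 * 1024**2, STREAM, OVERHEAD)
after = effective_throughput(4 * 1024, STREAM, OVERHEAD)

print(f"before: {before / 1024**2:.1f} MB/s, after: {after / 1024:.1f} KB/s")
```

With these assumed numbers, the per-file overhead barely dents a 20 MB transfer, but it dwarfs the transfer time of a 4 KB stub, so the effective throughput drops by several orders of magnitude even though the hardware is unchanged.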
The backup administrators are also confused as to how a full backup, including the archive repository (where the archived files reside), actually requires more tape than it did prior to archiving. The answer? In effect there may now be double the number of files to back up, as each archived file now comprises a stub file (KB) within the original file system, along with the actual data file on the target repository disk (MB or GB).
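The tape arithmetic follows directly from that doubling. A minimal sketch, again using assumed file counts and sizes rather than figures from the scenario:

```python
# Illustrative assumptions: 100,000 files of 20 MB each were archived,
# leaving a 4 KB stub per file on the file server.
files_archived = 100_000
stub_bytes = 4 * 1024        # stub left behind in the original file system
data_bytes = 20 * 1024**2    # data file now sitting in the archive repository

# A full backup now sweeps both the file server and the repository.
objects_before = files_archived          # one object per file
objects_after = files_archived * 2       # stub + repository copy per file

bytes_before = files_archived * data_bytes
bytes_after = files_archived * (stub_bytes + data_bytes)

print(f"extra objects: {objects_after - objects_before:,}")
print(f"extra bytes:   {bytes_after - bytes_before:,}")
```

Under these assumptions the full backup carries twice as many objects and slightly more total data than before archiving, which is exactly why the tape count goes up rather than down.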
The backup administrators are also tearing out their hair because, in the process of implementing the archiving solution, someone overlooked the requirement to back up the archive database, which is the crucial link between the stub files and the actual data files. The loss of the archive database would render data retrieval impossible. Remember that the database and repository backups have to be consistent with each other.
Roll on a month and the backup, server and storage administrators are on site to perform a DR test. The backup admins have 4 hours to bring the backup environment online in readiness for server and application data restores. After a frantic 4 hours the backup environment is online and servers have been built, ready for restores. But the DR records have not been updated: the archiving/database server has not been built, and the binaries are not on site. Copies of the archiving binaries and licences arrive on site, and the archiving server is ready for its database restore by the start of day 2.
Luckily, this was only a DR test, and it gave the company's business continuity team visibility into the changes required to the DR plans for the recovery of its archived IT environment.
I once attended an archiving training course where, upon asking about DR scenarios, I was met with blank looks – purely because no customer who had implemented the archiving product had yet faced a DR situation, and the vendor had therefore never addressed the eventuality.
In my opinion, backups are like those sliding tile puzzles with one free tile: what you fix by moving a tile to one location (be it a neat solution for server and storage requirements) may affect the puzzle somewhere else (a new challenge for backup requirements).
About the author: Hywel Matthews is a senior consultant at Glasshouse Technologies (UK), a global provider of IT infrastructure services, with over 12 years' experience in the IT industry and 9 years' experience in backup, recovery, disaster recovery (DR), systems and storage.