By Antony Adshead, UK Bureau Chief
Data deduplication is the removal of duplicate data blocks and their replacement with a pointer to the first iteration of that block. Because every block must be fingerprinted and checked against those already seen, data deduplication's key challenge is its processing overhead, especially in the case of large data sets.
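The block-and-pointer idea can be illustrated with a minimal sketch. The function names and the use of a SHA-256 hash as the "pointer" are illustrative assumptions, not any particular product's implementation; real systems use sophisticated indexing, but the principle is the same.

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks; store each unique block once
    and record a pointer (here, the block's hash) for every occurrence."""
    store = {}      # hash -> block contents, stored only once
    pointers = []   # one pointer per logical block in the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block   # first iteration of this block
        pointers.append(digest)     # later iterations become pointers only
    return store, pointers

def rehydrate(store, pointers):
    """Reassemble the original data by following the pointers."""
    return b"".join(store[d] for d in pointers)
```

Three 4KB blocks of which two are identical would yield a store holding only two blocks plus three pointers, which is where the space saving comes from.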
Inline deduplication and post-processing deduplication are defined by when they process data to remove duplicated elements. Let's look at how each one works and what scenarios they are best suited to.
Inline deduplication requires less disk space
Inline deduplication looks for duplicate blocks of data as the data is ingested to the target device.
This method of data deduplication requires less disk space than post-process deduplication because duplicate data is removed as it enters the system. The drawback is that deduplication processing at this point creates a bottleneck that can affect the length of the backup window.
But inline processing can also bring advantages. If, for example, your business regularly backs up large quantities of data containing many duplicate blocks that don't change, inline deduplication could be the best fit.
Inline deduplication products recognise redundant data as it comes in from several different backup data streams and won't forward a block to the target media if it is known to be a duplicate. Performance will usually improve over time as the product builds up its index of the data set it works with, and data reduction ratios will improve toward an optimal maximum.
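The inline write path described above can be sketched as a hypothetical target class (the class and method names are assumptions for illustration): each incoming block is fingerprinted before it is written, so duplicates never reach the target media, and only the unique blocks consume disk.

```python
import hashlib

class InlineDedupTarget:
    """Illustrative backup target that deduplicates inline: blocks are
    checked against the fingerprint index as they are ingested."""

    def __init__(self):
        self.index = set()    # fingerprints of blocks already stored
        self.stored = []      # unique blocks actually written to disk
        self.pointers = []    # logical layout of the backup

    def ingest(self, block: bytes):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.index:   # new block: write it to the target
            self.index.add(digest)
            self.stored.append(block)
        self.pointers.append(digest)   # duplicate: record a pointer only
```

The hash lookup on every ingested block is exactly the in-band work that can throttle the backup stream and lengthen the backup window.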
Post-process deduplication speeds backup
Post-process deduplication also looks for duplicated data blocks and replaces them with a pointer to the first iteration of that block. But unlike inline deduplication, post-process deduplication doesn't begin processing backup data until after it has all arrived at the backup target. So, if you regularly back up large amounts of redundant data, you'll be using resources to pump it all into the backup target unreduced.
That means that a primary requirement of post-process data deduplication is that there's enough disk capacity to store the largest potential backup your business is likely to carry out.
If you want to minimise backup times, post-process deduplication could be your best bet. Because deduplication is carried out only once the data set is on the target disk, backup operations are completely unaffected and the length of the backup window is unchanged.
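By contrast with the inline approach, a post-process target can be sketched as landing every block unreduced and deferring deduplication to a separate pass. Again, the class and method names are illustrative assumptions rather than any vendor's API.

```python
import hashlib

class PostProcessTarget:
    """Illustrative backup target that deduplicates after the backup
    window: data lands in full, then a later pass collapses duplicates."""

    def __init__(self):
        self.raw = []   # staging area: must hold the full, unreduced backup

    def ingest(self, block: bytes):
        self.raw.append(block)   # no deduplication work during the backup

    def post_process(self):
        """Separate stage, run after backup completes: replace duplicate
        blocks with pointers and release the staging space."""
        store, pointers = {}, []
        for block in self.raw:
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)
            pointers.append(digest)
        self.raw = []   # reclaim the capacity used by the raw backup
        return store, pointers
```

Note that `self.raw` must hold the entire backup before `post_process` runs, which is the disk-capacity requirement described above.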
However, if you need to replicate deduped data to an offsite location for disaster recovery (DR), it may work out better to go with inline deduplication instead of sending un-deduplicated data across the wide-area network (WAN) and then dealing with it post-process.
Impact of data deduplication on tape backups
Post-process deduplication can be the better choice when you plan to copy data from disk to tape soon after backup. That copy is easier to make with the un-deduplicated data set to hand, as it is in a post-process deduplication scenario.
It's usually recommended that data not be copied to tape in its deduplicated state because of potential problems that can occur when attempting to restore data sets in which the original iterations of data blocks are scattered across a number of tapes.
It's for this reason also that if your environment contains a mix of disk and tape, post-process deduplication may be better suited to your purposes. There's less disruption to backup and archiving processes from a data deduplication method that carries out its work as a discrete stage rather than incorporating it with backups, as does inline deduplication.
This was first published in March 2010