Bottlenecks occur when the backup throughput required to complete the process within the backup window exceeds the network bandwidth between the data and the backup devices. To overcome a bottleneck, you can either add network connections to the existing backup data servers or increase the number of data mover servers.
The first option is faster and less expensive, provided you have enough network port density and can use a multi-network configuration. The second option is more expensive because of hardware, licensing, and power and cooling costs.
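To see whether a bottleneck exists, compare the throughput the backup window demands against what the link can sustain. A minimal sketch of that arithmetic, with purely illustrative numbers (4 TB of data, an 8-hour window, a link sustaining roughly 110 MB/s):

```python
# Hypothetical sketch: is the backup window achievable over this link?
# All figures are illustrative, not from any particular environment.

def required_mb_per_s(data_gb: float, window_hours: float) -> float:
    """Throughput (MB/s) needed to move data_gb within window_hours."""
    return (data_gb * 1024) / (window_hours * 3600)

def is_bottlenecked(data_gb: float, window_hours: float,
                    link_mb_per_s: float) -> bool:
    """True when the link cannot sustain the required backup rate."""
    return required_mb_per_s(data_gb, window_hours) > link_mb_per_s

# 4 TB in 8 hours needs ~145.6 MB/s -- more than a single
# gigabit-class link (~110 MB/s) can deliver, so it is bottlenecked.
print(required_mb_per_s(4096, 8), is_bottlenecked(4096, 8, 110))
```

If the check fails, the two remedies from above apply: add links (raise `link_mb_per_s`) or split the data across more data movers (lower `data_gb` per mover).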
Data type - small files
Opening each small file individually for backup is time-consuming, and this per-file overhead means backup environments are often unable to sustain the throughput required to complete the backup in the allotted time frame.
The simplest and most cost-effective way to improve throughput is to run more than one backup stream on a server at a time. If this is not possible, you can replace file-level backup with an image-level backup, although you may then find that certain individual files cannot be recovered. Also, where possible, you can use NDMP.
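The multi-stream approach can be sketched as follows. This is a simplified illustration, not a vendor implementation: the "backup" action is a placeholder that just walks a tree and tallies file sizes, where a real job would stream the data to the backup target.

```python
# Sketch of multi-streaming: back up several directory trees
# concurrently instead of one after another. The backup action is a
# placeholder; a real stream would write data to the backup device.
import os
from concurrent.futures import ThreadPoolExecutor

def backup_tree(root: str) -> int:
    """Placeholder 'backup' of one tree; returns total bytes seen."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished mid-walk; skip it
    return total

def backup_parallel(roots: list[str], streams: int = 4) -> int:
    """Run one stream per tree, up to `streams` trees at a time."""
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return sum(pool.map(backup_tree, roots))
```

Because each stream spends much of its time in per-file open/close overhead rather than moving data, running several streams concurrently keeps the network link busier than a single serial pass would.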
If you can neither increase the number of streams nor improve throughput, you may instead be able to reduce the number of files that need backing up. This can be achieved through two methods:
- Incremental forever, where an initial full backup is followed only by incrementals thereafter, minimising the data and time needed for each backup.
- Eliminate duplicate files with data deduplication technology.
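The two reduction methods above can be sketched in a few lines. This is an illustrative file-level model only: incremental selection here is by modification time, and deduplication by a SHA-256 hash of file contents; real products typically deduplicate at the block or chunk level.

```python
# Illustrative sketch of the two file-reduction methods:
# (1) incremental forever -- back up only files changed since the last
#     backup; (2) deduplication -- skip files whose content has
#     already been seen. Real dedupe works on blocks, not whole files.
import hashlib
import os

def changed_since(paths: list[str], last_backup_ts: float) -> list[str]:
    """Incremental selection: files modified after the last backup."""
    return [p for p in paths if os.path.getmtime(p) > last_backup_ts]

def deduplicate(paths: list[str]) -> list[str]:
    """Keep one path per unique content (SHA-256 of the file bytes)."""
    seen: set[str] = set()
    unique = []
    for p in paths:
        with open(p, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique
```

Applied in sequence, the two filters shrink the candidate list twice: first to files that actually changed, then to one copy of each unique content.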
With increasing pressure to manage costs, the goal is to improve backup processes and operations without capital expenditure. If processes and operations can be improved no further, however, there is a compelling case for purchasing more data movers, backup agents or deduplication products.
This was first published in October 2009