Organizations are increasingly unable to meet their recovery time objectives (RTOs), largely because recovery from a physical tape library takes far longer than recovery from a virtual tape library (VTL). In addition, tape is now regarded as unreliable media because of its limited life span; it's not unusual to encounter a failed recovery even after a successful backup. As a result, enterprises are turning to virtual tape libraries as their backup hardware.
A virtual tape library can be tuned to make optimal use of its resources and to achieve high data transfer rates. Some of these VTL tweaks are:
• Most backup vendors support the open tape format, which allows heterogeneous platforms to be backed up simultaneously to a single tape. This feature relies on the backup software's multiplexing capability, so that data from different backup clients and different operating systems can be interleaved on the same tape. With multiplexing enabled, the backed-up data comes from different clients and data types, so the virtual tape library's dedupe ratio suffers badly: far less commonality is encountered among the data blocks being backed up. With VTLs, it is therefore recommended to back up only one data stream at a time, i.e., set the multiplexing value to 1, to achieve high dedupe ratios. Also avoid running disk-intensive applications such as virus scans on the backup clients while a backup is running.
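To see why interleaved streams hurt deduplication, consider a toy simulation (this is not how any particular VTL chunks data; the chunk and slice sizes are illustrative): two clients whose streams each dedupe well on their own lose most of their commonality once multiplexing interleaves their blocks.

```python
import hashlib

def dedupe_ratio(stream, chunk_size=8):
    """Ratio of total chunks to unique chunks for a fixed-size chunker."""
    chunks = [stream[i:i + chunk_size] for i in range(0, len(stream), chunk_size)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return len(chunks) / len(unique)

def multiplex(a, b, slice_size=3):
    """Interleave two streams in small slices, as a multiplexed backup would."""
    out = bytearray()
    for i in range(0, max(len(a), len(b)), slice_size):
        out += a[i:i + slice_size]
        out += b[i:i + slice_size]
    return bytes(out)

# Two clients with highly repetitive (dedupe-friendly) data
client_a = b"ABCDEFGH" * 8
client_b = b"12345678" * 8

sequential = dedupe_ratio(client_a + client_b)            # one stream at a time
interleaved = dedupe_ratio(multiplex(client_a, client_b))  # multiplexed
print(sequential, interleaved)  # the sequential ratio is much higher
```

Backing up the streams one after the other preserves each client's repetition; interleaving shifts chunk boundaries so the deduper sees mostly unique blocks.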
• Backup scheduling also plays an important role, so distribute the backup load evenly: divide the backup sets and schedule them so that their start times differ by at least half an hour.
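As an example, staggered start times can be expressed as ordinary cron entries (the script path and set names below are purely illustrative):

```shell
# Stagger three backup sets so their start times differ by 30 minutes
0  22 * * * /opt/backup/run_backup.sh accounting_set
30 22 * * * /opt/backup/run_backup.sh fileserver_set
0  23 * * * /opt/backup/run_backup.sh database_set
```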
• Having a virtual tape library in place doesn't by itself mean you will get backup throughputs of more than 230 MB/s. Such rates are achievable only if your LAN can deliver data to the VTL that fast, so dedicate a separate Gigabit Ethernet LAN to backups for optimum performance. In large networks with many servers, categorize the servers by criticality; enable SAN-based backups for critical servers and dedicate virtual tape drives to them.
• Use separate VLANs for the production network and the backup network, and ensure that you use only Cat 5e or Cat 6 cabling in these networks.
• Enable persistent binding at the operating system level so that symbolic device names remain consistent across reboots, minimizing backup disruption. This ensures the virtual tape library does not have to be reconfigured in the backup software time and again.
• Connect as many of the virtual tape library's ports to the network as possible to obtain optimal backup throughput.
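A quick back-of-the-envelope check shows why multiple ports matter: a single Gigabit Ethernet link tops out at 125 MB/s at line rate, so throughput in the 230 MB/s range needs at least two dedicated links. (The efficiency factor below is an assumed real-world figure, not a vendor specification.)

```python
def max_throughput_mb_s(link_gbps, n_links, efficiency=0.95):
    """Aggregate throughput of n_links, in MB/s (1 Gbps = 125 MB/s line rate)."""
    return link_gbps * 125 * n_links * efficiency

print(max_throughput_mb_s(1, 1))  # 118.75 MB/s: one GbE link cannot reach 230 MB/s
print(max_throughput_mb_s(1, 2))  # 237.5 MB/s: two links can
```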
• For LAN-based backups, ensure that network settings are consistent end to end. This means that if the switch ports are set to auto-negotiation, all host and virtual tape library port settings should be set to auto-negotiation as well.
• While a backup is running, don't add tapes or make configuration changes to the VTL, as this may hinder backup performance.
• Use the clone feature provided by virtual tape library vendors (Direct to Tape for EMC, Direct Tape Creation for NetApp) to automatically clone virtual tapes to physical tapes.
• Limit initiator connections to 4 on the VTL's target ports.
• Block size also has an impact on virtual tape library performance and deduplication ratios. Virtual tapes with 256 KB, 512 KB, and 1 MB block sizes typically show approximately the same performance, so increasing the tape record size beyond that has little effect. It's therefore best to set the record size used by the backup application to 256 KB.
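One way to sanity-check the "block size barely matters past 256 KB" claim on your own hardware is a small sequential-write benchmark. This is a local-disk sketch, not a measurement of the actual tape path, and the total write size is illustrative:

```python
import os
import tempfile
import time

def write_throughput_mb_s(block_size, total_mb=16):
    """Sequentially write total_mb using block_size writes; return MB/s."""
    data = b"\0" * block_size
    n_blocks = (total_mb * 1024 * 1024) // block_size
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(n_blocks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # include the flush to stable storage in the timing
        elapsed = time.perf_counter() - start
    os.unlink(f.name)
    return (n_blocks * block_size) / (1024 * 1024) / elapsed

for size in (256 * 1024, 512 * 1024, 1024 * 1024):
    print(f"{size // 1024} KB blocks: {write_throughput_mb_s(size):.0f} MB/s")
```

If the three figures come out roughly equal, as the article suggests, there is no benefit in going above a 256 KB record size.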
• Ensure that the VTL's used capacity doesn't exceed 80% of its total capacity. Exceeding this threshold impacts total backup throughput, as well as dedupe ratios on virtual tape libraries that use post-process deduplication.
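A trivial monitoring check for the 80% guideline might look like the following (the function name and threshold default are ours, not from any vendor tool):

```python
def vtl_capacity_ok(used_gb, total_gb, threshold=0.80):
    """Return True if VTL usage is at or below the recommended threshold."""
    if total_gb <= 0:
        raise ValueError("total capacity must be positive")
    return used_gb / total_gb <= threshold

print(vtl_capacity_ok(70, 100))  # True: 70% used, within the 80% guideline
print(vtl_capacity_ok(90, 100))  # False: 90% used; throughput and dedupe may suffer
```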
About the author: Anuj Sharma is an EMC Certified and NetApp accredited professional. Sharma has experience handling implementation projects related to SAN, NAS and BURA. One of his articles was published globally by EMC and named the Best of EMC Networker at last year's EMC World, held in Orlando, US.