Traditionally, disaster recovery meant significant investment in setting up an IT infrastructure that mirrors the primary data center. Today, using virtualization for disaster recovery (DR), you can significantly reduce the requirement for redundant hardware. The time taken to bring up the secondary site and resume ‘business as usual’ is also considerably shorter with virtualization.
Factors such as storage requirements for applications remain the same whether the disaster recovery site is physical or virtualized. This is where the similarities end.
Consideration 1: Recovery time objective (RTO)/recovery point objective (RPO)
Estimating the right RTO and RPO is important when using virtualization for disaster recovery. For example, service downtime of even an hour translates into huge revenue loss at a stock trading company. In such cases, RPO and RTO are key constraints because recovery lead times must be low. With a virtualized disaster recovery site, the RTO/RPO windows can be considerably smaller.
A physical disaster recovery infrastructure requires manual activities such as bringing up servers and reconfiguring DNS. A virtualized server infrastructure, on the other hand, has virtual machines provisioned in advance for each type of server, synchronized with the primary site and ready for use. This makes it possible to bring up the virtualized disaster recovery site with very little lead time; the RTO for a virtual disaster recovery site can be in minutes. This also translates into a smaller RPO.
Consideration 2: Data replication frequency and approach
Data replication frequency also affects RTO/RPO: the greater the frequency, the smaller the RTO/RPO window. A virtualized DR infrastructure makes a higher data replication frequency possible. The replication approach matters as well.
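As a rough illustration of why frequency drives RPO, the worst-case data loss window is bounded by the replication interval plus the time one replication cycle takes. The sketch below uses hypothetical numbers, not figures from any specific product:

```python
# Illustrative sketch: worst-case RPO from replication settings.
# All numbers here are hypothetical examples.

def worst_case_rpo_minutes(interval_min, cycle_min):
    """Data written just after a cycle begins is not protected
    until the *next* cycle completes."""
    return interval_min + cycle_min

# Daily differential backups, each taking ~2 hours to complete:
print(worst_case_rpo_minutes(24 * 60, 120))   # 1560 minutes (~26 hours)

# Virtualized DR replicating only changed data every 15 minutes:
print(worst_case_rpo_minutes(15, 5))          # 20 minutes
```

The point of the comparison is that shrinking the amount of data per cycle (Consideration 2's theme) is what makes the shorter interval feasible in the first place.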
A differential backup approach requires creating a full (primary) backup copy on a non-business day; differential backups are then taken on each following day. In the event of a disaster, the primary backup must be loaded and synchronized with the subsequent snapshots before the DR site can be made live.
A virtual DR site, by contrast, can maintain virtual machine instances that are mirror images of those at the primary site. Only the data (or configuration changes) of the primary servers then needs to be replicated to the virtual instances. Less data to replicate means replication can run more frequently, which considerably brings down lead time.
Consideration 3: Connectivity between primary and DR site
It’s good DR practice to locate the primary and virtual DR sites in two different seismic zones, with a strong connectivity backbone between them. For instance, if an organization has a 1 Mbps pipe between the primary and secondary sites, replicating 100 GB of data will take roughly nine days, which is not a favorable option. The two data centers should therefore be connected using dark fiber with at least 100 Mbps of bandwidth.
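The bandwidth arithmetic is easy to check: link speeds are quoted in bits per second, so the data size must be converted to bits first. The figures below are illustrative, not a sizing recommendation:

```python
# Estimate bulk-transfer time over a replication link.
# Link speed is in bits/s; data size in decimal gigabytes.

def transfer_days(data_gb, link_mbps):
    bits = data_gb * 1e9 * 8           # GB -> bits
    seconds = bits / (link_mbps * 1e6)  # Mbps -> bits/s
    return seconds / 86400              # seconds -> days

print(round(transfer_days(100, 1), 1))    # ~9.3 days over a 1 Mbps pipe
print(round(transfer_days(100, 100), 2))  # ~0.09 days (about 2.2 hours) at 100 Mbps
```

In practice protocol overhead and link contention make the real figures worse, which is why ongoing replication should move only changed data rather than full copies.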
Consideration 4: Number of services to be supported for business continuity
Determine the number of servers to be hosted on the virtualized DR site according to priority. For instance, replicate customer-facing applications first, followed by critical internal applications. The same order should be followed when bringing these services up from the DR site after a disaster. The applications chosen have a direct bearing on achievable RTO/RPO values.
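The priority ordering described above can be captured as simple runbook data so that failover always proceeds in the same sequence. The service names and tiers below are hypothetical examples:

```python
# Hypothetical failover runbook: bring services up in priority order.
services = [
    {"name": "internal-wiki",   "tier": 3},  # low-priority internal tool
    {"name": "customer-portal", "tier": 1},  # customer-facing apps first
    {"name": "payments-api",    "tier": 1},
    {"name": "erp",             "tier": 2},  # critical internal apps next
]

# Python's sort is stable, so services within a tier keep their listed order.
for svc in sorted(services, key=lambda s: s["tier"]):
    print(f"bring up {svc['name']} (tier {svc['tier']})")
```

Keeping this list versioned alongside the DR plan ensures the replication order and the recovery order stay in sync.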
Consideration 5: Authentication mechanism (Active Directory, LDAP, etc.)
In many organizations, Active Directory is tied to a single domain mapped to the primary site. If the DR site maps to the same domain for authentication, part of the primary site must always remain available for authentication, which defeats the purpose of DR. Authentication mechanisms should therefore be decoupled from the primary site.
The ability to replicate authentication to the secondary site is critical. A virtual machine with a snapshot of the Active Directory server should be pre-configured at the virtual DR site and kept synchronized with the primary site at all times.
About the author: Goutham Kumar is a solutions architect for infrastructure management and tech support at MindTree Ltd.
(As told to Harshal Kallyanpur.)