Have you ever faced a server crash because your traditional backup environment couldn't handle the peak load? Increasing workloads degrade input/output (I/O) performance and, with it, application performance. With a certain amount of care, however, a virtual environment can handle peak backup workloads. In this tip you will gain insights into how to build a solid virtual machine (VM) backup infrastructure for your environment.
Server virtualization allows dramatic levels of server consolidation, often in the range of 10:1, replacing the traditional 'silo' model of one application per server. In traditional backup setups you have to put your application servers in hot mode, where the backup runs in the background while the applications are still running. In a virtualized environment, a single server hosts several virtual machines, so the backup of one server can affect several applications at once. Hence, it is important to have a flawless VM backup infrastructure in place at the following tiers.
Storage level infrastructure
To illustrate with an example: a storage failure in a server may now take down ten applications, not just one. A dual-disk failure, or more commonly a media error during a rebuild, means that the datasets of ten applications have to be reloaded instead of one. And since the server now holds ten times the data, the backup window is affected as well. An effective VM backup infrastructure therefore needs a more reliable, robust and faster backup solution in place.
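The backup-window impact of consolidation is easy to quantify. A minimal sketch (the data sizes and the 200 MB/s sustained throughput are illustrative assumptions, not figures from this tip):

```python
def backup_window_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Estimate how long a full backup takes at a sustained throughput."""
    data_mb = data_tb * 1024 * 1024   # TB -> MB
    return data_mb / throughput_mb_s / 3600  # seconds -> hours

# A host that used to hold one application's 1 TB dataset now holds
# ten VMs' datasets -- same backup pipe, ten times the data.
single_app = backup_window_hours(1, 200)
consolidated = backup_window_hours(10, 200)
print(f"{single_app:.1f} h vs {consolidated:.1f} h")  # ~1.5 h vs ~14.6 h
```

The window grows linearly with the data behind the same link, which is why the rest of this tip pushes backup I/O off the host and onto the storage layer.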
Enforce strict service level agreements (SLAs) in a virtual environment, since the unavailability of one asset could affect the performance of others. Define stringent SLAs with all your vendors; the downtime specification for each application should depend on its criticality. For instance, you might permit an hour's downtime for email, but far less for finance applications or SAP modules.
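Per-application downtime budgets fall straight out of the SLA availability targets. A small sketch (the applications and percentage targets are illustrative assumptions):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(availability_pct: float) -> float:
    """Annual downtime budget implied by an SLA availability target."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Tighter targets for more critical applications (illustrative values):
for app, target in [("email", 99.9), ("finance", 99.99), ("SAP", 99.99)]:
    print(f"{app}: {allowed_downtime_hours(target):.2f} h/year")
```

At 99.9% availability an application may be down about 8.76 hours a year; at 99.99% the budget shrinks to under an hour, which is why criticality has to drive the SLA number.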
Meeting those SLAs in turn requires an efficient storage system that can support the increased resource utilization virtualization brings. Ensure that you allocate enough space for each VM, bearing in mind the peak loads and requirements of your users.
To minimize downtime, incorporate flash storage in your VM backup infrastructure. Flash accelerates performance and need not be expensive: instead of investing in a dedicated tier of expensive solid state drives (SSDs), use flash-based caching devices and avoid the complexity of managing another storage tier.
Ensure that snapshots are taken at the storage level, as this offloads many cycles from the host servers. Instead of backing data up directly from the servers, keep your data on a storage device and back it up from there into a storage target such as direct-attached storage (DAS) or tape. This eliminates application performance issues, and your users will be able to carry out glitch-free transactions. VMware and Citrix both offer tools that let you back up snapshots from the storage device to a storage target.
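The snapshot-then-offload flow above can be sketched as follows. The `ArrayClient` class is a hypothetical in-memory stand-in for a storage array's management API; real arrays expose equivalent operations through their own vendor SDKs or CLIs:

```python
class ArrayClient:
    """Toy stand-in for a storage array's management API (hypothetical)."""
    def __init__(self):
        self.snapshots = []

    def create_snapshot(self, volume: str) -> str:
        snap = f"{volume}-snap-{len(self.snapshots)}"
        self.snapshots.append(snap)
        return snap

    def copy_to_target(self, snapshot: str, target: str) -> str:
        # A real array would stream snapshot blocks to DAS or tape here.
        return f"{target}/{snapshot}"

    def delete_snapshot(self, snapshot: str) -> None:
        self.snapshots.remove(snapshot)

def backup_volume(array: ArrayClient, volume: str, target: str) -> str:
    """Snapshot on the array, copy the snapshot to the backup target,
    then drop the snapshot -- the VM host never serves the backup I/O."""
    snap = array.create_snapshot(volume)
    try:
        return array.copy_to_target(snap, target)
    finally:
        array.delete_snapshot(snap)  # keep the snapshot count bounded

print(backup_volume(ArrayClient(), "vm-datastore-01", "das://backup01"))
```

The key design point is that the copy runs array-to-target: the hypervisor only quiesces the VMs long enough for a consistent snapshot, and the heavy I/O never touches the application hosts.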
VMs store redundant data such as operating systems, patches and software applications that are common to every virtual server. A deduplication solution reduces this data to a single instance: even if you have 15 VMs running on a system, only one copy of the OS image needs to be stored. Space savings can reach as much as 90% in highly redundant environments, and even a modest ratio that brings 50 TB of data down to about 30 TB means faster backups and lower storage costs.
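Block-level deduplication boils down to keeping one copy of each unique block, usually identified by a content hash. A minimal sketch (the block contents are illustrative):

```python
import hashlib

def dedup_ratio(blocks: list) -> float:
    """Fraction of space saved by keeping one copy of each unique block."""
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return 1 - len(unique) / len(blocks)

# 15 VMs sharing the same "OS image" block, plus one unique data block
# each -- dedup collapses the 15 identical OS copies into one.
os_block = b"common-os-image"
blocks = [os_block] * 15 + [f"vm-{i}-data".encode() for i in range(15)]
print(f"{dedup_ratio(blocks):.0%} saved")  # -> 47% saved
```

The savings scale with how much of the data is shared: the more identical OS and application blocks the VMs hold, the closer the ratio climbs toward the high figures quoted above.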
Compute and network level infrastructure
Consider a case where you have 10 physical servers, each hosting an average of 15 VMs; you now have 150 VMs running in your environment. For an efficient VM backup infrastructure, separate your application servers and storage from the servers and storage that serve your user base. This ensures that application backups will not interfere with user data during VM backups.
Select storage systems that are built for virtualized environments. Look for snapshot features that integrate with the server virtualization software to take consistent backups without disturbing VMs or the hosted applications. Snapshot backups can later be moved to another storage system, as mentioned earlier.
Take an isolated approach to the network connections as well. Isolate the links that connect application servers and storage, and provide separate links for the user side. A virtualized environment should ideally run on 10 Gigabit Ethernet, with Fibre Channel over Ethernet (FCoE) connections for the storage area network (SAN).
Ensure network redundancy in your virtual environments. For example, you could have multiple paths running from your systems to the storage device. If a network link fails, the other could take over, ensuring seamless connectivity and availability.
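The multipath failover logic described above can be sketched in a few lines. The path functions below are hypothetical placeholders for real I/O paths (in practice the OS multipath driver handles this transparently):

```python
def send(payload: str, paths: list) -> str:
    """Try each path in order; fail over when a link is down."""
    for path in paths:
        try:
            return path(payload)
        except ConnectionError:
            continue  # link failed -- fall through to the next path
    raise ConnectionError("all paths down")

def path_a(payload):  # primary link, currently failed (simulated)
    raise ConnectionError("link down")

def path_b(payload):  # redundant link takes over
    return f"delivered via B: {payload}"

print(send("write block 42", [path_a, path_b]))  # -> delivered via B: write block 42
```

The application never sees the failure of the first link; the request simply completes over the surviving path, which is exactly the availability property the redundant cabling is there to provide.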
About the author: Syed Masroor heads the technology and solutions organization at NetApp India. Armed with over 15 years of experience, he helps develop enterprise-class IT architecture for customers across India. Masroor leads a team of seasoned consultants who work with enterprise customers to design and architect highly scalable storage solutions for their applications. He is an engineer and a regular speaker at events.
(As told to Mitchelle R Jansen.)