Server virtualisation is the de facto method for deploying new applications in the datacentre. Virtual machines have to be stored somewhere in the infrastructure and that has typically been achieved using a mix of external and internal storage hardware.
But the choice of storage products is now greater than ever, with a bewildering array of features and functionality. The right storage choice is dictated by workload requirements and by the need to mitigate the typical storage issues successfully.
The top five problems with storage in virtual server and desktop platforms include the following:
Sprawl – Virtual machines are easy to create compared to the effort that used to be involved in purchasing, racking and commissioning a physical server. The ability to spin up a VM on-demand improves business agility, but is achieved at a price. It is easy for storage resources to become consumed by virtual machines that are orphaned (not associated with a hypervisor) or no longer used (either so-called “zombie” machines that are powered on but doing nothing, or machines that are powered off and forgotten).
Efficiency – Without careful management, virtual machine storage resources can grow uncontrollably. There is always a desire to allocate as much storage capacity to each VM as possible, because this minimises potentially disruptive resizing work later. However, the danger of templated deployments is the over-allocation of resources that are ultimately never used.
Performance – Virtualisation reverses the role of the LUN (logical unit number) compared to physical server deployments. Where each physical server would receive one or more LUNs, virtualisation creates larger LUNs and uses them to store many virtual machines. VM storage workloads at the LUN level are therefore typically random in nature, because it is impossible to predict I/O activity across multiple active virtual machines sharing the same physical storage. This problem is particularly prevalent in VDI (virtual desktop infrastructure) environments, which can see very high peaks in I/O (for example, so-called boot storms).
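This "I/O blender" effect can be illustrated with a short sketch: each VM reads sequentially within its own virtual disk, but because the disks sit at different offsets on one shared LUN, the interleaved request stream the array sees jumps around almost constantly. The VM names, offsets and stride sizes below are purely illustrative.

```python
import random

# Illustrative sketch: three VMs each read sequentially within their own
# virtual disk, but the disks live at different offsets on one shared LUN.
vm_regions = {"vm-a": 0, "vm-b": 1_000_000, "vm-c": 2_000_000}  # start LBA per VM

def interleaved_lun_trace(requests_per_vm=5):
    """Interleave per-VM sequential reads as the shared LUN would see them."""
    per_vm = {
        name: [base + i * 8 for i in range(requests_per_vm)]  # 8-block sequential strides
        for name, base in vm_regions.items()
    }
    trace = []
    for i in range(requests_per_vm):
        # The hypervisor services the active VMs in effect round-robin,
        # so their individually sequential streams interleave on the LUN.
        for name in random.sample(list(per_vm), len(per_vm)):
            trace.append(per_vm[name][i])
    return trace

trace = interleaved_lun_trace()
# Each VM's own stream is sequential, yet the combined trace jumps
# between widely separated LUN addresses on almost every request.
jumps = sum(1 for a, b in zip(trace, trace[1:]) if abs(b - a) > 8)
print(f"{jumps} of {len(trace) - 1} consecutive requests are non-sequential")
```

Running the sketch shows that the vast majority of back-to-back requests at the LUN are non-sequential, even though every guest workload is sequential in isolation.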
Cost – The perceived cost (per Gbyte) of storage has been in constant decline for many years. However, this only really applies to disk-based systems, especially those using large-capacity drives. Flash storage is certainly not cheap in comparison to disk, although we are seeing hybrid systems pushing towards the $1/Gbyte mark. Cost is an important factor in virtual machine deployment and choosing the right storage with the appropriate cost/performance profile is essential.
Data protection – Virtual machines need to be protected, but traditional methods of backup/restore don’t meet the needs of virtual environments. The hardware consolidation that underpins the cost savings of virtualisation means that deploying a backup agent onto every VM is simply not a practical solution. The difficulty is in taking backups that are both VM- and application-consistent without affecting VM performance or availability, while still providing granular access to file or application data.
So, those are the major issues in storage and virtualisation – but how can we address them? There is no “magic bullet” solution that addresses each problem; rather, each is solved by putting good practice in place and using a range of hardware and software solutions.
Sprawl – Solving the problem of orphan and zombie/unused VMs comes from implementing good practices around the tracking and management of virtual machines. Orphan VMs can be identified and tracked using scripts that extract VM lists and compare these with the VM file structures on disk. For VMware ESXi, this means looking at VMX and VMDK files and for Microsoft Hyper-V, this means VHD and VHDX files as well as the XML files that define a virtual machine.
With sensible naming standards and ownership details, zombie/inactive VMs can be traced back to the owner for verification of whether the VMs are still needed. Both vSphere and Hyper-V provide PowerShell (and other) toolkits for easy scripting; both platforms provide the ability to add description details, such as ownership information, to VMs.
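A minimal sketch of the comparison described above, assuming the registered-VM inventory and the datastore folder listing have already been exported (for example with the platform's PowerShell toolkit and a datastore browse). All VM names and file names here are invented for illustration:

```python
# Sketch of orphan-VM detection: compare the hypervisor's registered-VM
# inventory against the VM folders actually present on the datastore.
# Both lists are assumed to have been exported beforehand; the names
# below are made up.

registered_vms = {"web01", "db01", "app02"}           # known to the hypervisor
datastore_folders = {
    "web01":  ["web01.vmx", "web01.vmdk"],
    "db01":   ["db01.vmx", "db01.vmdk"],
    "app02":  ["app02.vmx", "app02.vmdk"],
    "test99": ["test99.vmx", "test99-flat.vmdk"],     # left behind on disk
}

def find_orphans(registered, on_disk):
    """Folders containing VM files (.vmx/.vmdk) with no registered VM."""
    vm_exts = (".vmx", ".vmdk")
    return sorted(
        folder for folder, files in on_disk.items()
        if folder not in registered
        and any(f.endswith(vm_exts) for f in files)
    )

orphans = find_orphans(registered_vms, datastore_folders)
print("Orphaned VM folders:", orphans)  # → ['test99']
```

For Hyper-V the same comparison would be made against VHD/VHDX files and the XML definitions rather than VMX/VMDK files.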
Efficiency – There are many ways to implement storage efficiency measures, including the use of thin provisioning (both on the hypervisor and on external storage), compression and data deduplication technologies. Storage capacity can be optimised using linked clones that maintain delta differences between a VM master image and clones generated from it.
Significant savings can be made using all these techniques where VMs are based on the same, or very similar, images. Care must be taken when using thin provisioning to ensure that data deleted within a VM does not continue to consume physical capacity after the guest has logically released it. This means running clean-up tasks, such as “sdelete” (with occasional defragmentation), to claw back so-called “dead space”.
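The efficiency argument can be made concrete with a small capacity-reporting sketch. It compares the capacity promised to thin-provisioned guests against the blocks physically consumed; in practice the figures would come from the hypervisor's capacity reporting, and the numbers below are invented for illustration.

```python
# Sketch: report over-allocation on a thin-provisioned datastore.
# provisioned = capacity promised to the guest; consumed = blocks actually
# written (including "dead space" the guest has deleted but not zeroed).
# All figures are illustrative.

vms = {
    "web01": {"provisioned_gb": 100, "consumed_gb": 30},
    "db01":  {"provisioned_gb": 500, "consumed_gb": 220},
    "app02": {"provisioned_gb": 200, "consumed_gb": 15},
}

provisioned = sum(v["provisioned_gb"] for v in vms.values())
consumed = sum(v["consumed_gb"] for v in vms.values())
overcommit = provisioned / consumed

print(f"Provisioned {provisioned} GB, physically consumed {consumed} GB "
      f"(overcommit ratio {overcommit:.1f}x)")
```

A report like this makes it obvious when over-commitment is drifting towards the point where a burst of writes could exhaust physical capacity, which is exactly the risk that dead-space clean-up is meant to contain.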
Performance – Performance issues can be addressed through a whole raft of technologies. Software products such as FVP from PernixData, Infinio’s Accelerator and Atlantis Computing’s USX move I/O closer to the CPU by using local DRAM cache and flash in the server. These acceleration products reduce I/O latency and so improve performance, especially in environments with a high level of redundant data that can be deduplicated. Performance can also be improved by deploying VMs onto hybrid and all-flash arrays, such as those from Tegile and Pure Storage.
Cost – All-flash solutions will certainly improve I/O performance, but will come at a cost compared to disk-based systems. Most virtual environments have a mix of active and inactive VMs, so flash may be appropriate for only a fraction of the virtual machine application workload. In-built hypervisor tools, such as Storage IO Control and SDRS, can, in part, be used to help assign VMs to the most appropriate location. However, these tools can be limited in their scope. As an alternative, solutions such as VMTurbo’s Operations Manager software can be used to examine and optimise virtual environments for all resource usage, not just storage.
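The placement decision these tools automate can be reduced, at its simplest, to a rule of thumb: put the I/O-intensive minority of VMs on flash and leave the rest on cheaper disk. The sketch below applies such a rule; the IOPS figures and the threshold are invented for illustration, and real tools use far richer models than this.

```python
# Illustrative sketch of workload-based placement: VMs above an assumed
# IOPS threshold go to the flash tier, the rest stay on cheaper disk.
# The VM names, IOPS figures and threshold are all invented.

vm_iops = {"oltp-db": 4200, "web01": 150, "file01": 40, "vdi-pool": 1800}
FLASH_IOPS_THRESHOLD = 1000  # assumed cut-off for this sketch

placement = {
    vm: ("flash" if iops >= FLASH_IOPS_THRESHOLD else "disk")
    for vm, iops in vm_iops.items()
}
print(placement)
```

Even this crude split shows why a mixed estate rarely justifies an all-flash purchase: only a fraction of the VMs land on the expensive tier.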
Data protection – The traditional way of securing backups in virtual environments has been through the use of snapshots, either at the hypervisor or storage array level. The issue with this technique is the consistency of the snapshot image. Hypervisor features such as VADP provide the ability to take consistent snapshots, but taking the snapshot at the hypervisor has performance implications for the VM.
Tools such as Veeam’s Backup & Replication and HP’s StoreOnce RMC synchronise the snapshot process between the hypervisor and storage, using the consistency benefits of the hypervisor with the performance of the physical array in order to implement snapshots with minimal impact to production workloads. In the case of RMC, this facility can also be used as a tool to generate VM images for test/dev purposes.
Although we have highlighted some of the more obvious solutions to VM storage issues, there are other products in the market:
VM-aware storage – Products such as Tintri’s VMstore are aware of the files that comprise a VM and can deliver against application performance and capacity requirements at the level of individual virtual machines.
Server-side storage – This includes VSAN from VMware as well as other products, such as Maxta MXSP and Springpath HALO, which deliver virtual storage appliances within the virtual machine infrastructure.
VVOLs – For VMware vSphere, suppliers have started to introduce support for VVOLs, which encapsulate virtual machine files into a single entity. This will provide the ability to apply service levels (performance, capacity) to individual VMs and offload the management to the external array.