
Top five things you need to know about VMware Virtual Volumes

VMware’s Virtual Volumes (VVOLs) promise a revolution in storage for virtual machines, but what are the key things you need to know?

With the release of vSphere 6.0 last month, VMware has brought to market one of the most talked-about – and arguably, hyped – storage features in its portfolio, Virtual Volumes (VVOLs).

VVOLs allow policy-based metrics to be applied to storage for an individual virtual machine rather than at the datastore level. This is a significant step up from the capabilities of vSphere 5.5 and is a potential paradigm shift in the way external storage and VMs are managed, with the result that many suppliers have been keen to announce their support and involvement in the VVOLs programme.

VVOLs-enabled deployments will, it is claimed, simplify operational tasks, improve resource utilisation and allow service levels to be applied with much finer granularity.

But, as with every new technology, the devil is in the detail. In this article, we drill down into VVOLs, look at why they are needed and summarise the top five things to be aware of when implementing your VVOLs strategy.

Datastore evolution

Whether storage is provided through block-based (iSCSI, Fibre Channel, FCoE) or file-based (NFS) protocols, vSphere virtual machines are all stored in a logical object known as a datastore.

NFS datastores use the file system of the NAS platform itself, whereas block-based solutions use VMFS (Virtual Machine File System), VMware’s proprietary file system format. VMFS has been refined and improved with each vSphere release, and now supports increased capacity (with 62TB VMDKs) and performance (via VAAI and other optimisation features).

When shared external arrays are used to provide storage, datastores are almost exclusively built from a single large LUN or volume, which is typically the smallest “unit” of addressable storage in such systems.

On non-virtualised platforms, LUN-level granularity has been an acceptable restriction as one or more LUNs map to a single host server, allowing service-based policies to be applied at the LUN level.

With virtualisation, the relationship between the LUN and the guest VM inverts and becomes one to many (one LUN/datastore to many guests), each of which experiences the same level of service because they sit on a shared volume.

This architecture has presented a number of problems. QoS (quality of service) and other policy-based metrics can only be applied at the LUN level; even block-level tiering operates on the LUN as a whole, so any movement of “hot” or active data affects every VM on that LUN.

This means all VMs on a datastore have the same level of service applied to them. Until now, VMware customers have used features such as Storage DRS to achieve capacity and performance load balancing.

Changing the service level of a virtual machine has meant moving it to another datastore configured with different performance or availability characteristics. This is a time-consuming and disruptive process, and it forces admins to reserve physical disk capacity to cater for any possible rebalancing.
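
A rough illustration of the pre-VVOLs model may help here. In the sketch below (the classes are made up for the example, not vSphere API objects), the service level is a property of the datastore, so every VM placed on it inherits that level, and moving a VM is the only way to change it.

    # Hypothetical model of the pre-VVOLs world: the policy attaches to the
    # datastore/LUN, so every VM placed on it inherits the same service level.
    from dataclasses import dataclass, field

    @dataclass
    class Datastore:
        name: str
        qos_tier: str                      # policy is a property of the LUN/datastore
        vms: list = field(default_factory=list)

    gold = Datastore(name="DS-GOLD-01", qos_tier="gold")
    gold.vms.extend(["sql01", "web01", "test-scratch"])

    # Every VM on the datastore gets the same service level, whether it needs it or not;
    # changing one VM's level means moving it to a differently configured datastore.
    for vm in gold.vms:
        print(f"{vm}: service level = {gold.qos_tier}")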

Even systems with QoS already built-in could not take full advantage of VM-level QoS because there was no way to address a single VM other than to place it on its own LUN.

The one-VM-per-LUN strategy doesn’t work well because ESXi (the hypervisor in vSphere) is limited to 256 LUNs per host, whether using iSCSI or Fibre Channel. This puts an immediate cap on scalability in this kind of configuration.
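
Some quick arithmetic shows the ceiling. In the sketch below, the 256-LUN limit is the real constraint; the other figures are assumptions for illustration.

    # Illustrative arithmetic: one VM per LUN is bounded by the per-host LUN limit.
    LUN_LIMIT_PER_HOST = 256
    luns_used_elsewhere = 6            # e.g. boot and shared datastore LUNs (assumed figure)

    max_vms_one_per_lun = LUN_LIMIT_PER_HOST - luns_used_elsewhere
    print(f"One VM per LUN caps a host at about {max_vms_one_per_lun} VMs")

    # With VVOLs, only protocol endpoints consume LUN slots on block arrays,
    # so the per-host LUN limit no longer bounds the VM count directly.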

Enter VVOLs

VMware’s solution to the VM addressability issue has been to develop a technology that allows a more granular approach to referencing virtual machines, and the result is Virtual Volumes.

At a high level, a VVOL can be thought of as a container for a VM, but in fact the implementation is subtler than that. Each VVOL addresses a component of a virtual machine (whether a config file, swap file or VMDK), so multiple VVOLs together represent a virtual machine object.
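
A simple way to picture the relationship is to count the objects that make up a single VM. The helper below is illustrative only, not a VMware SDK call, and the snapshot assumption is a simplification.

    # Illustrative count of the VVOLs behind one virtual machine.
    def vvols_for_vm(num_vmdks: int, num_snapshots: int = 0) -> int:
        config_vvol = 1                             # VM configuration
        swap_vvol = 1                               # swap, present while the VM is powered on
        data_vvols = num_vmdks                      # one per virtual disk
        snapshot_vvols = num_snapshots * num_vmdks  # assumption: one per disk per snapshot
        return config_vvol + swap_vvol + data_vvols + snapshot_vvols

    print(vvols_for_vm(num_vmdks=3))   # a three-disk VM already needs five VVOLs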

VVOLs introduce a number of new concepts that help to explain how the technology is implemented; a short sketch tying them together follows the list.

  • Storage provider (SP): The storage provider acts as the interface between the hypervisor and the external array. It is implemented out-of-band (meaning it isn’t in the data path) and uses the VASA (vSphere APIs for Storage Awareness) protocol to surface information such as details of VVOLs and storage containers. VVOLs require VASA 2.0, which was released with vSphere 6.
  • Storage container (SC): This is a pool of physical storage configured on the external storage appliance. The specific implementation of the storage container will vary between storage suppliers, although many already allow physical storage to be aggregated into pools from which logical volumes (and, in the future, VVOLs) can be created. The way in which suppliers implement storage containers will be important because it will affect availability and resiliency.
  • Protocol endpoint (PE): This object provides visibility of VVOLs to the hypervisor and is implemented as a traditional LUN on block-based systems, although it stores no actual data. The protocol endpoint has also been described as an I/O de-multiplexer, because it is a pass-through mechanism that allows access to VVOLs bound to it. For people familiar with EMC VMAX technology, a protocol endpoint is similar to a gatekeeper device.
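
To see how the three pieces relate, the sketch below models them in plain Python (the classes are illustrative only, not VMware objects): the storage provider answers out-of-band VASA queries, the storage container holds the capacity and the VVOLs, and the protocol endpoint is the in-band pass-through that I/O to those VVOLs flows through.

    # Illustrative (non-VMware) model of the three VVOLs building blocks.
    from dataclasses import dataclass, field

    @dataclass
    class StorageContainer:            # pool of physical capacity on the array
        name: str
        capacity_gb: int
        vvols: list = field(default_factory=list)

    @dataclass
    class ProtocolEndpoint:            # in-band pass-through; a LUN on block arrays
        lun_id: int
        bound_vvols: list = field(default_factory=list)

    @dataclass
    class StorageProvider:             # out-of-band VASA endpoint; control path only
        url: str
        containers: list = field(default_factory=list)

        def report_containers(self):
            # What the hypervisor learns via VASA: which containers exist and how big they are.
            return [f"{c.name}: {c.capacity_gb} GB" for c in self.containers]

    sc = StorageContainer(name="gold-pool", capacity_gb=10240)
    pe = ProtocolEndpoint(lun_id=0)
    sp = StorageProvider(url="https://array.example.com/vasa", containers=[sc])

    # Creating a VM places its VVOLs in a container and binds them to a PE for I/O.
    for vvol in ("vm01-config", "vm01-swap", "vm01-disk0"):
        sc.vvols.append(vvol)
        pe.bound_vvols.append(vvol)

    print(sp.report_containers())      # -> ['gold-pool: 10240 GB']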

Most of the work needed to implement VVOLs has been on the array makers’ side. VMware has provided the specification, but the suppliers have to implement against it. With that in mind, here are five things to validate with your storage array supplier regarding their support of VVOLs:

  1. How many VVOLs does the platform support? As discussed, each virtual machine requires more than one VVOL. A separate VVOL is required for the VM config, the VM swap and each VMDK that comprises a virtual machine. This requirement means that the number of VMs supported on an array is potentially much lower than the number of VVOLs. This could be an issue on all-flash arrays, where I/O density (and features such as data deduplication) enable the consolidation of many virtual machines (see the sizing sketch after this list).
  2. What features are supported at a VM level? So far, there has been little detail on what service policies will be supported for each virtual machine. We should expect features such as QoS, snapshots and replication to be supported, but in many cases this will depend on whether the array maker already offers these options as part of the existing platform (or chooses to implement them with VVOLs). VVOLs support without VM-level features makes the solution pretty much worthless.
  3. Can I run mixed workloads? Implementing VVOLs should not be an all-or-nothing change. Array makers need to show that VVOLs can be implemented where required, without a significant impact on the rest of the workload of a storage array. When it comes to feature support, the array should be able to determine all VVOLs that comprise a VM, and apply functions such as snapshots to a LUN, VM or VVOL.
  4. How is VASA supported? VASA 2.0 (as discussed) is needed to run VVOLs. The VASA provider is the link between the hypervisor and the array and acts as the conduit for configuration information. Some VASA implementations are native to the array, but some suppliers require the deployment of additional software (in VMs or as separate servers) to provide the VASA service. Non-native support may require you to think about the cost and impact of deploying the VASA service.
  5. Are VVOLs a chargeable option? This may seem somewhat obvious, but some suppliers may choose to charge for the VVOL feature rather than offer it as an upgrade. We already know that VMware will charge for the all-flash version of VSAN in vSphere 6, so don’t be surprised if suppliers want to recoup some of their invested time by charging for the VVOL feature.
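
On the first question, a back-of-the-envelope sizing check is easy to do using the same per-VM count as the earlier sketch. The limits and VM shape below are assumptions; substitute the figures your supplier and your own estate give you.

    # Rough sizing check for question 1: does the array's VVOL limit cover the planned VM count?
    ARRAY_VVOL_LIMIT = 8000                    # hypothetical supplier-quoted maximum
    AVG_VMDKS_PER_VM = 3                       # assumed average number of virtual disks per VM
    VVOLS_PER_VM = 1 + 1 + AVG_VMDKS_PER_VM    # config + swap + one per virtual disk

    print(f"Roughly {ARRAY_VVOL_LIMIT // VVOLS_PER_VM} VMs before the VVOL limit is reached")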
