Feature

vSphere 5 storage features stake a claim in array territory

In August VMware released the fifth version of its flagship virtualisation hypervisor, vSphere 5. The latest release contained almost 200 new or enhanced features, including many related to storage. So, what are those new vSphere 5 storage features, and what impact will they have on the storage administrator?

File system improvements

vSphere 5 introduced a new version of the VMware file system, VMFS-5, which is used with block-based storage LUNs (in contrast with file-access storage over NFS, where file system duties are offloaded to the NAS subsystem). It is an upgrade to the previous version, VMFS-3, and contains a number of performance and scalability improvements.

VMFS-5 enables the creation of data stores as large as 64 TB on a single LUN. Under VMFS-3, data stores larger than 2 TB could only be achieved by aggregating LUNs. And the block size of a data store has been standardised at 1 MB; prior to vSphere 5, users had the option of choosing from among 1 MB, 2 MB, 4 MB and 8 MB block sizes when creating a VMFS data store. A consistent, small block size enables data stores to use storage more effectively, particularly in thinly provisioned environments. Finally, support for files of less than 1 KB has also been made more efficient.
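
To see why a single, small block size uses storage more effectively, consider the allocation arithmetic: a file always consumes whole blocks, so the slack at the end of its last block grows with the block size. The short Python sketch below illustrates the point (the file and block sizes are illustrative, not measurements from a real data store).

    def allocated_bytes(file_size, block_size):
        """Space consumed on disk: whole blocks, rounded up (ceiling division)."""
        return -(-file_size // block_size) * block_size

    MB = 1024 * 1024
    file_size = 5 * MB + 300 * 1024  # a hypothetical 5.3 MB file on a data store

    for block_size in (1 * MB, 8 * MB):
        slack = allocated_bytes(file_size, block_size) - file_size
        print(f"{block_size // MB} MB blocks: {slack // 1024} KB of slack")
    # 1 MB blocks: 724 KB of slack
    # 8 MB blocks: 2772 KB of slack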

Performance on VMFS file systems has been improved through wider use of the vStorage APIs for Array Integration (VAAI) primitive (or command) Atomic Test and Set (ATS), which enables more granular locking at the block level. This change means ATS is now used in more places within a VMFS, locking at a finer granularity. In some respects, more efficient locking is a necessary consequence of the increase in VMFS data store sizes; coarse, data store-level locking would create ever more contention as data stores grow and host more virtual machines.
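
Conceptually, ATS is a compare-and-write operation: a host asks the array to update a small on-disk lock record only if it still holds the value the host last read, instead of reserving the whole LUN. The Python sketch below models that behaviour in memory; the class and record names are invented for illustration and bear no relation to VMware's implementation.

    import threading

    class AtsModel:
        """Toy model of Atomic Test and Set: per-record compare-and-swap,
        in contrast to a SCSI reservation that locks the whole LUN."""

        def __init__(self):
            self._records = {}               # lock record -> current owner
            self._atomic = threading.Lock()  # stands in for the array's atomicity

        def compare_and_set(self, record, expected, new):
            with self._atomic:               # the array guarantees this is atomic
                if self._records.get(record) == expected:
                    self._records[record] = new
                    return True              # lock acquired
                return False                 # another host won; back off and retry

    ats = AtsModel()
    assert ats.compare_and_set("vmdk-lock-42", None, "host-a")      # host A wins
    assert not ats.compare_and_set("vmdk-lock-42", None, "host-b")  # host B retries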

Storage Distributed Resource Scheduler

vSphere 5 introduced Storage Distributed Resource Scheduler (Storage DRS), a feature that enables load and capacity balancing of virtual machines across data stores. Using vCenter Server, multiple data stores can now be placed into an administrative cluster among which virtual machines can be moved, depending on I/O load and capacity.

The feature operates in two modes. “Initial Placement” determines the best place to deploy a virtual machine based on the current capacity and load of each data store within a cluster. From then on, Storage DRS can recommend where to move a virtual machine to improve I/O response time, capacity utilisation or both.
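
Initial Placement can be pictured as a scoring problem: rank every data store in the cluster that has room for the new virtual machine by a blend of projected capacity use and recent I/O latency, then pick the best candidate. The Python sketch below is a deliberately simplified illustration of that idea; the field names and weighting are invented, and VMware's actual algorithm is considerably more sophisticated.

    from dataclasses import dataclass

    @dataclass
    class Datastore:
        name: str
        capacity_gb: float
        used_gb: float
        latency_ms: float  # recent average I/O response time

    def initial_placement(stores, vm_size_gb, weight_io=0.5):
        """Pick the candidate with the lowest blended score (lower is better)."""
        candidates = [d for d in stores if d.capacity_gb - d.used_gb >= vm_size_gb]
        def score(d):
            utilisation = (d.used_gb + vm_size_gb) / d.capacity_gb
            return (1 - weight_io) * utilisation + weight_io * (d.latency_ms / 30.0)
        return min(candidates, key=score, default=None)

    cluster = [
        Datastore("ds-sas-01", 2048, 1900, 8.0),    # fast but nearly full
        Datastore("ds-sas-02", 2048, 600, 12.0),
        Datastore("ds-sata-01", 4096, 1000, 25.0),  # roomy but slow
    ]
    print(initial_placement(cluster, vm_size_gb=200).name)  # ds-sas-02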

Storage DRS can report on recommended virtual machine moves or have vSphere move virtual machine files automatically once certain thresholds have been reached. Storage DRS operates at the vCenter Server level and requires a Storage vMotion licence.

VMware recommends that all data stores within a cluster have similar performance characteristics so that load can be balanced effectively across all resources.

VAAI for NAS

vSphere 4.1 introduced VAAI for block-based storage arrays, providing hardware acceleration of bulk data copying and locking. In vSphere 5 that coverage has been extended to NAS devices with the following primitives:

  • Full File Clone, which enables cloning of a virtual disk
  • Native Snapshot Support, which enables native vSphere snapshot processing to be offloaded to the NAS array
  • Extended Statistics, which provides enhanced information on NAS data stores
  • Reserve Space, which enables the creation of “full size” virtual disk files on NAS filers that previously only supported thin provisioning

Many NAS filers can already perform the functions offered by these primitives, but integration ensures a higher level of data integrity where vSphere is managing the operations.

VAAI for block-based arrays, meanwhile, has been extended to cover the SCSI Unmap command. This feature enables the hypervisor to indicate when blocks within a LUN are no longer needed and can be released for reuse by the array, and it is targeted at improving thin provisioning implementations. Some vendors (including EMC) have recommended disabling Unmap because it can have a negative impact on performance within their arrays.
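
The idea behind Unmap is easiest to see with a toy thin-provisioning model: the array consumes a physical block only when a logical block is first written, and Unmap is how the host hands blocks back to the shared pool once they are no longer needed. The Python sketch below models that lifecycle; the block granularity and class names are invented for the example.

    class Pool:
        """Shared free-block pool behind all thin-provisioned LUNs on an array."""
        def __init__(self, free):
            self.free = free
        def allocate(self):
            if self.free == 0:
                raise RuntimeError("pool exhausted")
            self.free -= 1
        def release(self):
            self.free += 1

    class ThinLun:
        def __init__(self, pool):
            self.pool = pool
            self.backed = set()  # logical block addresses with physical backing

        def write(self, lba):
            if lba not in self.backed:  # first write consumes a physical block
                self.pool.allocate()
                self.backed.add(lba)

        def unmap(self, lbas):
            """Issued, for example, after a VM is deleted or moved elsewhere."""
            for lba in set(lbas) & self.backed:
                self.backed.remove(lba)
                self.pool.release()

    pool = Pool(free=100)
    lun = ThinLun(pool)
    for lba in range(10):
        lun.write(lba)
    print(pool.free)      # 90
    lun.unmap(range(10))  # without Unmap, these blocks would stay consumed
    print(pool.free)      # 100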

vSphere Storage Appliance

vSphere 5.0 introduced the new vSphere Storage Appliance, which runs as a virtual machine, presenting internal disks in the hypervisor server as an NFS data store. Resiliency is achieved by replicating data stores across additional vSphere Storage Appliances on multiple ESXi servers in a cluster configuration.

At this stage the technology is relatively expensive and comes with recommendations that appear excessive, such as using RAID 10 on the internal server disks while also requiring replication between hypervisors. At Version 1.0, the vSphere Storage Appliance has few features compared with more mature offerings that can act as virtual arrays running as virtual machine guests. However, VMware will likely add features quickly, and we can expect future releases of the vSphere Storage Appliance to offer stiff competition to the likes of Open-E, Nexenta and others.

vStorage APIs for Storage Awareness

The vStorage APIs have been extended with the addition of the vStorage APIs for Storage Awareness (VASA). This feature enables vCenter Server to obtain more information about the underlying storage in use by vSphere, including RAID levels and whether features such as thin provisioning and replication are in use.

As VMware deployments become larger and more complex, providing information to the hypervisor on the configuration of storage resources enables more efficient management of virtual machines. With vSphere 5, a new feature called Profile-Driven Storage enables the hypervisor to place virtual machines onto a tier of storage that matches the VMs’ performance requirements.
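
In effect, VASA lets each data store advertise a set of capabilities, and Profile-Driven Storage reduces placement to a matching problem between those capabilities and the profile assigned to a virtual machine. The Python sketch below illustrates only that matching step; the capability tags are invented for the example.

    # Hypothetical capability tags that data stores might surface through VASA.
    datastores = {
        "ds-gold":   {"raid10", "replicated", "ssd"},
        "ds-silver": {"raid5", "replicated", "thin"},
        "ds-bronze": {"raid5", "thin"},
    }

    def compliant(profile, capabilities):
        """A data store is compliant if it offers every capability in the profile."""
        return profile <= capabilities  # subset test

    vm_profile = {"raid5", "replicated"}  # e.g. a "tier 2" storage profile
    matches = [name for name, caps in datastores.items()
               if compliant(vm_profile, caps)]
    print(matches)  # ['ds-silver']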

Other improvements

vSphere 5 brings a number of other performance enhancements. Storage vMotion has been enhanced with a new mirroring feature called Mirror Mode, which improves the efficiency of the copy process and makes disk image migration times more predictable. vSphere Storage I/O Control has been extended to provide cluster-wide limits for I/O on NFS data stores.
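
Mirror Mode's trick is to mirror the virtual machine's in-flight writes to both the source and destination disks while a single bulk copy runs, so no iterative re-copy passes are needed to catch up with changes. The sketch below models that write path in Python; the class and its methods are invented for illustration.

    class MirrorDriver:
        """Toy model of Mirror Mode: while a Storage vMotion is in flight,
        every guest write lands on both the source and destination disks."""

        def __init__(self, source, destination):
            self.source, self.destination = source, destination
            self.migrating = False

        def write(self, lba, data):
            self.source[lba] = data
            if self.migrating:
                self.destination[lba] = data  # mirror the in-flight write

    src, dst = {0: b"a", 1: b"b"}, {}
    drv = MirrorDriver(src, dst)

    drv.migrating = True   # the bulk copy begins
    drv.write(2, b"c")     # a guest write during the copy hits both disks
    for lba, data in list(src.items()):
        dst[lba] = data    # one pass is enough; nothing needs re-copying
    drv.migrating = False  # cut over to the destination copy

    assert dst == src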

vSphere 5 also supports a software Fibre Channel over Ethernet (FCoE) initiator, and the iSCSI initiator is now fully configurable through the vCenter GUI.



This was first published in November 2011