ComputerWeekly.com

LUN storage management for vSphere and Hyper-V

By Antony Adshead

The LUN has long been a bedrock of storage configuration for physical servers, with LUNs carved out of RAID groups to provide logical chunks of capacity for applications.

Now, virtual server environments can abstract the physical characteristics of a server into software and so provide increased scale and utilisation of hardware resources.

But storage must still be provided to virtual server and virtual desktop machines, with the hypervisor taking on an important role as the virtualisation layer, abstracting physical storage resources into virtual devices.

So, what has happened to the LUN? That depends on the virtualisation environment you’re using.

Physical and virtual drives and LUNs

Regardless of hypervisor type, the persistent retention of data needs some form of storage device, either a traditional hard drive or a solid-state disk (SSD). For block storage, VMware’s vSphere suite (including ESXi) and Microsoft’s Hyper-V take fundamentally different approaches to presenting physical storage.

vSphere systems take LUNs configured on a storage array and format them with the VMware File System (VMFS). This is a proprietary file system used for storing virtual machine files, and it takes advantage of on-disk structures to support highly granular object and block locking.

The reason this is necessary is that most vSphere deployments use a small number of very large LUNs, with each LUN holding many virtual machines. An efficient locking method is needed to ensure performance doesn’t suffer as virtual environments scale up.

A single virtual machine comprises many separate files, including the VMDK (Virtual Machine Disk). A VMDK is analogous to a physical server’s hard drive, and a virtual guest on vSphere can have many VMDK files, depending on the number of logical drives configured, the number of snapshots in use and the type of VMDK.

For example, with thin-provisioned VMDKs, where storage is allocated on demand, a guest hard drive will consist of a master VMDK file and many VMDK data files, representing the allocation units of each increment of space as the virtual machine writes more data to disk.
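The allocate-on-write behaviour behind thin provisioning is easier to see in a simple model. The Python sketch below is purely illustrative (it is not VMware’s on-disk VMDK format) and assumes a hypothetical 1MB allocation unit: blocks consume real capacity only the first time the guest writes to them.

```python
# Minimal sketch of allocate-on-write thin provisioning (illustrative only;
# not VMware's actual VMDK on-disk format). Blocks are claimed from the
# backing datastore only the first time the guest writes to them.
BLOCK_SIZE = 1024 * 1024  # assumed 1MB allocation unit for illustration

class ThinDisk:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size      # size the guest sees
        self.allocated = set()                # block numbers actually backed

    def write(self, offset, length):
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for block in range(first, last + 1):
            self.allocated.add(block)         # allocate on first write only

    def consumed_bytes(self):
        return len(self.allocated) * BLOCK_SIZE

disk = ThinDisk(virtual_size=40 * 1024**3)    # guest sees a 40GB drive
disk.write(0, 512 * 1024**2)                  # guest writes 512MB of data
print(disk.consumed_bytes() / 1024**2, "MB actually consumed")  # 512.0
```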

By contrast, Microsoft has chosen to incorporate all components of the virtual machine disk into a single file known as a VHD (virtual hard disk). VHD files are deployed onto existing Microsoft-formatted file systems, using either NTFS volumes or CIFS/SMB shares.

There is no separate LUN format for Hyper-V. VHD files allocated as thin volumes (known as dynamic hard disks) expand by increasing the size of the file and consuming more space on disk. Inside the VHD, Microsoft stores metadata in the footer of fixed-size VHDs and in the header and footer of dynamic VHDs.
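Because the VHD footer layout is publicly documented by Microsoft, that metadata can be inspected with ordinary tools. The Python sketch below reads the 512-byte footer and reports the cookie, disk type and virtual size; it is a minimal illustration based on the published layout, not a full VHD parser, and the file path shown is hypothetical.

```python
import struct

# Minimal reader for the 512-byte VHD footer (per Microsoft's published VHD
# spec); fixed-size VHDs keep it at the end of the file, and dynamic VHDs
# also keep a copy at the start. Illustrative sketch, not a full VHD parser.
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    with open(path, "rb") as f:
        f.seek(-512, 2)                      # footer is the last 512 bytes
        footer = f.read(512)
    cookie = footer[0:8]                     # should be b"conectix"
    current_size = struct.unpack(">Q", footer[48:56])[0]   # big-endian uint64
    disk_type = struct.unpack(">I", footer[60:64])[0]      # big-endian uint32
    return {
        "valid": cookie == b"conectix",
        "virtual_size_gb": current_size / 1024**3,
        "disk_type": DISK_TYPES.get(disk_type, "unknown"),
    }

# Example (path is hypothetical):
# print(read_vhd_footer(r"C:\VMs\guest1.vhd"))
```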

VHDs have advantages over VMDKs and VMFS in block-based environments in that the underlying storage of the data is NTFS, Microsoft’s standard file system for storage on Windows servers. This means VHD files can easily be copied between volumes or systems by the administrator without any special tools (assuming the virtual machine isn’t running, of course).

It also makes it easy to clone a virtual machine, simply by taking a copy of the VHD and using it as the source of a new virtual machine. This is particularly beneficial with the deduplication features of Windows Server 2012, which can significantly reduce the amount of space consumed by virtual machines that have been cloned from a master VHD.

Designing for performance and capacity

The aggregation of servers and desktops into virtual environments means that the I/O profile of data is very different to that of a traditional physical server. I/O workload becomes unpredictable as the individual I/O demands from virtualised servers can appear in any order and so are effectively random in nature.

This is referred to as the “I/O blender” effect. The result is that storage provisioned for virtual environments must be capable of handling large volumes of random I/O and, for virtual desktops, of coping with “boot storms”: the spikes in I/O demand that occur as users start their virtual PCs in the morning and shut them down at the end of the working day.

To guarantee performance, typical storage deployments will draw on a number of options.

Ultimately, provisioning storage for virtual environments is all about getting the right IOPS density for the capacity of storage being deployed. This may seem difficult to estimate, but figures can be taken from existing physical servers as part of a migration programme, or by pre-building some virtual servers and measuring IOPS demand. For virtual desktops, a good estimate is around 5-10 IOPS per desktop, scaled up across the whole VDI farm, with additional headroom built in for boot storm events.
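As a worked example, the short Python sketch below scales the 5-10 IOPS per desktop rule of thumb across a hypothetical 500-seat VDI farm. The boot-storm multiplier is an assumption for illustration only; real figures should come from measurement.

```python
# Back-of-envelope VDI IOPS sizing, using the 5-10 IOPS per desktop rule of
# thumb. The boot-storm multiplier is an assumed headroom factor for
# illustration; measure your own environment where possible.
desktops = 500
steady_iops_per_desktop = 8        # within the 5-10 IOPS rule of thumb
boot_storm_multiplier = 3          # hypothetical headroom factor

steady_state_iops = desktops * steady_iops_per_desktop
boot_storm_iops = steady_state_iops * boot_storm_multiplier

print(f"Steady state: {steady_state_iops} IOPS")   # 4000 IOPS
print(f"Boot storm:   {boot_storm_iops} IOPS")     # 12000 IOPS
```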

LUN performance and presentation

For block devices, LUNs can be presented using Fibre Channel, Fibre Channel over Ethernet (FCoE) or iSCSI. Fibre Channel and FCoE have the benefit of using dedicated host bus adaptors (HBAs) or converged network adaptors (CNAs) that make it easier to separate host IP traffic from storage network traffic. However, there are still some important design considerations even where a dedicated storage network is available.

Firstly, there’s the option to present LUNs across multiple Fibre Channel interfaces for both resiliency and performance. We’ll take resiliency as a given, as that would be standard practice for storage administrators, but for performance, multiple HBAs (or dual-port HBAs) allow vSphere and Hyper-V LUNs to be physically segmented by tier.

This may not seem like the most logical approach, but bear in mind that LUNs presented to vSphere and Hyper-V are typically large, and so queue depth to individual LUNs can become an issue, especially with workloads of different priorities. This can be especially important where high-performance all-flash devices have been deployed. For iSCSI connections, dedicated NICs should be used and multipathed for redundancy. Both Microsoft and VMware have deployment guides that show how to enable iSCSI multipathing.
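To see why queue depth pushes designs towards more, smaller LUNs, the rough Python estimate below divides the total outstanding I/O of the guests by a per-LUN queue depth. Both figures are assumptions for illustration and should be checked against the HBA and array documentation.

```python
# Rough estimate of how many LUNs are needed to keep per-LUN queue depth in
# check. The per-LUN queue depth and per-VM outstanding I/O figures are
# assumptions for illustration; verify them for your HBA and array.
import math

vms = 200
outstanding_io_per_vm = 4          # assumed average outstanding I/Os per VM
per_lun_queue_depth = 64           # assumed HBA per-LUN queue depth

total_outstanding = vms * outstanding_io_per_vm
luns_needed = math.ceil(total_outstanding / per_lun_queue_depth)
print(f"Spread the guests across at least {luns_needed} LUNs")   # 13
```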

LUN sizing

While on the subject, it’s worth discussing LUN sizes. vSphere (and, to a lesser extent, Hyper-V) is limited in the number of LUNs that can be presented to a single hypervisor. Typically, storage for these environments is presented using large LUNs (up to 2TB) to maximise the capacity that can be presented. As a result, all the guests on a given LUN, which could span many hosts, receive the same level of performance.

Creating many LUNs of 2TB in size is quite expensive in storage terms. So thin provisioning on the storage array is a useful way to allow LUNs to grow towards their full 2TB capacity on demand, while still allowing multiple LUNs to be presented to a host so that I/O is distributed across as many LUNs as possible.
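The arithmetic below illustrates that trade-off with hypothetical figures: eight 2TB LUNs presented thin consume physical capacity only for the data actually written, rather than for their full logical size.

```python
# Illustration of why array-side thin provisioning helps when presenting many
# large LUNs: physical capacity is consumed by written data, not by the 2TB
# logical size of each LUN. All figures are hypothetical.
TB = 1024**4

luns = 8
lun_logical_size = 2 * TB
avg_written_per_lun = 0.5 * TB     # assumed actual data written per LUN

fully_allocated = luns * lun_logical_size
thin_consumed = luns * avg_written_per_lun

print(f"Logical capacity presented: {fully_allocated / TB:.0f} TB")       # 16 TB
print(f"Physical capacity consumed (thin): {thin_consumed / TB:.0f} TB")  # 4 TB
```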

Limits of the LUN and the future

The grouping of storage for hypervisor guests at the LUN level represents a physical restriction on delivering quality of service to an individual virtual machine; all guests on a LUN receive the same level of performance.

Microsoft recommends using a single LUN per VM, which may be restrictive in larger systems (and certainly represents a significant management overhead), but is still possible to achieve.

VMware has stated its intention to implement vVols (virtual volumes) to abstract the physical characteristics of virtual machine storage from the storage array to the hypervisor. This would enable finer-grained prioritisation of virtual machines and their I/O workloads, even when they sit on the same physical array.

But while some companies focus on removing the storage array completely, it’s clear there are benefits in retaining an intelligent storage array, one that understands and can communicate with the hypervisor. 

16 Jul 2013
