
VDI storage: Planning for the storage needs of a new virtual desktop implementation

The storage requirements of a virtual desktop infrastructure (VDI) are very different from those of server virtualisation environments.

Many of today’s server and data centre applications are designed for and deployed against shared storage environments. By contrast, user desktops are not; they expect to have low-latency access to dedicated disk drives over a non-shared interconnect. Moving your desktop infrastructure to VDI introduces latency and contention for disk resources.

For that reason, when moving to VDI it is important to appreciate the demands placed upon storage by the specific workload profile of such an environment. In this article we survey VDI storage needs with regard to specific I/O requirements, as well as key storage choices for VDI such as drive type, which storage protocol to use and how backup is affected.

VDI I/O profile

Understanding I/O is key to a successful VDI storage deployment.

Most VDI deployments display fairly similar I/O characteristics, with a relatively high ratio of writes to reads. This is due to the nature of desktop workloads; a good rule of thumb is to expect between 30% and 40% reads and between 60% and 70% writes.

Understanding your read/write workload can have an impact on your choice of RAID level. This is because the write penalty for some RAID configurations is higher than others. For example, parity-based RAID levels such as RAID 5 and RAID 6 are burdened with a higher overhead during random write operations than RAID 1 or RAID 10.

When planning your VDI storage infrastructure, if you are unsure of your I/O requirements, a good ballpark figure may be around 10 IOPS per user. A figure of 20 to 25 IOPS per user is considered high, but between 7 and 10 is quite common.
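To make the arithmetic concrete, here is a back-of-an-envelope sizing sketch that combines the per-user IOPS figure with the read/write split and RAID write penalties discussed above. The user count, per-drive IOPS rating and penalty values are illustrative assumptions, not vendor specifications.

# Back-of-an-envelope VDI sizing sketch; all figures are illustrative assumptions.
users = 500                  # assumed number of virtual desktops
iops_per_user = 10           # ballpark figure discussed above
read_ratio, write_ratio = 0.3, 0.7

front_end_iops = users * iops_per_user
read_iops = front_end_iops * read_ratio
write_iops = front_end_iops * write_ratio

# Typical write penalties: back-end I/Os generated per front-end write.
write_penalty = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}
drive_iops = 180             # assumed rating for one 15,000 rpm drive

for level, penalty in write_penalty.items():
    back_end_iops = read_iops + write_iops * penalty
    drives_needed = -(-back_end_iops // drive_iops)   # ceiling division
    print(f"{level}: {back_end_iops:,.0f} back-end IOPS, ~{drives_needed:.0f} drives")

Even with rough figures like these, the extra spindles demanded by parity RAID for a write-heavy workload are obvious.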

While the read/write distribution in VDI environments can be different from many traditional server virtualisation environments, write activity is usually predictable and steady. On the other hand, the smaller amount of read activity is subject to huge spikes – during boot and logon – that, if not planned for, can cause major problems.

With the above in mind, the vast majority of VDI environments are deployed against dedicated storage resources: at the very least dedicated spindles or RAID groups, but more commonly, especially in larger deployments, dedicated storage arrays.

On the topic of I/O, disk partition alignment (sometimes known as sector alignment) of virtual machines is something that, if neglected, can cause major problems. Partition misalignment forces additional, unnecessary and costly I/Os to be performed on the back end of shared disk arrays – a serious issue in a VDI environment, where careful I/O planning is key to success.

Misalignment results from the fact that some versions of Windows do not correctly align disk partitions with the underlying disk geometry and disk array caching structures. 

The Windows utility Wmic.exe can be used to determine partition misalignment, while DiskPart.exe can be used to correctly align partitions. 

Storage vendors may also provide proprietary tools for partition alignment.
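For those who want to script the check, the rough sketch below (assuming a 1 MiB alignment boundary; substitute the stripe or chunk size your array vendor recommends) calls Wmic.exe and flags any partition whose starting offset does not fall on that boundary.

# Minimal sketch: flag Windows partitions whose starting offset is not aligned
# to an assumed 1 MiB boundary (adjust to your array vendor's recommendation).
import subprocess

BOUNDARY = 1024 * 1024  # 1 MiB alignment boundary, an illustrative assumption

output = subprocess.check_output(
    ["wmic", "partition", "get", "Name,StartingOffset"], text=True
)

for line in output.splitlines()[1:]:      # skip the header row
    line = line.strip()
    if not line:
        continue
    name, _, offset_str = line.rpartition(" ")
    if not offset_str.isdigit():
        continue
    offset = int(offset_str)
    status = "aligned" if offset % BOUNDARY == 0 else "MISALIGNED"
    print(f"{name.strip()}: starting offset {offset} bytes -> {status}")

Fixing a misaligned partition still means recreating it, for example with DiskPart's align parameter when creating the partition or with the vendor's own tool, so it is far better to get alignment right in the master image before desktops are cloned from it.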

While writes form the majority of VDI I/Os, there are two types of activity that do see huge spikes in read I/O: boot storms and logon storms.

Boot storms occur when large numbers of workers arrive at work and start up their machines at around the same time, generating high I/O as virtual desktops are spun up. Boot storms are predominantly read I/O and can be largely mitigated by pre-booting desktop images and by having large read caches. Pre-booting means spinning up virtual desktop images before people arrive, so desktops are ready and waiting for users as they reach the office. It also means the I/O required to boot the desktops happens when the storage array is not busy, often in the very early hours of the morning.

While boot storms can be mitigated by pre-booting desktop images, logon storms cannot. As large numbers of users log on at around the same time, read I/O spikes sharply. A good way to mitigate the effects of this is to deploy a large read cache.
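To get a feel for the scale of these spikes, the rough sketch below compares steady-state read I/O with the read I/O generated when a block of desktops boots inside a short window. The per-desktop boot figure and the boot window are placeholder assumptions; substitute measured values from your own image.

# Rough illustration of a boot storm; all per-desktop figures are assumptions.
desktops = 500
steady_read_iops_per_desktop = 3        # roughly 30% of a 10 IOPS/user workload

boot_reads_per_desktop = 30_000         # assumed read I/Os to boot one desktop
boot_window_seconds = 30 * 60           # assume all desktops boot inside 30 minutes

steady_read_iops = desktops * steady_read_iops_per_desktop
storm_read_iops = desktops * boot_reads_per_desktop / boot_window_seconds

print(f"Steady-state read load: {steady_read_iops:,.0f} IOPS")
print(f"Boot-storm read load:   {storm_read_iops:,.0f} IOPS")
print(f"Spike factor:           {storm_read_iops / steady_read_iops:.1f}x")

Pre-booting shifts that spike to quiet hours; a logon storm produces a smaller but still significant read spike that the read cache has to absorb.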

VDI and drive type

Different components of the VDI solution have different disk requirements.

  • OS images: Deploying the operating system images of a VDI environment on large, slow SATA drives is a risk and should be done with caution. If SATA is the drive technology of choice, then it should always be supplemented with a significant amount of cache, most importantly, a large read cache. Many companies choose to deploy the OS images of their VDI environment on 10,000 rpm or 15,000 rpm Fibre Channel or SAS drives. The cost of faster Fibre Channel and SAS drives can be hugely offset by using the right array-based deduplication technologies.
  • User profiles and home directories: These will usually sit comfortably on SATA drives.
  • Applications: Where these should reside depends entirely on their particular I/O profiles.

Supplementing hard drives with sufficient read cache in the array reduces back-end disk I/O and, therefore, latency. This means less back-end storage is needed to meet the I/O requirement, and users usually get a better experience because of the lower latency.
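The effect is easy to model. In the illustrative sketch below, every read satisfied from cache never reaches the spindles, while writes still pay the RAID penalty; the hit rates, workload figures and penalty are assumptions.

# Illustrative effect of array read cache on back-end disk I/O (assumed figures).
front_end_iops = 5000
read_ratio, write_ratio = 0.3, 0.7
raid_write_penalty = 2                  # e.g. RAID 10

for cache_hit_rate in (0.0, 0.5, 0.9):
    disk_reads = front_end_iops * read_ratio * (1 - cache_hit_rate)
    disk_writes = front_end_iops * write_ratio * raid_write_penalty
    print(f"Read cache hit rate {cache_hit_rate:.0%}: "
          f"{disk_reads + disk_writes:,.0f} back-end IOPS")

At a high hit rate the back-end load is dominated almost entirely by writes, which is exactly the write-heavy profile described earlier.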

Data deduplication

Data deduplication is important to VDI storage implementations; it is to space requirements what read cache is to I/O requirements.

Deploying array-based block deduplication technologies is a no-brainer in today's VDI storage environments. Most VDI deployments host a large number of near-identical Windows C drive images, so there is a huge number of shared files and blocks. Block-based data deduplication technologies commonly yield space savings of 80% to 90% in these scenarios, so they should be seriously considered.
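A quick capacity calculation shows why. In the sketch below, the desktop count, image size and deduplication ratio are assumptions; plug in your own figures.

# Rough capacity estimate with and without block deduplication (assumed figures).
desktops = 500
image_size_gb = 30                      # assumed size of one Windows C drive image
dedupe_saving = 0.85                    # mid-point of the 80%-90% range above

raw_capacity_gb = desktops * image_size_gb
deduped_capacity_gb = raw_capacity_gb * (1 - dedupe_saving)

print(f"Undeduplicated: {raw_capacity_gb:,.0f} GB")
print(f"Deduplicated:   {deduped_capacity_gb:,.0f} GB")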

Post-process block-based deduplication is also a good fit, as there is not a huge amount of changed data being ingested, meaning that space requirements will not explode during the working day. In most VDI environments, running deduplication jobs overnight or at other quiet times works well.

On the topic of deduplication, placement of the Windows page file (pagefile.sys) is key. The page file is subject to large amounts of change during the working day and is not a good fit for data deduplication: it would add performance overhead to inline deduplication technologies and cause scheduled post-process deduplication runs to take longer. Many organisations choose to deploy page files on local disk in their servers, as page files contain throwaway data that does not require the resiliency provided by shared storage arrays. Those that deploy page files on shared disk place them where they will not affect the performance of the volumes that serve OS data (the Windows C drive) and will not be deduplicated.

Choosing a protocol for VDI

While not all VDI solutions support all protocols, the majority of IT organisations choose to deploy VDI on NFS rather than Fibre Channel. NFS over dedicated 10 Gbps Ethernet is common in large deployments, as are dedicated VLANs. The majority of deployments seem to work well enough without requiring Fibre Channel.

Backup and VDI

Most non-VDI desktop machines are not backed up. If users lose their laptops or their desktops break, they are normally given new ones and the required applications are reinstalled. It is therefore not uncommon for VDI desktops not to be backed up either; many companies back up VDI desktops only where there is a specific and business-justifiable requirement.

One exception is the user home drive – the network share where the user stores personal files and folders – and the user profile component of a VDI solution. If user home drives and user profiles currently sit on shared storage, such as NAS, and are currently backed up, they should continue to be backed up.

It is also common for user home drives and user profiles to remain on the shared storage they currently occupy. If home directories and profiles are being migrated to, or hosted on, new storage infrastructure, they should at least be hosted on separate RAID groups and volumes/LUNs, and potentially on separate storage arrays.

Windows services

Most VDI deployments are virtualised Microsoft Windows desktops. In that case, unnecessary services such as scheduled disk defragmentation should be turned off. That's because defragmenting a drive in a VDI environment, or any virtualised environment, can have undesired consequences, including causing an image on a thin-provisioned volume to bloat. Defragmentation is also unnecessary in VDI environments that deploy non-persistent desktop images, which are frequently re-provisioned.
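If the master desktop image is built by script, switching the defragmentation schedule off can simply be part of that build. The sketch below shells out to schtasks to disable the built-in ScheduledDefrag task; the task path shown is the usual one for Windows 7-era images, so verify it against your own build, and the same approach applies to any other service you want disabled.

# Minimal sketch: disable the built-in disk defragmentation schedule inside a
# Windows desktop image (task path assumed from Windows 7-era builds; verify it).
import subprocess

TASK = r"\Microsoft\Windows\Defrag\ScheduledDefrag"

subprocess.run(
    ["schtasks", "/Change", "/TN", TASK, "/Disable"],
    check=True,
)
print(f"Disabled scheduled task {TASK}")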

Nigel Poulton is a storage architect currently working for a large global financial organisation.


This was first published in June 2011