Many early virtual desktop infrastructure (VDI) implementations failed to live up to the hype.
All too often, what worked brilliantly in the pilot – delivering desktop PC functionality at a fraction of the cost and with enhanced manageability and security – hit the rocks when it was rolled out to the masses. In many cases response time slowed to a crawl, especially during morning boot storms when hundreds or thousands of users arrived at their desks and logged in.
The problem is that VDI – especially at scale – requires almost the opposite characteristics of those provided by traditional disk storage.
Once the read-heavy boot storm is over, VDI is write-heavy – perhaps 80% writes to 20% reads – and it is highly random, because it combines so many users and desktops into a single, unpredictable input/output (I/O) stream. And although each desktop might only need 10 or 20 input/output operations per second (IOPS), there are huge spikes in demand at certain times of day.
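To see why those spikes matter, here is a back-of-envelope sketch of aggregate IOPS demand. All the figures – desktop count, per-desktop IOPS and the boot-storm multiplier – are illustrative assumptions, not vendor guidance:

```python
# Rough, illustrative sizing sketch: aggregate IOPS demand for a VDI estate.
# All input figures are assumptions for illustration only.

def aggregate_iops(desktops: int, steady_iops: float, boot_multiplier: float):
    """Return (steady-state IOPS, boot-storm IOPS) for a pool of desktops."""
    steady = desktops * steady_iops
    storm = steady * boot_multiplier
    return steady, storm

steady, storm = aggregate_iops(desktops=1000, steady_iops=15, boot_multiplier=5)
print(f"Steady state: {steady:,.0f} IOPS; boot storm: {storm:,.0f} IOPS")
# A load in the tens of thousands of IOPS is trivial for flash, but a
# 10k rpm disk delivers only around 150 random IOPS, so spinning-disk
# arrays need very large spindle counts to absorb the morning spike.
```

A thousand desktops at a steady 15 IOPS each is already 15,000 IOPS; a fivefold login spike pushes that to 75,000 – the sort of figure that sank early disk-based deployments.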
Salvation came in the form of enterprise flash storage. Whether in an all-flash array or as a tier within a hybrid array with spinning disk, flash eats random I/O for breakfast. And with good array management software – for example, to queue writes until there is a whole memory page's-worth ready to go – it can handle that challenge too.
Of course, there were installations where VDI worked okay on disk-based arrays, perhaps with some flash or RAM-based caching to address read-heavy boot storms. Typically, these used stateless desktops for relatively undemanding standardised tasks, in call centres for example, where the average storage load per desktop might be fewer than 5 IOPS. The desktops were VMware linked clones or Citrix Provisioning Services images, which yield big storage savings by sharing a master image, with just personalisation and configuration data stored for each desktop.
VDI 2.0 – a whole new challenge for IT
The problem comes once you get past those task-based jobs and desktops to what some now call VDI 2.0. This is VDI for users with more varied PC set-ups and higher expectations of PC performance, reliability and user experience.
VDI offers these users significant advantages, including location and device-independent access, because they can access the same virtual desktop anywhere. But delivering all this is a whole new challenge for IT.
That's because you now need stateful or persistent desktops that are all different, so you need to store a separate disk image for each one. That means more storage, and stateful desktops also have higher requirements for resilience and remote or multi-site access. VDI 2.0 therefore needs to be treated like any other critical enterprise application, with storage designed accordingly and appropriate attention given to capacity, performance and high availability.
Because of the variety of desktop configurations, linked clone-type systems and read-oriented caching do not work so well. In addition, while those multiple disk images will be very amenable to data deduplication because of all the operating system and application files that they share, inline dedupe is too heavy a load for disk-based primary storage. But flash's low latency does allow it to support inline deduplication – at least, for anything other than the most performance-sensitive transactional applications.
This makes at least a top tier of flash pretty much essential. Even with enterprise flash costing more than disk per GB, you can do a lot more with flash. The commonality of VDI files should yield data reduction of at least 5:1 and perhaps as much as 20:1, meaning that a gigabyte of flash can do the work of at least 5GB of disk. This is what enables some flash array suppliers to claim – rightly or wrongly – that they have achieved price parity with spinning disk.
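The price-parity argument above is simple division: divide the raw price per gigabyte by the data-reduction ratio to get an effective price per logical gigabyte stored. The prices below are purely illustrative assumptions, not quoted market figures:

```python
# Illustrative effective price per usable GB once deduplication is
# factored in. Prices and ratios are assumptions for illustration.

def effective_price_per_gb(raw_price_per_gb: float, dedupe_ratio: float) -> float:
    """Price per logical GB stored, given a data-reduction ratio (5 means 5:1)."""
    return raw_price_per_gb / dedupe_ratio

# Flash at an assumed $0.50/GB with a conservative 5:1 reduction,
# versus disk at an assumed $0.10/GB with no inline dedupe.
flash = effective_price_per_gb(raw_price_per_gb=0.50, dedupe_ratio=5)
disk = effective_price_per_gb(raw_price_per_gb=0.10, dedupe_ratio=1)
print(f"Flash: ${flash:.2f}/GB effective vs disk: ${disk:.2f}/GB effective")
```

With these assumed numbers a 5:1 ratio alone closes a fivefold raw price gap, which is exactly the arithmetic behind suppliers' parity claims – and why the real achieved dedupe ratio, not the datasheet figure, decides whether the claim holds.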
Performance-wise, while a task-based desktop might require fewer than 5 IOPS, a typical Windows 7 desktop may need at least 25 IOPS, and in rare cases perhaps as much as 1,000 IOPS, depending on the applications in use. Note that for some storage-heavy tasks, such as computer-aided design (CAD) workstations loading very large files over a wide-area network, it may be better to use remote desktops.
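Sizing a mixed estate means weighting those per-desktop figures by headcount. A minimal sketch, using the per-desktop IOPS figures from the text and an assumed, hypothetical population split:

```python
# Back-of-envelope steady-state IOPS for a mixed desktop population.
# Counts are hypothetical; per-desktop IOPS follow the figures in the text.

desktop_profiles = {
    # profile name: (number of desktops, IOPS per desktop)
    "task worker": (600, 5),
    "knowledge worker (Windows 7)": (350, 25),
    "power user": (50, 100),
}

total_iops = sum(count * iops for count, iops in desktop_profiles.values())
print(f"Estimated steady-state demand: {total_iops:,} IOPS")
```

Note how a handful of power users can demand more IOPS than hundreds of task workers – averages hide the users who will actually stress the array.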
As mentioned, VDI disk access can be very write-orientated, so it can be beneficial to disable background services such as drive encryption, search indexing and virus scanning, especially if the same service can be provided more effectively on the VDI host.
Remember too that the I/O stream aggregates many different desktops, making it highly random. No longer can the storage system predict and pre-fetch what might be needed next; instead, with VDI 2.0 you are pretty much guaranteed a cache miss. Whether reading or writing, highly random I/O is bad for spinning disks, because each new seek involves mechanical latency while the read/write head moves into position and then waits for the desired disk sector to rotate underneath it.
Flash storage, on the other hand, is well suited to random I/O because there is no mechanical latency involved. All data can be read at pretty much the same speed, and while there are inherent delays in writing data if a flash page must be erased and rewritten, the array and drive firmware aims to minimise this, for example by write caching.
Suitable tools for assessing your storage workload under VDI, and the ability of your arrays to cope, include Login VSI and Load DynamiX for load testing, benchmarking and capacity planning; Oracle's Vdbench for disk I/O load generation; and Iometer for I/O subsystem measurement.
As well as VDI performance, you should also consider the administrative aspects. For example, how fast can you create a new desktop or pool of desktops, and how efficiently can you patch the desktops you already have? All-flash and hybrid arrays can help with both of these. Look too at your storage tiering – an auto-tiering array, or an all-flash unit that is in effect a single tier, can provide significant advantages.
One caveat with VDI 2.0 is that even with deduplication, each user may now have a substantial amount of data that must be safely stored, not just a few set-up files. This not only increases the overall volume of storage needed, perhaps into terabytes for a large deployment, but it also creates a single point of failure and means you may well need to replicate to a second site for disaster recovery and resilience. After all, this is now the user's main desktop, and while you have just enabled them to work from anywhere and on any device, you also need to guarantee that access.
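The capacity arithmetic for persistent desktops is worth making explicit: per-user data multiplied by headcount, reduced by dedupe, then doubled (or more) for the replica copies. All inputs below are assumptions for illustration:

```python
# Illustrative capacity estimate for persistent desktops, including
# dedupe savings and replicated copies for DR. Inputs are assumptions.

def capacity_tb(users: int, gb_per_user: float,
                dedupe_ratio: float, replicas: int) -> float:
    """Physical capacity needed in TB, including all replicated copies."""
    logical_gb = users * gb_per_user
    physical_gb = logical_gb / dedupe_ratio  # after data reduction
    return physical_gb * replicas / 1024     # GB -> TB

# 2,000 users, an assumed 40GB each, 5:1 dedupe, primary plus DR copy.
needed = capacity_tb(users=2000, gb_per_user=40, dedupe_ratio=5, replicas=2)
print(f"Capacity required: {needed:.1f} TB")
```

Even with 5:1 dedupe, 2,000 persistent desktops at an assumed 40GB each need tens of terabytes once a disaster-recovery copy is included – very different economics from a few megabytes of personalisation data per stateless desktop.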
As well as replication, you need to look at options for scalability. For example, if your storage array supplier's smallest box is £50,000 and can support 1,000 users, that is great until you have to buy a second £50,000 box for desktop number 1,001 – or worse, two more £50,000 boxes so you can replicate desktop number 1,001. An extreme example for sure, but it highlights a danger in cost-per-desktop calculations.
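The step-function in that example can be sketched directly, using the article's figures of £50,000 per array and 1,000 users per box as the assumptions:

```python
# Sketch of the cost-per-desktop step function described above.
# Assumed figures: each array costs 50,000 GBP and supports 1,000 users.
import math

def cost_per_desktop(users: int, users_per_array: int = 1000,
                     array_cost: int = 50_000) -> float:
    """Average hardware cost per desktop, given fixed-size array purchases."""
    arrays = math.ceil(users / users_per_array)  # must buy whole boxes
    return arrays * array_cost / users

for n in (999, 1000, 1001):
    print(f"{n:>5} users: {cost_per_desktop(n):8.2f} GBP per desktop")
```

At 1,000 users the cost per desktop is £50; user number 1,001 forces a second box and nearly doubles it – which is why per-desktop cost figures should always be checked at the array-capacity boundaries, not just at the vendor's preferred headcount.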
All this can make hybrid arrays a good option for VDI, because you get the low-latency and predictable flash performance, but also the lower price of disk for your volume storage.
A wider consideration is that once you have a flash tier or array to take care of its unusual I/O characteristics, VDI no longer needs an array dedicated to it. Indeed, you can look at cutting costs by consolidating multiple workloads onto a single all-flash or hybrid array – or a replicated pair, of course. It even makes the all-solid-state datacentre a real possibility.
Read more about VDI storage
- A look at the fundamentals of VDI storage and how to specify storage for likely workloads and persistent versus non-persistent desktops
- Companies looking to improve desktop performance with their VDI storage system have many options to choose from. They can use all-flash arrays, hybrid storage or flash caching