Over the years, it’s been easy to make excuses about why virtual desktop infrastructure (VDI) projects failed or why VDI wasn’t ready for your environment. The list of excuses is endless, but each one generally ends up following one of a few themes:
- You could blame storage for not being able (or being too expensive) to support the unique workload that VDI presents, even for persistent desktops;
- You could say the graphical experience always left something to be desired;
- You could argue that non-persistent desktops are impractical to deploy.
It’s time for us to put those old excuses aside and look at VDI in a new light. Don’t get me wrong: what we call “excuses” today were once valid reasons, but every one of them has been addressed in one way or another in recent years.
Storage is no longer an issue
In the early days of VDI, storage was the thorn in our side. Most people didn’t recognise the unique challenge that VDI workloads presented to storage systems. In fact, many companies realised the difference only after they started rolling out VDI.
Your run-of-the-mill SAN (storage area network) was geared towards capacity, so you’d have the storage person carve out whatever storage you thought you would need to support your shiny new VDI environment and forget about it. Then you’d do a proof of concept, followed by a pilot. You’d test on 20 or 30 users, sign off on VDI as “the platform of the future,” and start to roll it out.
Unfortunately for many companies, that’s when they learned that VDI storage isn’t as much about capacity as it is about I/O. Digging in further exposed the limitations of SANs: no matter what you did, you could only tune your environment for reads or for writes, even though VDI workloads demand both.
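As a rough illustration of why capacity-based sizing falls short, here is a back-of-the-envelope IOPS calculation. The per-desktop IOPS figure, the write-heavy split and the per-spindle figure below are illustrative assumptions, not measurements from any particular deployment:

```python
# Back-of-the-envelope VDI storage sizing: IOPS, not capacity, is the constraint.
# All figures here are illustrative assumptions, not vendor measurements.

def required_iops(desktops, steady_iops_per_desktop=10, write_ratio=0.8):
    """Total steady-state IOPS and the read/write split for a VDI pool.

    VDI steady-state traffic is famously write-heavy (often cited at around
    80% writes), the opposite of what capacity-oriented SANs were tuned for.
    """
    total = desktops * steady_iops_per_desktop
    writes = total * write_ratio
    reads = total - writes
    return total, reads, writes

total, reads, writes = required_iops(500)
print(f"500 desktops: {total} IOPS total ({reads:.0f} read / {writes:.0f} write)")

# A 15k RPM spindle delivers very roughly 180 IOPS, so even this modest pool
# needs dozens of disks for performance alone, regardless of capacity.
spindle_iops = 180  # assumed per-disk figure
print(f"Spindles needed for IOPS alone: {-(-total // spindle_iops)}")
```

Numbers like these are why a SAN sized purely on gigabytes sailed through a 20-user pilot and then collapsed in production.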
Visit BriForum 2015
If you’re interested in digging deeper into VDI, Brian Madden is hosting a conference called “BriForum” in London from 19-20 May, 2015. BriForum is a vendor-neutral, highly technical conference dedicated to end-user computing technologies like VDI. This is the event’s fourth year in London, and it will be attended by hundreds of experts who will share their stories and lessons learned from VDI projects.
Until a few years ago, this was a huge struggle. We were all asking, “When do the benefits of VDI outweigh the cost of the storage?” Fortunately, the industry solved this from the other angle: it brought the cost of delivering VDI storage down.
Today there is no shortage of vendors offering VDI storage optimisation. You have hyper-converged products like Nutanix and SimpliVity (along with offerings directly from VMware and Citrix), but there are also more than 20 other products (at various price points) that you can plug into your VDI environment without changing out your existing server hardware. Each one works slightly differently, but they all offer the ability to get both the capacity and the performance that persistent and non-persistent desktops require.
The biggest challenge with VDI storage today is choosing which vendor to use, but make no mistake—the technology exists at a price point that makes VDI storage a non-issue in 2015.
Non-persistent is great now
It wasn’t long ago that non-persistent desktops presented two huge limitations that made them impractical to deploy for VDI. The primary challenge was with applications. To pull off a fully non-persistent desktop with a single image for all users, each application had to be compartmentalised using something like App-V or ThinApp. That seemed fine, but the reality is that nothing available at the time could package and deploy 100% of applications that way. That meant you’d end up putting some applications into your base image, but as soon as you didn’t want one department having access to those applications, you’d fork the base image. Ultimately you’d wind up with a mess of base images that was more complex than just giving everyone their own image to begin with.
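The image-forking problem above is combinatorial: every app group that some departments must have and others must not can double the number of base images you maintain. A minimal sketch, with purely hypothetical app group names:

```python
# Why per-department app exceptions explode base-image counts: with n optional
# app groups baked into images, you can need up to 2**n distinct base images.
# The app group names are hypothetical, for illustration only.
from itertools import combinations

optional_app_groups = ["finance-suite", "cad-viewer", "legacy-erp"]

# Enumerate every possible subset of app groups a department might require.
images = set()
for r in range(len(optional_app_groups) + 1):
    for combo in combinations(optional_app_groups, r):
        images.add(frozenset(combo))

print(f"{len(optional_app_groups)} optional app groups -> "
      f"up to {len(images)} base images")
# prints "3 optional app groups -> up to 8 base images"
```

Three exceptions already means up to eight images to patch and test, which is exactly why the single-image promise of non-persistent VDI fell apart in practice.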
The other challenge with non-persistent was with user-installed applications. To realise the benefits of a non-persistent image (patching, upgrading, refreshing at boot), you had to lock it down so users couldn’t install their own apps. For many companies this was a showstopper. Even if they didn’t let their users install applications, taking away that autonomy was enough to halt the project.
Today there are technologies available that can deliver applications with 100% compatibility, making non-persistent VDI desktops viable. While App-V and ThinApp appear to be strictly application management platforms, at their core they solve the problem of dissimilar applications running side by side. Newer technologies like Unidesk, FSLogix, VMware App Volumes and others have been designed specifically to address the challenges of application management in VDI environments.
Of course, managing applications is just part of the problem. We also have to give some flexibility back to the users in the form of user-installed applications. There are several companies that offer some sort of product or feature to support user-installed applications in non-persistent environments, like Unidesk, AppSense, or Liquidware Labs. The real reason this is “solved” today, though, is that there just isn’t that much of a need anymore. What apps do our users need on their Windows desktop that can’t be accessed from a web browser or a smartphone? You don’t need iTunes installed on your desktop if you have an iPhone.
The user experience has caught up
The last hurdle to clear for VDI was the user experience. Until recently, even the most top-notch VDI user experience was best described as “Not quite right. But it’s fine, really. I’ll get used to it” by normal PC users, and “Ha! You really expect me to use this thing?” by designers and other employees with graphically intense workloads.
No matter what companies did to add “3D support” to their products (protocols, client software, thin clients and so on), nothing lived up to the real thing. In the days of Windows XP, the gap between a desktop that supported 3D and one that didn’t was pretty narrow, but today it’s huge. Even a £300 PC from Maplin has a GPU that can handle rendering graphics and text for Internet Explorer and Office. If you deploy VDI without a virtualised GPU today, there is a substantial difference in user experience compared with a traditional desktop. Even if your average users can’t articulate the difference, they can still see that something just isn’t right about their work desktop. Imagine how the high-end and more tech-savvy users feel.
Read more about VDI
- How do you size a VDI storage deployment to meet the performance needs of your desktop environment?
- When Frimley Park Hospital came under increasing pressure to make its IT budget go further, the IT team opted to deploy a virtual desktop infrastructure (VDI) in its A&E department
- Read a Computer Weekly special report on VDI
Today it is possible to deliver GPU-enabled virtual desktops to users with varying levels of performance. Task and knowledge workers can be given just enough of a slice of the GPU to make their desktops run and look better, while high-end users can access a virtual desktop that is as good as, or better than, their workstation-class PC. Nvidia has been talking about the technology for a while, and for the past few years has been releasing support for one platform at a time. Now both Citrix and VMware can fully support GPU and vGPU-enabled virtual desktops.
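The “slice of the GPU” idea above is simple division of a board’s framebuffer among desktops. A minimal density sketch, where the board size and per-profile framebuffer figures are illustrative assumptions modelled loosely on GRID-style vGPU profiles, not an official specification:

```python
# Rough vGPU density maths: how many virtual desktops fit on one GPU board.
# Board framebuffer and profile sizes are illustrative assumptions only.

BOARD_FRAMEBUFFER_GB = 8  # assumed total framebuffer on one physical GPU

profiles = {
    "task-worker": 0.5,       # GB of framebuffer per desktop (assumed)
    "knowledge-worker": 1.0,
    "designer": 4.0,
}

def desktops_per_board(profile):
    """In this simple model, every desktop on a board shares one profile,
    mirroring the common vGPU rule that a board runs a single profile."""
    return int(BOARD_FRAMEBUFFER_GB // profiles[profile])

for name in profiles:
    print(f"{name}: {desktops_per_board(name)} desktops per board")
```

The trade-off is visible immediately: the same board that serves 16 task workers serves only two designers, which is why sizing GPU-enabled VDI starts with classifying users by profile.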
A look to the future
These changes all indicate that VDI is finally ready for widespread adoption. You simply can’t use the old excuses anymore, and new ones are getting harder and harder to come by. Microsoft licensing usually ranks high on that list, but even Microsoft has started to bend a little by introducing a per-user licence for Windows client operating systems, as opposed to the per-device licence it has had throughout Windows’ existence. More changes are expected on the Microsoft front at or around the Windows 10 launch, so the future looks even brighter.
If VDI looks like a great option on paper, but the entire platform is too difficult for you or your company to support, there are a growing number of desktop as a service (DaaS) options available. Worldwide adoption has been slow, but as providers address the concerns that companies have, the pace should pick up.
And then there are disruptive products like HP Moonshot. Moonshot is a chassis that holds desktop cartridges, each of which contains four complete desktop computers with their own CPU, memory, networking, graphics and storage. By deploying your desktop images to each one using Citrix Provisioning Server, you get a platform that provides all the benefits of VDI without the need to worry about a hypervisor or storage.
Look around and you’ll see a lot of answers to the questions and problems of the past. VDI is more awesome than ever.
About the author:
Brian Madden is editor of BrianMadden.com and an internationally recognised expert on desktop virtualisation.