By Manek Dubash, Contributor
Implementing desktop virtualization, or virtual desktop infrastructure (VDI), requires a total rethink of storage requirements. Just as server virtualization abstracts the functions of the server from its physical box, desktop virtualization cuts the ties between a user's desktop and their local hard drive and processor. Storage is no longer local to the desktop and must be optimised for the I/O requirements of the OS, user profiles and applications.
Analysts and industry watchers expect desktop virtualization to gain traction among users. For Colin Wright, director of virtualization practice at systems integrator MTI Technology, 2010 is "a tipping point year; we've seen lots of proof of concepts." Virtualization vendor Citrix also reports that just over a quarter of UK CIOs will deploy desktop virtualization over the next year.
Key drivers toward desktop virtualization are the maturity and availability of the technology, plus the potentially costly desktop hardware implications of the move toward Microsoft Windows 7. According to Wright, the minimum PC specification next year is likely to consist of a quad-core CPU with 4 GB of RAM and a 1 TB disk, so the cost of refreshing a desktop estate could become prohibitive compared with the potential cost-effectiveness of running desktops from the data centre.
According to Steve Pinder, principal consultant at analyst firm GlassHouse Technologies (UK), "the principal benefit of desktop virtualization is that it manages end-user desktops within a centralised virtual infrastructure, making them easier to control and manage."
He added that other benefits include fast desktop provisioning, personalised desktops, and centralised security and data protection. "Desktop virtualization ... leverages the security layer inside the data centre, which is stronger than a typical desktop's protections, to safeguard data from threats," Pinder said.
The simplest method of implementing a virtual desktop infrastructure is to convert each physical desktop into a virtual image, although this runs the risk of huge storage requirements.
Dave Austin, product marketing director at Citrix, recommends that the OS, applications and personalisation profile be stored separately. This approach also implies a single, standard OS image for all users, instead of one each. "You can then be more dynamic and make sure it's more secure, that patches can all be installed centrally to all images and that you can secure corporate IP," Austin said.
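The storage penalty of cloning every desktop in full, versus keeping one master image plus per-user differences, can be seen with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not numbers from the article:

```python
# Rough, illustrative capacity arithmetic for full per-user clones
# versus a single shared master image. All figures are assumptions.

users = 1000            # assumed size of the desktop estate
os_image_gb = 40        # assumed size of a full desktop OS image
user_delta_gb = 1       # assumed per-user writable delta against a master

full_clones_gb = users * os_image_gb
single_master_gb = os_image_gb + users * user_delta_gb

print(f"Full clones:     {full_clones_gb:,} GB")    # 40,000 GB
print(f"Master + deltas: {single_master_gb:,} GB")  # 1,040 GB
```

Even with generous per-user deltas, the shared-master model consumes a small fraction of the capacity of full clones, which is why Austin's separation of OS, applications and profile matters for storage sizing.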
Joel King is infrastructure architect at Standard Bank in London, which has implemented VMware-based VDI for most of its users. He has taken a similar approach in which VMware View deploys one master OS image. "All reads are taken from that master, writes are written back and a new snapshot is provisioned onto the VM when they log off," he said. "This means that the image grows, but we force a logoff once a month so it compacts again."
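The read-from-master, write-to-delta model King describes can be sketched as a minimal copy-on-write overlay. This is a toy illustration of the idea, not VMware View's actual implementation; all names here are hypothetical:

```python
class CowDisk:
    """Toy copy-on-write overlay: reads fall through to a shared
    read-only master image; writes land in a per-user delta, which
    grows until it is discarded (e.g. at a forced logoff)."""

    def __init__(self, master):
        self.master = master  # shared, read-only block map
        self.delta = {}       # per-user writes: block -> data

    def read(self, block):
        # Prefer the user's own writes, else fall back to the master.
        return self.delta.get(block, self.master.get(block))

    def write(self, block, data):
        self.delta[block] = data  # the master is never modified

    def compact(self):
        # Forced logoff: drop the delta so the image shrinks again.
        self.delta.clear()


master = {0: "kernel", 1: "apps"}
disk = CowDisk(master)
disk.write(1, "patched-apps")
assert disk.read(1) == "patched-apps"  # user's own write wins
assert disk.read(0) == "kernel"        # untouched blocks come from master
disk.compact()
assert disk.read(1) == "apps"          # after logoff, back to the master
```

The monthly forced logoff King mentions corresponds to the `compact()` step: per-user growth is bounded because the accumulated writes are periodically discarded.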
Desktop virtualization deployment: Match storage needs
For a desktop virtualization deployment to succeed, the end-user experience should be no worse than before, and that means matching storage to the performance the virtual desktops require.
According to GlassHouse Technologies' Pinder, the key is "to understand the I/O characteristics of not only the OS but of individual applications. Once you have a thorough understanding of the storage requirements for your existing environment, you can then size desktop virtualization storage appropriately and decide whether SAN or NAS is appropriate."
Each element of a virtual desktop has separate storage requirements. In practice, this means that the master OS image, which will be accessed by all users, often concurrently, needs to sit on tier 1 storage, preferably solid-state drives (SSDs), to optimise I/O performance, especially during so-called boot storms, when, for example, a call centre shift logs on. Alternatively, you can add spindles to a disk array, but that runs the risk of adding capacity where none is needed as well as increasing energy consumption and space requirements.
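To make the boot-storm sizing concrete, a back-of-the-envelope calculation might look like the following. Every figure is an illustrative assumption (per-desktop boot IOPS and per-device IOPS vary widely by workload and hardware), not a vendor specification:

```python
import math

# Illustrative boot-storm sizing; all figures are assumptions.
desktops = 500               # e.g. a call-centre shift logging on together
boot_iops_per_desktop = 50   # assumed per-desktop IOPS during boot

peak_iops = desktops * boot_iops_per_desktop  # 25,000 IOPS at the peak

hdd_iops = 180     # assumed per 15K rpm spindle
ssd_iops = 20000   # assumed per 2010-era enterprise SSD

spindles_needed = math.ceil(peak_iops / hdd_iops)
ssds_needed = math.ceil(peak_iops / ssd_iops)
print(spindles_needed, ssds_needed)  # 139 spindles vs 2 SSDs
```

Under these assumptions, satisfying the boot-storm peak with spinning disk alone would demand far more spindles (and hence capacity, power and rack space) than the workload otherwise needs, which is the trade-off the paragraph above describes.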
"Where flash memory helps most in a desktop virtualization environment is for either scratch space or OS hosting; places where a large amount of random reads and writes are occurring," said Crosby Marks, product manager at SSD array vendor NextIO.
But solid-state drives are not essential for desktop virtualization storage. Standard Bank uses existing tier 1 arrays consisting of Hitachi Data Systems Universal Storage Platform V (USP V) systems as the team found that desktop virtualization imposed no special requirements. "We learnt from an Exchange deployment a few years ago to do enough I/O testing of the configuration on the SAN, so we baseline all storage requirements to ensure that loads are scalable," said Standard Bank's King.
As for the remaining storage requirements of desktop virtualization, such as user data, applications and profiles, Rory Clements, infrastructure architect at VMware, said usage determines the subsystem type needed.
"Customers deploy on all sorts of storage, whether NFS, iSCSI or Fibre Channel," he said. "It depends on end-user requirements. One storage model means you create different rules for different user groups, so developers and 3-D engineers might use tier 1 via Fibre Channel, while call centre workers who don't need high performance might access tier 2 JBODs over iSCSI."
MTI Technology's Wright is similarly agnostic. "The choice of storage comes down to design preferences and the skills of the individuals in the environment. You shouldn't introduce a new architecture at the same time as desktop virtualization, but work with what you have. You may not end up with the best storage, but you will make the best use of your available skills and budget."
Desktop virtualization backup and data protection
VDI deployments need little change to existing backup and data protection routines. Clements said that because the desktop elements are separated out, it's important to back up user data and profiles, and these can easily be incorporated into existing backup routines.
This is Standard Bank's approach: "All our user data is backed up using Symantec NetBackup," King said. "We already centralise user profiles, and this means we back up a bit more, but there are no real changes from before. For disaster recovery, all data servers provide high availability; we have a live/live data centre configuration with identical storage arrays in both. We cluster across them with all data mirrored so everything can be failed over, and we do that as standard for all servers."
Desktop virtualization implementation: The key to success
Success demands analysis of the existing infrastructure before heading for desktop virtualization. "I've seen so many desktop virtualization projects that have failed because of lack of planning for storage requirements," MTI Technology's Wright said. "The user needs better or same performance as today, but in reality they get worse all too often because of logon storms, so you need to plan IOPS and throughput."
Standard Bank's King agreed. "We had issues, but the key [to success] was pre-planning everything. There's a lot of different components in a desktop virtualization implementation, including the profiles, the virtualization platform, endpoint devices and the broker layer, so take it all into account before you start putting the design together."
This was first published in June 2010