Feature

How will storage infrastructure change with VDI deployments?

 

If 100 people tried to access the same piece of data simultaneously on conventional PC infrastructure, the result would be a denial of service. Yet this is precisely the access pattern virtual desktop infrastructure (VDI) creates.


There is a growing realisation among IT professionals that the way conventional storage behaves under VDI is the Achilles' heel of desktop virtualisation. Virtualising hundreds, if not thousands, of desktop computers may make sense from a security and manageability perspective. But each physical machine has its own local processing, graphics processors and storage.

Server CPUs may be up to the task of running most desktop applications, and modern VDI offers local graphics acceleration. But storage must run centrally. So if each physical PC has 120 Gbytes of local storage, a 1,000-desktop VDI deployment needs at least 120 Tbytes of enterprise storage.
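The capacity arithmetic is simple enough to sanity-check. A minimal sketch in Python, using the article's own figures (the per-PC allocation is the only input):

# Back-of-envelope VDI capacity sizing, using the figures above.
desktops = 1000
local_storage_gb = 120                 # per physical PC

total_gb = desktops * local_storage_gb
print(f"{total_gb:,} GB = {total_gb / 1000:.0f} TB")   # 120,000 GB = 120 TB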

However, even this is not enough. For a good user experience on VDI, the infrastructure must minimise latency. During the BriForum VDI conference in London in May, Ruben Spruijt, CTO at ICT infrastructure specialist PQR, said: “In VDI, storage is a big issue.” It boils down to I/O operations per second (IOPS) – the number of data reads and writes to disk.

The industry's theoretical usage models show a desktop PC spending 70-80% of its disk time on reads and only 10-30% on writes. But Spruijt believes these models badly underestimate write activity.

“In my experience a user’s PC spends 20-40% of the time doing reads, and 60-80% on disk writes,” says Spruijt. And writing to disk can be difficult in VDI.

The more IOPS virtual desktops need, the greater the cost. Consider streaming media, desktop video conferencing and any application that makes frequent disk reads and writes.
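To see why, consider the aggregate demand. A minimal sketch, assuming an illustrative 15 IOPS per desktop (a common planning figure, not from the article) and a write share within Spruijt's 60-80% range:

# Rough aggregate IOPS demand for a 1,000-desktop VDI estate.
desktops = 1000
iops_per_desktop = 15        # assumed steady-state average, for illustration
write_fraction = 0.7         # within Spruijt's 60-80% estimate

total_iops = desktops * iops_per_desktop
write_iops = total_iops * write_fraction
print(f"{total_iops:,} IOPS total, {write_iops:,.0f} of them writes")
# -> 15,000 IOPS total, 10,500 of them writes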

Josh Goldstein, vice-president of marketing and product management at XtremIO, a flash start-up recently bought by EMC, wrote in a blog post: "Since successful selling of VDI storage requires keeping the cost of a virtual machine in line with, or lower than, a physical machine, storage vendors artificially lower their IOPS planning assumptions to keep costs in line.

"This is one of the reasons many VDI projects stall or fail. Storage that worked great in a 50-desktop pilot falls apart with 500 desktops."

Innovation in disk technology

Storage expert Hamish MacArthur, founder of MacArthur Stroud, says: “If you read and write every time, it builds up a lot of traffic. One way manufacturers of disk controllers are tackling this problem is to hold the data.

“If you can hold a lot of data in the disk controller before writing to the disk, it reduces the number of writes to the disk.”

So a new breed of disk controllers is now tailored to virtualised environments. 

Some products try to sequence disk drive access to minimise the distance the disk heads need to move. Others perform data de-duplication, to prevent multiple copies of data being stored on disk. This may be combined with a large cache and tiering to optimise access to frequently used data. 
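A toy illustration of write coalescing shows why holding data in the controller helps. This is a minimal sketch with illustrative names; a real controller does this in battery-backed cache, with the ordering and durability guarantees this toy omits:

# A minimal sketch of write coalescing: the controller holds writes in
# memory so repeated updates to the same block reach the disk only once.
class CoalescingWriteBuffer:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # a dict standing in for disk
        self.pending = {}                   # block number -> latest data

    def write(self, block, data):
        # Overwrites to the same block are absorbed in memory.
        self.pending[block] = data

    def flush(self):
        # One physical write per dirty block, regardless of how many
        # logical writes the block received.
        for block, data in self.pending.items():
            self.backing_store[block] = data
        physical_writes = len(self.pending)
        self.pending.clear()
        return physical_writes

disk = {}
buf = CoalescingWriteBuffer(disk)
for _ in range(100):
    buf.write(block=7, data=b"latest")   # 100 logical writes...
print(buf.flush())                       # ...one physical write: prints 1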

Today, the most talked-about breakthrough in disk technology is the solid state disk (SSD), which can be used as tier-one storage to maximise IOPS.

SSDs from companies such as Kingston improve VDI performance by boosting IOPS.

Graham Gordon is operations director at ISP and datacentre company Internet for Business, which provides networking and datacentre services to mid-market energy, oil and gas, renewables and professional services firms. The company is expanding its product portfolio to offer clients VDI.

Gordon says: "SSD is still relatively expensive but it is getting cheaper. There is still some way to go before it becomes an option for companies in the mid-market."

Exploiting SSD flash niche

At the EMC World conference in Las Vegas in May, EMC revealed elements of the forthcoming product resulting from its acquisition of XtremIO.

It will use the start-up's technology to create an entirely flash-based storage array which, whilst expensive, will deliver far higher performance than traditional disk drives.

Goldstein claimed it could achieve virtually unlimited IOPS; in a short demonstration, the array reached 150,000 write IOPS and 300,000 read IOPS.

The key to the box is its ability to scale out by linking up with other XtremIO arrays. In the keynote, Goldstein presented eight working together as a cluster, with the potential to achieve more than 2.3 million IOPS.

He also showed the array creating 100 10TB volumes in 20 seconds, configuring 1PB overall.

However, these flash-based technologies and solid state drives are too expensive to run as primary storage in enterprise environments. Instead, companies are deploying tiered storage arrays, using SSD for immediate access to important data and cheaper hard drives for mass storage.

Taking this a step further, in the Ovum report, 2012 Trends to watch: Storage, Tim Stammers notes that storage vendors are planning to create flash-based caches of data physically located inside servers, to eliminate the latency introduced by SANs. 

“Despite their location within third-party servers, these caches would be under the control of disk arrays. EMC has been the most vocal proponent of this concept, which is sometimes called host-side caching. EMC's work in this area is called Project Lightning," writes Stammers.

Project Lightning has since become a product called VFCache, which places flash memory on a PCIe card that plugs into the server. A copy of frequently used data is held at the server itself, so reads can be served before requests ever reach the storage array, boosting performance yet again.
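The principle behind host-side caching is the same as any read cache, just placed on server-local flash. A minimal sketch, with illustrative names, in which an LRU cache stands in for the PCIe card:

# A minimal sketch of host-side read caching: serve repeat reads from
# server-local flash instead of crossing the SAN. Names are illustrative.
from collections import OrderedDict

class HostSideCache:
    def __init__(self, capacity_blocks, fetch_from_array):
        self.capacity = capacity_blocks
        self.fetch = fetch_from_array     # callable: block -> data (slow path)
        self.cache = OrderedDict()        # stands in for the PCIe flash card

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block) # cache hit: no SAN round trip
            return self.cache[block]
        data = self.fetch(block)          # miss: go to the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

cache = HostSideCache(capacity_blocks=1000, fetch_from_array=lambda b: f"data-{b}")
cache.read(42)   # first read fetches from the array
cache.read(42)   # second read is served locally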

Although EMC has its offering, the biggest name in this sector at the moment is FusionIO, whose flash memory also plugs into the server. FusionIO believes the server is the only place flash belongs, in contrast to EMC's strategy of putting flash in a number of locations from end to end, depending on the application being run.

The impact of Raid technology 

SSD will improve storage performance, but just adding SSD at tier 1 may prove a waste of money unless the full storage architecture is taken into account.

Redundant array of independent disks (Raid) technology provides storage fault tolerance, but it also increases the number of I/O operations each write consumes. According to Citrix, the Raid configuration affects how many write IOPS are available, due to the different types of redundancy in place.

This write penalty reduces the overall IOPS available from each disk spindle: Raid 1 and Raid 10 carry a penalty of two IOPS per write, while Raid 5 carries a penalty of four or five IOPS, depending on the number of disks.

In an example on its website, Citrix states that a system with eight 72GB 15K SCSI3 drives in a Raid 10 configuration would have 720 functional IOPS and would support 51 VDI users.
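Those figures can be reproduced with the write-penalty formula above. In the sketch below, the 180 IOPS per 15K spindle and the worst-case all-write assumption are ours, not Citrix's published workings, but they yield the same numbers:

# Functional IOPS after the Raid 10 write penalty, per the example above.
drives = 8
raw_iops_per_drive = 180        # assumed planning figure for a 15K disk
write_penalty = 2               # Raid 10: every logical write costs 2 I/Os

raw_iops = drives * raw_iops_per_drive            # 1,440 raw IOPS
functional_iops = raw_iops / write_penalty        # all-write worst case
print(functional_iops)                            # -> 720.0

iops_per_vdi_user = 14          # assumed; 720 / 14 gives roughly 51 users
print(int(functional_iops // iops_per_vdi_user))  # -> 51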

So, given the expense of deploying SSD, IT departments building a VDI must weigh up the level of fault tolerance appropriate for desktop users. Desktop software does not need the same resilience as an application deployed enterprise-wide, which may be deemed mission-critical and so require the highest level of resilience the organisation can afford.

In a standard desktop environment of 1,000 physical PCs, for example, if one user's machine fails, the impact on the business is minimal. The PC can always be rebooted or fixed, and the affected user may be able to carry on working on another PC, depending on how the desktop infrastructure has been configured.

However, with VDI, all 1,000 users will be affected by a failure and no-one will be able to do any work with their PCs until the infrastructure is back up and running. 

VDI makes desktop PC applications mission-critical. Can the IT department using VDI afford not to run a high level of resilient storage?




This was first published in May 2012

 
