Feature

What is driving desktop virtualisation?

Thin-client computing has been around for years but, until recently, it failed to make progress in the enterprise. Now it is a top priority.

Thin-client computing – or more accurately, server-based computing – has been around for years. Citrix entered the market in 1993 with a product based on Novell’s NetWare, DOS and Quarterdeck’s Expanded Memory Manager (QEMM). In 1995, it shipped WinFrame, its first native Windows server-based computing product.

The idea behind server-based computing was to provide access to a virtual desktop held on another machine (a shared server, rather than a PC) from a different device (at the time a specialised thin client, but now anything from a PC or laptop through to a tablet or smartphone). However, in the early days, server-based computing was essentially used for task workers doing repetitive jobs in a single application. Employee mobility was low, remote connectivity slow and costs high – and organisations preferred to let highly mobile sales people and field engineers carry their world with them on dedicated laptops.

Providing task workers with specialist thin clients and consolidating the applications in a common place in the datacentre allowed greater control over what those workers could do. Hot-desking could be implemented, as the remote desktop environment was not tied to the access device itself. Server-based computing made inroads in areas such as contact centres and claims management departments – but did not fare well elsewhere.

Why desktop virtualisation version 1 failed

Attempts to move server-based computing into other areas of the organisation hit problems. The lack of voice and video capabilities in thin clients, and the lack of support for redirected printing and local USB devices, meant that users who needed more functionality than task workers did not take to the technology.

Where server-based computing was implemented as a distributed office solution supporting remote and branch offices, wide area network performance issues meant the response times experienced by users fell short of expectations.

Even as mobile connectivity improved, the need for an always-on connection meant executives could not work on planes. Field engineers and sales executives often found themselves unable to get a sufficiently robust and fast connection to do their jobs – often just when they most needed one.

Another issue found by technologists – even if it remained hidden from the general user – was that moving the workload from the client to the server did not always live up to the promises made in the sales pitch.

Energy and scale

Many organisations kept their PCs and used them as thin clients to access the remote desktop. This meant the expected energy savings of moving from a desktop drawing 75W or more to a thin client drawing 10W or less were not realised – yet a new server farm was still required to run the desktop images, consuming even more energy.
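
To see why the expected savings often failed to appear, consider a back-of-the-envelope calculation. The sketch below uses entirely hypothetical figures (estate size, wattages, host count and working hours) purely to illustrate the trade-off.

```python
# Illustrative only: the desk-side saving only materialises if 75W PCs are
# actually replaced by ~10W thin clients, and the new server farm eats into
# whatever is saved. All figures are assumptions.

WORK_HOURS = 8 * 220         # assumed hours per access device per year
SERVER_HOURS = 24 * 365      # datacentre hosts run around the clock

def annual_kwh(watts: float, count: int, hours: int) -> float:
    """Annual consumption in kWh for `count` devices drawing `watts` each."""
    return watts * count * hours / 1000.0

desktops = 500                                            # assumed estate size
pcs_as_clients = annual_kwh(75, desktops, WORK_HOURS)     # PCs reused as access devices
thin_clients   = annual_kwh(10, desktops, WORK_HOURS)     # dedicated low-energy thin clients
vdi_hosts      = annual_kwh(400, 10, SERVER_HOURS)        # assumed 10 hosts averaging 400W

print(f"PCs as clients + hosts: {pcs_as_clients + vdi_hosts:,.0f} kWh/year")
print(f"Thin clients + hosts:   {thin_clients + vdi_hosts:,.0f} kWh/year")
```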

Early implementations could sustain only a few desktops per physical server, so large installations of server-based computing required very large server farms.

Citrix went on an acquisition spree, buying companies such as Sequoia Software, NetScaler and XenSource – and, more recently, Cloud.com and App-DNA – to improve its capabilities.

Others entered the market, including VMware, which also made acquisitions, such as Thinstall. VMware launched View as a direct competitor to Citrix, built on vSphere and touting “desktops in the cloud”.

Citrix and VMware continue to battle for the minds of buyers, and the choice between them tends to come down to a buyer’s starting point. If the systems are going to remain under the management of a dedicated “desktop team”, the purchase generally goes to Citrix.

If the systems are to be managed as part and parcel of the datacentre itself, then VMware tends to be the choice, as the server team will generally already be familiar with using VMware for virtualisation.

The impact of virtualisation

Improvements in server technology and the use of virtual machines, rather than clustering, meant more remote desktops could be supported per server.

Suppliers such as Nutanix have introduced highly effective all-in-one appliances that provide the performance required to manage virtual machine workloads at critical times – for example, when everyone arrives at work at around the same time and accesses their desktops, causing a spike of activity known as a “boot storm”.
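
A rough sizing sketch helps show why the boot storm is the stress point. All the figures below (desktop count, per-desktop IOPS, the share of staff logging on together) are assumptions for illustration only.

```python
# Illustrative only: why a "boot storm" stresses shared storage far more
# than steady-state working. All figures are assumptions.

desktops = 1000             # assumed number of virtual desktops on the cluster
steady_iops = 10            # assumed storage IOPS per desktop once up and running
boot_iops = 200             # assumed IOPS per desktop while its OS is booting
booting_share = 0.5         # assumed share of the estate logging on at once

steady_total = desktops * steady_iops
storm_total = (desktops * booting_share * boot_iops
               + desktops * (1 - booting_share) * steady_iops)

print(f"Steady state: ~{steady_total:,.0f} IOPS")
print(f"Boot storm:   ~{storm_total:,.0f} IOPS "
      f"(about {storm_total / steady_total:.0f}x steady state)")
```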

Backed by improvements in connectivity and wide area network and wireless performance, server-based computing is now making greater inroads into organisations and has moved away from just being seen as something for the task worker.

Additional improvements in how the remote desktop itself performs mean the massive server farms of old are no longer required – a single virtualised server rack can now serve hundreds to thousands of desktop images, and support a hybrid delivery model as well.

The changing ecosystem around the main vendors of thin-client computing is ushering in a new era of server-based computing. No longer is the choice a binary one between everything being held on the client device and everything being held as a server-based image. Now the intelligence of the client device can be exploited – and it does not have to be a Windows- or Linux-based machine.

Using client-side virtualisation, parts of a desktop can be streamed from a server to the device, so that a given application runs in a secure environment where controls can still be applied. For example, the compute power of the client device can be used while ensuring data cannot be cut and pasted between the application and the device’s local storage, or vice versa. This maintains high levels of security while providing a good user experience in terms of application performance and response.

Data can also be stored encrypted on the device. Combining this with digital rights management (DRM) from the likes of Adobe and EMC, and data leak prevention (DLP) from the likes of Symantec, Trend Micro, McAfee and Check Point Software, means data can be held safely and staff can work with it in a manner unlikely to compromise the organisation.
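
As a simple illustration of the encrypted-at-rest principle – and only the principle; this is not how the DRM or DLP products named above work internally – a few lines of Python using the third-party cryptography package show the idea:

```python
# Minimal sketch of encrypting data at rest on a client device using the
# third-party `cryptography` package (pip install cryptography). Purely
# illustrative; key issue and escrow would be handled by central policy.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, issued and held centrally
cipher = Fernet(key)

document = b"Quarterly forecast - commercially sensitive"
stored_blob = cipher.encrypt(document)     # this is what sits on local storage

# Only a client holding the key can recover the plaintext
assert cipher.decrypt(stored_blob) == document
```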

Remote desktop transparency

However, the main change in the user experience is in the transparency of the virtual desktops that can be provided. This is where the likes of Centrix Software, AppSense and RES Software come in.

These companies offer easier ways to identify common application usage patterns; advise on what would make good “golden images” (desktop images that can be shared between a set of people); and implement these in the best possible way.

By blending the mix of local, streamed, virtualised and server-side applications and functions, the desktop provided to the end user looks like a single, cohesive system.

Backing this up through licence management and self-service application provisioning means employees have greater control of their environments and get the best possible performance from their systems.

Fully managed server-based computing also means that a bring your own device (BYOD) strategy is not just possible, but positively encouraged. The device becomes just an access mechanism, with everything the individual does on it for the organisation controlled completely from the centre.

What’s in it for the IT team?

Server-based computing has come of age. The technology is now highly advanced, and buyers should be looking for where the real business value lies. This can be in areas such as the ability to patch and update images en masse, or full licence management to ensure that over- or under-licensing is not occurring.
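
As a trivial illustration of the licence-management point, the sketch below – with made-up product names and counts – compares what is deployed across the desktop images with what has actually been purchased:

```python
# Hypothetical data: reconcile deployed software against purchased licences
# to flag under-licensing (compliance risk) and over-licensing (shelfware).

entitlements = {"office_suite": 500, "cad_package": 25, "pdf_editor": 100}
deployed     = {"office_suite": 512, "cad_package": 18, "pdf_editor": 100}

for product, owned in entitlements.items():
    in_use = deployed.get(product, 0)
    if in_use > owned:
        print(f"{product}: under-licensed by {in_use - owned} seats (compliance risk)")
    elif in_use < owned:
        print(f"{product}: over-licensed by {owned - in_use} seats (shelfware)")
    else:
        print(f"{product}: in balance")
```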

The capability for Apple iOS, Android and other non-Windows systems to participate as fully as possible in a hybrid server-based environment should also be sought, as this enables BYOD.

Finally, don’t fall for the argument that server-based computing is all about cost. The transfer of energy consumption from end devices to the datacentre, combined with the additional systems management and maintenance needed in the datacentre, can lead to extra costs.

Strangely, it may be more cost-effective in the short to medium term to keep existing PCs as clients, running whatever operating system is already on them (for example, Windows NT).

The devices can then be replaced with low-energy thin clients as they fail, extracting more value from the PCs. However, organisations must be able to identify the sweet spot, where the cost of managing and maintaining such PCs outweighs the cost of replacing them even before they fail.
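
That sweet spot can be found with simple arithmetic. The sketch below uses entirely hypothetical per-seat costs to show the kind of break-even comparison involved:

```python
# Hypothetical per-seat figures: when does keeping an ageing PC as an access
# device start to cost more than replacing it with a thin client outright?

pc_support_per_year = 150.0    # assumed annual cost of maintaining an old PC
tc_support_per_year = 40.0     # assumed annual cost of managing a thin client
tc_purchase = 200.0            # assumed one-off price of a thin client

for years in range(1, 6):
    keep_pc = pc_support_per_year * years
    replace = tc_purchase + tc_support_per_year * years
    note = "  <- replacement now cheaper" if replace < keep_pc else ""
    print(f"Year {years}: keep PC £{keep_pc:.0f} vs replace £{replace:.0f}{note}")
```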

What server-based computing should be about is the capability to better manage and secure the organisation’s intellectual property in a manner which enables the user to work flexibly. And if this happens, the result will be a more cost-effective system. ■


 

Clive Longbottom is a director of analyst organisation Quocirca



This was first published in July 2012