Server virtualisation is very useful in extending the capability of commodity server technology to fulfil many functions. But, as the name implies, server virtualisation works best when it enables processor complexes to be shared by different applications. Memory management can then draw on pooled central memory resources, and applications can be moved to other server platforms in the case of hardware failure.
However, many more functions are required to run system complexes, server stacks and applications. Data management must address such issues as ensuring no data is lost and that recovery is rapid if there are disk errors; when these functions are handled by disk arrays, the servers are freed for server and application management. With dedicated disk arrays embodying these and other functions, including improved disk utilisation, virtualised storage arrays work in combination with server virtualisation.
Businesses should never underestimate the importance of access to all the data they need to support customer services, supplier tracking, business management and communications. Of course, hypervisors such as VMware ESXi and Hyper-V are able to read from and write to disk and carry out a range of I/O functions.
Server execution and storage resource management
But these platforms tend to treat disk drives as just a bunch of disks (JBOD). While acceptable for running and managing systems with a hypervisor-centric mindset, this approach depends on all the execution being completed by the servers, and it is not the best way to manage and secure storage resources. Given the application demands on servers and the memory requirements of the applications, using the servers to execute all the additional I/O and disk management leads to service issues, as performance is impacted and secure operations are put at risk.
Dedicated disk arrays such as IBM’s XIV and others perform functions that would otherwise have to be carried out by the operating system or the hypervisor, work that puts extra pressure on the virtualisation platforms.
Functions that are sometimes overlooked include:
- Striping data across the disks when writing to disk arrays (RAID is one example of this) so that if a disk drive fails, data is not lost and there is no need to fall back on a copy taken some time earlier, perhaps 24 hours or more before.
- Using commodity disk technologies to contain costs, and queuing writes to disk, especially random writes, to improve I/O performance and reliability. Queuing masks disk performance limits caused by electromechanical latency factors: rotation speed and the time it takes the read head to find the track.
- Storing frequently accessed data in the cache memory of a disk array in higher-performance systems. If this were not a function of the disk controller, it would have to be done within the server complex, which would require even higher-performance CPUs and many more megabytes or terabytes of expensive central memory.
- Using snapshots, or point-in-time images, of the changing data on the disk drives to speed up system recovery in case of system failure. When this function is folded into a hypervisor, its complexity increases because the servers must manage the activity. To maintain system performance, leave it to a disk controller.
- Providing data reduction techniques such as compression and data deduplication to maximise the investment in storage systems. Again, this function is best kept as part of the storage system rather than embedded in the hypervisor.
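The striping point in the first bullet can be made concrete with XOR parity, the mechanism behind RAID 4/5-style protection: a parity block is written alongside the data blocks, and any single lost block can be rebuilt from the survivors. A minimal sketch, not any vendor's actual implementation:

```python
# Minimal sketch of XOR parity as used in RAID 4/5-style striping.
# If any single data block is lost, XORing the survivors recreates it.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Stripe three data blocks across "disks" and compute a parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: rebuild its block from the others plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]  # the lost block is recovered exactly
```

A real array does this in the controller, per stripe and in hardware or firmware; the point of keeping it there is precisely that the server's CPUs never see this work.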
Why are the hypervisors looking to take over these functions? The answer has more to do with “owning” control over all aspects of system operation than with implementing simple, secure operations. Of course disk arrays need to operate with virtualised servers, and they do. Building sophistication into all systems improves reliability and ease of use.
When hotspots appear, whether because of application demand or because the system load is realigned after a failure in system components, there can be a significant impact on service levels to users. Response times become unreasonable, data does not reach the application or user in time, and customers move their orders elsewhere. These unplanned demands on disk and I/O resources can also arise from the way resources are provisioned.
Tracking data in the cloud
Managing these hotspots before they become a problem for system managers is important, and dedicated disk arrays help IT pros do so. Tracking data and the related applications is critical; it is something that must be managed carefully as systems move into cloud environments. JBOD is not enough for managing data on disk arrays within virtualised environments.
Hypervisor vendors often recommend direct-attached storage; it gives them more control over the system complex and the dependent applications. But the same systems can work equally well in networked storage environments, where the data resides on networked disk arrays. That gives users options over which storage and server resources they acquire, and from whom. It also assists with disaster recovery practices: data can be replicated between disk arrays without depending on the server complex.
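Array-to-array replication of this kind keeps the server out of the data path: the array tracks which blocks have changed and ships only those to its peer. The toy sketch below stands in for that change tracking by comparing block checksums; the function names and block size are illustrative, not any product's API:

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size, for illustration only

def changed_blocks(primary, replica):
    """Return indices of blocks whose content differs between the two arrays.
    Comparing digests stands in for the change tracking a real array keeps."""
    changes = []
    for i in range(0, len(primary), BLOCK_SIZE):
        p = hashlib.sha256(primary[i:i + BLOCK_SIZE]).digest()
        r = hashlib.sha256(replica[i:i + BLOCK_SIZE]).digest()
        if p != r:
            changes.append(i // BLOCK_SIZE)
    return changes

def replicate(primary, replica):
    """Copy only the changed blocks from primary to replica."""
    out = bytearray(replica)
    for idx in changed_blocks(primary, bytes(out)):
        start = idx * BLOCK_SIZE
        out[start:start + BLOCK_SIZE] = primary[start:start + BLOCK_SIZE]
    return bytes(out)

primary = b"AAAABBBBCCCC"
replica = b"AAAAXXXXCCCC"
assert changed_blocks(primary, replica) == [1]   # only the middle block differs
assert replicate(primary, replica) == primary    # replica now matches primary
```

Sending only changed blocks is what makes array-based replication practical over a WAN link for disaster recovery, and none of this work lands on the application servers.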
As the volume of data continues to grow at 50%+ every year and organisations find that budgets have to be contained or reduced, the challenges related to managing and accessing large data volumes will also increase. IT managers will have to deploy new techniques in all areas of the system architecture. Beware of asking too much of the hypervisor and the resulting demands on the CPUs.
Be sure you use your hypervisor correctly and that it is supported with dedicated disk arrays to manage the workload associated with storing, retrieving and securing data on disks. Such a strategy brings the following benefits:
- As the system infrastructure evolves, assigning storage tasks to dedicated disk arrays rather than straining virtual servers will pay off. Server capacity can be devoted to server, memory and application management, getting the most out of the investment in servers and delivering secure processing platforms.
- Virtualised disk arrays deliver state-of-the-art data management capability.
- As data management, system performance and system recovery requirements evolve, address them all appropriately with investments in server and/or storage arrays.
- With investment budgets tightly scrutinised in the current economic climate, IT pros will be able to make investment decisions on server and storage developments separately, and base purchase decisions on their specific IT problems.
Hamish Macarthur is the founder of Macarthur Stroud International, a research and consulting organisation specialising in the technology markets.