These days, the storage demands of users and applications are spiralling out of control in many enterprises. Additional disks and storage subsystems are frequently implemented to meet demand, but increased storage resources carry a heavier management penalty -- storage administrators must remember which applications are tied to which storage, allocate sufficient storage to accommodate future growth and manually track performance. Over time, this inefficient manual process leads to wasted storage space.
Storage virtualisation alleviates these traditional problems by implementing a layer of abstraction between applications and physical storage, allowing storage to be combined and treated as a ubiquitous resource, regardless of location. While storage can be virtualised in a variety of ways, the ultimate objective is to improve storage utilisation (reducing or forestalling capital expenditures) and enhance the effectiveness of storage management. Aside from considering the obvious issues of pricing and support, here are some other important points.
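The abstraction layer described above can be pictured as a mapping between virtual volumes and the physical devices behind them. The following Python sketch is purely illustrative -- the class, device names and allocation policy are assumptions for this example, not any vendor's API -- but it shows the core idea: a volume larger than any single device can still be carved from the combined pool, and applications never see the physical layout.

```python
# Illustrative sketch of a storage virtualisation abstraction layer.
# All names (StoragePool, device names) are hypothetical, not a real product API.

class StoragePool:
    def __init__(self):
        self.devices = {}   # physical device name -> free capacity (GB)
        self.volumes = {}   # virtual volume name -> list of (device, GB) extents

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    def total_free(self):
        return sum(self.devices.values())

    def create_volume(self, name, size_gb):
        """Allocate a virtual volume, spanning physical devices if necessary."""
        if size_gb > self.total_free():
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        # Simple policy: fill the devices with the most free space first.
        for dev in sorted(self.devices, key=self.devices.get, reverse=True):
            if remaining == 0:
                break
            take = min(self.devices[dev], remaining)
            if take:
                self.devices[dev] -= take
                extents.append((dev, take))
                remaining -= take
        self.volumes[name] = extents
        return extents

pool = StoragePool()
pool.add_device("array_a_lun0", 500)
pool.add_device("array_b_lun3", 300)
# A 700 GB volume fits on neither device alone, but the pool hides that:
print(pool.create_volume("db_vol", 700))
```

Real products track far more (extent maps at block granularity, I/O redirection, metadata persistence), but the virtual-to-physical mapping table is the common thread across host-, array- and fabric-based approaches.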
Understand what storage virtualisation must bring to the enterprise. Storage virtualisation offers numerous potential benefits to the enterprise, but it's important to identify the benefits that are needed by your particular organisation. Improved storage utilisation, capital cost savings and ease of management are the three most common benefits, but there are other advantages that appeal to storage administrators. For example, virtualisation can ease disaster recovery (DR) or business continuity planning by allowing nonidentical hardware between sites. It can handle data migration between storage platforms and sites. Virtualisation also eases storage capacity expansion, often automating many of the manual tasks needed to allocate storage to applications when needed.
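The automated capacity expansion mentioned above usually amounts to a policy loop: watch a volume's utilisation and grow it from the shared pool when it crosses a threshold, instead of waiting for an administrator to provision a new LUN. This minimal sketch is an assumption-laden illustration -- the 80% threshold, the 100 GB growth step and the function name are invented for the example, not taken from any product.

```python
# Hypothetical sketch of threshold-based auto-grow, the kind of manual task
# storage virtualisation can automate. Threshold and step sizes are illustrative.

def maybe_expand(volume_used_gb, volume_size_gb, pool_free_gb,
                 threshold=0.80, grow_step_gb=100):
    """Return the new (volume_size_gb, pool_free_gb) after an auto-grow check."""
    if volume_size_gb and volume_used_gb / volume_size_gb >= threshold:
        grow = min(grow_step_gb, pool_free_gb)  # never grow past pool capacity
        return volume_size_gb + grow, pool_free_gb - grow
    return volume_size_gb, pool_free_gb

# 850 GB used of 1000 GB is 85% utilised, so the volume grows by 100 GB:
print(maybe_expand(850, 1000, 400))   # (1100, 300)
```

The value of the virtualisation layer here is that the growth draws on pooled capacity from any array, so the policy does not care which physical device ultimately supplies the space.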
Weigh the added complexity. While storage virtualisation can ease some complexity problems, it can also add others. Many virtualisation products require additional hardware or software in the infrastructure. Beyond any new servers or appliances, determine what host device drivers, path managers or shims are needed to support the prospective virtualisation product. Maintenance is usually the first area to feel the pinch -- it's easy for IT staff to become bogged down patching and updating a myriad of storage virtualisation servers when hardware is replaced or new versions become available. Inadequate attention to maintenance can result in version disparity, leading to stability and performance problems. Evaluate any storage virtualisation product from a management and maintenance perspective, and determine whether the problems it solves outweigh the new issues it introduces.
Know where storage virtualisation best fits in your environment. There are typically three means of implementing storage virtualisation: host-based, array-based and fabric-based. Host-based virtualisation relies on software, installed on host servers, that monitors data traffic and storage; Veritas Storage Foundation from Symantec Corp. is an example of this type of product. Dedicated appliances, such as the File Director 7200 appliance from NeoPath Networks Inc., follow a similar direction. Array-based virtualisation integrates the technology directly into the storage array itself, as in a TagmaStore array from Hitachi Data Systems Inc. (HDS). More recently, growing attention has been paid to fabric-based virtualisation, which runs dedicated software on intelligent switch devices. Each approach offers unique advantages and disadvantages that can affect its performance, scalability, cost and reliability.
Consider the scalability. Every virtualisation product has a finite limit on the amount of storage it can support, so understand the tradeoff between scale and performance. This is particularly treacherous because many virtualisation initiatives begin as test or pilot projects before broader rollout across the enterprise; consequently, scaling issues may not appear until later in the deployment cycle. An up-front evaluation of scaling can help to weed out unacceptable products and allow network administrators to plan for future infrastructure updates.
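One concrete way to see the scale-versus-performance tradeoff is in the virtual-to-physical mapping metadata itself: the finer the extent granularity, the larger the mapping table the virtualisation layer must search or cache on every I/O. The arithmetic below is a back-of-envelope illustration with invented figures, not vendor data.

```python
# Back-of-envelope sketch: mapping-table size grows as extent granularity
# shrinks. Pool size and extent sizes here are illustrative assumptions.

def mapping_entries(pool_tb, extent_mb):
    """Number of extent-map entries needed to cover a pool of pool_tb terabytes."""
    pool_mb = pool_tb * 1024 * 1024   # TB -> MB
    return pool_mb // extent_mb

# For a hypothetical 100 TB pool:
for extent_mb in (1024, 64, 4):
    print(extent_mb, "MB extents:", mapping_entries(100, extent_mb), "entries")
```

Coarse extents keep the table small but waste space on small volumes; fine extents use capacity efficiently but multiply the metadata the product must manage -- one reason scaling limits often surface only after a pilot grows into production.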
Consider the interoperability. The promise of cross-vendor storage utilisation has been a compelling feature of this technology, but true heterogeneous support is still lacking in virtualisation environments. For example, array-based virtualisation typically locks in the array vendor. Host-based and fabric-based virtualisation products also impose a certain amount of vendor lock-in with the software or appliance that embeds the software. Virtualisation adopters should closely investigate potential products to determine their compatibility within the current environment. Also, consider compatibility with likely upgrades or updates into the future.
Test and start small. Analysts typically recommend a thorough lab evaluation of any storage virtualisation product prior to any purchase commitment -- including decommissioning drills. Once a purchase decision is actually made, the best advice is to start implementation on a small scale and then build out the virtualisation systematically. This conservative approach allows ample time for administrators to become accustomed to virtualisation management and prevents unforeseen problems from crippling an entire data centre.
Understand how to undo the implementation. Storage virtualisation isn't perfect: performance issues, scalability limitations and interoperability problems are just a few reasons to decommission a virtualisation product. Similarly, an organisation may choose to discontinue one product in favour of a more appropriate one. Unfortunately, backing out of virtualisation is extremely disruptive to applications -- and typically confusing to administrators. Before committing to a virtualisation product, discuss any back-out options with the vendor.