Important pros and cons
Storage virtualisation works by adding a layer of abstraction -- typically software -- between storage systems and the applications that use them. Applications no longer need to know which disks, partitions or storage subsystems hold their data. When implemented properly, storage utilisation can improve to 80% or better. Virtualisation can also improve availability: without it, an application is tied to specific storage resources, and any interruption to those resources adversely affects the application. With storage virtualisation, the application is decoupled from the physical implementation of its storage.
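The abstraction described above can be pictured as a mapping table that resolves logical addresses to physical ones. The following is a minimal, hypothetical sketch -- the class and method names are illustrative, not a real product's API -- showing how data can move between physical devices while the application's logical view stays unchanged:

```python
# Illustrative sketch of a storage virtualisation layer: applications
# address a logical volume, and a mapping table resolves each logical
# extent to whatever physical device currently holds it.

class VirtualVolume:
    def __init__(self, name):
        self.name = name
        # logical extent number -> (physical device, physical extent)
        self.extent_map = {}

    def map_extent(self, logical, device, physical):
        self.extent_map[logical] = (device, physical)

    def read(self, logical):
        device, physical = self.extent_map[logical]
        return f"read {self.name}[{logical}] from {device} extent {physical}"

    def migrate_extent(self, logical, new_device, new_physical):
        # Data can be moved to different hardware; the application's
        # logical address stays the same, so it never notices the change.
        self.extent_map[logical] = (new_device, new_physical)


vol = VirtualVolume("app_data")
vol.map_extent(0, "arrayA", 17)
print(vol.read(0))   # served from arrayA
vol.migrate_extent(0, "arrayB", 4)
print(vol.read(0))   # same logical address, new physical location
```

The point of the sketch is the `migrate_extent` step: because the application only ever sees the logical address, the physical placement of data can change without any reconfiguration on the application side.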
Storage virtualisation can help to automate storage capacity expansion. Instead of manual provisioning, virtualisation can apply policies that assign more capacity to applications as needed. In addition, storage virtualisation allows storage resources to be altered and updated on the fly without disrupting applications, generally reducing storage downtime for repairs and maintenance.
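A capacity-expansion policy of the kind described above can be sketched in a few lines. Everything here -- the class name, the 80% threshold, the 100 GB growth increment -- is an assumed example, not a vendor's actual policy engine:

```python
# Hypothetical sketch of policy-driven capacity expansion: when a volume
# crosses a utilisation threshold, the virtualisation layer grows it from
# a shared free pool instead of waiting for an administrator to provision
# more storage by hand.

class CapacityPolicy:
    def __init__(self, threshold=0.8, grow_gb=100):
        self.threshold = threshold   # expand when the volume is 80% full
        self.grow_gb = grow_gb       # size of each automatic expansion

    def apply(self, used_gb, size_gb, pool_free_gb):
        """Return (new_size_gb, new_pool_free_gb) after applying the policy."""
        if size_gb > 0 and used_gb / size_gb >= self.threshold:
            grant = min(self.grow_gb, pool_free_gb)
            return size_gb + grant, pool_free_gb - grant
        return size_gb, pool_free_gb


policy = CapacityPolicy(threshold=0.8, grow_gb=100)
# An 850 GB-used, 1000 GB volume is over the 80% threshold, so it grows.
size, pool = policy.apply(used_gb=850, size_gb=1000, pool_free_gb=500)
print(size, pool)  # 1100 400
```

In practice such policies run continuously inside the virtualisation layer, which is what removes the manual provisioning step the paragraph above contrasts against.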
The trade-off is complexity: the virtualisation layer is one more element of the storage environment that must be managed and maintained as virtualisation products are patched and updated. It is also important to consider the impact of storage virtualisation on interoperability and compatibility between storage devices. In some cases, the virtualisation layer may interfere with special features of storage systems, such as remote replication.
Another issue with storage virtualisation is the difficulty of undoing or "backing out" once virtualisation has been implemented. It is not impossible, but reassociating applications with their physical storage locations can be painful. Consequently, experts suggest implementing virtualisation piecemeal -- starting with a limited deployment and then systematically building out across the data center and the entire organisation.
Finding the right virtualisation point
Storage virtualisation can be implemented at the host level, the network level or the storage system level. Host-based virtualisation is the easiest and most straightforward out-of-band method, but it scales poorly, and maintaining virtualisation servers can be troublesome, especially if an agent must be installed and maintained on each host. Conversely, storage virtualisation can be accomplished in the storage array itself (e.g. a TagmaStore system from Hitachi Data Systems). This offers convenience, but such vendor-centric deployment is generally not heterogeneous.
Today, the most popular point of implementation for storage virtualisation is in the network fabric itself -- often through a dedicated virtualisation appliance or an intelligent switch running virtualisation software, such as IBM's SAN Volume Controller (SVC) software. Network-based storage virtualisation is the most scalable and interoperable point of deployment -- making it particularly well suited to storage consolidation projects -- but there may be a slight impact on network performance due to in-band processing in the virtualisation layer.
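The in-band behaviour described above can be illustrated with a toy model: every I/O transits the appliance, which resolves a virtual target to a backend array before forwarding. The names and routing table here are entirely hypothetical; the sketch only demonstrates why the extra hop introduces the slight performance cost mentioned:

```python
# Illustrative sketch of an in-band, network-based virtualisation appliance.
# Heterogeneous arrays can sit behind one set of virtual targets, which is
# what makes this deployment point well suited to consolidation; the cost
# is that every request incurs a lookup-and-forward step in the data path.

class InBandAppliance:
    def __init__(self, routing_table):
        # virtual LUN -> backend array (assumed example mappings)
        self.routing_table = routing_table
        self.io_count = 0

    def forward(self, virtual_lun, op):
        self.io_count += 1   # every I/O transits the appliance in-band
        backend = self.routing_table[virtual_lun]
        return f"{op} routed to {backend}"


appliance = InBandAppliance({"vlun0": "arrayA", "vlun1": "arrayB"})
print(appliance.forward("vlun0", "WRITE"))  # WRITE routed to arrayA
print(appliance.forward("vlun1", "READ"))   # READ routed to arrayB
```

By contrast, an out-of-band design would hand the host a map and let it talk to the arrays directly, trading the per-I/O overhead for agent software on each host.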
Influence on disaster recovery
Storage virtualisation can also aid consolidation in backups and disaster recovery (DR). In many cases, replication -- especially remote replication -- takes place between two identical storage systems (e.g., Symmetrix to Symmetrix) so that the duplicate data maps exactly to the original storage system(s). By virtualising storage, data can be replicated to almost any storage hardware at the disaster recovery site. This is often beneficial when older storage hardware is displaced by newer systems: the older hardware can be redeployed at the disaster recovery site and continue to serve a valuable function.