Is SAN maintenance easier with virtualised servers than in a physical server setup?
I’ll assume that by SAN maintenance in a virtualised server environment, you’re referring to monitoring and troubleshooting I/O and performance issues. If so, the key point to note is that the vantage point from which you monitor storage performance has changed, and the view can be somewhat obscured.
With physical servers, storage connectivity is configured independently, server by server. This typically means installing host bus adapters (HBAs) for Fibre Channel or iSCSI environments and connecting these devices into a switched storage fabric. You monitor trends on individual nodes and tune performance characteristics from the array’s storage management viewer or the switch configuration console.
The same is not true in the virtualised server world. Server virtualisation brings many efficiencies by aggregating the use of CPU, memory and storage, but those efficiencies come at the expense of storage visibility.
To put this in context, consider the following example:
A storage area network provides connectivity to a virtualised infrastructure that uses a clustered file system, in this case VMware’s Virtual Machine File System (VMFS). Each LUN presented to the virtual infrastructure may hold a number of virtual machines.
A support team receives user reports that certain applications are slow at certain times of day. The problem is traced to the SAN infrastructure, where it is noted that the SAN is delivering close to its maximum theoretical throughput. The LUN generating that load is one provisioned for the virtual server environment, but switch monitoring tools report nothing more granular. To identify which virtual machine is responsible, you have to turn to the performance graphs within VMware vCenter.
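The visibility gap in this example can be sketched with a few lines of Python. The VM names and throughput figures below are purely illustrative, not output from any real monitoring tool: the point is that the switch sees only one aggregate number per LUN, while attributing the load to a virtual machine requires hypervisor-level counters.

```python
# Sketch: why switch-level monitoring can't attribute load on a shared VMFS LUN.
# Per-VM throughput counters (MB/s) are only visible from the hypervisor side
# (e.g. vCenter performance graphs); all names and values here are illustrative.

per_vm_throughput = {
    "web01": 20,
    "web02": 25,
    "db01": 310,   # the misbehaving workload saturating the LUN
    "app01": 15,
}

# What the switch or array sees: one aggregate figure for the whole LUN.
lun_total = sum(per_vm_throughput.values())
print(f"LUN throughput seen by switch: {lun_total} MB/s")

# What hypervisor-level metrics add: per-VM attribution of that load.
culprit, load = max(per_vm_throughput.items(), key=lambda kv: kv[1])
print(f"Busiest VM: {culprit} at {load} MB/s ({load / lun_total:.0%} of the LUN)")
```

The switch-level view stops at the first `print`; everything after it is information only the virtualisation layer can supply.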
This example shows how SAN maintenance can actually be harder in a virtualised server environment, because virtualisation adds a layer of complexity that must be accounted for when troubleshooting. It is also worth noting that proper array-level design matters when virtualising applications and then mixing them with other applications whose workloads may be mutually toxic.
On the brighter side, it could be argued that SAN maintenance in a virtual environment is simpler when provisioning storage for virtual machines than for physical servers. It is unlikely you would create a separate storage volume for each virtual machine in your environment, unless you are running Microsoft Hyper-V clustering on Windows Server 2008, which predates Cluster Shared Volumes and so requires a LUN per clustered virtual machine.
Nine times out of 10, special circumstances such as Windows clustering aside, a large storage volume is provisioned and virtual machines with similar workloads are stored within that partition. The time saving for SAN maintenance is that there is little for the storage admin to actually do: once the storage partition has been provisioned and connected to the virtualisation host, the virtualisation admin can simply run a script to clone or create a new virtual machine, or dynamically add storage to an existing one.
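That hands-off workflow can be sketched as follows. The `Datastore` class, names and sizes are invented for illustration and stand in for a VMFS datastore the storage admin has already presented to the host; they are not a real vSphere API. After that one-off step, creating or cloning VMs simply consumes space from the shared volume with no further SAN work.

```python
# Sketch: once a shared datastore exists, VM creation needs no storage-admin
# action -- each new VM's disks are allocated from the shared volume.
# The class and all figures are illustrative, not a real vSphere API.

class Datastore:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def allocate(self, vm_name: str, disk_gb: int) -> None:
        """Carve a VM's disk out of the shared volume, refusing if it won't fit."""
        if self.used_gb + disk_gb > self.capacity_gb:
            raise RuntimeError(f"{self.name}: not enough free space for {vm_name}")
        self.used_gb += disk_gb
        print(f"Created {vm_name} ({disk_gb} GB) on {self.name}; "
              f"{self.capacity_gb - self.used_gb} GB free")

# Storage admin's one-off task: present one large datastore to the host.
ds = Datastore("vmfs-prod-01", capacity_gb=500)

# Virtualisation admin clones similar-workload VMs without touching the SAN.
for vm in ("web01", "web02", "web03"):
    ds.allocate(vm, disk_gb=40)
```

The design point is the division of labour: the storage admin appears only on the line that creates the datastore, and everything after it belongs to the virtualisation admin.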
VMware has built its management framework so that vendors can hook into its APIs; plug-ins such as EMC Storage Viewer and the HP EVA plug-in use this interaction to give the virtual administrator more visibility into the underlying storage environment.
One exciting innovation from VMware is its vStorage APIs, which enable tight integration of advanced storage procedures and capabilities from within the virtualisation management console. We should expect to see advanced array-based features such as snapshots and storage provisioning, along with other advanced SCSI commands, surfaced there in future.
This was first published in March 2011