Virtual server storage is a key consideration when embarking on a virtual server or virtual desktop project. Most people opt for shared storage for a virtual machine environment, as it has key advantages over direct-attached server drives. But the choice arises: SAN or NAS? In this second of two podcasts (the first explored the pros and cons of SAN for virtual machine storage) we look at the capabilities of NAS for VM storage.
In this interview, SearchStorage.co.UK bureau chief Antony Adshead speaks with Mike Laverick, an author, blogger and podcaster, about when it’s best to use NAS as virtual server storage and how to split management tasks between the hypervisor and the LAN.
SearchStorage.co.UK: In what cases is it best to use NAS [for VM storage]?
Laverick: I think it’s fair to say that it’s usually easier to configure and mount NFS and CIFS volumes either to a VMware host or a Microsoft host than it is to set up and configure iSCSI or Fibre Channel. And although there’s lots of talk about latency with IP-based storage, if your servers have got 10 [Gbps] networking from the host to the array, I challenge anyone to demonstrate any latency between the storage and the servers.
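As an illustration of how little configuration an NFS mount needs on the VMware side, a datastore can be attached to an ESXi host with a couple of commands in the ESXi shell. The NAS hostname, export path and datastore name below are placeholders for illustration, not anything specific to the setups discussed here.

```shell
# Mount an NFS export as a datastore on an ESXi host.
# "nas01", "/vol/vmstore" and "vmstore_ds" are placeholder names.
esxcli storage nfs add --host=nas01 --share=/vol/vmstore --volume-name=vmstore_ds

# Confirm the datastore is mounted
esxcli storage nfs list
```

Compare this with Fibre Channel or iSCSI, where the same result involves zoning, LUN masking and initiator configuration before the host ever sees the storage.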
On the downside, I think it’s fair to say support for some advanced features on NAS has often lagged behind Fibre Channel and iSCSI. That’s mainly because some of the vendors prefer their own file systems over NFS. And it’s also that the virtualisation vendors have to blow with the wind in terms of popularity.
The reality is that Fibre Channel still dominates the enterprise market and NFS has still got some way to go to eclipse it; some say that will never happen.
So, in terms of queuing, resources and setting priorities, advanced features have sometimes come to NFS a little later.
But, getting a bit more specific, I think VDI [virtual desktop infrastructure] is a good fit for NFS and NAS because the file systems backing most NAS-based devices have often been very quick to introduce fast cloning of desktops and other features like data deduplication, which help to reduce the storage footprint of a VDI deployment and also reduce the IOPS it generates.
More specifically, the new edition of VMware View 5.1 -- the virtual desktop broker from VMware -- has specific integration marrying its Linked Clones feature directly into the NFS provider’s ability to do clones. So, we’ll have the best of VMware’s Linked Clones together with the performance that NAS-based cloning can provide.
But, even if you’re not doing VDI I think there’s still a powerful use case for NAS-based storage for [ancillary] files. Most virtual environments have some kind of templates or some kind of ISO store, and if you have lots of clusters it’s often easier to mount that as an NFS volume or a CIFS volume to the hosts than it is to try and mess around with zoning and masking LUNs away in an attempt to get that storage to be visible across many clusters.
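To illustrate the point about visibility across clusters, the same NFS-backed ISO and template store can be attached to a whole list of hosts with a simple loop, with no zoning or LUN masking involved. This sketch assumes the remote esxcli from the vSphere CLI is available from an admin workstation; the host and share names are made up for the example.

```shell
# Attach one shared ISO/template store to several ESXi hosts in one pass.
# Assumes the remote esxcli (vSphere CLI) is installed; credentials will be
# prompted for. Host names, NAS name and export path are placeholders.
for h in esx01 esx02 esx03; do
  esxcli --server "$h" --username root storage nfs add \
      --host=nas01 --share=/vol/isostore --volume-name=iso_ds
done
```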
SearchStorage.co.UK: What configuration needs to be done at the hypervisor or in the LAN when using NAS [for VM storage]?
Laverick: I think the principles of separation still apply. Whether you separate your storage network at a physical level with different switches or choose to do VLANs, your storage network, if it’s IP-based, must never be directly visible to your operating systems or the wider network unless you’ve got a good use case for doing so.
At the virtual level, most IP-based systems will allow for load balancing so that means some sort of virtual switch with two IP addresses for load balancing across the environment. And, obviously, two NICs backing that virtual switch so you’ve got redundancy at the physical level. So, should a switch or a NIC fail, your host [won’t] go down and virtual machines won’t stop functioning for lack of access to the storage.
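A minimal sketch of that layout on an ESXi host might look like the following. The vSwitch, NIC and port-group names, and the VLAN ID, are all assumptions chosen for illustration, not values from the interview.

```shell
# Create a dedicated storage vSwitch with two physical uplinks, so the host
# survives the loss of one NIC or one physical switch (ESXi shell; names and
# VLAN ID are placeholders).
esxcli network vswitch standard add --vswitch-name=vSwitch-Storage
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch-Storage
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch-Storage

# Put the storage port group on its own VLAN so storage traffic stays
# separated from the wider network.
esxcli network vswitch standard portgroup add --portgroup-name=NFS-Storage --vswitch-name=vSwitch-Storage
esxcli network vswitch standard portgroup set --portgroup-name=NFS-Storage --vlan-id=20
```

Two VMkernel interfaces with their own IP addresses would then be placed on this switch to give the load balancing across paths that Laverick describes.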
[NFS] has its own privileges and access controls, and generally these will need to be set up with root access to mount directly to a host. What I would say is if you look at the plug-ins from storage vendors they will often set up LUNs and permissions and mount all the data stores to the hosts without having to know [very] much about the permissions of the system. It’s well worth having a look at these because they simplify the configuration.
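On a Linux-based NAS, the root-access requirement Laverick mentions typically translates into an export that does not squash root, restricted to the hosts’ storage subnet. This is a sketch of such an /etc/exports entry; the path and subnet are placeholders.

```shell
# /etc/exports entry on a Linux-based NAS: grant the ESXi hosts' storage
# subnet read-write access without root squashing, so the hypervisor can
# mount the export and own the files it creates. Subnet is a placeholder.
/vol/vmstore  10.10.20.0/24(rw,no_root_squash,sync)
```

Vendor plug-ins, as noted above, typically generate equivalent export rules and permissions automatically, which is why they simplify the configuration.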