
VMware versus Hyper-V storage comparison

VMware versus Hyper-V storage: How do the two leading hypervisors compare when it comes to deploying storage and in terms of their key storage features and functionality?

VMware’s vSphere and Microsoft’s Hyper-V are the two leading platforms for server virtualization. Although they provide similar features and functionality, the way they manage storage for virtual machines is very different.

In this article, we compare how the two hypervisors deploy storage and examine the key storage features and functionality found in each platform.

Hypervisor fundamentals

vSphere ESXi (currently at release 5.5 Update 1) is a type 1 hypervisor based on what VMware calls the "VMkernel", a microkernel that runs the features needed to support virtualisation. Type 1 hypervisors run directly on server hardware and act as the abstraction layer between the physical resources of the server and the virtual resources assigned to virtual machines.

Hyper-V is deployed in two forms, either as a standalone type 1 hypervisor (known as Hyper-V Server 2012 R2 in the latest release) or as part of the Windows Server operating system, where the Hyper-V feature is implemented as a “role”. On both platforms, the hypervisor manages physical storage resources and the presentation of storage to the virtual machines.

Virtual machine files

A virtual machine is encapsulated in a number of files that represent both the configuration and contents of the virtual server. Hyper-V and the vSphere platform use the concept of a virtual hard disk, which is analogous to the physical hard drive in a standard server.

vSphere uses the VMDK (virtual machine disk) format, whereas Hyper-V uses VHD and VHDX (virtual hard disk and virtual hard disk extended). Both platforms also use a number of additional files to track virtual machine configuration, to hold state when a virtual machine is suspended, and to support snapshots.

In the latest releases, vSphere (ESXi 5.5) supports VMDK files up to 62TB in size and Hyper-V supports VHDX files up to 64TB. Both formats allow the storage for a virtual disk to be fully or thin provisioned at allocation time.
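The difference between full (thick) and thin provisioning mentioned above can be sketched in a few lines of Python. This is a hypothetical model, not a vendor API: full provisioning reserves the entire virtual disk size on the datastore at creation, while thin provisioning consumes physical space only as the guest writes data.

```python
# Hypothetical model of thick vs thin provisioning (not a real hypervisor API).

class VirtualDisk:
    def __init__(self, size_gb, thin=False):
        self.size_gb = size_gb          # capacity presented to the guest
        self.thin = thin
        # Thick provisioning reserves all blocks up front;
        # thin provisioning allocates physical space only as data is written.
        self.allocated_gb = 0 if thin else size_gb

    def write(self, gb):
        if self.thin:
            # Grow the physical allocation on demand, capped at the virtual size.
            self.allocated_gb = min(self.size_gb, self.allocated_gb + gb)

full = VirtualDisk(100)             # consumes 100 GB on the datastore at once
thin = VirtualDisk(100, thin=True)  # consumes nothing until the guest writes
thin.write(10)
print(full.allocated_gb, thin.allocated_gb)  # 100 10
```

Both guests see a 100GB disk; only the thin disk's physical footprint tracks actual usage.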

VMDKs “grow” by adding VMDK file extents. Hyper-V dynamic VHDs grow by increasing the size of the VHD file itself. Microsoft recommends using the VHDX format because it is more space-efficient and has additional features, such as embedded metadata to mitigate corruption issues.
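The two growth strategies described above can be contrasted with a toy model (this does not reflect the on-disk formats; the 2GB extent size is illustrative): a VMDK grows by appending extent files, while a dynamic VHD grows by extending a single file.

```python
# Toy model contrasting the two growth strategies described in the text.
# EXTENT_GB is an illustrative figure, not the real VMDK extent size.

EXTENT_GB = 2

class Vmdk:
    def __init__(self):
        self.extents = []               # one entry per extent file

    def grow(self, gb):
        # Grow by appending fixed-size extent files until demand is met.
        while gb > 0:
            self.extents.append(EXTENT_GB)
            gb -= EXTENT_GB

class DynamicVhd:
    def __init__(self):
        self.file_gb = 0                # a single file that simply gets bigger

    def grow(self, gb):
        self.file_gb += gb

v = Vmdk(); v.grow(6)
h = DynamicVhd(); h.grow(6)
print(len(v.extents), h.file_gb)  # 3 6
```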

File systems

In both hypervisor environments, virtual machines need to be stored on a file system. This is achieved in slightly different ways on each platform.

vSphere stores virtual machine files in datastores, which can be supported on either NFS or block-based storage. Block storage can be presented through iSCSI, Fibre Channel or Fibre Channel over Ethernet protocols and is formatted with VMFS (Virtual Machine File System). VMFS is a proprietary VMware clustered file system that provides features such as locking and snapshots. It has evolved through a number of versions that track the releases of vSphere.

Hyper-V can utilise storage through SMB (CIFS) shares using Hyper-V over SMB, which was introduced in Windows Server 2012. Alternatively, Hyper-V can use block storage presented to the server over iSCSI, Fibre Channel or Fibre Channel over Ethernet. Block storage can be formatted with either the Windows NTFS or ReFS file system.

ReFS (Resilient File System) was introduced by Microsoft with Windows Server 2012 as a more resilient and scalable successor to NTFS. One of the benefits of using NTFS or ReFS is the ability to browse virtual machine files directly on the volume.

vSphere ESXi 5.5 hosts are limited to a maximum of 2,048 virtual disks. External storage is also limited: 256 Fibre Channel LUNs (with a maximum LUN size of 64TB), 256 iSCSI LUNs or 256 NFS mounts.

Hyper-V hosts are constrained by the limits of the Windows operating system, which scales to a theoretical limit of many thousands of LUNs per HBA and so, in practical terms, more than will ever be needed for a single system.

Windows Server 2012 introduced data deduplication for NTFS file systems, and this was subsequently enhanced in Windows Server 2012 R2. Microsoft supports the deployment of deduplicated virtual hard disks for VDI storage workloads on Windows Server 2012 R2, which can result in significant capacity savings. At this time, however, deduplication of general server virtual machine workloads is not supported, although there are no technical restrictions to prevent its implementation.
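The reason VDI deduplication pays off so handsomely can be shown with some back-of-the-envelope arithmetic. The figures below are hypothetical, chosen only to illustrate the effect of many desktops sharing a nearly identical OS image.

```python
# Illustrative arithmetic only (hypothetical figures): capacity saved by
# deduplicating VDI desktops that share a common base image.

desktops = 100
image_gb = 40          # logical size of each desktop's virtual hard disk
unique_pct = 0.05      # assume 5% of each image is unique per desktop

logical_gb = desktops * image_gb                           # 4,000 GB as seen by VMs
physical_gb = image_gb + desktops * image_gb * unique_pct  # shared base + per-desktop deltas
savings = 1 - physical_gb / logical_gb

print(f"{physical_gb:.0f} GB physical, {savings:.0%} saved")  # 240 GB physical, 94% saved
```

Under these assumptions, 4TB of logical desktop storage fits in 240GB of physical capacity.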

Improving performance

Hyper-V can use storage configured through Storage Spaces, the new logical volume manager introduced into Windows Server 2012. This provides features such as software-based RAID protection.

Storage Spaces also allows the system administrator to implement tiering for virtual machines and to use flash solid-state storage as a write-back or write-through cache to improve performance. Meanwhile, vSphere provides only write-through caching with the vSphere Flash Read Cache feature (unless using Virtual SAN).
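The distinction between the two cache policies can be sketched as follows (a minimal model, not vendor code): write-through persists data to the backing store before acknowledging the write, while write-back acknowledges immediately and flushes dirty blocks later, which is faster but needs protection against data loss.

```python
# Minimal sketch of write-through vs write-back caching (not vendor code).

class Cache:
    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache = {}        # fast flash tier
        self.backend = {}      # slower persistent storage
        self.dirty = set()     # blocks not yet flushed (write-back only)

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            # Acknowledge immediately; flush to the backend later.
            self.dirty.add(block)
        else:
            # Write-through: persist to the backend before acknowledging.
            self.backend[block] = data

    def flush(self):
        for block in self.dirty:
            self.backend[block] = self.cache[block]
        self.dirty.clear()

wt = Cache()                 # write-through
wb = Cache(write_back=True)  # write-back
wt.write(0, b"x")
wb.write(0, b"x")
print(0 in wt.backend, 0 in wb.backend)  # True False (write-back not yet flushed)
wb.flush()
```

The trade-off is latency versus durability: a write-back cache must be flushed (or mirrored) before its contents are safe.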

Local storage

Both vSphere ESXi and Hyper-V are able to use local storage resources. In vSphere, local disks are formatted with the VMFS file system. VMware recently introduced the Virtual SAN feature, which allows three or more ESXi servers to act as a shared storage cluster. Virtual machines are replicated across the local storage of the cluster nodes for resilience, and each server requires flash storage to act as a read/write cache.

Data placement features

In a traditional storage environment, data placement and management would be a task for the storage administrator. Server virtualisation abstracts the placement of the virtual machine and so the idea of manually tuning the performance of an individual server is rather a legacy concept. To provide scalability, both vSphere and Hyper-V provide additional services.

Live data migration between storage (either on the same or different arrays) is achieved in vSphere using Storage vMotion. This licensable feature allows the files comprising a virtual machine to be moved non-disruptively to another physical storage location while the virtual machine is running. Storage vMotion has the further benefit of working cross-protocol: a virtual machine can be moved from block-based to file-based external storage, for example.

Hyper-V enables the migration of virtual machines through the Virtual Machine Storage Migration feature. This works in a similar way to Storage vMotion but, unlike vSphere, the feature does not require additional management software or licences and can be driven directly from the Hyper-V Manager tool within Windows. Also, Hyper-V is not limited in terms of the number of concurrent migrations, whereas vSphere is limited to two concurrent operations per host and eight concurrent operations per datastore in a cluster.
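The vSphere concurrency limits quoted above (two concurrent operations per host, eight per datastore) amount to a simple admission check before a migration is allowed to start. Here is a hypothetical sketch of that logic; the function and field names are illustrative, not a real API.

```python
# Hypothetical admission check modelling the Storage vMotion concurrency
# limits quoted in the text: 2 per host, 8 per datastore.

from collections import Counter

HOST_LIMIT, DATASTORE_LIMIT = 2, 8

def can_start(active, host, datastore):
    """active: list of (host, datastore) pairs for in-flight migrations."""
    hosts = Counter(h for h, _ in active)
    stores = Counter(d for _, d in active)
    return hosts[host] < HOST_LIMIT and stores[datastore] < DATASTORE_LIMIT

active = [("esx1", "ds1"), ("esx1", "ds1")]
print(can_start(active, "esx1", "ds2"))  # False: esx1 already at its host limit
print(can_start(active, "esx2", "ds1"))  # True: both limits have headroom
```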

vSphere and Hyper-V provide more advanced features when it comes to virtual machine disk placement and management. Many of these features are implemented through the use of additional management software, in the case of Hyper-V through System Center 2012 R2 and vCenter Server for vSphere.

Automated placement

Intelligent Placement, also known as Virtual Machine Placement in System Center 2012, provides the automated placement of virtual machines to a host based on an algorithm that uses performance data and workload profiles for the host.

In vSphere, this feature is implemented as Storage DRS, which uses the concept of datastore clusters to manage placement recommendations based on space constraints and I/O workload.
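The core idea behind both placement features can be reduced to a simple two-stage selection: filter out locations without enough free space, then rank the survivors by observed load. The sketch below is a simplification in the spirit of Storage DRS and Intelligent Placement; the dictionary fields and scoring metric are illustrative assumptions, not either vendor's actual algorithm.

```python
# Simplified placement sketch in the spirit of Storage DRS / Intelligent
# Placement. Field names and the latency metric are illustrative only.

def place(vm_size_gb, datastores):
    # Stage 1: filter on the space constraint.
    candidates = [d for d in datastores if d["free_gb"] >= vm_size_gb]
    if not candidates:
        raise RuntimeError("no datastore has enough free space")
    # Stage 2: rank by I/O load (here: lowest observed latency wins).
    return min(candidates, key=lambda d: d["latency_ms"])["name"]

datastores = [
    {"name": "ds1", "free_gb": 500, "latency_ms": 12.0},
    {"name": "ds2", "free_gb": 80,  "latency_ms": 3.0},
    {"name": "ds3", "free_gb": 900, "latency_ms": 7.5},
]
print(place(100, datastores))  # ds3: ds2 is too small, ds3 beats ds1 on latency
```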

Hyper-V and vSphere implement features for storage policies. Hyper-V provides Storage Classifications, implemented through SCVMM (System Center Virtual Machine Manager), while vSphere allows the creation of Storage Policies (previously known as Storage Profiles) through vCenter Server. Both implementations allow service metrics rather than physical characteristics to be used when assigning virtual machines to storage pools.

Storage I/O Control in vSphere provides workload prioritisation features for I/O, enabling congestion management for external storage.

Microsoft implements similar features for storage quality of service on virtual machines with the recently introduced Storage Quality of Service for Hyper-V. Storage QoS allows the administrator to specify both a minimum and maximum IOPS limit for each virtual machine, throttling workload that exceeds the predefined limit.
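Maximum-IOPS throttling of the kind described above can be sketched as a per-second budget: I/Os within the budget proceed, and anything beyond it is deferred to the next interval. This is an illustrative model only, not Microsoft's implementation.

```python
# Illustrative sketch of maximum-IOPS throttling: a per-second budget that
# resets each interval. Not Microsoft's actual Storage QoS implementation.

class IopsThrottle:
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.window = -1       # current one-second window
        self.used = 0          # I/Os admitted in this window

    def admit(self, now_s):
        """Return True if an I/O may proceed at time now_s (seconds)."""
        window = int(now_s)
        if window != self.window:        # new second: reset the budget
            self.window, self.used = window, 0
        if self.used < self.max_iops:
            self.used += 1
            return True
        return False                     # over the cap: throttle this I/O

t = IopsThrottle(max_iops=2)
results = [t.admit(0.1), t.admit(0.2), t.admit(0.3), t.admit(1.1)]
print(results)  # [True, True, False, True]
```

The third I/O exceeds the two-IOPS cap and is throttled; the fourth arrives in a fresh window and proceeds.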

Storage-assisted features

In many cases, virtualisation increases I/O demands on the external storage array, especially when using features that copy or move data. vSphere provides the ability to offload some of the heavy lifting involved with creating and managing virtual machines through VAAI, or vStorage APIs for Array Integration. The equivalent feature on Hyper-V is called ODX and offers similar offload functionality.

Future functionality

Both Microsoft and VMware are continually adding new features and improvements to their support for external storage.

VMware has promised a new feature called Virtual Volumes (vVols) that will encapsulate all the objects that comprise a single virtual machine, enabling external arrays to implement VM-centric storage policies at the physical disk level.

VMware may choose to deliver vVols through enhancements to VASA (vStorage APIs for Storage Awareness), an API used to expose the physical capabilities of an external storage array to the hypervisor.

Microsoft has made many enhancements to Hyper-V storage support, especially with the implementation of SMB 3.0. Hyper-V 3.0 now supports shared virtual hard disks between virtual machines, allows dynamic shrinking of virtual hard disks while online (with no equivalent vSphere feature) and can perform live migrations using SMB 3.0.


In summary, the gap in storage support between Hyper-V and vSphere has narrowed significantly.

Almost all features are supported equally on both platforms, while Hyper-V edges ahead in certain areas, such as data deduplication and scalability for external storage.

VMware is trying to change the game by introducing Virtual SAN and moving some storage back into the server, but Microsoft may easily be able to emulate this feature through Storage Spaces, which would level the playing field again.

Chris Evans is an independent storage consultant with Brookend.

This was last published in April 2014

Join the conversation



Yes, iSCSI and FC work reliably with FOC and Hyper-V.


Using the exact same hardware (Dell R620s, EqualLogic SANs, Cisco 5548 switches, everything on 10gig), our Hyper-V environment is a hot mess of instability, whereas our VMware environment is rock solid.

MS Clustering is a Jenga tower waiting to implode. If you burp in the room where the hardware is, the MS Failover Clustering service can restart.

Good luck tracking down the MANY hotfixes required to keep Hyper-V running. They are not provided in Windows Update, and sometimes only MS employees know about them, which you discover when you are working with them because Hyper-V has crashed.

This article barely mentions VSAN until the very end, even though it is a production product as of vSphere 5.5 Update 1 (the release listed at the top of the article).


Nice, I found identical results, with VMs that would power off for no reason, hosts that randomly crash and reboot, and other weirdness with CSV. vSphere has been pretty solid for years, with maybe a random host every year or so having some minor issue, nothing like Hyper-V, which was a daily disaster. HOWEVER, I have found Hyper-V with the 30-second replication feature using local storage to be flawless, and use it for all remote sites without incident. Going on a year without any real issues to speak of.