As you may (or may not) know, there are three main virtual disk formats within VMware vSphere -- zeroedthick, thin and eagerzeroedthick. The "zeroedthick" format is the default and quickly creates a "flat" virtual disk file. In contrast, the new "thin" format creates a virtual disk that grows as you add data to it (and might be familiar to you if you have ever used VMware Workstation). Finally, the "eagerzeroedthick" format is used by VMware's new Fault Tolerance (FT) feature.
With this format, the blocks that make up the virtual disk are "zeroed" out when it's created, which means the blocks used are pre-allocated to the VMFS file system. It's somewhat like a secure-delete process that destroys data by zeroing out blocks within the file system.
The eagerzeroedthick format takes considerably longer to create but does result in a disk that provides excellent read/write capabilities. Now, here's the rub -- when you enable VMware FT on a virtual machine (VM), any zeroedthick or thin disk will need to be converted to the eagerzeroedthick format. The conversion process could take a very long time if you have a sizable disk that contains data.
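As an aside, the three formats can also be created by hand with the vmkfstools utility on an ESX host, which makes the differences easy to see side by side. This is only a sketch -- the 10 GB size and the /vmfs/volumes/DB/db01 paths are placeholders for your own datastore:

```shell
# Guarded so the snippet is harmless on a machine without vmkfstools.
if command -v vmkfstools >/dev/null 2>&1; then
  # Thin: the file grows on demand as data is written
  vmkfstools -c 10G -d thin /vmfs/volumes/DB/db01/thin.vmdk
  # Zeroedthick (the default): space pre-allocated, blocks zeroed on first write
  vmkfstools -c 10G -d zeroedthick /vmfs/volumes/DB/db01/zeroedthick.vmdk
  # Eagerzeroedthick: space pre-allocated AND zeroed at creation time (slow to create)
  vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/DB/db01/eagerzeroedthick.vmdk
fi
```

The long creation time of the third command is exactly the up-front cost that FT's conversion imposes on existing disks.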
How to tell the difference
Currently, the VMware user interface (UI) offers no way to confirm which of the three formats a virtual disk uses. It will tell you whether you have a thick or thin virtual disk, but it won't tell you whether your thick disks are zeroedthick or eagerzeroedthick. You could enable VMware Fault Tolerance, but it could take a very long time to convert the disk formats. While the conversion takes place, the VM must be powered off to release the file locks on the virtual disks. Without further investigation, the VM could be offline for some time before the process completes.
Some time ago, I discovered a way of working out which disk formats VMs are using, based on the virtual machine's .log files and some simple command-line utilities. Sadly, VMware PowerCLI (VMware's PowerShell extension) cmdlets do not provide this information, although it may be possible to retrieve this data from the VMware SDK.
To illustrate this, I've created a VM called db01 with three virtual disks, which are, in turn, thin (HD1), zeroedthick (HD2) and eagerzeroedthick (HD3). Notice how hard disk three merely states that it is a thick disk. As far as the graphical user interface is concerned, hard disk two would return the same information, despite the fact that they are totally different disk types.
The way to find out the disk format is from the virtual machine's .log file. Begin by connecting to one of the ESX hosts with access to the datastore that holds the VM, using your favorite SSH client, such as PuTTY. Using the cd command, navigate to the datastore and the virtual machine's directory, then issue this command:
cat vmware.log | grep 'FT enable'
This should produce an output similar to this:
This information tells me that both SCSI 0:0 (HD1) and SCSI 0:1 (HD2) would be converted into the eagerzeroedthick format if I enabled VMware Fault Tolerance. Alternatively, you can use:
This will print out much more detailed information. The key difference in this output is the "allocation type" field, which returns a number representing each virtual disk type. These are as follows:
allocation type: 0 – eagerzeroedthick virtual disk
allocation type: 1 – zeroedthick virtual disk (the default)
allocation type: 2 – thin virtual disk
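Rather than memorising the mapping, you can decode the numbers with a small shell snippet run from the VM's directory. The grep pattern below assumes the log prints the field as "allocation type: N", exactly as listed above; adjust it if your ESX build formats the line differently:

```shell
# Map the "allocation type" number from vmware.log to a disk-format name.
decode_alloc_type() {
  case "$1" in
    0) echo "eagerzeroedthick" ;;
    1) echo "zeroedthick (default)" ;;
    2) echo "thin" ;;
    *) echo "unknown ($1)" ;;
  esac
}

# Pull every allocation-type number out of the log and decode it.
# (Guarded so the snippet does nothing outside a VM's directory.)
if [ -f vmware.log ]; then
  grep -o 'allocation type: [0-9]' vmware.log | awk '{print $NF}' |
  while read -r t; do
    decode_alloc_type "$t"
  done
fi
```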
In the screen grab below, I ran the command cat vmware.log | grep 'scsi0:2'. This shows that the virtual disk is a type 0 disk, which is the eagerzeroedthick format:
Of course, it's great to know whether or not you will have a long upfront conversion process on your hands when you go to enable VMware FT on a VM. But what we all want to know is how long the process will take, so we can request an adequate maintenance window beforehand. That's quite a difficult thing to ascertain, given that there are so many variables to consider, such as:
• Size of virtual disks to convert
• Number of spindles and RAID levels backing a datastore
• Amount of IOPS or reads/writes per second currently generated by other VMs
So as an experiment, I increased the size of my SCSI 0:0 and SCSI 0:1 disks to 25 GB each, creating 50 GB to convert. All virtual machines on my "DB" datastore were powered off; the datastore was backed by RAID 5 across nine 10K FC disks on a 2 Gb fabric.
As you might expect, in my lab environment I generate next to no disk IOPS (as it is a non-production environment), so my disk subsystem has plenty of free disk time. Therefore, my experience should show the conversion process in its best light. The test was carried out on one of the EMC Clariion CX-3s that is part of my NS-20 system. To simulate an environment that was more heavily utilised, I also started a Storage VMotion of a VM from the "DB" datastore to another location. During this period, I monitored the IOPS generated using the "disk" view of esxtop.
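Incidentally, if you would rather keep a record of the disk activity than watch the "disk" view live, esxtop can also run non-interactively in batch mode (-b), with a sample delay in seconds (-d) and an iteration count (-n). The interval, count and output file below are arbitrary choices, not the ones from my lab:

```shell
# Guarded so the snippet is harmless on a machine without esxtop.
if command -v esxtop >/dev/null 2>&1; then
  # Sample every 5 seconds, 240 times (~20 minutes), into a CSV for later review
  esxtop -b -d 5 -n 240 > /tmp/ft-convert-stats.csv
fi
```

A capture like this lets you measure the conversion window after the fact instead of eyeballing the counters.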
Not a great deal can be read into these statistics, except to say that, despite any fancy load balancing on the host bus adapters (HBAs), we can see that vmhba1 (my first Fibre Channel card) is taking the vast majority of the load.
The conversion started at 10:21:29 AM and completed at 10:41:39 AM -- a 20-minute wait before the VM was converted into the correct format for VMware FT and was ready to be powered on. Anyway, I hope this demonstrates how critical it is to be able to see up front which virtual disk type you have in use.
ABOUT THE AUTHOR: Mike Laverick is a professional instructor with 15 years' experience in technologies such as Novell, Windows and Citrix, and he has been involved with the VMware community since 2003. Laverick is a VMware forum moderator and member of the London VMware User Group Steering Committee. In addition to teaching, Laverick is the owner and author of the virtualisation website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users. In 2009, Laverick received the VMware vExpert award and helped found the Irish and Scottish user groups. Laverick has had books published on VMware Virtual Infrastructure 3, VMware vSphere 4 and VMware Site Recovery Manager.