LUN storage: Working with a SAN's logical unit numbers
Most enterprises have already virtualized their physical servers, or are planning to, using server virtualization hypervisors; today the most widely deployed are VMware’s ESX and ESXi platforms. Because storage is a key determinant of hypervisor performance, logical unit number (LUN) management is critical to achieving both optimal storage utilization and performance. Here are some best practices for managing LUNs presented to an ESX server:
1. Virtual machines running on ESX servers have boot volumes with low I/O rates. For efficient LUN management, create LUNs in a RAID 5 configuration from large Fibre Channel (FC) drives (such as 450 GB, 10K rpm drives), and create a Virtual Machine File System (VMFS) volume on those LUNs dedicated to the virtual machines’ boot volumes.
2. For virtual machines hosting applications that perform extensive logging, assign the log volumes from VMFS volumes created on LUNs in a RAID 1/0 configuration.
3. Infrastructure virtual machines, such as Domain Name System (DNS) servers, consume mostly RAM and CPU and generate little I/O. For such virtual machines, assign virtual disks from VMFS volumes created on RAID 5 LUNs built from large FC drives (such as 450 GB, 10K rpm drives).
4. For virtual machines hosting applications with write-intensive workloads, provision virtual disks from VMFS volumes created on RAID 1/0 LUNs built from flash drives or fast FC drives (such as 73 GB or 146 GB, 15K rpm drives).
5. Assign the log devices of databases hosted on virtual machines from VMFS volumes created on RAID 1/0 LUNs; database logs are write-intensive, and RAID 1/0 avoids the write penalty of parity RAID.
6. For virtual machines hosting applications that generate heavy small-block random I/O workloads, assign raw device mapping (RDM) disks backed by RAID 1/0 LUNs.
7. Large file server virtual machines holding mostly static data should be assigned disks from VMFS volumes created on RAID 5 LUNs built from SATA drives.
In essence, the recommendations above come down to this: create VMFS volumes from LUNs configured for a specific workload, and dedicate each volume to virtual disks carrying that workload.
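The workload-to-configuration mapping above can be captured in a small lookup script. This is an illustrative sketch only; the function name and the workload labels are ours, not part of any VMware or array tool:

```shell
#!/bin/sh
# Illustrative lookup of the RAID/drive recommendations above.
# The recommend_lun function and its workload labels are hypothetical.
recommend_lun() {
    case "$1" in
        boot|infrastructure)
            echo "RAID 5 on large FC drives (e.g. 450 GB, 10K rpm)" ;;
        logging|db-log|write-intensive)
            echo "RAID 1/0 (flash or fast FC drives for write-intensive apps)" ;;
        random-io)
            echo "RAID 1/0, presented as an RDM disk" ;;
        static-fileserver)
            echo "RAID 5 on SATA drives" ;;
        *)
            echo "unknown workload: $1" >&2
            return 1 ;;
    esac
}

recommend_lun boot               # RAID 5 on large FC drives ...
recommend_lun db-log             # RAID 1/0 ...
recommend_lun static-fileserver  # RAID 5 on SATA drives
```

Encoding the policy once, rather than deciding LUN layout ad hoc per request, is one way to keep the VMware and storage teams working from the same rules.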
8. LUNs created for ESX servers should be presented to every ESX server in the farm. Shared visibility is what allows high availability, Distributed Resource Scheduler (DRS) migrations and fault tolerance to function.
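After the storage team presents a new LUN, each host must rescan its storage adapters before the device appears. A sketch using the classic ESX service-console commands; the adapter name vmhba1 is an example and will differ per host:

```shell
# Rescan a storage adapter for newly presented LUNs
# (vmhba1 is an example adapter name for your environment).
esxcfg-rescan vmhba1

# Confirm the new device is now visible on this host.
esxcfg-scsidevs -c
```

Running the rescan on every host in the farm, not just one, is what satisfies recommendation 8.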
9. Use thick LUNs, or thin LUNs provisioned from a thin pool. The LUN then stripes across all the disks in the pool, spreading I/O over many spindles for better performance.
10. If the storage array supports thin provisioning, avoid thin provisioning again at the ESX server level; a single layer of thin provisioning keeps LUN management and capacity tracking simpler.
11. For better performance, use metaLUNs when creating VMFS volumes. In VMware environments, always build metaLUNs with striped expansion, as striped metaLUNs outperform concatenated metaLUNs.
12. Avoid creating a VMFS volume on a single LUN, as this degrades performance. Instead, create the VMFS volume across multiple LUNs. Each LUN contributes its own queue to the storage array, keeping response times low and allowing more virtual machines per server.
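Spanning a VMFS volume onto an additional LUN can be done with the vmkfstools span option. A sketch with placeholder device identifiers (the naa IDs below are invented for illustration):

```shell
# Extend an existing VMFS volume onto a second LUN (span).
# First argument: the new extent; second: the head extent that
# already carries the VMFS. Both naa IDs are placeholders.
vmkfstools -Z /vmfs/devices/disks/naa.60060160455025000000000000000002:1 \
           /vmfs/devices/disks/naa.60060160455025000000000000000001:1
```

Note that spanning is irreversible and ties the volume's availability to every member LUN, so it is typically planned up front with the storage administrator.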
13. For read/write-intensive applications with variable I/O patterns, use dedicated VMFS volumes on RAID 1/0 LUNs.
The VMware administrator should work with the storage administrator to create LUNs in line with these recommendations; VMFS volumes are then created on those LUNs. With LUN management taken care of, the VMFS volumes can be pooled according to the workloads of the applications running in the virtual machines.
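As a concrete illustration of that hand-off: once the storage administrator has presented a LUN, the VMware administrator can lay a VMFS volume on it with vmkfstools, or map it to a virtual machine as an RDM for the random-I/O case in recommendation 6. Device paths, the naa identifier, the datastore label and VM names below are all placeholders:

```shell
# Create a VMFS volume on a presented LUN (ESX 3.x/4.x-era syntax).
# The naa identifier and the DB_LOG_R10 label are placeholders.
vmkfstools -C vmfs3 -S DB_LOG_R10 \
    /vmfs/devices/disks/naa.60060160455025000000000000000003:1

# Alternatively, for small-block random-I/O workloads, map the LUN
# to a VM as a virtual-compatibility raw device mapping (RDM)
# instead of carving a virtual disk from a VMFS volume.
vmkfstools -r /vmfs/devices/disks/naa.60060160455025000000000000000003 \
    /vmfs/volumes/datastore1/appvm/appvm-rdm.vmdk
```

Keeping the label descriptive of the workload (here, RAID 1/0 database logs) makes it easier to pool VMFS volumes by application workload, as suggested above.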
About the author: Anuj Sharma is an EMC Certified and NetApp accredited professional. Sharma has experience handling implementation projects related to SAN, NAS and BURA, and has published several research papers on SAN and BURA technologies.