What are the key steps in Dell EqualLogic configuration for an iSCSI SAN in a VMware virtual server environment?
The first thing to keep in mind during the Dell EqualLogic configuration process is that these SANs use the iSCSI protocol over traditional Ethernet networks for packet transport. Therefore, the foundation for any EqualLogic SAN should be a stable, highly available switched network that can provide ample bandwidth for the storage array. Failure to meet that requirement can lead to performance or availability issues regardless of the EqualLogic or VMware configuration.
It’s a best practice to implement dedicated switches that solely service the iSCSI SAN network and to ensure switch backplanes exceed the total network bandwidth of the EqualLogic system. For example, a single PS6000 has four Gigabit network ports on its active controller, so the backplane should exceed 8 Gbps to support full-duplex operation. Two Dell EqualLogic PS6000s in a group would have eight Gigabit network ports across the active controllers, so the backplane should exceed 16 Gbps. As the SAN scales, so do the switch backplane requirements. To avoid bandwidth issues between switches, you should use stacking cables or fibre inter-switch links.
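That sizing rule is simple enough to express as a quick sanity check (a sketch; the port counts are the PS6000 figures from above):

```python
def min_backplane_gbps(active_gige_ports: int) -> int:
    """Minimum switch backplane bandwidth for full-duplex Gigabit ports.

    Each active Gigabit port can carry 1 Gbps in each direction,
    so full-duplex operation needs 2 Gbps of backplane per port.
    """
    return active_gige_ports * 2

print(min_backplane_gbps(4))  # one PS6000, 4 active ports -> 8
print(min_backplane_gbps(8))  # two PS6000s, 8 active ports -> 16
```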
Once the switches are in place, the switch ports should also be configured to support the following:
- Enable flow control: transmit and receive
- Enable jumbo frames
- Enable spanning-tree portfast
- Disable unicast storm control
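On many managed switches, those settings map to per-port commands along the following lines (a Cisco IOS-style illustration only; command syntax, jumbo-frame MTU values and whether transmit flow control is configurable all vary by vendor and platform, so consult your switch documentation):

```
interface GigabitEthernet1/0/1
 description iSCSI SAN port
 flowcontrol receive on
 spanning-tree portfast
 no storm-control unicast level
!
! Jumbo frames are often enabled globally rather than per port, e.g.:
system mtu jumbo 9198
```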
After implementing a robust iSCSI switched network, you need to address the EqualLogic configuration itself. Bringing a new EqualLogic subsystem online involves several configuration decisions, including group membership, IP addressing, RAID level and storage pool placement.
The Group Membership option is a simple choice. Will the array join an existing EqualLogic group or form an entirely new one? This depends on the environment and purpose of the array.
IP configuration is again a simple process. You should dedicate a range of unique IP addresses to the iSCSI network even if it is to be entirely standalone from the rest of the LAN/WAN.
The RAID level you plan to use requires more upfront thought. It actually needs to be considered when sizing the SAN. This is because different RAID sets will provide different performance metrics. A PS6000 with 16x 300 GB 15,000 rpm SAS drives in a RAID 10 configuration will offer higher performance than the same solution in a RAID 6 configuration, although it will provide less usable disk space.
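To see the capacity side of that trade-off, here is a rough calculation for 16 x 300 GB drives (a sketch using generic RAID arithmetic; the spare counts and set layouts are illustrative assumptions rather than EqualLogic-published figures, and formatted capacity will be lower than these raw numbers):

```python
def usable_gb(total_drives: int, hot_spares: int,
              overhead_drives: int, drive_gb: int = 300) -> int:
    """Raw usable capacity: drives left after spares and redundancy overhead."""
    return (total_drives - hot_spares - overhead_drives) * drive_gb

# RAID 10: 14 active drives in mirrored pairs, so half are lost to mirroring
print(usable_gb(16, hot_spares=2, overhead_drives=7))   # 2100 GB
# RAID 6: one 14-drive set with double parity
print(usable_gb(16, hot_spares=2, overhead_drives=2))   # 3600 GB
```

As the numbers show, RAID 10 trades roughly 40% of the RAID 6 usable capacity for its performance advantage.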
You should ensure during the procurement process that any EqualLogic subsystem purchased can provide sufficient I/O and network throughput to support the running of virtual machines or applications housed on the array. You should determine whether a particular RAID type is required to meet these I/O requirements or whether the default RAID 50 setting will suffice.
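A back-of-the-envelope way to compare RAID types during sizing is the classic write-penalty calculation (a sketch; the 175 IOPS-per-drive planning figure for 15,000 rpm SAS and the penalty values are commonly quoted industry numbers, not Dell specifications):

```python
# Commonly quoted RAID write penalties (back-end I/Os per front-end write).
WRITE_PENALTY = {"RAID 10": 2, "RAID 50": 4, "RAID 6": 6}

def front_end_iops(data_drives: int, iops_per_drive: int,
                   raid_type: str, read_fraction: float) -> float:
    """Estimated front-end IOPS a disk group can sustain for a given I/O mix."""
    raw = data_drives * iops_per_drive
    write_fraction = 1 - read_fraction
    return raw / (read_fraction + write_fraction * WRITE_PENALTY[raid_type])

# 14 active 15,000 rpm SAS drives serving a 70% read workload
for raid in WRITE_PENALTY:
    print(raid, round(front_end_iops(14, 175, raid, 0.7)))
```

Running the loop shows why a write-heavy workload may demand RAID 10 where the default RAID 50 would otherwise suffice.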
Unlike some other SAN arrays, all disks within an EqualLogic enclosure are utilised in the RAID set. You cannot change this. For example, if you have a PS6000 and select RAID 50, all 16 disks will be included (two are designated as hot spares). You cannot, for example, assign eight disks to RAID 6 and use the other eight in a RAID 10 configuration.
The workload is spread across all disks in the array, so it should not matter if you have two high-I/O virtual machines and five low-I/O virtual machines. Assuming the storage has been sized to handle the overall I/O requirements, all machines will function as expected.
The choice at this step of the EqualLogic configuration is whether to join the array to an existing storage pool or to create a new one. The answer will vary and largely depends on the I/O requirements of the particular virtual machines or applications involved. By default, all arrays join a global pool called Default.
EqualLogic controllers are designed to operate in active/passive fashion. This cannot be altered. Should the active controller fail, then the standby will take control and use the same IP addresses for the group and individual NICs. This provides seamless failover and requires no SAN-attached device to undergo configuration changes.
The final key element is the VMware hypervisor, which needs to be configured appropriately to get the best storage access performance. For VMware iSCSI networking, you should ensure that multipathing is correctly configured for the VMkernel ports. Jumbo frames and flow control should also be enabled to match the switch settings.
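On ESXi 5.x and later, the host-side steps can be scripted with esxcli along these lines (a hedged sketch; the vSwitch, vmk, vmhba and device names are placeholders for your environment, and the device ID is deliberately truncated):

```shell
# Enable jumbo frames on the iSCSI vSwitch and its VMkernel ports
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Bind each iSCSI VMkernel port to the software iSCSI adapter for multipathing
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Select Round Robin path selection for an EqualLogic volume
esxcli storage nmp device set --device=naa.6090a0... --psp=VMW_PSP_RR
```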