In my earlier article on how to evaluate and purchase a SAN, I noted that your main objective is to ensure that the SAN meets the needs of your business at the point of purchase and throughout its lifecycle.
Now that you've taken delivery of your new SAN, you have the daunting task of assembling controllers, disk arrays and network fabrics, and then connecting them with a pile of cabling that looks like a spaghetti factory. What key challenges will you face in that initial installation?
Did you spec the right kit for the job?
As anyone with hardware implementation experience knows, the best foundation for an effective setup process is to be sure the specs of the kit are correct for your requirements. For example, the fabric switches that connect the subsystem to the server host bus adapters (HBAs) are usually a third-party product and, as such, must be fit for purpose. The quality of all connector cables must also be up to spec.
On my last SAN implementation, we experienced setup delays because we couldn't get the SAN controllers to connect reliably to the disk arrays, no matter how many configurations we tried. Investigation revealed that the fibre connection leads supplied as part of the package couldn't support the data transfer rates we were putting through them, so the cables had to be replaced with a higher-rated alternative. The lesson here is that hardware problems can occur even with new equipment, whether through factory test faults or errors in component specifications made during the purchasing process.
Getting connected, setting up zones
There are two main methods to connect server HBAs to your SAN: Fibre Channel (FC) and iSCSI. Despite the difference in physical connection methods and data transfer rates, both approaches make use of zoning to establish server connections to SAN controllers.
Zoning is the division of networks to guarantee a path for reliable data transfer. In FC-based systems, this takes the form of a dedicated function native to enterprise switches, which are configured using desktop-installed management software.
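The core idea of FC zoning can be sketched as simple set membership: two ports can communicate only if they share at least one zone. The zone names and WWPNs (World Wide Port Names) below are hypothetical illustrations, not drawn from any real fabric.

```python
# Hypothetical zone set: zone name -> member WWPNs.
zones = {
    "zone_web01_ctrlA": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:00:00:01"},
    "zone_db01_ctrlB":  {"10:00:00:00:c9:aa:bb:02", "50:06:01:61:3b:00:00:02"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """Return True if both ports are members of at least one common zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

# The web server's HBA can reach controller A, but not controller B.
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:00:00:01"))  # True
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:61:3b:00:00:02"))  # False
```

Real switch management software builds exactly this kind of membership model for you; the point is that an HBA left out of the right zone simply cannot see its controller.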
With iSCSI, zoning isn't a dedicated function. Separating SAN traffic from the organisation's main data flow therefore requires a good knowledge of Ethernet virtual LANs (VLANs) or multiprotocol label switching (MPLS) networks to guarantee a reliable connection path. Enterprise Ethernet switches can then be configured to reserve a percentage of network bandwidth for SAN IP traffic to guarantee reliability.
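The separation described above boils down to keeping iSCSI endpoints on their own subnet. A minimal sketch, assuming a hypothetical dedicated iSCSI VLAN on 10.50.0.0/24, checks whether a given address belongs to the SAN segment or the general-purpose LAN:

```python
import ipaddress

# Hypothetical subnet assigned to the dedicated iSCSI VLAN.
SAN_SUBNET = ipaddress.ip_network("10.50.0.0/24")

def on_san_vlan(addr: str) -> bool:
    """Return True if the address sits inside the dedicated SAN subnet."""
    return ipaddress.ip_address(addr) in SAN_SUBNET

# Illustrative addresses: two SAN endpoints and one ordinary office machine.
for host, addr in [("server-hba", "10.50.0.11"),
                   ("san-target", "10.50.0.200"),
                   ("office-pc",  "192.168.1.42")]:
    print(f"{host}: {'SAN VLAN' if on_san_vlan(addr) else 'general LAN'}")
```

A check like this is useful during commissioning: any initiator or target that falls outside the SAN subnet is competing with day-to-day traffic for bandwidth.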
This means iSCSI zone configuration is an Ethernet network management task and, as such, should be well within the scope of experienced network managers. But if your Ethernet network is already creaking under the weight of day-to-day traffic, loading it with additional iSCSI traffic will likely cause performance problems.
As you can see, an iSCSI SAN implementation requires some forethought in relation to your Ethernet network setup -- the "just plug it in" approach may work for a while in an underutilised network environment, but Ethernet traffic tends to increase over time. There will inevitably be a reckoning at a later date, so rather than hoping this happens after you've left the company, it's best to spend time making the network configuration as efficient as possible from the start.
If the resources are available, you might consider a physically separate Ethernet backbone of dedicated switches that link servers to the SAN via iSCSI, with a separate VLAN allocated to each server connection. With this approach, and with the 10 Gigabit Ethernet standard now becoming available, iSCSI can realistically match the performance and reliability of FC connections.
Setting up LUNs
At the initial SAN disk volume setup stage, it's good practice to understand the concept of logical unit numbers (LUNs) and how they apply to your volume setup process. To briefly summarise, when creating a data volume on your SAN, you'll be required to input a unique LUN that identifies the volume to the attached server. This number translates into the physical location of the volume within the disk array, so specifying the wrong LUN when attaching a server can result in unexpected performance problems later on.
A good example of this is a SAN consisting of several tiers of disks with different performance ratings. If an administrator creates a volume on the fastest disks, it will be identified by a unique LUN that lets the administrator attach the required server. But if another volume is created on lower-performance disks and that LUN is presented to the server instead, the high-performance data ends up being served from a slower volume. Fortunately, with the advent of more user-friendly SAN administration interfaces, this process has become much simpler and more transparent. The days of manual LUN specification are waning; newer systems advertise volumes and their locations graphically, which considerably reduces the chance of specifying an incorrect LUN when you first attach your servers.
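The tiering mistake described above can be sketched as a simple mapping from LUN to disk tier, with a check that the attached LUN actually lives on the tier the server expects. The LUN numbers, volume names, and tier labels are illustrative assumptions:

```python
# Hypothetical LUN map: each LUN resolves to a volume on a particular tier.
lun_map = {
    0: {"volume": "oltp_data", "tier": "fast-ssd"},
    1: {"volume": "archive",   "tier": "slow-nearline"},
}

def attachment_is_correct(lun: int, expected_tier: str) -> bool:
    """Return True if the LUN's volume lives on the tier the server expects."""
    return lun_map[lun]["tier"] == expected_tier

print(attachment_is_correct(0, "fast-ssd"))  # True: correct attachment
print(attachment_is_correct(1, "fast-ssd"))  # False: data served from slower disks
```

Modern graphical interfaces perform this tier-to-volume bookkeeping for you, but the check is worth making explicitly when attaching servers on older systems that still require manual LUN entry.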
Completing the initial setup
Once the SAN is fully assembled and the servers can see the SAN volumes, then the basics of the administrative setup can be completed. Some administrators prefer to approach this from the SAN end first and configure volumes, backups, virtual server support and other settings in full before any servers are attached to the SAN.
Others prefer to attach the servers first and then fully configure each server volume as live and available for immediate use. Both approaches can be equally effective, and an initial SAN setup usually ends up as a mixture of the two, as data storage and server requirements often change between when the system was first specified and its actual installation date.
This article has detailed some key areas to be aware of when implementing a SAN. Follow a solid methodology when planning the implementation and you can avoid major sticking points, both initially and in the future. Making full use of the vendor and reseller support services bundled with a SAN purchase can also yield good configuration advice, drawn from the many installations and environments they've seen at other customers.
Martin Taylor is a converged network manager with the Royal Horticultural Society, which recently went through the process of buying and implementing a Compellent SAN as a repository for its library of 200,000 images.