Advanced storage area networks overview

Stephen J. Bigelow, Features Writer
A storage area network (SAN) is vital to the enterprise. SANs consolidate storage onto a single, specialized network that is accessible across the organization. But SAN deployments can be complicated -- an administrator must configure the hardware elements to achieve a mix of high performance and high reliability. Then the storage resources must be organized and allocated to the various users or applications across the organization. Now that you've learned a bit about the basics of storage networks, let's look at some of the details involved in components and architecture, connectivity and protocols, and management issues.

SAN components and architecture

Although the cabling used for many SAN implementations is tough and reliable, a few important rules can ease deployment and subsequent troubleshooting. Whether optical or copper, cables should always be labeled in a clear and consistent manner throughout the physical plant -- it doesn't matter what labeling scheme you adopt, but it should be followed faithfully. Once cabling is installed or replaced, take the time to note it in SAN (and physical plant) documentation so that other installers or technicians can trace the SAN's connections later. Poor installation practices, such as tight runs, excessive bends (crimps) and inadequate protection across floors and other exposed areas, lead to premature failure; this is particularly true for optical cable. It is vitally important for administrators or other IT staff to oversee the work performed by professional cable installers to verify that reasonable installation practices are followed.

Normally, a single host bus adapter (HBA) is enough to establish a connection to each SAN server or storage device, but traffic congestion and connection reliability are always a concern. For example, a single HBA link handles all of the traffic to a storage device, so excess traffic can congest the HBA and reduce performance. Also, if an HBA fails, communication is cut off and the affected portion of the SAN can become unavailable. Storage administrators overcome this by implementing multiple HBAs at the host server and storage device. Multiple HBAs improve traffic handling through load balancing and provide redundancy in the event of a fault. Software tools are used to define the load balancing and failover behaviors of the HBAs. These tactics also help to avoid single points of failure in the SAN, and are central to the notion of "high-availability" storage networks.
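The behavior those tools define can be sketched in a few lines of Python. This is a toy model only -- the path names and round-robin policy below are invented for illustration, not taken from any vendor's multipathing driver:

```python
from itertools import cycle

class MultipathIO:
    """Toy model of HBA multipathing: round-robin load balancing
    with failover to the remaining healthy paths."""

    def __init__(self, paths):
        self.healthy = list(paths)          # e.g., ["hba0", "hba1"] (hypothetical names)
        self._rr = cycle(self.healthy)

    def mark_failed(self, path):
        """Simulate an HBA or link fault; traffic fails over."""
        if path in self.healthy:
            self.healthy.remove(path)
            self._rr = cycle(self.healthy)  # rebuild the rotation without the dead path

    def send(self, block):
        if not self.healthy:
            raise IOError("all paths down -- storage unreachable")
        path = next(self._rr)               # load balance across surviving paths
        return f"block {block} -> {path}"

io = MultipathIO(["hba0", "hba1"])
print(io.send(1))        # block 1 -> hba0
io.mark_failed("hba0")   # a single-path fault...
print(io.send(2))        # ...fails over: block 2 -> hba1
```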

Today, switches are taking on more functionality in the network, supporting management features and intelligent functions, such as switch-level storage virtualization, storage tiering and data migration. The main benefit of this development is heterogeneity -- since switch traffic is largely independent of the particular manufacturers' servers or storage devices on the SAN, a switch can often oversee a much larger suite of systems without interoperability concerns. If you're considering iSCSI SAN technology in your enterprise, opt for Ethernet (IP) switch devices rather than Fibre Channel (FC) switches. Further, Ethernet switches intended for iSCSI SAN deployment should offer high performance and low latency while avoiding the port oversubscription that is typical of commodity Ethernet switch products. SANRAD's V-Switch family, for example, is designed specifically to interface iSCSI host servers to FC SAN resources.

SAN connectivity and protocols

SAN connectivity is gaining bandwidth, allowing more users and applications to access burgeoning volumes of data. Fibre Channel connectivity is particularly noteworthy, building on traditional 1 gigabit per second (Gbps) and 2 Gbps speeds with the 4 Gbps support available today, and even some 10 Gbps ports. However, 10 Gbps FC is still in limited deployment and only found in high-end SAN devices like McData Corp.'s Intrepid series or the MDS 9513 director-class switch from Cisco Systems Inc. Since 10 Gbps FC is not backward compatible with slower FC port speeds, it is normally reserved for inter-switch links (ISLs). 8 Gbps FC has yet to emerge, but is expected to offer the backward compatibility needed to gain acceptance in current infrastructures. Storage switches typically support common protocols like the SCSI Fibre Channel Protocol (FCP) on open systems and FICON for IBM mainframes. An increasing number of storage switch products also include Ethernet/IP ports to handle the iSCSI, iFCP and FCIP protocols.

As SANs grow, the number of switches deployed in the SAN also increases, and this can eventually lead to performance degradation due to interswitch latency (passing traffic switch-to-switch). Switches normally interconnect using dedicated inter-switch link ports. Reduce ISL latency by choosing switches with fast ISL ports, or trunk multiple ISL ports together for improved performance and redundancy.
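A rough sketch shows why trunking pays off. The port speed and per-hop latency figures below are assumptions chosen for illustration, not vendor specifications:

```python
# Illustrative only: the figures are hypothetical, not measured values.
ISL_PORT_GBPS = 4          # one 4 Gbps inter-switch link
PER_HOP_LATENCY_US = 2.0   # assumed latency added per switch hop, microseconds

def path_cost(hops, trunked_ports=1):
    """Aggregate bandwidth and cumulative latency for a switch path."""
    bandwidth = ISL_PORT_GBPS * trunked_ports   # trunking multiplies ISL bandwidth
    latency = hops * PER_HOP_LATENCY_US         # each switch hop adds latency
    return bandwidth, latency

print(path_cost(hops=3))                   # (4, 6.0)  -- single ISL
print(path_cost(hops=3, trunked_ports=4))  # (16, 6.0) -- four-port trunk
```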

iSCSI SAN deployments are quickly gaining ground in medium-sized businesses and enterprise departments -- largely due to the simplicity and low cost of Ethernet technology, as well as the ready availability of Gigabit Ethernet (GigE) components. iSCSI performance is often bolstered by the use of network interface cards (NICs) with TCP/IP offload engine (TOE) features. iSCSI HBAs are available from major component makers, such as Adaptec Inc., Intel Corp. and QLogic Corp. With 10 GigE on the horizon, iSCSI may eventually become a serious contender against Fibre Channel for enterprise SAN deployments. iSCSI software and operating system (OS) support are also improving, with initiator (client) and target (server) software readily available for most enterprise OSs, including Windows, AIX, NetWare and Linux. The principal issue with iSCSI is security, and network administrators must take great care to keep iSCSI SAN traffic segregated from everyday user traffic using a virtual LAN (VLAN) or a separate physical network.

Whether you're using FC or IP connectivity in the SAN, it's important to consider the connection speeds at every link between a server and storage device. As SAN deployments grow and change, administrators sometimes forget that all data in a given path will only be as fast as the slowest link. Investing in a 10 Gbps switch may not be worthwhile if the main storage system is only communicating at 2 Gbps. Auto-negotiation can also be an issue if one link fails to shift its speed properly to accommodate another data rate. Many administrators choose to eliminate potential connectivity problems by configuring the speed of each link manually.
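The slowest-link rule reduces to simple arithmetic. In this Python sketch, the path and its link speeds are hypothetical:

```python
# Hypothetical path from server to storage, with link speeds in Gbps.
path_links = {"host HBA": 4, "edge switch": 10, "ISL": 10, "array port": 2}

# End-to-end throughput is capped by the slowest link in the path.
bottleneck = min(path_links, key=path_links.get)
print(f"effective speed: {path_links[bottleneck]} Gbps "
      f"(limited by {bottleneck})")
# effective speed: 2 Gbps (limited by array port)
```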

Connectivity also ties closely to reliability, and SAN architectures frequently implement multiple simultaneous connections between HBAs, switches and storage systems. By provisioning redundant connections along different paths, SAN architects eliminate single points of failure that could otherwise cut off storage from mission-critical applications -- a fault in one path can "fail over" to another path. Multiple paths can also be aggregated to improve performance.

SAN management

As storage capacity spirals upward, IT staffing has remained virtually unchanged, and has even been reduced in some cases. An administrator who might have managed 3 terabytes (TB) just a few years ago is now often responsible for 15 TB or more. This trend has put a new emphasis on SAN management -- especially in areas of process control and automation.

RAID platforms often include management tools, but the goal is to provide administrators with maximum flexibility while incurring an absolute minimum of downtime. For example, RAID arrays have traditionally required downtime to add disks or to change the RAID group size, and the entire RAID group would have to be rebuilt when changing RAID levels (e.g., migrating from RAID 5 to RAID 6). Today, look for RAID platforms and management tools that support these operations on the fly. RAID controllers are also touting advanced drive diagnostics, launching pre-emptive rebuilds of disks that report questionable behavior.
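The usable-capacity tradeoff behind a RAID 5-to-RAID 6 migration is straightforward parity arithmetic. This sketch assumes equal-sized disks; the drive count and size are made up for illustration:

```python
def usable_tb(disks, disk_tb, parity):
    """Usable capacity after parity overhead: RAID 5 sacrifices one
    disk's worth of capacity to parity, RAID 6 sacrifices two."""
    return (disks - parity) * disk_tb

drives, size = 8, 1.0                     # eight 1 TB drives (hypothetical)
print(usable_tb(drives, size, parity=1))  # RAID 5: 7.0 TB usable
print(usable_tb(drives, size, parity=2))  # RAID 6: 6.0 TB usable
```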

One of the biggest growth areas in SAN management is storage resource management (SRM) tools. SRM tools can analyze and report on available storage systems and utilization, and improved analytical features can even help to identify and ease bottlenecks and other performance trouble spots. For example, Softek Storage Solutions Corp.'s Performance Tuner first establishes a performance baseline across the SAN, then proactively alerts IT staff when performance falls outside of the norm and offers suggestions for improvement.
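A baseline-then-alert scheme of that general sort can be approximated in a few lines of Python. The sample readings and three-sigma threshold here are invented, not drawn from Performance Tuner:

```python
from statistics import mean, stdev

# Hypothetical baseline of SAN response-time samples (milliseconds).
baseline = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1]
avg, sd = mean(baseline), stdev(baseline)

def check(sample_ms, tolerance=3.0):
    """Alert when a reading drifts more than `tolerance` standard
    deviations from the established baseline."""
    if abs(sample_ms - avg) > tolerance * sd:
        return f"ALERT: {sample_ms} ms is outside the norm"
    return f"ok: {sample_ms} ms"

print(check(4.2))   # ok: within the baseline's normal range
print(check(9.7))   # ALERT: well outside the norm
```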

But the biggest push for SRM tools is toward heterogeneity and automation -- creating, allocating and managing large pools of storage within the data center. CA's BrightStor product supports over 100 storage arrays, SAN switches, tape libraries and applications. This kind of interoperability is important in order to centralize management functions. BrightStor also supports 500 million files and centralizes backup functions across a variety of popular products like ARCserve, Tivoli Storage Manager, Legato Networker and Veritas NetBackup. Automated provisioning features also save considerable time for storage administrators, and emerging chargeback capability ensures that enterprise SAN users pay for only the storage capacity that they are utilizing -- a crucial element in tiered storage strategies.
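Chargeback itself is simple arithmetic once utilization is measured per tier. The departments, tiers and rates in this sketch are hypothetical:

```python
# Hypothetical monthly chargeback: bill departments only for capacity used,
# with per-tier rates (the link to tiered storage strategies).
RATES = {"tier1": 300.00, "tier2": 150.00, "tier3": 50.00}  # assumed $/TB/month

usage = [  # (department, tier, TB used) -- invented sample data
    ("finance", "tier1", 1.2),
    ("finance", "tier3", 4.0),
    ("engineering", "tier2", 7.5),
]

for dept, tier, tb in usage:
    print(f"{dept:12s} {tier}  {tb:4.1f} TB  ${tb * RATES[tier]:8.2f}")
```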

