A storage area network (SAN) organises a broad assortment of storage devices into a single storage resource that can then be provisioned, allocated and managed for the entire enterprise. Although issues like storage capacity, performance and management often receive the most attention, the connectivity between SAN devices plays a critical role in successful SAN deployment. Each switch and storage system on the SAN must be interconnected -- usually through optical fiber or copper cabling -- and the physical interconnections must support bandwidth levels that can adequately handle the peak data activities that occur. This overview details the role of Fibre Channel (FC), Ethernet and iSCSI connectivity on a SAN.
FC is the quintessential SAN interconnect, and virtually every storage switch and storage platform provides FC ports. Multiple FC ports support simultaneous data streams, and individual ports can often be aggregated into groups for even higher effective bandwidth. As an example, the All-In-One Buying Guide notes that the InServ E200 Storage Server from 3PAR Data Inc. supports up to 12 FC ports, while the TagmaStore AMS1000 from Hitachi Data Systems Inc. (HDS) provides up to eight FC ports. Servers and other devices can also be fitted with FC host bus adapters (HBAs) to enable an FC interface.
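The effect of port aggregation can be sketched with simple arithmetic: the idealised effective bandwidth of a port group is the per-port speed times the number of ports (real-world throughput will be lower due to protocol overhead and uneven traffic distribution). The helper name below is hypothetical, used only for illustration:

```python
def aggregate_bandwidth_gbps(port_speed_gbps: float, port_count: int) -> float:
    """Idealised effective bandwidth of an aggregated FC port group.

    Ignores protocol overhead and load-balancing inefficiency, so this is
    an upper bound rather than an expected throughput figure.
    """
    return port_speed_gbps * port_count

# Four 4 Gbps FC ports grouped together give, at best, 16 Gbps effective.
best_case = aggregate_bandwidth_gbps(4, 4)
```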
As a serial interface, FC bandwidth is denoted in Gbps. Early FC implementations ran at 1 Gbps per port, and 2 Gbps reigned until recently. Today, 4 Gbps FC is readily available and 10 Gbps implementations are appearing on some high-end systems and director-class switches. FC operates with numerous protocols, though SCSI and IP are the most popular implementations.
FC can use several types of physical media. Twisted pair cable is used to cover relatively short distances at low speeds between FC devices. Coaxial cables generally offer better shielding against signal interference and can run across somewhat longer distances. Optical fiber is routinely used to carry the fastest signals across distances up to 10 km.
While Ethernet connectivity is generally used on the greater local area network (LAN), its use in the SAN has been limited by its relatively slow bandwidth. Traditional Ethernet ports support 10/100 Mbps -- far slower than FC. This has limited Ethernet in the SAN to basic management tasks. For example, a storage device or switch might include a single Ethernet port that connects the device to the LAN, over which an administrator can manage the device. Ethernet traffic typically relies on two protocols: Transmission Control Protocol (TCP), which handles the organisation of data into packets, and Internet Protocol (IP), which handles the way those data packets are addressed. In fact, the terms "Ethernet" and "TCP/IP" are often used interchangeably.
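The division of labour between the two protocols can be seen in a minimal loopback sketch: IP supplies the addressing (a host and port), while TCP carries a reliable byte stream between the endpoints. The message content below is purely illustrative:

```python
import socket
import threading

# IP layer: bind to a loopback address with an OS-assigned ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

def echo_once():
    # TCP layer: accept one connection and echo its payload back intact.
    conn, _ = server.accept()
    conn.sendall(conn.recv(64))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.create_connection((host, port))
client.sendall(b"device management query")
reply = client.recv(64)

t.join()
client.close()
server.close()
```

The same TCP/IP stack that carries this trivial exchange is what a management workstation uses to reach a switch's Ethernet port on the LAN.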
Ethernet bandwidth is increasing today, which boosts performance on the LAN and also makes Ethernet more practical for carrying data on the SAN. One Gigabit Ethernet (GigE) is now common on many servers and switches, and the eventual emergence of 10 GigE promises to put Ethernet on par with 10 Gbps Fibre Channel.
Traditional Ethernet LAN deployments used coaxial cables, but twisted-pair cabling (e.g., Category 5 or Category 6 Ethernet cables) is the most common LAN cabling. Ten GigE often relies on optical fiber with transmission distances up to 40 km, which makes the technology far more expensive and limits its use to network backbones. As copper cabling becomes available for 10 GigE, the technology should see far more use within data centers and SANs.
iSCSI and FCIP
Fibre Channel SANs have long been challenged by deployment expense and management complexity -- usually keeping SANs out of reach of smaller IT organisations. The recent development of iSCSI promises to ease these challenges by encapsulating SCSI commands into IP packets for transmission over an Ethernet connection, rather than an FC connection. This approach eliminates FC in favor of Ethernet, which allows iSCSI to transfer data over LANs, WANs or the Internet and supports storage management over long distances.
In actual practice, a user or application will cause the operating system to generate corresponding SCSI storage commands. Those SCSI commands and data are then encapsulated and IP headers are added to make packets. The packets can then be sent over an ordinary Ethernet connection. The remote end of the iSCSI connection disassembles the encapsulated content and passes the SCSI commands to the SCSI controller and storage device. This also works in reverse, so any data or responses can be sent back to the user or application across the Ethernet connection.
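The encapsulation step above can be sketched in simplified form. The toy 5-byte header below is illustrative only: a real iSCSI PDU begins with a 48-byte Basic Header Segment, and the TCP, IP and Ethernet headers are added by the operating system's network stack, not by the application.

```python
import struct

# SCSI READ(10) CDB: opcode 0x28, LBA 8, transfer length of 1 block.
READ_10 = bytes([0x28, 0, 0, 0, 0, 8, 0, 0, 1, 0])

def encapsulate_scsi(cdb: bytes) -> bytes:
    """Wrap a SCSI CDB in a toy iSCSI-style header for illustration.

    The header here is just a 1-byte opcode plus a 4-byte payload length;
    it stands in for the real 48-byte iSCSI Basic Header Segment.
    """
    header = struct.pack("!BI", 0x01, len(cdb))
    return header + cdb

pdu = encapsulate_scsi(READ_10)
# In practice the PDU is written to a TCP socket; the OS then adds the
# TCP/IP headers before the frame leaves the Ethernet interface, and the
# remote end strips them off and hands the CDB to its SCSI controller.
```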
Another alternative is FCIP. FCIP translates FC commands and data into IP packets, which can be exchanged between distant FC SANs. It's important to note that FCIP only works to connect FC SANs, but iSCSI can run on any Ethernet network.