Not just a big switch
Fibre Channel directors don't just provide lots of ports, they also offer ways to connect disparate SANs, isolate data and devices within a fabric, and configure throughput for specific applications. We look at how the big three directors match up. (This article originally ran in the February 2006 issue of "Storage" magazine.)
No longer just a big box with lots of ports, the Fibre Channel (FC) director has become the cornerstone around which next-generation SANs will be built. As more organisations are faced with managing petabytes of storage, director-class switches are easing management tasks by isolating SANs within a single fabric, delivering a higher level of data protection and parcelling throughput to individual ports depending on changing application demands.
Port oversubscription occurs when the amount of internal switching fabric bandwidth allocated to a switch port is less than the device connection speed at that port. For example, if a port on an FC switch has a connection speed of 2Gbps, but is unable to achieve a wire-rate of 2Gbps, then the port is said to be oversubscribed. As a result, administrators need to plan how and under what circumstances to deploy these high port-count line cards.
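The ratio described above is simple to work out. As a rough illustration (the port counts and fabric figures below are hypothetical, not taken from any vendor's datasheet), oversubscription is just device-facing bandwidth divided by the backplane bandwidth allocated to the card:

```python
def oversubscription_ratio(ports, port_speed_gbps, backplane_gbps):
    """Ratio of device-facing bandwidth to allocated fabric bandwidth.

    A ratio above 1.0 means the ports are oversubscribed: collectively
    they can demand more bandwidth than the switching fabric provides.
    """
    return (ports * port_speed_gbps) / backplane_gbps

# Hypothetical 32-port card: 32 ports at 2Gbps sharing 32Gbps of fabric
print(oversubscription_ratio(32, 2, 32))  # 2.0 -> 2:1 oversubscribed

# Hypothetical 16-port card: 16 ports at 2Gbps on the same 32Gbps fabric
print(oversubscription_ratio(16, 2, 32))  # 1.0 -> full wire rate
```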
Core components
With core components such as passive backplanes, concurrent microcode upgrades, purpose-built ASICs and redundant hardware components essentially equal among FC directors, vendors are finding other ways to differentiate their products. And with the growing need for higher port counts, distance replication and connecting SAN islands, vendors are adding functionality to FC directors in the following key areas:
- Line cards
- 1Gbps, 2Gbps, 4Gbps and 10Gbps FC ports
- FC port buffer credits
- Inter-switch link (ISL) aggregation and connectivity options
There are tradeoffs between maximising port capacity and using faster FC speeds. McData's i10K supports 10Gbps FC ports, but those ports can only be connected to other i10K directors with matching 10Gbps ports: 10Gbps FC is based on a different underlying technology than 1Gbps, 2Gbps and 4Gbps FC and isn't backward compatible with them.
Users of Brocade's SilkWorm 48000 face similar issues. A SilkWorm 48000 fully populated with FC4-16 line cards is the only configuration in which the director can run every port at 4Gbps without any blocking of bandwidth. Brocade's FC4-32 line cards scale the 48000 up to its maximum port count, but that configuration can only operate at a maximum of 2Gbps without blocking.
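The FC4-16 versus FC4-32 trade-off follows directly from the arithmetic: if each slot gets a fixed fabric budget, doubling the port count halves the fastest speed every port can sustain at once. A minimal sketch, assuming a hypothetical 64Gbps-per-slot budget (chosen only because it is consistent with the behaviour described; Brocade's actual figures may differ):

```python
def max_nonblocking_speed(slot_bandwidth_gbps, ports):
    """Fastest per-port speed a line card can sustain without blocking,
    assuming a fixed fabric budget per slot (a simplifying assumption)."""
    return slot_bandwidth_gbps / ports

# With a hypothetical 64Gbps slot budget, a 16-port card can run
# at 4Gbps without blocking, while a 32-port card tops out at 2Gbps.
print(max_nonblocking_speed(64, 16))  # 4.0
print(max_nonblocking_speed(64, 32))  # 2.0
```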
Despite these concerns, lower per-port prices are driving the move to line cards with higher port counts. Vendors report that users should generally expect to pay a 10% to 25% premium for line cards that support a higher number of ports. And despite the potential of back-end bottlenecks using the higher FC port count cards, most users aren't at risk, say vendors, because few production environments are reaching throughput limits on their FC directors. Cisco recently checked the utilisation rates in its own production environment and found that most of its FC ports were averaging only 13MBps. This prompted Cisco to see if it could lower its own internal costs by increasing the number of ports on its blades.
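Cisco's 13MBps figure puts the headroom in perspective. Since FC at 1Gbps to 4Gbps uses 8b/10b encoding, roughly 100MBps of data moves per 1Gbps of raw line rate; a quick sanity check (the 80% encoding factor is the standard 8b/10b overhead, not a Cisco figure) shows how little of a 2Gbps port that average consumes:

```python
def utilisation_pct(observed_mbps, link_gbps, encoding_overhead=0.8):
    """Observed throughput as a percentage of usable link bandwidth.

    FC at 1/2/4Gbps uses 8b/10b encoding, so roughly 80% of the raw
    line rate carries data (about 100MBps per 1Gbps of raw rate).
    """
    usable_mbps = link_gbps * 1000 / 8 * encoding_overhead
    return 100 * observed_mbps / usable_mbps

# A 13MBps average on a 2Gbps FC port uses only a few percent of the link
print(round(utilisation_pct(13, 2), 1))  # 6.5
```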
To find the right balance between low and high port-count line cards, you need to identify the specific configurations and applications that require high-throughput FC ports. Applications such as backup/recovery and data replication, as well as FC ports dedicated to ISLs, demand high throughput. By taking advantage of the port buffer credit and ISL aggregation features on the director ports, and by matching each application or configuration to appropriate FC ports, you may be able to avoid purchasing lower port-count line cards at all.
Distance replication
The primary benefit of port buffer credits is keeping data flowing across distance. The number of buffer credits needed on each FC port depends on four factors:
- The amount of data going through the port
- The speed of the port
- The distance between the FC ports
- Whether the WAN gateway devices used provide additional buffering
For distance replication, vendors generally recommend approximately one port buffer credit for every kilometer over a 1Gbps link. In most situations, you'll only need to devote a few FC ports for long-distance replication with the rest of the ports reserved for local connectivity. To provide as much flexibility as possible, vendors offer choices for how buffer credits can be configured and re-allocated among ports.
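The rule of thumb above can be turned into a quick estimate. A minimal sketch, using the article's one-credit-per-kilometre-at-1Gbps guideline and assuming the requirement scales linearly with link speed; the gateway-buffering parameter is a hypothetical illustration of the fourth factor, not a vendor formula:

```python
import math

def buffer_credits_needed(distance_km, link_gbps, gateway_buffering_km=0):
    """Estimate buffer credits for a long-distance FC link.

    Rule of thumb: roughly one credit per kilometre at 1Gbps, scaling
    linearly with link speed. Gateway devices that add their own
    buffering (expressed here as kilometres covered, an assumption)
    reduce the distance the director ports must buffer themselves.
    """
    effective_km = max(distance_km - gateway_buffering_km, 0)
    return math.ceil(effective_km * link_gbps)

# A 50km replication link at 2Gbps needs roughly 100 credits
print(buffer_credits_needed(50, 2))  # 100
```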
Jerome M. Wendt ([email protected]) is a storage analyst specialising in open-systems storage and SANs. He has managed storage for both small and large organisations.