Not just a big switch

Fibre Channel directors don't just provide lots of ports, they also offer ways to connect disparate SANs, isolate data and devices within a fabric, and configure throughput for specific applications. We look at how the big three directors match up.

(This article originally ran in the February 2006 issue of "Storage" magazine.)


No longer just a big box with lots of ports, the Fibre Channel (FC) director has become the cornerstone around which next-generation SANs will be built. As more organisations are faced with managing petabytes of storage, director-class switches are easing management tasks by isolating SANs within a single fabric, delivering a higher level of data protection and parcelling throughput to individual ports depending on changing application demands.

Of course, some things never change: First and foremost, companies look to directors to provide rock-solid stability with high levels of availability, throughput and port count. In this vein, the passive backplanes used in Brocade Communications Systems' SilkWorm 48000, Cisco Systems' MDS 9509 and McData's Intrepid 10000 (i10K) nearly eliminate the possibility of backplane failures. Each of these models also supports at least 1Tbps of internal bandwidth in a single chassis and 384 FC ports in a single rack; Brocade and Cisco offer configurations that support up to 768 FC ports in a rack. But as some vendors pack more ports into their line cards to meet growing user capacity demands, they're using port oversubscription to do so.

Port oversubscription occurs when the amount of internal switching fabric bandwidth allocated to a switch port is less than the device connection speed at that port. For example, if a port on an FC switch has a connection speed of 2Gbps but the fabric can't deliver a full 2Gbps wire rate to it, that port is oversubscribed. As a result, administrators need to plan how and under what circumstances to deploy these high port-count line cards.
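
To make the arithmetic concrete, here is a minimal sketch of the oversubscription calculation in Python. The line card figures used below are hypothetical and chosen purely for illustration; they are not any vendor's published specifications.

    def oversubscription_ratio(ports, port_speed_gbps, fabric_bandwidth_gbps):
        # Ratio of the total traffic the ports could offer to the fabric
        # bandwidth actually allocated to the line card; a value above 1.0
        # means the card is oversubscribed.
        return (ports * port_speed_gbps) / fabric_bandwidth_gbps

    # Hypothetical 16-port 4Gbps card allocated 64Gbps of fabric bandwidth:
    print(oversubscription_ratio(16, 4, 64))   # 1.0 -- every port can run at wire rate
    # Hypothetical 32-port 4Gbps card allocated the same 64Gbps:
    print(oversubscription_ratio(32, 4, 64))   # 2.0 -- 2:1 oversubscribed under full load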

Core components
With core components such as passive backplanes, concurrent microcode upgrades, purpose-built ASICs and redundant hardware components essentially equal among FC directors, vendors are finding other ways to differentiate their products. And with the growing need for higher port counts, distance replication and connecting SAN islands, vendors are adding functionality to FC directors in the following key areas:

  • Line cards
  • 1Gbps, 2Gbps, 4Gbps and 10Gbps FC ports
  • FC port buffer credits
  • Inter-switch link (ISL) aggregation and connectivity options
Vendors offer FC director line cards that allow users to configure directors to support a variety of port speeds and counts. For instance, Brocade's SilkWorm 48000 offers three line cards, each supporting a different combination of port count and speed. Brocade's FC4-16 and FC2-16 line cards each provide 16 FC ports, with the FC2-16 supporting 2Gbps and the FC4-16 supporting 4Gbps. For users to achieve the maximum 768 port count on the SilkWorm 48000, they need to use Brocade's FC4-32 line cards.

There are tradeoffs when maximising port capacity and using faster FC speeds. McData's i10K supports 10Gbps FC ports, but those ports can only be connected to other i10K directors with the same 10Gbps FC ports because 10Gbps FC is based on a different technology from the 1Gbps, 2Gbps and 4Gbps speeds.

Users of Brocade's SilkWorm 48000 will encounter similar issues. A SilkWorm 48000 fully populated with FC4-16 line cards is the only configuration in which all of the director's ports can operate at 4Gbps without any blocking of bandwidth. Brocade's FC4-32 line cards allow the 48000 to scale to its maximum port count, but that configuration can only operate at a maximum of 2Gbps without blocking.
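
This trade-off follows directly from how much fabric bandwidth each line card slot receives. The short sketch below assumes a hypothetical 64Gbps per slot (a figure chosen to match the behaviour described above, not a published Brocade specification) to show why a 16-port card can run non-blocking at 4Gbps while a 32-port card tops out at 2Gbps.

    def max_nonblocking_speed_gbps(ports, slot_bandwidth_gbps):
        # Highest per-port speed at which every port on the card can
        # still run at full wire rate simultaneously.
        return slot_bandwidth_gbps / ports

    print(max_nonblocking_speed_gbps(16, 64))   # 4.0 -- 16-port card is non-blocking at 4Gbps
    print(max_nonblocking_speed_gbps(32, 64))   # 2.0 -- 32-port card is non-blocking only at 2Gbps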

Despite these concerns, lower per-port prices are driving the move to line cards with higher port counts. Vendors report that users should generally expect to pay a 10% to 25% premium for line cards that support a higher number of ports, although the cost per port still works out lower. And despite the potential for back-end bottlenecks with the higher port-count cards, most users aren't at risk, say vendors, because few production environments are reaching throughput limits on their FC directors. Cisco recently checked the utilisation rates in its own production environment and found that most of its FC ports were averaging only 13MBps. This prompted Cisco to see if it could lower its own internal costs by increasing the number of ports on its blades.

To find the right balance between low and high port-count line cards, you need to identify the specific configurations and applications that require high-throughput FC ports. Applications such as backup/recovery and data replication, as well as FC ports dedicated to ISLs, require high port throughput. By taking advantage of the port buffer credit and ISL aggregation features on the director ports, and by balancing which application or configuration uses which FC ports, you may find there's no need to purchase lower port-count line cards at all.

Distance replication
The primary benefit of port buffer credits is to keep data flowing across long distances. The number of buffer credits needed on each FC port will depend on four factors:

  • The amount of data going through the port
  • The speed of the port
  • The distance between the FC ports
  • Whether the WAN gateway devices used provide additional buffering
Default port buffer settings on most directors will work fine without adjustment. The defaults range from eight buffer credits per port on Brocade's SilkWorm 48000 to 16 on McData's i10K, which is sufficient for most locally attached AIX, Hewlett-Packard, Sun Microsystems and Windows servers, and for most storage arrays. When FC ports are used for distance replication, more buffer credits are generally required.

For distance replication, vendors generally recommend approximately one port buffer credit for every kilometre over a 1Gbps link. In most situations, you'll only need to devote a few FC ports to long-distance replication, with the rest reserved for local connectivity. To provide as much flexibility as possible, vendors offer choices for how buffer credits can be configured and re-allocated among ports.
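
That rule of thumb can be turned into a rough planning calculation. The sketch below is illustrative only: it uses the one-credit-per-kilometre figure quoted above and assumes the requirement scales linearly with link speed; actual sizing also depends on frame sizes and on any extra buffering provided by WAN gateway devices.

    import math

    def buffer_credits_needed(distance_km, link_speed_gbps, credits_per_km_at_1gbps=1.0):
        # Rule-of-thumb estimate: roughly one credit per kilometre at 1Gbps,
        # scaled linearly as the link speed increases.
        return math.ceil(distance_km * link_speed_gbps * credits_per_km_at_1gbps)

    print(buffer_credits_needed(50, 1))   # ~50 credits for a 50km link at 1Gbps
    print(buffer_credits_needed(50, 2))   # ~100 credits for the same span at 2Gbps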


Jerome M. Wendt ([email protected]) is a storage analyst specialising in open-systems storage and SANs. He has managed storage for organisations both small and large.
