Fibre is the future
The Fibre Channel interface offers huge bandwidth, but it is currently too expensive for most businesses.

For the general population it is easy to imagine a data transfer network as a broad, open, eight-lane freeway, with "blocks" of different types of data (like video and graphics) moving at high speed like buses, trucks and cars to various locations all over the world.

Unfortunately, while much has been made of the concept in the computing world, there has been little explanation of how it will actually be implemented. The technical solution used will not only have to be flexible and inexpensive, it will also require high performance characteristics to support extremely high data transfer rates. One solution that addresses those issues head on, but has not been widely publicised, is a relatively new technology called Fibre Channel.

Fibre Channel is an industry-standard interface adopted by the American National Standards Institute. It is usually thought of as a system-to-system or system-to-subsystem interconnection architecture that uses optical cable between systems in a point-to-point (or switch) configuration. Certainly, this is how it was first envisioned, and among the many protocols defined for it are IPI (Intelligent Peripheral Interface) and IP (Internet Protocol) which are ideal in those configurations.

Fibre Channel has since evolved to include electronic (non-optical) implementations and the ability to connect many devices - including disk drives - to a host port in a relatively low-cost manner. This addition to the greater set of Fibre Channel specifications is called Fibre Channel Arbitrated Loop (FC-AL). FC-AL has made it possible for Fibre Channel to be used as a direct disk attachment interface, opening up whole new levels of I/O performance to designers of high-throughput, performance-intensive systems. SCSI-3 (Small Computer Systems Interface-3) has been defined as the disk protocol, technically referred to as SCSI-FCP (Fibre Channel Protocol) for FC-AL.

Fibre Channel has been viewed as too expensive and power-hungry for lower-level functions like peripheral attachment, raising important questions as to why it should be used as a peripheral interface. What is Fibre Channel-Arbitrated Loop? Why use it? How can it be implemented? It is these questions that we will examine in detail.

What is Fibre Channel-Arbitrated Loop?

The Fibre Channel interface is a loop architecture, as opposed to a bus like standard SCSI or IPI. A Fibre Channel loop can contain any combination of hosts and disks, up to a maximum of 126 devices.

The loop structure enables the rapid exchange of data from device to device. A PBC (Port Bypass Circuit), which is located on the backplane, is the logic that enables devices to be removed or inserted without disrupting the operation of the loop. In addition, the PBC logic can take drives offline or bring them back online by sending a command to any device to remove it from loop operation or reinstall it onto the loop.
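The bypass behaviour described above can be pictured with a minimal sketch. This is purely illustrative (not a real FC-AL implementation, and the class and function names are invented for the example): a loop of ports in which a Port Bypass Circuit routes traffic around any bypassed device, so frames still circulate among the remaining ports.

```python
# Illustrative sketch of PBC behaviour: a bypassed port is simply routed
# around, so the loop keeps working with the remaining devices.

class LoopPort:
    def __init__(self, name):
        self.name = name
        self.bypassed = False  # PBC state: True means "routed around"

def active_path(ports):
    """Names of the ports a frame visits in one trip round the loop."""
    return [p.name for p in ports if not p.bypassed]

ports = [LoopPort(f"disk{i}") for i in range(4)]
ports[2].bypassed = True  # PBC takes disk2 offline without breaking the loop
print(active_path(ports))  # ['disk0', 'disk1', 'disk3']
```

The real circuit does this in hardware on the backplane, of course; the point is only that removal or insertion changes the path, not the loop's integrity.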

SCSI has become the interface of choice for medium and large-sized computing systems. At the same time, the computer industry's experience with SCSI has brought to light the need for improvements in cable distance, number of variations, command overhead, array feature support and connectability.

Cable distance limitations

SCSI comes in several different versions, but the majority of products shipping today are of the single-ended variety, primarily because single-ended SCSI costs less and is widely available. If a SCSI bus is limited to connecting devices found within a single cabinet and the interface cable length does not exceed three metres, it usually is not a problem to use single-ended SCSI. If the bus must link several cabinets, however, and in the course of doing so must convert from an unshielded ribbon cable in one cabinet to a shielded one externally, then back again to a ribbon cable within the second cabinet, the potential for SCSI signal problems increases. Differential SCSI solves this cabling issue, but usually requires that a system have both single-ended and differential ports, because most non-disk peripherals use only single-ended SCSI (single-ended SCSI devices cannot be attached to a differential bus).

For many companies building computers, this creates a problem of logistical complexity. A systems manufacturer might have to buy two or three versions of the same drive simply because various system models require different flavours of SCSI. Costs can get high when a company has to account for inventory, sparing, qualification and testing for multiple versions. Such costs could be avoided if a single version of the interface were used.

SCSI's utility has improved several times since its introduction through increased data rates. Unfortunately, only the data rate - the rate at which data is transferred off the disk - has improved; the rate at which SCSI command and status information passes over the bus has not changed at all. SCSI command overhead therefore takes up an ever larger proportion of bus time, which makes it difficult to support multiple drives before the bus is saturated. Peak transfer rate is often reached with just three drives per eight-bit bus, because three busy drives generate so much SCSI protocol traffic that additional drives cannot get enough bandwidth to add to the aggregate data rate.
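A back-of-envelope model shows how this saturation happens. All the figures below are illustrative assumptions, not measurements: each drive consumes a share of bus time for its data plus a fixed protocol overhead per command, and the bus runs out of time after only a few drives.

```python
# Rough model of SCSI bus saturation: per-drive bus share = data time +
# per-command protocol overhead. Figures are illustrative assumptions.

BUS_RATE_MB_S = 20.0      # assumed fast SCSI bus bandwidth
DRIVE_RATE_MB_S = 5.0     # assumed sustained media rate of one busy drive
IO_SIZE_KB = 64           # assumed transfer size per command
OVERHEAD_MS_PER_IO = 1.0  # assumed bus time lost to arbitration/messages

def max_drives():
    ios_per_sec = DRIVE_RATE_MB_S * 1024 / IO_SIZE_KB        # commands/s per drive
    data_fraction = DRIVE_RATE_MB_S / BUS_RATE_MB_S          # bus time moving data
    overhead_fraction = ios_per_sec * OVERHEAD_MS_PER_IO / 1000.0
    per_drive = data_fraction + overhead_fraction            # bus share per drive
    return int(1.0 / per_drive)

print(max_drives())  # 3 under these assumptions
```

Notice that the overhead term does not shrink as drives get faster, which is exactly the problem the article describes: the protocol, not the data, ends up rationing the bus.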

Technological advances mean that magnetic hard disk areal density is increasing at about 60 per cent per year. Since bit density (measured in bits per inch), one of the two components of areal density, increases at about 30 per cent per year, data rate automatically increases proportionately. Disk rotation speeds, which have doubled over the last five years from 3,600 to 7,200rpm and continue to climb, also contribute to higher data rates. In the next few years drives will be introduced which can sustain transfers in excess of 20Mb/s, thanks solely to improvements in bit density and rotational speed. Already there are disk drives in the market which can transfer data at rates over 10Mb/s. An interface limited to 20Mb/s can only support one drive at that data rate, making it technically impractical for future applications.
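A quick compounding check supports the timescale claimed above: at roughly 30 per cent annual bit-density growth, a drive sustaining 10Mb/s today passes 20Mb/s in about three years, before rotation-speed gains are even counted.

```python
# Compound-growth check: years until a 10Mb/s drive exceeds 20Mb/s at
# ~30% annual bit-density growth (rotation-speed gains would shorten this).

rate = 10.0   # Mb/s today
years = 0
while rate < 20.0:
    rate *= 1.30   # ~30% annual bit-density growth
    years += 1
print(years, round(rate, 1))  # 3 22.0
```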

The increased use of SCSI disks in arrays and fault-tolerant configurations has revealed further limitations and problems of SCSI, particularly when devices must be inserted or removed from the interface without disrupting operations. It has taken considerable ingenuity to get SCSI to run in the presence of the glitches caused by insertions and removals.

New applications, like video and image processing, have created a demand for huge increases in storage capacity. Some capacity requirements are so large that it is difficult to configure enough SCSI buses to make sufficient drive addresses available to attach the needed number of drives. Simply increasing the addressability of SCSI, such as making it possible to have more than 15 devices per SCSI wide bus, would not be a solution because more bus bandwidth is needed to support the additional drives.

A disk interface based on the Fibre Channel standard can resolve each of these problems and provide functionality that has only been dreamed about up to now. Fibre Channel also has benefits that go beyond current disk drive interface solutions: it offers significantly higher bandwidth than conventional interface technologies, along with superior drive array features and improved network storage capabilities.

The Fibre Channel loop supports data rates up to 100Mb/s. Video storage and retrieval, supercomputer modelling, and image processing are among the applications growing in popularity that demand this kind of data rate. Moreover, as file servers are looked upon as replacements for mainframe computers, they will require ever higher transaction rates to provide comparable levels of service. Since most Unix and Windows servers lack the sophisticated I/O channel and controller structures of mainframe computers, they have not been able to match the large number of high-performance disk drives enterprise systems can support. Fibre Channel loops attached to such high-performance buses as S-Bus, Turbochannel or PCI - all of which run at 70Mb/s or faster - offer I/O configurations that can sustain mainframe-like I/O rates. Performance estimates suggest that if a system requests the relatively short I/O transfers typical of business transaction processing (8K or less), more than 60 drives can be supported without saturating the loop and bogging down performance. Comparing the single host adapter to the many channels and controllers mainframes employ to attach as many drives illustrates the remarkable economics of Fibre Channel-attached disk storage.
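The 60-drive claim is easy to sanity-check. Assuming an illustrative per-drive random-I/O rate for a drive of this era (the figure below is an assumption, not from the article), 60 drives doing 8K transfers load the loop to well under half its bandwidth.

```python
# Sanity check of the "more than 60 drives" claim for short transaction
# I/Os. IOPS_PER_DRIVE is an illustrative assumption.

LOOP_RATE_MB_S = 100.0
IO_SIZE_KB = 8
IOPS_PER_DRIVE = 100      # assumed random-I/O rate of one drive
DRIVES = 60

load = DRIVES * IOPS_PER_DRIVE * IO_SIZE_KB / 1024.0  # MB/s offered to the loop
print(load, load < LOOP_RATE_MB_S)  # 46.875 True
```

Even doubling the assumed per-drive rate leaves headroom, which is why command-bound transaction workloads scale so much further on the loop than on a SCSI bus.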

Array controllers have traditionally been constructed with multiple standard SCSI interfaces for drive attachment, which enables the controller to supply data and I/O rates equal to several times those achievable from a single interface. Designing a specific number of drive interfaces into a given controller, however, has forced the customer to deal with the parity amortisation, granularity and controller cost associated with that decision. It severely limits the designer's choices for configuring the optimal combination of economy (maximising the number of data drives per parity drive), granularity (minimising the atomic unit of capacity per array) and performance. A fully populated controller is in danger of bogging down under the burden of supporting the level of I/O activity that several rows of drives can generate.

For example, an array that has six rows amortises the parity data over five data drives, and usually requires adding six drives each time capacity must be increased. Using Fibre Channel it is possible for the first time to have more than enough bandwidth on a single bus to configure an array along a single interface (the array shown uses two loops to take advantage of Fibre Channel dual porting for better performance and reliability, but it could just as easily be built with one Fibre Channel loop if full fault tolerance is not called for). Seagate Fibre Channel drives also have an exclusive XOR logic engine which allows easy implementation of RAID 5, avoiding the high costs associated with traditional RAID controllers.
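The XOR arithmetic that such an engine performs in hardware is simple enough to sketch in a few lines. This is an illustration of the RAID 5 parity principle only, not Seagate's implementation: parity is the byte-wise XOR of the data blocks, and any one lost block can be rebuilt by XORing the survivors with the parity.

```python
# RAID 5 parity principle: parity = XOR of all data blocks; XORing the
# surviving blocks with the parity reconstructs any single lost block.

from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]               # three data drives' blocks
parity = xor_blocks(data)                         # the parity drive's block
rebuilt = xor_blocks([data[0], data[2], parity])  # lose drive 1, rebuild it
print(rebuilt == data[1])  # True
```

Doing this XOR on the drive rather than in the controller is what lets the controller stay simple and cheap, which is the economic point the article goes on to make.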

Some of the more important benefits from using Fibre Channel as the basis of a longitudinal array include:

Economy. The customer can decide the economy of the array, using five, eight, 18 or 24 data drives per parity drive with no additional controller cost.

Granularity. The granularity is a single drive. Customers need not add a whole row of drives as on a traditional array; they can increase the capacity of their subsystem by just the amount they require.

Performance. Because the controller has been significantly simplified, it is much less expensive than a multiple-drive-interface version. Customers can add controllers - and spread their drives across them - as they need more performance, but add drives only if they require extra capacity.

Remote online storage

Since the Fibre Channel interface is part of (and fully compatible with) the Fibre Channel standard, optical cabling can be used in any part of a subsystem, excluding the backplane. This makes it possible to have a disk subsystem quite a distance from the computer system to which it is attached. Using single-mode fibre optics, online disk storage could be as far as 10km away.

Fibre Channel is a generic, standard interface. It has multiple uses and supports many different protocols, such as SCSI, IPI-3 Disk and Tape Link Encapsulation, Internet Protocol and ATM. All of these can run on the same Fibre Channel facility. In fact, some of the first Fibre Channel disk drive host adapters will support both SCSI and networking protocols. It makes for a very attractive investment: install a network loop and get a 100Mb/s disk channel for free, or install a 100Mb/s disk loop and get a 100Mb/s network interface for free. Today's common discussions about the 10Mb/s or 100Mb/s limitations of Ethernet seem trivial compared with the reality of a network that runs at mainframe backplane speeds.

While there is as yet no software support for it, Network Attached Storage offers intriguing possibilities for the future and Fibre Channel makes it architecturally practical for the first time.

Wide area network

In a typical non-Fibre Channel network the storage devices would be attached to a file server which would service the data needs of all other attached systems. The complexity of the Fibre Channel standard - it covers so many protocols and runs so fast - may lead to the assumption that any implementation would be extremely expensive. The fact is that several recent developments have made it very economical.

Putting Fibre Channel on a disk drive will not be significantly more expensive than SCSI because the interface can be tailored to specifically run the SCSI FCP protocol. Using the SCSI protocol eliminates much of the complexity that would come with a comprehensive Fibre Channel implementation. In fact, the logic associated with Fibre Channel is only marginally more complex than today's SCSI disk drives.

Optical cable is usually thought to be the most promising medium for signalling at these frequencies. Recent developments in semiconductor chips have resulted in inexpensive 1GHz transmitters and receivers. These chips include all the logic needed to connect the inexpensive CMOS digital logic required for the rest of the Fibre Channel interface to a 1GHz serial link.

The memory speed needed to support 100Mb/s transfers concurrently with 20Mb/s-plus disk media transfers would have been a problem several years ago. The introduction of cache DRAMs has made it possible to implement full 100Mb/s data/cache buffers for the same cost as supporting fast/wide SCSI.

Compiled by Ajith Ram


This was first published in March 2000
