Gigabit Ethernet – Is it the future?

Gigabit Ethernet offers performance enhancement for existing networks without having to change the cables, protocols and applications already in use. But is ATM about to displace it?

The history

Ethernet was initially developed in the 1970s, and is now the most widespread network technology in the world. As a result of the standardisation of Gigabit Ethernet in June 1998, the scalability of Ethernet was again significantly improved. With a bandwidth of 1000Mbit/s (1Gbit/s), Gigabit Ethernet is 100 times faster than the original Ethernet.

As Gigabit Ethernet uses the same packet format and, in half-duplex mode (shared media), the same CSMA/CD network access method, the standard is compatible with Ethernet and Fast Ethernet. From today's perspective, Gigabit Ethernet will be used as the backbone technology in corporate networks. The technology is also particularly suited to increasing the data transfer rate between clients and server farms and to connecting Fast Ethernet switches.

One other area of application is linking workstations and servers with very high bandwidth requirements, as in image editing or CAD environments. Gigabit Ethernet is expected to be used above all in the more powerful full-duplex mode.

Today, Ethernet is synonymous with the IEEE 802.3 standard for a 1-persistent CSMA/CD LAN. The origin of the 802.3 standard can be traced back to the ALOHA network at the University of Hawaii, the forerunner of all shared-media networks. The original Ethernet developed by Xerox was likewise based on the ALOHA system. This first incarnation was a 2.94Mbit/s CSMA/CD system used to connect more than 100 workstations on a 1km-long cable. The system was so successful that Xerox, DEC and Intel developed it further, and in 1985 it was standardised as IEEE 802.3 with a data rate of 10Mbit/s.

In Ethernet, network access by all stations is controlled by the CSMA/CD method. When a station wishes to send data, it "listens" to the line. If no other station is currently transmitting, it may transmit its own data. If two stations start transmitting at the same time, this is recognised as a collision; both stations abort and retry later, each after a different, randomly chosen delay.
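
In outline, the access method looks like the following sketch (Python is used here purely for illustration; the medium object and its functions are hypothetical stand-ins for the physical layer, while the backoff constants are those of the 802.3 standard):

```python
import random

MAX_ATTEMPTS = 16   # 802.3 gives up after 16 attempts
SLOT_TIME = 512     # bit times; the unit of the backoff delay

def csma_cd_send(frame, medium):
    """Minimal sketch of 1-persistent CSMA/CD transmission.

    medium is a hypothetical object offering carrier_sense(),
    transmit() and wait() - stand-ins for the physical layer.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # 1-persistent: wait until the line is free, then send at once
        while medium.carrier_sense():
            pass
        if not medium.transmit(frame):   # returns True on collision
            return True                  # sent without collision
        # binary exponential backoff: wait 0..2^k - 1 slot times
        k = min(attempt, 10)
        medium.wait(random.randint(0, 2 ** k - 1) * SLOT_TIME)
    return False                         # excessive collisions - give up
```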

Initially, two types of coaxial cable were used, known as Thick Ethernet and Thin Ethernet. Later, the main type of cable used was unshielded twisted pair (UTP) copper cable originating from the field of telecommunications. When DEC, Intel and Xerox created what became known as the DIX Ethernet standard in 1980, 10Mbit/s was an enormous bandwidth. As computer technology progressed, however, the demand for more bandwidth in the network grew constantly, leading to the Fast Ethernet standard, IEEE 802.3u, in 1995.

The standardisation of Fast Ethernet was vigorously promoted by an industry consortium known as the Fast Ethernet Alliance. With the Fast Ethernet standard, conventional Ethernet gained 10 times the bandwidth, as well as new features such as full-duplex operation and auto-negotiation.

Fast Ethernet was defined for 100Mbit/s and opened the way for the scalability of the original Ethernet. Whereas traditional Ethernet networks operated in half duplex mode, full-duplex Ethernet technology was added with the introduction of Fast Ethernet. In full-duplex mode a station can simultaneously transmit and receive data - something that was previously impossible.

In May 1996, shortly after the IEEE announced the 802.3z Gigabit Ethernet standardisation project, 11 companies founded the Gigabit Ethernet Alliance: 3Com, Bay Networks, Cisco, Compaq, Granite, Intel, LSI Logic, Packet Engines, Sun, UB Networks and VLSI Technology. At the last count, more than 120 companies from the networking, computer and semiconductor industries were members of this association.

The proposal for the 802.3z Gigabit Ethernet specification had been submitted to the IEEE 802.3 committee for examination in March 1996. The IEEE approved a draft of 802.3z, currently the last standard in the series, in July 1997; final standardisation was achieved in June 1998.

The objective of the alliance was to create an open standard and interoperable products. This meant:

  1. Developing extensions for Ethernet and Fast Ethernet with a view to greater bandwidth.
  2. Drawing up technical proposals which were suitable for standardisation.
  3. Setting up processes and procedures for interoperability tests.

The physical basis of Gigabit Ethernet is provided by well-proven technologies from the original Ethernet and the ANSI X3T11 Fibre Channel specifications. The physical layer of the Gigabit Ethernet standard 802.3z was largely adopted from Fibre Channel, and supports transmission via multimode and single-mode optical fibre as well as twinaxial cable.

The standard for twisted pair cabling is expected to be finalised in late 1999; a separate project, IEEE 802.3ab, was set up for this purpose. Gigabit Ethernet will therefore be based on four physical media types, defined in 802.3z (1000Base-X) and 802.3ab (1000Base-T), as summarised below.
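
The four media types and their maximum link lengths can be summarised roughly as follows (the figures are the commonly quoted nominal values, not the normative tables of the standard):

```python
# Gigabit Ethernet physical media (nominal maximum link lengths).
GIGABIT_MEDIA = {
    "1000Base-SX": ("802.3z",  "multimode fibre, short wavelength",        "up to 550m"),
    "1000Base-LX": ("802.3z",  "single-/multimode fibre, long wavelength", "up to 5km (single-mode)"),
    "1000Base-CX": ("802.3z",  "twinaxial copper",                         "25m"),
    "1000Base-T":  ("802.3ab", "Category 5 UTP, 4 pairs",                  "100m"),
}

for name, (std, medium, reach) in GIGABIT_MEDIA.items():
    print(f"{name:12} {std:8} {medium:42} {reach}")
```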

Transmitting data

The 1000Base-X standard is based on the physical layer of Fibre Channel, a technology for connecting workstations, supercomputers, storage systems and peripheral devices. The architecture of Fibre Channel consists of five layers, FC-0 to FC-4; the two lowest, FC-0 (interface and medium) and FC-1 (encode/decode), are used in Gigabit Ethernet.

Ethernet uses a minimum packet size of 64 bytes. The reason for introducing a minimum packet size was that a station must still be transmitting when a collision at the remote end of the cable becomes detectable. The length of time required to detect such a collision is called the slot time. Looked at the other way, the amount of data that can be transmitted within the slot time is the slot size.

The maximum cable length with Ethernet is 2.5km, with a maximum of four repeaters in the path. If the bit rate is increased, the sending station transmits packets faster. With the same packet size and cable length, a short packet could then be transmitted completely before a collision at the far end could be detected and reported.

In order to obtain the same cable lengths, the slot time, and hence the minimum packet size, would have to be increased. Alternatively, to be able to use the same packet size, the maximum cable length would have to be reduced. In Fast Ethernet, the maximum cable length was reduced to approximately 205m, while the packet sizes and the slot time were left unchanged.

However, as Gigabit Ethernet is 10 times faster than Fast Ethernet, the maximum cable length would have had to be reduced to less than 20m. Instead, the slot size was increased to 512 bytes. To maintain compatibility with the existing minimum packet size despite this increase, what is known as the carrier extension was introduced: packets smaller than 512 bytes are padded with special symbols that cannot appear in the actual data.
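
The arithmetic behind these figures is straightforward: the collision-domain diameter scales with the slot time. A rough calculation, which ignores repeater and transceiver delays (the real limits, such as Fast Ethernet's 205m, are therefore somewhat lower):

```python
# The slot time determines the maximum collision-domain diameter:
# a sender must still be transmitting when the collision signal
# from the far end arrives, so the diameter scales with slot time.

VARIANTS = {                        # bit rate (bit/s), slot size (bits)
    "Ethernet (10M)":         (10e6,  512),
    "Fast Ethernet":          (100e6, 512),
    "Gigabit, 64-byte slot":  (1e9,   512),    # hypothetical: slot unchanged
    "Gigabit, 512-byte slot": (1e9,   4096),   # actual 802.3z choice
}

BASE_DIAMETER_M = 2500.0            # approx. diameter at 10Mbit/s
BASE_SLOT_S = 512 / 10e6            # 51.2 microseconds

for name, (rate, slot_bits) in VARIANTS.items():
    slot_s = slot_bits / rate
    diameter = BASE_DIAMETER_M * slot_s / BASE_SLOT_S
    print(f"{name:24} slot {slot_s * 1e6:6.2f}us -> diameter ~{diameter:6.0f}m")
```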

1) Carrier extension

The carrier extension was introduced to guarantee the interoperability of Gigabit Ethernet with existing 802.3 Ethernet networks. If packets are smaller than 512 bytes, extension symbols are added to them until the size of 512 bytes is reached. In this way the transmitter is able to detect any collision that occurs while the packet is on the wire. The extension symbols are removed from the packet at the receiver before the packet checksum (Frame Check Sequence - FCS) is verified. The logical level (LLC - Logical Link Control) is unaffected by the carrier extension.
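
In code terms, carrier extension is simple padding; a minimal sketch, in which a placeholder byte stands in for the real extension symbol (in reality a special code group that cannot occur as data):

```python
SLOT_SIZE = 512  # bytes; the Gigabit Ethernet slot size

def add_carrier_extension(frame: bytes) -> bytes:
    """Pad a frame to the slot size with extension symbols.

    A zero byte stands in for the real extension symbol here;
    only the length arithmetic is of interest.
    """
    EXTEND = b"\x00"
    if len(frame) >= SLOT_SIZE:
        return frame
    return frame + EXTEND * (SLOT_SIZE - len(frame))
```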

The use of a carrier extension meant that a means was found to remain compatible with the existing Ethernet packet sizes while retaining an acceptable cable length. However, this approach is inefficient and wastes bandwidth. A second technique, known as packet bursting, aims to counteract this disadvantage.

2) Packet bursting

When using a carrier extension with small data packets, up to 448 additional bytes may be sent per packet. The result would be a data throughput only marginally higher than with Fast Ethernet. Packet bursting was introduced to correct this disadvantage, at least in part. If a station wishes to send several packets, the first packet is padded according to the carrier extension method. The subsequent packets can then follow with the shortest possible spacing, the inter-packet gap (IPG). Packet bursting is limited to 8Kbytes of data.
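
The burst logic can be sketched in the same style (BURST_LIMIT corresponds to the 8Kbyte limit; the inter-packet gap handling is omitted for brevity, and medium.transmit() is again a hypothetical stand-in for the physical layer):

```python
BURST_LIMIT = 8192  # bytes; roughly the 8Kbyte burst limit

def send_burst(frames, medium):
    """Transmit several frames in one burst (half-duplex Gigabit).

    Only the first frame carries the carrier extension; as long as
    the limit has not been reached, further frames may follow after
    the minimum inter-packet gap.
    """
    sent = 0
    for i, frame in enumerate(frames):
        if i == 0:
            frame = add_carrier_extension(frame)  # see sketch above
        elif sent >= BURST_LIMIT:
            break          # limit reached; the rest needs a new access
        medium.transmit(frame)
        sent += len(frame)
```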

Carrier extension and packet bursting are only of relevance to half-duplex operation. Neither of these methods is necessary for full-duplex operation, to which considerably greater importance is attached.

Full-duplex mode, standardised in IEEE 802.3x, is an important element in the deployment of Ethernet at gigabit speeds. It allows network access and network extension without the limitations of the collision-prone CSMA/CD procedure. The channel capacity can be fully exploited and total throughput is increased.

Full-duplex mode

To be able to utilise full-duplex mode, a dedicated link is necessary between two nodes. Separate send and return circuits are implemented for this, as well as two MAC controllers, one at each end. The carrier sense and collision detect functions are deactivated, and the loop-back function, which in half-duplex mode feeds transmitted data back onto the receive path, is suppressed. In this way, a station is able to receive and transmit data simultaneously.
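
Schematically, switching a port to full duplex amounts to turning off the shared-medium machinery; the attribute names in this sketch are invented for illustration and do not correspond to any real driver interface:

```python
def set_full_duplex(mac):
    """Schematic view of what full-duplex operation changes in a MAC.

    mac is a hypothetical controller object; the attribute names are
    illustrative only.
    """
    mac.carrier_sense = False      # no need to defer to other stations
    mac.collision_detect = False   # collisions cannot occur on the link
    mac.loopback = False           # transmit data is no longer echoed
                                   # back onto the receive path
    mac.flow_control = True        # congestion is handled by 802.3x
                                   # PAUSE frames instead
```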

The use of full-duplex mode makes sense for switch-to-switch links, server and router links, and links over long distances. Nowadays in the Ethernet and Fast Ethernet field, even full-duplex links between workstations are no longer a rarity.

A full-duplex port provides buffer storage on the input and output paths in case the transmitting station sends more data than the receiving station can process at any one time. If the buffer store of a full-duplex port overflows, all subsequently arriving packets are discarded, and the repeated sending of the lost packets causes considerable overhead. Flow control for Ethernet was introduced to prevent this; the remaining effects of packet loss are handled by the protocols at the higher levels.
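
This flow control works with PAUSE frames, which a congested receiver sends to silence its link partner for a defined time. The field values below are those defined in 802.3x; the construction code itself is merely illustrative:

```python
import struct

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame (without the trailing FCS).

    pause_quanta is the requested pause time in units of 512 bit
    times (0..65535); a value of 0 cancels an earlier pause.
    """
    dst = bytes.fromhex("0180c2000001")        # reserved multicast address
    ethertype = struct.pack("!H", 0x8808)      # MAC Control EtherType
    payload = struct.pack("!HH", 0x0001, pause_quanta)  # opcode + time
    payload += bytes(46 - len(payload))        # pad to the minimum size
    return dst + src_mac + ethertype + payload

# Example: ask the link partner to pause for the maximum time
# (the source address here is an arbitrary example value).
frame = build_pause_frame(bytes.fromhex("02005e000001"), 0xFFFF)
```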

There are two bodies involved in determining the management capabilities of Ethernet: the IEEE, which defines the hardware-dependent specifications in the Ethernet standard, and the IETF (Internet Engineering Task Force), an interest group which concerns itself with problems relating to TCP/IP and the Internet. The specifications of the IEEE are usually found implemented in the hardware as counters or timers.

In contrast, the IETF occupies itself with the structure of the management information, as found in MIBs (Management Information Bases) for example. The outcome of this work is seen in such important standards as SNMP (Simple Network Management Protocol) and RMON (Remote Monitoring).

As was the case in the transition from Ethernet to Fast Ethernet, the management objects remain the same in Gigabit Ethernet. SNMP, for example, defines a standardised method of collecting and presenting Ethernet information at the device level. For this purpose, SNMP uses the corresponding MIBs to gather important statistics such as collision counters, the number of packets received or sent, error rates and so on. Further information can be collected by RMON agents and displayed in network management systems.
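
As an illustration, the sketch below polls a handful of standard counters for one port; the OIDs come from the standard IF-MIB and EtherLike-MIB, while the snmp_get helper is a hypothetical wrapper that any SNMP library could provide:

```python
# Standard OIDs (IF-MIB / EtherLike-MIB) that apply unchanged to
# Gigabit Ethernet ports; the port's ifIndex is appended to each.
COUNTERS = {
    "ifInOctets":         "1.3.6.1.2.1.2.2.1.10",
    "ifOutOctets":        "1.3.6.1.2.1.2.2.1.16",
    "ifInErrors":         "1.3.6.1.2.1.2.2.1.14",
    "dot3StatsFCSErrors": "1.3.6.1.2.1.10.7.2.1.3",
}

def poll_port(snmp_get, if_index: int) -> dict:
    """Read the basic statistics of one port.

    snmp_get is a hypothetical callable wrapping an SNMP GET request;
    any SNMP library can supply an equivalent.
    """
    return {name: snmp_get(f"{oid}.{if_index}")
            for name, oid in COUNTERS.items()}
```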

As Gigabit Ethernet also uses the familiar Ethernet packets, the same MIBs and RMON agents as in Ethernet can be used to offer management capabilities at gigabit speeds.

Applications of Gigabit Ethernet

Upgrading an Ethernet/Fast Ethernet network

The simplest way of connecting networks running Ethernet and Fast Ethernet to a Gigabit Ethernet backbone is to use Fast Ethernet switches, which make both 10Mbit/s and 100Mbit/s available at each port, together with Gigabit Ethernet switches, which allow a choice of 100 or 1000Mbit/s. In this way it is also very easy to integrate existing hubs and routers.

Upgrading an FDDI backbone or Token Ring network

For connecting FDDI (Fiber Distributed Data Interface) networks or installations using Token Ring it is advisable to use modular switches which allow switching between different technologies. In this way, the existing backbone structure can be segmented or parts of the backbone can be incorporated step by step into the Gigabit Ethernet backbone.

When FDDI components are transferred into Gigabit Ethernet, it must be ensured that existing redundant links are replaced with suitable Gigabit Ethernet links. If routers are used, it is very often possible to deploy VLAN-capable switches that exploit both the functionality of the routers and the performance advantages of the switches.

Upgrading a server-switch link

In most networks, servers are set up centrally in arrangements known as server farms. Because individual servers usually serve a large number of clients, bandwidth requirements tend to accumulate at this point. On the network side, Gigabit Ethernet is able to meet these requirements, although some additional important points need to be taken into account when selecting the network-to-server link.

When switching to Gigabit Ethernet for performance reasons, one should pay attention not only to network performance but also to the performance of the server system as a whole when selecting the server card. Above all, it is important that the interface between the network card and the motherboard - generally the PCI bus - is implemented with very high-performance ASICs, because the bus usage and CPU load generated by the network adapter determine the overall performance of the server system.
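
A back-of-the-envelope calculation shows why the host interface matters (the figures are theoretical peak rates; bus protocol overhead reduces them considerably in practice):

```python
# Full-duplex Gigabit Ethernet can carry up to 2000Mbit/s in total
# (1000Mbit/s in each direction); compare the theoretical peak
# throughput of the common PCI variants.

GIGE_FULL_DUPLEX_MBIT = 2000

PCI_VARIANTS = {                 # bus width (bits), clock (MHz)
    "PCI 32-bit/33MHz": (32, 33),
    "PCI 64-bit/33MHz": (64, 33),
    "PCI 64-bit/66MHz": (64, 66),
}

for name, (width, mhz) in PCI_VARIANTS.items():
    peak_mbit = width * mhz      # theoretical peak in Mbit/s
    verdict = "sufficient" if peak_mbit >= GIGE_FULL_DUPLEX_MBIT else "a bottleneck"
    print(f"{name}: {peak_mbit} Mbit/s peak - {verdict} for full duplex")
```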

Upgrading high-performance workstations

The integration of high-performance workstations into a Gigabit Ethernet network should be very easy if attention is paid to the points described above relating to overall performance. For cost reasons it is also possible to use full-duplex repeaters for this purpose.

ATM vs. Gigabit Ethernet

When ATM (Asynchronous Transfer Mode) was introduced, its speed of 155Mbit/s gave it roughly 1.5 times the bandwidth of Fast Ethernet. ATM was therefore ideally suited to new applications with large bandwidth requirements, such as multimedia. The consequence was that demand for ATM grew in both the LAN and the WAN market.

On the one hand, the ATM vendors attempt to emulate Ethernet networks by means of LAN emulation (LANE) and IP over ATM (IPOA), while on the other hand, the advocates of Ethernet make ATM-like functionality, such as RSVP (Resource Reservation Protocol) and RTSP (Real Time Streaming Protocol), available for Ethernet.

There is no question that both technologies have their desirable features and strengths, but it seems that the two widely differing technologies are constantly converging in terms of individual characteristics.

Whereas ATM originally came on the scene as a technology that could be deployed seamlessly from the LAN through the backbone and into the WAN, the reality today is rather different. The scalability achieved by Fast Ethernet and Gigabit Ethernet has once again made established Ethernet technology attractive, particularly in the field of the LAN and backbone.

As yet, however, most installed PCs and workstations are not capable of making use of the large bandwidths that both technologies provide. The scene of the contest has therefore shifted to the switches and server links in the backbone field, where Gigabit Ethernet appears to be well equipped. The swift approval of the standard and the speed with which interoperable and standard-compliant products have become available have helped Gigabit Ethernet to reach a good starting position.

ATM has the advantage of having been around longer. Although it is true that installed products do not yet offer gigabit speeds, faster versions are already in the pipeline. ATM is better suited to applications such as the real-time transmission of video signals, thanks to its implementation of QoS (Quality of Service), e.g. with CBR (Constant Bit Rate). Despite the efforts of the IETF, it will be difficult for Ethernet to close this gap.

The development of RSVP, which is intended to implement a kind of QoS on Ethernet, likewise has its limitations. It remains a "best-effort" protocol: although it can signal and acknowledge a QoS request, it cannot guarantee that the requested service is actually provided. With ATM, by contrast, data can be delivered with a defined delay time.

The greatest advantage of Gigabit Ethernet is its origin in the proven technology of Ethernet. Users can therefore assume that migration from Ethernet to Gigabit Ethernet will be very easy and transparent. Applications that operate via Ethernet will also work via Gigabit Ethernet. If a user wants to run today's applications over ATM, on the other hand, there is often a significant overhead involved in integrating them into the ATM layers. The fastest ATM products at the moment operate at 622Mbit/s; at 1000Mbit/s, Gigabit Ethernet is roughly 60 per cent faster.

At present, it is impossible to foresee whether one of the two technologies will prevail alone. More probably, the two will complement each other and continue to exist side by side.

So, to conclude, Gigabit Ethernet, the third-generation Ethernet technology with a speed of 1000Mbit/s, is fully compatible with existing Ethernet technologies and promises a seamless transition to higher speeds. This means a performance enhancement for existing networks without having to change the cables, protocols and applications already in use.

Ajith Ram