
White Paper: ATM enterprise switches

Effective computer networking is central to an organisation's ability to thrive. At the heart of the network is the enterprise switch: the system that powers enterprise backbone communications.

Today, more than ever, organisations of all types are relying on their core computing resources to help balance the demands of fast growth, competitive challenge and quickly changing technology paradigms. For these organisations, effective computer networking is central to their ability to thrive. And central to their network infrastructure is the enterprise switch, the system that powers enterprise backbone communications.

Many of these organisations employ Asynchronous Transfer Mode (ATM) as their backbone network technology. ATM supplies the bandwidth, scalability, fault tolerance and Quality of Service (QoS) needed to support even the most demanding applications and complex enterprise topologies. But to bring out the best in ATM backbone performance, the enterprise switch must add specific qualities to day-to-day operations. It should be capable of combining high port density and fast switching performance with ultra-high reliability and broad scalability. It should support popular ATM standards and possess substantial ease-of-management functions.

Enterprise demands

Companies depend on enterprise-class switches to carry their most demanding applications across the network backbone. This dependence is compounded by the newest dynamics in corporate computing and networking. Increasingly, business-critical applications are moving out of the glass office and into widely dispersed enterprise divisions, departments and remote offices. Enterprise network traffic patterns are changing, due in large part to the rise of intranet computing. No longer does an average 80 per cent of network traffic remain local without crossing the backbone. Today, when every workstation is potentially an intranet server, traffic patterns are highly unpredictable.

These factors add to the normal constraints and challenges facing IT and network managers: the need to support growing numbers of users, and newer, bandwidth-hungry and time-sensitive applications with limited budget and support resources. Taken together, they produce a challenging scenario for enterprise computing, particularly for the network's core switching resources.

While ATM delivers substantial backbone network benefits in performance, service differentiation and scalability/availability, it takes an extremely robust, enterprise-class switch architecture to help network managers satisfy the demands and adapt to the unknowns of enterprise networking today.

For instance, high switching performance is critical for data traffic, especially at the edge of the LAN where Ethernet traffic is converted for ATM backbone transmission. But specific QoS characteristics, such as multi-queue hardware, are also necessary for applying high-priority handling to latency-sensitive voice and video traffic within the switch.

Resilient, fault-tolerant operation is another critical challenge for ATM enterprise switches. Thousands of users may depend on the enterprise backbone at any given minute of the day or night; a failure here can bring untold difficulties to these users and the work they do.

Scalability is important because of the long potential life cycle of such a switch. Large backbone switches should be usable for five, 10, or more years, providing network stability as well as low cost of ownership. This means that an enterprise switch must be capable of keeping pace with exponential increases in user numbers and application types.

To ensure compatibility and interoperability in multi-vendor environments, and thus long-term investment protection, enterprise-class switches should support relevant ATM standards. Finally, these switches should be optimised to simplify configuration management and minimise network administration functions wherever possible. This benchmark can be a challenge with large, high-performance switch architectures.

Why ATM?

Asynchronous Transfer Mode represents one of the newest, and one of the longest lasting, networking protocols in use today. Developed in the 1980s as an end-to-end technology, ATM is seen today as the best means of integrating voice and video traffic with data transmission in the WAN, as well as in the LAN backbone and the MAN.

ATM's fixed-length cells allow for faster, simpler and more consistent switching than do the variable-length packets of other technologies. And ATM's connection-oriented nature, where end-to-end signalling sets up protected virtual circuits for all transmissions, is clearly a better means of transporting time and latency-sensitive voice and video traffic than are connectionless protocols.

For these reasons, and for its inherent scalability and high-availability characteristics, ATM has grown popular as an enterprise backbone technology. According to the Dell'Oro Group, a noted industry watcher, the worldwide market for ATM switches and concentrators grew at the rate of about 60 per cent in 1998.

For organisations around the world, ATM delivers significant benefits as a backbone technology. These include:

Differentiated Quality of Service. ATM's service classes offer a range of performance options. For instance, Constant Bit Rate (CBR) guarantees bandwidth delivery and so can support true business-quality voice and video. Other service classes, such as Available Bit Rate (ABR), give users greater economy for less demanding traffic.

Scalability. ATM is unmatched in its ability to scale in terms of speed, bandwidth and distance. ATM interfaces range from 1.544Mbit/s up to 622Mbit/s today, and will soon reach 2.4Gbit/s. ATM topologies can range from one to several thousand switches and can span building, campus, metropolitan and wide-area networks. And ATM networks are extensible over long distances. Connecting an ATM LAN directly to a SONET/SDH service or an ATM WAN provides virtually limitless extensibility.

Internetworking simplification. ATM simplifies internetworking and network administration by de-coupling the logical framework from the underlying physical infrastructure. Virtual segments, known as emulated LANs (ELANs), allow location independence for workgroups and internetworking devices, such as routers and Layer 3 switches, and they eliminate the need for the address changes conventionally required when users move to new locations.

High availability. ATM enables high-availability network designs in fully active, meshed networks, using resilient links with automatic re-routing in case of failure. With such powerful advantages, it's no wonder that ATM is the backbone of choice for so many enterprises.

LAN Emulation

LAN Emulation (LANE) is the ATM Forum specification for integrating traditional LANs with ATM networks. LANE works by encapsulating protocol datagrams into ATM cells, and mapping conventional Media Access Control (MAC) addresses to ATM addresses. LANE therefore ensures that the essential characteristics of Layer 2 LANs - such as connectionless transport, broadcasts and multicasts, and NIC-defined MAC addressing - interoperate smoothly with ATM's efficient, connection-oriented protocols. As a result, enterprises can run existing LAN applications while taking advantage of the benefits (flexibility, scalability, and availability) of an ATM backbone.
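The heart of LANE is this address-resolution step: a LAN Emulation Server answers LE_ARP requests that map a destination MAC address to the ATM address of the client serving it. A minimal sketch of that lookup, with invented addresses and simplified behaviour (a real LES also involves the broadcast server, the BUS, for unresolved destinations):

```python
# Illustrative sketch of LANE address resolution (LE_ARP).
# The class name, table contents and addresses are invented examples,
# not part of the LANE specification.

class LanEmulationServer:
    """Toy LES: answers LE_ARP requests from LANE clients."""

    def __init__(self):
        self.mac_to_atm = {}  # MAC address -> ATM address (hex string)

    def register(self, mac, atm_addr):
        """A LANE client registers its MAC/ATM pair when it joins the ELAN."""
        self.mac_to_atm[mac] = atm_addr

    def le_arp(self, mac):
        """Resolve a destination MAC to an ATM address, or None if unknown
        (real LANE would then fall back to the broadcast server, the BUS)."""
        return self.mac_to_atm.get(mac)

les = LanEmulationServer()
les.register("00:a0:c9:14:c8:29", "47.0005.80ffe100.0000f21a.3b61.01")
atm_addr = les.le_arp("00:a0:c9:14:c8:29")
```

Once the ATM address is known, the client sets up a direct virtual circuit to it; the connectionless LAN frame then travels over a connection-oriented path.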

Multi-protocol over ATM

The second major ATM Forum standard, Multi-Protocol over ATM (MPOA), extends LANE's functionality by allowing an edge switch to act as a Layer 3 forwarding device. MPOA centralises the path determination process on a route server and distributes the Layer 3 forwarding function to the edges of the ATM network. Once the destination is resolved, the edge devices communicate directly. MPOA is thus capable of cutting through the emulated LANs (ELANs) created through LANE. Cut-through routing, as it is sometimes called, is valuable in establishing direct connections between frequent users; it conserves network bandwidth resources and frequently improves response time. As valuable as they are, LANE and MPOA client operations are processor intensive.
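The division of labour MPOA describes (path determination on a route server, forwarding at the edge) can be sketched as follows. All names, addresses and the cache structure are illustrative assumptions, not MPOA protocol detail:

```python
# Sketch of MPOA-style cut-through: the edge device asks the route server
# once for the destination's ATM address, caches the answer, and forwards
# subsequent traffic directly over a shortcut VC. Addresses are invented.

class RouteServer:
    def __init__(self, routes):
        self.routes = routes            # dest IP -> ATM address of egress edge

    def resolve(self, dest_ip):
        return self.routes.get(dest_ip)

class EdgeSwitch:
    def __init__(self, route_server):
        self.route_server = route_server
        self.cache = {}                 # shortcut cache: dest IP -> ATM address

    def forward(self, dest_ip):
        if dest_ip not in self.cache:   # one-time path determination
            self.cache[dest_ip] = self.route_server.resolve(dest_ip)
        return self.cache[dest_ip]      # later packets take the shortcut

rs = RouteServer({"10.1.2.3": "47.0005.80ffe100.0000f21a.4c72.01"})
edge = EdgeSwitch(rs)
shortcut = edge.forward("10.1.2.3")     # direct VC established to this address
```

The cache is what makes cut-through pay off: only the first packet to a destination incurs the route-server round trip.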

Local SAR processing

Segmentation and reassembly of Ethernet packets and ATM cells is a mandatory function in any enterprise switch that handles both types of traffic. It can be a time-consuming process, since all Ethernet frames must be transformed into ATM cells for transmission through the ATM part of the switch or network. Likewise, the ATM cells must be transformed into the Ethernet frame structure before they can be put back onto the LAN. In switches with centralised SAR capabilities, SAR processing often creates a performance bottleneck in the switch fabric. The ATM interface from a central SAR device must, by its nature, be a User-Network Interface (UNI) port, and there is no specified method for load sharing on UNI ports.
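The mechanics of segmentation and reassembly can be sketched in a few lines. This toy version records the frame length out-of-band; real AAL5 instead appends an 8-byte trailer carrying the length and a CRC-32 before padding:

```python
# Simplified sketch of AAL5-style segmentation and reassembly.
# Real AAL5 adds an 8-byte trailer (length + CRC-32) before padding;
# this toy version carries the frame length explicitly instead.

CELL_PAYLOAD = 48  # bytes of payload per ATM cell

def segment(frame: bytes):
    """Split a frame into 48-byte cell payloads, zero-padding the last."""
    padded = frame + b"\x00" * (-len(frame) % CELL_PAYLOAD)
    cells = [padded[i:i + CELL_PAYLOAD] for i in range(0, len(padded), CELL_PAYLOAD)]
    return cells, len(frame)

def reassemble(cells, frame_len):
    """Concatenate cell payloads and strip the padding."""
    return b"".join(cells)[:frame_len]

frame = b"an example Ethernet payload destined for the ATM backbone"
cells, n = segment(frame)
restored = reassemble(cells, n)
```

Doing this work once, centrally, for every frame crossing the switch is what creates the bottleneck the paragraph above describes; distributing SAR to each interface module removes the single choke point.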

Support for ATM standards

Support is required for major ATM signalling, interface and traffic management standards. These include:

UNI 4.0 QoS signalling. UNI 4.0 covers the standard service categories defined by the ATM Forum - CBR, rt-VBR, nrt-VBR, ABR and UBR - and supports mechanisms for negotiating individual QoS parameters within those service categories. Through UNI 4.0 software, a switch is able to map the service categories to its multiple queue priorities.

PNNI 1.0 dynamic interswitch routing. PNNI 1.0 covers services that are critical to end-to-end networking, such as end-to-end QoS, hierarchical routing and application multicasting. Additionally, PNNI delivers important high-availability functions: it supports an active mesh topology, it can automatically re-route around failed links, and it enables automatic self-configuration.

ILMI 4.0 management exchange. Through support for ILMI 4.0, a switch can perform auto-discovery and configuration of interface management entities (IME) for bi-directional exchange of management information. ATM IMEs can be network, user or symmetric types; they reside in ATM end stations and switches, serving as termination points for the ILMI protocol.

LANE 2.0 services and clients. The latest version of LANE specifies new interfaces, a LANE User-Network Interface (LUNI) and a LANE Network to Network Interface (LNNI) that will bring about significant advances in LANE functionality and resiliency. LUNI provides QoS support for ELANs as well as data direct robustness in which virtual circuits remain active in case of a LANE services failure. LNNI enables standards-based, fault-tolerant and scalable ELANs with redundant load-sharing LANE services.

Traffic Management 4.0 support. The TM 4.0 standard was adopted by the ATM Forum in 1996 to set the key mechanisms necessary for achieving end-to-end QoS. The section below details some of the possible traffic management capabilities.

Traffic management capabilities

Connection Admission Control (CAC) is the process whereby two end stations determine whether network resources are sufficient for call set-up.
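In its simplest form, the admission decision compares the requested bandwidth against what the link has left. This is a deliberately reduced sketch (real CAC also weighs service category, burst tolerance and QoS targets); the class and figures are illustrative:

```python
# Toy Connection Admission Control: admit a call only if the link's
# remaining capacity can carry the requested rate. Real CAC also
# considers service category, burst tolerance and QoS guarantees.

class Link:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit(self, requested_mbps):
        """Reserve bandwidth for the call if it fits; refuse otherwise."""
        if self.reserved + requested_mbps <= self.capacity:
            self.reserved += requested_mbps
            return True
        return False

link = Link(155.0)          # an OC-3 interface
first = link.admit(100.0)   # fits: admitted
second = link.admit(100.0)  # would exceed capacity: refused
```

Refusing a call at set-up time is what protects the QoS of connections already admitted.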

Usage Parameter Control (UPC) is a means of policing connections for QoS compliance. To cut back overly assertive connections, some switches can support the "dual-leaky bucket" algorithm, which can be assigned and adjusted on a per-port, virtual-circuit or virtual-path basis. The algorithm functions as if it were letting data flow like water into a bucket with a small hole in the bottom - a leak. The permissible, pre-determined traffic flow would match the size of the leak and the bucket would fill up with over-limit traffic, waiting its turn. Beyond a certain point, however, additional cells overflow into the second bucket - which is actually a buffer - where they are marked as lower priority than the cells in the first bucket.
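The bucket metaphor above translates directly into a small simulation. All the parameters here are invented, and the model is a simplification of the real dual-leaky-bucket (GCRA-based) policer, but it shows the behaviour described: conforming traffic matches the leak, and overflow spills into a second bucket where it is tagged as lower priority:

```python
# Sketch of the "dual leaky bucket" idea: conforming cells drain through
# the first bucket at the leak rate; overflow spills into a second bucket
# (a buffer) as lower-priority traffic. Parameters are invented examples.

def police(cells_in_per_tick, leak_rate, bucket_size, ticks):
    """Return (high_priority, low_priority) cell counts after `ticks` intervals."""
    level = 0.0
    high = low = 0.0
    for _ in range(ticks):
        level += cells_in_per_tick
        if level > bucket_size:            # overflow -> second bucket, tagged low
            low += level - bucket_size
            level = bucket_size
        drained = min(level, leak_rate)    # conforming flow matches the leak
        high += drained
        level -= drained
    return high, low

# A source sending 12 cells/tick against a 10 cells/tick contract:
high, low = police(cells_in_per_tick=12, leak_rate=10, bucket_size=20, ticks=100)
```

The sustained 20 per cent excess ends up almost entirely in the low-priority bucket, which is exactly the "cutting back" of overly assertive connections the text describes.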

ABR flow control. Three congestion-control mechanisms bring incrementally stronger controls to ABR transmissions. First and most basic is Explicit Forward Congestion Indication (EFCI), which works by using a special header bit to alert network nodes of congestion and to trigger remedial actions through network protocols. Second is Relative Rate, which performs backward marking of a special cell (called Resource Management, or RM) as a means of alerting the source to slow transmission by some relative value. Third is Explicit Rate, which employs sophisticated algorithms to determine exactly how much the source should slow down.
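The Explicit Rate mechanism, the strongest of the three, can be sketched as each switch on the path marking the RM cell down to its own fair share, with the source adopting whatever survives the journey. The figures and function are illustrative, not taken from the TM 4.0 algorithms:

```python
# Sketch of Explicit Rate ABR flow control: every switch along the path
# lowers the Resource Management (RM) cell's explicit-rate field to its
# own fair share, and the source adopts the final value. Figures invented.

def forward_rm_cell(er_cells_per_s, switch_fair_shares):
    """Each hop marks the RM cell down to min(current ER, its fair share)."""
    for share in switch_fair_shares:
        er_cells_per_s = min(er_cells_per_s, share)
    return er_cells_per_s

# A source asking for full OC-3 cell rate across three switches:
source_rate = forward_rm_cell(353_207, [120_000, 95_000, 200_000])
```

The source slows to the tightest bottleneck's fair share, which is why Explicit Rate converges faster and wastes less bandwidth than the binary EFCI signal.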

Early packet discard/partial packet discard. EPD and PPD are ways of minimising waste when cells are discarded due to congestion. EPD and PPD make use of the ability of ATM Adaptation Layer type 5 (AAL5) to map higher-layer frames over the ATM cell structure. The instant a single cell is discarded due to congestion, EPD/PPD come into play to discard all other cells in the packet, thus reducing the waste of corrupted data frames. The difference between EPD and PPD has to do with how much of the frame is discarded: if packet discard is enacted with the first cell of a frame, it is EPD; if it happens at a point later in the frame, it triggers PPD.
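The EPD/PPD distinction reduces to where in the frame the first drop lands, which a few lines make concrete. This sketch follows the paragraph above; real implementations differ in detail (some PPD variants keep the frame's final cell as a delimiter):

```python
# Sketch distinguishing EPD from PPD: a drop at the first cell of an AAL5
# frame discards the whole frame up front (EPD); a drop mid-frame discards
# the remainder (PPD), since the leading cells have already been sent.

def apply_discard(frame_cells, drop_index):
    """frame_cells: the cells of one AAL5 frame; drop_index: first cell lost."""
    if drop_index == 0:
        return [], "EPD"                    # nothing transmitted: whole frame gone
    return frame_cells[:drop_index], "PPD"  # tail discarded; head already on the wire

kept, mode = apply_discard(["c0", "c1", "c2", "c3"], drop_index=2)
```

Either way, the network stops carrying cells that can only ever form a corrupted frame, which is the whole point.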

Weighted round-robin queuing. This is a means of shaping ATM QoS performance by assigning priorities to the switch's buffer queues. With multiple buffers, each VC connection can be assigned a specific queue; in the event of congestion, the switch will allocate bandwidth for each in turn via round-robin scheduling, but with an overlay of user-predefined priority - the weighted part.
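The scheduling loop itself is simple: visit every queue each round, but let a queue's weight decide how many cells it may send per visit. Queue names and weights below are illustrative:

```python
# Sketch of weighted round-robin queuing: each queue is visited in turn,
# but a higher weight lets it send more cells per round. The queues and
# weights here are invented examples.

from collections import deque

def wrr_schedule(queues, weights, rounds):
    """queues: dict name -> deque of cells; weights: dict name -> cells per visit."""
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            for _ in range(weights[name]):  # weight = quantum for this visit
                if q:
                    sent.append(q.popleft())
    return sent

queues = {"voice": deque(["v1", "v2", "v3"]), "data": deque(["d1", "d2", "d3"])}
order = wrr_schedule(queues, weights={"voice": 2, "data": 1}, rounds=2)
```

With a 2:1 weighting, the voice queue drains twice as fast under congestion, yet the data queue is never starved outright; that balance is what distinguishes weighted round-robin from strict priority queuing.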

Management features

Fundamental to operation of an enterprise backbone switch is a bullet-proof, highly resilient design that can deliver ultra-high system availability. Also critical is a switch management infrastructure that can facilitate set-up, day-to-day operation, maintenance and troubleshooting.

A three-way judging system can be used to query the fabrics' health with run-time diagnostics every 100 milliseconds, typically. If a failure in the primary fabric is detected by two out of three of the judges (primary and redundant switch engines and management controller), the system software can automatically cut over to the redundant module in less than a second, thus preventing application-session timeouts. And even in the extremely unlikely event that both fabrics fail, on-board switching engines in the frame-based interface modules can continue to switch intra-module traffic, despite the loss of the central fabrics.
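The two-out-of-three vote described above is a plain majority decision, which can be sketched in a few lines. The judge names and the cutover variable are illustrative, not vendor terminology:

```python
# Sketch of the two-out-of-three "judging": three monitors vote on the
# primary fabric's health, and a majority verdict of failure triggers
# cutover to the redundant fabric. Names here are invented examples.

def should_fail_over(failure_votes):
    """failure_votes: verdicts from the three judges (True = fabric failed)."""
    return sum(failure_votes) >= 2          # majority of three

judges = {
    "primary_switch_engine": True,
    "redundant_switch_engine": True,
    "management_controller": False,
}
active_fabric = "redundant" if should_fail_over(judges.values()) else "primary"
```

Requiring two concurring verdicts guards against a single faulty monitor forcing a needless cutover, while still reacting within one diagnostic cycle to a genuine fabric failure.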

Compiled by Geoff Marshall

© 1999, The ATM Forum



This was first published in October 1999

 
