Networks: Fast mover

The London Internet Exchange Point was forced into early adoption of 10 Gigabit Ethernet. Karl Cushing reports.

Switching to 10 Gigabit Ethernet is, for most organisations, about as high a priority as sweating the assets on their water coolers. The technology is still at an early stage - the Standards Board of the Institute of Electrical and Electronics Engineers (IEEE) Standards Association only ratified the IEEE 802.3ae specification for 10 Gigabit Ethernet in June - and there have been few takers for it so far. Most simply do not have the traffic to justify it. For some, however, the question of "should we or shouldn't we?" never arose. This was certainly the case for the London Internet Exchange Point (Linx) - Europe's largest Internet exchange point. At the end of last year, with its gigabit network creaking under the pressure of ever-increasing traffic, the decision was taken to make the jump, and the exchange became the first European organisation to install high-performance, standards-based 10 Gigabit Ethernet technology.

Waiting for the technology to mature was not an option, says Linx chief executive John Souter. "It was pretty much a no-brainer," he says. "Basically, we had no choice." In September 2001 Linx had to engage telecoms and network provider Thus to increase capacity after the amount of Internet traffic it was handling more than doubled, from 4Gbps the previous October to 9Gbps. Since then peak rates have risen further, to 18Gbps. As Souter explains, traffic is "somewhere between doubling and trebling every year".
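Souter's "doubling and trebling" claim can be sanity-checked against the figures quoted above. A quick back-of-envelope calculation (the traffic figures are from the article; the compound-growth annualisation is my own assumption):

```python
# Sanity check on the growth figures quoted in the article:
# 4Gbps in October 2000, 9Gbps by September 2001 (roughly 11 months later).
oct_figure = 4.0   # Gbps
sep_figure = 9.0   # Gbps
months = 11

# Monthly growth factor, then annualised assuming compound growth.
monthly = (sep_figure / oct_figure) ** (1 / months)
annual = monthly ** 12

print(f"implied annual growth factor: {annual:.2f}x")
```

The implied factor comes out at roughly 2.4x a year, squarely in the "between doubling and trebling" range Souter describes.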

Since its foundation in 1994 the exchange has gone through three major network overhauls, replacing switches and changing suppliers. "In the early days it was all about packet engine switches and 10Mbps and 100Mbps ports," says Souter. Before the conversion to 10 Gigabit, the Linx network was centred on four Gigabit Ethernet links joining its two main London Docklands sites, aggregated into a trunk group to form one large "pipe". This was an improvement on what had gone before, but managing load balancing over the four physical links was problematic and the exchange wanted more headroom to deal with spikes in demand.
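The load-balancing problem Souter describes is characteristic of hash-based trunking: link aggregation typically pins each flow to one physical link by hashing its endpoint addresses, so a handful of heavy flows can land on the same link and leave the trunk unevenly loaded. A minimal illustration of the mechanism (the hash function, addresses and flow rates here are hypothetical, not Linx's actual configuration):

```python
# Hash-based load balancing over a four-link trunk group, as commonly
# used in link aggregation. Each flow hashes to exactly one physical
# link, so per-link load depends on where the heavy flows happen to land.
import zlib

NUM_LINKS = 4

def pick_link(src: str, dst: str) -> int:
    """Pin a flow to one link by hashing its endpoint addresses."""
    return zlib.crc32(f"{src}->{dst}".encode()) % NUM_LINKS

# Hypothetical flows: (source, destination, rate in Mbps).
flows = [
    ("195.66.224.1", "195.66.224.2", 400),
    ("195.66.224.3", "195.66.224.4", 350),
    ("195.66.224.5", "195.66.224.6", 300),
    ("195.66.224.7", "195.66.224.8", 50),
    ("195.66.224.9", "195.66.224.10", 40),
]

load = [0] * NUM_LINKS
for src, dst, rate in flows:
    load[pick_link(src, dst)] += rate

print("per-link load (Mbps):", load)
```

Because a flow cannot be split across links, the only way to smooth out a skewed distribution is to widen each link - which is exactly what moving from four 1Gbps links to 10 Gigabit achieves.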

There were other options to consider. Souter says that at one stage the exchange was "quite serious" about switching to dense wave division multiplexing (DWDM), which made sense as the exchange has its own dark fibre. What swung it in favour of 10 Gigabit was the ease with which the exchange could upgrade and the fact that it could still use the existing fibre. Another important factor was cost. Souter explains that the exchange has budgets of "a couple of million a year" and DWDM would have been "hugely costly". Instead of upgrading the technology it would have had to replace it, creating more work and significant downtime - an issue that Souter says the exchange's members are understandably "extremely sensitive" about. As well as UK and other European members the exchange has members in Asia and the US, raising the question of what time of day to schedule downtime. Switching technologies would also have meant retraining staff.

But 10 Gigabit appeared to raise none of these problems, and Foundry, the existing supplier and provider of the switches, said it could make the transition with a minimum of fuss and downtime. "It was one of those occasions when we were looking for the pitfalls but there weren't any," says Souter.

The 10 Gigabit set-up was purchased through Foundry reseller TNS in January and tested by Foundry before it was implemented at Linx, although, as Souter explains, it was virtually impossible to simulate how it would be used in reality. Any nagging doubts were dispelled after it went live, however. As Souter says, "It went ridiculously well." Put simply, the network was taken down, the blades in the existing BigIron chassis were swapped and the network was turned back on - a process that resulted in about three minutes of downtime. That was in late February 2002.

Initially the 10 Gigabit network will be used to link the four core sites in Docklands which handle the most traffic - Telecity Mill Harbour, Redbus Harbour Exchange and the two sites in Telehouse. The rest are still connected at one gigabit. At present they do not need 10 Gigabit, but when the time comes the upgrade should be straightforward. "We don't do things just for the sake of it," says Souter, although he expects them to be upgraded by the end of the year.

Looking back, Souter believes that the decision to go with 10 Gigabit was the right one for Linx. "The advantages of what we did were overwhelming," he says. The extra capacity offered by 10 Gigabit has given the exchange some breathing space, although, with traffic increasing every year, Linx cannot really afford the luxury of sitting back and putting its feet up for too long.

Linx fact file
  • Founded 1994

  • Started out with five ISPs as members but now has over 120 members - chiefly ISPs and content delivery service providers - which effectively own it

  • Provides the physical interconnection for its members to exchange Internet traffic through co-operative peering agreements

  • Has nine sites in London but most of the traffic is handled by its four key sites in London's Docklands area

  • Chief executive John Souter describes the organisation as "mutual", promoting co-operation for the good of all its members

  • Has also recently taken on a role similar to that of a trade association for its members

  • Although Souter says it is "almost impossible to measure" the percentage of UK Internet traffic it handles, it is generally taken to be about 90%

  • Last year it handled 50 trillion packets of data (a typical data packet contains about 500 bytes)

  • Is the leading European exchange point. Souter says that the second largest, AMS-IX (the Amsterdam Internet Exchange), peaks at about one third of Linx's traffic

  • Charges a membership fee and members pay for using its ports on top of that - so the ones that use more pay more.
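The packet figures in the fact file imply a rough annual data volume. A quick calculation from the numbers quoted above (the arithmetic, not the figures, is my own):

```python
# Rough annual data volume implied by the fact file:
# 50 trillion packets a year at a typical ~500 bytes per packet.
packets = 50e12
bytes_per_packet = 500

total_bytes = packets * bytes_per_packet
petabytes = total_bytes / 1e15

print(f"~{petabytes:.0f} petabytes per year")  # ~25 petabytes per year
```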
