In the five years since the Web took off, the Internet has exploded at an incredible rate. It is easy to forget how packets of information get from one place to another across it, and to take for granted the technology that makes it happen. None of it would have been possible without the development of the Internet "backbone".
The backbone of the Internet evolved after the formation of the root network, called Arpanet, in the late 1960s. Other networks developed, such as the National Science Foundation's NSFnet, which was aimed at connecting academic institutions.
In 1990, the US government stopped civilian traffic running over Arpanet and routed it across NSFnet instead, upgrading the speed of the backbone to 45Mbps. In 1995, NSFnet was privatised, and that gave the backbone companies the chance to get in on the act. These days, the Government, academic institutions and some commercial suppliers are working on applications and equipment that will drive a second-generation Internet at much higher speeds. The National Science Foundation's Very High-Speed Backbone Network Service (vBNS) is being used to help support this activity.
The various private backbones in the UK and elsewhere connect together in large facilities called Internet exchanges.
In the UK, the most popular of these is Linx, the London Internet Exchange. Based in the Docklands area of East London, this facility connects together multiple backbones onto a high-speed link out of the UK, into Europe and through to the US. In the US market, there are similar facilities, although these have often been owned by single large backbone providers, in contrast to the UK, where Internet exchanges are non-profit organisations jointly owned by service providers.
US company WorldCom, for example, was awarded a contract by the National Science Foundation in 1993 to run the Internet exchanges connecting to its backbone, which were called Network Access Points.
So, how do your messages get onto the backbone? Generally, they will go via the local loop (either dial-up phone, ISDN, a leased line or, if you're really lucky and in BT's good books, ADSL) to a local point of presence (POP). From the POP, your traffic will travel along the backbone to Linx, or another Internet exchange, from where it will jump onto an international backbone.
If you are using a very large ISP, the chances are that you will get a better service from the backbone. Large carriers, such as BT, have their own backbones already laid. Using a company like this as an ISP guarantees that your traffic joins the backbone as early as possible in its journey across the Internet, and has better access to it once there.
The danger of partnering with a smaller provider is that they will be renting backbone bandwidth from a larger backbone owner, so their bandwidth on the backbone will be more restricted. This can present problems if the company grows more quickly than expected and ends up with congested traffic that can't all fit into the available bandwidth.
For a company running an e-commerce site, the throughput to the server becomes very important because it affects the number of customers that can be served at any one time.
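As a rough, back-of-the-envelope sketch of that relationship (all the figures below are hypothetical, not drawn from any real site):

```python
# Back-of-the-envelope sketch: how the throughput to a server caps the
# number of customers it can serve at once. All figures are hypothetical.

def concurrent_customers(link_mbps: float, per_customer_kbps: float) -> int:
    """Return how many customers a link can serve simultaneously,
    assuming each session needs a fixed slice of bandwidth."""
    return int((link_mbps * 1000) / per_customer_kbps)

# A 2Mbps leased line, with each shopper's session averaging 40kbps:
print(concurrent_customers(2, 40))    # 50 customers at a time

# The same site on a 100Mbps link at an Internet exchange:
print(concurrent_customers(100, 40))  # 2,500 customers at a time
```

The numbers are invented, but the shape of the calculation is why e-commerce operators care so much about where their servers sit relative to the backbone.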
One way to guarantee better throughput is to circumvent the local loop altogether and colocate the server at the Internet exchange itself, thereby placing it right on the backbone. Typically, this will give access to backbone speeds in excess of 1Gbps.
One of the biggest problems for backbone providers is that the world is moving towards a quality of service (QoS) model. In this scenario, Internet packets are differentiated according to their importance, and importance can be based on different issues such as the type of data being sent, or how much the customer is paying. If, for example, you are sending video down the line, you are likely to want your traffic to travel as quickly as possible along the backbone, so that the video arrives smoothly at the other end.
Because Internet data travels in packets, this has been difficult to achieve in the past. Data has been divided into packets, routed along different paths through the network and reassembled at the other end as the packets arrive, leading to scrappy delivery. Under a quality of service model, backbone providers will instead tag packets as they enter the edge of the network, then route them through the network with different priorities based on those tags.
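The tag-at-the-edge, prioritise-in-the-core idea can be sketched in miniature. This is a toy model, not any provider's actual implementation; the traffic classes and priority values are invented for illustration:

```python
import heapq

# Toy model of QoS tagging: an "edge router" tags packets by traffic type,
# and a "core router" forwards higher-priority packets first. The classes
# and priority numbers here are illustrative, not a real standard.

PRIORITY = {"video": 0, "voice": 0, "web": 1, "email": 2}  # lower = sooner

def tag(packet_type, payload, seq):
    """Edge of the network: attach a priority tag to each packet."""
    return (PRIORITY[packet_type], seq, packet_type, payload)

queue = []  # the core router's outbound queue
for seq, (ptype, data) in enumerate([("email", "e1"), ("video", "v1"),
                                     ("web", "w1"), ("video", "v2")]):
    heapq.heappush(queue, tag(ptype, data, seq))

# Packets leave in priority order: video first, email last.
sent = [heapq.heappop(queue)[3] for _ in range(len(queue))]
print(sent)  # ['v1', 'v2', 'w1', 'e1']
```

Even though the email packet arrived first, it leaves last: the video frames jump the queue, which is exactly the behaviour a video sender is paying for.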
This is all very well in theory, and is feasible when dealing with one network, but what happens when one backbone provider has to pass data to another, so that it can reach a destination on the other backbone? It is not guaranteed that one backbone provider will use the same quality of service mechanisms as another.
Consequently, several technology standards are in development at the Internet Engineering Task Force (IETF). Diffserv, for example, is designed to enable packets to be prioritised, as is Multi-Protocol Label Switching (MPLS), and hopefully these will be adhered to by the different backbone providers when they exchange packets in the future.
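Diffserv, for instance, works by writing a six-bit codepoint (the DSCP) into the top six bits of the old type-of-service byte in each IP header. The codepoint values below are the standard ones (46 is "Expedited Forwarding", the low-latency class typically used for voice and video), though the little conversion function itself is just an illustration:

```python
# Diffserv marks packets by writing a 6-bit codepoint (DSCP) into the
# top six bits of the IP header's TOS byte, so the byte value is the
# codepoint shifted left by two. DSCP 46 is "Expedited Forwarding",
# the standard low-latency class; DSCP 0 is plain best-effort.

def dscp_to_tos(dscp: int) -> int:
    """Convert a 6-bit DSCP value to the 8-bit TOS byte it occupies."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field")
    return dscp << 2

print(hex(dscp_to_tos(46)))  # 0xb8 - Expedited Forwarding
print(hex(dscp_to_tos(0)))   # 0x0  - best-effort
```

Because the mark travels inside the IP header itself, it survives the hop from one backbone to another; the open question the IETF standards address is whether the next provider honours it.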
In the meantime, QoS along the backbone is not guaranteed, especially when dealing with multiple backbone providers.
Be sure of one thing, however - when sending your packets down the line, they embark upon an incredible journey before reaching their final destination.
Like the contents of handbags, wallets and most car glove compartments, the traffic on the Internet always expands to fill the available space. While backbone bandwidth may be increasing exponentially, market research firm Datamonitor predicted in early 1999 that the volume of Internet traffic would surpass the volume of voice traffic this year. It also found that IP traffic had been rising at roughly 1,000% a year, compared with growth of under 10% for the public switched telephone network (PSTN).
[Figure: Predicted growth of Internet traffic]