The networking industry has come a long way since Ethernet was originally developed back in 1973. Back then, the boffins at Xerox PARC could squeeze data down the line at a paltry 2.94 megabits per second (mbps). These days, office managers can hook up 100mbps networks during lunch. It has taken almost 30 years to get this far, but the rate of change has accelerated, with data throughputs increasing exponentially and network features growing increasingly sophisticated. And the market is likely to get even more exciting in the next couple of years.
One of the most promising areas for development is wireless local area network (Lan) technology. The prevalent standard in this area, commonly referred to as Wi-Fi, is the IEEE 802.11b protocol, which communicates at 11mbps. The technology has taken off in a big way, with mobile market analyst Analysys predicting that there will be 21 million users of wireless Lans in the US by 2007.
At present, the 802.11b standard dominates the wireless Lan market and, worries about its wired equivalent privacy (WEP) security protocol notwithstanding, it is proving of interest to companies with staff on the move. The technology is also well suited to temporary venues such as construction sites.
Perhaps the most exciting area for wireless Lans is in public services. Fed up with waiting for 3G technology to arrive, and with high prices for the so-called "2.5G", general packet radio service (GPRS) technology, people may find public fixed wireless sites using Wi-Fi hubs to be a plausible alternative, especially as more suppliers begin to build the technology into their laptops. On 10 June, the Department of Trade and Industry announced proposals to change the Wireless Telegraphy Act to enable the 2.4GHz spectrum to be used commercially. As 2.4GHz is Wi-Fi's stomping ground, this opens up the possibility for companies to offer high-speed access in public places like airports and town centres.
BT is already making plans for such services. The company has announced its intention to roll out 4,000 such wireless hotspots in large conurbations by 2005 and it is already starting to deploy them. They will provide similar facilities to existing services in the US. Starbucks, for example, has teamed up with a third-party operator to provide wireless access in its outlets, so that you can sip your latte while surfing on your laptop.
There are other standards emerging in the wireless market, too: 802.11a is the successor to 802.11b and offers data transfer rates of up to 54mbps, five times faster, although the top speed is optional - the specification only mandates 24mbps throughput. The downside is that it operates in the 5GHz band and will not be compatible with its predecessor, so the market may split as the technology is introduced, with consumer, low-end and public access on one side, and higher-speed corporate Lans on the other.
The European Telecommunications Standards Institute is pushing Hiperlan/2, a European alternative to 802.11a operating in the same space, which is likely to fragment the market still further. And if that protocol spaghetti is not confusing enough, consider 802.11g, which operates in the same spectrum as Wi-Fi but with at least 20mbps throughput. Clearly, as with many emerging technologies, there will be some considerable market churn before dominant standards emerge.
Meanwhile, the tension between the wireless Lan and 3G markets will be one of the biggest annoyances for end-users, and a source of potential opportunity for network companies to sell solutions, according to Neil Dipple, IP development manager at IT services company NextiraOne. "We have yet to see a device that does 802.11b, 3G and Voice over IP (VoIP). It is something that everyone is talking about but 3G is holding that up. That is where it has got to go."
The VoIP part would be easy - 2.5G and 3G specifications are packet-based, so IP-based voice packets would pass over them like any other traffic, just as they will with the 802.11 standards. But the idea of a device that handles all three, and which switches from one to the other seamlessly, raises the issue of convergence.
Mark Darvill, director of technology at network consultancy and managed services firm Logical, argues that we have moved beyond the idea of convergence as simply putting voice and data down the same line. "It is about putting a 'shim' between your corporate back-office applications and mobile converged devices," he says, "whether it is an IP phone on a desk, or something else."
Examples of convergence include making your GSM or GPRS mobile device part of the corporate PBX, then using it to take IP-based calls while accessing VoIP-based voicemail and e-mail through a unified messaging system. However, the number of companies doing this in the UK is relatively low, not only because of the business climate but also because few people are willing to rely on notoriously unreliable data networks to carry voice signals, argues Johnny Rollett, technical director for network consultancy Unified Networks.
The other barrier to truly transparent convergence is network roaming. Standards for roaming between Wi-Fi hotspots have yet to be defined, and when you fold roaming between wireless Lans and cellular networks into the mix, things get complicated, not least because of vested operator interest, says Darvill. "The operators do not want the free roaming thing to happen. You can understand from their point of view that they want to do everything possible to ensure that a smart phone can't roam onto a wireless Lan," he says. With network operators battling uphill to increase mobile data revenues, the last thing they want to do is co-operate with fixed wireless operators, unless they happen to be run by the same company, in which case some synergy may be possible.
The one thing that may propel the VoIP market over the next two years is the session initiation protocol (Sip). This is a lightweight protocol for IP-based conferencing, and covers areas including voice and video. It is expected to replace the heavier H.323 protocol, which has been "interpreted" by many suppliers, resulting in a lack of product interoperability. Sip will level the playing field, but not entirely, says Dipple, who explains that suppliers are coming out with their own interpretations of the standard. But at least there is less room to manipulate it if it is a simpler specification, and there is a greater chance of interoperability.
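Part of Sip's appeal is that, like HTTP, it is a lightweight plain-text request/response protocol, which leaves suppliers less room for divergent interpretation than H.323's binary encoding. As a rough illustration (the addresses, tags and branch values below are invented, not taken from any real deployment), a minimal Sip INVITE request could be assembled like this:

```python
# Sketch of a minimal SIP INVITE request, showing that SIP is a
# plain-text protocol much like HTTP. All addresses, tags and branch
# identifiers here are invented for illustration only.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble the start line and mandatory headers of a SIP INVITE."""
    headers = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776asdhds",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # Header lines are separated by CRLF; a blank line ends the message.
    return "\r\n".join(headers) + "\r\n\r\n"

msg = build_invite("alice@example.com", "bob@example.net", "a84b4c76e66710")
print(msg)
```

Because the whole exchange is readable text, interoperability disputes come down to header semantics rather than wire-format decoding, which is one reason a simpler specification leaves less to "interpret".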
While VoIP suppliers use Sip to change what is sent across the pipe, others are more interested in changing the nature of the pipe itself. Higher-speed networking technologies continue to compete in the market, especially in the area of storage area networks (San). Several standards have emerged to battle it out for supremacy in this area, the most notable of which is IEEE802.3ae (10 Gigabit Ethernet is its catchier name). Ratified as a standard on 17 June, 10 Gigabit Ethernet provides lightning-fast throughput, as its name suggests, and suppliers are capitalising on its ability to communicate over long distances, targeting it at metropolitan area networks as well as Lans and Sans. The big downside for the 802.3ae standard is that it relies on fibre-optic cable, which can be expensive. But for server clustering and Sans it represents a good investment.
It may even steal the thunder of Infiniband, a rival storage networking technology backed by companies including Dell, HP and IBM. Infiniband, however, runs over copper and offers speeds of up to 48 gigabits per second (gbps), according to data released by Infiniband player Mellanox Technologies. Suppliers also have Fibre Channel, another technology, which offers up to 2.1gbps.
One thing is certain: whichever technology companies choose, the majority of customers are unlikely to implement them immediately. A recent report by New York-based technology network TheInfoPro, which surveyed 152 storage professionals, found that adoption of both iSCSI (a means of passing SCSI commands over IP networks for Internet-connected storage) and Infiniband was between one and two years further out than expected.
All these technologies have one thing in common: they are supplier-driven standards, foisted onto a cautious corporate customer base. They are potentially useful, but cash-strapped users are understandably conservative, especially as the various standards in each market vie for dominance. Conversely, it is refreshing to see some standards adopted organically through genuine market need. A good example of this is the IPv6 protocol, developed by the Internet Engineering Task Force, which is making considerable headway in the Far East. The documentation on the protocol, which supersedes IPv4, explains the primary benefits. These include an increase in address size from 32 bits to 128 bits, supporting more levels of addressing hierarchy and a far greater number of addresses.
Increased Internet addresses are important, especially for Asian users, according to Uri Rahamim, vice-president of global sales and marketing for Hitachi Internetworking. "Each IP user needs a unique address for it to work and be connected. What happened is that 70% of the addresses were allocated to North America and 30% to the rest of the world," he explains, lamenting the disenfranchisement of countries that are experiencing rapid technological growth but which are constrained by a lack of Internet resources. "Asia and Europe are therefore at the forefront of pushing IPv6 because they are first at risk of running out of addresses. If you go to Korea and ask for 'A' class addresses, there are none to be found."
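The scale of the jump from 32-bit to 128-bit addressing is easy to demonstrate. The snippet below is a simple illustration using Python's standard ipaddress module; the 2001:db8::/48 prefix is the reserved documentation range, not a real allocation:

```python
import ipaddress

# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32    # about 4.3 billion addresses in total
ipv6_space = 2 ** 128   # about 3.4 x 10^38 addresses in total

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:,}")
print(f"IPv6 offers 2^96 times more address space")

# The larger space also allows hierarchical allocation: even after a
# site receives a /48 prefix, 80 bits remain for subnets and hosts.
site = ipaddress.IPv6Network("2001:db8::/48")  # documentation prefix
print(f"/64 subnets within one /48 site: {2 ** (64 - 48):,}")
print(f"Interface addresses per /64 subnet: {2 ** 64:,}")
```

The point of the arithmetic is not that anyone needs 10^38 addresses, but that allocation can be generous and hierarchical without the rationing that produced the regional imbalance Rahamim describes.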
The other benefits of IPv6 include simplified header fields, which speed up processing and limit bandwidth costs, and a feature that enables packets to be labelled according to the stream of traffic they belong to, to ensure quality of service. Although translation facilities exist for relaying IPv4 packets into IPv6 infrastructures, telecommunications carriers have to implement IPv6 support in the core of their networks, and client-side hardware and software must also support it, if it is to take off.
IPv6 has to catch on if the Internet is to continue growing, and the extra addresses also open up possibilities for peer-to-peer computing. Grid computing takes peer-to-peer file sharing a stage further, connecting computing resources from across the network to share processing tasks in a digitised swarm.
Grid computing has the potential to offer huge computing benefits as it utilises spare power on both clients and servers, but it is likely to be a longer-term bet for networking futurists. Although the grid concept is backed by reliable players like IBM, customers will nevertheless be nervous about the security implications of spreading processing tasks throughout the organisation, and they will also want to ensure that the network and management overhead will not be counterproductive.
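At its simplest, the grid idea is divide-and-conquer: split a job into independent chunks, farm them out to whatever processors are available, and combine the results. A toy sketch of that pattern follows, with local worker processes standing in for the networked machines a real grid would use (the job itself, summing squares, is chosen purely for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def sum_squares(chunk: range) -> int:
    """Each 'node' computes its share of the work independently."""
    return sum(n * n for n in chunk)

def split(start: int, stop: int, parts: int) -> list[range]:
    """Divide the interval [start, stop) into roughly equal chunks."""
    step = (stop - start + parts - 1) // parts
    return [range(i, min(i + step, stop)) for i in range(start, stop, step)]

if __name__ == "__main__":
    # Farm four independent chunks out to four worker processes,
    # then combine the partial results.
    chunks = split(0, 1_000_000, parts=4)
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(sum_squares, chunks))
    print(f"Sum of squares below 1,000,000: {total}")
```

A real grid replaces the local process pool with scheduling across machines, which is exactly where the security and management overheads that worry customers come in: every chunk of work and every result has to cross the network.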
The next two years will be exciting for the network industry, although the excitement will doubtless be tempered by the current slow economy, which will take a while to pick up speed and will discourage customers from embarking on extravagant projects. The slump in the telecoms market will be a particular hindrance. Nevertheless, even with a relative lack of cash, the sector is showing just how innovative it can be.