Danny Bradbury, in the second part of our series, peers into the future of IT
Along with processing power, networking has been one of the quickest-moving areas of the computing sector. Both fields have been concerned with processing raw data in volume, and those volumes have been rapidly increasing. Meanwhile, the space needed to process that data and the time taken to do so have been dropping exponentially.
A 1994 documentary concerning the future of technology, called Visions of Heaven and Hell, showed the head of BT's Martlesham R&D laboratory, Peter Cochrane, holding up a 1930s telephone trunk cable the width of his arm. He held up a second, thinner one from the 1960s, containing a honeycomb of individual cables, each of which contained the capacity of the 1930s cable many times over. Finally, he held up a fibre-optic cable not much wider than a human hair, which had a capacity millions of times that of its 60-year-old ancestor.
This capacity growth curve is set to continue, but not because of any great change in the type of cable used. Rather, companies are making quantum leaps in the equipment used to push data down the fibres. One of the biggest barriers to data transfer speed in the past has been the need to convert between optical (photonic) and electrical media along the way. Packet-switched systems such as the Internet Protocol (IP) need to be routed or switched at various points along the fibre network, like envelopes moving in and out of postal sorting houses.
Historically, even when transmitted as a light beam along a fibre cable, packet-based data has had to be converted to an electrical signal whenever it hits a router or switch, so that it can be read and processed. The disparity between the speed of optical transmission (which operates at the speed of light) and electrical processing means that these switches have introduced a huge bottleneck into the proceedings.
Now companies are on the verge of introducing optical switches that can intercept packets coming across the fibre, process their destination data and send them on their way - all without converting them into electrical signals. It is literally all done with mirrors; lots of very small mirrors that redirect light beams representing data streams. Lucent is one company making excellent headway in this area. The firm plans to ship the first of its micro-electro-mechanical systems (MEMS)-based optical switches this year, and has signed a contract potentially worth $100m (£66m) with Time Warner Telecom. The early adopters are up and running.
What this means for consumers and business users alike is simply more of the same. It will bring faster data transmission to the carriers that eventually use it, meaning more bandwidth.
This is why the introduction of more intelligent networking services will be so important to data and telecommunications providers over the next few years. A quick glance at the share prices of the major carriers shows that their traditional business - making profits on simply pumping information through their lines - is under threat. The switch in emphasis from brawn to brain has started in the networking world, and it will filter through to the corporate user market too.
Quality of service will be an issue for carriers and enterprises alike in the networking world in the next few years, especially as newer technologies such as voice over IP (VoIP) become popular. The decreasing cost of bandwidth will encourage firms and carriers to over-provision their networks, meaning that quality of service technology may not be adopted as quickly as some would hope, but it will nevertheless be a way to maximise network efficiency.
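At its simplest, quality of service means that time-sensitive traffic such as voice gets served ahead of bulk data when a link is busy. A minimal sketch of that idea - a strict-priority scheduler with invented class names and priority values, not any particular vendor's implementation - might look like this:

```python
import heapq

# Illustrative sketch only: a strict-priority scheduler of the kind a
# QoS-aware device might use. The traffic classes and priority numbers
# here are hypothetical.

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker preserves arrival order within a class

    def enqueue(self, packet, priority):
        # Lower numbers are served first: 0 = voice, 1 = business data, 2 = bulk.
        heapq.heappush(self._queue, (priority, self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue("bulk-transfer", 2)
sched.enqueue("voip-frame", 0)
sched.enqueue("email", 1)

order = [sched.dequeue() for _ in range(3)]
print(order)  # voice frame comes out first, bulk transfer last
```

On an over-provisioned network the queue rarely builds up, which is exactly why many firms may choose extra bandwidth over scheduling sophistication.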
But this will only be the start of smart networking. Policy-based networking built on directory standards has already been standardised thanks to the efforts of the Distributed Management Task Force and, while uptake is still relatively slow, increasing awareness of directory services such as those provided by Novell and Microsoft will see this market grow at a moderate rate in the next few years.
Networks promise to get even more intelligent in future. One possible development outlined in the PricewaterhouseCoopers 2000 Technology Forecast is the creation of active networks. The report argues that as core networking technology gets faster, carriers will have more processing power at their disposal. Currently, routers and switches only process the header destination information. In future, it proposes that programs could be included in the packets, making the very data itself smart.
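The difference between today's routers and the active-network proposal can be sketched in a few lines. In this toy illustration - the instruction set and field names are invented, not taken from any real active-network system - each packet carries a small program that every node executes, rather than the node merely reading a static destination header:

```python
# Toy sketch of the "active network" idea: the packet itself carries a
# program run at each hop. All instructions and state fields are invented
# for illustration.

def run_on_node(packet, node_state):
    """Execute the packet's embedded program against this node's state."""
    for instruction in packet["program"]:
        if instruction == "drop_if_congested" and node_state["queue_depth"] > 100:
            return None  # the packet discards itself on a congested node
        if instruction == "record_route":
            packet["route"].append(node_state["name"])
    return packet

packet = {"program": ["record_route", "drop_if_congested"], "route": []}
for node in [{"name": "A", "queue_depth": 10}, {"name": "B", "queue_depth": 5}]:
    packet = run_on_node(packet, node)

print(packet["route"])  # the packet has recorded its own path: ['A', 'B']
```

Because the behaviour travels with the data, new network functions could in principle be deployed without upgrading every router - the appeal, and the security headache, of the approach.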
This work, which has been in progress at the US Defense Advanced Research Projects Agency and the Massachusetts Institute of Technology since the late 1990s, could open up the possibility of self-healing networks, a holy grail envisaged by industry veteran and technical guru George Georgiou, product manager at Siemens Network Systems.
Virtual Lans that can be set up from a central console by the administrator are already a reality, but Georgiou foresees a world of transient domains, where the network can automatically set up and tear down domains based on users' behaviour. "It is about interrogating all of the network layers from one to seven and readjusting the network continuously," he says. "Transient networking will happen in the next three or four years. People are already starting to talk about it."
Much of this functionality will not be immediately apparent to end-users, who might not even notice the increased performance. The real added value for network users will arrive with the development of enhanced services. VoIP firms are already talking about the concept of self-provisioning using IP. For example, a manager within a company using a VoIP service could fire up a browser and provision a new telephone number for a temp who was only due to work in the office for two weeks. This number, which would also have its own voicemail box and IP fax facility, could be removed from the system when the temp finished working there.
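The self-provisioning scenario above reduces to a simple lifecycle: create a number with its services and an expiry, then remove it when the engagement ends. This sketch is purely hypothetical - the class, fields and extension number are invented to illustrate the shape of such a service, not any real VoIP product's interface:

```python
from datetime import date, timedelta

# Hypothetical sketch of VoIP self-provisioning: a manager creates a
# temporary number with voicemail and an expiry date, and the system
# later purges it. All names and fields are invented for illustration.

class VoipDirectory:
    def __init__(self):
        self.numbers = {}

    def provision(self, extension, owner, weeks):
        self.numbers[extension] = {
            "owner": owner,
            "voicemail": True,
            "expires": date.today() + timedelta(weeks=weeks),
        }

    def purge_expired(self, today):
        # Keep only numbers whose expiry date is still in the future.
        self.numbers = {ext: rec for ext, rec in self.numbers.items()
                        if rec["expires"] > today}

directory = VoipDirectory()
directory.provision("4021", "temp worker", weeks=2)

# Three weeks later, the temp's number has expired and is purged.
directory.purge_expired(date.today() + timedelta(weeks=3))
print("4021" in directory.numbers)  # False
```

The point of the browser front-end in the scenario is simply that this lifecycle is driven by the manager, not by a carrier's provisioning desk.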
Similar enhanced services could be provided if the concept of pervasive computing ever takes off. The idea of a seamless network in which a wide variety of non-PC resources are connected would be the most relevant to end-users.
There are a number of different pervasive computing software technologies in the offing, but one of the earliest ones to garner significant media coverage was Jini, a software technology under Sun's Java standard. Using Jini, companies can produce devices - eg a storage mechanism, PDA, telephone or printer - that declare themselves on the network. Any other Jini-enabled device will then recognise them and be able to use their services.
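Jini itself is a Java technology, but the discovery pattern it embodies is language-neutral: devices announce their services to a lookup registry, and any peer can then find and use them. This Python sketch illustrates only that pattern, with invented device names, and is not Jini's actual API:

```python
# Language-neutral sketch of the Jini-style discovery pattern: devices
# declare themselves to a lookup service; peers query it by capability.
# Device and capability names are invented for illustration.

class LookupService:
    def __init__(self):
        self._services = {}

    def announce(self, name, capability):
        self._services[name] = capability

    def find(self, wanted):
        return [name for name, cap in self._services.items() if cap == wanted]

registry = LookupService()
registry.announce("hall-printer", "print")
registry.announce("desk-phone", "telephony")
registry.announce("meeting-room-printer", "print")

print(registry.find("print"))  # both printers are discovered by capability
```

In the real Jini model the lookup service also hands back the code needed to drive the device, which is what makes the network genuinely plug-and-play.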
The Universal Plug and Play consortium, organised by Microsoft, is also working towards a pervasive computing environment with its own standard. Last summer Microsoft also introduced its software-focused .net initiative, which will underpin its computing strategy for the next few years. Apart from covering new user interface elements such as voice recognition, the framework will also make it possible to access information and applications over the Internet from different devices, in an application service provider-type model based on the notion of software services.
Such pervasive computing technologies will no doubt be assisted by the introduction of both personal area network and data-rich third generation (3G) cellular services. These will enable users to interact with other devices wirelessly - and hopefully seamlessly - when used in conjunction with the software protocols. Bluetooth devices are still thin on the ground despite the ratification of the standard, which enables devices to exchange data with others up to 10m away.
The 3G cellular standards and the intermediate 2.5G technologies that are just coming into play now are complex. The General Packet Radio Service and High Speed Circuit Switched Data technologies have both already been commercially rolled out by companies including Orange in the UK. 3G standards under the IMT-2000 umbrella put in place by the International Telecommunication Union will start to be implemented around 2002, but there will be a sluggish uptake of data services running over these networks.
Look for two major trends in the next five years - expanding data throughput at lower prices and the introduction of enhanced services, mainly over IP.
A day in the life of
Colin, a sales manager for Anycorp, is stuck in the car on the M25. Luckily, unlike motorway traffic, data network congestion is a thing of the past, and he does not have to be in his office to pick up his e-mail either. Firing up his 3G-enabled PDA, he accesses his e-mail over the cellular network and finds that a client wants to meet him at 3pm that day. He logs on to his .net-enabled online diary and finds that his secretary has already scheduled a meeting at that time. He calls the client using the PDA - which also doubles as a smart phone - and conducts an IP-based videoconference with him. The client tells Colin that he needs an estimate delivered immediately or he will award the contract to a competitor.
Colin uses his PDA to access the office network and attempts to run a database query to produce the necessary estimates which will be compiled into a lengthy word processor document on his PDA. Unfortunately, the network is under extreme strain because of a router fault, and not much traffic is getting through. He calls the network manager, who is able to elevate Colin's priority on the network. Colin's packets get through to the PDA, and he mails the document to the client, sealing the deal.
Forecast revenues for mobile portals and ASPs
|Mobile Portal Revenue||2000 $m||2001 $m||2002 $m||2003 $m||2004 $m||2005 $m|
|Wireless ASP revenue|
Source: Analysis 2000

Worldwide mobile users (in millions)
|Year end Dec||2000||2001||2002||2003||2004||2005|
|Total mobile users||655||800||914||1,012||1,081||1,142|
|Mobile Internet users||35||94||198||316||440||613|
|Other mobile users||619||709||719||696||641||529|
Source: Analysis 2000

Geographical spread of mobile users (in millions)
|Year end Dec||2000||2001||2002||2003||2004||2005|
|Africa and Middle East||33||45||55||65||72||79|
Source: Analysis 2000
Internet 2: the future for high-speed access?
Internet 2 is a project to explore various applications that can be routed over high-speed networks. Started in 1996 by 34 universities, the initiative focuses on areas such as immersive virtual reality, in which groups of people interact with each other in cyberspace. Other applications include digital libraries containing multimedia information and a virtual laboratory in which computing tasks are farmed out over the network to different machines in remote locations.
There is no real commercial use for the Internet 2 at present. For one thing, the networks required to run these applications are not in place outside the military or academic communities yet, although projects like Abilene and the Next Generation Internet project - high-speed networks from commercial partners - look promising. With optical networking on the way, it will only be a matter of time before high-performance network applications make their way into the commercial world.