Decisions about the openness of the internet and the technology behind it will affect all web users
The next couple of years will determine whether the internet remains open while acquiring the heavier-duty infrastructure that all of us – businesses, consumers, government – have come to expect of it, or whether it is hijacked by private interests.
Several key technological choices will have to be made by internet users, IT suppliers, standards organisations and governments. Those choices will determine the shape of the internet for the next decade and how fast new applications develop and grow.
The US Commerce Department and the Internet Corporation for Assigned Names and Numbers (Icann) recently renewed their agreement over the administration of the internet domain name system (DNS).
The agreement establishes guiding principles for managing the DNS. The US government has agreed to private management of the DNS, with the government’s only involvement being to provide a “backstop” in extraordinary circumstances.
Alongside those discussions, one of the greatest debates over the future of the internet has been building for the past year. The battle over “net neutrality” has had both sides issuing dire predictions about the consequences if their side doesn’t prevail.
Net neutrality is the principle that internet users can go to any legal website, run any legal web application and attach any legal device to the network without restriction by their internet service provider (ISP).
The situation has been driven by a decision from the US Federal Communications Commission (FCC) to exempt digital subscriber line (DSL) providers from so-called common carrier rules, which require US telecoms carriers to allow voice and data traffic from any company that wants to get on their networks.
Some claim that this exemption could allow cable modem and DSL providers to shut out competing ISPs, potentially limiting most US residents to one or two broadband providers.
Such an approach has left internet pioneers such as Vint Cerf, co-designer of TCP/IP, the internet’s underlying protocol suite, and a vice-president at Google, deeply unhappy. “This does not constitute a competitive environment,” he says, pointing to comments by some broadband providers that companies such as Google are getting a “free ride on their pipes”.
The proponents of net neutrality legislation, who include both Microsoft and Google, argue that such laws are needed to protect consumers, promote competition, maintain internet innovation and prevent ISPs from blocking access to important services.
Their opponents argue that such a law would itself stifle innovation, prevent providers from introducing new services, reduce consumer choice and degrade the internet. The spat is destined to develop into outright hostilities as each side marks out its turf.
Already two companies, AT&T and BellSouth, have proposed a high-speed broadband video network separate from the public internet, guaranteeing its own video service at a level of quality unavailable on the public internet.
The net neutrality backers claim that such broadband providers want to move away from the open internet and create an internet fast lane for their own services (at a premium price) and a slow lane for everyone else.
Similar controls, whether stricter or laxer, could also impose a class structure on the control of spam, and on business and consumer online security.
Michael Nelson, vice-president for policy at the Internet Society, says, “Let the internet be the internet.” His point is that some internet watchers fail to understand that the key reason for the internet’s success – it is the keystone of global communications and commerce – is its unique governance structure.
Built “on the run” yet still evolving, the internet governance system, Nelson suggests, is a “hearty hybrid of technical task forces, website operators, professional societies, IT companies, and individual users that has somehow helped to guide the growth of an enormous, creative, flexible, and immensely popular communications system”.
Nelson adds that the internet has grown so rapidly while running many powerful applications because it was designed to provide individual users with as many choices as possible, while preserving the end-to-end nature of the network.
“Because there are competing groups with competing solutions to users’ problems, users, suppliers, and providers get to determine how the internet evolves,” he says.
“The genius of the internet is that open standards and open processes enable anyone with a good idea to develop, propose and promote new standards and applications.”
And that is good news for the winery owner in South Africa, the teacher in the Andes, or the small Asian merchant who does not care about how the internet is governed or structured – but still benefits from the debate.
Under the existing model for internet governance, it is users – not governments and phone companies – who have the most influence. It is an open landscape and users’ demands drive innovation and competition.
As the internet is truly global, there is an undeniable need for international co-ordination on a range of issues, including standards, the management of domain names, cybercrime and spectrum allocation.
In an age when security threats have become ever more sophisticated, many aimed at swindling consumers through identity theft, such co-ordination is important to drive effective, best-of-breed solutions.
Ken Silva, chief security officer at Verisign and chairman of the board of the Internet Security Alliance, says the open nature of TCP/IP means it did not have security built in, so suppliers have had to build security into their applications.
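Silva’s point – that security has to be layered on top of TCP/IP by applications themselves – can be illustrated with a minimal sketch. The shared key, function names and messages below are all hypothetical, but the technique (attaching an HMAC so the receiver can detect tampering) is one common application-layer approach:

```python
import hashlib
import hmac

# TCP/IP delivers bytes with no built-in authentication, so the
# application must verify message integrity itself. One way is to
# attach an HMAC computed with a pre-shared secret key.
SECRET = b"shared-secret-key"  # hypothetical pre-shared key


def sign(message: bytes) -> bytes:
    """Compute an authentication tag for a message."""
    return hmac.new(SECRET, message, hashlib.sha256).digest()


def verify(message: bytes, tag: bytes) -> bool:
    """Check a message against its tag in constant time."""
    return hmac.compare_digest(sign(message), tag)


msg = b"transfer 100 credits"
tag = sign(msg)

print(verify(msg, tag))                    # unmodified message accepted
print(verify(b"transfer 999 credits", tag))  # tampered message rejected
```

Nothing in the network layer performs this check; it exists only because the application adds it, which is exactly the design gap Silva describes.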
One solution, Silva suggests, is the wider adoption of Internet Protocol version 6 (IPv6), a network layer IP standard that increases the number of addresses available for networked devices, allowing, for example, each mobile phone and mobile electronic device to have its own address.
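The scale of that increase is easy to see in a short sketch using Python’s standard ipaddress module (the example address is a documentation address, not a real device’s):

```python
import ipaddress

# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32   # roughly 4.3 billion addresses
ipv6_space = 2 ** 128  # roughly 3.4 x 10^38 addresses

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.3e}")

# An IPv6 address from the reserved documentation range (2001:db8::/32),
# standing in for the per-device addresses Silva describes.
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)    # full form: 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)  # short form: 2001:db8::1
```

With 2^96 times more addresses than IPv4, the protocol can comfortably give every phone and mobile device its own globally unique address.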
By December 2005, IPv6 accounted for a tiny percentage of the live addresses in the publicly accessible internet. Now the US government wants all federal agencies to deploy IPv6 in their network backbones by 2008.
This mandate for federal agencies, Silva says, is an example of the role that governments must play in driving a new internet infrastructure, which is not only more open, but also secure.
“Governments have a responsibility to innovate and think of ways of protecting users,” he says. “If your PC network equated to your water supply, you would effectively have filters on every tap. Instead, we have regional treatment plants. It is not perfect, but it is better than nothing.
“The internet grew out of an investment in defence and academic networks and governments still have the spending power to develop that IP infrastructure and shape it, perhaps to include some degree of authentication. I am just hoping they do it pretty quickly, because all the people that were the brains behind the ‘original’ internet are getting older.”
Silva says the pace of change is greater outside the Western world – the traditional internet infrastructure builder – with the real infrastructure updating taking place in the East.
“Real change is happening in China and Japan because their original communications network was modest at best, but is now being revolutionised by the mobile networks. For example, the only company that can offer end-to-end IPv6 is Japanese: NTT.
“In the developing world too, countries such as Afghanistan can build an infrastructure from the latest and best technology. The irony is that in future, we will be getting on their networks, rather than them getting on ours, because their networks will be more advanced than ours,” Silva says.
Security will remain an issue, however, with any future infrastructure enhancement likely to demand greater use of authentication systems to guarantee users’ identities.
At one internet infrastructure meeting, Vint Cerf said, “If it ain’t broke, don’t fix it.” His words have often been misinterpreted to mean there is nothing wrong with the internet, and that nothing needs to be fixed.
But according to Nelson, there are many issues to address. “We need to reduce the cost of internet access and connect the unconnected. We need to improve the security of cyberspace and fight spam. We need to make it easier to support non-Latin alphabets. We need to promote the adoption of new standards that will enable innovative uses of the internet. And we need better ways of fighting and stopping cybercriminals,” he says.
Then there is the issue of the type of information we can expect to acquire from the internet, and what sort of devices will be used to gain access, according to Howard Gerlis, chairman of the BCS Internet Specialist Group.
“In 10 years, the chances are we will have embraced the semantic web, where machine-readable descriptions will give meaning to content, and facilitate contextual navigation,” he says. “However, by then we might also be seeing the demise of the computer as we know it, to be replaced by some form of mobile device.”
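A rough sketch gives a flavour of what Gerlis means by machine-readable descriptions giving meaning to content. The field names and the article details below are entirely hypothetical, but the idea – structured metadata a program can query without parsing the human-readable page – is the core of the semantic-web vision:

```python
import json

# A hypothetical machine-readable description attached to a web page,
# loosely inspired by the semantic-web idea of describing content
# in a form software can interpret.
description = """
{
  "type": "Article",
  "headline": "Wine exports from the Western Cape",
  "about": ["wine", "South Africa", "export markets"],
  "author": {"type": "Person", "name": "Example Author"}
}
"""

data = json.loads(description)

# A machine can now answer contextual questions about the content
# (what is this page about? who wrote it?) without reading the prose.
print(data["headline"])
print(data["about"])
print(data["author"]["name"])
```

It is this kind of structured description that would let future devices navigate content by meaning and context rather than by keyword alone.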
Whatever the future holds, all of those internet goals are going to be more achievable, many would argue, by keeping the “open” model of the internet that has driven its success, even if that approach upsets the lights-flashing, horn-blaring “fast-laners”.