Networking is becoming more central to computing than ever before. With wireless becoming increasingly prevalent, and new devices making mobile data more usable (and, therefore, more relevant), companies are having to take more notice of the way that they manage their digital communications.
The march of network convergence and the continuing encroachment of IP into every part of the networking world also presents its own challenges. Computer Weekly takes a look at the significant networking trends and events in 2008.
Wimax out of the spotlight
Worldwide Interoperability for Microwave Access (Wimax) is like unified communications, or public key infrastructure (PKI) before that. Every year seems to be the one when it will take off, but it never seems to happen.
In late 2007, Intel was touting 2008 as the high-speed wireless standard's big year, but things have not panned out that way.
In the UK, Ofcom delayed a crucial 2.6GHz spectrum auction, which put a spanner in the works for operators mulling widespread Wimax services.
In the US, Sprint launched a mobile Wimax service in September, and Clearwire already had a fixed wireless service, but a joint venture between the two companies has been frustratingly slow to emerge.
The two firms originally planned to get together on a country-wide Wimax service in 2007, but cancelled the deal. "Sprint had a massive plan for Wimax and it all got canned. That was a major body blow," says Scott Morrison, a research vice-president at Gartner.
Then the two companies reconvened, and have just officially launched their joint venture, called Clear. The plan is to build a nationwide Wimax network to rival AT&T's in 2009, but that will require a large capital expenditure.
A number of companies, including Intel, Comcast and Google, have piled money into the $14bn initiative. Perhaps, once again, Wimax's big year has simply been deferred.
Move to green networking
For the last couple of years, the industry has been complaining about sprawling network and server equipment in the datacentre. With the server world in the middle of a virtualisation feeding frenzy, it is no wonder that network equipment suppliers want to get in on the act.
In March, Cisco launched the aggregated services router (ASR) 1000, an edge-class router aimed at both service providers and large companies, designed to combine different services such as firewall, virtual private network (VPN) and deep packet inspection in a single box.
Juniper also announced plans to get into the datacentre with its edge routing equipment, which is a significant green move because it simplifies the aggregation layer of a network, says Keith Humphreys, managing consultant at analyst euroLAN.
"It is a neat model, the way it collapses the traditional network architecture into a simpler one," he says. "You do not have so many aggregation points."
ISPs struggled to keep up
Internet service providers (ISPs) and carriers faced business pressures as people accessed increasing amounts of content. This year, with the growing success of sites such as Hulu and tools such as iPlayer, the problems began to crystallise, warns Sharifa Amirah, principal consultant at Frost & Sullivan. "2008 is the year where we finally saw things coming to fruition.
"The plasticity of end-user content has become more prominent in the business models of not just the telcos but also internet-based companies," she says, arguing that mobile users in particular are consuming content in spades, and that carriers she speaks with are panicking as they try to keep up.
"The big problem is the infrastructure but then there is also the cost of it. And while consumers are increasingly demanding high-bandwidth services, there is a low propensity to pay for those services," she adds.
As consumer appetites for video and other high-bandwidth content continue to grow, these tensions are likely to worsen.
The internet is broken
The internet may not be perfect, but we live with it. Occasionally, though, researchers come along and tell us what we would rather not know: that the system is inherently broken. Dan Kaminsky, director of penetration testing at security consulting company IOActive, knew he was onto something in March when he discovered a fundamental flaw in the way that the domain name system (DNS) worked.
The flaw allowed attackers to poison a recursive DNS server's cache by flooding it with forged responses, so that subsequent lookups were answered from the attacker's malicious records rather than from the legitimate authoritative server.
Because this attack could work all the way up to the top-level domain, it meant that, in principle, attackers could essentially own .com or any other domain extension. Kaminsky marshalled influential organisations, including the United States Computer Emergency Readiness Team (US-CERT), ISPs and multiple large web players, to install a rapidly developed patch that would make it much more difficult for attackers to compromise the system.
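The arithmetic behind the patch is worth spelling out. Before the fix, a forged DNS reply only had to match a 16-bit transaction ID; the coordinated patch added source-port randomisation, multiplying the space an attacker must search. The sketch below is illustrative only (the probability model and port count are simplifying assumptions, not Kaminsky's own figures):

```python
import math

# Pre-patch: a forged reply is accepted if it matches the query's
# 16-bit transaction ID (the UDP source port was often predictable).
TXID_SPACE = 2 ** 16          # 65,536 possible transaction IDs
PORT_SPACE = 2 ** 16 - 1024   # assume ~64,512 usable ephemeral ports

def guesses_needed(search_space, success_probability=0.5):
    """Expected forged packets for the given success probability,
    modelling each guess as an independent uniform draw (a
    simplification of the real attack)."""
    return math.ceil(math.log(1 - success_probability) /
                     math.log(1 - 1 / search_space))

before_patch = guesses_needed(TXID_SPACE)               # TXID alone
after_patch = guesses_needed(TXID_SPACE * PORT_SPACE)   # TXID x port
```

Under this toy model, tens of thousands of forged packets suffice before the patch, while randomising the source port pushes the figure into the billions, which is why the patch made the attack "much more difficult" rather than impossible.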
Days after Kaminsky unveiled full details of the DNS flaw at Black Hat in August, researchers Anton Kapela and Alex Pilosov demonstrated another problem at sister conference Defcon.
This time, the bug was in the Border Gateway Protocol (BGP), and it could be used to eavesdrop on traffic by routing it via an intermediary server.
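The attack exploits ordinary BGP behaviour: routers forward traffic along the most specific prefix they know, regardless of who announced it. A toy sketch of that longest-prefix rule is below; the prefixes and AS labels are invented for illustration, and real BGP involves far more (path attributes, and the TTL tricks Kapela and Pilosov used to relay intercepted traffic back to its real destination unnoticed):

```python
from ipaddress import ip_address, ip_network

# Hypothetical routing table mapping prefixes to the AS that
# announced them. The victim legitimately announces a /24.
routes = {
    ip_network("203.0.113.0/24"): "victim-AS",
}

def next_hop(dest, table):
    """Forward on the longest (most specific) matching prefix."""
    matches = [net for net in table if ip_address(dest) in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("203.0.113.10", routes))  # victim-AS

# The hijack: announce two more-specific /25 halves of the victim's
# prefix. Longest-prefix matching now prefers the attacker's routes.
routes[ip_network("203.0.113.0/25")] = "attacker-AS"
routes[ip_network("203.0.113.128/25")] = "attacker-AS"

print(next_hop("203.0.113.10", routes))  # attacker-AS wins
```

Because the more-specific announcement wins automatically, no router needs to be compromised; the protocol itself does the redirection, which is what makes this a design flaw rather than a coding bug.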
The significance of both these flaws is that they are not coding bugs - they are basic design flaws in the way that DNS and BGP work.
Fixing them at that level would require going back and recrafting the initial specifications, and with the whole internet already running on them, that is very difficult to do.