Is net neutrality being zapped by radio waves?

There is a principle that internet service providers (ISPs) and governments should treat all data crossing the internet equally. It should not matter what type of device is being used, who the user is, or what site or application the data is coming from or going to – net neutrality should mean no difference in charging models and no discrimination between different use cases.

The arguments go back and forth as to whether this should be enshrined in legislation as a right, or allowed to drift in a competitive open market.

Despite the arguments, and the capacity of technology to advance, there are restrictions imposed by the laws of physics, and certain resources are therefore limited. This might not be too much of an issue for the massed bundles of fibre optics at the heart of fixed-line networks, but wireless networks have to balance range, capacity, power and the frequency spectrum in an increasingly ‘noisy’ environment. Ideally without ‘frying’ anything en route.

While the resources are constrained, the boundless enthusiasm and appetite for accessing mobile data and applications are not. Nor, given the numbers of subscribers and devices, is the number of endpoints diminishing. In fact, with a re-awakened interest in machine-to-machine communications (M2M), or an ‘internet of things’, growth is likely to accelerate further.

So what about unwired net neutrality?

There are already differential services that break the spirit, if not the letter, of the principle. To see how this happens, consider how the way hotels offer Wi-Fi is changing. Initially it appeared to be a new revenue stream, but then establishments realised it was costly to get right. As more venues started to offer it, the differentiation was lost. Once hoteliers realised that they actually made their money from renting out rooms and selling food and drink, free Wi-Fi became a ‘table-stakes’ offering.

Not all have reached this point yet, but the more progressive organisations have already gone a step further. They offer ‘basic’ Wi-Fi for free, but have a premium service with greater bandwidth, lower latency and so on – what might be described as ‘professional’ Wi-Fi, compared to today’s simple ‘hotspots’. The basic tier allows a bit of email and gentle browsing, but the premium service would be good enough for consumers’ IP telephony, gaming and video streaming, or virtual desktops and unified communications for the enterprise user.

Then there are cellular networks. Some carriers are premium-pricing their higher-speed 4G offerings compared to the tariffs on their 3G networks. Of course, with differential caps on usage it also gets a little confusing as to which service is best for an individual user. In countries where only one or a few of the mobile networks offer 4G today, there will be rapid pricing changes as operators switch between land-grab, revenue-maximising and network-quality-protection modes.

Given that users have different needs – from M2M applications that might only require a few guaranteed kilobytes to video-streaming gamers who need high bandwidth and low latency – different types of services will have to be offered. Simply setting caps on how many minutes of communication or megabytes of capacity are bundled, and then charging for the excess, will no longer be sufficient.

Different qualities of service will need to be differentially priced. This might mean application bundling (e.g. all the social media you can eat, but video charged by the megabyte) or guaranteed service levels (e.g. all gaming traffic delivered in sub-XYZ latency, but email transmitted as ‘best efforts’).
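To make the idea concrete, here is a minimal sketch of what per-class rating might look like. Everything in it – the traffic class names, the prices and the bundling choices – is an invented illustration, not any real operator’s tariff or billing system.

```python
# Hypothetical per-class rating model for differential charging.
# Classes, prices and policies are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class TrafficClass:
    name: str
    price_per_mb: float  # 0.0 for bundled, "all you can eat" classes
    best_effort: bool    # True = no delivery/latency guarantee


# Illustrative tariff: social media bundled, video metered per megabyte,
# gaming metered with a latency guarantee, email free but best-effort.
TARIFF = {
    "social": TrafficClass("social", price_per_mb=0.0,  best_effort=False),
    "video":  TrafficClass("video",  price_per_mb=0.02, best_effort=False),
    "gaming": TrafficClass("gaming", price_per_mb=0.01, best_effort=False),
    "email":  TrafficClass("email",  price_per_mb=0.0,  best_effort=True),
}


def rate_usage(usage_mb: dict) -> float:
    """Total charge for a period's usage, keyed by traffic class name."""
    return sum(TARIFF[cls].price_per_mb * mb for cls, mb in usage_mb.items())


# A month's usage: only the metered classes contribute to the bill.
charge = rate_usage({"social": 500, "video": 1000, "gaming": 200, "email": 50})
print(f"Monthly charge: {charge:.2f}")
```

Even this toy version shows why rating becomes harder than counting minutes: the biller has to classify traffic per application before it can price it, which is exactly the challenge for rating and billing systems noted below.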

It will be a real challenge for rating, billing and marketing, but there is no dark fibre in the sky, and all the innovative use of spectrum has an eventual limit which, with ever more users and usage, is close by.

The superfast mobile net is unlikely to be very neutral, but that might work out to be beneficial in the long run.