The Ofcom Consultation and the future of Digital Britain

HMG has just launched a consultation to extend the remit of Ofcom to promote “efficient investment in infrastructure”. The six-week timescale is determined by the need to legislate before the next government reviews the very existence of Ofcom. But can we afford a two-year wait for a proper review of the UK communications infrastructure, given the stress tests it will face in 2012 if not before? And can we afford to leave that review to Ofcom?

The internet and broadband collapses of the past fortnight, internationally and domestically, have once again shown the fragility of the infrastructure on which we are gambling the future of society – from centralised databases to cloud computing. 

Forty years ago there was a view that big systems were best designed by small teams producing a simple core with complexity devolved to ring-fenced subsystems. The Internet was itself a classic example.

We have yet to suffer a large-scale computer-created catastrophe. In consequence, there is a growing view that it is acceptable for society to be critically reliant on complex interdependent systems that no-one understands.

That thinking is about as shallow as that of most of those who want to “save the planet”. They tend to forget that saving the planet is not the problem: the planet will survive. But part of the price may well be the removal of its main pollutant – Homo sapiens.

Recent FIPR alerts juxtaposed a story in the Times that “11 million parents must have CRB checks and go on a database if they wish to accompany school trips” with a story in the Independent that “4 million UK identities and supporting card details are available for sale over the Internet”. Lobbyists say we must be better able to identify ourselves on-line so that their clients can sell to us. Meanwhile fraud and impersonation soar. The only thing preventing melt-down is that organised crime wishes to milk e-commerce and e-banking, not kill it.

The engineers who designed the Internet watched as an infinity of marketing monkeys, aided and abetted by another infinity of IPR and regulatory lawyers, heaped layer upon layer of combustible gobbledygook onto a very simple core.

The Internet will survive the melt-down that is to come.

Most of the layers of vulnerability and obfuscation will not.

Nor will those whose applications and business models assume a level of communications infrastructure reliability and resilience that is not yet even being properly planned, let alone available. Meanwhile the means of identifying which parts of the communications infrastructure really are secure and resilient are blocked by levels of obfuscation – variously in the name of “competition”, “commercial confidentiality” and “security”.

Network failures are said to be becoming less common but more serious: partly because of growing centralisation and partly because back-up facilities, from hot-standby battery back-up and local storage to alternative routings and power suppliers, have been rationalised away to reduce cost and avoid the impact of business rates.

The experience from catastrophes such as Hurricane Katrina is that most modern wireless-based communication networks, including Internet connections, fall over within 30 minutes of the power supply failing, but can be brought back up quite quickly, provided they are not out for more than 3–4 days.

Meanwhile traditional networks keep going until the diesel in the standby generators runs out: perhaps 3–4 days. If the problems are not sorted by then, the cost of recovery begins to escalate, dramatically after about ten days. It may then take months, or even years.

The 2020 world of secure fixed and mobile broadband networks carrying IPv6 services will almost certainly be based on communications, hardware, software and wetware (people processes) whose provenance (including security and resilience) can be audited along chains of trust. Most of the audit tools are now in place but are rarely used for real outside the United States and China. They need to be used in earnest to create secure resilient networks that will survive the melt-down of cheap, always-on, cloud computing that is just around the corner.

There are signs that this is beginning to happen around the Pacific Rim, but the UK and Europe still appear to be at the stage of fragmented and duplicated emergency planning exercises, with no-one responsible for harvesting and disseminating the lessons and skills gained, let alone planning the investment necessary to remove the vulnerabilities identified.

What will cause that to change?

Doing nothing and letting events take their course.

The panic after the first major computer-enhanced catastrophe will bring about change.

Of course a few tens of thousands may die when the centralised medical records go off air before the casualties reach A&E from the food riots, after our centralised food distribution systems have failed alongside power suppliers, alarm systems et al.

But that is the price we have already committed to pay by not taking resilience seriously.

You did not notice that decision being taken.

Silly you.

Next month I am due to help package the material being produced by the EURIM Security by Design group for presentation to the Class of 2010: the biggest intake of new MPs since 1833. The new MPs will be getting up to political speed when disaster strikes during the 2012 Olympics – probably because of overload or cock-up, rather than terrorist or criminal assault.

“Henny Penny, the sky is falling” is not the message I or my colleagues wish to give. We would far rather say that “security and resilience need not be expensive – provided they are planned in advance”.

But who is doing that planning? 

Meanwhile a parallel EURIM “Value of Information” sub-group is looking at ways of justifying the necessary spend, based on the business value of networks, systems and the information they carry.

But how will we secure that spend?

By extending the powers of Ofcom?

Or by getting organisations like the Royal Academy of Engineering to work alongside those who wish their business operations to survive a major communications or power failure that takes their competitors off air?

Can cybersecurity czars, tick-box compliance regulators and blame-avoidance government planners really change market behaviour and bring us infrastructures that are fit for purpose?

Or do they merely get in the way of change? 

Could Ofcom have a role in helping bulldoze the obstacles to change out of the way – even if that meant civil war in the corridors of power?

And how do we ensure it better protects consumers (including business consumers) against abuse by the cartel that runs the current infrastructure, and better enables us all to make genuinely informed choices?

The evolution of UK/EU regulatory structures over the past couple of decades (not just Ofcom but all the rest of the Ofalgebra) shows that regulators are indeed a necessary evil. The fewer we have the better. The best are those that work to make themselves redundant. But nirvana never happens. The best are succeeded by those who seek the quiet life, or who plan the way ahead in collaboration with the dominant players of the day.

No-one ever accused those currently at the head of Ofcom of seeking a quiet life, but is it any easier to plan the future now than when we failed in the 1970s – the last time that industrial strategies were fashionable?

I ran one of the more successful strategic exercises of the early 1970s. It reported in May 1974 as “The Next Ten Years: A Computing Development Plan for Regional Water Authorities”. Implementation got off to a flying start but the plan was out of date inside three years. That was an invaluable lesson when, in 1978, I became involved in the planning studies for telecommunications privatisation and liberalisation.

The consultation period over extending the powers of Ofcom may be short, but it could nonetheless determine whether the UK will still be part of the global knowledge economy in 2012, let alone 2015 or 2020.

Speak now or else forever hold your peace.