The digital revolution is challenging the regulatory environment across every westernised, developed economy. Governments in the EU, UK, France, Germany and the US are each trying to take the lead in working out how to deal with the new challenges presented by internet companies such as Facebook and Google. Debates are under way around data protection, privacy, responsibility for content, copyright and algorithms, and other issues that have barely been considered will surely arise – not least around artificial intelligence (AI).
It has long been the case that regulation lags well behind technology, and as a result regulators tend to shoehorn new digital developments into existing structures. A prime example is social media, especially after the fallout from the Facebook and Cambridge Analytica scandal over the use of customer data.
Internet platforms have long been regulated as if they were telecoms companies – US President Bill Clinton established in 1997 that web companies were to be classified as “mere conduits”. This means that, like a phone company or broadband provider, such firms are not considered legally responsible for content shared on their platforms when it is created by their customers and users.
More recently, issues such as extremist content and child abuse images have challenged that doctrine, with many observers calling for web platforms to be treated like media companies, which are legally responsible for all content published on their sites.
Neither scenario works, and it’s clear that a new style of regulation is needed for a different type of company. Can regulators ever keep up with the pace of technological change? Probably not – but they can be better prepared for it.
The next great challenge for regulators will be AI – how, for example, should we oversee the algorithms that will increasingly make decisions that affect our lives? How do we ensure those algorithms are fair and unbiased? What about the development teams that create them – should they be sufficiently diverse to make sure everyone in society is considered? And what about machine learning, where algorithms evolve without human intervention as the AI system “learns”?
Nigel Shadbolt, one of the UK’s leading academics in AI and open data, told Computer Weekly that if the UK wants to take a lead in AI, ethics is the area to focus on. Realistically, the UK cannot compete with the multiple billions that China is throwing at the sector – but China’s social and political culture is unlikely to take the same approach to regulation and ethics as we would.
It’s an easy thing to say and a much harder thing to do – but the UK has a unique opportunity to lead the world in ethical regulation of the digital revolution. Don’t regulate on specifics – regulate on values and principles that can underpin technology development for years, maybe even decades, to come.
The UK government is already setting up a Centre for Data Ethics and Innovation, and Theresa May has called for the UK to be a world leader in ethical AI. We have a genuine opportunity to set the standards that the world will follow. In such uncertain times for the UK tech sector, ethics is one area where we can and must take the lead.