The UK government has published a whitepaper outlining its “adaptable” approach to regulating artificial intelligence (AI), which it claims will drive responsible innovation while maintaining public trust in the technology.
Published on 29 March, the whitepaper emphasised the government’s commitment to “unleashing AI’s potential across the economy”, which it said had generated £3.7bn for the UK in 2022 alone.
The whitepaper builds on the government’s national AI strategy, published in September 2021, which outlined its ambition to drive corporate adoption of the technology, boost skills and attract more international investment.
Heralding AI as “one of the five technologies of tomorrow”, the government said in the whitepaper that organisations are currently being held back from using AI to its full potential by a patchwork of legal regimes, which is causing confusion and creating administrative burdens.
However, the government noted that it would avoid introducing “heavy-handed legislation which could stifle innovation”, and instead take a more “adaptable” approach by empowering existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.
The whitepaper also outlines five principles that regulators should consider to facilitate “the safe and innovative use of AI” in their industries. These are safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress.
It added that over the next 12 months, regulators will be tasked with issuing practical guidance to organisations, as well as other tools and resources such as risk assessment templates, that set out how the five principles should be implemented in their sectors. The government said this could be accompanied by legislation, when parliamentary time allows, to ensure consistency among the regulators.
Science, innovation and technology secretary Michelle Donelan said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
In mid-March 2023, the government accepted in full recommendations on the regulation of emerging technologies made by its outgoing chief scientific adviser, Patrick Vallance, who advocated a “light touch” approach and called for a regulatory sandbox to trial AI in real-life situations under the close supervision of regulators.
The government has now confirmed in its whitepaper that the sandbox will receive £2m funding, and that it favours implementing sandboxes as envisaged by Vallance, who recommended a multi-regulator approach overseen by the Digital Regulation Co-operation Forum.
On the recommendation from Vallance that it should create a clear policy position on the relationship between intellectual property law and generative AI, the government said in mid-March that it would create a code of practice for generative AI companies to facilitate their access to copyrighted material, and that it would come up with specific legislation if a satisfactory agreement cannot be reached between AI firms and those in creative sectors.
While the whitepaper reaffirmed the government’s commitment to implementing Vallance’s generative AI recommendations, it gave no further indication of what the proposed code of practice would entail, adding that “it would be premature to take specific regulatory action” over generative AI at this point, as doing so would “risk stifling innovation, preventing AI adoption and distorting the UK’s thriving AI ecosystem.”
However, it did note that, under its pro-innovation framework, regulators may decide to issue specific guidance and requirements for large language models, a type of generative AI.
Welcoming the whitepaper, Sue Daley, director for tech and innovation at trade association TechUK, said her organisation supports the government’s “plans for a context-specific, principle-based approach to governing AI that promotes innovation”, and that “the government must now prioritise building the necessary regulatory capacity, expertise and coordination”.
Grazia Vittadini, chief technology officer at Rolls-Royce, added: “Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.”
While industry welcomed the whitepaper, voices from civil society and trade unions were less enthusiastic, pointing to a number of issues with the approach outlined so far.
Michael Birtwistle, associate director of data and AI law at the Ada Lovelace Institute, for example, said that while the Institute commends the government’s engagement with the regulatory issues around AI, and welcomes the call for improved regulatory coordination, it is concerned “the UK’s approach has significant gaps”, leaving it “underpowered” relative to the scale and urgency of the challenges.
“The UK approach raises more questions than it answers on cutting-edge, general-purpose AI systems like GPT-4 and Bard, and how AI will be applied in contexts like recruitment, education and employment, which are not comprehensively regulated,” he said.
“The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software,” said Birtwistle. “We’d like to see more urgent action on these gaps.”
He added that the UK will also struggle to effectively regulate different uses of AI across different sectors without substantial investment in the capacity of its existing regulators.
General secretary of the Trades Union Congress (TUC), Paul Nowak, said AI will transform the way millions work, and is already being used across the economy to line-manage, hire and fire. “To ensure AI is used ethically – and in a way that benefits working people – we need proper regulation,” he said.
“But the government is passing the buck. Today’s whitepaper is vague and fails to offer any clear guidance to regulators. Instead, we have a series of flimsy commitments. It is essential that employment law keeps pace with the AI revolution. This whitepaper spectacularly fails to do that.”
The TUC previously warned in March 2021 that “huge gaps” in British law over the use of AI at work could lead to “widespread” discrimination and unfair treatment of employees, and warned again a year later that AI-powered workplace surveillance is “spiralling out of control”.
Over the next six months, the government said it would consult with a range of actors on its whitepaper proposals; work with regulators to help them develop guidance; design and publish an AI regulation roadmap; and analyse findings from commissioned research projects to better inform its understanding of the regulatory challenges around AI.
In October 2022, the European Commission announced the AI liability directive, which contains proposals to help those negatively affected by AI’s operation to claim compensation, and is designed to complement and give teeth to the European Union’s AI Act, a separate piece of legislation introduced in April 2021 that adopts a risk-based, market-led approach to regulating the technology.