UK government does not see need for specific AI legislation

The UK government does not currently see the need for new artificial intelligence legislation, as many regulators are already dealing effectively with AI-related harms

The effective governance of artificial intelligence (AI) is a question of regulatory capacity and capability, rather than a need among regulators for new statutory powers, government officials have told MPs.

In March 2023, the UK government published its AI whitepaper, outlining five principles that regulators should consider to facilitate “the safe and innovative use of AI” in their industries.

These are safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress.

The whitepaper added that, over the following 12 months, regulators would be tasked with issuing practical guidance to organisations, as well as other tools and resources such as risk assessment templates, setting out how the five principles should be implemented in their sectors. The government said this could be accompanied by legislation, when parliamentary time allows, to ensure consistency among regulators.

Giving evidence to the Science, Innovation and Technology Committee – which launched an inquiry into the UK’s governance of AI in October 2022, with a remit to examine how the government ensures the technology is used in an ethical and responsible way – Whitehall officials outlined the government’s current thinking on why it does not see the need for new AI-focused legislation.

Responding to chair Greg Clark’s questions about the absence of an AI bill from the King’s Speech (which, he noted, sets out the legislative programme for the parliamentary session running up to 2025), Emran Mian, director general of digital technologies and telecoms at the Department for Science, Innovation and Technology (DSIT), said the government has taken the view that “we don’t need to legislate” because regulators are already taking action to deal with AI-related harms.

“We are already seeing the regulators take action and have due regard, if you like, to the [whitepaper’s principles],” he said, adding that the government has “no current plans for legislation” when pressed on whether this could change as a result of its consultation on the whitepaper, the response to which is yet to be published.

US regulation

Clark, however, noted that even the US – which “is not known for heavy-handed regulation” – has felt the need to start regulating to control use of the technology, and that some UK regulators – including Ofcom and the Competition and Markets Authority (CMA) – have either already been given, or are in the process of being given, updated powers so they can regulate new digital technologies.

“We think it’s really important for us to test whether the legislative framework we already have – and the legislative framework that we’re still in the process of adapting [with the Online Safety Act, the Digital Markets Bill and new data protection legislation] – gives us the powers and gives regulators the powers that they need to deal with the risks,” said Mian.

“If we identify examples of harms or potential harms arising where we don’t think regulators are taking the right steps, or we don’t think regulators have the right powers, then that is obviously [something we will take into consideration].”

Matt Clifford, a prominent investor and the Prime Minister’s representative on AI, added that there is a real need for new skills and people in the public sector to deal with AI, arguing that “these are questions of capacity and capability rather than powers in general”.

Role of the AI Safety Institute

Mian said part of the motivation for establishing a UK AI Safety Institute – announced by prime minister Rishi Sunak in the run-up to the AI Safety Summit – is to help develop the public sector’s capacity to carry out high-quality testing and evaluation of private AI developers’ models, which is currently only being done by the companies themselves.

Echoing Sunak’s statement that companies should not be left to mark their own homework, he added that while “they should be contributing to the development of the science”, it is appropriate for government to play a role because it is not bound by the same “commercial imperatives” that may, for example, push companies to “rush to market” without fully dealing with the safety risks of their AI models.

Noting that leading frontier AI labs have already agreed to voluntarily provide the Institute with access to their underlying foundation models, Clifford added that the government is not asking for full access, as this is not needed for the type of testing envisioned.

“Ultimately, in order to test for risks, we need access in the form that the public will have,” he said, later adding that any new testing regime will also need to take into account the power disparity between giant corporations with the resources needed to operate AI models at scale (both financially and in terms of access to computing power) and smaller application developers.

“Safety has to happen at the model layer, and SMEs are not going to build their own models – they cost hundreds of millions of dollars,” he said. “The SME needs to know that when it uses an OpenAI model, a Google model or an Anthropic model, the developers actually invested … lots of time and money in making it safe.

“The way to get safe adoption of AI by application builders is to reassure the public that their underlying models are safe.”

Claiming the UK is “probably a good bit ahead of other countries” in its work on AI safety, in part because of the Institute, Mian also cited the UK government’s creation of an Algorithmic Transparency Recording Standard, via its Centre for Data Ethics and Innovation, as proof of its progress. Clark, however, pushed back by pointing out that only six reports have been published by public sector organisations under the voluntary standard since it was introduced more than a year ago.

Mian responded that the UK government recognises the importance of transparency, and that similar work may be picked up by the Institute.

Commenting on the government’s £100m Frontier AI Taskforce, which has been on a mission to persuade leading AI researchers to join the unit, Mian said he has been “very pleasantly surprised by the calibre of the people that we’ve been able to bring in” from academia and the private sector.

While there will be limitations on what the Institute can say publicly about its evaluations, particularly regarding sensitive AI capabilities with national security implications, he said: “We want to do as much of this out in the open as possible … I think there’s a really fine balance there between how much information we can provide.”

Mian added that while none of the Institute’s work to date has been peer reviewed, it has already helped policymakers drive forward conversations around AI safety and kickstart international collaboration on the issue.

“We clearly need to keep building the science around what good evaluation looks like,” he said.
