
Lack of resources greatest hurdle for regulating AI, MPs told

Regulators warned that statutory powers alone cannot address the ethical harms of artificial intelligence

Closer cooperation between regulators and increased funding are needed for the UK to deal effectively with the human rights harms associated with the proliferation of artificial intelligence (AI) systems. 

On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK’s regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI. 

While there are at least 13 regulators in the UK with remits relating to AI, there is no single regulator dedicated to the technology.

The government has stated that AI should be regulated by the UK’s existing framework, but witnesses from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and Ofcom warned MPs and Lords that the current disconnected approach risks falling behind fast-moving AI without stronger coordination and resourcing. 

Mary-Ann Stephenson, chair of the EHRC, stressed that resources were the greatest hurdle in regulating the technology. “There is a great deal more that we would like to do in this area if we had more resources,” she said.

Stephenson highlighted that the EHRC’s budget has been frozen at £17.1m since 2012, when that figure was the minimum required for the commission to perform its statutory functions. She told MPs and Lords that the freeze amounts to a 35% cut in real terms.

Regulators told the committee that the legal framework is largely in place to address AI-related discrimination and rights harms through the Equality Act.  

The constraint is therefore one of capacity and resources, not a lack of statutory powers. As a result, much enforcement is reactive rather than proactive.

Stephenson said: “The first thing the government should do is ensure that existing regulators are sufficiently funded, and funded to be able to work together so that we can respond swiftly when gaps are identified.”

Andrew Breeze, director for online safety technology policy at Ofcom, stressed that regulation could not keep pace with rapid AI development.

However, regulators also stressed that they are technology-neutral, with their powers over AI applying only at the use-case and deployment level. Ofcom, the ICO and the EHRC have no power to refuse or give prior approval to new AI products.

The committee itself expressed a strong interest in having a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the pharmaceutical industry. 

“Big business, lots of jobs, capable of doing enormous good for so many people, but equally capable of doing a lot of damage,” she said. “We would not dream of not having a specific medicines regulator in this country or any developed country, even though there might be privacy issues and general human rights issues.”

Rather than a single super-regulator, the regulators favoured a coordinating body that would strengthen cross-regulator mechanisms. They stressed that because AI is a general-purpose technology, regulation works best when handled by sector regulators covering specific domains.

Forms of coordination are already in place, such as the Digital Regulation Cooperation Forum (DRCF), formed in July 2020 and now bringing together four regulators to strengthen their working relationship.

It has created cross-regulatory teams to share knowledge and develop collective views on digital issues, including algorithmic processing, design frameworks, digital advertising technologies and end-to-end encryption. 

The then-outgoing information commissioner, Elizabeth Denham, told MPs and peers that information-sharing gateways between regulators and the ability to perform compulsory audits “would ensure that technology companies, some the size of nation-states, are not forum shopping or running one regulator against another”.

Spread of misinformation 

Breeze made the case for greater international regulatory cooperation with regard to disinformation produced by AI. 

Ofcom clarified that, under the UK’s Online Safety Act, it does not have the power to regulate the spread of misinformation on social media. 

“Parliament explicitly decided at the time the Online Safety Bill was passed not to cover content that was harmful but legal, except to the extent that it harms children,” said Breeze.

While misinformation and disinformation regulation is largely absent from UK law, it is present in the European Union’s Digital Services Act, the bloc’s counterpart to the Online Safety Act.

Because of the cross-border nature of large tech companies, Breeze noted that legal action on discrimination can sometimes be taken using European legislation.

Age assurance and the Online Safety Act

Regulators also addressed scepticism about age assurance safeguards in the context of the proposed social media ban for under-16s and restrictions on access to online pornography.

Breeze said age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.

Responding to criticism that the Online Safety Act has been ineffective due to the widespread use of virtual private networks (VPNs), Breeze said: “Checks are about ensuring as many young people as possible are protected from seeing products deemed harmful to them ... and there is no impregnable defence that you can create on the internet against a determined person, adult or child.”

He said the evidence shows that the majority of children who report seeing harmful content were not looking for it.

The same committee heard in November 2025 that the UK government’s deregulatory approach to artificial intelligence would fail to deal with the technology’s highly scalable human rights harms and could lead to further public disenfranchisement.

Big Brother Watch director Silkie Carlo highlighted that the government’s “very optimistic and commercial-focused outlook on AI” and the Data Use and Access Act (DUAA) have “decimated people’s protections against automated decision-making”.

Carlo added that there is real potential for AI-enabled mass surveillance to “spiral out of control”, and that a system built for one purpose could easily be deployed for another “in the blink of an eye”.
