Transnational AI regulation needed to protect human rights in the UK

Tech companies have told MPs and Lords they would welcome greater harmonisation in regulatory standards at a global level

The transnational nature of artificial intelligence (AI) means international regulation is essential to tackle the safety issues associated with advanced AI, according to tech chiefs. 

In the final evidence session of the Joint Committee on Human Rights inquiry into human rights and the regulation of AI on 25 February, MPs and Lords pressed the AI minister and senior executives from Meta and Microsoft on the adequacy of current safeguards in protecting fundamental rights.

Lawmakers questioned the panel on misinformation, accountability, child safety, existential risk and Britain’s AI sovereignty, probing whether current safeguards are strong enough to protect democratic rights and freedoms as AI systems become embedded across society.

The session came just weeks after the committee warned that the UK’s existing regulatory framework is struggling to keep pace with AI harms – with several regulators telling MPs that a lack of resources, rather than statutory powers, is the greatest hurdle to effective oversight.

Ginny Badanes, general manager of tech for society at Microsoft, and Rob Sherman, deputy chief privacy officer for policy at Meta, welcomed greater harmonisation in regulatory standards at a global level.

Speaking on AI governance, Badanes told MPs that the problem is not a lack of regulatory activity, but fragmentation.

“I worry at times when we have this variety of approaches that we’re not actually addressing the broader safety or human rights risks that are at the centre of what everyone is trying collectively to solve,” she said.

Transnational by design

Badanes added that “everything about advanced AI is transnational by design – the systems are developed, tested and deployed in a variety of places across borders and within multiple supply chains, and then integrated into products that are used at a global scale”.

She argued that an alignment in international standards could lead to a base layer of agreement, “creating a strong place to get out of fragmented models”. 

Sherman echoed this, noting that Meta operates in most countries worldwide and that its human rights policy applies globally.

He added that Meta does not build separate AI models for different countries, despite the regional variation in AI governance. 

Asked whether the UK’s AI Opportunities Action Plan strikes the right balance between innovation and human rights, both companies were broadly supportive.

Badanes said the UK had made “a sensible start”, building on its “strong foundation of human rights” law and taking a risk-based approach.

Public trust, she argued, is “absolutely critical” to AI adoption. “People will not embrace and use a technology that they do not trust,” said Badanes, adding that strong but proportionate regulation would help secure that trust.

Sherman described the UK’s strategy as “a really thoughtful and sensible approach”, and, in some respects, “a global model”. He also praised the UK’s AI Security Institute as “a global thought leader” in technical AI governance.

Misinformation and democracy

The committee asked whether Meta was doing enough to counter the use of AI by foreign actors on social media, raising concerns about how AI and social media are being used to undermine democratic rights and freedoms.

The committee noted that anonymous posting is increasingly the main way people post in Facebook groups.

Sherman stressed that Facebook is a “real identity platform”, where identity is verified using government-issued photo IDs, and that anonymous groups were intended to allow people to share sensitive information without attaching their identity to it. He did not address the platform’s own role in spreading misinformation, saying only: “I would encourage people to be thoughtful about the sources of the information that they consume.”

However, Sherman said the company would “certainly never suggest that the work to do that is done”, noting that adversaries “continue to evolve their tactics” and “behave adversarially”.

On the reliability of large language models, executives admitted AI systems can generate false information – so-called “hallucinations”. While models are “designed to tell you the truth”, Sherman conceded they are not 100% accurate.


Badanes added: “I think it’s incredibly difficult to ask a large language model to consistently provide you with the truth, in part because of the inherent flaws of the way the systems are designed. I do expect they will continue to get better, but also because truth is at times subjective, and it is a challenging environment to guarantee or ensure anything.”

The committee asked about situations in which chatbots provide incorrect or manipulative outputs. Badanes reiterated the importance of public trust in AI, saying it is lost when a system fails to answer a question.

The witnesses said Meta and Microsoft are working to improve factual alignment, provide citations and, in some cases, indicate levels of confidence in responses. They also emphasised the importance of AI literacy and of managing expectations about what services chatbots should provide.

The most difficult questions centred on accountability. When asked who should be responsible if someone suffers harm after relying on incorrect or manipulative AI outputs, such as bad legal advice or encouragement of self-harm, executives stopped short of proposing a specific legal framework.

Microsoft’s Badanes said accountability should attach “where there’s meaningful control”, suggesting responsibility may vary depending on whether harm stems from the model itself, its deployment, or a malicious user. Meta’s Sherman agreed courts would likely need to examine “multiple players” in any given case.

Parental controls

Sherman noted that age verification often varies from app to app, and said that standardised, platform-level verification does not yet exist in the current ecosystem but would be valuable.

Badanes emphasised the variation in experiences of AI across platforms. “A chatbot where a child can form relationships is going to be a higher-risk scenario than potentially a tutoring app,” she said, encouraging a risk-based approach to AI governance rather than attempting to apply a single age-based threshold across AI tools.

“It’s not just about restricting access, we also need to build these age-appropriate designs and safety guardrails – it’s about adding clear boundaries into the system from the very beginning,” said Badanes.

Existential risks from AI 

Asked if individuals should be able to opt out of AI entirely, Sherman said AI has been embedded in services such as Facebook and Instagram “since the beginning”, from news feed ranking to spam filtering. “I don’t think that opting out of AI as a technology is probably realistic,” he said, warning against the idea that it would be possible to “wall off AI from the rest of technology”.

Sherman and Badanes pushed back against binary narratives about artificial general intelligence, such as the 2023 statement from the Center for AI Safety – signed by many tech industry leaders – which warned that AI could pose a risk of extinction.

Sherman said: “I think the reality is maybe a little bit less exciting and a little bit more mundane, which is that the technology will continue to improve iteratively. I don’t think we’re in a situation where we’re going to wake up one day, and the world is vastly different.”

Badanes described existential harm as “low-probability, high-impact”, stressing that companies are focused on managing both long-term and immediate dangers. “We have to address the risks in the here and now,” added Sherman, even as firms continue to plan for more extreme scenarios.

Both firms pointed to internal governance structures, including red-teaming exercises, external expert consultation and frontier risk frameworks. Sherman told MPs that under its frontier risk framework, Meta evaluates models for “chemical, biological, cyber security and autonomy risks” before and after deployment.

They also emphasised the importance of collaboration with governments, noting that states hold intelligence and national security information unavailable to the private sector.

Speaking to the committee in a separate session, AI minister Kanishka Narayan praised the UK’s AI Security Institute, saying it provides “unparalleled pre-deployment access” to advanced models and plays a key role in developing international evaluation standards.

AI sovereignty

The AI minister faced sustained questioning over who is ultimately responsible for AI ethics and redress across government, as peers raised concerns about regulatory gaps, accountability and sovereign capability.

Peers also questioned whether the UK is equipped to handle risks from frontier AI systems, referencing a December 2024 report from the AI Security Institute, which revealed a near 100% success rate for attacks on these models, with the systems failing to block prompts related to illegal activities and cybercrime.

The minister defended the work of the AI Security Institute, formerly the AI Safety Institute, arguing that the name change reflected growing national security concerns.

The AI minister acknowledged limits to his remit, clarifying that he is “definitely not responsible for public service deployment of AI,” which falls to another minister. Instead, individual departments retain responsibility for ethics and regulatory compliance within their sectors.

Narayan outlined a three-step model for increasing the UK’s AI sovereignty: securing critical inputs such as chips; diversifying suppliers to strengthen bargaining power; and ultimately building domestic capability. While acknowledging the UK does not control the entire AI “stack”, he argued it holds strengths in areas such as chip design and AI for science.

“The most important question that I think about,” he said, is how Britain avoids “losing out” in AI as it did in earlier waves of cloud computing.

Badanes likened the regulation of AI to nuclear regulation. “There are a lot of really complicated challenges that we as a big, large society, have been able to resolve that have had similar roots,” she said.

However, MPs raised concerns about AI researchers who have left major companies over safety disagreements. Asked whether voluntary corporate safeguards were sufficient, Sherman responded that firms have “clear internal reporting mechanisms” and “encourage dissent”, but stopped short of calling for binding global treaties.

Industry leaders urged policymakers to prioritise “interoperable, risk-based global standards” for the most capable systems and invest in content provenance tools, including watermarking, to counter misinformation.

Narayan noted that, compared with the first AI Safety Summit at Bletchley Park, the India AI Impact Summit was much more focused on people’s day-to-day experience of AI than on abstract questions about how it might fundamentally transform the economy, or the longer-term risks it may pose.
