The AI regulation gap: Is the UK’s pro-innovation approach enough?

As AI advances, the UK’s pro-innovation regulatory stance faces growing scrutiny. With non-statutory principles at its core, can the framework maintain public trust and provide effective oversight?

The UK has positioned itself as an innovation hub for artificial intelligence (AI), favouring a regulatory model designed to support experimentation rather than impose wide-ranging control. Unlike the European Union’s (EU’s) sweeping AI Act, the UK’s approach relies on sector regulators applying principles rather than enforcing a centralised, binding statute.

Supporters argue that a more hands-off approach gives businesses the freedom and confidence to build and deploy AI at speed. However, critics warn that without statutory safeguards, the UK risks exposing citizens and markets to significant harm. As general-purpose AI accelerates and crosses traditional boundaries, the debate over whether the UK’s light-touch approach is sustainable and practical has intensified.

When the UK published its AI white paper, its flexibility was cited as a core strength. Rather than replicate the EU’s more restrictive approach, the UK emphasised proportionate oversight rooted in context, while paying close attention to existing regulatory duties.

Fraser Raleigh, managing director of public affairs at SEC Newgate, said the model has given AI developers room to explore new applications. He noted that the UK’s regulatory regime “has not stymied adoption and innovation, mainly because the government chose to work through existing watchdogs rather than create a one-size-fits-all law”. This flexibility, however, has surfaced new issues such as disputes over how large language models use creative content.

But as AI capabilities have shifted from narrow tools to general-purpose systems shaping decisions across healthcare, finance, education and public administration, several experts argue that a sector-led model is increasingly difficult to maintain.

Louise McCormack, an AI consultant at Daon, emphasised that general-purpose AI “does not respect the boundaries of traditional regulators”. She explained that dangers such as opacity, provenance failures and bias propagation arise “wherever the system is used, making it difficult for sector regulators to keep pace on their own”.

This view is echoed across industry. Jane Smith, field chief data and AI officer at ThoughtSpot, told Computer Weekly the belief that a context-led model can keep up with general-purpose AI is becoming harder to sustain. “Regulation legitimises industry and fuels adoption,” she said, adding that the “bigger risk for AI is the absence of regulation”, particularly where systems are embedded into many parts of daily life. 

And Petr Baudis, co-founder and chief technology officer (CTO) of Rossum, took a more cautious stance, arguing that centralised intervention should be reserved for issues with “society-wide risks”.

He said existing regulators could manage sector-specific concerns for now, “but only if there is a clear distinction between ordinary risks and those that require central action”. Without that clarity, he warned, fragmented oversight could become an obstacle. Together, these perspectives reveal the pressure placed on a model built for a different technological era.

Public trust and the danger of voluntary principles 

The UK’s regulatory framework centres on five non-statutory principles: safety, transparency, fairness, accountability and contestability. Regulators are expected to interpret these within their existing powers. The intention is to keep rules adaptable and innovation-friendly, supporting the central tenet of the UK’s approach to governing AI.

Yet multiple experts warn that voluntary principles struggle to deliver the trust required for high-stakes deployment. 

Smith argued that “there isn’t really any incentive to abide by them if there aren’t legal obligations or sanctions”. She said this could lead to reduced uptake of AI and lower trust, especially if failures occur in policing or welfare, for example. High-profile problems, she said, “would compound this” and contribute to the delegitimisation of public institutions. 

McCormack reinforced this point, saying voluntary principles often become “neat lists rather than living duties” and risk creating “ethical hollowing”, where companies prioritise the appearance of responsibility rather than meaningful safeguards – AI washing, perhaps?

She emphasised that systems influencing decisions about dignity, rights or compassion “cannot be governed by aspiration alone”.

Rich Went, head of client services and strategy at Gallium Ventures, offered a historical perspective. He noted that industries left unchecked for too long – from banking pre-2007 to cryptocurrency in the 2010s – suffered major reputational crises.

Went added that with AI, “we are now seeing the same patterns emerge”, and that trust drops significantly “when guardrails arrive too late”. For him, “effective enforcement” is essential for long-term confidence. 

Regulators themselves acknowledge the challenge. Sabeen Malik, vice-president of global government affairs and public policy at Rapid7, told Computer Weekly that meaningful accountability depends on clear expectations, measurable progress and strong collaboration between government and industry. She said AI safety “cannot rely solely on consultation” and must embed secure-by-design practices to avoid creating “the appearance of security rather than real resilience”.

As AI moves deeper into public services, the cost of insufficient oversight becomes more visible – and more personal. 

Fragmented oversight, global competition and strategic uncertainty 

The UK’s approach depends on multiple regulators working together coherently, but the scale of that challenge is growing. Some regulators face resource constraints, and many are navigating their first significant AI cases while the technology continues to advance.

Mark Pestridge, executive vice-president and general manager at Telehouse Europe, said the ability to test frameworks before legislating is valuable, especially for smaller organisations. However, he warned that prolonged uncertainty is already shaping investment choices. Many boards, he said, are pausing large projects until they understand how the UK, EU and US intend to diverge or align. Businesses, added Pestridge, “need confidence about where sensitive data may be processed and how long it must stay within a jurisdiction”. 

Wayne Cleghorn, technology partner at Excello Law, took a more critical view. He described the UK’s reliance on disparate regulators as “an experiment in an uncontrolled environment” and argued that the country “has no known and proven model” for coordinating oversight of technologies evolving at this pace. He also warned that the UK risks inheriting foreign norms by default because “AI developed in the US, EU and China will already embed those jurisdictions’ laws and standards”.

This issue is not only technical, but also geopolitical. The UK has attempted to strengthen its influence through initiatives such as the AI Safety Summit, but without a robust statutory regime, some question its long-term leadership.

Malik argued that the UK can retain credibility if it treats AI safety as an operational discipline.

That means championing “practical approaches to compute governance”, establishing strong incident-reporting processes and ensuring regulators can enforce clear expectations while legislation is still under development.

For Baudis, leadership also requires pragmatism. He said moving too quickly on sweeping legislation without consensus could “be counterproductive to innovation and the UK’s competitiveness”, particularly as other nations leverage AI at scale. 

The debate highlights a tension at the heart of the UK’s position: can a decentralised model attract investment while maintaining global credibility on safety?

Innovation, risk and the threshold for legislation

Supporters of the UK’s pro-innovation stance argue that early or overly prescriptive regulation risks stifling the development of AI tools that could generate significant economic and social value. But patience has limits. 

Raleigh said that while the flexible regime has helped adoption, emerging challenges – such as copyright, data rights and content provenance – show that flexibility alone cannot address every issue. He noted that the creative sector is already pushing for stronger protection, indicating that sector regulators are struggling to provide clarity quickly enough. 

Smith was more direct. She argued that by delaying sweeping legislation, the UK has “abandoned the one area where it could have led globally: rigour”.

Smith said the US competes with innovation and China with scale, and that the UK’s natural differentiator should be standards. Instead, she warned, the delay “leaves everyone unclear about what to expect” and risks looking like capitulation to major technology companies.

Pestridge agreed that patience is only useful if the time is used well. He said regulators must provide “clear, practical guidance and a timetable for legislation” to maintain confidence. 

Cleghorn added that some uses of AI – such as automated decisions determining liberty or critical healthcare outcomes – require a statutory definition of what is unacceptable. Leaving these decisions to voluntary principles or sector regulators, he argued, risks long-term harm and undermines public trust.

The UK’s crossroads moment

The UK’s pro-innovation approach has delivered real strengths. It has supported rapid experimentation, allowed regulators to learn in context and positioned the UK as a distinctive voice in global AI debates.

But the growth of general-purpose AI has exposed the limits of a system built on voluntary principles and decentralised oversight. As models become more capable and influential, the pressure to introduce statutory duties becomes harder to ignore.

Every expert consulted acknowledged that the UK must eventually introduce targeted legal requirements. The debate, therefore, is about timing and scope: how to maintain momentum while protecting citizens and markets.

For now, the UK finds itself at an inflection point. If it can refine its framework, strengthen regulators and define clear thresholds for intervention, it could deliver a balanced model that combines flexibility with accountability. If it waits too long, it may find that both innovation and trust begin to slip beyond its control.
