The UK data regulator has outlined how it will approach the regulation of artificial intelligence and biometric technologies, focusing in particular on automated decision-making systems and police facial recognition
The UK Information Commissioner’s Office (ICO) has launched an artificial intelligence (AI) and biometrics strategy, which the regulator says will support innovation while protecting people’s data rights.
Published on 5 June 2025, the strategy highlights how the ICO will focus its efforts on technology use cases where most of the risks are concentrated, but where there is also “significant potential” for public benefit.
The regulator added that it would consult on updating its guidance on automated decision-making (ADM) and profiling, working specifically with early adopters such as the Department for Work and Pensions (DWP), and produce a horizon-scanning report on the implications of agentic AI, which is increasingly capable of acting autonomously.
“The same data protection principles apply now as they always have – trust matters, and it can only be built by organisations using people’s personal information responsibly,” said information commissioner John Edwards at the launch of the strategy. “Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails.”
The strategy also outlined that, because “we consistently see public concern” around transparency and explainability, bias and discrimination, and rights and redress, these are the areas where the regulator will focus its efforts.
On AI models, for example, the ICO said it will “secure assurances” from developers about how they are using people’s personal information, so that people are aware. For police facial recognition, the regulator said it will publish guidance clarifying how the technology can be deployed lawfully.
Police facial recognition systems will also be audited, with the findings published to assure people that the systems are being well governed and their rights are being protected.
Artificial intelligence is more than just a technology change – it is a change in society. But AI must work for everyone ... and that involves putting fairness, openness and inclusion into the underpinnings
Dawn Butler, AI All Party Parliamentary Group
“Artificial intelligence is more than just a technology change – it is a change in society. It will increasingly change how we get healthcare, attend school, travel and even experience democracy,” said Dawn Butler, vice-chair of the AI All Party Parliamentary Group (APPG), at the strategy’s launch. “But AI must work for everyone, not just a few people, to change things. And that involves putting fairness, openness and inclusion into the underpinnings.”
Lord Clement-Jones, co-chair of the AI APPG, added: “The AI revolution must be founded on trust. Privacy, transparency and accountability are not impediments to innovation – they constitute its foundation. AI is advancing rapidly, transitioning from generative models to autonomous systems. However, increased speed introduces complexity. Complexity entails risk. We must guarantee that innovation does not compromise public trust, individual rights, or democratic principles.”
The strategy noted that public concerns are particularly high when it comes to police biometrics, the use of automated algorithms by recruiters, and the use of AI to determine people’s eligibility for welfare benefits.
“In 2024, just 8% of UK organisations reported using AI decision-making tools when processing personal information, and 7% reported using facial or biometric recognition. Both were up only marginally from the previous year,” said the regulator.
“Our objective is to empower organisations to use these complex and evolving AI and biometric technologies in line with data protection law. This means people are protected and have increased trust and confidence in how organisations are using these technologies.
“However, we will not hesitate to use our formal powers to safeguard people’s rights if organisations are using personal information recklessly or seeking to avoid their responsibilities. By intervening proportionately, we will create a fairer playing field for compliant organisations and ensure robust protections for people.”
In late May 2025, an analysis by the Ada Lovelace Institute found that “significant gaps and fragmentation” in the existing “patchwork” of governance frameworks for biometric surveillance technologies mean people’s rights are not being adequately protected.
While the Ada Lovelace Institute’s analysis focused primarily on deficiencies in UK policing’s use of live facial recognition (LFR) technology – which it identified as the most prominent and highly governed biometric surveillance use case – it noted there is a need for legal clarity and effective governance for “biometric mass surveillance technologies” across the board.
The analysis follows a number of previous reviews of biometrics governance in the UK. However, while most of these focused purely on police biometrics, the Ryder review in particular also took into account private sector uses of biometric data and technologies, such as in public-private partnerships and for workplace monitoring.
Read more about UK data protection issues
UK’s error-prone eVisa system is ‘anxiety-inducing’: People experiencing technical errors with the Home Office’s electronic visa system explain the psychological toll of not being able to reliably prove their immigration status in the face of a hostile and unresponsive bureaucracy.
European Commission should rescind UK data adequacy: Civil society organisations have urged the European Commission not to renew the UK’s data adequacy, given the country’s growing divergence from European data protection standards.
Met Police to deploy permanent facial recognition tech in Croydon: The Met Police is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which was taken with no community input – will further contribute to the over-policing of Black communities.