
DWP accused of shielding AI deployments from public scrutiny

Amnesty International and Big Brother Watch say the Department for Work and Pensions’ ‘unchecked’ and opaque experimentation with AI in the UK’s social security system, which treats benefit claimants as automatically suspicious, is being shielded from public scrutiny

Civil rights groups have highlighted a “worrying lack of transparency” in how the Department for Work and Pensions (DWP) is embedding artificial intelligence (AI) in the UK’s social security system.

According to separate reports from Amnesty International and Big Brother Watch, both published in early July 2025, the opaque use of AI systems by the department to determine people’s eligibility for social security schemes like Universal Credit (UC) or Personal Independence Payment (PIP) is having “serious consequences” for benefit claimants.

Both organisations highlighted clear risks of bias associated with the use of AI in this context, and how the technology can exacerbate pre-existing discriminatory outcomes in the UK’s benefits system.

They also detailed how such systems undermine benefit claimants’ data and privacy rights, and treat them as automatically suspicious by design.

However, despite such risks, they say the DWP has created a “wall of secrecy” around how its AI systems operate, leaving people in the dark about how it has made important decisions that affect their day-to-day lives.   

“Internal DWP documents obtained by Big Brother Watch show that the Universal Credit Advances model, used to risk score almost a million Advances claims each year, displays consistent, statistically significant bias,” said the privacy campaign group.

“Fairness analyses of the Advances model and a string of other pilot tools show that algorithmic disparities have been found for age, nationality, relationship status and reported illnesses – even more concerning as these characteristics are also used as proxies for ethnicity, marital status and disability.”

Refusal to publish information openly

Big Brother Watch added that the operation of the DWP’s “suspicion machine” has been shielded from public scrutiny because of its refusal to publish information openly.

Similar points were made by Amnesty, which added that the DWP is justifying its opacity on the basis that disclosing information could enable individuals to exploit the benefit system. “This fundamentally misunderstands how these systems are frequently discriminatory and demonstrates the adoption of a punitive approach,” it said.

“These systems often rely on identity characteristics to establish profiles or risk scores, which for ‘race’ or ‘disability’ are fixed and therefore cannot be changed by an individual to cheat the system. This results in a system that is optimised for fraud detection, rather than being optimised to serve the majority of applications which are legitimate while still being able to detect outlying cases of fraud.”

Both organisations are therefore calling for greater transparency from the UK government about how it is using these high-risk data tools, so that the DWP can be held to account for the negative impacts of its systems, and so that people can challenge wrong decisions made about them.

“The DWP’s ongoing roll-out of high-tech algorithmic tools, which its own assessments have found to be riddled with bias, is alarming,” said Jake Hurfurt, head of research and investigations at Big Brother Watch. “This becomes even more concerning when the DWP is hiding behind a wall of secrecy and refuses to disclose key information that would allow affected individuals and the public to understand how automation is used to affect their lives, and the risks of bias and to privacy involved.

“Instead of pressing forward, the DWP should take a step back and pause the use of any model containing unexplained disparities, and it must become more transparent about how it uses high-tech tools. It is wrong to subject millions of innocent people to shadowy automated or algorithmic decisions, and refuse to explain how these work.”


Amnesty said in its report that while the “real-world impacts” of the DWP’s AI systems are apparent from people’s experiences of interacting with them and what’s been uncovered by civil society groups, the full impacts cannot be truly known without greater transparency. “Without transparency over the use of technology, there can be no meaningful evaluation of whether these systems are operating efficiently or lawfully, and whether or not discrimination is occurring,” it said.

Big Brother Watch similarly highlighted how documents such as data protection impact assessments containing vital details about the systems – including the types of personal data they process and how these are ultimately used by the models, as well as how they meet legal tests around proportionality and necessity – are either not disclosed, or are otherwise redacted heavily if they are published.

“The DWP uses the threat of welfare fraud as a get-out-of-jail-free card to operate in secret, but with machine learning tools allowing for human-light profiling on a large scale, more scrutiny and transparency is clearly needed – not less,” it said.

Imogen Richmond-Bishop, a researcher on technology, economic, social and cultural rights at Amnesty International, added that the DWP’s mission to reduce costs is at the heart of its over-reliance on these problematic technologies. “People are struggling to make ends meet and put food on the table due to cuts in social security, and yet the DWP is more concerned about experimental technologies to surveil claimants,” she said.

“The tech-enabled system to claim and manage welfare benefits is resulting in relentless dehumanisation and strain for people who are already wrestling to access their basic needs in a broken system.” 

Amnesty also highlighted in its report that, despite the DWP’s stated goal of “cost-saving”, it doubts whether these professed savings will ever actually materialise.

“The UK’s National Audit Office has expressed doubts that these savings will ever materialise,” it said. “As of 2024, the DWP estimates that, since 2010, it has cost £2.9bn to implement UC. These growing costs and continually delayed completion dates bring into question whether it is even possible for the digitalisation process to introduce the efficiency and cost savings which were used to justify its introduction.”

DWP comment

Computer Weekly contacted the DWP about both reports and the claims made about its use of AI throughout the social security system.

“We want to improve the experience for everyone who needs to access and use our services, and technology plays an important part in that,” said a spokesperson.

“We ensure the appropriate safeguards are in place to guarantee the lawful, proportionate and ethical use of data and technology. All decisions regarding benefit entitlement or payment are made by DWP staff, who look at all available evidence.”
