TUC says government is failing to protect workers from AI harms

The TUC warns that artificial intelligence could lead to more widespread workplace discrimination if the right checks are not put in place

The UK government is failing to protect workers from being “exploited” by artificial intelligence (AI) technologies, which if left unchecked could lead to greater workplace discrimination across the economy, the Trades Union Congress (TUC) has said.

The union body further warned that many workers are not even aware AI is being used to make decisions affecting them – including decisions around line-managing, hiring and firing – or that it is being used to otherwise monitor, profile and analyse them.

It added that the government’s AI whitepaper – which was published in March 2023 and outlined its “pro-innovation” framework for regulating AI – only offered a series of “vague” and “flimsy” commitments for the ethical use of AI at work, and that its separate Data Protection and Digital Information Bill (DPDI) has set a “worrying direction of travel”.

TUC assistant general secretary Kate Bell said that despite the risks that AI could lead to widespread workplace discrimination, the government is refusing to put in place the necessary “guardrails” to safeguard workers’ rights. “Instead of clear and enforceable protections, ministers have issued a series of vague and flimsy commitments that are not worth the paper they are written on,” she said. “And they have failed to provide regulators with the resources they need to do their jobs properly. It’s essential that employment law keeps pace with the AI revolution. But last month’s dismal AI whitepaper spectacularly failed to do that.”

Regarding the DPDI, Bell added that the government is “watering down important protections… [which] will leave workers more vulnerable to unscrupulous employers”.

To reduce the potential for AI-powered discrimination, the TUC said it wants employers to disclose how AI is being used to make decisions about staff, and for all such decisions to be subject to human review so that workers can challenge them.

The TUC previously warned in March 2022 that AI-powered workplace surveillance is “spiralling out of control”, and could lead to widespread discrimination, work intensification and unfair treatment without stronger regulation to protect workers.

AI deepens power imbalances

During the TUC’s AI@Work conference on 18 April 2023, speakers discussed in more depth the current issues around AI governance in the UK, and the impact the technology has on those subject to it in the workplace.

They specifically highlighted the potential for AI to deepen power imbalances in the workplace, and stressed the need for workers to be included in conversations about the introduction of new technologies.

Andrew Pakes, deputy general secretary at Prospect Union, for example, said that while there are important philosophical debates to be had about the values embedded in AI and the overall role of the technology in society, the core practical issue of AI at work is how it perpetuates or magnifies existing power imbalances between employers and workers.

“This idea that AI can now hire us, manage us, monitor us, promote us and fire us means we are seeing a change on a level we have not seen before,” he said. “But the important thing is there is a boss who sits behind it and a manager who sits behind it, and that’s the relationship we want to try to fix.”

Pakes added that the move to remote working fostered by the pandemic has led to an explosion of AI and automated digital surveillance of workers – including from small dedicated surveillance software firms as well as household names such as Microsoft – which in turn has intensified the “datafication” of workers.


While AI-powered workplace surveillance offers greater control to organisations over worker behaviour, Pakes said the increasing datafication of employees is also a “profit centre” for employers, which can sell the data on to third parties.

“Not all of us, but many of us, can take our work just about anywhere now with technology, but it also means our work and our bosses can follow us just about everywhere, into our private lives, into our homes,” he said, adding that AI-powered surveillance is no longer restricted to the “canary in the coal mine” of logistics and warehouse workers.

“It doesn’t matter if you’re blue collar or white collar, doesn’t matter if you’re in a factory, in the office or at home – this software can check us and track us and invade us, and we really need to talk about it.”

Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, said that as part of her research she has interviewed numerous economists who only offered a “collective shrug” when asked what they think the overall, long-term impact of AI will be on work.

“We are rolling technologies out without an understanding of how they impact work and workplaces,” she said, describing the process of AI’s deployment as a “massive social experiment” instigated from above. “If there’s one lesson that we take from the history of digital transformation, [it’s that] innovation doesn’t happen to us – it’s made, and it’s made by people in their jobs.

“We are in the midst of an enormous power grab, make no mistake about it,” added Neff. “When we talk about artificial intelligence in the workplace, we are talking about a shift of power. We need to have everybody, especially workers, at the table to understand what these changes are doing to our lives.”

Including workers

Renate Samson, interim associate director at the Ada Lovelace Institute, also noted that the public and workers have, so far, largely been excluded from conversations around the development, deployment and regulation of AI. “If we’re going to talk about public good, I think we all need to be part of the conversation, and that means talking to people with lived experience, talking to people at the EHRC, talking to people to understand where this bias has happened to them,” she said.

Samson added that while current debates around AI focus on how to build trust in the technology, redress is even more important. “It’s not about developing trust,” she said. “It’s about developing redress when trust is broken, and how can we have redress if we don’t know [whether] the system’s being used against us, or for us, or with us?”

Highlighting the disability movement’s historic slogan of “nothing about us without us”, Pakes agreed that workers and unions are simply not in the room when conversations about AI are taking place.

“Too much of the debate is driven by government or technologists, or experts in the third sector,” he said. “We’re not on the AI council, we’re not mentioned in the government’s AI strategy. It doesn’t need to be that difficult. There is a reason Britain has one of the safest sets of workplaces in the world: because we know how to manage risk and reduce harms, and workers are involved in that process.”

Robin Allen, an employment and equalities lawyer at Cloisters Chambers who co-founded the AI Law Consultancy, said: “AI is essentially a stereotyping technology, and therefore almost always has some sort of discriminatory impact unless you’re very careful.”

He added that unions should start to see and use data as a negotiating tool, and that they should push to have access to the same data employers do. “If you collect it, it’s on the terms that we have it as well,” said Allen.

Pakes concluded that the key is for unions to get organised around workplace tech issues, as the current basis of regulation is built around “neoliberal market competition”, so it is no surprise that UK regulators “generally don’t stand up for workers, they hardly stand up for consumers”. “We don’t just need regulators,” he said. “We need regulators that recognise, hear and represent us. If we just create more market-based regulators – and that’s what I think some people want – we will fail in our task.”
