Eight US politicians have written to nine leading American tech companies to demand answers about the working conditions of their data workers, who are responsible for the training, moderation and labelling tasks that keep their artificial intelligence (AI) products running.
The letter – addressed to Google, OpenAI, Anthropic, Meta, Microsoft, Amazon, Inflection AI, Scale AI, and IBM – calls on the companies to “not build AI on the backs of exploited workers” and outlines how data workers are often subject to low wages with no benefits, constant surveillance, arbitrary mass rejections and wage theft, and working conditions that contribute to psychological distress.
Signed by senators Edward Markey, Ron Wyden, Elizabeth Warren and Bernie Sanders, and representatives Pramila Jayapal, Jamaal Bowman, Katie Porter and Mark Pocan, the letter added that, contrary to the popular notion of AI as an autonomous, autodidactic system, its operation in practice depends heavily on human labour.
“We write with deep concerns and questions about the working conditions of those who perform your companies’ ‘ghost work’ – unseen but critical tasks such as data labeling, without which there would be no artificial intelligence,” they wrote.
“Despite the essential nature of this work, millions of data workers around the world perform these stressful tasks under constant surveillance, with low wages and no benefits. These conditions not only harm the workers, they also risk the quality of the AI systems – potentially undermining accuracy, introducing bias and jeopardizing data protection.”
Detailing the “gruelling” work conditions, they added, for example, that the median wage of workers on Mechanical Turk (Amazon’s digital labour platform) has been estimated at just $1.77 per hour, most workers receive no health or insurance benefits, and as much as a third of their time is spent on uncompensated work.
They also noted that workers are often under surveillance, with keystroke logs, computer screenshots and even webcam photos taken by digital labour platforms, and that in many cases workers are supervised entirely by algorithm, meaning they face withheld payments and account lockouts with no ability to appeal to a human.
“Tech companies have a responsibility to ensure safe and healthy working conditions, fairly compensated work, and protection from unjust disciplinary proceedings… Unfortunately, many companies have sidestepped these duties, and that must change,” they said in the letter.
The questions posed by US lawmakers to the companies – to which they have requested written responses by 11 October – include whether the firms make publicly available information about the role data workers play in developing their AI technology, how they use data workers in the production of their AI products, the steps each takes to ensure data workers are paid a living wage, and what measures they have in place to ensure data workers have freedom of association and collective bargaining rights at work.
In response to these and other questions, the companies will be expected to outline, for example, how many data workers they employ (both directly and through contractors), the main tasks the workers perform, whether they engage data workers through a digital labour platform, and how pay is set for the work.
Responding to the letter, Krystal Kauffman, a lead organiser with Turkopticon – an advocacy organisation for the collective interests of Amazon Mechanical Turk workers – said it represents “a step forward” in the fight against the unfair practices these companies use in their shadow workforce.
“Tech giants engage in shady labour practices which hurt workers through low pay and a lack of accountability. Turkopticon has been fighting Amazon on one such strategy – called mass rejections – in which employers reject all task submissions, keep the data, but do not pay the workers,” she said. “Amazon has not wanted to work with us on a solution to this problem, but we are excited to see that members of Congress are standing up for these workers.”
Alex Hanna, director of research at the Distributed AI Research (DAIR) Institute, added that generative AI systems such as ChatGPT and other language models would not work without the massive amounts of human intervention performed by data workers across the world.
“We need accountability for big tech firms who like to pretend their new tools operate through engineering magic, rather than large-scale dispossession of labour,” she said.
Speaking with Computer Weekly in October 2022, a number of AI experts, including Hanna, noted that despite the widespread proliferation of ethical frameworks and principles for AI, virtually none of them takes into account the extensive human labour that underpins the technology.
“In the discussion around AI ethics, we really don’t have this discussion of labour situations and labour conditions,” said Hanna at the time. “That is a giant problem because it allows for a lot of ethics-washing.”
Read more about artificial intelligence
- UK government quietly disbands data ethics advisory board: The government has disbanded its Centre for Data Ethics and Innovation’s advisory board in favour of pulling the relevant artificial intelligence (AI) and data knowledge from a pool of external experts.
- Amazon takes $4bn minority stake in AI safety research startup Anthropic: Amazon Web Services (AWS) will build on its existing ties to the firm to help it scale its enterprise-grade large language model (LLM) Claude.
- Lords begin inquiry into large language models: Lords will examine the risks and opportunities of large language models and look at how government can effectively manage them in the coming years.