
Creative workers say livelihoods threatened by generative AI

Computer Weekly speaks with various creative workers about the impact generative artificial intelligence systems are having on their work and livelihoods

Tools such as OpenAI’s DALL·E 2, Sora and ChatGPT, Google’s Bard (now Gemini), and Microsoft’s Copilot now make it possible for anyone to create images, text and even music from a single prompt. The sector has quickly come to dominate the tech industry, with Bloomberg Intelligence analyst Mandeep Singh estimating that the generative AI market could grow at 42% a year to reach $1.3tn by 2032.

There has been huge hype around the potential of these tools for fuelling creativity, with many creative industry leaders celebrating the potential of generative AI (GenAI). But what does this mean for those whose creativity is their livelihood?

Laurence Bouvard is an actress and voiceover artist, who has worked in TV, film, theatre, videogames and audiobooks. A member of Equity, she is also chair of its Screen and New Media industrial committee.

While Bouvard recognises that GenAI is not solely responsible for the insecurity of creative work, she says it has made earning a living even more challenging. “People are beginning to lose work, sometimes finding themselves in the odd position of competing with their synthetic avatars for jobs, and with very little power to do anything about it,” she says.

In the TV and film sector, Bouvard has noticed that some industry giants are already weaponising generative AI to their advantage, suppressing workers’ rights and risking their livelihoods in the process.

“There are some companies that will not employ performers unless we sign away all our claims and rights to our work, which means the companies can then sell on our work to AI companies wanting training data. There is a serious power imbalance in this way, and it has been the case for many years,” she says.

And Bouvard is not alone in her concerns. In 2022, performing arts and entertainment trade union Equity published a report titled Stop AI stealing the show, revealing that 65% of its members felt that AI posed a threat to their employment opportunities, a figure that rose to 93% among audio artists.

Digital enclosure

In their book Data grab: The new colonialism of big tech and how to fight back, authors Ulises A. Mejias and Nick Couldry outline how the extractivist business model on which generative AI is built exploits creative workers by scraping content from the internet, including their copyrighted works.

They argue that the hype around generative AI has served as a distraction from the processes of theft and enclosure that the makers of these tools depend on, and that this enclosure has largely happened under our noses.

“Our collective cultural and social products now serve as the extracted raw material on which AI relies,” they wrote, giving the example of how Google has used 280,000 hours of music to train its MusicLM software.

“If the generated music were to sound too much like the source material from which it derived, this would open the door to potential lawsuits. But eventually Google released the programme free of charge, like most of its products. In its final form, the AI will not comply with requests to copy specific artists or vocals, allowing Google to avoid potential charges of copyright infringement.

“But there is still a corporation expropriating our cultural production as source material to train a machine to do the work of humans, because machines will be able to do the work more quickly and cheaply.”

Given the controversy around its use of copyrighted material in its training corpus, Google said in October 2023 that it would shoulder any potential legal liabilities for users of its generative AI services arising from copyright claims over the use of intellectual property in its training data.

Devaluing human creativity

Beyond the threat GenAI poses to workers’ livelihoods, others are concerned that the rise of the technology could devalue original art. Chris Barker, a graphic artist and art director for Campaign magazine, for example, says that people have responded to his recent artwork – a take on the Willy Wonka experience scandal that went viral on Twitter – by asking if it was generated by AI.

“I used to do cover illustrations for The New European,” he says. “It was a very fulfilling job, brainstorming ideas with the editor about the week’s political developments and then staying up late at night photoshopping a load of little Boris Johnson ninjas protecting Dominic Cummings or Donald Trump with an arse for a face. I was really pleased with the covers and they got a load of attention, but now I would imagine people would just assume it was all AI generated, which is a shame.”

Ela Lee, whose debut novel Jaded was published earlier this year, also believes that generative AI cannot replicate the labour of love that human creativity requires.

“When I wrote my novel, I thought about every word, revisited every paragraph, scrutinised how each punctuation mark affected the prose,” she says. “Whilst AI might be able to ‘write’ something that is concise and convenient in its grammatical accuracy, I worry that it veers us towards a standard that is uniform, stripped of nuance and, ultimately, lacking the human voice that makes people connect to art.”

Some creatives, however, have a more optimistic outlook on how emerging technologies might shape the future of their work.

Serenda is a music producer and artist. She believes that creatives have always been innovative in their use of technology, and generative AI will be no different: “If anything, I hope AI inspires people to take more risks and push boundaries with the art they make. If you’re constantly pushing boundaries and creating interesting and experimental art, I think you’re always one step ahead and you have nothing to worry about.”

Government inaction

Despite these concerns, creators tell Computer Weekly that the UK government has been slow to introduce new protections or to update existing legislation – such as the Copyright, Designs and Patents Act or the General Data Protection Regulation (GDPR) – in the context of emerging technologies, and is instead prioritising tech companies’ self-regulation and profits over the rights of workers.

“The government cannot take a laissez-faire approach to AI in the arts and entertainment industry. The UK is an attractive place to invest in the arts and entertainment because we have a strong copyright and intellectual property framework which protects performers,” says Tom Peters, Policy Officer at Equity.

“We want to see the government adopt the Beijing Treaty to give performers more say over how their work is used. This would build a proper protection regime that would encourage inward investment and ensure performers are able to consent to the use of their work and receive fair remuneration.” 

According to advice from The Society of Authors, which represents more than 12,000 writers, authors and translators across the UK, AI developers ought to be required to obtain authors’ consent before using their work in an AI system, in accordance with UK copyright law.

Despite government inaction, there is already growing pushback against the practices of generative AI firms, even from within the industry itself.

Ed Newton-Rex is the former head of studio at Stability AI, and has worked in AI and music for 13 years. At the end of last year, he resigned from the company over the ethics of using creative work and copyrighted content to develop AI without consent – an issue over which the company is now facing legal action.

In the digital world, artists are also staging protests and fighting back with tools that prevent their work from being scraped. Offline, the US Writers Guild of America strike is undeniably the most public case of worker resistance against the exploitative use of generative AI.

Following one of the longest labour strikes in Hollywood history, the writers secured an agreement on the terms of use of artificial intelligence. The new terms prevent studios from using AI to write scripts or to rewrite scripts that have been written by a writer.

Joy Buolamwini, author of Unmasking AI and founder of the Algorithmic Justice League, proposes that creative rights can be asserted using the four Cs: consent, compensation, control and credit.

Undeniably, creative workers have a pivotal part to play in asserting the terms of use of AI in relation to their work, as recognised by Bouvard.

“Hopefully, between collectively bargained agreements and eventual changes to legislation, this situation will change, but it may not happen soon enough to prevent serious damage occurring to the industry,” she tells Computer Weekly.

Read more about artificial intelligence

  • UK government responds to AI whitepaper consultation: The UK government is considering introducing binding legal requirements for companies developing the most powerful AI systems, and has outlined a range of funding to realise the ambitions of its ‘pro-innovation’ framework for artificial intelligence.
  • UN chief blasts AI companies for reckless pursuit of profit: The United Nations general secretary has blasted technology companies and governments for pursuing their own narrow interests in artificial intelligence without any consideration of the common good, as part of wider call to reform global governance.
  • Government insists it is acting ‘responsibly’ on military AI: The UK government has responded to calls from a Lords committee that it must “proceed with caution” when it comes to autonomous weapons and military artificial intelligence, arguing that caution is already embedded throughout its approach.
