As artificial intelligence permeates the economy and society, individuals and civic groups are devising creative ways to rebel, although what impact this will have on AI’s development, adoption and regulation remains unclear
Artificial intelligence (AI) exploded into the mainstream in late 2022 with the release of ChatGPT. Now, barely three years later, it has colonised modern life to a remarkable degree.
Leading chatbots now register roughly 1.5 billion active users a month. OpenAI says 230 million of them ask ChatGPT questions about their health and wellness each week. Global survey data indicates virtually every CEO is clamouring to integrate AI into their company for fear of missing out. And labour markets and dating apps are now largely mediated by AI gatekeepers.
“Everything is AI now, so nothing is AI,” one industry analyst recently told Wired magazine ahead of the 2026 Consumer Electronics Show, the tech industry’s largest retail trade fair, held every year in Las Vegas. “It has reached such a point of saturation,” he said, adding that consumers no longer see a product’s use of AI as a unique selling point.
It reflects a stunning normalisation of a revolutionary technology. But not everyone is happy about it.
Whether grassroots action is enough to reverse this trend and effect lasting change, however, remains an open question.
Citizens flex their agency
Frustrated with lawmakers and byzantine legislative processes, some citizens have decided to take matters into their own hands.
Fanfiction writers are coordinating campaigns against open source platforms that host catalogues of their work harvested without consent. Other creators have used tactics such as data poisoning to interfere with models built on their material. Web developers are increasingly turning to the practice of “tarpitting” – trapping AI bots that try to scrape their content in an endless maze of slow-loading, auto-generated pages and links.
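To make the idea concrete, below is a minimal, hypothetical sketch in Python of how a tarpit endpoint might work. The port, the delay and the link generator are all illustrative assumptions, and a real deployment would typically route only suspected scraper traffic into the trap rather than serving it to every visitor.

```python
# Hypothetical sketch of a "tarpit": an HTTP endpoint that feeds crawlers
# an endless maze of auto-generated links, served deliberately slowly.
import random
import string
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_slug(n=8):
    """Generate a random path segment so every page links to 'new' pages."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Throttle the response: the crawler burns time waiting on each page.
        time.sleep(2)
        # Every page contains links to more pages that exist only on demand,
        # so a bot following links never reaches the end of the "site".
        links = "".join(
            f'<a href="/{random_slug()}">{random_slug()}</a> ' for _ in range(10)
        )
        body = f"<html><body>{links}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # In practice, rules upstream (e.g. user-agent matching) would decide
    # which requests get diverted into the tarpit.
    HTTPServer(("localhost", 8080), TarpitHandler).serve_forever()
```

The design choice that matters is asymmetry: each response costs the server almost nothing to generate, but costs the crawler time and bandwidth to fetch and parse.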
There’s also surging demand for older mobile phones incapable of running the newest versions of popular apps. And Windows users accounting for more than a billion devices are refusing to upgrade to Windows 11, with its embedded AI features.
“Consumer fatigue surrounding AI’s rapid rise has begun to catalyse a more analogue 2026,” wrote a Forbes contributor in January, referring to a mantra emerging on social media whereby users vow to reduce their screen time this year. In one sense, she says, it amounts to “course correcting a years-long pattern of glorifying efficiency and automation over creativity and community”.
Tyler Johnston, the executive director of The Midas Project, a non-profit watchdog group that monitors leading tech companies, says it’s clear that AI does not have the widespread popularity that its developers may have hoped for: “I’m personally not sure why this is, but I imagine that the benefits still aren’t salient for most people, and it’s not clear if the technology will, on net, benefit everyday people or disempower them.”
Activists worldwide are also confronting lawmakers about the downsides of datacentres. Locals complain that the always-on facilities are an eyesore, drain precious water reserves, deflate adjacent property values with relentless noise pollution, and overload energy grids, triggering blackouts and spikes in utility prices. All this while the facilities accrue generous tax breaks and create precious few permanent jobs.
Pockets of the invisible precariat of human workers who make machine-learning models work are mobilising too. Rather than employing their own staff for data labelling, AI developers mostly crowdsource gig workers through platforms such as Amazon Mechanical Turk (AMT), or outsource the work to third-party agencies running “digital sweatshops” in low-wage regions such as the Philippines and Venezuela.
In February 2025, a group of 339 Kenyan data workers employed to train and maintain the AI systems of major tech companies founded the Data Labelers Association. “The workers power all these technological advancements, but they’re paid peanuts and not even recognised,” the group’s president told Computer Weekly at the time. The members he represents are spread across various subsectors of AI development – ranging from self-driving vehicles to robot vacuum cleaners.
For several years now, the non-profit group Turkopticon has likewise functioned as an online forum for AMT contractors by organising mutual aid, resources and advocacy in support of better working conditions.
Same goal, different tactics
Past attempts to address the pathologies stemming from social media show that bottom-up challenges to the roll-out of novel technology can take multiple forms.
For one, communities – in democracies, at least – can assert their interests by engaging elected representatives to influence legislation. While technologically deterministic rhetoric underpins a lot of current AI-related policies, headlines and discourse, governance remains an inherently contested process involving a range of actors and interests. And when it comes to ensuring safety around AI systems and product adoption, crowdsourcing ideas from the citizenry will be essential.
“The biggest risk is allowing a small group to unilaterally decide how AI is built and deployed,” Taiwan’s former minister of digital affairs, Audrey Tang, said in an interview last year. “Only a handful of nations have the power to compete for dominance, while the other 200+ countries are effectively in a race to safety, as they have little control over AI development.”
This highlights an opening for grassroots AI pushback to coalesce into “counter-governance” – an academic concept that refers to citizen interventions altering state-led governance processes.
As Blair Attard-Frost, a fellow at the Alberta Machine Intelligence Institute, explains, this hinges on action being overtly political and effectively organised. When individuals or groups deliberately seek to challenge or disrupt state or industry control over AI, their actions qualify as “counter-governance”. By contrast, opting out of AI use based only on personal ethics or privacy concerns does not meet this threshold unless it is tied to a broader effort to reshape governance itself.
It’s not clear if [AI] technology will, on net, benefit everyday people or disempower them
Tyler Johnston, The Midas Project
This dynamic was visible in recent years around the demise of the Sidewalk Labs project, a smart city development proposal for Toronto’s waterfront led by Google’s parent company Alphabet.
Despite promises of data-driven urban innovation, the project faced sustained criticism over data governance, privacy, surveillance and corporate control of public space. It was ultimately abandoned following extensive public opposition. It remains a rare example of how localised efforts and community resistance might try to constrain large-scale AI initiatives and assert collective authority over the conditions under which such technologies are adopted.
Elsewhere, federated AI systems are attempting to offer an alternative to leading models by allowing data control and model training to remain with local actors rather than being consolidated in large state or corporate platforms. Training models on distributed data without pooling raw information reduces privacy and security risks while supporting regulatory compliance and data sovereignty.
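For a flavour of the mechanics, the toy sketch below shows federated averaging (FedAvg), the basic idea behind federated learning, in Python: each simulated client trains on data that never leaves it, and only the resulting model weights are pooled. The three clients, the linear model and every parameter here are illustrative assumptions, not a description of any production system.

```python
# Toy sketch of federated averaging: clients train locally on private data
# and share only model weights, which a coordinator averages each round.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass on its own private data (linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three simulated clients, each holding data that never leaves the client.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # ten federation rounds
    # Each client refines the current global model on its own data...
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # ...and only the resulting weights are pooled and averaged centrally.
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", global_w)  # converges towards [2.0, -1.0]
```

In real systems the averaging step is typically weighted by client data size and hardened with techniques such as secure aggregation, but the division of labour is the same: the computation travels to the data rather than the other way round.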
Open source models complement this approach by giving downstream developers greater freedom to refine models for new purposes. Here, among the most striking trends, wrote two researchers in Tech Policy Press in December, “is the rise of unaffiliated developers and loosely organised online communities”.
Citing a new study of AI model downloads from Hugging Face, a machine learning tool hub, the researchers point to how “the open source ecosystem is no longer shaped primarily by large companies but by hobbyists, independent researchers, small collectives and new intermediary groups that specialise in repackaging, quantising and adapting models for widespread use”.
This new vanguard, they say, is steadily determining “which models become practical options for ordinary developers”.
Together, federated and open source AI offer the promise of broadening civic participation in AI development by incrementally shifting influence away from a handful of closed frontier models owned by Big Tech toward communities that adapt and circulate their own bespoke models. This raises the possibility that new governance standards and use cases can be built even as formal policy responses are slow-walked or fail to gain traction.
All told, the unfolding push for decentralisation by citizens seeking to wrest back some control over how AI affects their lives echoes the early days of the internet. Back then, the American libertarian John Perry Barlow’s 1996 essay, A declaration of the independence of cyberspace, argued that web users and civil society must actively contest how digital systems are governed as they scale and consolidate power.
The future remains uncertain
Nevertheless, the concentration of AI control within Big Tech appears destined to continue in 2026. Although this may generate unpredictable impacts at the ballot box – especially in America’s November midterms – it could also have ripple effects for global AI governance given Washington’s jurisdictional authority over Silicon Valley.
More than 1,000 proposed bills to regulate AI were introduced at the state level in the US in 2025 alone. Yet the Trump administration’s AI action plan “reads just like a wish list” from Big Tech, an executive from the AI Now Institute, a non-partisan research organisation, said when the strategy was released last July.
In December 2025, Donald Trump took further action by signing an executive order blocking US states from regulating AI – a move some experts think may fracture his base of political support ahead of the polls.
“I think one of the more interesting implications of this will be the political consequences,” says Tyler Johnston of The Midas Project. “I predict that AI will be a very salient topic in 2026, and that incumbent lawmakers that have so far allied with the industry to fight budding regulation may start to face electoral consequences for having done so. Time will tell.”
But advocacy can go both ways. Meta, for example, spearheaded the launch of a California-based super political action committee last year, channelling dark money towards derailing the campaigns of lawmakers serious about regulating AI. It’s just one of several such efforts sponsored by the tech industry’s most ardent boosters.
Industry associations representing Big Tech are also lobbying the Trump administration to demand carve-outs in new trade agreements to allow for the use of copyrighted material to train AI models. Meanwhile, CEOs reportedly plan to spend more than ever on AI in 2026. And the stakes are personal: roughly half of them believe their job is on the line if they can’t usher their company into the promised land of AI benefits.
So, while AI appears to have entered its grassroots backlash era, whether it is just a passing phase remains to be seen.
Read more about artificial intelligence
Berlin anarchists cite AI in attack on key energy infrastructure: Berlin anarchists have sabotaged key energy infrastructure in the city, claiming it as an act of ‘self-defence’ against the planet’s destruction by ‘energy-guzzling’ technologies and the wider fossil fuel economy that underpins their development.
AI interview: Thomas Dekeyser, researcher and film director: On the politics of ‘techno-refusal’, and the lessons that can be learned from a clandestine group of French IT workers who spent the early 1980s sabotaging technological infrastructure.
UK copyright law unfit for protecting creative workers from AI: As the UK government considers its approach to artificial intelligence and copyright, Computer Weekly explores the dynamics at play in copyright markets, and what measures can be taken to ensure that creatives are protected.