
AI interview: Dan McQuillan, critical computing expert

Critical computing expert Dan McQuillan speaks to Computer Weekly about the top-down imposition of artificial intelligence on society, and how AI is a fundamentally political technology shaped by humanity’s most reactionary tendencies


The ways that artificial intelligence (AI) will impact upon our lives are being determined by governments and corporations, with little input from ordinary people, says AI expert Dan McQuillan, who is calling for social changes to resolve this uneven power dynamic and in turn reshape how the technology is approached in the first place.

A lecturer in creative and social computing at Goldsmiths, University of London, and author of Resisting AI: An anti-fascist approach to artificial intelligence, McQuillan argues that AI’s operation does not represent a particularly novel set of problems, but is instead simply the latest manifestation of capitalist society’s rigidly hierarchical organisational structure.

“Part of my attempt to analyse AI is as a kind of radical continuity. Clearly [imposition of AI from above] isn’t in itself a particularly original problem. Pretty much everything else about our lives is also imposed in a top-down, non-participatory way,” he says.

“What primes us for that imposition is our openness to the very idea of a top-down view… that there is a singular monocular vision that understands how things are and is in a superior position to decide what to do about it.”

However, given the socio-technical nature of AI – whereby the technical components are informed by social processes and vice versa – McQuillan highlights the need for social change to halt its imposition from above.

That social change, he argues, must be informed by prefigurative politics – the idea that means cannot be separated from ends, and that any action taken to effect change should therefore be in line with the envisioned goals, rather than reproducing existing social structures or problems.

In a previous conversation with Computer Weekly about the shallow nature of the tech sector’s ethical commitments, McQuillan noted that AI’s capacity to categorise people and assign blame – all on the basis of historically biased data that emphasises correlation rather than any form of causality – means the technology often operates in a way that is strikingly similar to the politics of far-right populism: “I’m not saying AI is fascist, but this technology lends itself to those kinds of solutions.”

He further contends in his book that AI is also underpinned by the logic of austerity (describing AI to Computer Weekly as a “mode of allocation” that comes up with “statistically refined ways to divide an ever smaller pie”) and “necropolitics” (the use of various forms of power, now embedded in the operation of algorithms, to dictate how people live and die).

“AI decides what’s in and what’s out, who gets and who doesn’t get, who is a risk and who isn’t a risk,” he says. “Whatever it’s applied to, that’s just the way AI works – it draws decision boundaries, and what falls within and without particular kinds of classification or identification.

“Because it takes these potentially very superficial or distant correlations, because it datafies and quantifies them, they’re treated as real, even if they are not.”
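To make that point concrete, below is a minimal sketch – our illustration, not McQuillan’s, assuming Python with numpy and scikit-learn, and with invented feature names – of how a classifier fitted on historical data turns a bare correlation into a hard decision boundary.

```python
# Minimal sketch of "drawing a decision boundary": a classifier fitted on
# historical data turns a correlation into a hard in/out line.
# Assumes numpy and scikit-learn; the feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" data: past approvals merely correlate with a
# proxy feature (say, postcode) -- there is no causal story here.
postcode = rng.normal(size=n)
income = rng.normal(size=n)
noise = rng.normal(scale=0.5, size=n)
approved = ((income + 0.8 * postcode + noise) > 0).astype(int)

X = np.column_stack([postcode, income])
model = LogisticRegression().fit(X, approved)

# The fitted weights define a linear boundary: everyone on one side is
# "in", everyone on the other is "out". The proxy correlation is now
# baked into who gets classified as a risk.
print("boundary weights:", model.coef_[0], "intercept:", model.intercept_[0])
print("classified as refused:", int((model.predict(X) == 0).sum()), "of", n)
```

The model has no notion of why the proxy feature correlates with past outcomes; once fitted, the boundary simply sorts everyone into one class or the other.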

Prefiguring the future

In Resisting AI, McQuillan argues that AI is fundamentally a political technology, and should be treated as an “emerging technology of control that might end up being deployed” by fascist or authoritarian regimes.

“The concrete operations of AI are completely entangled with the social matrix around them, and the book argues that the consequences are politically reactionary,” he writes in the introduction. “The net effect of applied AI… is to amplify existing inequalities and injustices, deepening existing divisions on the way to full-on algorithmic authoritarianism.”

McQuillan adds that the current operation of AI and its imposition from above are therefore “absolutely contiguous with the way society is organised”, and that ultimately its power comes from people already being prepped to accept a “single, top-down view”.

For McQuillan, it is vital when developing socio-technical systems like AI to consider means and ends, “so that what you do is consistent with where you’re trying to get to… that’s why I would basically write off AI as we currently know it, because I just don’t see it getting any better [under our current social arrangements]”.

Highlighting the historical continuities between fascism and liberalism, McQuillan questions the popular notion that liberal democracies are an effective bulwark against fascism. The Nazis, for example, took inspiration from the US’s segregationist Jim Crow laws and from the concentration camps built by European colonial powers such as Spain and Britain, and they came to power through electoral means.

He adds there is a real lack of understanding around the role of “regular citizens” in the fascism of the early 20th century, and how liberal political structures tend to prefigure fascist ones.

“It doesn’t happen because the SS turn up, they’re just a kind of niche element of complete sociopaths, of course, but they’re always niche – the real danger is the way that people who self-understand as responsible citizens, and even good people, can end up doing these things or allowing them to happen,” he says.

Relating this directly to the development and deployment of AI as a socio-technical system, McQuillan further notes that AI itself – prefigured by the political and economic imperatives of liberalism – is similarly prone to the logic of fascism.  

“One of the reasons why I’m so dismissive of this idea… that ‘what we really need is good government because that’s the only thing that has the power to sort this AI stuff out’ is because of the continuity between the forms of government that we have, and the forms of government that I think are coming which are clearly more fascistic,” he says.

McQuillan adds that the chances of state regulation reining in the worst abuses of AI are therefore slim, especially in the context of the historical continuities between liberalism and fascism that allowed the latter to take hold.

“The net effect of applied AI… is to amplify existing inequalities and injustices, deepening existing divisions on the way to full-on algorithmic authoritarianism”
Dan McQuillan, Goldsmiths, University of London

“Whatever prefigurative socio-technical arrangements we come up with must be explicitly anti-fascist, in the sense that they are explicitly trying to immunise social relations against the ever-present risk of things moving in that direction… not necessarily just the explicit opposition to fascism when it comes, because by then it’s far too late!”

Towards alternative visions

Riffing off Mark Fisher’s idea of “capitalist realism” – the conception that capitalism is the only viable political and economic system and that there are no possible alternatives – McQuillan posits that AI is starting to be seen in a similar way, in that AI’s predicted dominance is increasingly accepted as an inevitability, and there are no attempts to seriously question its use.

Citing a December 2022 paper by sociologist Barbara Prainsack, titled The roots of neglect: Towards a sociology of non-imagination, McQuillan further notes how our ideas about the future are often shaped by our present imaginations of what is possible, which also has an important prefigurative effect.

“Our imagination of the future runs on railway lines which are already set for us,” he says, adding this has the effect of limiting alternative, more positive visions of the future, especially in rich countries where governments and corporations are at the forefront of pushing AI technologies.

“It’s very difficult to see dynamic movements for alternative futures in the global north. They are around, but they’re in different places in the world. Somewhere like Rojava [in Northern Syria], or with the Zapatistas [in Chiapas, Mexico] and many places in Latin America, I think, have actually got alternative visions about what’s possible; we don’t, generally.”

McQuillan says this general lack of alternative visions is also reflected and prefigured in the “sci-fi narratives we’ve all been softened up with”, citing the fatalistic nihilism of the cyberpunk genre as an example.

“Cyberpunk is an extrapolation of technology in the social relations that we’ve already got, so it’s hardly surprising that it ends up pretty dystopian,” he says, adding that while the sci-fi subgenre is more realistic than others – in that it’s an “extrapolation of the relations we’ve actually got and not what people think we’ve got, like an operating democracy” – there is a dire need for more positive visions to set new tracks.

Pointing to the nascent “solarpunk” genre – which specifically rejects cyberpunk’s dystopian pessimism by depicting sustainable futures based on collectivist and ecological approaches to social organisation and technology – McQuillan says it offers “a positive punk energy” that prioritises DIY problem solving.

He says it also uses technology in such a way that it is “very much subsumed” to a wider set of positive social values.

“One of the drivers in solarpunk, that I read out of it anyway, is that it’s got a fundamentally relational ontology; in other words, that we all depend on each other, that we’re all related [and interconnected] to one another and to non-human beings,” he says, adding that “it’s very similar to most indigenous worldviews”, which see the environment and nature as something that should be respected and related to, rather than dominated and controlled.

In line with this, and in contrast to what he calls the “reactionary science” of AI – whereby “everything is reducible, mappable and therefore controllable” – McQuillan points to the cybernetics of Stafford Beer as a potential, practical way forward.

Because it emphasises the need for autonomy and dynamism while acknowledging the complexity involved in many areas of human life (thus embracing the idea that not everything is knowable), McQuillan suggests the adoption of Beerian cybernetics could prefigure a number of social and technological alternatives.

“The other thing that strikes me about cybernetics is it’s not about a specific type of technology, it’s more about organisational flows, if you like, that can be non-computational and computational,” he says. “It’s that idea of riding the wave a bit, but having different levels in which you need to do that.”

He adds: “You need to deal with the local stuff – if you don’t deal with that, nothing matters – but then that doesn’t work by itself. You’ve got to have coordination of larger areas, natural resources, whatever, so you nest your coordination.”
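As a loose illustration of that nested coordination – a sketch of the idea rather than Beer’s actual viable system model, with all names hypothetical – each unit in the structure below resolves what it can locally and escalates only what exceeds its remit.

```python
# Loose sketch (our illustration, not Beer's formal viable system model)
# of "nested coordination": each unit handles what it can locally and
# escalates only what exceeds its scope. All names are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Unit:
    name: str
    remit: set[str] = field(default_factory=set)  # issues this unit can resolve itself
    parent: Optional[Unit] = None

    def handle(self, issue: str) -> str:
        if issue in self.remit:
            return f"{issue}: resolved locally by {self.name}"
        if self.parent:
            return self.parent.handle(issue)  # escalate one nesting level up
        return f"{issue}: beyond every level of coordination"

# Two nested levels: a neighbourhood council inside a regional assembly.
region = Unit("regional assembly", {"water allocation"})
council = Unit("neighbourhood council", {"housing repairs"}, parent=region)

print(council.handle("housing repairs"))   # stays local
print(council.handle("water allocation"))  # nests up to the region
```

The design mirrors the quote: local units act first, and coordination is layered on top only where an issue genuinely spans a wider scope.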

Somewhere between the Luddites and the Lucas Plan

Although the term Luddite is used today as shorthand for someone wary or critical of new technologies for no good reason, the historical origins of the term are very different.

While workplace sabotage occurred sporadically throughout English history during various disputes between workers and owners, the Luddites (weavers and textile workers) represented a systematic and organised approach to machine breaking, beginning in 1811 in response to the unilateral imposition of new technologies (mechanised looms and knitting frames) by a new and growing class of industrialists.

Luddism was therefore specifically about protecting workers’ jobs, pay and conditions from the negative impacts of mechanisation.

“The way to tackle the problems of AI is to do stuff that AI doesn’t do, so it’s about collectivising things, rather than individualising them down to the molecular level, which is what AI likes to do”
Dan McQuillan, Goldsmiths, University of London

Fast forward to January 1976, when workers at Lucas Aerospace published the Lucas Plan in response to announcements from management that thousands of manufacturing jobs were at risk from industrial restructuring, international competition and technological change.

The plan proposed that workers themselves should establish control over the firm’s output, so that they could put their valuable engineering skills towards the design and manufacture of new, socially useful technologies instead of continuing to fulfil military contracts for the British government, which accounted for about half its output.

For McQuillan, the collective response to AI in 2023 should fall somewhere between the endeavours of the textile workers and aerospace engineers, in that there should be a mixture of direct action against AI as we know it, and participatory social projects to envision alternative uses of the technology.

However, he notes it can be hard for many without “positive experiences of real alternatives” to “believe that people would act that way, would support each other in that way, would dream in that way… They’ve never experienced the excitement or the energy of those things that can be unlocked.”

To solve this, McQuillan notes that people’s ideas change through action: “This can’t be just a matter of discourse. It can’t be just a matter of words. We want to put things into practice.

“Most of the putting into practice would hopefully be on the more positive side, on the more solarpunk side, so that needs to happen. But then action always involves pushing back against that which you don’t want to see now.”

On the “more positive” side, McQuillan says this could involve using technology in community or social projects to demonstrate a positive alternative in a way that engages and enthuses people.

On the other, it could involve direct action against, for example, new datacentres being built in areas with water access issues, to highlight the fact that AI’s operation depends on environmentally detrimental physical infrastructure that is wholly owned by private entities, rather than controlled for their own benefit by the communities where it is built.

McQuillan also advocates for self-organising in workplaces (including occupations if necessary), as well as the formation of citizens’ assemblies or juries to rein in or control the use of AI in specific domains – such as the provision of housing or welfare services – so that people can challenge AI themselves in lieu of formal state enforcement.

“The way to tackle the problems of AI is to do stuff that AI doesn’t do, so it’s about collectivising things, rather than individualising them down to the molecular level, which is what AI likes to do,” he says. 
