AI can never be given control over combat decisions, Lords told

Artificial intelligence is technically incapable of parsing the complex contextual factors of combat situations, and likely never will be, according to legal and software experts

Introducing autonomy into weapon systems will increase the unpredictability of armed conflict due to the technical inability of artificial intelligence (AI) algorithms to parse complex contextual factors, Lords have been told.

During the latest session of the House of Lords AI weapons committee – which was set up at the end of January 2023 to explore the ethics of developing and deploying autonomous weapons systems (AWS) – legal experts and software engineers told Lords that current AI systems are not able to assess whether a given military action is appropriate or proportionate, and will likely never be able to.

They added that while AI will never be sufficiently autonomous to take on responsibility for military decisions, even limited autonomy would introduce new problems, in the form of increased unpredictability and greater scope for “automation bias”.

Instead, they argued there must always be “meaningful human control” of AI-powered weapon systems. “Once autonomy is happening, you have brought in another type of actor into the system. Human beings behave in various ways that are typically sensitive to the context that we operate in,” said Laura Nolan, principal software engineer at Stanza Systems and member of the Stop Killer Robots campaign, adding that while humans can easily adapt to each other and the context of a situation, even the most advanced AI systems are currently not able to.

“You have to script out what they should do in what context, and the machine learning components are typically about sensing the environment, sensing a target profile – but the decision is not context-appropriate.”

She added that autonomous weapons also make it “extremely difficult” for operators and commanders to control the location and timing of attacks, and therefore to anticipate whether an attack will be proportionate or whether there will be collateral damage.

“You’re asking the commanders to anticipate the effects of an attack that they do not fully control or cannot fully anticipate,” she said. “A core tenet of complex system theory says that when you have systems with multiple components, multiple actors interacting … the number of potential outcomes grows exponentially. It then becomes very, very difficult to predict those effects.”
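
As a rough, hypothetical illustration of the combinatorics Nolan describes (the figures below are assumed, not taken from the session): if each of n interacting components can behave in any of k ways, the joint outcome space is k to the power n, so adding actors multiplies, rather than adds to, the possibilities.

# Hypothetical illustration (numbers assumed, not from the testimony):
# with k possible behaviours per component, n interacting components
# give k**n joint outcomes -- exponential growth in n.
def joint_outcomes(n_components, behaviours_per_component):
    return behaviours_per_component ** n_components

for n in (2, 5, 10, 20):
    print(n, joint_outcomes(n, behaviours_per_component=3))
# 2 9
# 5 243
# 10 59049
# 20 3486784401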

Automation bias

Then there is the added problem of automation bias (the tendency for humans to place more trust in the outputs of automated systems than in information from other sources), the complete elimination of which Nolan said would be a “pipe dream”.

“It’s an extremely active and long-running area of human factors research on how to reduce automation bias or eliminate it, and we don’t know how,” she said.

On whether an AI-powered weapon would ever be able to autonomously assess the proportionality of combat decisions, Nolan said she believes it is “absolutely impossible” for a machine to make those kinds of determinations, as only a human could assess the overall strategic context.

“You need to know the anticipated strategic military value of the action, and there’s no way that a weapon can know that,” she said. “A weapon is in the field, looking at perhaps some images, some sort of machine learning and perception stuff. It doesn’t know anything. It’s just doing some calculations which don’t really offer any relation to the military value.”


Explaining how AI models mathematically map pixel values to labels when identifying the contents of images – something any AWS would have to do in the field from a live feed – Taniel Yusef, a visiting researcher at Cambridge University’s Centre for the Study of Existential Risk, said that although the underlying maths could be “accurate”, that does not necessarily mean the outcomes will be “correct”.
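
To illustrate the point, here is a deliberately simplified, hypothetical sketch in Python (not Yusef’s actual test): a linear classifier scores a flattened image and applies a softmax. Every arithmetic step is exact, so the maths is “accurate”, yet if the learned weights generalise poorly, the label it returns can still be wrong.

# Hypothetical, minimal classifier (assumed for illustration only):
# pixels are combined into per-label scores, then softmax turns the
# scores into probabilities. The arithmetic is exact; the answer need not be.
import math

def classify(pixels, weights):
    """Return (label, confidence) from a linear score plus softmax."""
    scores = {label: sum(w * p for w, p in zip(ws, pixels))
              for label, ws in weights.items()}
    total = sum(math.exp(s) for s in scores.values())
    probs = {label: math.exp(s) / total for label, s in scores.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

# Toy four-"pixel" image of a cat, and weights learned from unrepresentative data.
cat_image = [0.9, 0.2, 0.8, 0.1]
weights = {"cat": [0.1, 0.5, 0.2, 0.4],
           "dog": [0.7, 0.1, 0.6, 0.3]}

label, confidence = classify(cat_image, weights)
print(label, round(confidence, 3))  # prints "dog 0.684": precisely computed, wrong on the ground

The weights and pixel values here are invented; the point is only that a correctly executed calculation and a confidently wrong answer can coexist.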

Citing a test she ran on an algorithm designed to distinguish between images of cats and dogs, Yusef said that even simple tasks such as this can and do go wrong.

“It decided the cat was a dog,” she said. “What concerns me is, when this happens in the field, you will have people on the ground saying these civilians were killed, and you’ll have a report by the weapon that feeds back, ‘But look at the maths’.

“The maths says it was a target that was a military base … because the maths says so, and we defer to maths a lot because maths is very specific, and the maths will be right. There’s a difference between correct and accurate. There’s a difference between precise and accurate. The maths will be right because it was coded right, but it won’t be right on the ground,” said Yusef.

“So when you ask the question about proportionality and if it’s technically possible [to delegate responsibility to AI], no, it’s not technically possible, because you can’t know the outcome of a system, how it will achieve the goal that you’ve coded, until it’s done it, and you don’t know how it’s got there,” she said.

Peer interjection

When a peer interjected to say humans could make similar mistakes, as “the other day, I saw a dog which I thought was a cat”, Yusef replied: “You didn’t shoot it.”

Christian Enemark, professor of international relations at the University of Southampton, said: “The autonomous discharging of [discrimination and proportionality] to a non-human entity is a philosophical nonsense, arguably.”

He added that it should always be a human agent that makes decisions and takes responsibility for them, and that the general conversation about AWS should be expanded to include other practical areas where such systems could be used.

“Weapons can be used outside of armed conflict, and yet the conversation has been primarily directed towards armed conflict and the law that governs armed conflict, which is international humanitarian law,” he said. “But it need not be so restricted, and arguably it ought to be expanded to include the use of violence by the state, for example, in law enforcement purposes – we need to be thinking about what the implications of AI incorporation might be in that context.

“And once we get out of the context of armed conflict, we’re not restricted to talk about humanitarian law. We’re open now to be inspired and guided by international human rights law as well.”

In its first evidence session, the committee heard that the potential benefits of using AI in weapons systems and military operations should not be conflated with better compliance with international humanitarian law, on the basis that speeding up warfare beyond the ordinary cognitive capabilities of humans would limit people’s ability to prevent an unlawful or unnecessary attack.

The expert witnesses in that session also noted that the deployment of AI weapons could make the use of violence more, rather than less, frequent, because the threshold for resorting to force would be significantly lower.
