Lords committee urges caution on UK use of autonomous weapons
UK government must ensure proper democratic oversight of its development and use of AI-powered weapon systems, says Lords committee
The UK government must “proceed with caution” when developing and deploying artificial intelligence (AI) for military purposes, particularly autonomous weapons systems (AWS), a House of Lords committee has said.
AWS, also known as lethal autonomous weapons systems (LAWS), are weapons systems that can detect, select and engage targets with little or no human intervention.
Established on 31 January 2023, the Lords Artificial Intelligence in Weapon Systems Committee explored the ethics of developing and deploying such weapons, including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international law.
In a 1 December report detailing its findings, the committee said that while the government promised in its 2022 Defence Strategy to approach military AI in an “ambitious, safe and responsible” way, reality has not lived up to these aspirations, which must now be translated into practice.
“Discussion of autonomous weapons, and to a significant extent AI in general, is bedevilled by the pursuit of agendas and a lack of understanding,” it said. “One of our aims is to provide a factual basis for constructive debate, and frankness and transparency on the part of Government will support this process.”
The committee said it was key that the government seek, establish and retain both public confidence and democratic endorsement in the development and use of AI generally, and particularly in respect of AWS.
To achieve this, the committee made a number of recommendations for how government can increase public understanding and confidence around military AI applications, as well as enhance the decision-making role of Parliament in this area.
Nuclear prohibition
While much of the report focused on improving oversight of military AI, it also called for a specific prohibition on the use of the technology in nuclear command, control and communications, due to the risks of hacking, “poisoned” training data and escalation – whether intentional or accidental – during moments of crisis.
“Artificial intelligence has spread into many areas of life, and defence is no exception. How it could revolutionise defence technology is one of the most controversial uses of AI today,” said committee chair Lord Lisvane, adding that while government recognition of the need for responsible AI in its future defence capabilities is welcome, there are a number of steps it must take to ensure appropriate caution is taken.
“It must embed ethical and legal principles at all stages of design, development and deployment, while achieving public understanding and democratic endorsement,” he continued. “Technology should be used when advantageous, but not at unacceptable cost to the UK’s moral principles.”
Key recommendations of the committee include the government ensuring human control at all stages of an AWS’s lifecycle; adopting an operational, tech-agnostic definition of AWS so that meaningful policy decisions can be made; and appropriately designing procurement processes for an AI-driven world so there is proper accountability.
In terms of public confidence and awareness, the committee said it was “disappointed” that the Ministry of Defence (MoD) is not currently working to understand public attitudes towards AWS, adding that the public must be properly consulted: “It must also ensure ethics are at the centre of its policy, including expanding the role of the Ministry of Defence’s AI Ethics Advisory Committee.”
On improving scrutiny around the UK’s AWS capabilities, it said Parliament’s capacity for oversight depends on the availability of information, on its ability to anticipate issues rather than reacting after the event, and on its ability to hold ministers to account.
“The Government must allow sufficient space in the Parliamentary timetable and provide enough information for Parliament, including its select committees, to scrutinise its policy on AI effectively,” it said. “We naturally understand that elements of policy development may be highly sensitive, but there are established ways of dealing with such information. Arguments of secrecy must not be used to sidestep accountability.”
Read more about military AI
- AI interview: Elke Schwarz, professor of political theory: Elke Schwarz speaks with Computer Weekly about the ethics of military artificial intelligence and the dangers of allowing governments and corporations to push forward without oversight or scrutiny.
- AI can never be given control over combat decisions, Lords told: Artificial intelligence is technically incapable of distinguishing between the complex contextual factors of combat situations, and will likely never be able to, according to legal and software experts.
- UK, US and Australia jointly trial AI-enabled drone swarm: British, American and Australian military organisations have trialled the use of artificial intelligence (AI) in drones in a collaboration designed to drive their adoption of AI-powered military tools.
The committee also urged the government to lead by example on international engagement over military AI and AWS.
“The AI Safety Summit was a welcome initiative, but it did not cover defence. The Government must include AI in AWS in its proclaimed desire to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe’ and to support ‘the good of all through existing international fora and other relevant initiatives’,” it said, referring to the Bletchley Declaration signed by 28 countries during the summit.
“The international community has been debating the regulation of AWS for several years. Outcomes from this debate could be a legally binding treaty or non-binding measures clarifying the application of international humanitarian law – each approach has its advocates. Despite differences about form, the key goal is accelerating efforts to achieve an effective international instrument.”
Responding to the report, Peter Burt of non-governmental organisation Drone Wars UK, which gave evidence to the committee in April 2023, said “the Lords Committee has acknowledged that autonomous weapon systems – ‘killer robots’ – are one of the most controversial uses of AI today, and has unequivocally stated that these systems must remain under human control at all stages of their lifecycle”.
“Unfortunately, the UK government is racing ahead to develop autonomous weapons regardless of the risks they pose,” he said. “Consideration of the ethical hazards and harms from autonomous weapons has lagged far behind research into the weapons themselves.”
Burt added that much of what the UK government has publicly said on the issue of AWS “has been little more than window-dressing”, and that it was “telling that the government’s high-profile international AI Safety Summit explicitly ruled out discussing military applications of AI – the most harmful potential uses imaginable”.
International controls
Burt said the key goal now, as the Lords concluded, is to accelerate efforts to create an “effective international instrument” to control military uses of AI. “If ‘Global Britain’ is to have any meaning, the UK government must now throw its weight behind the growing movement to ban fully autonomous weapon systems,” he said.
The UK Campaign to Stop Killer Robots similarly said that if the government’s commitment to be at the forefront of global technology regulation is serious, “the Government must heed urgent calls for autonomous weapons systems to be prohibited and regulated – and support the development of an international treaty”.
In a report on “emerging military technologies” published in November 2022 by the Congressional Research Service, analysts noted that roughly 30 countries and 165 non-governmental organisations (NGOs) have called for a pre-emptive ban on the use of LAWS due to the ethical concerns surrounding their use, including the potential lack of accountability and inability to comply with international laws around conflict.
In early November 2023, a United Nations (UN) disarmament body approved a draft resolution on the negative impacts of AI-powered weapons systems, saying there is an “urgent need” for international action to address the challenges and concerns they present.
The resolution specifically asked UN secretary-general António Guterres to seek member states’ views on AWS – as well as their views on “ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives, and on the role of humans in the use of force” – and publish them in a “substantive report” that reflects the full range of perspectives given.
The resolution also states that the First Committee will place LAWS on the agenda of its next session, beginning in 2024. As a draft resolution, it is currently with the General Assembly for its consideration and action.
A UK government response to the committee’s report is expected in January 2024.