Nuclear-armed states will leverage AI, digital and cyber technologies to enhance their national security apparatus, according to a new technology-focused report by the Stockholm International Peace Research Institute (SIPRI).
SIPRI’s report, The impact of artificial intelligence on strategic stability and nuclear risk (Euro-Atlantic perspectives), mirrors concerns raised by defence and technology experts that the potential benefits of rapid military adoption of advanced AI systems may prove irresistible to some nuclear-armed states.
One fear is that some of these states may choose to relax their safety systems and reliability standards to maintain or develop a technological edge over their competitors, said report author Vincent Boulanin, a senior researcher at SIPRI.
“States might apply different standards in order to review new technologies and determine whether these are in compliance with international law,” he said. “This remains an area which is open to interpretation. Some states may interpret international laws differently to others. They might authorise development and proceed with the deployment of some form of weapons technology that other states might decide against.”
But the wider deployment of AI and associated technologies by world militaries is not taking place in a legal vacuum, said Boulanin. States are bound by legal checks and balances in the use of such technologies.
“States are obliged by international law to commit to the process known as Article 36,” he said. “This is a weapons review process required by the 1977 Additional Protocol to the 1949 Geneva Conventions. It imposes a practical obligation on states, in the research, development, acquisition or adoption of new weapons, to determine whether those weapons can be employed lawfully in compliance with their international obligations.”
The military applications of AI are covered by this process. As a result, states are required to ensure that any AI technologies and applications used in weapons of war remain lawful and in compliance with international treaties and agreements.
“This means there is a mechanism in place to police the development and use of AI,” said Boulanin. “A positive of this framework is that it is agnostic and doesn’t mention a specific type of technology. Each new system must comply with various protocols and treaties. This applies as much to AI as it does to other kinds of technology.”
The current system of checks and balances for AI usage by militaries risks being eroded should some states feel their rivals are adopting a more flexible approach to interpreting international laws, said Boulanin.
“The current process is based on existing law,” he said. “This generates concerns about the development of AI and its use based on moral principles, for example having a machine select human targets. Such a moral stance is not considered currently because it doesn’t exist in law. There may be a need to expand international law with new principles and norms to properly address all the challenges relating to the use of AI.”
The inherent nature of AI technology is also recognised as problematic: because it is, at its core, a software-based technology, a tangible evaluation of the military capabilities it confers is difficult. Nuclear-armed states could, as a result, misinterpret their adversaries’ capabilities and intentions. In the field of nuclear strategy and deterrence, the SIPRI report said: “The perception of an enemy’s capability matters as much as its actual capability.”
The post-Cold War global strategic landscape is currently in an extended process of being redrawn as a result of a number of technological trends. The underlying dynamics of world power are shifting with the economic, political and strategic rise of China and the military resurgence of Russia, said Dan Smith, director of SIPRI.
“The world is undergoing a fourth industrial revolution, characterised by rapid and converging advances in multiple technologies, including artificial intelligence, robotics, quantum technology, nanotechnology, biotechnology and digital fabrication,” said Smith.
At the same time, AI systems have a number of limitations that make their potential use problematic from ethical, legal and security perspectives.
The SIPRI report highlights the risk that, if not properly programmed or used, AI systems could misinform human decisions and actions while reinforcing existing human biases or creating new ones. AI-associated systems also carry the inherent risk of failing in unpredictable ways or being particularly vulnerable to cyber attacks, the report said. “In the military context, the potential consequences of these limitations could be dramatic,” it added.
The latest problem-solving breakthroughs in AI and machine learning have the capacity to improve the design of autonomous military systems and deliver significant qualitative improvements to a broad range of military applications, said Martin Hagström, deputy research director at the Swedish Defence Research Agency and a specialist in autonomous systems, aeronautics and unmanned aerial vehicles (UAVs).
SIPRI report contributor Hagström said: “Machine learning techniques are especially well suited for data-rich applications where explicit system modelling is difficult. Every system needs a model of its universe – the system’s design space – that describes the environment and the system’s interactions with it.
“Machine learning methods can also be used for pattern recognition. They can be used to identify patterns of ‘normality’ in data and then to detect data patterns that differ from the normal state.”
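The normality-based detection Hagström describes can be illustrated with a minimal sketch. The data and function names below are hypothetical, not from the report, and a simple z-score test stands in for a trained machine learning model:

```python
import statistics

def fit_baseline(samples):
    """Learn a pattern of 'normality' as a mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag readings that deviate from the normal state by more than
    `threshold` standard deviations."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical sensor readings collected during normal operation
normal = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
baseline = fit_baseline(normal)

print(is_anomalous(10.0, baseline))  # within normal variation: False
print(is_anomalous(25.0, baseline))  # differs from the normal state: True
```

In practice the baseline would be learned from far richer data with far more capable models, but the principle is the same: characterise normality first, then detect departures from it.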
Reconnaissance and surveillance applications
Advances in machine learning and AI will also add capability to military systems that are dedicated to information management in reconnaissance and surveillance (R&S) applications as well as cyber defence capabilities.
UAV-deployed modern R&S systems are calibrated to collect vast amounts of information, continuously sending a stream of data through a network to a command analysis centre. Data analysis is recognised as an important application area where machine learning can improve the success of military operations.
Boulanin pointed out that improvements in machine learning and autonomy have the potential to benefit all the key areas of the nuclear deterrence architecture.
Areas primed to benefit include command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR), in addition to nuclear weapon delivery and non-nuclear counterforce operations, such as air defence, cyber security and the physical protection of nuclear assets.
“AI is a very versatile technology that can be applied to a broad range of functions, including cyber warfare,” said Boulanin. “AI can be used for cyber defensive applications and can facilitate the ability to identify new patterns and improve the possibility to detect malware and abnormal activity in networks.”
As part of a cyber platform, AI also has the capacity to enable automated responses to neutralise or block attacks from cyber space, he added.
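A detect-and-respond loop of the kind Boulanin describes can be sketched in miniature. Every name, address and threshold here is hypothetical, and a per-host baseline stands in for a learned model of normal network activity:

```python
def build_baselines(history):
    """history maps host -> list of observed connections per minute;
    record each host's normal peak rate as its baseline."""
    return {host: max(rates) for host, rates in history.items()}

def respond(observed, baselines, tolerance=2.0):
    """Automated response: return the set of hosts to block -- unknown
    hosts, or known hosts exceeding `tolerance` times their normal peak."""
    blocked = set()
    for host, rate in observed.items():
        peak = baselines.get(host)
        if peak is None or rate > tolerance * peak:
            blocked.add(host)
    return blocked

history = {"10.0.0.5": [12, 15, 11], "10.0.0.9": [3, 4, 2]}
observed = {"10.0.0.5": 14, "10.0.0.9": 40, "10.0.0.77": 5}
print(respond(observed, build_baselines(history)))
# 10.0.0.9 (rate spike) and 10.0.0.77 (unknown host) are blocked
```

A real platform would combine many such signals and weigh the cost of false positives before acting automatically, but the sketch shows the basic shape: learned baselines feeding a rule that neutralises or blocks suspect activity.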
“Apart from their defensive uses, AI and machine learning can also support research that helps identify vulnerabilities in networks,” he said. “The fruits of this research can be used for offensive purposes against cyber threats.”