Technology experts warn of dangers of artificial intelligence arms race

An open letter signed by more than 12,000 technology experts calls for a ban on the use of artificial intelligence (AI) to manage weapons “beyond meaningful human control”

More than 12,000 technology experts, scientists and researchers have signed or endorsed an open letter warning of the dangers of autonomous weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the letter signed by 2,051 artificial intelligence (AI) and robotics researchers.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” said the letter, which has been endorsed by scientist Stephen Hawking, entrepreneur Elon Musk, Apple co-founder Steve Wozniak, Massachusetts Institute of Technology (MIT) professor Noam Chomsky and Google AI head Demis Hassabis.

According to the letter, autonomous weapons that select and engage targets without human intervention – such as armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria – could be feasible in years, not decades.

The letter, presented at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires, calls for a ban on the use of AI to manage weapons “beyond meaningful human control”.

The letter comes as the United Nations considers a ban on certain types of autonomous weapons, reports the BBC.

Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group, the letter said. “We therefore believe that a military AI arms race would not be beneficial for humanity,” wrote the authors of the open letter.

In December 2014, Stephen Hawking told the BBC he was concerned that AI could spell the end of mankind. "Humans, who are limited by slow biological evolution, couldn't compete [with artificial intelligence], and would be superseded," he said.

In October 2014, Elon Musk warned of the risks humanity faces from AI at an MIT Aeronautics and Astronautics symposium in Boston.

Musk, who co-founded PayPal and electric car company Tesla, said international legislation would be needed to prevent the technology being abused in a way that would harm humanity.

"There should be some regulatory oversight at the national and international level, just to make sure we don't do something very foolish," he said.

AI weapons' roots in WWII

World War 2 code-breaker Alan Turing, who was instrumental in cracking Germany's Enigma code and who defined modern computing with his universal Turing machine, also described many of the concepts behind AI.

His Turing Test outlines a series of questions a human could pose to a computer. A machine is said to exhibit human-like behaviour if the answers it gives are indistinguishable from a human response.
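
As a rough, hypothetical sketch of that structure – not drawn from the letter or from Turing's own paper – the exchange can be modelled as a judge reading transcripts from two hidden respondents and guessing which one is the machine. The respondent replies and the naive_judge heuristic below are placeholder assumptions for illustration only.

```python
import random

# Hypothetical stand-ins for the two hidden respondents. In Turing's
# imitation game these would be a real person and a conversational program.
def human_answers(question: str) -> str:
    replies = {
        "What did you have for breakfast?": "Just coffee, I overslept.",
        "What is 7 times 8?": "56.",
    }
    return replies.get(question, "Hmm, let me think about that.")

def machine_answers(question: str) -> str:
    replies = {
        "What did you have for breakfast?": "I do not eat breakfast.",
        "What is 7 times 8?": "56.",
    }
    return replies.get(question, "I am not sure what you mean.")

def run_imitation_game(questions, judge):
    """One round of the test: the judge sees transcripts labelled 'A' and 'B'
    (assignment is random) and guesses which label hides the machine.
    Returns True if the judge identifies the machine correctly."""
    respondents = [("human", human_answers), ("machine", machine_answers)]
    random.shuffle(respondents)
    transcripts = {
        label: [(q, answer(q)) for q in questions]
        for label, (_, answer) in zip("AB", respondents)
    }
    guess = judge(transcripts)  # the judge returns "A" or "B"
    machine_label = "A" if respondents[0][0] == "machine" else "B"
    return guess == machine_label

def naive_judge(transcripts):
    # Toy heuristic: pick the transcript whose phrasing looks more stilted.
    def stiltedness(turns):
        return sum(answer.startswith(("I do not", "I am not"))
                   for _, answer in turns)
    return max(transcripts, key=lambda label: stiltedness(transcripts[label]))

questions = ["What did you have for breakfast?", "What is 7 times 8?"]
print("Machine identified:", run_imitation_game(questions, naive_judge))
```

In a real evaluation the judge would be a person conversing interactively rather than a keyword heuristic; the sketch only mirrors the blind A/B structure of the test described above.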

In June 2014, a computer program called Eugene Goostman became the first to succeed at simulating a human response in the Turing Test.

The engineers behind the program – developed to simulate the responses of a 13-year-old Ukrainian boy – said it could be used to manage social network correspondence and for spell-checking.

Highlighting the positive aspects of other AI research, Microsoft research chief Eric Horvitz said computation had done much for society. “In applications like healthcare it's been incredible. AI will change so many things," he said in a video posted online.

Horvitz – who signed the open letter warning of an AI arms race – said AI research brings a lot of hope and possible benefits, but also some concerns. “I think there are very interesting questions that need to be solved along the way, but I expect largely positive beneficial results coming out of this research,” he said.

Even the authors of the open letter said AI has “great potential to benefit humanity” in many ways, and that the goal of the field should be to do so.
