Technology experts warn of dangers of artificial intelligence arms race

An open letter signed by more than 12,000 technology experts calls for a ban on the use of artificial intelligence (AI) to manage weapons “beyond meaningful human control”

More than 12,000 technology experts, scientists and researchers have signed or endorsed an open letter warning of the dangers of autonomous weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the letter signed by 2,051 artificial intelligence (AI) and robotics researchers.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” said the letter, which has been endorsed by scientist Stephen Hawking, entrepreneur Elon Musk, Apple co-founder Steve Wozniak, Massachusetts Institute of Technology (MIT) professor Noam Chomsky and Google AI head Demis Hassabis.

According to the letter, autonomous weapons that select and engage targets without human intervention – such as armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria – could be feasible in years, not decades.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, calls for a ban on the use of AI to manage weapons “beyond meaningful human control”.

The letter comes as the United Nations considers a ban on certain types of autonomous weapons, reports the BBC.

Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group, the letter said. “We therefore believe that a military AI arms race would not be beneficial for humanity,” wrote the authors of the open letter.

In December 2014, Stephen Hawking told the BBC he was concerned that AI could spell the end of mankind. "Humans, who are limited by slow biological evolution, couldn't compete [with artificial intelligence], and would be superseded," he said.

In October 2014, Elon Musk warned of the risks humanity faces from AI at an MIT Aeronautics and Astronautics symposium in Boston.

Musk, who co-founded PayPal and electric car company Tesla, said international legislation would be needed to prevent the technology being abused in a way that would harm humanity.

"There should be some regulatory oversight at the national and international level, just to make sure we don't do something very foolish," he said.

AI weapons' roots in WWII

World War 2 code-breaker Alan Turing, who was instrumental in cracking Germany's Enigma code and defined modern computing with his universal Turing machine, also described many of the concepts behind AI.

His Turing Test outlines a series of questions a human could pose to a computer. A machine is said to exhibit human-like behaviour if the answers it gives are indistinguishable from a human response.

In June 2014, a computer program called Eugene Goostman became the first to succeed at simulating a human response in the Turing Test.

The engineers behind the program – developed to simulate the responses of a 13-year-old Ukrainian boy – said it could be used to manage social network correspondence and spell-checking.


Highlighting the positive aspects of other AI research, Microsoft research chief Eric Horvitz said computation had done much for society. “In applications like healthcare it's been incredible. AI will change so many things," he said in a video posted online.

Horvitz – who signed the open letter warning of an AI arms race – said AI research brings a great deal of hope and potential benefit, but also some concerns. “I think there are very interesting questions that need to be solved along the way, but I expect largely positive beneficial results coming out of this research,” he said.

Even the authors of the open letter said AI has “great potential to benefit humanity” in many ways, and that the goal of the field should be to do so.
