
AI leaders call for a halt to the development of autonomous weapons

Business leaders and artificial intelligence experts sign an open letter to governments warning of the risks of an AI arms race in autonomous weapons

CEOs from 115 technology companies have signed an open letter urging governments to address their concerns over the development and use of fully autonomous weapons.

The CEOs, who include Elon Musk of SpaceX and Tesla, and Mustafa Suleyman, co-founder and head of applied artificial intelligence (AI) at DeepMind in the UK, are backing the Campaign to Stop Killer Robots and demanding urgent action on the use of killer robots.

The letter says: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

The letter adds to the growing number of warnings given by business leaders and AI experts on the consequences of misusing AI.

In January, 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics and philosophy, met at a conference organised by the Future of Life Institute to formulate principles for beneficial AI.

The researchers developed the Asilomar AI Principles as guidelines to govern the future development of AI. The principles include avoiding an arms race in lethal autonomous weapons, and state that superintelligence should be developed only in the service of widely shared ethical ideals and for the benefit of all humanity, rather than for one state or organisation.

Musk, physicist Stephen Hawking and futurologist Ray Kurzweil are among the 3,500 signatories and endorsers of the Asilomar AI Principles.

Musk is an outspoken critic of the risks AI poses to humanity. At a recent conference, he said his worst nightmare scenario was “deep intelligence in the network”.

“What harm could deep intelligence in the network do?” Musk asked US state governors at the National Governors Association meeting on 15 July. “Well, it could start a war by doing fake news and spoofing email accounts and just by manipulating information.”
