
Why Facebook’s AI termination raises safety concerns

Revelations that researchers at Facebook had to switch off two bots that went rogue have raised questions about the safety of artificial intelligence

This article can also be found in the Premium Editorial Download: Computer Weekly: Formula 1 goes digital.

A researcher at Facebook AI Research (Fair) has admitted that the social media giant’s team decided to unplug two bots after they began communicating in their own English-like language, which was unintelligible to humans.

AI researcher Dhruv Batra told the website Co.Design that the team had developed chatbot agents with the ability to negotiate using what is called an “adversarial network”.

In a Facebook blog, the researchers explained how, in some instances, their chatbots initially feigned interest in a valueless item, only to later “compromise” by conceding it – an effective negotiating tactic that people use regularly. “This behaviour was not programmed by the researchers, but was discovered by the bot as a method of trying to achieve its goals,” said Facebook.
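The tactic described above falls out of simple value maximisation. A minimal sketch of the idea, with invented item names and values (not taken from Facebook’s actual experiment): if an item is worthless to an agent, demanding it up front and “conceding” it later costs the agent nothing while looking like a compromise.

```python
# Hypothetical illustration of the negotiation setup described above:
# two agents split a pool of items, each scoring deals by its own
# (hidden) per-item values. Names and values are invented for this sketch.
pool = {"books": 3, "hats": 2, "balls": 1}
my_values = {"books": 0, "hats": 2, "balls": 6}   # books are worthless to me

def score(split, values):
    """Score a proposed split (the items I receive) under my private values."""
    return sum(values[item] * count for item, count in split.items())

# Opening demand: claim the worthless books too, leaving room to "concede".
opening = {"books": 3, "hats": 2, "balls": 1}
# Counter-offer after conceding the books I never wanted:
settled = {"books": 0, "hats": 2, "balls": 1}

print(score(opening, my_values))  # 10
print(score(settled, my_values))  # 10 - conceding the books costs nothing
```

An agent optimising only its own score discovers this feigned-interest move without it ever being programmed, which is exactly the behaviour the researchers reported.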

One finding from the experiment was that although the chatbots often repeated sentences from their training data, they were also capable of generalising when necessary.

“It is possible that language can be compressed, not just to save characters, but compressed to a form in which it could express a sophisticated thought,” Batra told Co.Design.
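Batra’s point about compression can be illustrated with a toy encoding. The scheme below is invented for this sketch, not taken from Facebook’s bot transcripts: an offer is encoded by repeating each item name once per unit requested, which a counterpart agent can decode unambiguously even though the message is gibberish to a human reader.

```python
# Hypothetical sketch of a repetition-based shorthand that encodes the
# same offer as an English sentence would; the scheme is invented here,
# not taken from Facebook's logs.
def to_shorthand(offer):
    """Encode an offer as each item name repeated once per unit requested."""
    return " ".join(item for item, n in offer.items() for _ in range(n))

def from_shorthand(message):
    """Decode the repetition shorthand back into item counts."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

offer = {"ball": 3, "hat": 1}
msg = to_shorthand(offer)
print(msg)                       # "ball ball ball hat"
assert from_shorthand(msg) == offer
```

The encoding is lossless between the two agents but drops the grammatical scaffolding humans rely on, which is one way a negotiating pair can drift into a private dialect.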

The team at Fair decided to switch off the chatbots because the researchers needed to be able to understand what the bots were saying to each other.

This is a situation that Tesla and SpaceX CEO Elon Musk has previously warned about – the nightmare of AI working against the best interests of humanity.

Speaking at the National Governors Association on 15 July, Musk warned US government delegates that, if left unchecked, AI posed a serious risk to society.

When John Hickenlooper, governor of Colorado, asked about the role government should take, Musk said: “I think one of the roles of government is to ensure the public good and to ensure that dangers to the public are addressed.”

Musk told delegates that the rate of progress in AI meant that milestones experts thought were years away, such as AI beating a human at the Chinese board game Go, had already been reached.

People thought it would take 20 years for a computer to be able to beat a human at Go, he said. “Last year, AlphaGo from Google subsidiary DeepMind absolutely crushed the world’s best player. Now it can play the top 50 Go players and crush them all. The pace of progress is remarkable. You can see robots that can learn to walk from nothing within hours, which is way faster than any biological being.”

Read more about AI breakthroughs

  • Machine intelligence is improving fast – as demonstrated by Google’s DeepMind AlphaGo success – and IT leaders need to plan for its effect on enterprise IT.
  • Chatbots have the potential to reduce the load on contact centres. A recent Capita workshop explored the impact of this and other emerging technology.

Musk said his worst nightmare scenario is deep intelligence in the network. “What harm could deep intelligence in the network do? Well it could start a war by doing fake news and spoofing email accounts and just by manipulating information,” he said.

Giving a hypothetical example, Musk said: “You know there was that second Malaysian airliner that was shot down on the Ukrainian/Russian border. That really amplified tensions between Russia and the EU. If you had an AI whose goal was to amplify the value of stocks, one way to maximise value would be to go long on defence, short on consumer and start a war.

“How would it do that? It could hack into the Malaysian airline servers, route the airline to a warzone then send an anonymous tip that an enemy aircraft is flying overhead right now.”

Doug Ducey, governor of Arizona, asked about the type of regulations that should be in place. Musk said: “The first order of business is to set up a regulatory agency whose first goal would be to gain insight. Once the situation is understood, put in place regulations to ensure public safety. Most of the companies doing AI will squawk and say the regulations will really stifle innovation. But once there is awareness, people will be extremely afraid.”

Less than a week after Musk’s call for AI regulation, Facebook CEO Mark Zuckerberg gave his opinion on the matter during a live webcast, where he took questions from users of the social media site while he was cooking in his garden.

In response to a Facebook user’s question about Musk’s concerns about AI, Zuckerberg said: “I have pretty strong opinions on this. I am really optimistic. You can build things and the world gets better. With AI, especially, I am really optimistic and I think that people who are naysayers and try to drum up these doomsday scenarios are pretty irresponsible. I don’t really understand it.”

Zuckerberg said that within five to 10 years, there will be AI breakthroughs that offer the prospect of improving the quality of people’s lives. Using AI in healthcare and road safety as examples, Zuckerberg said: “AI is already helping to diagnose diseases better, match up drugs to people so they can be treated better.”

Tech for good and bad

Technology can always be used both for good and bad, he added. “You need to be careful how you build it and what you build, but people are arguing to slow down the process of building AI,” he said. “I just find that really questionable.”

Although he did not directly refer to Musk’s call for AI regulation, Zuckerberg said: “If you are arguing against AI, then you are arguing against safer cars that aren’t going to have accidents and you are arguing against having better diagnosis for people when they are sick. I don’t see how, in a good conscience, people can do that.”

Musk responded to Zuckerberg’s remarks on Twitter, where he said: “I’ve talked to Mark about this. His understanding of the subject is limited.”

In the UK, the House of Lords has started looking into AI with a view to how much regulation is needed. The Lords inquiry is looking at the ethical implications of the development and use of AI, the role the government should take, and whether AI needs to be regulated.

Lord Clement-Jones, chairman of the select committee on artificial intelligence, said: “The committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.”

That an AI will use the most efficient means available to meet its objectives is a well-understood concept, and the Facebook chatbots demonstrated it very publicly. Just two months after Facebook’s team published a blog about the breakthrough research and contributed the project’s code to the open source community, the bots were terminated because they had invented a more efficient means of communication.

One of the promises of AI is that it has the ability to trawl masses of data and derive new meaning and value from that data, without needing any sort of guidance from humans. Zuckerberg spoke about some of these benefits to humanity during his webcast cooking session, but Musk believes that power could have unforeseen consequences.
