The rise of general-purpose AI and its threat to humanity

Socially aware general-purpose artificial intelligence in the form of a dog could be the ideal form factor to take over the world

Autonomous cars, automated trading and smart cities are among the great promises of machine intelligence. But artificial intelligence (AI) promises much more, including being man’s best friend.

BigDog (pictured above) was a robot developed in 2008, funded by Darpa and the US Army Research Laboratory’s RCTA programme. BigDog was designed to walk and climb – skills that humans master instinctively at an early age, but which cannot easily be programmed into a machine. Instead, researchers apply artificial intelligence techniques to enable such robots to ‘learn’.

Imagine a computer that can think better than humans; that can make profound cognitive decisions at lightning speed. Such a machine could better serve mankind. Or would it?

“AI that can run 1,000 times faster than humans can earn 1,000 times more than people,” according to Stuart Armstrong, research fellow at the Future of Humanity Institute. “It can make 100 copies of itself.” 

This ability to think fast and make copies of itself is a potent combination – one that could have a profound effect on humanity. “With human-level intelligence, plus the ability to copy, it could hack the whole internet,” he warned.

And if this general-purpose AI had a body, said Armstrong, “it could walk into a bar and walk out with all the girls or guys”.

But far from being a super hacker or master pickup artist, Armstrong argues that if such machines were to become powerful, the world would come to reflect their preferences. For instance, he said they could improve their own algorithms.

Beware of extreme machine intelligence

A socially aware general-purpose AI could scan the web for information and, by reading human facial expressions, it could deliver targeted speeches better than any political leader, said Armstrong.

Taken to the extreme, he warned that it is difficult to specify a goal that is safe: “If it were programmed to prevent all human suffering, the solution could be to kill all humans”.

In his book Smarter Than Us, Armstrong lays down a few points that humans need to consider about general-purpose AI: “Never trust an entirely super-intelligent AI. If it doesn't have your best interests at heart, it'll find a way to obey all its promises while still destroying you.”

Such general-purpose artificial intelligence is still a long way off. Armstrong’s closest estimate of when it will be developed falls somewhere between five and 150 years from now. But it is a hot topic, and London-based DeepMind recently demonstrated how a machine used reinforcement learning to take what it had learned from playing a single Atari 2600 game and apply it to other computer games.
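
To make the reinforcement learning idea more concrete, the sketch below implements tabular Q-learning on a toy “walk to the goal” task in Python. This is a minimal illustration only: the corridor environment, reward values and learning parameters are invented for the example, and DeepMind’s actual Atari system used deep neural networks trained on raw game screens rather than a small lookup table.

    # Minimal tabular Q-learning sketch (illustrative toy example, not DeepMind's method).
    # The agent starts in the middle of a short corridor and learns, by trial and
    # error, that stepping right towards the goal state earns a reward.
    import random

    N_STATES = 5          # corridor positions 0..4; state 4 is the goal
    ACTIONS = [-1, +1]    # step left or step right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

    # Q-table: estimated future reward for each (state, action) pair
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(2000):
        state = 2                                # start in the middle of the corridor
        while state != N_STATES - 1:             # until the goal is reached
            if random.random() < EPSILON:        # explore occasionally...
                action = random.choice(ACTIONS)
            else:                                # ...otherwise exploit current estimates
                best = max(Q[(state, a)] for a in ACTIONS)
                action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])

            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0

            # Q-learning update: move the estimate towards reward + discounted best future value
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # After training, the greedy policy should prefer stepping right (+1) from every state
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

The same trial-and-error principle, scaled up with neural networks in place of the lookup table, is what allows a single learning algorithm to be applied across many different games.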

Strictly speaking, DeepMind's system is not general AI, according to Armstrong. It is narrow AI – a form of artificial intelligence that is able to do tasks people once said would not be possible without general-purpose AI. IBM's Watson, which won the US game show Jeopardy, and Google's self-driving car are both applications of narrow AI.

Gartner distinguished analyst Steve Prentice said narrow AI is a machine that does one task particularly well: “The variables have to be limited, and it follows a set of rules.” For instance, he said an autonomous vehicle could be programmed in a way that could help prevent cyclist road deaths.
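
As a loose sketch of what “limited variables following a set of rules” can look like, the hypothetical snippet below encodes a hard-coded braking rule for a cyclist detected ahead. The function name, sensor inputs and thresholds are all invented for illustration; a real autonomous-vehicle stack is vastly more complex.

    # Hypothetical rule-based decision for a narrow-AI driving controller.
    # All names and thresholds here are illustrative, not taken from any real vehicle system.

    def choose_braking(cyclist_detected: bool, distance_m: float, speed_kmh: float) -> str:
        """Return a braking command from a small, fixed rule set."""
        if not cyclist_detected:
            return "maintain_speed"
        # Limited variables and explicit rules: the hallmark of narrow AI
        if distance_m < 10 or speed_kmh > 40:
            return "emergency_brake"
        if distance_m < 25:
            return "slow_down"
        return "maintain_speed"

    print(choose_braking(cyclist_detected=True, distance_m=8.0, speed_kmh=30.0))  # emergency_brake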

Robots could rule the world

In the Gartner report When smart things rule the world, Prentice argues the case for CIOs to start thinking about the business impact of smart machines that exhibit AI behaviour. In the report, he notes: “Advanced capabilities afforded by artificial intelligence (AI) will enhance today’s smart devices to display goal-seeking and self-learning behaviour rather than a simple sense and respond.” Prentice believes these “artificial agents” will work together with or on behalf of humans to optimise business outcomes through an ecosystem or digital marketplace.

For CIOs, Prentice regards autonomous business as a logical extension of current automated processes and services to increase efficiency and productivity rather than simply to replace a human workforce. “For most people, AI is slanted to what you see on-screen. But from a business perspective, we are so far away from this in reality,” he said.

In fact, he believes there is no reason why a super-intelligent AI machine could not act like a CEO or manager, directing humans to do tasks where creativity or manual dexterity is important.

This may sound like a plot from Channel 4 sci-fi drama Humans, but, as Armstrong observes in Smarter Than Us, “Even if the AI is nominally under human control, even if we can reprogram it or order it around, such theoretical powers will be useless in practice. This is because the AI will eventually be able to predict any move we make and could spend a lot of effort manipulating those who have ‘control’ over it.”

So back to man’s best friend. Armstrong is not afraid of the metal-clad robot with an Austrian accent that Arnold Schwarzenegger depicted in Terminator. For him, a super-intelligent machine taking the form of a dog and biting the proverbial hand that feeds it is a far more plausible way in which machines could eventually rule the world.
