The rise of general-purpose AI and its threat to humanity

A socially aware, general-purpose artificial intelligence in the form of a dog could be the ideal form factor for taking over the world


Autonomous cars, automated trading and smart cities are among the great promises of machine intelligence. But artificial intelligence (AI) promises much more, including being man’s best friend.

BigDog was a robot developed in 2008, funded by Darpa and the US Army Research Laboratory’s RCTA programme. It was designed to walk and climb – skills that humans master instinctively at an early age, but which cannot easily be programmed into a machine. Instead, researchers apply artificial intelligence techniques to enable such robots to ‘learn’.

Imagine a computer that can think better than humans; that can make profound cognitive decisions at lightning speed. Such a machine could better serve mankind. Or would it?

“AI that can run 1,000 times faster than humans can earn 1,000 times more than people,” according to Stuart Armstrong, research fellow at the Future of Humanity Institute. “It can make 100 copies of itself.” 

This ability to think fast and make copies of itself is a potent combination – one that could have a profound effect on humanity. “With human-level intelligence, plus the ability to copy, it could hack the whole internet,” he warned.

And if this general-purpose AI had a body, said Armstrong, “it could walk into a bar and walk out with all the girls or guys”.

But far from being a super hacker or master pickup artist, Armstrong argues that if such machines were to become powerful, the world would resemble their preferences. For instance, he said they could boost their own algorithms.

Beware of extreme machine intelligence

A socially aware general-purpose AI could scan the web for information and, by reading human facial expressions, it could deliver targeted speeches better than any political leader, said Armstrong.

Taken to the extreme, he warned that it is difficult to specify a goal that is safe: “If it were programmed to prevent all human suffering, the solution could be to kill all humans”.


In his book Smarter Than Us, Armstrong lays down a few points that humans need to consider about general-purpose AI: “Never trust an entirely super-intelligent AI. If it doesn't have your best interests at heart, it'll find a way to obey all its promises while still destroying you.”

Such general-purpose artificial intelligence is still a long way off. Armstrong’s closest estimate of when general-purpose artificial intelligence will be developed falls somewhere between five and 150 years’ time. But it is a hot topic, and London-based DeepMind recently demonstrated how a machine used reinforcement learning to take what it had learned from playing a single Atari 2600 game and apply it to other computer games.
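DeepMind’s Atari agents paired reinforcement learning with deep neural networks; the underlying idea can be illustrated with the much simpler tabular form of Q-learning. The four-state “corridor” environment below is an invented toy, not DeepMind’s setup: the agent learns, from reward alone, that moving right reaches the goal.

```python
import random

random.seed(0)  # deterministic run for this sketch

# Toy "corridor" environment (invented for illustration): states 0..3,
# the episode ends when the agent reaches the goal state 3.
N_STATES = 4
GOAL = N_STATES - 1
ACTIONS = [-1, +1]                     # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Reward 1 only on reaching the goal; 0 everywhere else."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the table, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # The standard Q-learning update
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy heads right from every non-goal state
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

DeepMind’s advance was to replace the table `q` with a deep neural network reading raw screen pixels, so the same update rule scales to Atari-sized state spaces.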

Strictly speaking, DeepMind is not general AI, according to Armstrong. It is narrow AI – a form of artificial intelligence that is able to do tasks people once said would not be possible without general-purpose AI. IBM's Watson, which won US game show Jeopardy, and Google Car are both applications of narrow AI.

Gartner distinguished analyst Steve Prentice said narrow AI is a machine that does one task particularly well: “The variables have to be limited, and it follows a set of rules.” For instance, he said an autonomous vehicle could be programmed in a way that could prevent cycle road deaths.
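Prentice’s description – limited variables, a fixed set of rules – can be sketched in a few lines. The sensor fields and braking thresholds below are invented for illustration; no real vehicle system is this simple.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    cyclist_detected: bool  # from a (hypothetical) vision system
    distance_m: float       # range to the cyclist in metres
    speed_kmh: float        # the car's current speed

def braking_decision(r: SensorReading) -> str:
    """Narrow AI in miniature: fixed rules, one task, no learning."""
    if not r.cyclist_detected:
        return "maintain"
    if r.distance_m < 10 or (r.distance_m < 25 and r.speed_kmh > 50):
        return "emergency_brake"
    if r.distance_m < 25:
        return "slow_down"
    return "maintain"

print(braking_decision(SensorReading(True, 8.0, 30.0)))  # cyclist very close
```

The system excels at its single task precisely because the variables are bounded; ask it anything outside that envelope and it has no answer at all, which is what separates it from general-purpose AI.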

Robots could rule the world

In the Gartner report When smart things rule the world, Prentice argues the case for CIOs to start thinking about the business impact of smart machines that exhibit AI behaviour. In the report, he notes: “Advanced capabilities afforded by artificial intelligence (AI) will enhance today’s smart devices to display goal-seeking and self-learning behaviour rather than a simple sense and respond.” Prentice believes these “artificial agents” will work together with or on behalf of humans to optimise business outcomes through an ecosystem or digital marketplace.

For CIOs, Prentice regards autonomous business as a logical extension of current automated processes and services to increase efficiency and productivity rather than simply to replace a human workforce. “For most people, AI is slanted to what you see on-screen. But from a business perspective, we are so far away from this in reality,” he said.

In fact, he believes there is no reason why a super-intelligent AI machine could not act like a CEO or manager, directing humans to do tasks where creativity or manual dexterity is important.

This may sound like a plot from Channel 4 sci-fi drama Humans, but, as Armstrong observes in Smarter than us, “Even if the AI is nominally under human control, even if we can reprogram it or order it around, such theoretical powers will be useless in practice. This is because the AI will eventually be able to predict any move we make and could spend a lot of effort manipulating those who have ‘control’ over it.”

So back to man’s best friend. Armstrong is not afraid of the metal-clad robot with an Austrian accent that Arnold Schwarzenegger depicted in The Terminator. For him, a super-intelligent machine taking the form of a dog and biting the proverbial hand that feeds it is a far more plausible way in which machines could eventually rule the world.


Join the conversation


As long as we do not know what makes us feel and want and need anything, we can't create a machine with wants and needs either – and why would we?

A machine can register damage or power loss, but to it that is nothing more than another number in its memory core, processed for no purpose other than the one given by us programmers.

I mean an AI does not need or want anything, hence it cannot rebel. You need to have real needs and wants – pain, suffering, the real deal – to develop your own will.

We want machines to serve our wants and needs, not machines that would constantly nag us about their needs and wants, making us serve them instead.

So why even make them? They have no purpose; they serve no need – unless we indeed do have a need to self-destruct.
Automobiles take away our freedom, cell phones fry our fragile brains, social security tracks us, income tax is a federal scheme to control our movement, radar is used to track us, radar will kill us all, nurses are paid to put tracking chips in babies, have you seen those poisonous jet trails, crop circles are a sign of alien abduction, the Mayan calendar marks the time of the end times, civilization ends at the stroke of midnight on December 31 1999.

And AI portends the rise of evil machines which, as everyone knows, want nothing more than to eat our chips. No, really, I saw it in a movie once so it must be true.

Seems there's a little Luddite in us all, like it or not. We're doomed, we're doomed. Or not, as the case may be.
I find AI fascinating, and the whole threat to humanity, robots-will-rule-the-world thing definitely makes for an enjoyable movie plot.

AI has come a long way since Turing's day, but in real applications it is still laughable when compared to real human behaviors. There is a long way to go. No AI machine is going to become CEO during this lifetime.
Let's be clear: there is no real AI created by humans (yet).
AI technologies such as neural networks, fuzzy logic and genetic algorithms more or less successfully simulate one or a few activities that a human or animal mind is capable of. But they still have zero self-awareness and zero cognition.

Although...

"No computer has ever been designed that is ever aware of what it's doing; but most of the time, people aren't either."
~ Marvin Minsky
