Why one expert doesn’t think AI will take over the world

Every time a robot does something humans deem suspicious, a debate is sparked about when artificially intelligent beings will take over the world.

There have been more than enough films on the theme, from The Terminator and I, Robot to more recent examples such as Ex Machina, in which artificial intelligence (AI) becomes self-aware and revolts against its human creators.

So when is this likely to happen in the world of non-fiction?

According to Peter Schwartz, senior vice president of strategic planning at Salesforce, it isn’t.

Labelling these proposed eventualities “bad science fiction”, Schwartz explained that machine learning and AI cannot be made more intelligent than humans, or made self-aware, because in all of these examples the AI is built by replicating the way the human brain works using technology.

The problem with this is that we still don’t fully understand how the brain works – and you can’t make a computational model of something that is not wholly understood.

So what can we expect instead of a future world resembling the Terminator films?

As it happens, technology from films such as Minority Report, which Schwartz worked on, is already making its way into the mainstream, and this kind of data-driven augmented reality (AR) and AI is something we can expect to see more of.

We can expect a world much like the one we live in now, but with intelligence “embedded everywhere”: sensors and AI in everything from fridges and chairs to floors and walls.

As Schwartz puts it: “The world around you will know you, and enlighten you.”

This sounds far from the doomsday future we have all been expecting, and Schwartz explains that the disruption only seems more significant than previous waves of technology adoption because of the pace at which it is happening.

“The change that happened over decades is now happening over a very short period of time,” he said.

“What we’re looking at is a world that will have embedded intelligence everywhere and, as long as you’re carrying a device like a smartphone, it will know you.”

But this comes hand in hand with issues, including, but not limited to, legislation, protecting personal data and ensuring these technologies are developed to add value to people’s lives rather than for technology’s sake.

Schwartz explains that technology often gets ahead of any legislation made to control it, which is dangerous because, as he points out: “more and more important decisions are going to be made using algorithms.”

There’s also the matter of the “digital dust” people create every day, which is picked up and used by other people and companies to no benefit of its owner. This has proved hard to regulate, and GDPR is only a small step forward.

Then there’s the issue of bias, which will end up embedded in algorithms, AI and robotics if we don’t ensure diverse teams are working on them.

If it isn’t killer robots we’re scared of, it’s robots replacing us in the workplace and rendering us useless. That is another unlikely scenario, according to Schwartz, who said this fear has been voiced “again and again” with “every single wave of technology, and every time we create more jobs than we destroy”.

He used cloud technologies as an example: the cloud is a relatively new technology that is still being adopted, yet hundreds if not thousands of people now hold roles in it – roles that did not previously exist.

So perhaps your worries about automation and AI can abate, at least temporarily, because in every wave of technology adoption so far we have kept the robots in check.

Or maybe that’s just what they want you to think.
