Clearly we are going through a phase in the development of artificial intelligence (AI) technology where rationality and reasoned debate are being replaced by science-fiction scaremongering and dystopian dread.
Where once we were told that mainframe computers would destroy back-office admin jobs causing mass unemployment, now we’re told AI will destroy front-office jobs causing mass unemployment. Such forecasts were wrong then, and they will be wrong now.
But the thrill of sci-fi speculation over AI is proving too much for some people. Every development in AI is portrayed as the forerunner to Skynet from the Terminator movies, or something from Blade Runner, Westworld or another vision of a future ruled by robot overlords.
PayPal co-founder and billionaire entrepreneur Elon Musk joined in, warning that AI represents the greatest threat to mankind. Facebook CEO Mark Zuckerberg responded, saying he was optimistic about AI, and attacked “naysayers” who “try to drum up these doomsday scenarios”.
Musk came back, dismissing Zuckerberg as having “limited” understanding of the subject. Of course, they are both right – in their own, perhaps hyperbolic way.
There was similar over-reaction to reports that Facebook turned off an experiment in AI when its bots appeared to create their own language in which to communicate. In reality, that’s not quite what happened – but it’s a better story for sure.
Sadly, we’re probably going to have to get used to this for a few years yet, until AI becomes more mainstream, creates as many jobs as it eliminates, and starts to deliver huge benefits to businesses and society – much like new technologies have for the last 50 years.
There are risks – of course there are. But the same ingenuity that is being applied to developing AI will also be applied to learning how to manage it – to understanding the risks and mitigating them.
Musk himself is doing just that – his latest venture aims to develop a commercially viable brain-computer interface, which he sees as a way to make sure that the processing power in our brains can keep up with developments in machine learning.
We are, in many ways, entering an age where what was once science fiction will become reality. But because sci-fi writers long ago realised that dystopian visions sell more books than utopian dreams, we have become culturally conditioned to the idea that too much new technology is a bad thing.
If you are an IT professional, your job is to understand and explain what AI and other emerging technologies can bring to your business and your customers – and to deliver the enormous potential on offer. It’s down to the tech community to prove that the doom-mongering is just that.