“AI is very good at looking for ‘needle in the haystack’ problems and changes in patterns,” he told the 2017 RSA Conference in San Francisco.
There are a number of companies that could interpose themselves in ways to look at IoT data flows, do data analysis and give alerts, he said.
“You could imagine a botnet alert network that says thousands of baby monitors [IP connected cameras] have all woken up at once and sent pictures to one site, which has never happened before,” said Schmidt.
“That is a highly detectable phenomenon by a proper network analysis system, and there are plenty of systems that provide proxy caches that could do that. Google would be one of the companies able to do that, but there are plenty of people in the ecosystem who could provide these kinds of alerts, which would be valuable.”
This would provide what security people are usually looking for – a heads-up to focus their attention and apply traditional investigative methods to something that is out of the ordinary, said Schmidt.
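The kind of alert Schmidt describes amounts to simple fan-in anomaly detection: flag any destination that suddenly receives traffic from far more distinct devices than its historical baseline. A minimal sketch, assuming illustrative flow records, baselines and thresholds (none of these come from Schmidt's remarks):

```python
def detect_anomalies(flows, baseline, threshold=10.0):
    """Flag destinations whose device fan-in far exceeds the baseline.

    flows: list of (device_id, destination) pairs seen in one time window.
    baseline: dict mapping destination -> typical count of distinct devices.
    threshold: multiple of the baseline that triggers an alert.
    """
    devices_per_dest = {}
    for device, dest in flows:
        devices_per_dest.setdefault(dest, set()).add(device)

    alerts = []
    for dest, devices in devices_per_dest.items():
        usual = baseline.get(dest, 1)
        if len(devices) > usual * threshold:
            alerts.append((dest, len(devices), usual))
    return alerts

# Thousands of cameras all "waking up" and sending to one site,
# against a baseline of roughly two devices per window:
flows = [(f"cam-{i}", "203.0.113.7") for i in range(5000)]
print(detect_anomalies(flows, baseline={"203.0.113.7": 2}))
# → [('203.0.113.7', 5000, 2)]
```

A network analysis system or proxy cache operator would run this kind of check continuously over aggregated flow logs, which is why Schmidt argues the phenomenon is "highly detectable".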
However, he had little more to say about AI and security, focusing instead on the evolution of the technology, where it stands now, and stripping away some of the fiction that surrounds it.
Schmidt said his favourite scenario is the one in which general AI is developed; the world enters an era where intelligence is increasingly non-biological; AI becomes smarter than humans; humans battle the AI and decide to turn off the computers; but the AI is so smart that it hops from computer to computer, eventually wins, and humanity is destroyed.
“But this is a movie script,” he said. “It is not true. It is also based on an enormous number of assumptions.”
Similarly, he said, people have questioned how, once AI is intelligent enough to modify itself, we can verify that the modification is correct.
“This is a reasonable intellectual question, well worth a discussion, but we are nowhere near this in real life,” he said. “We are still in the baby stages of doing conceptual learning and deconstructing concepts.”
While this and questions about how to ensure AI systems have human values are important philosophical issues, Schmidt said these are not issues that will be faced soon.
A more immediate concern is to ensure the integrity of data used for training AI systems, he said. “These systems depend on data and they are trained against that data, but, assuming a perfect algorithm, if the data has been manipulated in some way, you will not get the outcome you expect.
“So it is very important for these systems to understand that they are advisory – they help you to understand something, but ultimately you want humans to be in charge of these things.”
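Schmidt's point about manipulated training data can be illustrated with a toy example (the scenario and numbers are hypothetical, not his): even a correctly implemented learning algorithm produces wrong answers when an attacker flips labels in its training set.

```python
def train_centroid(points, labels):
    # Nearest-centroid "model": average the training points of each class.
    sums, counts = {}, {}
    for p, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + p
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, p):
    # Classify a point by its nearest class centroid.
    return min(model, key=lambda y: abs(p - model[y]))

points = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8]
clean_labels = ["low", "low", "low", "high", "high", "high"]
model = train_centroid(points, clean_labels)
print(predict(model, 1.1))   # → low

# Poisoned copy of the same data: an attacker flips half the labels.
poisoned_labels = ["high", "high", "low", "high", "low", "low"]
bad_model = train_centroid(points, poisoned_labels)
print(predict(bad_model, 1.1))   # → high, despite the same input
```

The algorithm is unchanged between the two runs; only the data integrity differs, which is why Schmidt frames these systems as advisory.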
Focusing on the current state of AI technology, Schmidt said “enormous strides” have been made in computer vision, which is now better than human vision in most scenarios and will have “huge impacts” on self-driving cars and medical care, for example.
“We have result after result coming now that if you give the same picture to a computer and a group of doctors, you get a better diagnosis from the computer,” he said, because the computer sees a million pictures and a human will potentially see 10,000 in their lifetime.
“It is just better training,” said Schmidt. “Computers do not get bored, which means they can be trained 24 hours a day, so this vision result, which is similar to speech in the way it is done mathematically, means that systems will have human-like qualities with respect to vision and speech, which is not the same thing as AI.”
But this means that problems such as traffic accidents and inaccurate medical diagnoses will become much rarer, which is clearly positive, he said.
“I will stake my reputation that this will be the real narrative over the next five years, and although it is hard to know what will happen after that, there are many startups that are trying to take medical and transportation processes and automate them in a straightforward way,” said Schmidt.