Despite recent advancements, deep learning, which has its roots in neuroscience, is not the dramatic breakthrough in artificial intelligence that it is sometimes portrayed to be.
That was the key point made by Tomaso Poggio, a renowned professor in MIT’s Department of Brain and Cognitive Sciences and its artificial intelligence laboratory, at the EmTech Asia conference in Singapore this week.
Poggio argued that many of the concepts behind deep learning were developed in earlier decades, and that for artificial intelligence to achieve the next breakthrough, we would have to solve the problem of understanding how the human brain works. “That goes beyond deep learning,” he said.
Machine learning and deep learning, for example, are still based on the premise that machines learn from large datasets to solve a problem, answer a question or perform a task. A human, by contrast, does not need to see even dozens of images to learn what an object is for the first time.
“There must be the ability to synthesise programmes on the fly based on a set of small routines,” Poggio said, adding that his team will be exploring this research area using neuroscience and cognitive tools over the next five years.
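To make the idea of synthesising programmes from small routines concrete, here is a minimal, hypothetical sketch: a search over compositions of tiny primitive functions until one maps an input to a desired output. The primitives (`inc`, `double`, `square`) and the task are invented for illustration and are not drawn from Poggio's work.

```python
from itertools import product

# Toy primitives standing in for a library of "small routines".
primitives = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(inp, out, max_len=3):
    """Return the first sequence of primitive names mapping inp to out,
    searching compositions of increasing length, or None if none is found."""
    for length in range(1, max_len + 1):
        for names in product(primitives, repeat=length):
            value = inp
            for name in names:
                value = primitives[name](value)
            if value == out:
                return list(names)
    return None

print(synthesize(3, 16))  # finds a composition mapping 3 -> 16
```

Real program-synthesis systems use far more sophisticated search and learned guidance, but the enumerate-and-test loop captures the basic premise: assemble a new programme on demand from reusable pieces rather than learning each task from scratch.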
Besides the research community, private sector companies such as Google are also looking into the possibility of having machines learn from smaller datasets, or even from a single example.
“If you’ve seen something just once in the morning, you’ll definitely be able to recognise it again, but machines have a hard time doing that,” said Oriol Vinyals, research scientist at Google DeepMind.
Applied in real-world settings, Vinyals said, this would allow a robot, for example, to process its environment and perform an action without all of its possible actions having to be codified in advance.
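One common framing of the one-shot recognition Vinyals describes is to store a single example's feature vector and match new inputs against it by similarity. The sketch below assumes a hypothetical feature space (the vectors are hand-made stand-ins for embeddings a real model would produce) and is not a description of DeepMind's approach.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class OneShotRecognizer:
    """Remembers a single example per label and matches by similarity."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.memory = {}  # label -> the one stored feature vector

    def learn(self, label, features):
        # One example is enough: store it once.
        self.memory[label] = np.asarray(features, dtype=float)

    def recognize(self, features):
        # Return the best-matching label, or None if nothing is close enough.
        features = np.asarray(features, dtype=float)
        best_label, best_score = None, -1.0
        for label, stored in self.memory.items():
            score = cosine_similarity(stored, features)
            if score > best_score:
                best_label, best_score = label, score
        return best_label if best_score >= self.threshold else None

# Usage: learn "mug" from a single example, then test two new inputs.
rec = OneShotRecognizer()
rec.learn("mug", [1.0, 0.2, 0.0])
print(rec.recognize([0.95, 0.25, 0.05]))  # similar vector -> "mug"
print(rec.recognize([0.0, 1.0, 0.0]))     # dissimilar vector -> None
```

The hard part in practice is not the matching step but learning a feature space in which one example generalises, which is precisely what makes one-shot learning an open research problem.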