
Icelandic researcher advocates for an overhaul of artificial intelligence

While technologies such as ChatGPT provide a glimpse of what may lie ahead, they are still a long way from real intelligence

Artificial intelligence (AI) might be getting a lot of press at the moment, but let’s cut through the hype and look at where the field really is today.

Currently, all AIs are what is known as weak AI – that is, they can only solve problems in one domain. Strong AI – artificial intelligence that can solve more than one kind of problem – is still years away. Take, for example, a system that can play chess better than any human being. That same system doesn’t have the faintest idea about how to play poker – a game with far simpler rules. 

Furthermore, unsupervised learning is still in its infancy. Practical algorithms still rely overwhelmingly on supervised learning, in which they learn from labelled data. And they do so during a dedicated training phase, rather than learning as they go along. 
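As a concrete reminder of what that pattern looks like in practice, here is a minimal sketch using scikit-learn and a toy dataset (both are illustrative choices, not anything tied to the research discussed below): every example arrives with a label, and all learning happens in a single, dedicated training step, after which the model is frozen.

```python
# Minimal sketch of the supervised, train-then-deploy pattern described above.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)              # every example comes pre-labelled
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                    # the dedicated learning phase
print(model.score(X_test, y_test))             # after this point, the model no longer learns
```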

Icelandic researcher Kristinn Thórisson, a professor at Reykjavik University and founder and director of the Icelandic Institute for Intelligent Machines (IIIM), has been saying for years that the current approach to AI will never lead to real machine intelligence.

Thórisson has worked for 30 years on artificial general intelligence projects and applied AI projects, both in academia and industry. He predicts that over the next three decades, a new paradigm will take over, replacing artificial neural networks with methodologies that more closely approximate real intelligence. The result will be more trustworthy systems that transform industry and society. 

Reykjavik University hosted The Third International Workshop on Self-Supervised Learning in July 2022, and the papers presented were published in the Proceedings of Machine Learning Research. “The proceedings from the event have a lot of really good work in one place,” says Thórisson. “I think the ideas in these papers will turn out to be central to the way AI evolves over the next 30 years.” 

One interesting article included in the proceedings was authored by Thórisson himself, along with Henry Minsky, co-founder and chief technology officer of Leela AI. The article, titled The future of AI research: Ten defeasible ‘axioms of intelligence’, calls for less emphasis on traditional computer science methodologies and mathematics, arguing that a new methodology should be developed with a greater focus on cognitive science. The authors make the point that real intelligence includes the unification of causal relations, reasoning and cognitive development. 

What are the key attributes of future AI? 

According to Thórisson and Minsky, autonomous general learning, or general self-supervised learning, involves creating knowledge structures about unfamiliar phenomena or real-world objects without assistance. An AI needs to represent cause and effect and use that as a key component in its reasoning processes. When confronted with a new phenomenon, the AI should be able to develop a hypothesis about causal relationships. 
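To make the idea of hypothesising causal relationships more tangible, here is a toy sketch (an illustration only, not AERA or any published algorithm): from raw sequences of observed events, the agent proposes that one event causes another when the second reliably follows the first, treating each proposal as a hypothesis to be tested rather than a fact.

```python
# Toy sketch: propose causal hypotheses from observed event sequences.
# If event A is consistently followed by event B, hypothesise "A causes B".
from collections import defaultdict

def propose_causal_hypotheses(episodes, min_support=0.8):
    followed = defaultdict(int)   # (a, b) -> times b directly followed a
    occurred = defaultdict(int)   # a -> times a occurred with a successor
    for episode in episodes:
        for a, b in zip(episode, episode[1:]):
            occurred[a] += 1
            followed[(a, b)] += 1
    # Keep pairs that co-occur often enough to be worth testing further
    return {(a, b): followed[(a, b)] / occurred[a]
            for (a, b) in followed
            if followed[(a, b)] / occurred[a] >= min_support}

episodes = [
    ["flip switch", "light on", "read book"],
    ["flip switch", "light on"],
    ["enter room", "flip switch", "light on"],
]
print(propose_causal_hypotheses(episodes))
# {('flip switch', 'light on'): 1.0, ...} -- hypotheses to be tested, not facts
```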

“The most important ingredients for future general machine intelligence are the ability to handle novelty, the ability to manage experience autonomously, and the ability to represent causal-effect relationships”
Kristinn Thórisson, Reykjavik University and Icelandic Institute for Intelligent Machines

An AI must be capable of learning incrementally, modifying its existing knowledge based on new information. Cumulative learning involves reasoning-based acquisition of increasingly useful information about how things work. Models should be improved when new evidence becomes available. This requires hypothesis generation – a topic for future AI research. 

What is already known, though, is that hypotheses should be formed through a reasoning process that includes deduction, abduction, induction and analogy. And an AI should keep track of arguments for and against a hypothesis. Another important requirement of a future AI is that it should model what is not known at the time of planning. It should then be able to bring useful knowledge to a task at any point in time.  
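One minimal way to picture that bookkeeping – purely as an illustration, with the reasoning modes themselves left out – is a hypothesis record that accumulates evidence for and against a claim and revises its confidence as new observations arrive:

```python
# Toy sketch of a hypothesis whose supporting and contradicting evidence is
# tracked explicitly, so the system can revise its confidence as it learns.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    statement: str
    evidence_for: list = field(default_factory=list)
    evidence_against: list = field(default_factory=list)

    def update(self, observation, supports: bool):
        (self.evidence_for if supports else self.evidence_against).append(observation)

    def confidence(self) -> float:
        total = len(self.evidence_for) + len(self.evidence_against)
        return len(self.evidence_for) / total if total else 0.5  # 0.5 = no evidence yet

h = Hypothesis("flipping the switch causes the light to turn on")
h.update("light came on after flip", supports=True)
h.update("light came on after flip", supports=True)
h.update("light stayed off after flip (bulb removed)", supports=False)
print(round(h.confidence(), 2))   # 0.67 -- belief is revised, not hard-coded
```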

“The most important ingredients for future general machine intelligence are the ability to handle novelty, the ability to manage experience autonomously, and the ability to represent causal-effect relationships,” says Thórisson. “A constructivist approach to AI already provides a useful starting point for addressing the first two points – handling novelty and managing experience autonomously. However, we still have a long way to go before systems can model causality autonomously, in an effective and efficient manner.” 

How do we get there from here? 

The current generation of artificial intelligence systems uses a constructionist approach, which Thórisson says has resulted in a diverse set of isolated solutions to relatively small problems.  

“AI systems require significantly more complex integration than has been attempted to date, especially when transversal functions are involved, such as attention and learning,” he says. “The only way to address the challenge is to replace top-down architectural development methodologies with self-organising architectures that rely on self-generated code. We call this ‘constructivist AI’.”
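The flavour of that idea can be hinted at in a few lines. The toy below is purely illustrative – it is not AERA, Leela AI’s technology or any published constructivist method – but it shows the basic move of an agent extending its own rule base with handlers it generates at runtime, rather than relying only on behaviour fixed at design time.

```python
# Toy illustration of an agent that extends itself when it meets novelty.
class SelfExtendingAgent:
    def __init__(self):
        self.rules = {}  # observation pattern -> handler function

    def handle(self, observation):
        if observation not in self.rules:
            # No existing rule covers this input, so the agent writes itself a
            # new handler. A closure stands in for what a real constructivist
            # system would do: synthesise executable knowledge about the novelty.
            self.rules[observation] = lambda obs=observation: f"learned response to {obs}"
        return self.rules[observation]()

agent = SelfExtendingAgent()
print(agent.handle("unexpected sensor reading"))  # new rule created and used
print(len(agent.rules))                           # the agent's rule base has grown
```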

Both Thórisson and Minsky are working on algorithms based on these principles. Thórisson demonstrated an approach to constructivist AI with a system, known as AERA, that autonomously learned how to take part in spoken multimodal interviews by observing humans conduct a TV-style interview. The system autonomously expands its capabilities through self-reconfiguration.  

AERA, which has been under development for 15 years, learns highly complex tasks incrementally. Starting with only two pages of seed code for bootstrapping and running on a regular desktop computer, the AERA agent produced semantically meaningful actions, grammatically correct utterances, real-time coordination and turn-taking – with no prior training and after only 20 hours of observation. The knowledge it produced consisted of over 100 pages of executable code that the system wrote on its own, allowing it to take on the role of either interviewer or interviewee.  

Minsky is taking a very similar approach to intelligent systems at Leela AI, with a focus on industrial automation. The company’s neuro-symbolic technology can track the activities of people and machines on a factory floor and produce actionable information about their operations.  

According to Minsky and Thórisson, the current focus on deep neural networks is hampering progress in the field. “Being exclusively dependent on statistical representations – even when trained on data that includes causal information – deep neural networks cannot reliably separate spurious correlation from causally dependent correlation,” says Thórisson. “As a result, they cannot tell you when they are making things up. Such systems cannot be fully trusted.”  
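The failure mode Thórisson describes is easy to reproduce with synthetic data. In the hedged sketch below (scikit-learn and invented numbers, unrelated to any particular deployed system), a purely statistical learner leans on a feature that merely co-occurs with the label during training; when that accidental relationship stops holding, accuracy collapses, and nothing in the model signals that anything has gone wrong.

```python
# Synthetic demonstration of spurious vs. genuinely informative correlation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)
informative = y + rng.normal(0, 0.5, n)          # noisily related to the true label
spurious_train = y + rng.normal(0, 0.1, n)       # cleaner, but only by accident
spurious_test = (1 - y) + rng.normal(0, 0.1, n)  # the accident stops holding

model = LogisticRegression(max_iter=1000).fit(np.c_[informative, spurious_train], y)
print("accuracy while the accident holds:", model.score(np.c_[informative, spurious_train], y))
print("accuracy when it stops holding:  ", model.score(np.c_[informative, spurious_test], y))
# The second score collapses: nothing in the statistics told the model which
# correlation it could actually rely on.
```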
