Artificial intelligence (AI) invaded the toy shops last Christmas. But, although products such as Sony's Aibo robot dogs are impressive, the connection with business AI is tenuous.
This is no surprise - of all computing fields, AI displays the biggest gulf between laboratory work and practical business application. In a recent Institute of Electrical and Electronics Engineers publication, AI researchers were asked to contribute their thoughts on "AI's greatest trends and controversies".
In 17 pages, there is hardly a mention of real-world applications. This theoretical orientation has caused difficulties in the workplace.
Real AI is rarely about duplicating human intelligence; it usually concentrates on a small subset of intelligent behaviour. But one AI discipline, expert systems, held out the promise of replacing the most expensive and uncontrollable business resource - human expertise - with machine intelligence, which could be called on 24 hours a day, at practically no cost.
Interviewing and observation were used to extract an expert's reasoning. This was then built into a series of rules modelling the expert's behaviour. Specialist programming languages, such as Lisp and Prolog, were devised to handle such rules and lists effectively. But expert systems did not deliver.
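The rule-based approach can be sketched in a few lines. This is a minimal illustration, not a real expert system: each rule pairs a set of conditions with a conclusion, and the engine repeatedly fires any rule whose conditions are satisfied (so-called forward chaining). All the fact and rule names here are invented for the example.

```python
# Rules pair a condition set (facts that must hold) with a conclusion
# that is added to working memory when the rule fires.
rules = [
    ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
    ({"engine_cranks", "no_fuel"}, "check_fuel_pump"),
    ({"check_ignition_coil"}, "test_coil_resistance"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are met, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"engine_cranks", "no_spark"}, rules)
```

Real systems added certainty factors, conflict resolution and explanation facilities on top of this skeleton, but the difficulty was never the engine - it was filling the rule base, as the paragraphs below explain.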
Few experts could clarify how they reached their conclusions. The unconscious content of the process was too significant, the experts wanted to protect their expertise, and the sheer volume of knowledge involved made it difficult to capture. To make matters worse, captured expertise is dead expertise. The world moves on, and the expert moves on, while the captured expertise becomes increasingly useless.
The decline of expert systems was clear in the way they were renamed knowledge-based systems, then rule-based systems. Practical applications were often limited to simplistic processes with little value (a classic was automating a company's expense claim procedures), or impractically tight subsets of knowledge, such as diagnosing a single illness. In the end, expert systems spawned knowledge management, but this involves the management of large quantities of data and information with hardly any intelligence.
As it became obvious that expert systems had limited business impact, other technologies became available. The next to be presented to the commercial market was neural networks, based on the mechanisms of the brain.
In the brain, a huge number of nodes, neurons, are linked by electrical pathways. These pathways are not static. They form a self-patterning system, a concept that can be illustrated with a tray of wax and a hot liquid. Pour the liquid down an angled tray of wax. To begin with the liquid flows in many directions, the rivulets directed chaotically (in the mathematical sense) by tiny imperfections in the wax. But the heat of the liquid melts the wax, forming channels.
Once there are channels, more liquid flows down these easy routes, reinforcing them. Similarly, links between neurons are easier to use the more they are employed - they learn.
Neural networks are computer programs that operate in a similar fashion. A set of logical nodes is connected by weighted links. When numbers are fed into the front end of the system, a new series of values emerges at the back end, produced by combining the weights attached to each route. These new numbers are predictions, based on the information input.
The results are compared with the actual outcome, and the discrepancies are fed back to modify the factors between nodes. Such systems can be used to predict events that depend strongly on recent history, and in pattern recognition applications like optical character recognition.
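The loop described above - predict, compare with the actual outcome, feed the discrepancy back to adjust the weights - can be shown with a deliberately tiny network. This sketch uses a single layer of weights trained with the classic delta rule; real networks stack many layers and use backpropagation, and the sample data here is invented for illustration.

```python
import random

def predict(weights, inputs):
    """Forward pass: combine each input with the weight on its link."""
    return sum(w * x for w, x in zip(weights, inputs))

def train(samples, rate=0.1, epochs=200):
    random.seed(0)
    n = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    for _ in range(epochs):
        for inputs, target in samples:
            # Compare the prediction with the actual outcome...
            error = target - predict(weights, inputs)
            # ...and feed the discrepancy back to modify the weights.
            weights = [w + rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Hypothetical training data: the target is 2*x + 1
# (the second input is a constant 1.0, acting as a bias term).
samples = [([x, 1.0], 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w = train(samples)
```

After training, the weights have "learned" the underlying relationship well enough to predict outcomes for inputs the network has never seen.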
Although general-purpose neural network packages were sold in the 1990s, successful business applications have tended to be purpose-built around a particular requirement, often in engineering.
In parallel with the development of neural networks, case-based reasoning has become a leading practical AI technique. Here, past experiences are stored, each containing a verbal description of the problem and the solution that proved effective. When a new problem is encountered, these cases are studied, and the best match (perhaps combining elements of several cases) is found.
The key to the effectiveness of case-based reasoning is in the matching process. In basic systems this involves simple word matching in the description of the problem, while more sophisticated systems look for semantic context. As yet, by comparison with human experience, this is very crude, and does not allow for creativity, which requires the use of connections that cannot be deduced from past cases.
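The simple word-matching approach used by basic systems amounts to very little code. This sketch scores each stored case by how many words its problem description shares with the new problem; the case base is invented for illustration, and production systems would add stemming, synonym handling and the semantic matching mentioned above.

```python
# A toy case base: (problem description, solution that worked).
case_base = [
    ("printer prints blank pages", "replace the toner cartridge"),
    ("printer jams on thick paper", "use the rear paper tray"),
    ("screen flickers at high resolution", "update the display driver"),
]

def best_match(problem, cases):
    """Return the stored case whose description shares the most words
    with the new problem - simple word matching, nothing semantic."""
    words = set(problem.lower().split())
    def overlap(case):
        return len(words & set(case[0].lower().split()))
    return max(cases, key=overlap)

description, solution = best_match("my printer produces blank pages", case_base)
```

Note how crude this is: "produces" and "prints" do not match at all, and the query succeeds only because "printer", "blank" and "pages" happen to recur - exactly the limitation the semantic systems try to address.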
But case-based reasoning is promising, with more common-sense appeal to customers than the more esoteric techniques.
Looking to the future, the AI application most likely to have commercial success is the intelligent agent. The concept was embodied in Knowledge Navigator, Apple's 1987 concept video looking two decades into the future. In it, a computer-animated personal assistant pulls together information from a wide range of sources based on a fuzzy request.
Since Knowledge Navigator, the reality of the Internet has made such intelligent information searching and collating of even greater value. To date, few agents have intelligence, but there is an opportunity to combine rules, case-based reasoning and learning processes to make an agent that will be much more effective than any search engine.
Thanks to the lack of deliverables, corporate AI groups have withered. A colleague who spent 10 years in the field has now moved into training. He observes that the only way to make money from artificial intelligence is to teach people about it. But this will not last forever. The development of robotics will produce both direct applications and AI spin-offs.
There are many areas where true AI can help, from speech and text recognition to computer vision. However, for the moment, AI remains at the bleeding edge of information technology.
The bluffer's guide to artificial intelligence