What happens when you put modern processing power, data storage and reliable cloud-based communications behind 40-year-old AI processes, repackage the results to gain protection under US IPR law, and then add a flow of press releases from academics hungry for research funds, start-ups hungry for investment, and consultants and suppliers hungry for clients?
Rory Cellan-Jones has just done an excellent short piece, querying whether AI is just a buzzword, to introduce a Tech Tent talk by Zia Chishti, who has recently built a $2 billion business (Afiniti) using technology that was already mature when I was one of the speakers at the Sperry briefing on AI and Robotics for the UK Technical Press in 1981. Last week we had massive publicity for a claim (by UCL and a UK research team) that AI was equal to experts in diagnosing eye diseases. The use of the same techniques (albeit then for diagnosing cancer) was described by Walter Bodmer and Ed Feigenbaum in 1981. They also described the obstacles to take-up by the medical profession. Those obstacles are still there. They are the prime reason for the slow take-up of other technologies that could have transformed the delivery of remote diagnosis, telemedicine and telecare decades ago.
What has changed over recent decades is that the data files being used to “train” the algorithms are much bigger, the response times from on-line services (alias “the cloud”) are much faster and the internationalisation of development is beginning to bypass cultural and institutional obstacles to take-up. This is illustrated by the US and International Study which the publicity for the UCL work was probably intended to upstage in the eyes of UK funding bodies.
What has not changed is the nature of the threat to employment. And that threat is not to the low paid. The Bank of England Chief Economist recently joined those warning that AI could cut a swathe through the jobs market, hollowing out the middle. Back in 1981 I was tasked with speaking on the impact of AI and Robotics on the world of education and training. I began by making a similar prediction: “the book-learning and machine-like logical skills of most lawyers, accountants and consultants … most of the work of the Inland Revenue, most administrative accountancy, the routine conveyancing that keeps most solicitors in business, the complex diagnoses that elevate the Harley Street consultant above the local general practitioner, can already be done faster and more accurately by computer. In twenty years the local tax office will give an instant response to your query and the general practitioner will no longer refer you to the hospital for analyses and diagnoses but will do them himself with the aid of his surgery expert systems backed by links to national epidemiological and other databases”.
How wrong I was.
Nearly forty years on and those predictions have yet to come true.
It is instructive to consider some of the reasons why:
- the growth of ever more complex and labour-intensive regulatory and compliance regimes, from data protection (the common excuse for not using technology to improve customer service) to defensive medicine (not just by doctors but through the entire NHS). In consequence we have had burgeoning employment opportunities for accountants, lawyers, consultants and compliance staff, while doctors and teachers complain that much of their time is spent on “paperwork” instead of with patients or pupils.
- our failure to train professionals and technicians, let alone users, in the disciplines related to the development, testing and application of AI-based systems. That has only just begun to change. Last week a graduate-level apprenticeship standard for Data Scientists was approved with support from the BCS and over 40 employers. Hopefully that standard, coupled with the standard for technician-level Data Analysts, will help break the logjam. That is particularly important because AI systems are especially vulnerable to GIGO (garbage in leads to garbage out) when the data used by the algorithms has not been subjected to adequate quality control.
- the structures, cultures and statutory/legal positions of those whose roles could, in theory, be largely automated. Thus the NHS is “the world's most successful brand”: a common image masking a service that is neither national, nor holistic, nor standard. It takes time to identify and spread any new technique across the tangled web of organisations, roles and responsibilities which works only because of the personal dedication of its staff. Spreading the use of processes like AI, which threaten the status of those at the top of the main tribes (alias Royal Colleges), is not easy. And medicine is not unique. We have a similar situation with the other main professions.
- security, responsibility and liability: who do you sue when a “decision” by an AI-controlled device or process (car, robot, diagnostic system) leads to death or suffering? Liability for third-party risk is now routinely excluded from cyber insurance policies. Cyber is routinely excluded from all mainstream insurance policies. No one offers product liability insurance for IoT devices. This problem is not new. I was able to attend the 1981 event on Robotics and AI (and spend time preparing my own paper) because I had been tasked with looking at whether The Wellcome Foundation (Foundation, not Trust) should enter the medical technology market (including AI-assisted diagnosis). The issues of liability were a show-stopper.
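The GIGO point above is worth making concrete. A minimal sketch, assuming nothing about any particular AI product: a hypothetical quality-control gate that counts missing values, implausible readings and duplicate records before a dataset is allowed anywhere near a training algorithm. All field names and thresholds here are illustrative, not drawn from any real system.

```python
def quality_report(rows, required_fields, valid_ranges):
    """Count records that would inject 'garbage' into training."""
    problems = {"missing": 0, "out_of_range": 0, "duplicates": 0}
    seen = set()
    for row in rows:
        key = tuple(sorted(row.items(), key=lambda kv: kv[0]))
        if key in seen:
            problems["duplicates"] += 1
            continue  # do not double-count a duplicate's other faults
        seen.add(key)
        if any(row.get(f) is None for f in required_fields):
            problems["missing"] += 1
        for field, (lo, hi) in valid_ranges.items():
            v = row.get(field)
            if v is not None and not (lo <= v <= hi):
                problems["out_of_range"] += 1
                break
    return problems

def fit_for_training(rows, report, max_bad_fraction=0.05):
    """Reject the dataset outright if too many records are suspect."""
    bad = sum(report.values())
    return bad / max(len(rows), 1) <= max_bad_fraction

records = [
    {"age": 42, "pressure": 120},
    {"age": 42, "pressure": 120},   # duplicate record
    {"age": None, "pressure": 80},  # missing value
    {"age": 250, "pressure": 95},   # implausible age
    {"age": 35, "pressure": 110},
]
report = quality_report(records, ["age", "pressure"], {"age": (0, 120)})
print(report)                             # {'missing': 1, 'out_of_range': 1, 'duplicates': 1}
print(fit_for_training(records, report))  # False: 3 of 5 records are suspect
```

The design choice matters more than the code: the gate rejects the whole dataset rather than silently dropping bad rows, which forces the data-quality conversation to happen before, not after, the algorithm has been “trained”.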
But that is not the whole story. Many doctors are technophiles who embrace new technology when allowed. They routinely use AI-based techniques, products and services without recognising or acknowledging them as such. Many (most) accountants and lawyers have long done likewise. Tomorrow came yesterday.
That brings me back to the headline for this blog. Why is there all the current hype about AI?
I have recently engaged in a Facebook exchange on whether the digital world has produced any new concepts in the last 40 years, or whether it has simply been applying more computing power, bigger data files and more reliable communications to old ones and rebranding them. The effects have indeed often been transformative. But, as in so many other areas, our ability to exploit AI is limited by our failure to train technician- and professional-level staff with the necessary disciplines.

I have added AI to the list of skills areas that I will suggest be covered in the round table which I recently agreed to help organise for the Digital Policy Alliance. With the final wind-up of the Tech Partnership there is a need to identify who is willing to work with whom in digital skills partnerships to train those that they (and those in their supply and customer chains) need, in a world in which demand changes faster than curriculum planning can handle and most forecasting techniques are of little, if any, practical value. AI is one of the areas where understanding, let alone take-up, is crippled by the lack of relevant education and training at all levels. The way in which the GDPR is being used to deny access to the data files needed to “train” AI systems is symptomatic of another skills black hole – no training or qualifications, just lots of “marketing seminars” from consultants and lawyers. Contact me via LinkedIn if you are interested in helping organise action – not just “admiring” the problem.