Talk about a fourth industrial revolution driven by artificial intelligence (AI) was called into question in the last session of the House of Lords select committee on AI on 19 December.
David Edgerton, an historian at King’s College London and author of The Shock of the Old: Technology and Global History since 1900, said current talk about a fourth industrial revolution is “just reheated rhetoric from years ago”. He quoted a lengthy extract from Harold Wilson’s famous “white heat of the technological revolution” speech from 1963 that could be transposed, word for word, to 2017.
In the final evidence session of an inquiry that has been going on for several months, the House of Lords artificial intelligence committee considered “the public narratives surrounding artificial intelligence, and what can be done to improve the wider understanding of this emerging technology”.
The session also looked at how AI can be conducted in a way that “engenders public trust, whether lessons can be learned from history, and what can be done to inform consumers about the use of AI in their everyday lives”.
Edgerton was adamant that hyping up artificial intelligence was “ahistorical, crude nonsense”. He said the “public has a right and duty” to reject “new techniques” that were in plentiful supply in an already “complicated world”.
He also said that “talking about things ethically can be a way of not doing so politically or economically” and that, as a country, the UK must stand up to private companies more and rediscover the sense of changing society for the better collectively that was evident in the post-1945 period of the Attlee government.
Edgerton warned that “we need also to be careful about nationalistic approaches” that would put the UK in competition with other countries in terms of “what we can get out of AI”. He noted that the government is looking to get an economic advantage from AI in divergence from the European Union, based on the idea that the UK has some comparative advantage in AI.
He said that when he examined similar claims made for the unique pre-eminence of British biosciences, he found them to be specious, and speculated that the same is probably true of claims made in the field of AI.
“We would do a disservice to the public if we told people that their children need to learn AI techniques for a future that might never happen,” he said. “There was massive over-investment in supersonic [flight] and atomic energy in the past. These enthusiasms for particular techniques are overblown. It is not a cost-free exercise to hype one technique over another. You end up with an over-supply of scientists and engineers for whom there are no jobs, and so they go into the City.”
As for the threat posed to children’s future job prospects, he said: “Brexit will have [a] much larger impact.”
Peter McOwan, vice-principal for public engagement and student enterprise at Queen Mary University of London, also cautioned against over-egging the pudding when it comes to AI’s ability to find “patterns in data”.
But he struck a more upbeat note than Edgerton, citing the inspiration of seeing AI in science fiction. “I’m here because of Star Wars,” he said.
McOwan spoke of the importance of cultural context in assessing AI. In Japan, he said, because of the Shinto religion’s belief that non-human objects have souls, robots are seen as heroes. By contrast, the Judeo-Christian tradition says that only humans can have souls, and robots are seen as evil, as in the Golem legend in Jewish central Europe. “Both views are neither true nor false,” he said. “This shows the power of culture.”
McOwan recommended making more visible the mathematics that lies behind technologies [such as MP3 players] that are making most people’s lives better, and eschewing overly didactic approaches that say: “You must learn this.”
He also pointed out the dangers of “recombinant data”, where data from different sources can be combined with malevolent intent. He gave the example of pleaserobme.com, which showed in 2010 how social media posts and geo-tagging on holiday snaps could lead to a nasty surprise on getting back from holiday.
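The risk McOwan describes can be illustrated with a minimal sketch. The data sources, usernames and helper function below are all hypothetical, invented for illustration; the point is only that two individually innocuous public datasets, once joined, reveal something neither reveals alone:

```python
# Hypothetical illustration of "recombinant data": two harmless-looking
# public datasets, combined to infer whose home is probably empty.

# Source 1: public social media posts (username -> status text)
posts = {
    "alice": "Two weeks in Spain, back on the 30th!",
    "bob": "Great coffee this morning.",
}

# Source 2: geo-tags scraped from public photo metadata (username -> city)
photo_geotags = {
    "alice": "Barcelona",
    "bob": "London",
}

# Source 3: home locations from another public source (username -> city)
home_city = {"alice": "London", "bob": "London"}

def likely_away(user):
    """Flag users whose photos place them away from home while their
    posts publicly announce an absence."""
    abroad = photo_geotags.get(user) != home_city.get(user)
    text = posts.get(user, "").lower()
    announced = "holiday" in text or "back on" in text
    return abroad and announced

at_risk = [u for u in posts if likely_away(u)]
print(at_risk)  # -> ['alice']: the combination, not any single source, exposes her
```

This is essentially what pleaserobme.com demonstrated in 2010, without any of the scraping machinery: the join across sources is trivial once the data is public.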
As for AI’s impact on the job market, McOwan said: “One thing we can be certain about is uncertainty. Being flexible is key. There will be new jobs, and a need for retraining.”
He argued for getting AI researchers in universities to “not just disseminate [research], but to co-create curricula in schools”, above all in primary schools. And he pointed out the danger that young AI academics in UK universities, and their startups, will be lost to Google and Facebook. “We offer fewer hammocks but more freedom and the enjoyment of teaching,” he said.
Read more about AI and government
- The House of Lords select committee on artificial intelligence has issued a call for evidence as it looks into the ethical, social and economic impact of the technology.
- In the first session of its inquiry into artificial intelligence and the UK economy, a House of Lords select committee takes contrasting testimony from academic enthusiasts and press sceptics.
- House of Lords artificial intelligence select committee hears evidence on societal risks of AI, data, life-long learning and the changing role of white-collar workers.
- House of Lords artificial intelligence committee hears evidence from experts about the problems of sharing NHS patient data.
David Spiegelhalter, president of the Royal Statistical Society and Winton professor for the public understanding of risk at the University of Cambridge, expressed himself “deeply suspicious of AI”, having worked in it 30 years ago, when expert systems were all the rage and “utter puff”.
By contrast, machine learning – a subset of AI – “is impressive, with extraordinary feats”, he said, referring to the software in self-driving cars. Spiegelhalter added that the UK has a strong machine learning community.
“Puff stories [about AI] need to be called out, and more engagement with journalists is needed. We need ambassadors to do this,” he said, and referred to a Royal Statistical Society programme of training young statisticians to communicate more effectively with the media, notably the BBC.
Spiegelhalter also said that AI, like microwave ovens and mobile phones before it, is a potential beneficiary of the “affect heuristic”, whereby people dismiss their concerns once they come to like a technology.
Asked to comment on the idea of a kitemark scheme for AI, he said: “This is not like food labelling. It is more about the implanting of the ability to see why something is being recommended. What algorithm is being applied to me at this moment? We could be told to what extent a decision being made [online] is automated.
“Data literacy is important. It is essential for modern citizens, and should be introduced in schools, so that children can identify fake news, for example. It should be part of your armoury as an educated citizen.”
Spiegelhalter added that data governance is “not about individuals protecting [them]selves”, but is a collective enterprise. The ICO [Information Commissioner’s Office] “could be strengthened further”, he said.