House of Lords committee hosts clash between AI enthusiasts and sceptics

In the first session of its inquiry into artificial intelligence and the UK economy, a House of Lords select committee takes contrasting testimony from academic enthusiasts and press sceptics

The House of Lords select committee on artificial intelligence (AI) has started gathering oral evidence, with an opening session that revealed a stark contrast among commentators.

At a session on 10 October 2017 in Westminster, a trio of academics expressed excitement over the prospects of AI for the UK’s economy, while a group of journalists poured sceptical cold water on the current hype around the topic. Committee chairman Tim Clement-Jones presided over the session.

Wendy Hall, professor of computer science at the University of Southampton, began by saying that, while this is the fourth or fifth wave of AI, systems can now be built that learn from the huge amounts of data that more powerful machines can store and process.

“It is already there in financial services and in healthcare, and there is now an acceleration – the genie is out of the bottle, and automation is in every walk of life. The downside is that we need to get a grip of AI because it is happening so fast, and there will be job losses as well as new jobs,” she said.

Hall is the co-chair of a government-commissioned review into AI, which she said will be published in the near future.

She was joined by two other academic witnesses before the committee: Michael Wooldridge, head of department and professor of computer science at the University of Oxford, and Nick Bostrom, director of Oxford’s Future of Humanity Institute.

When asked to comment on opportunities and risks associated with AI over the next decade, Bostrom said: “Ten years may be enough for self-driving cars and for widespread facial recognition software for surveillance and autonomous weapons. The most exciting developments may not be obvious at the outset.”

Opportunities of AI

The second group of witnesses was made up of Sarah O’Connor, employment correspondent for the Financial Times; Rory Cellan-Jones, technology correspondent for the BBC; and Andrew Orlowski, executive editor at The Register.

Cellan-Jones said AI is at the peak of a hype cycle, something he feels partly responsible for because of a famous interview he conducted in 2014 with Stephen Hawking, who has warned of the dangers of AI.

“Five years ago it was big data, three years ago [it was] cloud, and now it is AI. There are extraordinary claims being made. We need to worry about bias in algorithms and the infrastructure to make driverless cars work,” he said.

Oxford’s Wooldridge seemed to be the most enthusiastic of the six witnesses about AI. “Software can now do things it could not do before, like the automatic translation of speech. There are huge opportunities, and the UK is in an unusual position of being at the centre,” he said.

He added that smartwatches and Fitbits gather health data that will enable more personalised healthcare, which he believes can help predict the onset of dementia and heart problems.

He also expressed the view that driverless cars will be the norm in 20 years, and said graduates with an Oxford DPhil in AI have a “reasonable expectation of being millionaires” within a few years of thesis completion.

According to Wooldridge, the government should encourage the UK education system to produce more highly skilled programmers. He also called for more PhDs and a friendlier environment for startups. “We have this, but it is fragile and needs to be nurtured,” he added.

He pointed to international reasons to embrace AI, such as the effect of Brexit, saying Paris and Berlin “would love to take DeepMind”, the University College London startup acquired by Google in 2014. He added that, at present, London has as many AI startups as all other European countries combined.

Hall, like Wooldridge, drew attention to the worldwide context. “Because of the US situation, no one is running science there. The Obama administration [by contrast] produced two very good reports on AI,” she said.

“There is demand for AI skills. AI is the new big salary job – Singapore, China and US companies are all looking for the brightest and the best. China has set the target of being number one in AI, and they have huge amounts of data to train AI [systems] on.”

However, the three academics acknowledged the limitations of AI: current systems cannot mimic the human brain, nor explain the strategies behind machines’ triumphs at Go (as DeepMind’s AlphaGo achieved this year) or chess.

Dose of AI reality

The Register’s Orlowski described himself as the most “contrarian” of the witnesses. He believes the hype started with “opinion formers wanting to talk about employment” implications for middle-class workers.

In a rare departure from technology commentators’ conventional stress on science, technology, engineering and mathematics (Stem) education, he said: “Kids are taught algorithms every week now, but hardly taught music or history at all.”

The BBC’s Cellan-Jones also argued for a “more flexible attitude with respect to what kids learn”, adding that children need creativity as well as digital skills. “Too often, we have too rigid schemes of education,” he said.

Although Hall expressed satisfaction that computer science is taught in schools, she believes there is still a “huge amount to do to upskill sectors in the population at large”.

While Hall said AI will create more jobs than will be lost in the long term, she acknowledged “that doesn’t help truck drivers” in the short term. Jobs that need people with empathy to care for others should be valued more, she added.

The Financial Times’s O’Connor said the main opportunity in AI was a step change in productivity, while the risk is that “all the wealth will go to people who own the AI”.

She advocated neither falling for the hype nor underestimating the technology. Rather than highlighting speculative surveys, she called for the media to get into the “nuts and bolts of what is really happening now”, and for academics to get out and about more, speaking in plain English.

“Government is at the same stage [of figuring things out] as we are. We need to start anticipating, but you can’t predict the future. You can know now which jobs will definitely exist [in health and social care] … and you can predict that there will be more churn, so it becomes about people reskilling and how we equip people to be resilient,” she said.

According to O’Connor, much of the AI media hype stemmed from middle-class fears of job losses. She said much of the legal due diligence for mergers and acquisitions activity, for example, is being automated.

Cellan-Jones added: “It won’t be a case of entire professions being wiped out, but tasks. As for the law, lawyers will find new things to do.”

Risk of exploiting data

As for the ethics of big data exploitation through AI systems, there was a marked difference in emphasis between the two groups.

Wooldridge said his students were exposed to ethical issues through case studies, and Hall said any computer science degree accredited by the British Computer Society is required to teach ethics.

“I’m not convinced there is anything AI specific here, though it is worth reviewing codes of ethics, maybe [with respect to] the insurance industry, but not generally,” said Wooldridge.

Meanwhile, Orlowski said: “Silicon Valley wants to exploit [personal] data. The negation of personal data sovereignty rights does mean no privacy.”

The three reporters said they had next to no engagement with government before testifying before the committee.
