The House of Lords Artificial Intelligence Select Committee’s report, AI in the UK: Ready, willing and able?, was published in April 2018.
The Department for Business, Energy & Industrial Strategy (BEIS), the Department for Digital, Culture, Media & Sport (DCMS), and the Office for Artificial Intelligence have now welcomed it, in the name of Greg Clark, the secretary of state for BEIS.
The response mentions the government’s commitment to an “artificial intelligence sector deal” and “missions” as part of its trumpeted industrial strategy.
“The first mission, announced by the prime minister on 21 May 2018, seeks to make the UK a world leader in the use of data, AI and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030,” said the response document.
On the topic of the ethical use of AI – identified as a possible métier for the UK – the document said: “The Office for Artificial Intelligence, the future Centre for Data Ethics and Innovation, and the AI Council will work together to create data trusts. Data trusts will ensure that the infrastructure is in place, that data governance is implemented ethically, and in such a way that prioritises the safety and security of data and the public.”
In relation to possible economic opportunities, the government said in the response document: “Data trusts could help SMEs pool resources to rationalise access to data and work together to pre-process data – allowing them to compete with more established firms. In so doing, a healthier AI and data business ecosystem could be fostered.”
In response to a Lords recommendation that “the Centre for Data Ethics and Innovation investigate the open banking model, and other data portability initiatives”, the government said it would “launch a ‘smart data review’ to identify the lessons learned from existing data portability initiatives, and consider how the approach of open banking can be implemented in other regulated markets”.
Beware emphasis on algorithm transparency, says government
However, in response to the Lords’ emphasis on an ethical need for technical transparency in AI systems, and an imperative for such transparency in “safety-critical scenarios”, the government came down more on the side of a non-regulatory approach.
“Government believes that transparency of algorithms is important, but for development of AI an overemphasis on transparency may be both a deterrent and, in some cases such as deep learning, prohibitively difficult. Such considerations need to be balanced against positive impacts [the] use of AI brings,” it said.
The Lords report also registered unease at the power of big technology companies in the UK, such as Google and Facebook. “The increasing consolidation of power and influence by a select few risks damaging the continuation, and development, of the UK’s thriving home-grown AI startup sector,” the report said.
“We urge the government, and the Competition and Markets Authority [CMA], to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”
In response, the government said: “The CMA is building a new technology team to strengthen its ability to keep pace with the use of algorithms, artificial intelligence and big data in business. The team will be made up of data scientists, computer experts and economists. A new position of chief data and digital insights officer has been created.”
As part of its response to a recommendation that university education in AI be bolstered, the government pointed to the introduction of an industrial master’s programme for AI. “We are kicking off work with the help of the British Computer Society, supported by the Alan Turing Institute, and in partnership with universities and major businesses such as Ocado, Amazon, Rolls-Royce, McKinsey’s Quantum Black, and others,” it said.
Meanwhile, the government has also separately announced a slew of senior AI appointments, including that of Google DeepMind’s co-founder and chief executive, Demis Hassabis, as an adviser to the newly minted Office for Artificial Intelligence.
Tabitha Goldstaub, co-founder of AI company CognitionX, has been appointed as the chair of the AI Council, described as a “new industry body”. Wendy Hall, from the University of Southampton, has been confirmed as the skills champion for AI in the UK, while Goldstaub is the business champion.
Hall was the co-author of a government-sponsored, but independent, report, Growing the artificial intelligence industry in the UK, published in October 2017. It said the UK’s artificial intelligence sector would squander a historic lead unless government, industry and academia were to come together to give it cohesive support.
Read more about artificial intelligence and the UK government
- Government promises £300m extra funding as part of £1bn AI sector deal.
- In the final session of its inquiry into AI and the UK economy, a House of Lords select committee played host to scepticism about the so-called fourth industrial revolution.
- The UK government is starting to put bodies behind its AI commentary as two departments announce job roles.