
AI must develop for the greater good and cut government admin

A group of experts explored the issues of UK policy and funding in artificial intelligence at a recent seminar in London. We report on the discussion

Artificial intelligence (AI) should be developed for the common good and the benefit of humanity, and it should operate on principles of fairness. This was one of the 10 commandments presented by the Lord Bishop Peter van Gelder, a member of the House of Lords select committee on artificial intelligence, at the recent Westminster eForum on AI and robotics innovation.

Speakers at the eForum included representatives from TechUK, PwC, the Engineering and Physical Sciences Research Council (EPSRC), Cisco, the Department for Digital, Culture, Media and Sport (DCMS) and the Department for Business, Energy and Industrial Strategy.

Sue Daley, head of programme for cloud, data, analytics and AI at TechUK, said: “We see huge economic potential in AI, robotics and robotic process automation. The UK stands at the brink of AI change.”

But there is a challenge, both from a funding perspective and in how AI is sold to society. Philip Nelson, CEO of the EPSRC, said: “We only spend 1% of GDP on R&D, compared with the 2.4% OECD [Organisation for Economic Co-operation and Development] average.”

Rob McCargow, AI programme leader at PwC, highlighted research showing that jobs will be affected by the technology. “AI can affect people’s lives,” he said. Although there seems to have been some movement towards cross-industry collaboration during 2017, said McCargow, “there has been a lack of public engagement”.

Matt Houlihan, director of government and corporate affairs at Cisco, said: “The industry should take much more leadership to ensure people have confidence in technology.”

According to Nesta CEO Geoff Mulgan, the AI industry can learn a lot from genomics and human in vitro fertilisation (IVF), which both brought in new models that challenged existing views on ethics and morality. But these scientific breakthroughs also managed to bring the public along with them. Commenting on how the AI industry has engaged the general public to raise awareness, Mulgan said: “It has been slow.”

Gila Sacks, director for digital and tech policy at the DCMS, said: “We need to engage people in a conversation. AI technology is a tool and we need to decide how we use it and how AI can help people do the things they actually care about.”

Role of diversity

Speakers at the Westminster eForum also discussed the importance of diversity. Wendy Hall, professor of computer science at Southampton University, said “bias in, bias out” is equivalent to the adage of “garbage in, garbage out” in data management, where poor-quality data at the point of collection leads to poor data output.

For TechUK, diversity needs to span the whole of business, said Daley. “We need more diversity in the AI community and across management to apply AI within business,” she said.

Some businesses risk losing out because they are simply too far behind in digitising their internal processes, said Daley. “Lots of organisations are still using paper invoices and are not on their digital journey,” she added. This limits their ability to make the most of AI in their business.

Hall described today’s AI as the fourth wave, which has developed thanks to algorithms that can now make the most of available processing power to analyse vast pools of data. But she pointed out that current data legislation may limit the ability of startups to exploit this data.

“This wave of AI is all about how we deal with data,” she said. “China doesn’t have GDPR [the EU’s General Data Protection Regulation] and there are billions of people in China and they will be able to hone their AI algorithms better than we can. How do small companies get access to data to hone their algorithms?

“Our main recommendation is that there should be a legal and ethical framework to enable access to confidential government data, such as health records.”

Read more about AI policy

  • A panel discussion at the World Economic Forum has highlighted the need to widen the artificial intelligence developer skills pool.
  • Following TfL's decision against Uber, we investigate the role of professionalism and ethics in software development.

TechUK’s Daley added: “Getting data right for AI needs data to flow from the UK to Europe and the rest of the world in a post-Brexit world.”

Chris Melhuish, director of the Bristol Robotics Laboratory, argued that there needs to be a culture change in academia, which has traditionally been focused solely on pure research, to encourage the creation of new businesses to exploit technologies.

Nesta’s Mulgan said that beyond establishing policies and funding to support an AI-empowered society, governments should also look at how AI can tackle their own inefficiencies. “Most of the investment in AI in the last 50 years has been from government, but there are no systematic programmes to use AI in government,” he said.

Southampton University’s Hall added: “Government needs to be a shining light in AI adoption. It is all about modernising the approach to IT, and government needs to be a model of best practice.”

The discussions and presentations at the eForum explored topics that experts have been investigating for a number of years. There was a sense during the seminar that AI will augment jobs, rather than replace them. But AI could completely remove certain types of job, which means people will need to acquire new skills.

In 2016, PwC’s McCargow blogged: “Developing the right skills to ensure we can continue to innovate is important. One school of thought is to equip the workforce of the future purely with digital skills, but because AI has the potential to democratise access to technology and code for us, humans will need to focus on creativity and critical thinking.”

This was an issue that Andrew Joint, managing partner and commercial technology partner at law firm Kemp Little, covered in his presentation, warning of the risk of automating too many tasks.

Joint described how, when he started out as a young lawyer, his job involved reading masses of case histories – a task that can now be automated using an AI reading algorithm. But lawyers gain a greater understanding of case law by reading this material, he said, and there would be less opportunity to gain this knowledge if the process was fully automated.
