The UK government stands to benefit from incorporating artificial intelligence (AI) and related data science techniques into its processes, but must tread carefully when it comes to accountability and transparency, according to the government’s chief scientific advisor Mark Walport.
The comments come in the recent report, Artificial intelligence: opportunities and implications for the future of decision making, produced by Walport and Home Office permanent secretary Mark Sedwill, which grew out of a British Academy seminar on the ethical and legal issues surrounding AI.
Walport said there were clear opportunities for using AI in UK government, including to make services more efficient by anticipating demand and maximising available resources; to make it easier for government officials to use more data to inform their decisions; to make those decisions more transparent; and to help departments improve service delivery by better understanding the users they serve.
However, he said, government has obligations around transparency, due process and accountability to citizens that do not always fall on private businesses.
For example, he wrote, AI has a clear role to play in making decision-support systems more effective. However, these systems would always need a human somewhere in the loop, prepared and able to provide oversight and to question or override the advice given by the AI system if needed, such as in decisions affecting benefit claimants.
“As with any advisor, the influence of these systems on decision-makers will be questioned, and departments will need to be transparent about the role played by AI in their decisions,” wrote Walport.
Walport also raised the spectre of AI inadvertently infringing regulations such as the Data Protection Act 1998 and, in future, the European Union (EU) General Data Protection Regulation (GDPR).
“Teams making use of artificial learning approaches need to understand how these existing frameworks apply in this context. For example, if deep learning is used to infer personal details that were not intentionally shared, it may not be clear whether consent has been obtained,” he wrote.
Other ethical issues could include the use of AI-backed statistical profiling to predict likely behaviours or qualities associated with certain groups, which could leave the government open to accusations of discrimination based on ethnicity or lifestyle.
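The consent and profiling concerns described above can be illustrated with a toy sketch. All data, labels and the `infer` helper below are invented for illustration: the point is that even a trivial frequency model can attribute a sensitive characteristic to someone who never disclosed it, purely from correlated, innocuous data such as a postcode area.

```python
from collections import Counter, defaultdict

# Training records collected from people who *did* disclose an attribute:
# (postcode_area, sensitive_attribute). Entirely synthetic data.
records = [
    ("AB1", "group_x"), ("AB1", "group_x"), ("AB1", "group_y"),
    ("CD2", "group_y"), ("CD2", "group_y"), ("CD2", "group_x"),
]

# Build a per-area frequency table -- a minimal statistical profile.
profile = defaultdict(Counter)
for area, attribute in records:
    profile[area][attribute] += 1

def infer(area):
    """Return the most common attribute seen in an area.

    Applied to a new person who shared only a postcode, this assigns
    them a sensitive label they never consented to sharing.
    """
    counts = profile[area]
    return counts.most_common(1)[0][0] if counts else None

# A new citizen shares only "AB1"; the system infers a group label anyway.
print(infer("AB1"))  # -> "group_x"
```

Whether such an inference counts as processing personal data without consent is exactly the kind of question Walport says teams must test against the existing legal frameworks.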
“There will be calls for redress and compensation in the event that the use of AI causes some harm. The challenge is to establish a system that can provide this. Current approaches to liability and negligence are largely untested in this area,” wrote Walport.
He called for the government to build on work that is already underway to engage with the public, understand wider attitudes to AI, and help build trust in the technology, urging a dialogue along the lines of the Warnock Report that led to the establishment of the Human Fertilisation and Embryology Authority in 1991.
The debate will need to explore how to treat mistakes made through the use of AI, how to understand probabilistic decision-making, and the extent to which society should trust decisions made without AI or against the advice of AI systems.
The report also acknowledged and discussed some of the other challenges that are likely to be presented by the rise of AI, such as its effect on the labour market and whether or not it will create more jobs than it displaces, a point on which experts are still divided.
“The right form of governance for AI, and indeed for the use of digital data more widely, is not self-evident,” concluded Walport.
“It is important to consider forms of data governance that cover all elements of the increasingly complex space, from responsibly generating data from people’s behaviour to remaining accountable for autonomous software agents.”
In the report’s preamble, digital and culture minister Matt Hancock described it as “timely and important” work.
“Get this right and we can create a more prosperous economy with better and more fulfilling jobs,” he wrote.
“We can protect our environment by using resources more efficiently. We can make government smarter, using the power of data to improve our public services.”
Hancock said the report supported the prime minister’s recent announcement of an independent review of modern employment practices, designed to ensure that the government can tailor the support it provides to workers and businesses as the labour market and economy undergo massive change.