AI in public service must be accountable

The Committee on Standards in Public Life recommends that government departments make ethics a top priority when tendering for artificial intelligence systems

A new report from the Committee on Standards in Public Life looking at the risks of artificial intelligence (AI) has recommended that ethics be embedded in AI governance frameworks.

According to the Artificial Intelligence and Public Standards report, automated decision-making will revolutionise many human-centric administrative tasks in the public sector. The report notes that the use of AI is subject to the provisions of the General Data Protection Regulation (GDPR), the Equality Act and sections of administrative law.

The report recommends that the government and regulators establish a coherent regulatory framework that sets clear legal boundaries on how AI should be used in the public sector. “Regulators must also prepare for the changes AI will bring to public sector practice,” it says. “We conclude that the UK does not need a specific AI regulator, but all regulators must adapt to the challenges that AI poses to their specific sectors.”

A poll from Deltapoll, published in the report, found that 69% of the 2,016 adults surveyed were comfortable with AI decision-making if a human made the final decision. Citing the Information Commissioner’s Office AI auditing framework, the report stated: “Reviewers must ‘weigh up’ and ‘interpret’ the recommendation, consider all available input data, and also take into account other additional factors.”

The report recommended that the government establish the Centre for Data Ethics and Innovation (CDEI) as a centre for regulatory assurance to assist regulators in the area of AI.

Speaking to the BBC’s Today programme, Jonathan Evans, chairman of the Committee on Standards in Public Life, said: “We believe AI will be used right across the public sector. We need to give serious thought to how AI stands up to the tests set out in the seven principles of public life – things like transparency, objectivity and accountability.”

Evans suggested the public would have the right to ask how AI is being used within a public service. “If, as a member of the public, you are concerned that your information is being used or your services are being provided through the use of artificial intelligence, then it is incumbent on government and public service to let you know,” he said.

Evans acknowledged that such requests may come under the existing Freedom of Information framework, but suggested the government should also be more proactive. “It is very difficult to find out what algorithms are being used and where, and that is a significant failing,” he said. “We think there should be a proactive declaration by any agency using AI so that people can find out [this information] quickly and easily.”

Responding to the report, shadow digital minister Chi Onwurah said: “The Conservative government is failing on openness and transparency when it comes to the use of AI in the public sector. Last year, I argued in Parliament that the government should not accept further AI algorithms in decision-making processes without introducing further regulation.

“I will continue to push the government to go further in sharing information on how AI is currently being used at all levels of government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”

The effectiveness of an AI system’s decision-making is heavily dependent on the training dataset used, the report said. “Even where AI is introduced with good intentions, poor-quality data or a lack of knowledge about how an AI system operates will lead to unwanted outcomes,” it said. “Public bodies should periodically re-test and validate their models on different demographic groups to observe whether any groups are being systematically advantaged or disadvantaged, so that they can update their AI systems where necessary.”
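
The report does not prescribe how such re-testing should be done. As a minimal sketch, assuming a scikit-learn-style model and a held-out dataset that records a demographic attribute (the function and column names below are hypothetical), a periodic check might compare accuracy and favourable-decision rates group by group:

```python
# A minimal sketch of the periodic per-group re-validation the report
# recommends. Assumes a fitted scikit-learn-style classifier and a
# held-out dataset with a demographic column; names are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def validate_by_group(model, df, feature_cols, label_col, group_col):
    """Compare accuracy and favourable-decision rates across groups."""
    rows = {}
    for group, subset in df.groupby(group_col):
        preds = model.predict(subset[feature_cols])
        rows[group] = {
            "n": len(subset),
            "accuracy": accuracy_score(subset[label_col], preds),
            # Share of favourable decisions: a large gap between groups
            # suggests one is being systematically (dis)advantaged.
            "favourable_rate": float((preds == 1).mean()),
        }
    return pd.DataFrame.from_dict(rows, orient="index")
```

Running a check like this on a schedule, and tracking the per-group figures over time, would give a public body an early signal that a model needs reviewing or retraining.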

Bill Mitchell, director of policy at BCS, the Chartered Institute for IT, added: “There is a very old adage in computer science that sums up many of the concerns around AI-enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer, it will mindlessly follow its programming and output poor, partial, flawed computations.

“AI is a statistical-inference technology that learns by example. This means that if we allow AI systems to learn from ‘garbage’ examples, we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.”
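
The effect Mitchell describes is easy to reproduce. The following illustrative sketch (a toy experiment, not anything from the report) trains the same classifier twice: once on clean labels and once on “garbage” labels in which most positive examples have been relabelled as negative, mimicking skewed historical records:

```python
# Toy "garbage in, garbage out" demonstration: the same learner, fed
# biased training labels, produces correspondingly biased inferences.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression().fit(X_train, y_train)

# "Garbage" labels: relabel 80% of positive examples as negative,
# as skewed historical records might.
rng = np.random.default_rng(0)
flip = (y_train == 1) & (rng.random(len(y_train)) < 0.8)
y_garbage = np.where(flip, 0, y_train)
garbage_model = LogisticRegression().fit(X_train, y_garbage)

# The corrupted model mindlessly reproduces the bias it was shown.
print("favourable rate, clean model:  ", clean_model.predict(X_test).mean())
print("favourable rate, garbage model:", garbage_model.predict(X_test).mean())
```

Even though the two models are evaluated on an identical test population, the one trained on corrupted labels grants the favourable outcome at a fraction of the clean model’s rate: the statistical-inference model has learned the “garbage” it was given.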

Mitchell said the report highlighted the importance of diverse teams in making public authorities more likely to spot the potential ethical pitfalls of an AI project. “Many contributors emphasised the importance of diversity, telling the committee that diverse teams would lead to more diverse thought, and that, in turn, this would help public authorities to identify any potential adverse impact of an AI system,” he said.

The report urged the government to use its purchasing power in the market to set procurement requirements to ensure private companies developing AI systems for the public sector address public standards appropriately. The report suggested this should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements.
