UK government publishes framework on automated decision-making

The framework focuses on making the use of algorithms and automated decision-making systems within the public sector more ethical, transparent and accountable

The UK government has released a framework on the use of automated decision-making systems in the public sector, setting out how the technology can be deployed safely, sustainably and ethically.

Developed collaboratively by the Cabinet Office, the Central Digital and Data Office, and the Office for Artificial Intelligence, the Ethics, Transparency and Accountability Framework for Automated Decision-Making is designed to improve the general literacy within government around the use of automated or algorithmic decision-making, and is intended for use by all civil servants.

The framework applies both to solely automated decisions (where no human judgement is involved) and to automation-assisted decision-making (where a system informs a human's decision), and should be used to inform best practice in each.

Consisting of seven core principles – such as testing the system to avoid any unintended consequences, delivering services that are fair for all users and citizens, having clarity around who is responsible for the system’s operation, and handling data safely in a way that protects citizens’ interests – the framework includes practical steps on how to achieve each.

For example, when testing the system for unintended outcomes, the framework recommends that organisations adopt “red team testing”, which works from the assumption that “all algorithmic systems are capable of inflicting some degree of harm”.
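
To illustrate the idea, a red-team probe can be expressed as a short test harness. The sketch below is purely hypothetical – the toy assess_benefit function and its probes are invented for illustration and are not drawn from the framework – but it shows the mindset of assuming the system can cause harm and actively hunting for inputs that trigger it.

```python
# Hypothetical sketch of "red team" testing: assume the system can inflict
# harm, then search for inputs that trigger it. assess_benefit is a toy
# stand-in for illustration, not a real government system.

def assess_benefit(income: float, age: int) -> bool:
    """Toy automated decision: approve support for low-income adults."""
    return income < 16_000 and age >= 18

def red_team(decide) -> list:
    """Probe the decision function with hostile and degenerate inputs,
    recording any approval that looks like a potential harm."""
    probes = {
        "negative income": dict(income=-1.0, age=30),
        "NaN income": dict(income=float("nan"), age=30),
        "impossible age": dict(income=10_000.0, age=-5),
    }
    findings = []
    for label, kwargs in probes.items():
        try:
            if decide(**kwargs):
                findings.append(f"{label}: approved on nonsense input {kwargs}")
        except ValueError:
            pass  # explicit rejection of bad input is the safe behaviour
    return findings

if __name__ == "__main__":
    for finding in red_team(assess_benefit):
        print("POTENTIAL HARM:", finding)
    # Finds one issue: a negative income is approved, because the toy
    # function never validates its inputs.
```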

The framework also emphasises the need for organisations to conduct data protection and equality impact assessments, which are needed for compliance with UK legislation.

It also recommends that the framework be used alongside existing guidance such as the National Data Strategy and the Data Ethics Framework.

It should be noted, however, that the Government Digital Service (GDS) released an updated Data Ethics Framework in September 2020 after finding “there was little awareness” across government of the previous version.

Other principles of the framework include helping citizens and users understand how the systems impact them, ensuring compliance with the law and building something that is future-proof.

“Under data protection law, for fully automated processes, you are required to give individuals specific information about the process. Process owners need to introduce simple ways for the impacted person(s) to request human intervention or challenge a decision,” it said.

“When automated or algorithmic systems assist a decision made by an accountable officer, you should be able to explain how the system reached that decision or suggested decision in plain English.”
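
As an illustration of what such a plain-English account might look like, the hypothetical sketch below scores an application with a simple weighted model and describes the factors behind the outcome in a sentence; the features, weights and threshold are invented for the example and are not taken from the framework.

```python
# Hypothetical sketch of a plain-English explanation for an
# algorithm-assisted decision. Weights, features and the threshold
# are invented for illustration only.

WEIGHTS = {"years_in_arrears": -2.0, "dependants": 1.5, "household_income": -0.0001}
THRESHOLD = 0.0

def explain_decision(applicant: dict) -> str:
    # Per-feature contribution to a simple linear score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    outcome = "recommended for support" if score >= THRESHOLD else "not recommended"
    # Rank factors by how strongly they pushed the score either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(
        f"{name.replace('_', ' ')} ({'raised' if value > 0 else 'lowered'} the score)"
        for name, value in ranked
    )
    return f"The applicant was {outcome}. Main factors, strongest first: {reasons}."

print(explain_decision({"years_in_arrears": 1, "dependants": 3, "household_income": 18_000}))
```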

The framework also explicitly acknowledges that “algorithms are not the solution to every policy problem”, and that public authorities should consider whether an automated system is appropriate in their specific context before pressing ahead with deployment.

“Scrutiny should be applied to all automated and algorithmic decision-making. They should not be the go-to solution to resolve the most complex and difficult issues because of the high risk associated with them,” the guidance says, adding that the risks associated with automated decision-making systems depend heavily on the policy area and context in which they are used.

“Senior owners should conduct a thorough risk assessment, exploring all options. You should be confident that the policy intent, specification or outcome will be best achieved through an automated or algorithmic decision-making system.”

It added that the framework must also be adhered to when public sector bodies work with third parties, with early engagement required to ensure it is embedded in any commercial arrangements.

Although the framework includes examples of how both solely and partly automated decision-making systems are used in workplaces – for example, to decide how much an employee is paid – the principles themselves do not directly address the effects such systems can have on workplace dynamics.

“The new framework is a big step forward in providing clear guidance around the use of automated decision-making. We very much welcome the clear statement that algorithms are not the answer to every question, in particular when it comes to the growth in digital surveillance and people management,” said Andrew Pakes, research director at Prospect Union, which represents science, tech and other specialist workers.

“With the rise in AI [artificial intelligence] and people analytics software during Covid, it is deeply disappointing that the framework does not recognise the need to consult and involve workers in decisions about how technology will affect us. If the government wants to build trust in how technology is used at work, we need much better rules on how tech is used at work.”

In November 2020, a review into algorithmic bias published by the Centre for Data Ethics and Innovation (CDEI) said the UK government should force public sector bodies to be more transparent about their use of algorithms that make “life-affecting” decisions about individuals.

“Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals,” it said.

“The Cabinet Office and the Crown Commercial Service should update model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels of transparency and explainability, and ongoing testing for fairness.”

Any transparency-related publications should therefore be easy to find, understand and use, and should not be used to serve narrow communications objectives or to purposefully manipulate an audience, it added.

The CDEI’s review into algorithmic bias is cited in multiple parts of the framework as a relevant resource that should be considered by public sector bodies when deploying automated decision-making systems.

“Actions, processes and data are made open to inspection by publishing information about the project in a complete, open, understandable, easily accessible and free format,” the framework states.
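
As a purely illustrative sketch, the record below shows how the four items the CDEI review lists could be captured and published in an open, machine-readable format; the field names and schema are assumptions for the example, not a mandated government format.

```python
# Hypothetical sketch of a proactive transparency record covering the four
# items the CDEI review lists. The schema is an assumption for illustration,
# not a mandated government format.

import json
from dataclasses import dataclass, asdict

@dataclass
class AlgorithmTransparencyRecord:
    rationale_for_use: str   # how the decision to use an algorithm was made
    algorithm_type: str      # the type of algorithm
    role_in_decision: str    # how it fits into the overall decision process
    fairness_measures: list  # steps taken to ensure fair treatment

record = AlgorithmTransparencyRecord(
    rationale_for_use="Options appraisal and senior-owner risk assessment",
    algorithm_type="Rule-based eligibility scoring",
    role_in_decision="Recommends an outcome; a caseworker takes the final decision",
    fairness_measures=["Equality impact assessment", "Quarterly bias audit"],
)

# Publish in an open, easily accessible, machine-readable format.
print(json.dumps(asdict(record), indent=2))
```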
