
Police use of algorithms needs stronger safeguards to prevent bias, report says

A study by the Royal United Services Institute calls for a new code of practice to guide the use of algorithmic tools in policing

The use of analytics and algorithms for policing in England and Wales needs to be more closely scrutinised and regulated, according to a report from security think tank the Royal United Services Institute (Rusi).

Commissioned by the Centre for Data Ethics and Innovation (CDEI), the report will be used to inform the organisation’s review into algorithmic bias in the policing sector.

The review is intended to help CDEI develop a new code of practice for the use of data analytics in policing, to mitigate the risk of bias and address wider legal and ethical concerns pertaining to the use of this technology.

“The CDEI’s goal is to support police forces across the UK to feel confident that their use of these emerging technologies is improving the service they provide and will engender public trust,” said a spokesperson for CDEI.

“The findings from this report will help us develop national guidance which will help to do just that.”

Focusing specifically on the use of machine learning in predictive crime mapping and individual risk assessments, the report finds that an increased emphasis on preventive policing has led forces to rely more heavily on data-driven assessments of risk, which is where problems of bias can arise.

“Algorithms that are trained on police data ‘may replicate (and in some cases amplify) the existing biases inherent in the dataset’, such as over or under-policing of certain communities, or data that reflects flawed or illegal practices,” said the report.

It then goes on to say that bias can arise even at the point of choosing which specific crime areas will be the subject of these data-driven assessments.

“Predictive technological solutions have been criticised for focusing on low-level ‘nuisance’ crime, or on areas with high crime levels and thus poor neighbourhoods,” said the report.

“One academic expert suggested that bias may arise via the police’s conception of a problem (and thus input data would reflect only this conception).”

The report points to the example of the conception of a ‘gang’ being framed around a single demographic, as it was in London following the 2011 Tottenham riots with the Gangs Matrix.
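The over- or under-policing dynamic quoted above is essentially a feedback loop: data shaped by past policing choices steers future deployments, which in turn generate more of the same data. As a purely hypothetical illustration (not drawn from the report, and with invented figures), the short sketch below shows how a naive hotspot model that sends extra patrols to whichever area has the most recorded offences will entrench an initial gap, whatever its underlying cause.

# Hypothetical sketch, not taken from the Rusi report: a naive hotspot model
# that concentrates extra patrols wherever recorded crime is highest. Extra
# presence produces extra recorded offences, which keeps that area on top.

def pick_hotspot(recorded_crime):
    """Rank areas by recorded offences and flag the highest as the hotspot."""
    return max(recorded_crime, key=recorded_crime.get)

def simulate_feedback(recorded_crime, years, extra_detections=20):
    """Each year, extra patrols in the hotspot add extra recorded offences there."""
    for _ in range(years):
        hotspot = pick_hotspot(recorded_crime)
        recorded_crime[hotspot] += extra_detections  # the feedback step
    return recorded_crime

# Invented figures: a small initial gap that may itself reflect past policing choices.
print(simulate_feedback({"area_a": 60, "area_b": 55, "area_c": 50}, years=5))
# {'area_a': 160, 'area_b': 55, 'area_c': 50} - the gap widens every year even
# if underlying offending in the three areas never changes.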

The Gangs Matrix

While not explicitly mentioned in the report, the Metropolitan Police’s Gangs Matrix is an example of individual risk assessments that highlights how the police’s conception of a problem can have serious consequences.

In the case of the Matrix, individuals are given an automated risk score based on police information about past arrests, convictions and any other relevant intelligence.
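The Matrix's actual scoring methodology has not been published. Purely to illustrate the kind of automated, weighted scoring the report is concerned with, the sketch below uses invented weights, thresholds and field names to combine counts of arrests, convictions and intelligence reports into a single harm score and a traffic-light band.

# Hypothetical illustration only - the weights, thresholds and field names are
# invented and are not the Gangs Matrix's actual (unpublished) methodology.
from dataclasses import dataclass

@dataclass
class PoliceRecord:
    arrests: int               # past arrests on record
    convictions: int           # past convictions on record
    intelligence_reports: int  # unverified intelligence items linked to the person

WEIGHTS = {"arrests": 2, "convictions": 5, "intelligence_reports": 1}

def harm_score(record: PoliceRecord) -> int:
    """Weighted sum of record counts; unverified intelligence is treated as
    just another number alongside arrests and convictions."""
    return (WEIGHTS["arrests"] * record.arrests
            + WEIGHTS["convictions"] * record.convictions
            + WEIGHTS["intelligence_reports"] * record.intelligence_reports)

def band(score: int) -> str:
    """Map the score to a traffic-light risk band."""
    if score >= 20:
        return "red"
    if score >= 10:
        return "amber"
    return "green"

person = PoliceRecord(arrests=1, convictions=0, intelligence_reports=3)
print(harm_score(person), band(harm_score(person)))  # 5 green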

However, an Amnesty International report, Trapped in the Matrix, found that 40% of people on the Matrix have a harm score of zero, meaning the police have no record of them being involved in a violent offence, while 64% of all individuals have been labelled “green”, meaning they pose a very low risk.

Of those listed on the database, 78% were black.

“The software treats that record of possible gang activity as a concrete piece of data, so it’s very unclear how the Gangs Matrix can actually rank the prominence, or the certainty, of this intelligence when it’s entered into a database it draws on,” Jamie Grace, a senior lecturer in law at Sheffield Hallam University, told Computer Weekly at the time.

In November 2018, the Information Commissioner’s Office found that the Matrix “seriously” breached data protection laws.

The report further asserts that an increased emphasis on data-driven risk assessments has been shaped, in part, by significant reductions in resources since 2010.

This, coupled with the availability of specific funds for digital transformation, has created strong incentives for police forces to frame new developments around technology in order to attract central government support, creating a bias towards digital initiatives.

“With this in mind, the choice of whether to implement a particular new technological capability may itself be subject to bias,” said the report.

It also states that, within the context of austerity, questions arise about the justifiability of using algorithmic policing techniques that might not be necessary if more resources were made available to forces.

Emerging findings

The Rusi report is the first of two papers that will be published as part of this project, with the second expected to be released in March 2020.

“Interviews conducted to date evidence a desire for clearer national guidance and leadership in the area of data analytics, and widespread recognition and appreciation of the need for legality, consistency, scientific validity and oversight,” said the report.

“It is also apparent that systematic investigation of claimed benefits and drawbacks is required before moving ahead with full-scale deployment of new technology.”

One police officer interviewed by the report’s authors described the current technological landscape as a “patchwork quilt, uncoordinated and delivered to different standards in different settings and for different outcomes”.

The report concludes that a new code of practice for police use of algorithmic tools must therefore establish a standard process for model design, development, trialling and deployment, along with ongoing monitoring and evaluation.

It should also be tech-agnostic, provide clear operational guidelines that complement existing practices or codes, and specify clear roles and responsibilities for the national bodies involved in UK law enforcement, including the College of Policing, the National Police Chiefs’ Council and the Home Office.

The report was written by Alexander Babuta, a research fellow in national security studies at Rusi, and Marion Oswald, the vice chancellor’s senior fellow in law at the University of Northumbria.

Read more about police use of technology

  • In a landmark hearing, the High Court has ruled that police use of automatic facial recognition technology is lawful, but that it still infringes on privacy rights.
  • The Metropolitan Police Service has been secretly developing a new database, but similarities to the controversial Gangs Matrix have raised concerns among data protection and racial equality activists.
  • Multi-agency engagement has significant potential to boost the digital delivery of citizen safety services, says report.
