As the government advances its agenda around understanding the opportunities and risks related to algorithms in policing, a number of concerns have emerged.
The Centre for Data Ethics and Innovation (CDEI) has partnered with the Royal United Services Institute (RUSI) to carry out independent research into the potential for bias in algorithms used by police forces, as part of its wider review into algorithmic decision-making.
Both bodies held events in July, involving police forces, civil society organisations, academics, policymakers and trade associations such as TechUK, to discuss the implications of using these technologies, how the regulatory and governance environment can be improved, and the role of suppliers in the use of algorithms in policing.
The roundtables identified a number of risks. While those attending agreed that predictive analytics technologies could help police manage resources and gain insights, they warned of the potential for biased outcomes against certain groups, particularly if an algorithm is trained on historic police data.
If unchecked, the technologies could also have significant implications for individuals’ civil liberties and human rights, participants argued. Delegates also reached a consensus that “meaningful public engagement” is needed, especially among affected groups, before any roll-out.
While greater data sharing between police forces and local councils offers an opportunity to build a better picture of trends such as the drivers of youth violence, event participants noted that it could also lead to further surveillance, raise data protection issues and prove challenging to implement in practice.
The need for a consistent approach to the development of policing algorithms across the UK was another point emphasised at the events. Despite strong calls for clearer governance and oversight, there was no consensus on which body should take the lead.
The initial research by RUSI, to be published in September, will be used by the CDEI to develop a code of practice for trialling algorithms.
The guidelines, also due next month, will be co-developed with policing bodies and will aim to mitigate bias and address ethical concerns around predictive analytics technology.
The CDEI’s final report on its review into bias in algorithmic decision-making, including recommendations to the government, is due to be published in March 2020.
These recent government efforts to better understand the impact of algorithms follow warnings from civil rights organisations and think tanks, which have argued that better use and regulation of data-driven technologies should be the norm in UK policing, and that the uncritical use of algorithms in justice can undermine public trust and individual rights.
The Centre for Data Ethics and Innovation has recently concluded a major programme of public events in support of another review it is leading, into the ethics of online targeting.