London has published the latest iteration of its Emerging technology charter, a set of practical and ethical guidelines that outline the city’s expectations for how new data-enabled technologies should be developed and deployed for use in the public realm.
The fourth version of the charter, which was first announced in July 2020, is built around four principles – openness, respecting diversity, being trustworthy with people’s data, and sustainability – that are designed to lay out a clear pathway for how a range of future smart city technologies can be used ethically.
This includes driverless cars, facial recognition software, drones, sensor networks, robotics, mobility services, augmented or virtual reality, and automated or algorithmic decision-making systems.
Through these principles, the charter aims to set common expectations for how buyers and makers can innovate successfully; give Londoners and their elected representatives a clear framework to ask questions about the technologies being deployed in London; and improve transparency around the products and services that data protection law considers a high risk to privacy.
“By setting out the charter’s principles in this way, we aim to foster a trustworthy environment for innovation to flourish, and to do so responsibly for the benefit of Londoners,” it said.
Although the guidelines are voluntary, both local government and technology companies are being encouraged to adopt them.
The charter will be shared with the Global Observatory on Urban Artificial Intelligence (AI), an initiative launched by London, Barcelona and Amsterdam in June 2021 that aims to monitor AI deployment trends and promote the technology's ethical use, as part of the wider Cities Coalition for Digital Rights (CC4DR).
Public consultation is vital
Drafted by the chief digital officer for London, Theo Blackwell, and advised by a working group drawn from the Smart London Board, the charter was developed through an open process of public consultation with subject experts, innovators and Londoners themselves.
Speaking to Computer Weekly, Blackwell said the consultation process is vital to the ongoing development of the charter, and has already led to a number of important changes from previous versions, including the focus on sustainability.
“It became very clear that people wanted a separate principle around sustainability…when we started we thought ‘Let’s just deal with the main area of debate around the ethics of new technology’, but actually by the end of the process – and we’re really glad we did develop it in the open – this was part of the discussion,” he said, adding that these interactions led to a realisation that while many smart city technologies are “perceived as virtual”, they are still “very much part of our physical, built environment”.
Blackwell further added that many technology ethics charters or guidelines are produced either by government or academia, both of which rely on polling and expert interpretations of that polling as their form of engagement with the public. “We’re in a position where we have the opportunity to talk to people, and we got really valuable input from Londoners around this,” he said.
Another new addition to the charter, which falls under the ‘being trustworthy with people’s data’ principle, concerns the use of biometric technologies such as eyeball tracking or live facial-recognition (LFR) technology by non-law enforcement bodies. This was added in light of an opinion issued by the information commissioner, Elizabeth Denham, in June 2021.
Noting in a blog post at the time that she was “deeply concerned” about the inappropriate and reckless use of live facial-recognition in public spaces, Denham said that none of the organisations investigated by her office were able to fully justify their use of the technology, leading her to publish an official “commissioner’s opinion” to act as guidance for companies and public sector bodies.
“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” she wrote.
Blackwell said the commissioner’s opinion “sets a very, very high bar for the use of these technologies”, as organisations would need to “consider what alternative technologies might meet the same outcome” – something that has been written directly into the charter.
Data protection impact assessments
In line with Denham’s opinion that any organisation considering deploying LFR in a public place must carry out a data protection impact assessment (DPIA) to decide whether or not to go ahead with the deployment, the charter also encourages companies to publish their completed DPIAs.
“A DPIA is a legal obligation to identify and minimise the risks of a project when it is likely to result in a high risk to individuals,” it said. “Following the completion of your DPIA, we recommend you publish it on the GLA’s [Greater London Authority’s] central register of DPIAs on the London Datastore to promote public transparency and good practice.”
Blackwell said that by providing “an element of transparency where there was once obscurity…I think that helps build trust”, adding that one of the biggest criticisms people level at smart city projects is their often unexplainable, black-box nature.
“One of the challenges with smart cities is technology solutions being imposed on us, under the aegis that it helps ‘operational efficiency’, [meaning] it’s like an administrative matter, rather than something that has to do with you in your daily life,” he said, adding that “with new technologies, people don’t often know the right questions to ask of the people responsible for buying it”.
Through this emphasis on consultation and transparency within the charter, and by providing both citizens and politicians with a clear framework for how new technologies should be approached, Blackwell said the charter will enable a higher quality of scrutiny.
“The arbiter of this is actually reputation, that’s the enforcement mechanism here. If we set out to our elected representatives a framework by which we ask the right questions of technology, the quality of the answer becomes even more important,” he said.
“When we say the charter is voluntary, it might sound a bit weak, but this is also a nice framework to bring together the kinds of things that might impact the reputation of your organisation if you don’t consider them and don’t explain what you’re doing.”
In terms of how technology companies themselves have reacted to the charter and its development, Blackwell said that while many welcomed the principles, some wanted to know what London itself would do to “meet them halfway” and help spur innovation.
“The quid pro quo here is that we’re setting out a clear framework that allows you to successfully innovate – we don’t need additional incentives in play,” he said.
Read more about technology and ethics
- The use of artificial intelligence to predict criminal behaviour should be banned because of its discriminatory outcomes and high risk of further entrenching existing inequalities, claims Fair Trials.
- Any attempt to regulate artificial intelligence (AI) must not rely solely on technical measures to mitigate potential harms, and should instead move to address the fundamental power imbalances between those who develop or deploy the technology and those who are subject to it, says a report commissioned by European Digital Rights.
- Global analysis by Ada Lovelace Institute and other research groups finds algorithmic accountability mechanisms in the public sector are hindered by a lack of engagement with the public.