Cities worldwide band together to push for ethical AI

The chief digital and technology officers of London and Barcelona speak to Computer Weekly about their joint initiative launched with other cities to promote the ethical deployment of artificial intelligence in urban spaces


From traffic control and waste management to biometric surveillance systems and predictive policing models, the potential uses of artificial intelligence (AI) in cities are incredibly diverse, and could impact every aspect of urban life.

In response to the increasing deployment of AI in cities – and the general lack of authority that municipal governments have to challenge central government decisions or legislate themselves – London, Barcelona and Amsterdam launched the Global Observatory on Urban AI in June 2021.

The Observatory aims to monitor AI deployment trends and promote the technology’s ethical use. It is part of the wider Cities Coalition for Digital Rights (CC4DR), which was set up in November 2018 by Amsterdam, Barcelona and New York to promote and defend digital rights, and which now has more than 50 participating cities worldwide.

Beyond its city participants, the Observatory is run in partnership with UN-Habitat, the United Nations programme working to improve quality of life in urban areas, and the research group CIDOB (Barcelona Centre for International Affairs).

According to Michael Donaldson, Barcelona’s chief technology officer (CTO), the Observatory is designed to be a “space of collaboration and exchange of knowledge” where cities can share their experiences – both positive and negative – in developing and deploying AI systems.

He said that sharing best practice, in particular, will help cities avoid repeating one another’s mistakes when deploying AI systems.

“We know the benefits AI can give us in terms of having a more proactive administration and better public digital services, but at the same time we need to introduce that ethical dimension around the use of these technologies,” said Donaldson, adding that Barcelona is currently undertaking public consultations to define exactly what is and is not ethical when it comes to AI – work that will be shared with the Observatory when complete.

London’s chief digital officer (CDO), Theo Blackwell, said his team is taking a similar approach by developing the “emerging technology charter for London”, which will also be fed back into the Observatory “so that we’re not doing this in isolation and we’re learning from each other”.

Blackwell said that as CDO for London, the opportunity to learn from, and be in active dialogue with, his peers in other cities is “the most valuable information that I get” because it is informed by on-the-ground, practical experience of deploying AI in an urban context, rather than the more legislative focus of think-tanks and government committees.

“We don’t have any powers to legislate here, but we do have powers to influence,” he said. “Cities are often at the coalface, with our staff directly talking to these technology firms, and that’s some way away from the people who make the laws. We can come to the party with that lived experience, and try and shape them in a way that guarantees people safeguards on the one hand, but also promotes innovation in our economy.”

Guillem Ramírez, policy adviser on city diplomacy and digital rights at Barcelona City Council, told Computer Weekly that this approach will help cities collaborate internationally to see what “ethical” means in different cultural contexts, and to build a common understanding of what it means to develop AI ethically.

“The first thing that we’re doing is identifying the principles of what should be considered ethical when it comes to AI,” said Ramírez, adding that the Observatory hopes to finalise a report on this in September.

“We’ve been discussing with the cities that are part of the Coalition, and we’ve identified some of these principles, which includes non-discrimination and fairness, but there’s also cyber security, transparency, accountability, and so on.

“Then what we’re doing is to operationalise them, not in terms of super concrete indicators, but in terms of guiding questions, because at this point cities are not even developing complex AI systems, so the idea is to lay the ground for scaling up in an ethical way.”
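As a purely illustrative sketch of the approach Ramírez describes – principles operationalised as guiding questions rather than hard indicators – such a framework might be structured along the following lines. All principle names and questions below are hypothetical examples, not the Observatory’s actual framework.

```python
# Illustrative sketch only: ethical principles operationalised as guiding
# questions rather than concrete indicators. The principles and questions
# below are hypothetical examples, not the Observatory's framework.
PRINCIPLES = {
    "Non-discrimination and fairness": [
        "Which groups could be disadvantaged by this system's outputs?",
        "Has the training data been checked for historical bias?",
    ],
    "Transparency": [
        "Can the system's decisions be explained to an affected citizen?",
        "Is documentation of the model and its data publicly available?",
    ],
    "Accountability": [
        "Who is responsible if the system produces a harmful outcome?",
        "Can citizens contest an automated decision?",
    ],
    "Cyber security": [
        "How is the system protected against tampering with its inputs?",
    ],
}

def review_checklist(system_name: str) -> None:
    """Print the guiding questions a city team might work through
    before deploying a given AI system."""
    print(f"Ethical review checklist for: {system_name}")
    for principle, questions in PRINCIPLES.items():
        print(f"\n{principle}")
        for question in questions:
            print(f"  - {question}")

review_checklist("smart traffic-signal optimisation")
```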

Read more about ethics and AI

  • The European Commission’s proposal for artificial intelligence regulation focuses on creating a risk-based, market-led approach replete with self-assessments, transparency procedures and technical standards, but critics warn it falls short of protecting people’s fundamental rights and mitigating the technology’s worst abuses.
  • The TUC has published a report warning of AI-powered discrimination against working people enabled by gaps in existing British employment law.
  • Despite the abundance of decision-making algorithms with social impacts, many companies are not conducting specific audits for bias and discrimination that can help mitigate their potentially negative consequences.

Donaldson and Blackwell both stressed that many of the cities taking part in the Observatory are at very different stages of their “AI journey”, and that anything produced by the Observatory is meant to help guide them along a more ethical path.

At the moment, many of the AI-based technologies and tools used in urban centres are not developed by the cities themselves, but by the private sector, before being sold or otherwise transferred into the public sector.

For example, the facial-recognition system used in the UK by both the Metropolitan Police Service (MPS) and South Wales Police (SWP), called NeoFace Live, was developed by Japan’s NEC Corporation.

However, in August 2020, the Court of Appeal found SWP’s use of the technology unlawful – a decision based partly on the fact that the force did not comply with its public sector equality duty, under section 149 of the Equality Act 2010, to consider how its policies and practices could be discriminatory.

The court ruling said: “For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable but, in our view, it does not enable a public authority to discharge its own, non-delegable, duty under section 149.”

Asked how cities can navigate the growing closeness of these public-private collaborations, Barcelona City Council’s Ramírez said that while cities will need to strike a balance between sensitive company information and public interest, “the city will need to understand how the code is working, and have procedural transparency to understand how decisions are made” by the algorithms.

He added: “The functioning of these systems needs to be able to be explained, so that citizens can understand it.”
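As one hypothetical illustration of what procedural transparency could mean in practice – not a description of any system Barcelona or the Observatory has built – a city might require that every automated decision be logged together with the inputs and model version that produced it, so the decision can later be reconstructed and explained:

```python
# Hypothetical sketch of a decision audit log: each automated decision is
# recorded with the inputs and model version that produced it, so it can
# later be reconstructed and explained. All names here are illustrative.
import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, outcome: str,
                 logfile: str = "decisions.jsonl") -> str:
    """Append one auditable decision record and return its identifier."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
    }
    with open(logfile, "a") as log:
        log.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: a hypothetical parking-enforcement decision.
decision_id = log_decision(
    model_version="plate-reader-v2.3",
    inputs={"zone": "A14", "permit_found": False},
    outcome="penalty_notice_issued",
)
print(f"Logged decision {decision_id}")
```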

Donaldson said cities will need to develop a set of checks and balances to figure out how to safely navigate public-private AI partnerships in ways that also benefit citizens.

“We might not really know what’s going on because your technology is far beyond our knowledge, but what we know is how to deliver public services, how to guarantee the rights of our citizens, and if your technology is going against that, we’re going to tell you to stop,” he said.

Responding to the same question, Blackwell said the application of AI in cities will happen in many different settings but that, from the examples he has seen, the most useful applications are based on very narrow use cases.

“I think the challenge with city authorities is actually that these technologies can be incredibly useful in narrow use cases,” he said. “Sometimes we might be approached by big companies that say ‘there is a wide range of things this tech can do’, and I think the art here is to basically say ‘no, we just need these things’, and it’s not something that builds towards an all-singing, all-dancing universal system, which I think is the kind of default position for many large technology companies.”

Blackwell said London plans to let organisations publish data protection impact assessments in the London Data Store so that “they can become less of a risk management tool for information governance professionals, and more of an accountability tool that says, ‘this is how I’m dealing with the questions that were asked about this technology’ – that’s a key provision in the emerging tech charter.”
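The article does not describe the format these published assessments will take, but as a rough sketch, a data protection impact assessment (DPIA) released as an open-data record might carry fields along these lines. All field names and values below are hypothetical.

```python
# Hypothetical sketch of a data protection impact assessment (DPIA)
# published as an open-data record. The schema is illustrative; the
# London Data Store's actual format is not described in the article.
import json
from datetime import date

dpia_record = {
    "title": "DPIA: CCTV-based footfall counting pilot",
    "organisation": "Example Borough Council",      # hypothetical
    "technology": "computer-vision footfall counting",
    "published": date(2021, 7, 1).isoformat(),
    "questions_addressed": [
        "What personal data is processed, and is any of it biometric?",
        "How long is footage retained, and who can access it?",
        "What alternatives to this technology were considered?",
    ],
    "contact": "dataprotection@example.gov.uk",     # hypothetical
}

# Publishing the record as JSON lets residents and researchers inspect
# how the deploying organisation answered the questions asked of it.
print(json.dumps(dpia_record, indent=2))
```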
