
Digital Ethics Summit 2023: The year in AI

At trade association TechUK’s seventh annual Digital Ethics Summit, public officials, industry figures and civil society groups met to discuss the state of AI regulation and the direction of travel set for 2024

Well-intentioned ethical principles and frameworks for artificial intelligence (AI) systems must now be translated into concrete practical measures, but those measures cannot be dictated to the rest of the world by rich countries developing the technology.

Speaking at TechUK’s seventh annual Digital Ethics Summit in December, panellists reflected on developments in AI governance since the release of generative models such as ChatGPT at the end of 2022, which put the technology in front of millions of users for the first time.

Major developments since then include the UK government’s AI whitepaper; the UK’s hosting of the world’s first AI Safety Summit at the start of November 2023; the European Union’s (EU) progress on its AI Act; the White House’s Executive Order on safe, secure and trustworthy AI; and China’s passing of multiple pieces of AI-related legislation throughout the year.  

For many in attendance, all of the talk around ethical and responsible AI now needs to be translated into concrete policies, but there is concern that the discussion around how to control AI is overly dominated by rich countries from the global north.  

A consensus emerged that while the growing intensity of the international debate around AI is a sign of positive progress, there must also be a greater emphasis placed on AI as a socio-technical system, which means reckoning with the political economy of the technology and dealing with the practical effects of its operation in real-world settings.

Reflecting on the past year of AI developments, Andrew Strait, an associate director at the Ada Lovelace Institute, said he was shocked at the time of ChatGPT’s release, not because the tool was not impressive or exciting, but because it showed how weak industry responsibility practices are.

Citing ChatGPT’s release in spite of concerns raised internally by OpenAI staff and the subsequent arms race it prompted between Google and Microsoft over generative AI, he added that “these were run as experiments on the public, and many of the issues these systems had at their release are still not addressed”, including how models are assessed for risk.

Moment of contradiction

Strait continued: “I find it a very strange moment of contradiction. We have had our prime minister in the UK say, ‘This is one of the most dangerous technologies, we have to do something, but it’s too soon to regulate.’ That means we won’t have regulation in this country until 2025 at the earliest.” He added that many of the issues we’re now faced with in terms of AI were entirely foreseeable.

Highlighting the “extreme expansion” of non-consensual sexual imagery online as a result of generative AI (GenAI), Strait said situations like this were being predicted as far back as 2017 when he worked at DeepMind: “I’m shocked at the state of our governance,” he said.

Commenting on the consensus that emerged during the UK AI Safety Summit around the need for further research and the creation of an international testing and evaluation regime – with many positing the Intergovernmental Panel on Climate Change (IPCC) as the example to follow – Strait added that the IPCC was essentially founded as a way of turning a regulatory problem into a research problem. “They’ve now spent 30 more years researching and are yet to address [the problem],” he said.

Alex Chambers, platform lead at venture capital firm Air Street Capital, also warned against allowing big tech firms to dominate regulatory discussions, particularly when it comes to the debate around open versus closed source.

“There is a war on open source at the moment, and that war is being led by big technology companies that … are trying to snuff out the open source ecosystem,” he said, adding that “it’s not entirely surprising” given many of them have identified open source as one of their biggest competitive risks.

“That’s why I’m nervous about the regulatory conversation, because … whenever I hear people talk about the need to engage more stakeholders with more stakeholder consultation, what I tend to hear is, ‘Let’s get a bunch of incumbents in the room and ask them if they want change,’ because that’s what it usually turns into when regulators do that in this country,” said Chambers.

He added that he’s concerned the regulatory conversation around AI “has been hijacked by a small number of very well-funded, motivated bodies”, and that there should be much more focus in 2024 on breaking the dependency most smaller firms have on the technical infrastructure of large companies dominating AI’s development and deployment.

The UK approach

Pointing to advances in GenAI over the past year, Lizzie Greenhalgh, head of AI regulation at the Department for Science, Innovation and Technology (DSIT), said it “has proved the value and the strength” of the adaptable, context-based regulatory approach set out by the UK government in its March 2023 AI whitepaper.

As part of this proposed “pro-innovation” framework, the government said it would empower existing regulators – including the Information Commissioner’s Office (ICO), the Health and Safety Executive, the Equality and Human Rights Commission (EHRC) and the Competition and Markets Authority – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.

“We were always clear the whitepaper was an early step and this is a fast-evolving tech,” said Greenhalgh. “We have been able to move with the technology, but we know it’s not a done deal just yet.”

For Liberal Democrat peer Tim Clement-Jones, however, “the stance of the current government is that regulation is the enemy of innovation”, and its attempts to minimise the amount of regulation in this space are “out of step” with other jurisdictions.

Like Strait, Clement-Jones said he feels “we’ve gone backwards in many ways”, citing the speed with which the US and EU have moved on AI in comparison with the UK, as well as the lack of regulation across large swathes of the UK economy.

“The regulators don’t cover every sector, so you have to have some horizontality,” he said.

Commenting on the Safety Summit, Clement-Jones questioned the wisdom of focusing on “frontier” risks such as the existential threat of AI when other dangers such as bias and misinformation are “in front of our very eyes … we don’t have to wait for the apocalypse, quite frankly, before we regulate”.

Convening role

According to Hetan Shah, chief executive at the British Academy, while he shares similar criticisms about the focus on speculative AI risks, the Summit was a success “in its own terms” of bringing Western powers and China around the same table. Given that most firms developing AI are based in either the US or China, he said it is important the UK plays some kind of convening role in its regulation.

However, he added that while the flexible structure of the regulatory approach set out by the UK government in its whitepaper is helpful in adapting to new technological developments, the UK’s problem has always been in operationalising its policies.

He added the lack of discussion about the whitepaper at the AI Safety Summit gave him the impression the government “had almost lost confidence in its own approach”, and that now is the time for movement with a sense of urgency. “We’ve got to start getting into specifics, which are both horizontal and vertical,” he said. “The problems of misinformation are different to the problems of privacy, which are different to the problems of bias, which are different to the problems of what would an algorithm do for financial trading, which is different to what impact will it have on jobs in the transport system, et cetera, so we need a lot of that drilling down … there’s no straightforward way of doing it.”

Citing Pew research that shows US citizens are now more concerned than excited about AI for the first time, UK information commissioner John Edwards said during his keynote address that “if people don’t trust AI, they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole”.

He added that the UK’s existing regulatory framework allows for “firm and robust regulatory interventions, as well as innovation”, and that “there is no regulatory lacuna here – the same rules apply [in data protection], as they always had done”.

Highlighting the Ada Lovelace Institute’s analysis of UK regulation – which showed “large swathes” of the UK economy are either unregulated or only partially regulated – Strait challenged the view that there is no regulatory lacuna.

“Existing sector-specific regulations have enormous gaps,” he said. “It does not provide protections, and even in the case of the EHRC and ICO, which do have the powers to do investigations, they historically haven’t used them. That’s for a few reasons. One is capacity, another is resources, and another is, frankly, a government that’s very opposed to the enforcement of regulation. If we don’t change those three things, we’ll continue to operate in an environment where we have unsafe products that are unregulated on the market. And that’s not helping anybody.”

Translating words to action

Gabriela Ramos, assistant director-general for social and human sciences at Unesco, said that although there is consensus emerging among the “big players” globally around the need to regulate for AI-related risks, all of the talk around ethical and responsible AI now needs to be translated into concrete policies.

She added that any legal institutional frameworks must account for the different rates of AI development across countries globally.

Camille Ford, a researcher in global governance at the Centre for European Policy Studies (CEPS), shared similar sentiments on the emerging consensus for AI regulation, noting that the significant number of ethical AI frameworks published by industry, academia, government and others in recent years all tend to emphasise the need for transparency, reliability and trustworthiness, justice and equality, privacy, and accountability and liability.

Commenting on the UK’s AI Safety Summit, Ford added that there needs to be further international debate on the concept of AI safety, as the conception of safety put forward at the summit primarily focused on existential or catastrophic AI risks through a limited technological lens.

“This seems to be the space that the UK is carving out for itself globally, and there’s obviously going to be more safety summits in the future [in South Korea and France],” she said. “So, this is a conception of safety that is going to stick and that is getting institutionalised at different levels.”

For Ford, future conversations around AI need to look at the risks of AI systems as they exist now, “rather than overly focusing on designing technical mitigations for risks that don’t exist yet”.

Political economy of AI

Highlighting Seth Lazar and Alondra Nelson’s paper, AI safety on whose terms?, Ford added that AI safety needs to be thought of more broadly in socio-technical terms, which means taking account of “the political economy of AI”.

Noting this would include AI’s impact on the environment, the working conditions of the data labellers who run and maintain the systems, and how data is collected and processed, Ford added that such an approach “necessitates understanding people and societies, and not just the technology”.

Pointing to the growing intensity of the international debate around AI as a sign of positive progress, Zeynep Engin, chair and director of the Data for Policy governance forum, added that while there is a lot of talk about how to make AI more responsible and ethical, these are “fuzzy terms” to put into practice.

She added that while there are discussions taking place across the world for how to deal with AI, the direction of travel is dominated by rich countries from the global north where the technology is primarily being developed.

“We can already see that the dominance is from the global north, and it becomes quite self-selective,” said Engin. “When things are left to grow organically [because of where the tech firms are], do you leave a big chunk of the global population behind?” She added that it’s difficult to create a regulatory environment that promotes social good and reflects the dynamic nature of the technology when so many are excluded from the conversation.

“I’m not just saying this from an equality perspective … but I also think if you’re saying AI regulation is as big a problem as climate change, and if you’re going to use this technology for social good, for public good in general, then you really need to have this conversation in a much more balanced way,” said Engin.

Noting that international AI ethics and governance forums are heavily concentrated in “wealthy, like-minded nations”, Ford said there was a risk of this approach being “overly institutionalised” at the expense of other actors who have not yet been as much a part of the conversation.


For Engin, a missing piece of the puzzle in the flurry of AI governance activity over the past year is cross-border community-led initiatives, which can help put more practical issues on the table.

Commenting on her recent visits to Africa, for example, she said there was a real feeling among those she spoke with that there are “no efficient platforms” for their representation on the international stage, and that the domination of the conversation by tech industry professionals means little attention is paid to what people on the ground need (in this case, enough electricity to even operate resource-intensive AI systems).

She added that, like the EU, other regions outside of the big AI developing centres (i.e. the US and China) are concerned about data sovereignty and ownership, and that there needs to be further global conversation on this issue, especially given the borderless nature of technology.

During a separate session on the People’s Panel for AI – an experiment in public engagement run by the Connected by Data campaign group – those involved spoke highly of the experience and stressed the need to integrate citizen assembly-style bodies into the new global governance frameworks that emerge.

People’s Panel participant Janet Wiegold, for example, suggested placing a team of ordinary citizens in the UK’s new AI Safety Institute so it is genuinely receptive to different public voices and concerns.

She added that while most involved in the Panel were not particularly tech-savvy, with many getting to grips with AI for the first time, the kind of deliberative public engagement that took place meant that, by the end of it, people were well-informed and able to speak about AI with confidence.

Tim Davies, director of research and practice at Connected by Data, added that while many in government and the private sector tend to dismiss or even fear public engagement either because of people’s lack of expertise or the tough ethical questions they may face, the People’s Panel experience showed how “constructive” it can be to include these voices, and how quickly people can learn about complex subjects if their intelligence is respected.
