Digital Ethics Summit 2025: Open sourcing and assuring AI
Industry experts met to discuss the ethical challenges associated with assuring AI systems, and how open source approaches can challenge concentrations of capital and power
Open sourcing artificial intelligence (AI) can help combat concentrations of capital and power that currently define its development, while nascent assurance practices need regulation to define what “good” looks like.
Speaking at trade association TechUK’s ninth annual Digital Ethics Summit, panellists discussed various dynamics at play in the development of AI technologies, including the under-utilisation of open source approaches, the need for AI assurance to be continuous and iterative, and the extent to which regulation is needed to inform current assurance practices.
During the previous two summit events – held in December 2023 and 2024 – delegates stressed the need for well-intentioned ethical AI principles to be translated into concrete practical measures, and highlighted the need for any regulation to recognise the socio-technical nature of AI that has the potential to produce greater inequality and concentrations of power.
A major theme of these previous discussions was who dictates and controls how technologies are developed and deployed, and who gets to lead discussions around what is considered “ethical”.
While discussions at the 2025 summit touched on many of the same points, conversations this year focused on the UK’s developing AI assurance ecosystem, and the degree to which AI’s further development could be democratised by more open approaches.
Open sourcing models and ecosystems
In a conversation about the benefits and disadvantages of open versus closed source AI models, speakers noted that most models do not fall neatly into either category, and instead exist on a spectrum where aspects of any given model are either open or closed.
However, they were also clear that there are exceedingly few genuinely open source models and approaches being developed.
Matthew Squire, chief technology officer and founder of Fuzzy Labs, for example, noted that “a lot of these ostensibly open source models, what they’re really offering as open is the model weights,” which are essentially the parameters a model uses to transform input data into an output.
Noting that the vast majority of model developers do not currently open up other key aspects of a model, including the underlying data, training parameters or code, he concluded that most models fall decidedly on the closed end of the spectrum. “[Model weights represent] the final product of having trained that model, but a lot more goes into it,” said Squire.
For Linda Griffin, vice-president of global policy at Mozilla, while AI models do not exist in a binary of open vs closed, the ecosystems they are developed in do.
Highlighting how the internet was built on open source software before large corporations like Microsoft enclosed it in their own infrastructure, she said a similar dynamic is at play today with AI, where a handful of companies – essentially those that control web access via ownership of browsers, and which therefore have access to mountains of customer data – have enclosed the AI stack.
“What the UK government really needs to be thinking about right now is what is our long-term strategy for procuring, funding, supporting, incentivising more open access, so that UK companies, startups and citizens can build and choose what to do,” said Griffin. “Do you want UK businesses to be building AI or renting it? Right now, they’re renting it, and that is a long-term problem.”
‘Under-appreciated opportunity’
Jakob Mokander, director of science and technology policy at the Tony Blair Institute, added that open source is an “under-appreciated opportunity” that can help governments and organisations capture real value from the technology.
Noting that openness and open source ecosystems have a lot of advantages compared with closed systems for spurring growth and innovation, he highlighted how the current absence of open approaches also carries with it significant risks.
“The absence of open source is maybe an even greater risk, because then you have a high-power concentration, either in the hands of government actors or in terms of one or two big tech companies,” said Mokander. “Whether you look at this from a primarily growth-driven or information-driven lens, or from a risk-driven lens, you would want to see a strong open ecosystem.”
When it comes to the relationship between open source and AI assurance, Rowley Adams, the lead engineer at EthicAI, said it allows for greater scrutiny of developer claims when compared with closed approaches. “From an assurance perspective, verifiability is obviously crucial, which is impossible with closed models, taking [developers at their] word at every single point, almost in a faith-based way,” he said. “With open source models, the advantage is that you can actually go and probe, experiment and evaluate in a methodical and thorough way.”
Asked by Computer Weekly whether governments need to consider new antitrust legislation to break up the AI stack – given the massive concentrations of power and capital that stem from a few companies controlling access to the underlying infrastructure – speakers said there is a pressing need to understand how markets are structured in this space.
Griffin, for example, said there needs to be “long-term scenario planning from government” that takes into account the potential for market interventions if necessary.
Mokander added that the increasing capabilities of AI need to go “hand-in-hand with new thinking on anti-trust and market diversification,” and that it is key “to not have reliance [on companies] that can be used as a leverage against government and the democratic interest”.

“That doesn’t necessarily mean they have to prevent private ownership, but it’s the conditions under which you operate those infrastructures,” he said.
Continuous assurance needed
Speaking on a separate panel about the state of AI assurance in the UK, Michaela Coetsee, the AI ethics and assurance lead at Advai, pointed out that, due to the dynamic nature of AI systems, assurance is not a one-and-done process, and instead requires continuous monitoring and evaluation.
“Because AI is a socio-technical endeavour, we need multifaceted skills and talent,” she said. “We need data scientists, ML [machine learning] engineers, developers. We need red teamers who specifically look for vulnerabilities within the system. We need legal, policy and AI governance specialists. There’s a whole range of roles.”
However, Coetsee and other panellists were clear that, as it stands, there is still a need to properly define assurance metrics and standardise how systems are tested, something that can be difficult given the highly contextual nature of AI deployments.
Stacie Hoffmann, head of strategic growth and department for data science and AI at the National Physical Laboratory, for example, noted that while there are lots of testing and evaluation tools either on the market or being developed in-house – which can ultimately help build confidence in the reliability and robustness of a given system – “there’s not that overarching framework that says ‘this is what good testing looks like’.”
Highlighting how assurance practices can still provide insight into whether a system is acting as expected, or its degree of bias in a particular situation, she added that there is no one-size-fits-all approach. “Again, it’s very context-specific, so we’re never going to have one test that can test a system for all eventualities – you’re going to need to bring in different elements of testing based on the context and the specificity,” said Hoffmann.
For Coetsee, one way to achieve a greater degree of trust in the technology, in lieu of formal rules, regulations or standards, is to run limited test pilots where models ingest customer data, so that organisations can gain better oversight of how they will operate in practice before making purchase decisions.
“I think people have quite a heightened awareness of the risks around these systems now … but we still do see people buying AI off of pitch decks,” she said, adding that there is also a need for more collaboration throughout the UK’s nascent AI assurance ecosystem.
“We do need to keep working on the metrics … it would [also] be amazing to understand and collaborate more to understand what controls and mitigations are actually working in practice as well, and share that so that you can start to have more trustworthy systems across different sectors.”
Horse or cart: assurance vs regulation
Speaking on how the digital ethics conversation has evolved over the past year, Liam Booth – a former Downing Street chief of staff who now works in policy, communication and strategy at Anthropic – noted that while global firms like his would prefer a “highest common denominator” approach to AI regulation, whereby they adhere to the strictest regulatory standards possible to ensure compliance across jurisdictions with differing rules, the UK itself should not “rush toward regulation” before there is a full understanding of the technology’s capabilities or how it has been developed.
“Because of things like a very mature approach to sandboxes, a very open approach to innovation and regulatory change, the UK could be the best place in the world to experiment, deploy and test,” he said, adding that while the UK government’s focus on building an assurance ecosystem for AI is positive, the country will not be world-leading in the technology unless it ramps up diffusion and deployment.
“You are not going to have a world-leading assurance market, either from a regulatory or commercial product side, if there aren’t people using the technology that wish to purchase the assurance product,” said Booth.
However, he noted that building up the assurance ecosystem can be helpful for promoting trust in the tech, as it will give both public and private sectors more confidence to use it.
“In a world in which you’re not the datacentre capital, or you may not necessarily have a frontier model provider located in your country, you need to continually innovate and think about what your relevance is at that [global] table, and keep recreating yourself every few years,” said Booth.
Taking a step back
However, for Gaia Marcus, director of the Ada Lovelace Institute, while it is positive to be talking about assurance in more detail, “we need to take a massive step back” and get the technology regulated first as a prerequisite to building trust in it.
Highlighting Ada’s July 2023 audit of UK AI regulation – which found that “large swathes” of the economy are either unregulated or only partially regulated when it comes to use of AI – she argued there are no real sector-specific rules around how AI as a general-purpose technology should be used in contexts like education, policing or employment.
Marcus added that assurance benchmarks for deciding “what good looks like” in a range of different deployment contexts can therefore only be decided through proper regulation.
“You need to have a basic understanding of what good looks like … if you have an assurance ecosystem where people are deciding what they’re assuring against, you’re comparing apples, oranges and pears,” she said.
Marcus added that, due to the unrelenting hype and “snake oil” around AI technology, “we need to ask very basic questions” around the effectiveness of the technology and whose interests it is ultimately serving.
“We’re falling down on this really basic thing, which is measuring and evaluating, and holding data-driven and AI technologies to the same standard that you would hold any other piece of technology to,” she said.
