
Ada Lovelace: using market forces to professionalise AI assurance
The Ada Lovelace Institute examines how ‘market forces’ can be used to drive the professionalisation of artificial intelligence assurance in the context of a wider political shift towards deregulation
Using market incentives to professionalise the artificial intelligence (AI) assurance field in lieu of formal regulation will require organisations to create shared definitions and practices, according to the Ada Lovelace Institute (ALI).
The group said that the professionalisation of AI assurance – including practices such as algorithmic audits, red teaming and impact assessments – can help companies more clearly demonstrate trustworthiness while also reducing the potential harms associated with a range of AI products.
However, in a report published 10 July 2025, the ALI said that while AI assurance to date has been incentivised to some extent by regulations, such as the European Union’s AI Act or New York City’s Local Law 144, the appetite for further rulemaking on AI has been dampened by a notable political shift towards deregulation globally.
Within a political context where regulatory “compliance will likely be removed as a motivator for companies to adopt assurance”, the ALI spoke with a number of current practitioners about how market levers can still be used to incentivise the professionalisation of the emergent AI assurance field.
“Market-driven forces, like preventing reputational damage stemming from unassessed and underperforming systems and increasing customer trust, may provide a ‘competitive advantage’ incentive for companies to voluntarily adopt assurance,” it said.
“Similarly, adopting assurance can signal to individual and institutional investors that a company has meaningfully reduced the risk of high-profile or high-cost failures. These strategies already exist as incentives for businesses, and professionalisation of AI assurance could better support these goals.”
The ALI added that the “uncertain political economic climate” also underscores the need for “adaptive frameworks” that can evolve alongside both the technology and the growing body of evidence around what AI assurance looks like.
In its recommendations for assurance practitioners and their organisations, the ALI said that such frameworks would need to distinguish between AI systems in general and those used in narrower contexts, both in terms of the practical, technical and legal competencies needed to assure each type of system, and in terms of the standards that should be applied to each.
“For AI assurance to professionalise, the field needs to define the core knowledge and practices that its practitioners should share,” it said, adding that if these are too narrowly or rigidly defined, they may not capture the full array of risks, nor keep pace with the evolution of AI technologies.
“Competencies that are effective today may prove inadequate for emerging systems, especially as new capabilities introduce new risks. At the same time, without some agreement on what practitioners should know and do, the field will struggle to build a cohesive professional identity.”
The ALI further noted that assurance practitioners currently lack a universally accepted set of standards against which to assess AI systems, and that there is no consensus on who should be setting them.
“Far from being an apolitical process, standards present an opportunity for stakeholders seeking to influence what assurance entails and who gets to define it,” it said. “Our evidence reflected this dynamic, as participants from several different organisations suggested that their own organisations were best positioned to drive standard development.”
In lieu of AI-focused regulations, the ALI highlighted how both competencies and standards could potentially be driven by companies innovating on AI safety.
“The ‘three-point’ safety seatbelts that are now universal to all automotive vehicles today were designed by an engineer at Volvo, the Swedish manufacturing company, in 1959. Volvo had spearheaded a company culture of safety since its inception, and waived its patent rights to the seatbelt’s design,” it said.
“As industry coalesced around three-point belt adoption, regulation responded: in the UK, seatbelt production in cars became law for car manufacturers in 1965, with the requirement for drivers and passengers in 1983 and 1991 respectively…The wide dissemination of standards and methods for AI assurance may confer similar benefits.”
The ALI added that while this example shows how internally driven assurance practices can create utility and impact beyond the level of the company, the Volkswagen emissions scandal provides an example of how internal assurance standards can be used to whitewash unethical practices.
“Accordingly, assurance practices must be buttressed by accountability and enforcement mechanisms to protect businesses, people and society from harm, in instances where assurance fails or is thwarted,” it said.
However, those the ALI spoke to were also clear that without regulation and, in particular, liability regimes for AI – which would set up strict guardrails and support people seeking legal redress – there was not sufficient incentive to adopt assurance practices: “One interviewee working on AI audit certification felt that the most straightforward path to professionalising the industry would come from widespread mandating of auditing.”
Ultimately, whether professionalisation is prompted by government action or market incentives, both levers will need to be used together to have the greatest impact.
“We conclude that there is considerable opportunity for a multistakeholder coalition of actors to collaborate to support professionalisation of AI assurance, including civil society, industry bodies, international standards development organisations and national policymakers,” the ALI said.
“Such efforts, as we have argued, will require support from policymakers and regulators – for example, policymakers enacting funding initiatives or subsidies to support uptake of certification schemes.”
In November 2024, the UK government launched an AI assurance platform designed to help businesses across the country identify and mitigate the potential risks and harms posed by the technology, as part of a wider push to bolster the UK’s burgeoning AI assurance sector.
“AI Management Essentials [AIME] will provide a simple, free baseline of organisational good practice, supporting private sector organisations to engage in the development of ethical, robust and responsible AI,” said a government report on the future of AI assurance in the UK at the time.
“The self-assessment tool will be accessible for a broad range of organisations, including SMEs. In the medium term, we are looking to embed this in government procurement policy and frameworks to drive the adoption of assurance techniques and standards in the private sector.”
Read more about AI safety
- Assessing the risk of AI in enterprise IT: We speak to security experts about how IT departments and security leaders can ensure they run artificial intelligence systems safely and securely.
- Government renames AI Safety Institute and teams up with Anthropic: Addressing the Munich Security Conference, UK government technology secretary Peter Kyle announces a change to the name of the AI Safety Institute and a tie-up with AI company Anthropic.
- UK government unveils AI safety research funding details: Through its AI Safety Institute, the UK government has committed an initial pot of £4m to fund research into various risks associated with AI technologies, which will increase to £8.5m as the scheme progresses.