Public sector buyers of AI tech must interrogate its suitability

The Ada Lovelace Institute has published a review of public sector use of artificial intelligence foundation models, looking at the risks and opportunities associated with the technology, and how these can be dealt with from the early stages of procurement onwards

Public sector bodies looking to deploy artificial intelligence (AI) foundation models must conduct extensive due diligence of the systems and suppliers to identify and mitigate the various risks associated with the technology early on, according to the Ada Lovelace Institute.

The Institute defines foundation models as a type of AI system designed for a wide range of possible use cases and with the capability to perform a variety of distinct tasks, from translation and text summarisation to generating draft reports from notes or responding to queries from members of the public.

Unlike “narrow AI systems”, which are trained for one specific task and context, the Institute said foundation models, including large language models (LLMs), are characterised by their scale and potential applicability to a much wider range of situations.
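To make that distinction concrete, the snippet below is a minimal sketch of how a single foundation model can be steered towards quite different tasks simply by changing the prompt. The generate function is a hypothetical stand-in for whichever model or hosted API an organisation actually uses, not a specific product.

```python
# Minimal sketch: one foundation model handling several distinct tasks,
# steered only by the prompt. `generate` is a hypothetical stand-in for
# whichever model or hosted API is actually used.

def generate(prompt: str) -> str:
    """Placeholder for a call to a foundation model; returns a dummy response."""
    return f"[model output for prompt: {prompt[:40]}...]"

def summarise(document: str) -> str:
    return generate(f"Summarise the following document in three sentences:\n{document}")

def translate(text: str, target_language: str) -> str:
    return generate(f"Translate the following text into {target_language}:\n{text}")

def answer_query(query: str, context: str) -> str:
    return generate(
        "Using only the context below, answer the member of the public's query.\n"
        f"Context:\n{context}\nQuery: {query}"
    )

if __name__ == "__main__":
    print(summarise("Minutes of the housing committee meeting..."))
    print(translate("Your application has been received.", "Welsh"))
    print(answer_query("When will my application be processed?",
                       "Applications are processed within 10 working days."))
```

A narrow system, by contrast, would typically need to be trained or configured separately for each of these tasks.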

In terms of potential public sector deployments, it has been proposed that the technology could be used for tasks such as document analysis, decision-making support and customer service improvement, which it is claimed would lead to greater efficiency in public service delivery, more personalised and accessible government communications tailored to individual needs, and improvements in the government’s own internal knowledge management.

“However, these benefits are unproven and remain speculative,” said the Institute in a policy briefing published in early October 2023, noting that while there is optimism in both the public sector and industry about the potential of these systems – particularly in the face of tightening budgetary constraints and growing user needs – there are also real risks around issues such as bias and discrimination, privacy breaches, misinformation, security, over-reliance on industry, workforce harms and unequal access.

It further added that there is a risk that such models are adopted by the public sector because they are a new technology, rather than because they are the best solution to a problem.

“Public-sector users should therefore carefully consider the counterfactuals before implementing foundation models. This means comparing proposed use cases with more mature and tested alternatives that might be more effective, provide better value for money or pose fewer risks – for example, employing a narrow AI system or a human employee to provide customer service rather than building a foundation model-powered chatbot.”

Taskforce launch

Although official use of foundation model applications like ChatGPT in the public sector is currently limited to demos, prototypes and proofs of concept (though there is some evidence that individual civil servants are using them on an informal basis), the Institute noted the UK government is actively seeking to accelerate the UK’s AI capabilities through, for example, the launch of its £100m Foundation Model Taskforce in June 2023.

Given this push, much of the briefing document focuses on the leading role the private sector has played in developing the technology, and how public sector bodies can navigate their own procurement and deployments in a way that limits the various harms associated with it.

For example, the Institute said there is a risk that being overly reliant on private technology providers can create a misalignment between applications developed for a wide range of clients and the needs of the public sector, which generally handles much larger volumes of sensitive information.

There are also further risks arising from automation bias, it said, where users are overly trusting of the foundation model outputs, or treat them as if they are from a human.

It added that the wide range of risks currently associated with foundation models is best dealt with by the upstream providers of these models at the training stage through, for example, dataset cleaning, instruction fine-tuning, or reinforcement learning from human feedback.
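By way of illustration, the sketch below shows the flavour of the dataset-cleaning step referred to here: dropping exact duplicates and documents flagged by a crude blocklist before a model is trained. The corpus and blocklist are hypothetical examples, and the filtering pipelines used by real providers are far more extensive.

```python
# Minimal sketch of an upstream dataset-cleaning pass: dropping exact duplicates
# and documents flagged by a crude blocklist before training. The corpus and
# blocklist are hypothetical; production pipelines use far more extensive filtering.

raw_corpus = [
    "Guidance on completing the housing benefit form.",
    "Guidance on completing the housing benefit form.",   # exact duplicate
    "Claimant record: Jane Doe, date of birth 01/02/1990.",
    "The committee approved the budget for 2024.",
]

BLOCKLIST = ("claimant record", "date of birth", "national insurance")

def is_blocked(text: str) -> bool:
    """Flag documents containing obviously sensitive terms."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

seen = set()
cleaned_corpus = []
for doc in raw_corpus:
    if doc in seen or is_blocked(doc):
        continue
    seen.add(doc)
    cleaned_corpus.append(doc)

print(f"Kept {len(cleaned_corpus)} of {len(raw_corpus)} documents")
```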

As such, the Institute said all public bodies procuring external foundation model capabilities should require detailed information about the associated risks and mitigations upfront, during both procurement and implementation.

“For example, when procuring or developing a summarisation tool, public-sector users should ask how issues like gender or racial bias in text outputs are being addressed through training data selection and model fine-tuning,” it said. “Or when deploying a chatbot for public inquiries, they should ensure the process of using data to prompt the underlying large language model does not violate privacy rights, such as by sharing data with a private provider.”
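As one illustration of the privacy point in that example, the sketch below redacts obvious personal identifiers from a citizen’s query before it is sent to an externally hosted model. The patterns are crude examples, and send_to_hosted_llm is a hypothetical stub rather than any particular supplier’s API; real deployments would rely on contractual and technical safeguards well beyond simple redaction.

```python
import re

# Crude patterns for common personal identifiers. Real redaction would use
# proper PII-detection tooling and be assessed against data protection law.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+44|0)\d[\d \-]{8,}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_hosted_llm(prompt: str) -> str:
    """Hypothetical stub for a call to an externally hosted large language model."""
    return f"[response to: {prompt[:60]}...]"

query = ("My National Insurance number is QQ123456C and my email is "
         "jane@example.com. When will my claim be processed?")

# Only the redacted text crosses the organisational boundary.
print(send_to_hosted_llm(redact(query)))
```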

To support more effective foundation model governance, the Institute added, policymakers should regularly review and update the relevant guidance; set procurement requirements to uphold standards, which should be written into tenders and contracts; require data used by the models to be held locally; mandate independent third-party audits of the systems; and pilot limited use cases before a wider rollout to identify risks and challenges.

It added that effective governance of AI needs to be underpinned by the Nolan Principles of Public Life, which include accountability and openness.

However, in July 2023, the Institute warned that the UK government’s “deregulatory” data reform proposals will undermine the safe development and deployment of AI by making “an already-poor landscape of redress and accountability” even worse.

In a report analysing those proposals – largely contained in the March 2023 AI whitepaper – it found that, because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts.

This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.

“In these contexts, there will be no existing, domain-specific regulator with clear overall oversight to ensure that the new AI principles are embedded in the practice of organisations deploying or using AI systems,” it said, adding that independent legal analysis conducted for the Institute by data rights agency AWO found that, in these contexts, the protections currently offered by cross-cutting legislation such as the UK GDPR and the Equality Act often fail to protect people from harm or give them an effective route to redress.

“This enforcement gap frequently leaves individuals dependent on court action to enforce their rights, which is costly and time-consuming, and often not an option for the most vulnerable.”

Risks and opportunities

In September 2023, the House of Lords launched an inquiry into the risks and opportunities presented by LLMs, and how the UK government should respond to the technology’s proliferation.

In his written evidence to the inquiry, Dan McQuillan, a lecturer in creative and social computing, highlighted the risks of outsourcing decision-making processes to such AI tools.

“The greatest risk posed by large language models is seeing them as a way to solve underlying structural problems in the economy and in key functions of the state, such as welfare, education and healthcare,” he wrote.

“The misrepresentation of these technologies means it’s tempting for businesses to believe they can recover short-term profitability by substituting workers with large language models, and for institutions to adopt them as a way to save public services from ongoing austerity and rising demand.

“There is little doubt that these efforts will fail,” said McQuillan. “The open question is how much of our existing systems will have been displaced by large language models by the time this becomes clear, and what the longer-term consequences of that will be.”
