More finance firms join FCA’s AI testing initiative
Barclays, Experian and UBS join the FCA’s live AI testing initiative, exploring cutting-edge technologies like agentic AI and SLMs to ensure safe, responsible innovation in UK financial markets
Barclays, Experian and UBS are among the latest finance firms to join the Financial Conduct Authority’s (FCA) live testing of artificial intelligence (AI) applications.
The second cohort of firms joins the likes of Lloyds Banking Group, NatWest and Monzo in the FCA's safe environment for trying out AI in real-world conditions, with appropriate regulatory support and oversight.
The initiative aims to support companies whose AI development is well advanced and that are ready to deploy it in real-world markets.
Jessica Rusu, chief data, information and intelligence officer at the FCA, said: “We’re continuing to collaborate with firms to support the safe and responsible development of AI in UK financial markets.”
She added that “tailored” support from the FCA and its technical partner Advai reflects the regulator’s commitment to supporting the pace of change in AI and demonstrates “how regulators and industry can work together to harness innovation responsibly”.
The FCA said: “This initiative helps successful applicants explore key questions around risk management and live monitoring to support the responsible deployment of AI for consumers and markets.”
It said this second group of participants will test customer-facing and business-to-business use cases. These will include AI-enabled targeted support for investments, credit score insights for consumers, agentic payments and money laundering detection.
According to the regulator, banks are experimenting with technology ranging from agentic AI and small language models (SLMs) to emerging approaches such as neurosymbolic AI.
The FCA plans to publish a report highlighting both good and poor practices this year.
Proactive approach to AI risks needed
Separately, UK financial services regulators were criticised earlier this year by MPs on the Treasury Committee for taking what they perceived as a “wait-and-see” approach to AI regulation.
In a report, MPs said: “The UK public and the country’s finance system are exposed to potential serious harm because regulators in the financial sector are not doing enough.”
In response, Sarah Breeden, deputy governor of financial stability at the Bank of England, said: “We share the [committee’s] view that AI has broad, complex and likely long-term implications for how the UK financial system serves the real economy. However, we do not agree with [its] characterisation that the bank is taking a ‘wait-and-see’ approach to the use of AI in financial services.”
She added: “Far from taking a ‘wait-and-see’ approach, we have invested heavily in analysing the current and future risks posed by both the use of AI in financial services, and the broader investment in and adoption of AI across the wider economy.”
Last week, major UK banks entered discussions with regulators, as well as finance and national security organisations, after the latest Anthropic AI model, named Mythos, unearthed decades-old vulnerabilities.
Meg Hillier, MP, chair of the Treasury Committee, said: “Recent developments in the world of AI, such as Anthropic’s Project Mythos, show us how fast this transformative technology is moving. It has never been more important that those responsible for maintaining the UK’s financial stability take a proactive approach to understanding and mitigating the risks AI may pose to our financial system.”
Read more about tech regulation in the finance sector
- Banks called in by regulators as latest artificial intelligence model identifies thousands of software vulnerabilities.
- UK financial regulators exposing public to ‘potential serious harm’ due to AI positions.
- Government faces questions about why US AWS outage disrupted UK tax office and banking firms.
