UK financial regulators’ stance on AI exposing public to ‘potential serious harm’

MPs warn that action is needed to ensure artificial intelligence is adopted safely in the finance sector as companies capitalise on its opportunities

The UK public and the country’s finance system are “exposed to potential serious harm” because regulators in the financial sector are “not doing enough” to manage risks introduced by artificial intelligence (AI), according to a Treasury Committee report.

Committee chair Meg Hillier does not believe the finance sector is prepared for a major AI-related incident, according to the report.

One banking insider warned that the people working in banks don’t understand the risks, such as the concentration of services from a small number of suppliers, and “think they are on a battleship that can’t sink”.

The MPs said these risks stem from the positions adopted by the Bank of England and the Financial Conduct Authority (FCA), which the committee described as a “wait-and-see approach”.

“The major public financial institutions, which are responsible for protecting consumers and maintaining stability in the UK economy, are not doing enough to manage the risks presented by the increased use of AI in the financial services sector,” said the committee of MPs. 

Hillier said: “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.” 

The Treasury Committee said 75% of UK financial services firms are using AI. It acknowledged that AI “could bring considerable benefits to consumers”, but warned that action is required to ensure the technology is adopted safely.

Stress test

The committee recommended that the Bank of England and the FCA conduct AI-specific stress-testing to boost businesses’ readiness for a potential “AI-driven market shock”. It also called on the FCA to publish “practical guidance on AI” by the end of the year, including how consumer protection rules apply and who in finance firms should be accountable. 

The committee also called on the government to designate AI and cloud providers under its Critical Third Parties regime, which gives the FCA and the Bank of England powers of investigation and enforcement over non-financial firms that provide critical services to the UK financial services sector. 

Over a year after it was set up, no organisations have yet been designated under the regime.

One IT professional from the UK banking sector, who wished to remain anonymous, said “the concentration of AI and cloud services from just a few suppliers” presented a huge risk.

“The concentration risk is getting worse. There are only two or three cloud services and a handful of major AI providers, and all the banks are using them. So, if something goes wrong, the entire financial system sits on top of it.”

He added: “I hear stories from people in the industry saying that the software writes itself, deploys itself, integrates itself and tests itself. The humans have no idea what’s going on anymore.

“The problem is the people at banks don’t understand the risk and think they are on a battleship that can’t sink,” he added, warning that it could end up more like the Titanic.
