
UK finance regulator tie-up with Nvidia allows firms to experiment with AI
FCA wants to support organisations in testing their ideas for the use of artificial intelligence
The UK’s financial services regulator has teamed up with Nvidia to provide an environment in which finance firms can safely test artificial intelligence (AI).
In what the Financial Conduct Authority (FCA) describes as a Supercharged Sandbox, firms will have access to the latest AI hardware and software.
The testing environment was first mooted in April, when the FCA said it planned to offer a service where companies under its watch can test out AI tools before they go live.
Any financial services firm looking to innovate and experiment with AI can participate in the Supercharged Sandbox, with access to data, technical expertise and regulatory support to speed up their innovation, the FCA said.
“This collaboration will help those that want to test AI ideas but who lack the capabilities to do so,” said Jessica Rusu, chief data, intelligence and information officer at the FCA. “We’ll help firms harness AI to benefit our markets and consumers, while supporting economic growth.”
Jochen Papenbrock, EMEA head of financial technology at Nvidia, added: “AI is fundamentally reshaping the financial sector by automating processes, enhancing data analysis and improving decision-making, which leads to greater efficiency, accuracy and risk management across a wide range of financial activities.”
He said that in the FCA testing environment, firms can explore AI innovations using Nvidia’s “full stack accelerated computing platform”.
AI take-up widening
A recent Bank of England survey found that 41% of finance firms are using AI to optimise internal processes, while 26% are using AI to enhance customer support.
Sarah Breeden, deputy governor of financial stability at the Bank of England, said many firms have moved forward with AI and are now using it to mitigate the external risks they face from cyber attack, fraud and money laundering.
According to Breeden, a significant evolution from a financial stability perspective is the emergence of new use cases. For example, she said the survey revealed that 16% of respondents are using AI for credit risk assessment, and 19% are planning to do so over the next three years. A total of 11% are using it for algorithmic trading, with a further 9% planning to do so in the next three years.
Steve Morgan, global banking principal at Pegasystems, which already works with banks on AI and automation projects, said giving finance companies access to “play with AI in a sandbox setting makes sense, as for some, it is a high cost of entry”.
But, he added: “Regardless of this approach, allowing AI access and experimentation, no institution is going to deploy AI in the real world without absolute certainty about its accuracy and robustness. Creating, in the sandbox, an AI app that is 95% effective at detecting fraud might not be good enough when you must accept 5% of the cases will be false positives.”
Morgan said this is a “recipe for financial and reputational losses”, and added that it’s an example where humans will stay “in the loop”.
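Morgan’s point can be made concrete with a back-of-the-envelope calculation. The transaction volume and fraud rate below are hypothetical assumptions, not figures from the article; only the 5% false positive rate comes from his example. Because fraud is rare relative to total volume, even a small false positive rate translates into a large absolute number of legitimate transactions being flagged.

```python
# Illustrative sketch only: volume and fraud rate are hypothetical assumptions.
daily_transactions = 1_000_000   # hypothetical daily transaction volume
fraud_rate = 0.001               # hypothetical: 0.1% of transactions are fraudulent
false_positive_rate = 0.05       # the 5% figure from Morgan's example

fraudulent = daily_transactions * fraud_rate
legitimate = daily_transactions - fraudulent

# Legitimate transactions wrongly flagged as fraud each day
false_positives = legitimate * false_positive_rate
print(f"{false_positives:,.0f} legitimate transactions flagged per day")
```

Under these assumptions, roughly 50,000 genuine transactions would be blocked or queried every day, which is why a model that looks accurate in a sandbox can still be unusable in production without human oversight.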
“AI algorithms need just the same level of scrutiny that regulators give to – for example – automated credit policy instantiation to ensure responsible lending decisions are made,” he said. “The best way to achieve this is ensuring the powerful new processing capability offered by modern AI can be governed, monitored and transparent, such as when it is tied into workflow software that can manage how complex and regulated processes should proceed within clear guardrails.”
Read more about AI in financial services
- Lloyds Banking Group is running a six-month training programme with the aim of giving senior leadership AI skills.
- Financial services regulator proposes artificial intelligence testing environment as part of its AI Lab.
- MPs at a recent artificial intelligence governance meeting were keen to hear how Ofcom, the FCA and the ICO are preparing for UK AI legislation.