Fintech leaders call for united front against AI-driven cyber crime

As AI makes financial scams more personalised and convincing, fintech experts have called for deeper collaboration and the use of behavioural analytics and other technologies to protect consumers

The financial industry must work more closely to combat the rise of artificial intelligence (AI)-enabled cyber crime to protect consumers and the integrity of the financial system, a panel of top banking and technology executives urged at the Singapore Fintech Festival 2025.

Moderated by Pieter Franken, co-founder and director for Japan at the Global Finance and Technology Network (GFTN), the panel noted that criminals are now employing the same AI tools as the industry, making it difficult for financial institutions to keep up with the fast-changing threat landscape.

“AI-powered cyber crime means the barrier for sophistication is much lower,” said Aditi Sawhney, senior vice-president for security solutions in Asia-Pacific at Mastercard. “Fraud, scams, and cyber are converging more than ever before. One stolen credential can translate into large-scale scams, and with things moving so quickly, more collaboration is needed.”

For one thing, the rise of real-time payments has made it harder for financial institutions to detect fraudulent transactions because these payments are processed instantly, leaving little time for traditional fraud detection methods to identify and block suspicious activities.

Kathleen Gan, chief risk and compliance officer for Asia and Middle East at HSBC, described the dilemma that banks face in providing a seamless customer experience while implementing security measures.

“The challenge for the bank is, how do you balance between speed and trust, and how much friction do you create in the system?” Gan said, adding that the solution could be in the form of “intelligent friction”, where technology can be used to create security measures that are invisible to legitimate users but effective against criminals.

For example, Gan said AI can help detect behavioural anomalies by identifying a device’s position, how a user is swiping the screen, and the person’s typing speed. “That can help to detect whether somebody is under stress and if we need to review the transactions,” she explained.
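To make the idea concrete, the short sketch below scores a session's behavioural signals against a user's own historical baseline and only adds friction when the deviation is large. The feature names, the Session structure and the threshold are illustrative assumptions for this example, not a description of HSBC's actual system.

```python
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Session:
    """Illustrative behavioural telemetry for one app session (names are assumptions)."""
    typing_speed: float    # characters per second
    swipe_velocity: float  # pixels per second
    device_tilt: float     # degrees from the user's usual holding angle


def anomaly_score(history: list[Session], current: Session) -> float:
    """Average absolute z-score of the current session against the user's baseline."""
    features = ("typing_speed", "swipe_velocity", "device_tilt")
    scores = []
    for f in features:
        values = [getattr(s, f) for s in history]
        mu, sigma = mean(values), stdev(values) or 1.0  # avoid division by zero
        scores.append(abs(getattr(current, f) - mu) / sigma)
    return sum(scores) / len(scores)


def needs_review(history: list[Session], current: Session, threshold: float = 2.5) -> bool:
    """'Intelligent friction': step up checks only when behaviour deviates sharply."""
    return anomaly_score(history, current) > threshold


# A user who normally types quickly suddenly types slowly and swipes erratically,
# possible signs of stress or coercion, so the transaction is flagged for review.
baseline = [Session(6.1, 900, 12), Session(5.8, 870, 10), Session(6.3, 920, 11)]
suspect = Session(2.0, 300, 45)
print(needs_review(baseline, suspect))  # True
```

In production such models would use far richer telemetry and learned thresholds, but the principle is the same: legitimate users see no friction, while unusual behaviour triggers a review.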

As financial institutions shore up their cyber defences, threat actors are starting to turn their attention to softer targets to achieve their goals, said Tobias Gondrom, chief information security officer at UOB. “You will find that banks are very hard to hack, which is why they are going after the ecosystem, say, a merchant, or the customer,” he added.

Indeed, Gondrom pointed out that social engineering, rather than high-tech hacking, is behind the vast majority of losses. “It’s not really a hack 95% of the time. It was the customer who was tricked into transferring the money because he thought he could make a lot of returns on an investment, or he found the love of his life.”

Matthew DeLauro, president of go-to-market at Seon, a fraud prevention firm, concurred, noting that the growing use of digital services has made the consumer the primary battleground, with attacks becoming hyper-personalised and more convincing, thanks to readily available AI tools and data traded on the dark web.

To counter such attacks, he called for more proactive security measures, including ways to assess a user’s risk profile based on their digital footprint, such as their email address, even before the KYC (know-your-customer) process kicks in. “You can start putting customers into risk cohorts and create features that make it very difficult for the fraudster to assess the security pattern within the application,” he explained.
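A minimal sketch of that pre-KYC cohorting might look like the following; the signals (disposable email domain, domain age, breach history) and the thresholds are assumptions made for illustration, not Seon's scoring model.

```python
# A minimal sketch of pre-KYC risk cohorting from a digital footprint.
# Signals, weights and the domain list are illustrative assumptions.

DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com", "tempmail.example"}


def footprint_signals(email: str, domain_age_days: int, seen_in_breach: bool) -> dict:
    """Derive simple risk signals from an email address and lookups supplied by the caller."""
    domain = email.split("@")[-1].lower()
    return {
        "disposable_domain": domain in DISPOSABLE_DOMAINS,
        "young_domain": domain_age_days < 180,
        # A genuine, long-lived identity usually appears in at least one historical breach,
        # so a completely clean footprint can itself be a weak signal of a synthetic identity.
        "no_breach_history": not seen_in_breach,
    }


def risk_cohort(signals: dict) -> str:
    """Bucket the applicant before KYC so downstream checks can add or remove friction."""
    score = sum(signals.values())  # each True signal adds one point
    if score >= 2:
        return "high-risk"    # e.g. require document verification up front
    if score == 1:
        return "medium-risk"  # e.g. add device and velocity checks
    return "low-risk"         # e.g. streamlined onboarding


signals = footprint_signals("newuser@tempmail.example", domain_age_days=30, seen_in_breach=False)
print(risk_cohort(signals))  # high-risk
```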

Sawhney highlighted Mastercard’s multi-layered approach, which combines AI-driven fraud detection with threat intelligence from its $2.65bn acquisition of Recorded Future. The company is also a founding member of public-private partnerships like the Global Anti-Scam Alliance (Gasa) and provides risk scores to banks to help them identify scams and money mule accounts.

“You need insights and intelligence at that network level, because in scams, you need to understand what’s happening at other institutions. Your own data isn’t enough,” Sawhney said.
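The value of that network-level view can be illustrated with a small sketch: each bank sees only its own outbound transfers, but pooling reports across institutions surfaces a likely money mule account. The data, account identifiers and threshold below are invented for the example.

```python
from collections import defaultdict

# Each bank sees only its own outbound transfers; pooled across institutions,
# a pattern emerges that no single bank could detect alone. Data is invented.
reports = [
    {"bank": "Bank A", "beneficiary": "ACC-901", "amount": 4800},
    {"bank": "Bank B", "beneficiary": "ACC-901", "amount": 4950},
    {"bank": "Bank C", "beneficiary": "ACC-901", "amount": 4700},
    {"bank": "Bank A", "beneficiary": "ACC-555", "amount": 120},
]


def mule_candidates(reports: list[dict], min_banks: int = 3) -> dict[str, int]:
    """Flag beneficiary accounts that receive transfers from several different institutions."""
    banks_per_beneficiary = defaultdict(set)
    for r in reports:
        banks_per_beneficiary[r["beneficiary"]].add(r["bank"])
    return {
        account: len(banks)
        for account, banks in banks_per_beneficiary.items()
        if len(banks) >= min_banks
    }


print(mule_candidates(reports))  # {'ACC-901': 3} -> likely mule account
```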

Gondrom praised Singapore’s strong collaborative environment, where security leaders from major banks are in constant contact. However, he cautioned that this is not enough. “The problem is with the second-tier ecosystem players that are not in the inner circle,” he said, noting the challenge of getting merchants and suppliers to disclose breaches and participate in information sharing.

Gan said collaboration must extend internationally and across different industries. “Fraud usually happens much further upstream in the process, and typically through social platforms and telecoms companies, and so collaboration with those industries is just as important,” she said.

Read more about cyber security in APAC

  • Nikkei has confirmed a major data breach that potentially exposed the personal information of more than 17,000 employees and business partners after hackers infiltrated its internal Slack messaging platform.
  • Australian privacy commissioner warns that the human factor is a growing threat as notifications caused by staff mistakes rose significantly even as total breaches declined 10% from a record high.