
CSA: Take AI cyber threats to the boardroom

Current cyber risk assumptions may no longer be valid given the speed of advanced AI, warns the chief executive of Singapore’s Cyber Security Agency

The highest levels of leadership at Singapore’s critical information infrastructure (CII) providers have been urged to urgently review their cyber defences, as rapid advances in artificial intelligence (AI) threaten to upend current risk management assumptions.

In an open letter to the boards and chief executives of CII owners, Singapore’s commissioner of cyber security and chief executive of the Cyber Security Agency, David Koh, warned that frontier AI has “materially shifted the cyber security baseline” in the past month.

These developments, he stressed, demand board-level and CEO attention and should not be left to IT departments.

The alert comes amid global scrutiny of new and highly capable AI models such as Anthropic’s Claude Mythos, which has shown unprecedented abilities to autonomously discover and exploit software vulnerabilities. Recognising its potent cyber capabilities, Anthropic restricted the model’s access to vetted defenders under an initiative dubbed Project Glasswing.

In his letter, Koh noted that Claude Mythos had already identified thousands of zero-day vulnerabilities. Shortly after its release, the UK’s AI Security Institute reported that Mythos was the first model it had tested to successfully complete a 32-step simulation of breaking into a corporate network, a task that would normally take an expert about 20 hours.

Meanwhile, OpenAI recently assessed its widely available GPT-5.5 model as having a “High” cyber security capability, just one step below “Critical” under its safety preparedness framework.

That means GPT-5.5 can conduct cyber operations against reasonably hardened targets or speed up discovery of vulnerabilities, while a model with a critical capability can go as far as developing zero-day exploits to compromise critical systems without human intervention.

“Frontier AI is accelerating at a rate where current assumptions in cyber risk management, on which your controls, measures and incident response plans were designed, may no longer be valid,” Koh wrote.

He noted that vulnerability discovery is also becoming faster and cheaper, social engineering is getting more personalised, and the window between a vulnerability’s disclosure and its exploitation by bad actors is rapidly narrowing.

‘A continuum, not a step change’

The issue also took centre stage in Parliament today, with senior minister of state for digital development and information Tan Kiat How addressing concerns from members of parliament over the risks these advanced tools pose to Singapore.

Tan clarified that the government does not currently have access to Mythos, nor is it aware of any local bank having access, given its restricted preview phase. However, the authorities are working closely with partners who do have access to track the model’s capabilities.

He urged perspective, noting that the threat should be viewed as a “continuum rather than a step change”, pointing out that open-source AI models are also improving quickly and are likely to reach similar capabilities within months.

The immediate danger, he warned, lies in the sheer speed of AI-driven attacks. Security loopholes that once took weeks to detect can now be identified autonomously within hours or minutes.

“These attacks are faster, more scalable, and significantly more sophisticated,” Tan said. “What we have not yet seen is fully autonomous AI agents running end-to-end campaigns. But this is a matter of time given the trajectory of technological developments.”

Boardroom action required

To counter these developments, the CSA has asked CII boards to formally commission a review of their cyber risk posture.

The review must assess if current risk assessments properly account for AI-enabled threats across both IT and operational technology (OT) systems. Organisations must also evaluate whether their vulnerability management, patching, and incident response arrangements are fast enough to match the accelerating pace of adversaries.

Other considerations outlined in the letter include maintaining visibility over third-party dependencies and governing the organisation’s own use of AI, particularly when AI tools interact with sensitive data, software development, or critical systems.

Koh noted that these reviews should be tabled at the appropriate board or executive governance risk committees. Material gaps must be addressed with “clear remediation plans and explicit risk acceptance decisions” and, if necessary, immediate adjustments to cyber security investment priorities.

The CSA will engage sector leads in the coming weeks to track progress, understand challenges faced, and discuss how to collectively strengthen Singapore’s cyber resilience.
