
UK critical systems at risk from ‘digital divide’ created by AI threats

GCHQ’s National Cyber Security Centre warns that a growing ‘digital divide’ between organisations that can keep pace with AI-enabled threats and those that cannot is set to heighten the UK's overall cyber risk

A divide will emerge over the next two years between organisations that can keep pace with cyber threats enabled by artificial intelligence (AI) and those that fall behind, cyber chiefs have warned.

AI-enabled tools will make it possible for attackers to exploit security vulnerabilities ever more rapidly, giving organisations precious little time to apply fixes before they risk a cyber attack.

The gap between the disclosure of vulnerabilities by software suppliers and their exploitation by cyber criminals has already shrunk to days, according to research by the National Cyber Security Centre (NCSC), part of the signals intelligence agency GCHQ, published today.

However, AI will “almost certainly” reduce this further, posing a challenge for network defenders and creating new risks for companies that rely on information technology.

The NCSC report also suggests that the growing spread of AI models and systems across the UK’s technology base will present new opportunities for adversaries.

At particular risk are IT systems in critical national infrastructure (CNI) and in companies and sectors where there are insufficient cyber security controls.

In the rush to bring new AI models to market, developers will “almost certainly” prioritise speed of development over sufficient cyber security, increasing the threat from capable state-linked actors and cyber criminals, according to the report.

As AI technologies become more embedded in business operations, the NCSC is urging organisations to “act decisively to strengthen cyber resilience and mitigate against AI-enabled cyber threats”.

The NCSC’s director of operations, Paul Chichester, said AI was transforming the cyber threat landscape, expanding attack surfaces, increasing the volume of threats and accelerating malicious capabilities.

“While these risks are real, AI also presents a powerful opportunity to enhance the UK’s resilience and drive growth, making it essential for organisations to act,” he said, speaking at the NCSC’s CyberUK conference in Manchester.

“Organisations should implement strong cyber security practices across AI systems and their dependencies, and ensure up-to-date defences are in place.”

According to the NCSC, the integration of AI and connected systems into existing networks requires organisations to place a renewed focus on fundamental security practices.

The NCSC has published a range of advice and guidance to help organisations take action, including the Cyber Assessment Framework and 10 Steps to Cyber Security.

Earlier this year, the UK government announced an AI Cyber Security Code of Practice to help organisations develop and deploy AI systems securely. 

The code of practice will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute (ETSI).
