NCSC publishes landmark guidelines on AI cyber security

The NCSC and its US counterpart CISA have brought together tech companies and governments to countersign a new set of guidelines aimed at promoting a secure-by-design culture in AI development

The UK’s National Cyber Security Centre (NCSC) has published a set of guidelines designed to help ensure that artificial intelligence (AI) technology is developed safely and securely. The guidelines were written alongside tech sector partners and developed with crucial assistance from the US’s Cybersecurity and Infrastructure Security Agency (CISA).

The Guidelines for secure AI system development are said to be the first of their kind in the world, and besides the UK and US, were developed with input from other G7 nations, international agencies, and government bodies from a number of countries, including voices from the Global South.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said NCSC CEO Lindy Cameron.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.

“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities,” she said.

Cameron’s American counterpart, CISA director Jen Easterly, added: “The release of the Guidelines for secure AI system development marks a key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design.

“As nations and organisations embrace the transformative power of AI, this international collaboration, co-developed by the UK NCSC and CISA, underscores the global dedication to fostering transparency, accountability and secure practices.

“The domestic and international unity in advancing secure-by-design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology evolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border collaboration in securing our digital future.”

The set of guidelines has been designed to help the developers of any system that incorporates AI make informed decisions about cyber security during the development process – whether it is being built from scratch or as an add-on to an existing tool or service provided by another entity.

The NCSC believes security to be an “essential pre-condition of AI system safety” and integral to the development process from the outset and throughout.

The secure-by-design principles alluded to by CISA’s Easterly are already being applied increasingly to software development, so the cognitive leap to applying the same guidance to the world of AI should not be too difficult to make.

The guidelines, which can be accessed in full via the NCSC website, break down into four main tracks – secure design, secure development, secure deployment, and secure operation and maintenance – and include suggested behaviours to help improve security. These include taking ownership of security outcomes for customers and users, embracing “radical transparency”, and baking secure-by-design practices into organisational structures and leadership hierarchies.

The document has already been endorsed and co-sealed by a number of key organisations in the field, including tech giants Amazon, Google, Microsoft and OpenAI, and representatives from 17 other countries – besides the Five Eyes intelligence alliance and the G7, these include Chile, Czechia, Estonia, Israel, Nigeria, Norway, Poland, Singapore and South Korea.

For the UK government, the document’s creation builds on discussions held at its AI Safety Summit at the beginning of November, which, while not explicitly a cyber security-focused event, nevertheless sought to begin the necessary conversation around how society should manage the risks of AI.

For the Americans, it follows on from a recently published CISA roadmap on AI, which supports an October executive order signed by US president Joe Biden. The order aims to build a foundation of standards that might one day underpin state or federal-level legislation in the US – with inevitable global impact.

CISA’s roadmap sets out five “lines of effort” that it means to pursue. These are:

  1. To responsibly use AI to support CISA’s core cyber mission;
  2. To assess and assure secure-by-design AI tech across the private and public sectors;
  3. To protect critical national infrastructure (CNI) from malicious AIs;
  4. To collaborate and communicate on AI efforts in the US and across the rest of the world;
  5. To expand AI expertise and skills.

Industry reacts

“These early days of AI can be likened to blowing glass: while the glass is fluid it can be made into any shape, but once it has cooled, its shape is fixed. Regulators are scrambling to influence AI regulation as it takes shape,” said WithSecure cyber security advisor Paul Brucciani.

“Guidelines are quick to produce since they do not require legislation; nonetheless, NCSC and CISA have worked with impressive speed to corral this list of signatories. Amazon, Google, Microsoft and OpenAI, the world-leading AI developers, are signatories. A notable absentee from the list is the EU.”

Darktrace’s global head of threat analysis, Toby Lewis, said: “Security is a pre-requisite for safe and trustworthy AI and today’s guidelines from agencies including the NCSC and CISA provide a welcome blueprint for it. I’m glad to see the guidelines emphasise the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task.

“Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realise the benefits of AI faster and for more people,” added Lewis, who prior to his appointment at Darktrace served as deputy technical director of incident management at the NCSC for four years.
