Digital developments in focus

NCSC Publishes Guidelines on Secure AI System Development: a Concerted International Approach

Everyone’s talking about AI at the moment (see our round-up of recent developments here and here). A consensus is building across regulators, industry, experts and sceptics that realising any benefits from AI requires mitigation of its risks. One of the biggest such risks is, of course, security.

Of particular note, two bodies have recently issued publications about AI’s security risks. On 4th January, the US standards body NIST published a paper identifying the types of cyber attack that manipulate the behaviour of AI systems. This followed the release of guidelines from the UK’s National Cyber Security Centre (NCSC) in December last year on the secure development of AI systems, which are the subject of this blog.

AI-Specific and General Security Risks

Explaining why the guidelines are necessary, the NCSC notes that AI systems are vulnerable both to existing cyber security threats and to new ones. Components used in machine learning (including hardware, software, workflows and supply chains) can be exploited through a number of techniques. This is known as adversarial machine learning (AML): the exploitation of vulnerabilities in those components. Attackers’ objectives may include changing how a model performs, committing unauthorised actions or extracting information.
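To make the "changing how a model performs" risk concrete, below is a minimal, purely illustrative sketch of one well-known AML technique, the fast gradient sign method (FGSM), applied to a toy logistic classifier. The model, weights and input are all made up for demonstration; real attacks target far more complex systems, but the principle is the same: a small, targeted perturbation of the input shifts the model's output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: nudge x in the direction that increases the model's loss."""
    p = sigmoid(np.dot(w, x) + b)   # model's confidence that the label is 1
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy logistic "model" and a correctly classified input (values are illustrative)
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 0.5])
y = 1.0  # true label

p_before = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
p_after = sigmoid(np.dot(w, x_adv) + b)
print(p_before, p_after)  # confidence in the correct label drops after perturbation
```

The point for security teams is that the perturbation is tiny and systematic, not random noise, which is why the guidelines treat AML as a distinct threat class rather than a variant of ordinary input fuzzing.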

The guidelines note that AI systems may increasingly become high-value targets, while also enabling new methods of attack.

International response to a global problem 

Acknowledging the often borderless nature of the technology, as well as its international protagonists and risks, the NCSC published its guidelines jointly with the US Cybersecurity and Infrastructure Security Agency (CISA) and a large number of other partner agencies across the world.

Contributions came from a range of organisations, including AI developers (Google, DeepMind, OpenAI, Microsoft and others), academics and the Alan Turing Institute. Given the drive for an inclusive debate around AI safety, this joined-up approach is to be commended.

Key Aims

Announcing the publication, the NCSC’s CEO emphasised the “need for concerted international action” to keep up with the pace of AI development. She noted that the guidelines aimed to ensure “that security is not a postscript to development but a core requirement throughout”, a sentiment echoed by her US colleague. 

Although addressed primarily at providers of AI systems, the NCSC urges “all stakeholders (including data scientists, developers, managers, decision-makers and risk owners)” to read the guidelines. Nevertheless, it stresses that users are unlikely to have sufficient visibility and expertise to take meaningful action, and so it is incumbent on providers to “take responsibility for the security outcomes of users further down the supply chain”. 

This emphasis on providers is interesting in light of the debate around where liability should sit in the AI supply chain – particularly where personal data is involved and those users may have their own responsibilities as data controllers under UK and EU data protection laws. 

Structure of Guidelines 

The guidelines map the four key stages of the lifecycle of an AI system: 

  1. Secure design – e.g. raising staff awareness of threats and risks (including at senior management level), threat modelling, designing systems for security as well as functionality and performance, and weighing trade-offs in system and model design.
  2. Secure development – e.g. supply chain security, documentation, and asset and technical debt management.
  3. Secure deployment – e.g. protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.
  4. Secure operation and maintenance – e.g. logging and monitoring, update management and information sharing.
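As a small illustration of the "logging and monitoring" point in stage 4, the sketch below records an inference event for audit purposes while hashing the user's input rather than storing it verbatim, so the log supports incident investigation without itself becoming a store of sensitive data. All names (the function, model version and prompt) are hypothetical; this is one possible design, not a pattern prescribed by the guidelines.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("model-audit")

def audit_inference(model_version, prompt, output):
    """Record an inference event without logging raw user input."""
    event = {
        "ts": time.time(),
        "model_version": model_version,
        # Hash the prompt: the log can still link repeated or suspicious
        # inputs to incidents, without retaining the text itself.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_chars": len(output),
    }
    log.info(json.dumps(event))
    return event

event = audit_inference("demo-model-v1", "What is our refund policy?", "Our policy is...")
```

A design choice worth noting: hashing rather than truncating or redacting keeps the log useful for correlation (the same input always produces the same digest) while aligning with the data-minimisation concerns raised later in this post around data protection law.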

Each section suggests “considerations and mitigations” relevant to that stage. The guidelines as a whole follow a “secure by default” approach which aligns with other NCSC publications (like its Secure development and deployment guidance and Secure Software Development Framework). As such, they prioritise:

  • taking ownership of security outcomes for customers; 
  • embracing radical transparency and accountability; and 
  • building organisational structure and leadership so that secure by design is a top business priority. 


The document is relatively short, and while many points will be familiar to those working in cyber security (for example, the importance of due diligence on supply chains), some will clearly require organisations to put in the work to implement them in their particular contexts. For example, expanding on what it means to protect a model and data, the guidelines simply recommend “implementing standard cyber security best practices”. They do, however, helpfully reference a number of sources of further reading, such as the NCSC’s principles for the security of machine learning, CISA’s cyber security goals and the ISO 27001 standard.

The guidelines also add a salutary note of caution in these times of hype around AI: organisations should be “confident that the task at hand is most appropriately addressed using AI.” Even where this has been determined, it is still important to consider the AI-specific design choices being made (adopting a “secure by design” approach), and to ensure that security mitigations are weighed alongside functionality, performance and user requirements.

"Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties."
