
ETSI EN 304 223: a practical baseline for securing AI across its lifecycle

Image of an artificial intelligence head showing neural network links

AI security has stopped being a niche concern. The threats are real, they’re evolving quickly, and they look different to “traditional” application security: data poisoning, model manipulation, indirect prompt injection, and the operational realities of deploying models at scale all change the risk picture. (techuk.org)


That’s why ETSI’s EN 304 223 matters. It’s a baseline set of cyber security requirements for AI models and systems—including systems using deep neural networks (such as generative AI)—written in a way that organisations can use to structure programmes, assign responsibilities, and evidence what “good” looks like. (ETSI)


This blog provides a transparent, practitioner-focused overview: what it is for, how it’s structured, how it connects into UK direction, and how to apply it in real organisations.


What the standard is for (and what it does)

EN 304 223 defines baseline security requirements for AI models and systems. The emphasis is on security (a subset of cybersecurity in ETSI’s wording), and on requirements that span the end-to-end AI lifecycle—not just deployment hardening. (ETSI)


A few important scope cues that make it easier to position internally:

  • It explicitly includes AI systems incorporating deep neural networks, including generative AI. (ETSI)

  • It’s not aimed at purely academic research systems that won’t be deployed (i.e., it’s written for real-world, operational use). (ETSI)

  • It’s intended to act as a clear baseline—a common reference point across vendors, integrators, and organisations embedding AI—so “security due diligence” stops being a bespoke questionnaire every time. (ETSI)


Think of it as: the minimum set of lifecycle controls you want in place so that AI is treated as a first-class citizen in your security and risk management system.


Structure: how EN 304 223 is organised

One of the strengths of EN 304 223 is that it’s simple to navigate.


1) Five lifecycle phases

The standard groups requirements into five phases:

  1. Secure Design

  2. Secure Development

  3. Secure Deployment

  4. Secure Maintenance

  5. Secure End of Life (ETSI)

It also notes a mapping to the AI lifecycle stages described in ISO/IEC 22989, which is helpful if your organisation already references ISO terminology. (ETSI)


2) Thirteen principles to baseline your AI security

Within those phases, it defines 13 principles (each with “provisions” underneath). The principle titles are designed to be readable by both technical and governance audiences, for example:

  • Raise awareness of AI security threats and risks

  • Secure the supply chain

  • Document data, models and prompts

  • Conduct appropriate testing and evaluation

  • Monitor the system’s behaviour

  • Ensure proper data and model disposal (Iteh Standards)


3) Clear stakeholder responsibility

EN 304 223 explicitly discusses stakeholders and who the provisions primarily apply to (e.g., developer vs system operator). (Iteh Standards) This matters because many AI security gaps happen between parties (vendor ↔ integrator ↔ customer), not within a single team.


4) Companion documents you’ll use to implement the standard

EN 304 223 signposts companion work including:

  • ETSI TR 104 128 (implementation guidance / examples)

  • ETSI TS 104 216 (conformance assessment guidance) (ETSI)

In practice: EN 304 223 is your “what”; the guide and conformance assessment help with the “how” and “how do we evidence it”.


How it’s being incorporated into UK government and NCSC guidance

From a UK perspective, EN 304 223 isn’t landing in isolation.


In January 2025, the UK government published a “Code of Practice for the Cyber Security of AI”. Importantly, that document states the Code of Practice is part of a two-part intervention and was explicitly intended to help create a global standard in ETSI that sets baseline security requirements. (GOV.UK)


It also states the government plans to update the Code and implementation guidance to mirror the future ETSI global standard and guide. (GOV.UK)


So, if you’re a UK organisation wondering “will this become expected?”—the direction of travel is clear:

  • UK guidance has been pushing toward lifecycle-based AI security controls (design → development → deployment → maintenance → end-of-life). (GOV.UK)

  • The government’s published intent is alignment between UK guidance and the ETSI baseline as it matures. (GOV.UK)


That means EN 304 223 is a strong reference point for:

  • security-by-design expectations in AI programmes,

  • supplier assurance conversations,

  • and “what good looks like” when boards ask for clarity.


How companies can use and apply EN 304 223 (without turning it into a paperwork exercise)

A useful way to apply EN 304 223 is to treat it as a control framework: map it into what you already run (ISO 27001, SOC 2, secure SDLC, change management, incident response) rather than standing up a parallel “AI security” process.


Step 1: Decide who you are in the AI supply chain

Most organisations play multiple roles at once: you might be a System Operator for third-party models and a Developer when you fine-tune models or build agentic workflows. EN 304 223’s stakeholder framing helps you allocate ownership and avoid gaps. (GOV.UK)


Deliverable to aim for: a simple RACI for the 13 principles across Security, Engineering, Data, Product, Legal/Privacy, Procurement, and Risk.
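
To make that concrete, here is a minimal sketch of a RACI register in Python. The principle identifiers are abbreviated from the titles listed earlier and the role assignments are purely illustrative; your own principle list, role names, and allocations will differ.

```python
# A minimal, hypothetical RACI register for EN 304 223 principles.
# Principle identifiers are abbreviated; role assignments are illustrative only.
RACI = {
    "raise-awareness-of-ai-security-threats":     {"R": "Security",    "A": "CISO",        "C": ["Engineering", "Data"], "I": ["Product"]},
    "secure-the-supply-chain":                    {"R": "Procurement", "A": "Security",    "C": ["Legal/Privacy"],        "I": ["Risk"]},
    "document-data-models-and-prompts":           {"R": "Data",        "A": "Engineering", "C": ["Security"],             "I": ["Risk"]},
    "conduct-appropriate-testing-and-evaluation": {"R": "Engineering", "A": "Product",     "C": ["Security"],             "I": ["Risk"]},
    "monitor-the-systems-behaviour":              {"R": "Security",    "A": "Engineering", "C": ["Product"],              "I": ["Risk"]},
    "ensure-proper-data-and-model-disposal":      {"R": "Data",        "A": "Security",    "C": ["Legal/Privacy"],        "I": ["Risk"]},
    # ...the remaining principles follow the same pattern
}

def unowned(raci: dict) -> list[str]:
    """Return principles missing a Responsible or Accountable owner."""
    return [p for p, roles in raci.items() if not roles.get("R") or not roles.get("A")]

if __name__ == "__main__":
    print("Principles without clear ownership:", unowned(RACI) or "none")
```

Even a register this simple is enough to answer the question boards actually ask: “who owns this?”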


Step 2: Translate the 13 principles into a minimum viable control set

Don’t start by trying to “do everything”. Start by implementing a thin but real control layer across the lifecycle (a minimal, machine-readable sketch follows this list):

  • Design: threat modelling for AI-specific threats; define acceptable use, misuse cases, and security requirements up front. (techuk.org)

  • Development: asset inventory for models/datasets/prompts, secure infrastructure, supply chain due diligence, and documentation/audit trail. (Iteh Standards)

  • Deployment: user communications and guardrails aligned to real operational risks (not generic “AI may be wrong” banners). (Iteh Standards)

  • Maintenance: monitoring for drift/abuse, patching/mitigations, incident response hooks. (Iteh Standards)

  • End-of-life: secure disposal of data and models—especially where training data sensitivity and retention obligations exist. (Iteh Standards)
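
Here is one illustrative way to encode that thin control layer as data, so coverage gaps are easy to query. The control names are shorthand invented for this sketch, not the standard’s provision text.

```python
# Illustrative minimum viable control set keyed to the five lifecycle phases.
# Control names are examples for this sketch, not the standard's provisions.
MIN_CONTROLS = {
    "secure-design":      ["ai-threat-model", "acceptable-use-and-misuse-cases", "security-requirements"],
    "secure-development": ["asset-inventory", "secure-infrastructure", "supply-chain-due-diligence", "audit-trail"],
    "secure-deployment":  ["user-communications", "guardrails-aligned-to-risk"],
    "secure-maintenance": ["drift-and-abuse-monitoring", "patching-and-mitigations", "incident-response-hooks"],
    "secure-end-of-life": ["data-disposal", "model-disposal"],
}

def coverage_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Return the controls per phase that are not yet implemented."""
    return {
        phase: [c for c in controls if c not in implemented]
        for phase, controls in MIN_CONTROLS.items()
        if any(c not in implemented for c in controls)
    }

if __name__ == "__main__":
    done = {"ai-threat-model", "asset-inventory", "drift-and-abuse-monitoring"}
    for phase, missing in coverage_gaps(done).items():
        print(f"{phase}: missing {missing}")
```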


Step 3: Evidence it like you would any other security control

If you’re already audited (ISO/SOC2/SOX), you’ll recognise the pattern:

  • policy/standard,

  • procedure/playbook,

  • implementation evidence (tickets, configs, logs),

  • monitoring and review.


The AI-specific twist is making sure your evidence covers the data pipeline, model lifecycle, and runtime protections (e.g., prompt injection defences and abuse monitoring), not just infrastructure controls. (techuk.org)
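
One way to keep that evidence consistent across infrastructure and AI-specific controls is to capture it in a single record shape. The field names below are assumptions for this sketch, not anything prescribed by EN 304 223 or an existing tool.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical evidence record; field names are assumptions for this sketch.
@dataclass
class ControlEvidence:
    control_id: str                 # e.g. "secure-maintenance/drift-and-abuse-monitoring"
    scope: str                      # "data-pipeline", "model-lifecycle" or "runtime"
    policy_ref: str                 # ID of the governing policy or standard
    procedure_ref: str              # ID of the playbook or runbook
    artefacts: list[str] = field(default_factory=list)  # tickets, configs, log queries
    last_reviewed: date | None = None

    def is_audit_ready(self) -> bool:
        """Evidence exists and has been reviewed at least once."""
        return bool(self.artefacts) and self.last_reviewed is not None

example = ControlEvidence(
    control_id="secure-maintenance/drift-and-abuse-monitoring",
    scope="runtime",
    policy_ref="POL-AI-SEC-001",
    procedure_ref="RUNBOOK-AI-MON-003",
    artefacts=["JIRA-1234", "siem-query-prompt-injection-v2"],
    last_reviewed=date(2025, 6, 1),
)
print(example.is_audit_ready())
```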


Step 4: Make it work for real delivery teams

Where EN 304 223 becomes genuinely valuable is when you embed it into delivery mechanisms teams already use:

  • product security requirements in epics,

  • secure-by-design gates,

  • model release checklists (a release-gate sketch follows this list),

  • supplier onboarding questionnaires,

  • operational monitoring runbooks.
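
As an example of the release-checklist mechanism, here is a hypothetical release gate where each check maps back to a lifecycle phase. The check names and the shape of the release record are assumptions for this sketch; in practice these would mirror your own control set.

```python
# A hypothetical model release gate: each check maps back to a lifecycle phase.
# Check names and the release record shape are assumptions for this sketch.
RELEASE_CHECKS = [
    ("threat_model_updated",      "secure-design"),
    ("eval_results_attached",     "secure-development"),
    ("supplier_dd_complete",      "secure-development"),
    ("guardrails_configured",     "secure-deployment"),
    ("monitoring_runbook_linked", "secure-maintenance"),
]

def release_gate(record: dict) -> tuple[bool, list[str]]:
    """Return (approved, failing_checks) for a model release record."""
    failures = [name for name, _phase in RELEASE_CHECKS if not record.get(name)]
    return (not failures, failures)

ok, failing = release_gate({
    "threat_model_updated": True,
    "eval_results_attached": True,
    "supplier_dd_complete": False,
    "guardrails_configured": True,
    "monitoring_runbook_linked": False,
})
print("Release approved" if ok else f"Blocked on: {failing}")
```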


That’s how you avoid “AI governance theatre”.


My top tips (what I’d do first)

  1. Start with a single-page lifecycle view: what AI systems you have, where they run, what data they touch, who owns them.

  2. Treat prompts, datasets, and models as managed assets: version them, restrict access, and maintain an audit trail. (Iteh Standards)

  3. Threat model the AI-specific attack paths (not just network threats): data poisoning, model manipulation, indirect prompt injection, abuse at inference time. (techuk.org)

  4. Get procurement/supplier assurance involved early: you’ll need suppliers to evidence controls aligned to the baseline (especially if you’re integrating third-party models/services). (techuk.org)

  5. Operationalise monitoring: define what “bad behaviour” looks like, alert on it, and connect it to incident response (a minimal sketch follows this list). (Iteh Standards)

  6. Map EN 304 223 into your existing frameworks (ISO 27001, SOC 2) so it’s auditable and sustainable, then expand maturity over time.
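
For tip 5, here is a minimal sketch of what “define it, alert on it” could look like in code. The metric names and thresholds are invented for illustration; they are not values from the standard or from any particular monitoring tool.

```python
# A minimal, illustrative "bad behaviour" rule set for an AI endpoint.
# Metric names and thresholds are assumptions for this sketch.
RULES = {
    "refusal_rate_spike":    lambda m: m.get("refusal_rate", 0.0) > 0.30,
    "injection_marker_hits": lambda m: m.get("injection_detections", 0) > 5,
    "output_drift":          lambda m: m.get("drift_score", 0.0) > 0.25,
}

def evaluate(metrics: dict) -> list[str]:
    """Return the names of rules that fired; route these to incident response."""
    return [name for name, rule in RULES.items() if rule(metrics)]

if __name__ == "__main__":
    window = {"refusal_rate": 0.42, "injection_detections": 2, "drift_score": 0.31}
    fired = evaluate(window)
    if fired:
        print(f"ALERT: {fired} -> open an incident per the AI runbook")
```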


Let's do this

If you’re adopting AI (or scaling it fast) and want a clear, proportionate way to align to ETSI EN 304 223—without creating a parallel bureaucracy—get in touch.


I can support you to:

  • assess your current AI security posture against EN 304 223,

  • create a pragmatic, prioritised remediation plan,

  • and help you evidence controls in a way that builds customer confidence and stands up to due diligence.


Email me and we’ll set up a short call to scope what you need.

 
 
 
