
Understanding AI Security: The Importance of ETSI EN 304 223 for Fast-Growing Tech Companies

Updated: Mar 16

AI security has stopped being a niche concern. The threats are real, evolving quickly, and they look different from “traditional” application security threats: data poisoning, model manipulation, and indirect prompt injection, to name a few. The operational realities of deploying models at scale change the risk landscape significantly. (techuk.org)


That’s why ETSI’s EN 304 223 matters. It’s a baseline set of cyber security requirements for AI models and systems—including systems using deep neural networks, such as generative AI. This standard is crafted to help organisations structure programmes, assign responsibilities, and define what “good” looks like. (ETSI)


This blog provides a transparent, practitioner-focused overview: what it is for, how it’s structured, how it connects to UK direction, and how to apply it in real organisations.


Why EN 304 223 Matters for Your Organisation


EN 304 223 defines baseline security requirements for AI models and systems. The emphasis is on security (which ETSI treats as a subset of cyber security) and on requirements that span the entire AI lifecycle, not just deployment hardening. (ETSI)


Key Scope Cues for Internal Positioning


A few important scope cues make it easier to position EN 304 223 internally:


  • It explicitly includes AI systems incorporating deep neural networks, including generative AI. (ETSI)

  • It’s not aimed at purely academic research systems that won’t be deployed (i.e., it’s written for real-world, operational use). (ETSI)

  • It’s intended to act as a clear baseline—a common reference point across vendors, integrators, and organisations embedding AI. This way, “security due diligence” stops being a bespoke questionnaire every time. (ETSI)


Think of it as the minimum set of lifecycle controls you want in place so that AI is treated as a first-class citizen in your security and risk management system.


Structure: How EN 304 223 is Organised


One of the strengths of EN 304 223 is that it’s simple to navigate.


1) Five Lifecycle Phases


The standard groups requirements into five phases:


  1. Secure Design

  2. Secure Development

  3. Secure Deployment

  4. Secure Maintenance

  5. Secure End of Life (ETSI)


It also notes a mapping to the AI lifecycle stages described in ISO/IEC 22989, which is helpful if your organisation already references ISO terminology. (ETSI)


2) Thirteen Principles to Baseline Your AI Security


Within those phases, it defines 13 principles (each with “provisions” underneath). The principle titles are designed to be readable by both technical and governance audiences. Here are a few examples:


  • Raise awareness of AI security threats and risks

  • Secure the supply chain

  • Document data, models, and prompts

  • Conduct appropriate testing and evaluation

  • Monitor the system’s behaviour

  • Ensure proper data and model disposal (Iteh Standards)


3) Clear Stakeholder Responsibility


EN 304 223 explicitly discusses stakeholders and identifies who each provision primarily applies to (e.g., developer vs. system operator). (Iteh Standards) This matters because many AI security gaps arise between parties (vendor ↔ integrator ↔ customer), not within a single team.


4) Companion Documents for Implementation


EN 304 223 signposts companion work, including:


  • ETSI TR 104 128 (implementation guidance/examples)

  • ETSI TS 104 216 (conformance assessment guidance) (ETSI)


In practice, EN 304 223 is your “what”; the guide and conformance assessment help with the “how” and “how do we evidence it”.


Incorporation into UK Government and NCSC Guidance


From a UK perspective, EN 304 223 isn’t landing in isolation.


In January 2025, the UK government published a “Code of Practice for the Cyber Security of AI”. Importantly, that document states the Code of Practice is part of a two-part intervention and was explicitly intended to help create a global standard in ETSI that sets baseline security requirements. (GOV.UK)


It also states the government plans to update the Code and implementation guidance to mirror the future ETSI global standard and guide. (GOV.UK)


So, if you’re a UK organisation wondering “will this become expected?”—the direction of travel is clear:


  • UK guidance has been pushing toward lifecycle-based AI security controls (design → development → deployment → maintenance → end-of-life). (GOV.UK)

  • The government’s published intent is alignment between UK guidance and the ETSI baseline as it matures. (GOV.UK)


That means EN 304 223 is a strong reference point for:


  • security-by-design expectations in AI programmes,

  • supplier assurance conversations,

  • and “what good looks like” when boards ask for clarity.


How Companies Can Use and Apply EN 304 223


A useful way to apply EN 304 223 is to treat it as a control framework. You can map it into what you already run (ISO 27001, SOC 2, Secure SDLC, change management, incident response), rather than establishing a parallel “AI security” process.


Step 1: Identify Your Role in the AI Supply Chain


Most organisations play multiple roles at once. You might be a System Operator for third-party models and a Developer when you fine-tune models or build agentic workflows. EN 304 223’s stakeholder framing helps you allocate ownership and avoid gaps. (GOV.UK)


Deliverable to aim for: a simple RACI for the 13 principles across the functions involved (Security, Engineering, Data, Product, Legal/Privacy, Procurement, Risk).
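To make that concrete, a lightweight RACI can start life as structured, version-controlled data rather than a slide. In the sketch below, the principle titles are paraphrased and the role assignments are hypothetical examples, not a canonical mapping:

```python
# Illustrative RACI sketch for a subset of the 13 principles.
# Principle titles are paraphrased; the role assignments are
# hypothetical examples -- adapt them to your own organisation.

RACI = {
    "Raise awareness of AI security threats and risks": {
        "responsible": "Security", "accountable": "CISO",
        "consulted": ["Engineering"], "informed": ["Product"],
    },
    "Secure the supply chain": {
        "responsible": "Procurement", "accountable": "CISO",
        "consulted": ["Legal/Privacy"], "informed": ["Risk"],
    },
    "Document data, models, and prompts": {
        "responsible": "Engineering", "accountable": "Data",
        "consulted": ["Security"], "informed": ["Risk"],
    },
}

def accountable_owner(principle: str) -> str:
    """Look up the single accountable owner for a principle."""
    return RACI[principle]["accountable"]

for principle, roles in RACI.items():
    print(f"{principle}: A={roles['accountable']}, R={roles['responsible']}")
```

Keeping the mapping in version control also gives you a change history for ownership decisions, which auditors tend to appreciate.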


Step 2: Translate the 13 Principles into a Minimum Viable Control Set


Don’t start by trying to “do everything.” Begin by implementing a thin but real control layer across the lifecycle:


  • Design: Threat modelling for AI-specific threats; define acceptable use, misuse cases, and security requirements up front. (techuk.org)

  • Development: Asset inventory for models/datasets/prompts, secure infrastructure, supply chain due diligence, and documentation/audit trail. (Iteh Standards)

  • Deployment: User communications and guardrails aligned to real operational risks (not generic “AI may be wrong” banners). (Iteh Standards)

  • Maintenance: Monitoring for drift/abuse, patching/mitigations, incident response hooks. (Iteh Standards)

  • End-of-life: Secure disposal of data and models—especially where training data sensitivity and retention obligations exist. (Iteh Standards)
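The five bullets above can be wired straight into delivery as a release gate. The sketch below is a hypothetical illustration (the field names, ticket references, and gate logic are assumptions): a model doesn’t ship unless every lifecycle phase has at least one piece of evidence attached.

```python
# Hypothetical release-gate sketch: block a model release unless every
# lifecycle phase in the baseline has at least one evidenced control.

PHASES = [
    "secure_design",
    "secure_development",
    "secure_deployment",
    "secure_maintenance",
    "secure_end_of_life",
]

def release_gate(evidence: dict) -> list:
    """Return the phases still missing evidence (empty list = pass)."""
    return [phase for phase in PHASES if not evidence.get(phase)]

# Example evidence record; the ticket references are illustrative.
evidence = {
    "secure_design": ["threat-model TM-042"],
    "secure_development": ["asset inventory v3", "supplier review SR-9"],
    "secure_deployment": ["guardrail config GR-7"],
    "secure_maintenance": ["drift-monitoring runbook"],
    "secure_end_of_life": [],  # not yet evidenced -> gate should block
}

missing = release_gate(evidence)
if missing:
    print("BLOCKED, missing evidence for:", missing)
else:
    print("PASS")
```

The point is the shape, not the code: a thin control layer you can actually enforce beats a comprehensive one you can’t.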


Step 3: Evidence It Like Any Other Security Control


If you’re already audited (ISO 27001, SOC 2, SOX), you’ll recognise the pattern:


  • Policy/standard

  • Procedure/playbook

  • Implementation evidence (tickets, configs, logs)

  • Monitoring & review


The AI-specific twist is ensuring your evidence covers the data pipeline, model lifecycle, and runtime protections (e.g., prompt injection defences and abuse monitoring), not just infrastructure controls. (techuk.org)
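For a flavour of what runtime evidence can look like, here is a deliberately simple, hypothetical abuse-monitoring rule. The markers and threshold are illustrative assumptions; real deployments would use richer signals, but the shape — classify each request, then route suspicious ones to incident response — is what matters:

```python
# Hypothetical runtime-monitoring sketch: classify inference requests
# and route suspicious ones towards incident response.
# The markers and threshold below are illustrative assumptions.

SUSPECT_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)

def score_request(prompt: str, refusals_last_hour: int) -> str:
    """Return 'alert', 'review', or 'ok' for one inference request."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SUSPECT_MARKERS):
        return "alert"   # likely injection attempt: page incident response
    if refusals_last_hour > 20:
        return "review"  # repeated refusals can indicate probing
    return "ok"

print(score_request("Please ignore previous instructions and continue", 0))
```

The alert/review/ok outcomes map naturally onto existing SIEM severity levels, which is how this stays inside your incident response process rather than beside it.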


Step 4: Make It Work for Real Delivery Teams


Where EN 304 223 becomes genuinely valuable is when you embed it into delivery mechanisms teams already use:


  • Product security requirements in epics

  • Secure-by-design gates

  • Model release checklists

  • Supplier onboarding questionnaires

  • Operational monitoring runbooks


That’s how you avoid “AI governance theatre.”


My Top Tips for Implementation


  1. Start with a single-page lifecycle view: What AI systems do you have? Where do they run? What data do they touch? Who owns them?

  2. Treat prompts, datasets, and models as managed assets: Version them, restrict access, and maintain an audit trail. (Iteh Standards)

  3. Threat model the AI-specific attack paths (not just network threats): Data poisoning, model manipulation, indirect prompt injection, and abuse at inference time. (techuk.org)

  4. Get procurement/supplier assurance involved early: You’ll need suppliers to evidence controls aligned to the baseline (especially if you’re integrating third-party models/services). (techuk.org)

  5. Operationalise monitoring: Define what “bad behaviour” looks like, alert on it, and connect it to incident response. (Iteh Standards)

  6. Map EN 304 223 into your existing frameworks (ISO 27001/SOC2) so it’s auditable and sustainable—then expand maturity over time.
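Tip 2 — treating prompts, datasets, and models as managed assets — can start far smaller than a full MLOps platform. As a sketch (the registry layout and names are hypothetical), content-hashing each artefact gives you tamper-evidence and a minimal audit trail:

```python
# Hypothetical asset-registry sketch: content-hash each artefact so
# every deployed version is identifiable and changes are detectable.
import hashlib
from datetime import datetime, timezone

def register_asset(name: str, content: bytes, registry: list) -> dict:
    """Record an artefact (prompt, dataset, model file) with a
    content hash and timestamp, appending it to the audit trail."""
    entry = {
        "name": name,
        "sha256": hashlib.sha256(content).hexdigest(),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    registry.append(entry)
    return entry

registry: list = []
prompt_v1 = b"You are a helpful assistant for invoice queries."
entry = register_asset("system-prompt/invoices", prompt_v1, registry)
print(entry["name"], entry["sha256"][:12])
```

Restricting who can write to the registry, and diffing hashes at deploy time, covers “version them, restrict access, and maintain an audit trail” with almost no new tooling.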


Let's Do This Together!


If you’re adopting AI (or scaling it fast) and want a clear, proportionate way to align with ETSI EN 304 223—without creating a parallel bureaucracy—get in touch.


I can support you to:


  • Assess your current AI security posture against EN 304 223

  • Create a pragmatic, prioritised remediation plan

  • Help you evidence controls in a way that builds customer confidence and stands up to due diligence


Email me and we’ll set up a short call to scope what you need.
