Free Template

EU AI Act Risk Classification Template
Classify Every AI System You Use or Build

The EU AI Act introduces four risk tiers for AI systems — each with different compliance obligations. This template helps you classify every AI system in your portfolio, understand what is required at each tier, and build the evidence base you need for compliance. It applies to organisations that place AI systems on the EU market or deploy them in the EU, regardless of where they are headquartered.


Quick Classification Decision Tree

1
Is the AI system prohibited under Article 5?
Yes: → Unacceptable Risk (Prohibited)
No: Continue to next question
2
Is it a safety component of a product covered by EU harmonised legislation (Annex I)?
Yes: → High Risk
No: Continue
3
Is it listed in Annex III (critical sectors & use cases)?
Yes: → High Risk (unless exception applies)
No: Continue
4
Does it interact with natural persons while presenting as human, or does it generate or manipulate content (e.g. deepfakes)?
Yes: → Limited Risk (transparency obligations)
No: → Minimal Risk
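The four questions above can be walked in order as a simple cascade. The sketch below is a minimal, illustrative implementation — the function name, parameters, and tier labels are our own shorthand, and the boolean inputs stand in for the legal analysis each question actually requires:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Unacceptable Risk (Prohibited)"
    HIGH = "High Risk"
    LIMITED = "Limited Risk"
    MINIMAL = "Minimal Risk"

def classify(prohibited_under_art5: bool,
             annex_i_safety_component: bool,
             annex_iii_listed: bool,
             annex_iii_exception_applies: bool,
             transparency_relevant: bool) -> RiskTier:
    """Walk the four-question decision tree in order, stopping
    at the first question answered 'yes'."""
    # Q1: Article 5 prohibited practice?
    if prohibited_under_art5:
        return RiskTier.UNACCEPTABLE
    # Q2: safety component of a product under Annex I harmonised legislation?
    if annex_i_safety_component:
        return RiskTier.HIGH
    # Q3: Annex III use case, unless an exception applies?
    if annex_iii_listed and not annex_iii_exception_applies:
        return RiskTier.HIGH
    # Q4: human-facing interaction or content generation/manipulation?
    if transparency_relevant:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-service chatbot — not prohibited, not a safety
# component, not in Annex III, but it interacts with natural persons.
tier = classify(False, False, False, False, True)
print(tier.value)  # Limited Risk
```

Note that the order matters: a prohibited system is never assessed against the later questions, and an Annex III system carries high-risk obligations even if it also has transparency-relevant features.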

Risk Tier Reference Guide

Unacceptable Risk

PROHIBITED

AI systems that pose a clear threat to fundamental rights, safety, or EU values. These practices are banned from 2 February 2025.

Examples
  • Biometric categorisation systems that infer sensitive attributes (race, political opinion, religion, sexual orientation) from biometric data
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)
  • Social scoring systems by public authorities that lead to detrimental treatment
  • AI systems that exploit vulnerabilities (age, disability, social/economic situation) to manipulate behaviour
  • Subliminal techniques that bypass conscious awareness to distort behaviour causing harm
  • Predictive policing based solely on profiling or personality traits (not objective, verifiable facts)
Obligations
  • Discontinue use immediately
  • Do not procure or deploy
  • Document the review and rationale for exclusion

High Risk

FULL COMPLIANCE REQUIRED

AI systems used in critical sectors or safety applications. Subject to strict requirements before market placement. Applies from August 2026 for Annex III use cases, and from August 2027 for safety components of products covered by Annex I harmonised legislation.

Examples
  • Biometric identification and categorisation (Annex III, 1)
  • Critical infrastructure management: electricity, water, gas, transport (Annex III, 2)
  • Educational/vocational assessment determining access to institutions (Annex III, 3)
  • Employment decisions: recruitment, CV screening, task allocation, promotion, termination (Annex III, 4)
  • Access to essential services: credit scoring, insurance risk assessment, emergency dispatch (Annex III, 5)
  • Law enforcement: individual risk assessment, polygraph, crime analytics (Annex III, 6)
  • Migration, asylum, and border control management (Annex III, 7)
  • Administration of justice and democratic processes (Annex III, 8)
Obligations
  • Risk management system (ongoing, documented)
  • Data governance and training data documentation
  • Technical documentation (before market placement)
  • Record-keeping and automatic logging of events
  • Transparency and information provision to deployers
  • Human oversight measures built in by design
  • Accuracy, robustness, and cybersecurity requirements
  • EU conformity assessment (self-assessment or third-party)
  • Registration in EU database (Art. 71)
  • CE marking and Declaration of Conformity
  • Post-market monitoring system
  • Incident reporting to national authorities

Limited Risk

TRANSPARENCY OBLIGATIONS

AI systems with specific transparency risks. Users must be told they are interacting with AI. Applies from August 2026.

Examples
  • Chatbots and virtual assistants interacting with natural persons
  • Emotion recognition systems (must disclose to individuals)
  • Deepfake images, audio, or video content generated by AI
  • AI-generated text published to inform the public on matters of public interest
  • Biometric categorisation or emotion recognition systems not in Annex III
Obligations
  • Notify users they are interacting with an AI system (unless obvious from context)
  • Label AI-generated content (images, audio, video, text) as artificially generated
  • Implement technical solutions for content provenance (watermarking recommended)

Minimal Risk

NO MANDATORY REQUIREMENTS

The vast majority of AI systems in use today fall here. No mandatory EU AI Act requirements beyond general law (GDPR, product liability, consumer protection). Voluntary codes of conduct encouraged.

Examples
  • AI-powered spam filters
  • AI-enabled inventory management
  • Product recommendation engines
  • AI-based content moderation (non-employment, non-public service context)
  • Business intelligence and analytics tools
  • AI writing assistants for internal use (not public-facing news)
  • AI-assisted scheduling and logistics optimisation
Obligations
  • No mandatory EU AI Act obligations
  • Consider voluntary AI Code of Conduct
  • Standard GDPR and data protection obligations still apply
  • Document classification rationale for audit purposes
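Even at minimal risk, documenting the classification rationale pays off at audit time. A minimal sketch of what such a record might contain — every field name here is an illustrative assumption, not a mandated format:

```python
from datetime import date

# Hypothetical classification record for one AI system in the portfolio.
classification_record = {
    "system_name": "CV screening tool",
    "risk_tier": "High Risk",
    "legal_basis": "Annex III, 4 (employment decisions)",
    "rationale": "Ranks job applicants; outcome influences recruitment decisions.",
    "exceptions_considered": "None applicable",
    "reviewed_by": "AI governance lead",
    "review_date": date(2025, 6, 1).isoformat(),
    "next_review_due": date(2026, 6, 1).isoformat(),
}

for field, value in classification_record.items():
    print(f"{field}: {value}")
```

Keeping one such record per system, with the Annex or Article relied on and the date of review, gives you the evidence base the template is designed to build.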

Related Resources

  • EU AI Act Glossary
  • Platform Overview
  • AI Governance Framework
  • All Templates

Automate your EU AI Act compliance

Aona AI discovers every AI tool in your organisation, classifies it by risk, and gives you the governance controls the EU AI Act requires.

Book a Demo →