Risk Tier Reference Guide
Unacceptable Risk
PROHIBITED: AI systems that pose a clear threat to fundamental rights, safety, or EU values. These practices are banned from 2 February 2025.
Examples
- Biometric categorisation systems that infer sensitive attributes (race, political opinion, religion, sexual orientation) from biometric data
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)
- Social scoring systems by public authorities that lead to detrimental treatment
- AI systems that exploit vulnerabilities (age, disability, social/economic situation) to manipulate behaviour
- Subliminal techniques that bypass conscious awareness to distort behaviour, causing harm
- Predictive policing based solely on profiling or personality traits (rather than objective, verifiable facts)
Obligations
- Discontinue use immediately
- Do not procure or deploy
- Document the review and rationale for exclusion
High Risk
FULL COMPLIANCE REQUIRED: AI systems used in critical sectors or safety applications. Subject to strict requirements before market placement. Applies from 2 August 2026 (Annex III), or earlier for safety components.
Examples
- Biometric identification and categorisation (Annex III, 1)
- Critical infrastructure management: electricity, water, gas, transport (Annex III, 2)
- Educational/vocational assessment determining access to institutions (Annex III, 3)
- Employment decisions: recruitment, CV screening, task allocation, promotion, termination (Annex III, 4)
- Access to essential services: credit scoring, insurance risk assessment, emergency dispatch (Annex III, 5)
- Law enforcement: individual risk assessment, polygraphs, crime analytics (Annex III, 6)
- Migration, asylum, and border control management (Annex III, 7)
- Administration of justice and democratic processes (Annex III, 8)
Obligations
- Risk management system (ongoing, documented)
- Data governance and training data documentation
- Technical documentation (before market placement)
- Record-keeping and automatic logging of events
- Transparency and provision of information to deployers
- Human oversight measures built in by design
- Accuracy, robustness, and cybersecurity requirements
- EU conformity assessment (self-assessment or third-party)
- Registration in the EU database (Art. 71)
- CE marking and Declaration of Conformity
- Post-market monitoring system
- Incident reporting to national authorities
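The record-keeping and automatic-logging obligation above can be sketched in code. This is a minimal, hypothetical illustration, not a compliant implementation: the function name `log_decision` and the record fields are assumptions chosen for the example, and a real system would follow the Act's detailed logging requirements and a retention policy.

```python
import datetime
import json


def log_decision(log_file, system_id, input_summary, output, operator):
    """Append one timestamped, structured event record to a JSON-lines file.

    Illustrates the idea behind automatic event logging for high-risk AI:
    each decision is recorded with enough context to reconstruct it later.
    """
    record = {
        # UTC timestamp so records from different deployments are comparable
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,   # e.g. a hash or short description
        "output": output,                 # the system's decision or score
        "operator": operator,             # who was responsible for oversight
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, one-record-per-line format keeps the log tamper-evident to inspect and easy to ship to an archival store; real deployments would also control access to the log itself.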
Limited Risk
TRANSPARENCY OBLIGATIONS: AI systems with specific transparency risks. Users must be told when they are interacting with AI. Applies from 2 August 2026.
Examples
- Chatbots and virtual assistants interacting with natural persons
- Emotion recognition systems (must disclose to individuals)
- Deepfake images, audio, or video content generated by AI
- AI-generated text published to inform the public on matters of public interest
- Biometric categorisation or emotion recognition systems not in Annex III
Obligations
- Notify users that they are interacting with an AI system (unless obvious from context)
- Label AI-generated content (images, audio, video, text) as artificially generated
- Implement technical solutions for content provenance (watermarking recommended)
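The two user-facing obligations above can be sketched as small helpers. This is an illustrative sketch only: the notice wording, the `ai_generated` metadata key, and the function names are assumptions, and real provenance marking would use a machine-readable standard (e.g. watermarking or signed manifests) rather than a plain dictionary flag.

```python
def disclose_ai_interaction(message: str, already_obvious: bool = False) -> str:
    """Prepend an AI-interaction notice unless the context makes it obvious.

    Illustrates the chatbot transparency duty: the user must be told
    they are interacting with an AI system. Wording is illustrative.
    """
    if already_obvious:
        return message
    return "[You are interacting with an AI system] " + message


def label_generated_content(metadata: dict) -> dict:
    """Return a copy of content metadata tagged as artificially generated.

    A stand-in for machine-readable provenance marking; the key names
    here are hypothetical, not taken from any standard.
    """
    tagged = dict(metadata)  # don't mutate the caller's metadata
    tagged["ai_generated"] = True
    tagged["generator"] = metadata.get("generator", "unknown")
    return tagged
```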
Minimal Risk
NO MANDATORY REQUIREMENTS: The vast majority of AI systems in use today fall here. No mandatory EU AI Act requirements apply beyond general law (GDPR, product liability, consumer protection). Voluntary codes of conduct are encouraged.
Examples
- AI-powered spam filters
- AI-enabled inventory management
- Product recommendation engines
- AI-based content moderation (non-employment, non-public-service context)
- Business intelligence and analytics tools
- AI writing assistants for internal use (not public-facing news)
- AI-assisted scheduling and logistics optimisation
Obligations
- No mandatory EU AI Act obligations
- Consider a voluntary AI Code of Conduct
- Standard GDPR and data protection obligations still apply
- Document the classification rationale for audit purposes
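The last obligation, documenting the classification rationale, can be captured as a structured record. The schema below is a minimal sketch under the assumption of a four-tier taxonomy matching this guide; every field name is hypothetical, not drawn from the Act or any template.

```python
import datetime
from dataclasses import asdict, dataclass, field

# Tier names mirror the four sections of this guide
VALID_TIERS = {"unacceptable", "high", "limited", "minimal"}


@dataclass
class TierClassification:
    """Audit record for one risk-tier decision (illustrative schema)."""
    system_name: str
    tier: str
    rationale: str       # why this tier applies, e.g. which Annex III point
    reviewer: str
    reviewed_on: str = field(
        default_factory=lambda: datetime.date.today().isoformat())

    def to_record(self) -> dict:
        """Validate the tier and return a plain dict suitable for archiving."""
        if self.tier not in VALID_TIERS:
            raise ValueError(f"unknown tier: {self.tier!r}")
        return asdict(self)
```

Keeping the rationale and reviewer alongside the tier means the record answers the likely audit questions (who decided, when, and on what basis) without consulting a separate log.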