The EU AI Act introduces four risk tiers for AI systems, each with different compliance obligations. This template helps you classify every AI system in your portfolio, understand what each tier requires, and build the evidence base you need for compliance. It applies to organisations that deploy AI in the EU, regardless of where they are headquartered.
Unacceptable risk: AI systems that pose a clear threat to fundamental rights, safety, or EU values. These practices are banned from 2 February 2025.
High risk: AI systems used in critical sectors or as safety components of regulated products. Subject to strict requirements before they can be placed on the market. Applies from 2 August 2026 for Annex III systems, and from 2 August 2027 for safety components of products covered by Annex I legislation.
Limited risk: AI systems that pose specific transparency risks, such as chatbots and AI-generated content. Users must be told they are interacting with AI. Applies from 2 August 2026.
Minimal risk: the vast majority of AI systems in use today fall here. No mandatory EU AI Act requirements apply beyond general law (GDPR, product liability, consumer protection). Voluntary codes of conduct are encouraged.
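For a portfolio classification exercise, the four tiers above can be sketched as a simple lookup table. This is an illustrative sketch only: the tier names, field names, and function are our own labels for this template, not statutory terms, and the dates mirror the headline application dates described above.

```python
from datetime import date

# Illustrative mapping of the EU AI Act's four risk tiers to their
# headline obligations and application dates, as summarised above.
# Field names and tier keys are template labels, not legal terms.
RISK_TIERS = {
    "unacceptable": {
        "obligation": "prohibited practice",
        "applies_from": date(2025, 2, 2),
    },
    "high": {
        "obligation": "strict requirements before market placement",
        "applies_from": date(2026, 8, 2),  # Annex III systems
    },
    "limited": {
        "obligation": "transparency: users must be told they interact with AI",
        "applies_from": date(2026, 8, 2),
    },
    "minimal": {
        "obligation": "no mandatory AI Act requirements; voluntary codes",
        "applies_from": None,  # general law (e.g. GDPR) still applies
    },
}

def obligation_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]
```

A classification worksheet can then record, for each AI system in the portfolio, its assigned tier and the date its obligations begin to apply.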
Aona AI discovers every AI tool in your organisation, classifies it by risk, and gives you the governance controls the EU AI Act requires.