
What is the EU AI Act?

The world's first comprehensive legal framework for artificial intelligence, passed by the European Parliament in 2024, establishing risk-based obligations for any organization placing AI systems on the EU market.

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament in March 2024 and in force since August 2024, it applies to any provider, deployer, importer, or distributor of AI systems used in the European Union, regardless of where the organization is based.

**The Four Risk Tiers**

The Act classifies AI systems into four risk categories, each with different obligations.

**Tier 1 — Unacceptable Risk (Banned):** Certain AI applications are outright prohibited because they pose unacceptable threats to fundamental rights. Examples include social scoring systems operated by governments or public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement), subliminal manipulation techniques designed to influence behavior without a person's awareness, and systems that exploit the vulnerabilities of specific groups.

**Tier 2 — High Risk:** AI systems used in critical sectors face the most stringent requirements. High-risk domains include critical infrastructure (energy, water, transport), education and vocational training, employment and HR decisions (CV screening, performance monitoring), essential private and public services (credit scoring, insurance, benefits), biometric identification and categorization, law enforcement, migration and border control, and administration of justice. Providers of high-risk AI must implement a risk management system, data governance controls, comprehensive technical documentation, logging and record-keeping, transparency information for deployers, meaningful human oversight mechanisms, and accuracy, robustness, and cybersecurity measures. An EU conformity assessment is required before placing a high-risk system on the market.

**Tier 3 — Limited Risk:** Systems such as chatbots, emotion recognition tools, and generators of AI deepfakes carry transparency obligations. Providers and deployers must ensure users are informed that they are interacting with an AI system or viewing AI-generated content.

**Tier 4 — Minimal Risk:** The vast majority of AI applications — spam filters, AI in video games, recommendation engines — fall into this category and face no specific obligations under the Act, though voluntary codes of conduct are encouraged.
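
For teams building an internal AI inventory, the four tiers can be modeled as a simple classification step before deeper legal review. The sketch below is illustrative only: the `RiskTier` enum, the `classify_use_case` helper, and the keyword mapping are assumptions for the example, and real classification requires assessing each system against the Act's prohibitions and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Tier 1 - prohibited"
    HIGH = "Tier 2 - high risk"
    LIMITED = "Tier 3 - transparency obligations"
    MINIMAL = "Tier 4 - no specific obligations"

# Hypothetical mapping of catalogued use cases to tiers; a real inventory
# would assess each system against the Article 5 prohibitions and Annex III.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a catalogued use case."""
    # Default unknown systems to HIGH so they get reviewed rather than ignored.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("cv_screening", "customer_chatbot", "unknown_internal_tool"):
        print(f"{case}: {classify_use_case(case).value}")
```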

**Timeline: Progressive Enforcement**

The regulation applies in phases. Prohibited AI practices (Tier 1) were banned six months after entry into force, in February 2025. General-purpose AI model obligations apply from August 2025. High-risk AI system requirements for most sectors apply from August 2026, while high-risk AI embedded in regulated products (medical devices, machinery) has until August 2027. Full enforcement across all provisions is therefore expected by August 2027.

**Key Obligations for High-Risk AI**

Organizations deploying high-risk AI must:

- Establish a risk management system that identifies and mitigates foreseeable risks throughout the AI lifecycle.
- Implement data governance practices ensuring training data is relevant, representative, and free from significant errors.
- Maintain comprehensive technical documentation that can be provided to regulators on request.
- Enable automatic logging of events to facilitate post-market monitoring and investigation.
- Provide clear instructions and transparency information to human operators.
- Ensure meaningful human oversight so operators can understand, intervene in, and override AI outputs.
- Meet defined thresholds for accuracy, robustness against errors, and cybersecurity resilience.
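
As a concrete illustration of the logging and human-oversight obligations above, the sketch below shows one way a deployer might record each automated decision together with any operator override. It is a minimal sketch: the field names and the `append_record` helper are assumptions for the example, not a format prescribed by the Act.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIDecisionRecord:
    """One logged event for a high-risk AI system (illustrative fields only)."""
    system_id: str                         # internal identifier of the AI system
    timestamp: float                       # event time in epoch seconds
    input_summary: str                     # short description of the input, not raw personal data
    model_output: str                      # what the system recommended or decided
    human_reviewer: Optional[str] = None   # operator who reviewed the output, if any
    human_override: Optional[str] = None   # final decision if the operator intervened

def append_record(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append a decision record as one JSON line so events can be reconstructed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    append_record(AIDecisionRecord(
        system_id="cv-screening-v2",
        timestamp=time.time(),
        input_summary="candidate application #4821",
        model_output="reject",
        human_reviewer="hr_operator_07",
        human_override="advance to interview",
    ))
```

Append-only, structured records of this kind support both the record-keeping requirement and post-market investigation, because every output and every human intervention can be replayed in order.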

**Penalties**

Non-compliance carries significant financial penalties. Violations involving prohibited AI applications (Tier 1) can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. Violations of other obligations — including high-risk AI requirements — carry penalties of up to €15 million or 3% of global turnover. Providing incorrect or misleading information to authorities can result in fines of up to €7.5 million or 1% of global turnover.
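
The "whichever is higher" wording means the cap scales with company size. A minimal sketch of the arithmetic, assuming a hypothetical global annual turnover of EUR 2 billion:

```python
# Fine caps under the EU AI Act: the higher of a fixed amount or a share of
# global annual turnover. The turnover figure is an assumption for illustration.
def fine_cap(fixed_eur: float, turnover_share: float, global_turnover_eur: float) -> float:
    return max(fixed_eur, turnover_share * global_turnover_eur)

turnover = 2_000_000_000  # assumed EUR 2bn global annual turnover

print(fine_cap(35_000_000, 0.07, turnover))  # prohibited AI: 140,000,000 (7% applies)
print(fine_cap(15_000_000, 0.03, turnover))  # other obligations: 60,000,000
print(fine_cap(7_500_000, 0.01, turnover))   # misleading information: 20,000,000
```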

**Who It Affects**

The EU AI Act has extraterritorial reach. Any organization that places an AI system on the EU market, puts an AI system into service in the EU, or uses AI systems in a way that affects people in the EU must comply — regardless of where the organization is headquartered. This means US, UK, Australian, and Asian companies with EU customers, EU employees, or EU-facing AI systems are all within scope.

**Australian Relevance**

For Australian organizations, the EU AI Act is relevant on two fronts. First, any Australian company with EU operations, EU customers, or EU-facing AI products must comply with the Act directly — including Australian fintechs, SaaS providers, and enterprises with European subsidiaries or customer bases. Second, the EU AI Act signals the direction of travel for AI regulation globally. Australia's own AI Safety Standard (published August 2024) and forthcoming legislative changes are explicitly informed by international frameworks including the EU AI Act. Australian organizations that build EU AI Act compliance capabilities now will be better positioned for domestic regulatory requirements as they emerge.

