The world's first comprehensive AI law, establishing a risk-based framework for AI systems across the EU single market.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted on 13 June 2024, it establishes harmonised rules for the development, placement on the market, putting into service, and use of AI systems within the European Union.
The Act takes a risk-based approach, categorising AI systems into four tiers: unacceptable risk (banned), high-risk (strictly regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). This tiered framework ensures that the most stringent requirements apply to AI systems that pose the greatest potential harm to health, safety, and fundamental rights.
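To make the tiering concrete, here is a minimal Python sketch of how the four tiers might be modelled in an internal compliance tool. The enum and the example classifications are illustrative assumptions, not text from the Act; real classification turns on Article 6 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (descriptions paraphrased)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strictly regulated (conformity assessment, risk management, oversight)"
    LIMITED = "transparency obligations only"
    MINIMAL = "largely unregulated; voluntary codes of conduct"

# Hypothetical example classifications for illustration only; actual
# classification depends on Article 6 and Annex III of the Act.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```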
The regulation applies to providers of AI systems placed on the EU market regardless of whether those providers are established within the EU or in a third country. It also applies to deployers of AI systems located within the EU and to providers and deployers located outside the EU where the output produced by the AI system is used in the EU.
Key innovations include the creation of AI regulatory sandboxes, mandatory conformity assessments for high-risk AI systems, requirements for transparency and human oversight, and the establishment of the European AI Office to coordinate enforcement. The Act also introduces specific rules for general-purpose AI (GPAI) models, including additional obligations for GPAI models with systemic risk.
Penalties for non-compliance are significant: up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, up to €15 million or 3% for violations of other provisions, and up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information. For SMEs and startups, whichever of the two amounts is lower applies.
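The interplay between the fixed caps and the turnover-based caps is easiest to see with a short calculation. The tier figures below come from Article 99; the function itself is an illustrative sketch, not legal advice.

```python
def max_fine_eur(violation: str, global_turnover_eur: float, is_sme: bool) -> float:
    """Maximum administrative fine under Article 99 of the EU AI Act.

    The tier figures are from the Act; the function is a hypothetical sketch.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_violation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_cap, turnover_share = tiers[violation]
    turnover_cap = turnover_share * global_turnover_eur
    # Undertakings generally face whichever amount is higher; SMEs and
    # startups benefit from whichever is lower (Article 99(6)).
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large provider with EUR 2bn turnover committing a prohibited practice
# faces up to max(EUR 35m, 7% x EUR 2bn) = EUR 140m.
print(max_fine_eur("prohibited_practice", 2_000_000_000, is_sme=False))
```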
The EU AI Act represents a paradigm shift in technology regulation, moving from sector-specific rules to a horizontal framework that covers AI across all industries. Compliance professionals must understand that this regulation will have extraterritorial reach similar to GDPR, affecting organisations worldwide that serve the EU market.
The Act also mandates the designation of national competent authorities in each member state, the establishment of a European AI Board to facilitate consistent application of the regulation, and the creation of an advisory forum and a scientific panel of independent experts to support the European AI Office.
For organisations developing or deploying AI, the EU AI Act requires a fundamental reassessment of AI governance practices. This includes implementing risk management systems, maintaining technical documentation, ensuring data governance for training datasets, establishing quality management systems, and providing transparency to users about AI-generated content.
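One way to operationalise these obligations is a simple internal checklist. The sketch below is a hypothetical template, not an official one; its field names map loosely onto the Act's provider obligations for high-risk systems.

```python
from dataclasses import dataclass

@dataclass
class HighRiskProviderChecklist:
    """Illustrative checklist; field names are hypothetical and map
    loosely onto the Act's provider obligations for high-risk systems."""
    risk_management_system: bool = False     # Article 9
    data_governance: bool = False            # Article 10
    technical_documentation: bool = False    # Article 11
    transparency_to_deployers: bool = False  # Article 13
    human_oversight_design: bool = False     # Article 14
    quality_management_system: bool = False  # Article 17

    def gaps(self) -> list[str]:
        """Names of obligations not yet marked as satisfied."""
        return [name for name, done in self.__dict__.items() if not done]

checklist = HighRiskProviderChecklist(risk_management_system=True)
print(checklist.gaps())
```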
The regulation places particular emphasis on fundamental rights impact assessments for high-risk AI systems deployed by public bodies or by private entities providing public services. Deployers of high-risk AI systems in areas such as law enforcement, migration management, creditworthiness assessment, and life and health insurance must conduct these assessments before putting systems into service.
Prohibited AI practices: social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), manipulative or deceptive techniques, and exploitation of vulnerabilities
High-risk AI systems must undergo conformity assessment before market placement
Mandatory risk management system covering the entire AI system lifecycle
Data governance requirements for training, validation, and testing datasets
Technical documentation and record-keeping obligations
Transparency obligations: users must be informed when interacting with AI
Human oversight requirements for high-risk AI systems
Accuracy, robustness, and cybersecurity requirements
Quality management system implementation
Registration in EU public database for high-risk AI systems
General-purpose AI model providers must maintain technical documentation and provide information to downstream providers
GPAI models with systemic risk require model evaluations, adversarial testing, and serious incident reporting
AI-generated content must be labelled as such, including deepfakes and synthetic text (see the labelling sketch after this list)
Fundamental rights impact assessment for certain high-risk deployments
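On the labelling item above: Article 50 expects AI-generated content to be detectable as such in a machine-readable way, but does not prescribe a format. The sketch below shows one hypothetical way to attach a provenance record to generated output; the field names are assumptions, and standards such as C2PA are one real-world option for content provenance.

```python
import json
from datetime import datetime, timezone

def label_ai_output(content: str, generator: str) -> dict:
    """Attach a machine-readable provenance record to AI-generated content.

    A hypothetical sketch in the spirit of the Article 50 transparency
    obligations; the Act does not prescribe this particular format.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,    # the disclosure itself
            "generator": generator,  # which model/system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(label_ai_output("Quarterly summary...", "example-model"), indent=2))
```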
The EU AI Act entered into force on 1 August 2024 with a phased implementation. Bans on prohibited practices apply from 2 February 2025, GPAI obligations from 2 August 2025, and most other provisions, including the high-risk requirements, from 2 August 2026, with an extended transition to 2 August 2027 for high-risk AI embedded in regulated products.
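Because the rollout is phased, a compliance team may want a quick way to check which obligations already apply on a given date. The milestone dates below come from the Act; the helper function and its structure are illustrative.

```python
from datetime import date

# Milestone dates from the Act's phased rollout.
MILESTONES = [
    (date(2025, 2, 2), "bans on prohibited practices apply"),
    (date(2025, 8, 2), "GPAI model obligations and governance rules apply"),
    (date(2026, 8, 2), "most remaining provisions, incl. Annex III high-risk rules"),
    (date(2027, 8, 2), "extended transition ends for high-risk AI in regulated products"),
]

def obligations_in_force(on: date) -> list[str]:
    """Hypothetical helper: which milestones have passed by a given date."""
    return [label for deadline, label in MILESTONES if on >= deadline]

print(obligations_in_force(date(2026, 1, 1)))
# -> bans and GPAI obligations apply; Annex III high-risk rules not yet
```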
The EU AI Act has extraterritorial scope. It applies to any provider placing AI systems on the EU market and any deployer located in the EU, regardless of where the provider is established. It also applies where the AI output is used in the EU.
Fines can reach up to €35 million or 7% of global annual turnover for prohibited AI practices, €15 million or 3% for other violations, and €7.5 million or 1% for supplying incorrect information. Lower caps apply to SMEs and startups.
