
EU AI Act Compliance Checklist 2026: What Enterprises Must Do Now

Author: Bastien Cabirou
Date: March 25, 2026

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework governing artificial intelligence. Fully enforced from August 2026, it applies to AI providers and deployers operating in the EU — and to any organisation whose AI systems affect EU residents, regardless of where the organisation is based. For most enterprises, this means the EU AI Act applies to you now.

This checklist provides a practical compliance roadmap for enterprise security, legal, and GRC teams. Use it to assess your current compliance posture and prioritise remediation actions.

Enforcement Timeline: What's Active When

  • February 2025 — Prohibited AI practices (Article 5) became enforceable. Organisations must immediately stop using any AI systems on the prohibited list.
  • August 2025 — GPAI (General Purpose AI) model obligations took effect, including transparency requirements and technical documentation for providers of foundation models.
  • August 2026 — Full enforcement of high-risk AI system obligations, including conformity assessments, technical documentation, human oversight requirements, and accuracy/robustness standards.
  • August 2027 — Obligations for high-risk AI systems embedded in regulated products (medical devices, machinery, vehicles) take effect.

The Risk-Based Classification System

The EU AI Act uses a risk-based tier system. Your obligations depend entirely on which tier your AI systems fall into.

Prohibited AI (Unacceptable Risk)

Prohibited AI systems are banned outright. These include:

  • Social scoring systems that assess individuals based on behaviour or personal characteristics
  • Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
  • AI systems that exploit psychological vulnerabilities or use subliminal manipulation
  • AI-based emotion recognition in workplace or educational settings
  • Biometric categorisation based on sensitive characteristics (race, political opinion, sexual orientation)
  • Predictive policing AI that profiles individuals solely based on past behaviour

Penalty for prohibited AI: up to EUR 35 million or 7% of global annual turnover, whichever is higher.

High-Risk AI

High-risk AI systems face the most extensive obligations. They include AI used in:

  • Hiring, promotion, and employee management decisions
  • Credit scoring and access to financial services
  • Education and vocational training access decisions
  • Safety-critical infrastructure management
  • Medical device AI and clinical decision support
  • Law enforcement, border control, and migration processing
  • Administration of justice and democratic processes

For most enterprises, AI used in HR, finance, and customer creditworthiness assessment is high-risk under the Act.

Limited-Risk AI

Systems like chatbots and deepfake generators face transparency obligations: users must be informed they are interacting with AI, and synthetic content must be labelled.

Minimal-Risk AI

Spam filters, AI-enabled games, and most content recommendation systems fall here. No specific obligations apply, though voluntary codes of practice are encouraged.

EU AI Act Compliance Checklist

Part 1: AI Inventory and Classification

  • [ ] Complete an inventory of all AI systems in use across your organisation — including AI used by employees without IT approval (shadow AI)
  • [ ] Classify each AI system by risk tier: prohibited, high-risk, limited-risk, or minimal-risk
  • [ ] Identify whether your organisation acts as a provider (develops/places AI on the market) or deployer (uses AI for its own purposes) for each system
  • [ ] Document the intended purpose, data inputs, and decision outputs for each high-risk AI system
  • [ ] Review third-party AI tools and SaaS applications for AI features that may fall under high-risk categories
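
An inventory entry like the ones described above can be modelled as a simple structured record. The following is an illustrative sketch only; the field names, enums, and example system are assumptions for demonstration, not a schema mandated by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"   # develops/places the AI on the market
    DEPLOYER = "deployer"   # uses the AI for its own purposes

@dataclass
class AISystemRecord:
    """One row of the AI inventory from Part 1 of the checklist."""
    name: str
    vendor: str
    role: Role
    intended_purpose: str
    data_inputs: list[str]
    decision_outputs: list[str]
    risk_tier: RiskTier
    it_approved: bool = True  # False flags shadow AI

# Hypothetical example: a third-party CV-screening tool used by HR
# (high-risk, since it supports hiring decisions), adopted without IT approval.
record = AISystemRecord(
    name="CV Screener",
    vendor="ExampleVendor",
    role=Role.DEPLOYER,
    intended_purpose="Rank job applicants",
    data_inputs=["CVs", "application forms"],
    decision_outputs=["shortlist ranking"],
    risk_tier=RiskTier.HIGH,
    it_approved=False,
)
```

Capturing role, intended purpose, and approval status per system makes the later checklist steps (classification, deployer-vs-provider obligations, shadow AI remediation) straightforward queries over the inventory.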

Part 2: Prohibited AI — Immediate Action

  • [ ] Audit for any current use of AI for social scoring, subliminal manipulation, or prohibited biometric identification
  • [ ] Confirm no real-time remote biometric identification systems are deployed in public spaces without lawful authority
  • [ ] Confirm no emotion recognition AI is used in workplace monitoring
  • [ ] Decommission or modify any AI systems that fall within prohibited categories
  • [ ] Document the prohibition review and retain for audit purposes

Part 3: High-Risk AI — Technical Requirements

  • [ ] Establish a quality management system for high-risk AI development and deployment
  • [ ] Prepare technical documentation demonstrating compliance before deployment
  • [ ] Implement automatic logging of AI system events (data used, decisions made, outputs generated)
  • [ ] Test AI systems against accuracy, robustness, and cybersecurity standards
  • [ ] Conduct adversarial testing (red teaming) and document results
  • [ ] Register high-risk AI systems in the EU AI Act public database (providers only)
  • [ ] Affix CE marking to AI systems embedded in regulated products
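
The automatic-logging requirement above amounts to recording each AI decision as a timestamped, auditable event. Here is a minimal sketch of such an event log; the field names and the example system ID are assumptions, not fields prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")

def log_ai_event(system_id: str, inputs_ref: str, output: str, operator: str) -> dict:
    """Record one AI decision event with a UTC timestamp for later audit."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs_ref": inputs_ref,  # reference to the input data, not the data itself
        "output": output,
        "operator": operator,      # human responsible at the time of the decision
    }
    logger.info(json.dumps(event))  # emit as structured JSON for retention/search
    return event

# Hypothetical example: a credit-scoring decision
event = log_ai_event("credit-model-v3", "application/84712", "declined", "analyst.42")
```

Logging a reference to the inputs rather than the inputs themselves keeps the audit trail useful without duplicating personal data into log storage.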

Part 4: Human Oversight Requirements

  • [ ] Ensure human oversight is built into high-risk AI decision processes — humans must be able to understand, monitor, and override AI outputs
  • [ ] Identify and designate qualified human oversight officers for each high-risk AI system
  • [ ] Implement stop/suspend mechanisms that allow immediate shutdown of high-risk AI systems
  • [ ] Train relevant employees on AI limitations, failure modes, and when to escalate
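
The oversight and stop/suspend requirements above can be pictured as a gate wrapped around the model: humans can override individual outputs and suspend the system entirely. This is an illustrative sketch under assumed names, not a reference implementation.

```python
class OversightGate:
    """Wraps a model call so a human can override outputs or suspend the system."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.suspended = False

    def suspend(self):
        # Stop/suspend mechanism: immediately halts automated decisions.
        self.suspended = True

    def decide(self, features, human_override=None):
        if human_override is not None:
            return human_override  # human decision always takes precedence
        if self.suspended:
            raise RuntimeError("System suspended pending human review")
        return self.model_fn(features)

# Hypothetical usage with a stand-in model
gate = OversightGate(lambda features: "approve")
assert gate.decide({"score": 0.9}) == "approve"
assert gate.decide({"score": 0.9}, human_override="refer") == "refer"
gate.suspend()  # after this, automated decisions raise until reviewed
```

The key design point is that the override path bypasses the model entirely, so a qualified human can always substitute their own judgment.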

Part 5: Transparency and User Notification

  • [ ] Disclose to users when they are interacting with a chatbot or automated AI system
  • [ ] Label AI-generated content and deepfakes appropriately
  • [ ] Provide individuals with meaningful information about high-risk AI decisions that affect them
  • [ ] Update privacy notices and employee policies to reflect AI usage

Part 6: GPAI Model Obligations (if applicable)

  • [ ] If you develop or fine-tune a general-purpose AI model, prepare technical documentation per Article 53
  • [ ] Publish a sufficiently detailed summary of the content used for model training, respecting copyright
  • [ ] Implement copyright compliance policies for training data
  • [ ] If your GPAI model poses systemic risk, conduct adversarial testing and report incidents to the EU AI Office

Part 7: Governance and Documentation

  • [ ] Appoint an AI Act compliance lead or extend existing DPO/CISO responsibilities to cover AI Act
  • [ ] Develop an AI usage policy that employees must acknowledge
  • [ ] Establish an AI incident reporting and response procedure
  • [ ] Maintain AI compliance records for at least 10 years (high-risk systems)
  • [ ] Conduct annual compliance reviews as the Act's implementing regulations are updated

Who Does the EU AI Act Apply To?

The Act applies to both providers and deployers — and the deployer category is broader than many organisations realise.

You are a provider if you develop an AI system or place it on the EU market — this includes building custom AI models, fine-tuning open-source models, or developing AI features in your products.

You are a deployer if you use an AI system in the course of your professional activities — this covers most enterprises using AI for HR, customer service, credit assessment, or operations, even if they bought the AI from a third-party vendor.

Importantly: as a deployer, you cannot simply outsource compliance to your AI vendor. You remain responsible for ensuring the AI system you deploy is used in compliance with the Act, that human oversight is implemented, and that your use case matches the intended purpose the provider specified.

Fines and Enforcement

  • Prohibited AI violations: up to EUR 35 million or 7% of global annual turnover (whichever is higher)
  • High-risk AI compliance failures: up to EUR 15 million or 3% of global annual turnover
  • Provision of incorrect or misleading information to regulators: up to EUR 7.5 million or 1.5% of global annual turnover
  • SMEs and startups: proportionally reduced penalties (capped at lower absolute thresholds)

National competent authorities in each EU member state conduct enforcement, while the European AI Office oversees GPAI providers and cross-border cases. Enforcement of the prohibitions began in February 2025, and full high-risk system enforcement takes effect in August 2026.

How Aona AI Helps with EU AI Act Compliance

The first and most fundamental EU AI Act requirement — and the one most enterprises are failing — is maintaining a complete, accurate inventory of all AI systems in use. You cannot classify, govern, or document what you do not know exists.

Aona AI's AI governance platform:

  • Automatically discovers all AI tools in use across your organisation, including shadow AI tools employees use without IT approval
  • Classifies discovered AI tools against EU AI Act risk tiers, flagging high-risk and potentially prohibited systems
  • Maintains a living AI inventory that satisfies the documentation requirements of Article 17 (quality management systems)
  • Monitors AI model outputs and usage patterns for anomalous behaviour, supporting the ongoing oversight requirements
  • Generates compliance reports and audit trails that demonstrate due diligence to regulators
  • Enforces AI usage policies, blocking or alerting on use of prohibited or non-approved AI tools

For organisations facing the August 2026 high-risk AI compliance deadline, Aona AI provides the fastest path from exposure to compliance.

Frequently Asked Questions

Does the EU AI Act apply to non-EU companies?

Yes. The EU AI Act applies extraterritorially. If your AI system's outputs are used in the EU, or if you deploy AI that affects EU residents, the Act applies — regardless of where your company is based. This is similar to GDPR's territorial scope.

When does the EU AI Act fully apply to high-risk AI systems?

High-risk AI system obligations (conformity assessment, technical documentation, human oversight, logging, transparency) become fully enforceable in August 2026. The prohibited AI provisions have been enforceable since February 2025.

What is the difference between a provider and a deployer under the EU AI Act?

A provider develops an AI system and places it on the market. A deployer uses an AI system for its own professional purposes. Most enterprises are deployers — even if they use AI tools from third-party vendors. Deployers have their own set of obligations, separate from (and in addition to) those of providers.

Do we need to register our AI systems in any EU database?

Providers of high-risk AI systems must register them in the EU AI Act public database before placing them on the market. Deployers of certain high-risk AI systems (those used in public administration) must also register their use. The database is managed by the European AI Office.

How do I discover shadow AI to build our AI inventory?

Shadow AI — AI tools used by employees without IT knowledge — is the biggest practical barrier to EU AI Act compliance. Manual self-reporting fails because employees do not disclose AI usage. Aona AI automates shadow AI discovery through network traffic analysis and identity provider integration, surfacing all AI tools in use within minutes of deployment.
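
At its simplest, the traffic-based discovery described above means matching outbound hostnames against a list of known AI-tool domains and subtracting what IT has approved. The toy sketch below illustrates the idea only; the domain list is a hypothetical, deliberately incomplete stand-in, and real discovery tooling is far more involved.

```python
# Hypothetical, incomplete list of known AI-tool domains (illustration only)
KNOWN_AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(traffic_log: list[dict], approved: set[str]) -> list[str]:
    """Return AI-tool hosts seen in traffic but absent from the approved list."""
    seen = {entry["host"] for entry in traffic_log if entry["host"] in KNOWN_AI_DOMAINS}
    return sorted(seen - approved)

# Example traffic log: one approved AI tool, one non-AI site, one shadow AI tool
log = [
    {"host": "api.openai.com"},
    {"host": "example.com"},
    {"host": "claude.ai"},
]
shadow = find_shadow_ai(log, approved={"api.openai.com"})
# shadow == ["claude.ai"]
```

The output feeds directly back into the Part 1 inventory: each discovered host becomes a candidate inventory entry pending classification.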

See it in action

Want to see how Aona handles this for your team?

15-minute demo. No fluff, no sales pressure.

Book a Demo →


About the Author

Bastien Cabirou

Co-Founder & CEO

Bastien Cabirou is the Co-founder & CEO of Aona AI, where he leads the company's mission to help enterprises govern AI adoption securely and at scale. With deep expertise in AI security and enterprise risk management, he is a recognised voice on Shadow AI, AI governance frameworks, and the evolving regulatory landscape.
