GUIDE

EU AI Act Compliance Checklist: What Enterprises Must Do Before 2026

Author: Cleo Park
Date: March 26, 2026

In This Guide

  • EU AI Act Timeline: What Has Already Changed
  • Understanding the Four Risk Tiers
  • 12-Step EU AI Act Compliance Checklist
  • Penalties: The Stakes Are High
  • Australian Businesses: You Are in Scope


The EU AI Act is the world's first comprehensive legal framework governing artificial intelligence — and its full application deadline is August 2026. For enterprise legal teams, compliance officers, and CTOs, this is not a future risk: it is an active compliance programme that needs to be underway now. With penalties reaching €35 million or 7% of global annual turnover, the cost of non-compliance exceeds almost every other regulatory exposure on the corporate agenda.

This checklist covers the four risk tiers, the 12 steps every enterprise needs to complete, and what Australian businesses operating in the EU need to know.

EU AI Act Timeline: What Has Already Changed

The EU AI Act entered into force on 1 August 2024. This triggered a phased rollout:

  • **February 2025** — Prohibitions on unacceptable-risk AI systems took effect. Any enterprise deploying prohibited AI in the EU has already been in violation for over a year.
  • **August 2025** — Rules for general-purpose AI (GPAI) models, including transparency and copyright obligations, became enforceable.
  • **August 2026** — Full application. High-risk AI system obligations, conformity assessments, and registration requirements apply to all in-scope systems.

If your organisation has not already started an AI compliance programme, you are behind. The remaining window to August 2026 is a hard deadline, not a soft target.

Understanding the Four Risk Tiers

The EU AI Act classifies AI systems into four tiers. Knowing where each of your systems sits determines what compliance obligations apply.

Tier 1: Unacceptable Risk (Prohibited)

These systems are banned outright. Since February 2025, deploying them in the EU is a regulatory violation.

Examples include: AI systems that manipulate individuals through subliminal techniques, social scoring systems operated by governments, real-time remote biometric identification in public spaces (with narrow exceptions), and AI that exploits vulnerabilities related to age, disability, or socioeconomic circumstances.

Enterprise relevance: If you are using AI-driven customer retention tools with dark-pattern optimisation, or any system that scores individuals on protected characteristics, conduct an immediate review.

Tier 2: High Risk

High-risk systems face the most significant compliance burden: conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. This tier covers:

  • AI systems used in critical infrastructure (energy, water, finance, transport)
  • AI in education and vocational training (systems that determine access to education or assess students)
  • Employment decisions: CV screening, interview scoring, performance monitoring, task allocation
  • Essential services: credit scoring, insurance risk assessment, life and health insurance underwriting
  • Law enforcement and border control applications
  • Administration of justice

Enterprise relevance: If your HR team uses AI to screen candidates or monitor employee productivity, you are operating a high-risk system. If your finance function uses AI for credit decisioning, that is high-risk. These systems need immediate compliance attention.

Tier 3: Limited Risk

Limited-risk systems face transparency obligations only. Users must be informed when they are interacting with an AI.

Examples: AI chatbots and virtual assistants, AI-generated content (images, audio, video), emotion recognition systems.

Enterprise relevance: Your customer service chatbot and AI content generation tools fall here. The compliance requirement is disclosure — ensure your interface makes clear that users are interacting with AI.

Tier 4: Minimal Risk

The vast majority of AI applications fall here: spam filters, AI-powered search, recommendation engines, AI features in productivity tools. No mandatory obligations apply, though the Act encourages voluntary codes of conduct.

12-Step EU AI Act Compliance Checklist

Step 1: Conduct an AI System Inventory

You cannot comply with what you cannot see. Map every AI system in use across the enterprise — including vendor-supplied tools, SaaS platforms with embedded AI features, and internally built models. This includes shadow AI: AI tools employees are using without IT approval.

Aona's [AI Governance platform](/governance) provides continuous AI discovery across cloud, network, and endpoint — giving compliance teams a live inventory to work from.
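As a minimal sketch, each inventory entry can be a structured record per system. The field names below are illustrative assumptions, not drawn from the Act or from any specific platform:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI system inventory (illustrative schema)."""
    name: str
    vendor: str                      # "internal" for in-house models
    business_function: str           # e.g. "HR", "finance", "customer service"
    it_approved: bool                # False marks shadow AI
    eu_exposure: bool                # does the system affect people in the EU?
    risk_tier: str = "unclassified"  # filled in during Step 2

inventory = [
    AISystemRecord("CV screener", "VendorX", "HR",
                   it_approved=True, eu_exposure=True),
    AISystemRecord("Browser chatbot plugin", "unknown", "sales",
                   it_approved=False, eu_exposure=True),
]

# Surface shadow AI: tools in use without IT approval
shadow_ai = [r.name for r in inventory if not r.it_approved]
```

Even a spreadsheet with these columns is a workable starting point; the essential property is that every system, approved or not, has exactly one row.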

Step 2: Classify Each System by Risk Tier

For each system in your inventory, determine which tier applies. Use the EU AI Act's Annex III as a reference for high-risk classifications. Consider whether your use of a system (not just the system itself) changes its classification — a general-purpose AI model used for CV screening is high-risk in that context.

Document your classification rationale. Regulators will expect to see it.
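A classification workflow can be sketched as a lookup keyed on the use case rather than the underlying model. The category names below loosely paraphrase Annex III headings and are illustrative only; a real classification must be made against the Act's text with legal review:

```python
# Illustrative mapping of use-case categories to EU AI Act risk tiers.
# These sets are assumptions for the sketch, not the Act's own taxonomy.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {
    "employment", "education", "credit_scoring", "insurance_underwriting",
    "critical_infrastructure", "law_enforcement", "border_control", "justice",
}

def classify(use_case: str, interacts_with_humans: bool = False) -> str:
    """Return a provisional risk tier for a given *use* of a system."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if interacts_with_humans:
        return "limited"   # transparency obligations only
    return "minimal"
```

Note that the function takes the use case, not the system: the same general-purpose model is "high" when used for employment screening and "minimal" when used for spam filtering, which mirrors the context-dependence described above.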

Step 3: Assign AI Risk Owners

Every AI system in scope needs a named owner — a specific individual accountable for compliance obligations. For high-risk systems, this person must have sufficient authority to halt deployment if compliance cannot be maintained. Define accountability in your AI governance policy and update role descriptions accordingly.

Step 4: Complete Conformity Assessments for High-Risk Systems

High-risk AI systems require a conformity assessment before deployment. This assessment must verify that the system meets EU AI Act requirements: data quality, technical documentation, accuracy, robustness, cybersecurity, and human oversight. For most enterprise systems, this is a self-assessment process — but it must be documented and auditable.

Third-party conformity assessment is mandatory for certain categories (e.g., remote biometric identification, AI in critical infrastructure). Engage your legal counsel to determine which pathway applies.

Step 5: Implement Human Oversight Mechanisms

High-risk AI systems must be designed and deployed to allow effective human oversight. This means humans must be able to understand what the system is doing, intervene when necessary, and override its outputs. Document the oversight process — who reviews AI-driven decisions, at what frequency, and how overrides are logged.

For employment AI tools, this is especially important: AI-assisted candidate rejections should be reviewable by a human before they are acted upon.

Step 6: Establish AI Documentation and Record-Keeping

The EU AI Act mandates technical documentation for high-risk systems that covers system design, training data, testing methods, accuracy metrics, and known limitations. This documentation must be maintained and updated throughout the system's lifecycle.

For all in-scope systems, maintain records of: system inventory entries, risk classifications, conformity assessment outcomes, oversight procedures, and incident logs. Records must be retained for a minimum of 10 years for high-risk systems.

Step 7: Conduct Bias and Accuracy Testing

High-risk systems must be tested for bias and accuracy before deployment and on an ongoing basis. Testing must cover the full range of individuals and groups that the system will affect — including testing for disparate impact across gender, ethnicity, age, and disability status.

Document your testing methodology, datasets used, results, and any mitigation measures applied. If your system cannot be adequately tested, it should not be deployed.
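One common starting metric for disparate-impact testing is the ratio of selection rates between groups. This is a sketch of that single metric, not a full testing methodology; the thresholds you apply (e.g. the US "four-fifths rule") are conventions to agree with counsel, not EU AI Act requirements:

```python
def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. candidates passed by a screener)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy outcomes for two groups: 1 = selected, 0 = rejected
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 selected
group_b = [1, 0, 1, 0, 1, 0, 0, 1]   # 4 of 8 selected

ratio = disparate_impact_ratio(group_a, group_b)  # 0.5 / 0.75, roughly 0.67
```

A ratio well below parity is a signal to investigate, not an automatic verdict; document the metric, the threshold chosen, and the mitigation applied.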

Step 8: Set Up Incident Reporting Processes

Operators of high-risk AI systems must report serious incidents to the relevant national competent authority. A serious incident is one that results in death, serious health harm, significant property damage, or serious adverse societal effects.

Establish an internal process for: detecting AI system malfunctions, escalating potential incidents to your compliance team, assessing whether a notifiable threshold has been met, and filing reports within the required timeframe. The reporting obligation is active from August 2026.
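The triage step above can be sketched as a simple routing function. The criteria set paraphrases the serious-incident definition in this article; the exact legal thresholds and deadlines should come from the Act's text and your counsel:

```python
# Effects that meet the serious-incident threshold (paraphrased, illustrative)
SERIOUS_INCIDENT_CRITERIA = {
    "death",
    "serious_health_harm",
    "significant_property_damage",
    "serious_societal_harm",
}

def triage(incident_effects: set) -> str:
    """Route a detected AI malfunction: notify regulator, escalate, or log."""
    if incident_effects & SERIOUS_INCIDENT_CRITERIA:
        return "report_to_authority"    # file within the Act's deadline
    if incident_effects:
        return "escalate_to_compliance" # compliance team assesses further
    return "log_only"
```

The value of encoding triage like this is consistency: every malfunction takes the same path, and the decision (and its inputs) can be logged for the incident record.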

Step 9: Train Staff on AI Policies

Every person who uses, manages, or oversees an AI system in scope of the EU AI Act must have sufficient AI literacy — a specific obligation introduced by Article 4. This means formal training on: what AI systems are in use, how they work, their limitations and risks, applicable policies, and how to escalate concerns.

Training records should be documented and refreshed when systems change. A one-time induction is not sufficient.

Step 10: Implement Technical Robustness Measures

High-risk AI systems must be resilient to errors, faults, and cyberattacks. Technical requirements include: error detection mechanisms, failover processes, protection against adversarial inputs, and data integrity controls. Work with your security and engineering teams to assess each high-risk system against these requirements and document remediation plans where gaps exist.

Step 11: Register High-Risk Systems in the EU Database

Providers of high-risk AI systems — and deployers in certain categories — are required to register their systems in the EU-wide AI database maintained by the European Commission. Registration includes system name, provider details, intended purpose, risk category, and conformity assessment information.

Check current guidance from the European AI Office on which deployment categories require deployer registration in addition to provider registration; this guidance is still being refined as implementation matures.

Step 12: Determine Your Role and Engage EU Counsel

The EU AI Act applies based on where AI systems are deployed and used, not where the company is headquartered. If your AI system affects people in the EU, the Act applies — regardless of whether your organisation is based in Brussels, Sydney, or San Francisco. Engage EU-qualified legal counsel to assess your specific obligations, particularly around:

  • Whether you are classified as a "provider," "deployer," or both
  • Applicable national competent authority in each EU member state
  • Cross-border data transfers and interaction with GDPR obligations
  • Contractual obligations to pass down compliance requirements to AI vendors

Penalties: The Stakes Are High

Non-compliance with the EU AI Act carries some of the largest regulatory fines in history:

  • **Prohibited AI systems:** Up to **€35 million** or **7% of global annual turnover** (whichever is higher)
  • **High-risk system violations:** Up to **€15 million** or **3% of global annual turnover**
  • **Providing incorrect information to regulators:** Up to **€7.5 million** or **1.5% of global annual turnover**

These are not theoretical maximums — they are enforcement ceilings applied by national authorities whose mandates include active investigation. The EU's track record on GDPR enforcement (€4.5 billion in fines since 2018) signals that AI Act enforcement will be taken seriously.

Australian Businesses: You Are in Scope

If your Australian business deploys AI systems that affect people in the EU — or if you operate a business in the EU, serve EU customers, or have European employees — the EU AI Act applies to you. Regulatory jurisdiction is determined by where the effect occurs, not where the company is incorporated.

This matters because many Australian enterprises assume they are exempt from EU regulation. They are not. Australian financial services firms serving European investors, Australian SaaS companies with EU business customers, and Australian retailers with EU-facing e-commerce operations are all potentially in scope.

The interaction between the EU AI Act and Australia's own emerging AI governance landscape — including the [Voluntary AI Safety Standard](/compliance) and proposed Privacy Act amendments covering automated decision-making — means compliance programmes need to address both frameworks. A unified AI governance strategy reduces duplication and ensures neither framework is overlooked.

How Aona Helps

Aona's [AI Governance platform](/governance) is built specifically for enterprises navigating complex AI compliance requirements. It provides:

  • **Continuous AI system discovery** — live inventory of all AI tools in use, including shadow AI
  • **Risk classification workflows** — structured classification against EU AI Act tiers
  • **Policy enforcement** — controls that prevent non-compliant AI use in real time
  • **Audit-ready documentation** — automatically maintained records for conformity assessments and incident reporting
  • **Board-level reporting** — dashboards that surface AI compliance posture for executives and regulators

Start with a free trial of Aona Governance and see your full AI inventory within 24 hours.

You can also download our [EU AI Act Compliance Checklist template](/resources/templates/eu-ai-act-checklist) — a structured worksheet for tracking your compliance programme against each obligation.

Frequently Asked Questions

What is the EU AI Act? The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level and imposes requirements ranging from transparency disclosures to full conformity assessments. It entered into force on 1 August 2024 and reaches full application in August 2026.

When do EU AI Act obligations apply? Key milestones: prohibitions on unacceptable-risk AI took effect February 2025; GPAI model rules took effect August 2025; full application (high-risk system obligations) applies from August 2026.

What are the maximum penalties under the EU AI Act? Up to €35 million or 7% of global annual turnover for deploying prohibited AI systems. High-risk violations carry penalties of up to €15 million or 3% of global turnover.

Do Australian companies need to comply with the EU AI Act? Yes, if they deploy AI systems that affect individuals in the EU. The Act's scope is based on where the system is used and its effects — not where the company is headquartered. Australian businesses serving EU customers or employees must assess their obligations under the Act.

See it in action

Want to see how Aona handles this for your team?

15-minute demo. No fluff, no sales pressure.

Book a Demo →

Stay ahead of Shadow AI

Get the latest AI governance research in your inbox

Weekly insights on Shadow AI risks, compliance updates, and enterprise AI security. No spam.

About the Author

Cleo Park avatar

Cleo Park

Customer Success Lead

Cleo leads customer success at Aona AI, partnering with enterprise teams to achieve measurable outcomes from AI governance programs. She specialises in translating complex AI compliance requirements into practical, actionable frameworks.

More articles by Cleo

Ready to Secure Your AI Adoption?

Discover how Aona AI helps enterprises detect Shadow AI, enforce security guardrails, and govern AI adoption across your organization.