The world's first comprehensive AI regulation is here. Is your organisation ready?
The EU AI Act introduces risk classification, conformity assessment, transparency, and AI literacy obligations for organisations that develop or deploy AI. Aona provides the visibility, governance, and tooling you need to comply.
The EU AI Act introduces obligations for both AI providers and deployers, with penalties of up to €35 million or 7% of global annual turnover for the most serious violations.
The EU AI Act establishes four risk categories: unacceptable (banned), high-risk (strict obligations), limited risk (transparency duties), and minimal risk (no specific rules). Organisations must assess every AI system they deploy or develop against these categories. High-risk AI, such as systems used in employment, education, law enforcement, or critical infrastructure, faces the most stringent requirements.
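The four-tier triage above can be sketched in a few lines of code. This is a purely illustrative example, assuming a hypothetical inventory of tools tagged by declared use case; the keyword lists are examples for demonstration, not a legal determination of any system's category:

```python
# Illustrative sketch of EU AI Act risk triage for an AI inventory.
# The four tiers follow the Act's categories; the use-case keywords
# below are hypothetical examples, not legal advice.

PROHIBITED = {"social scoring", "subliminal manipulation"}         # unacceptable: banned
HIGH_RISK = {"candidate screening", "exam proctoring",
             "credit scoring", "critical infrastructure control"}  # Annex III-style areas
LIMITED_RISK = {"customer chatbot", "content generation"}          # transparency duties

def classify(use_case: str) -> str:
    """Map a declared use case to an EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high-risk"
    if use_case in LIMITED_RISK:
        return "limited risk"
    return "minimal risk"

print(classify("candidate screening"))  # high-risk
```

In practice each tier drives a different workflow: anything in the first set is blocked outright, the high-risk set is routed to assessment and oversight, and the limited-risk set is flagged for disclosure requirements.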
AI systems that interact with people must clearly disclose that the person is interacting with AI. This applies to chatbots, AI-generated content, and emotion recognition systems. Deepfakes must be labelled as artificially generated. Organisations deploying AI must ensure transparency requirements are met at the point of interaction.
High-risk AI systems must undergo conformity assessments before being placed on the market or put into service. These assessments evaluate risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Some categories require third-party assessment by a notified body.
Article 4 requires that all staff dealing with AI systems have sufficient AI literacy — an understanding of AI capabilities, limitations, risks, and the regulatory context. This applies to deployers, not just developers. Organisations must implement training programmes proportionate to the AI systems in use and the roles of the individuals involved.
High-risk AI systems must have automatic logging capabilities to ensure traceability. Deployers must keep the logs generated by the system for a period appropriate to its intended purpose, and for at least six months unless EU or national law provides otherwise. These logs must be available to market surveillance authorities upon request and are essential for post-market monitoring.
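The six-month minimum can be expressed as a simple retention check. This is a minimal sketch, assuming log records carry a creation date; the function name and the 183-day approximation of "six months" are illustrative choices, not part of the regulation's text:

```python
# Sketch: check whether an AI system log record is still inside the
# EU AI Act's minimum retention window for deployer-held logs.
# MIN_RETENTION approximates "at least six months" as 183 days.
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)

def must_retain(log_created: date, today: date) -> bool:
    """True while a log record is within the mandatory retention window."""
    return today - log_created < MIN_RETENTION

print(must_retain(date(2025, 1, 1), date(2025, 3, 1)))   # True
print(must_retain(date(2024, 1, 1), date(2025, 3, 1)))   # False
```

Note that six months is only the floor: a record that clears this check may still need to be kept longer where the system's intended purpose or other applicable law demands it.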
You cannot classify AI risk or meet transparency obligations for AI tools you do not know about. Shadow AI is among the biggest compliance gaps for the EU AI Act.
Employees adopt AI tools without assessing their risk category under the EU AI Act. An AI tool used for candidate screening is high-risk, but if adopted by an HR team without IT oversight, it may never be identified, classified, or subjected to the assessments and oversight the Act requires.
Market surveillance authorities can request a complete inventory of AI systems deployed. Without visibility into Shadow AI, organisations cannot demonstrate compliance or even identify which AI systems are subject to the Act's requirements.
Customer-facing teams using AI chatbots, AI-generated emails, or AI-assisted responses may fail to disclose AI involvement — a direct transparency violation. Shadow AI tools deployed without governance are unlikely to include required disclosures.
Purpose-built AI governance that addresses the EU AI Act's requirements for deployers of AI systems.
Aona automatically discovers every AI tool in use across your organisation and provides the complete inventory regulators expect. Map each tool to EU AI Act risk categories, track which require conformity assessments, and identify prohibited AI practices before they become enforcement actions.
Generate reports that demonstrate EU AI Act compliance to market surveillance authorities. Aona tracks AI system deployments, risk classifications, transparency obligations, and logging requirements — providing audit-ready documentation at any time.
Aona tests AI agents and autonomous AI systems for security vulnerabilities, accuracy, and robustness — supporting the technical requirements of conformity assessments for high-risk AI. Identify risks before deployment and maintain ongoing monitoring.
Define and enforce AI usage policies that map directly to EU AI Act risk categories. Block prohibited AI practices, require approval workflows for high-risk AI deployments, and enforce transparency disclosures — all automatically and in real time.
Build your AI inventory, classify risk, and enforce policies aligned with the EU AI Act — all from one platform.