
AI System Inventory: The EU AI Act Requirement Every Organisation Needs to Meet

Author: Bastien Cabirou
Date: March 19, 2026


If your organisation uses AI tools - and in 2026 almost every organisation does - you're likely sitting on a compliance time bomb. The EU AI Act, Australian AI safety standards, and emerging global frameworks all require organisations to know what AI systems they're running, who's using them, and what data those systems are handling.

The problem? Most organisations have no idea.

A 2025 survey found that 78% of enterprise IT teams couldn't accurately name all the AI tools their employees were actively using. Shadow AI - the use of unsanctioned AI tools outside IT's visibility - has made maintaining a proper AI system inventory nearly impossible through manual processes alone.

This guide covers what an AI system inventory is, why it's now a legal requirement in many jurisdictions, and how to build one before regulators (or attackers) force the issue.

What Is an AI System Inventory?

An AI system inventory is a structured catalogue of every artificial intelligence system in use across your organisation. This includes:

  • **Sanctioned tools** approved and deployed by IT (Microsoft Copilot, ChatGPT Enterprise, Salesforce Einstein)
  • **Shadow AI tools** employees use independently (personal ChatGPT accounts, Claude, Gemini, Perplexity)
  • **Embedded AI** within existing SaaS platforms (Grammarly, Notion AI, Zoom AI companion)
  • **Custom-built models** developed internally or by third-party vendors
  • **AI agents and automations** running scheduled or event-triggered tasks

For each system, a complete inventory should capture: the AI tool name and version, the data it processes, who has access, what decisions it influences, the vendor and contractual data handling terms, and the risk classification.
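
To make these fields concrete, it helps to treat each inventory entry as a structured record rather than a row in a spreadsheet. A minimal sketch in Python - the field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative schema)."""
    tool_name: str               # e.g. "ChatGPT Enterprise"
    version: str                 # vendor-reported version or tier
    data_processed: list[str]    # data categories, e.g. ["customer emails"]
    access: list[str]            # teams or individuals using the tool
    decisions_influenced: str    # what the tool's output feeds into
    vendor: str
    data_handling_terms: str     # summary of contractual terms (DPA, retention)
    risk_classification: str     # "critical" / "high" / "medium" / "low"
    last_reviewed: str           # ISO date of the last inventory review
```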

Why an AI System Inventory Is a Legal Requirement

EU AI Act

The EU AI Act, which entered into force in August 2024 and applies fully from 2026, explicitly requires organisations deploying or using AI systems to maintain documentation of those systems. For high-risk AI systems - covering areas like employment decisions, credit scoring, biometric identification, and critical infrastructure - detailed technical documentation and logging are mandatory.

Organisations that can't demonstrate what AI systems they run face fines of up to 15 million euros or 3% of global annual turnover, whichever is higher.

Australia's AI Safety Standards

The Australian government's voluntary AI Safety Standard (2024) and the incoming mandatory framework for high-risk AI use cases both emphasise transparency and accountability - which starts with knowing what you have. The Australian Signals Directorate (ASD) has also flagged AI governance as a priority for organisations subject to the Essential Eight framework.

ISO 42001

ISO 42001, the international standard for AI management systems published in December 2023, requires organisations to establish, implement, maintain, and continually improve an AI management system - which fundamentally depends on a complete inventory of AI systems in scope.

GDPR and Australian Privacy Act

Any AI tool that processes personal data falls under privacy regulations. Without an inventory, you can't conduct the required Data Protection Impact Assessments (DPIAs), can't respond accurately to data subject access requests, and can't demonstrate compliance. Under Australia's Privacy Act reforms, personal information processed by third-party AI tools remains the responsibility of the collecting organisation.

The Shadow AI Problem

Here's the uncomfortable reality: traditional approaches to building an AI system inventory don't work.

A manual process - asking department heads to list their AI tools, reviewing procurement records, auditing approved software lists - will miss the majority of AI usage in your organisation. Employees routinely sign in to AI tools with personal accounts, run browser extensions with embedded AI, or access AI features within platforms IT didn't realise had AI capabilities.

Research consistently shows that the typical employee uses 3 to 5 AI tools that IT has no visibility into. In larger organisations, the number of undiscovered AI touchpoints runs into the hundreds.

This creates a direct compliance problem. If your AI system inventory is built on self-reporting and procurement data alone, it will be incomplete - and an incomplete inventory is no protection during a regulatory audit or data breach investigation.

How to Build an Accurate AI System Inventory

Step 1: Discover What's Already in Use

Before you can catalogue AI systems, you need to find them. This requires active discovery across four vectors:

**Network and DNS traffic analysis** - AI tools make outbound API calls to recognisable endpoints (api.openai.com, claude.ai, gemini.google.com, etc.). Analysing outbound traffic reveals which tools are in active use, even through personal accounts.
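
As an illustration, a few lines of Python can flag known AI endpoints in an exported DNS query log. The domain list and log format here are assumptions - extend the list and adjust the parsing to match your resolver's export:

```python
# Scan a DNS query log for requests to known AI endpoints.
# Assumes one queried domain per line; adapt parsing to your log format.
AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "api.anthropic.com", "perplexity.ai",
}

def find_ai_queries(log_path: str) -> set[str]:
    """Return the AI-related domains seen in the log."""
    hits = set()
    with open(log_path) as log:
        for line in log:
            domain = line.strip().lower()
            # Match the domain itself and any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits.add(domain)
    return hits
```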

**Browser extension and plugin audits** - Many AI tools operate entirely through browser extensions (Grammarly, Otter.ai, Monica AI). A standard software audit won't catch these.
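
A rough sketch of what such an audit can look like on a managed fleet: walk each Chrome profile's Extensions folder and match extension manifest names against AI-related keywords. The profile path and keyword list are illustrative assumptions:

```python
import json
from pathlib import Path

# Illustrative keywords; extend with the AI tools relevant to you.
AI_KEYWORDS = ("grammarly", "otter", "chatgpt", "copilot", "gemini", "monica")

def audit_chrome_extensions(profile_dir: Path) -> list[str]:
    """Return names of installed Chrome extensions that look AI-related."""
    findings = []
    # Chrome stores extensions as Extensions/<id>/<version>/manifest.json.
    for manifest in profile_dir.glob("Extensions/*/*/manifest.json"):
        try:
            name = json.loads(manifest.read_text(errors="ignore")).get("name", "")
        except (json.JSONDecodeError, OSError):
            continue
        if any(keyword in name.lower() for keyword in AI_KEYWORDS):
            findings.append(name)
    return findings

# Example (macOS default profile path; adjust per OS and profile):
# audit_chrome_extensions(Path.home() / "Library/Application Support/Google/Chrome/Default")
```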

**SaaS platform AI feature reviews** - Platforms like Salesforce, HubSpot, Notion, Zoom, and Microsoft 365 have rolled out AI features that may be active by default. Review each platform's AI capabilities and check whether they're enabled.

**Employee disclosure programs** - Create a safe, non-punitive process for employees to disclose the AI tools they use. Make it easy and frame it as support, not surveillance.

Step 2: Classify Each AI System by Risk

Not all AI tools carry the same compliance weight. Once discovered, classify each system (a rough rule-based sketch follows the list):

  • **Critical risk**: AI used in decisions affecting people (hiring, lending, performance management, access control)
  • **High risk**: AI processing sensitive or personal data at scale (customer data analysis, medical records, financial data)
  • **Medium risk**: AI used in internal workflows with limited personal data exposure
  • **Low risk**: AI used for general productivity with no sensitive data input (summarisation of public content, coding assistance with sanitised data)
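
One hedged way to apply these tiers consistently is a rule-based pass, assuming discovery captured a few boolean attributes per system (the attribute names are illustrative):

```python
def classify_risk(affects_people: bool,
                  personal_data_at_scale: bool,
                  limited_personal_data: bool) -> str:
    """Map discovery attributes to the four risk tiers above (illustrative rules)."""
    if affects_people:             # hiring, lending, performance, access control
        return "critical"
    if personal_data_at_scale:     # customer, medical, or financial data
        return "high"
    if limited_personal_data:      # internal workflows, limited exposure
        return "medium"
    return "low"                   # general productivity, no sensitive input
```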

Step 3: Document Data Flows for Each System

For each catalogued AI tool, document the following (a sample record sketch appears after the list):

  • What data types are entered (personal data, confidential business data, intellectual property)
  • Where that data is stored and processed (vendor's servers, geography, retention period)
  • Whether data is used to train vendor models
  • Who has access to outputs
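
One way to keep these answers machine-readable is a per-tool data-flow record; the keys below simply mirror the four bullets above, and the sample values are hypothetical:

```python
# Hypothetical data-flow record for one catalogued tool.
data_flow = {
    "tool": "Notion AI",
    "data_types": ["internal documentation"],        # what is entered
    "storage": {"location": "vendor cloud", "retention": "unknown"},
    "used_for_model_training": False,                # verify against the DPA
    "output_access": ["Marketing team"],             # who sees the outputs
}
```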

This documentation forms the basis of your DPIA obligations under GDPR and the Australian Privacy Act.

Step 4: Implement Ongoing Monitoring

An AI system inventory isn't a one-time audit - it's a living document. New AI tools appear weekly. Existing SaaS platforms add AI features constantly. Employees change their tool usage.

Effective ongoing monitoring requires automated discovery that continuously scans for new AI touchpoints rather than relying on periodic manual audits. Without automation, your inventory will be out of date within weeks.
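
The diffing logic at the heart of that automation is simple; here is a minimal sketch, assuming your discovery step produces a set of tool names and your inventory exposes the catalogued ones:

```python
def inventory_drift(discovered: set[str], inventoried: set[str]) -> dict[str, set[str]]:
    """Compare discovered AI tools against the inventory and report drift."""
    return {
        "new_uncatalogued": discovered - inventoried,  # triage and classify these
        "possibly_retired": inventoried - discovered,  # confirm before removing
    }

# Example run against a hypothetical discovery result:
drift = inventory_drift({"ChatGPT", "Grammarly", "Monica AI"}, {"ChatGPT", "Grammarly"})
print(drift["new_uncatalogued"])  # {'Monica AI'}
```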

Step 5: Establish a Governance Framework

Once you know what AI systems exist and how they're being used, you need a governance framework to manage them (a minimal policy-check sketch follows the list):

  • An AI acceptable use policy that defines what tools are approved and how they can be used
  • A process for employees to request approval for new AI tools
  • Clear data handling rules (what can and cannot be entered into AI tools)
  • Regular review cycles to update classifications as tools evolve
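
In code, the acceptable-use rules reduce to a lookup from a tool's approval status to the data classes it may receive. The tiers and data classes below are placeholders for your own policy:

```python
# Illustrative policy: which data classes each approval tier may receive.
POLICY = {
    "approved_enterprise": {"public", "internal", "confidential"},
    "approved_basic": {"public", "internal"},
    "unsanctioned": set(),  # shadow AI: no company data permitted
}

def is_use_allowed(tool_tier: str, data_class: str) -> bool:
    """Check a proposed AI use against the acceptable use policy."""
    return data_class in POLICY.get(tool_tier, set())

assert is_use_allowed("approved_basic", "internal")
assert not is_use_allowed("unsanctioned", "confidential")
```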

What an AI System Inventory Should Look Like

A well-structured AI system inventory entry might look like this:

  • **Tool**: ChatGPT (personal accounts, OpenAI)
  • **Status**: Shadow AI - unsanctioned
  • **Data processed**: Customer emails (confirmed), internal documentation (suspected)
  • **Risk classification**: High - personal data exposure
  • **Vendor data policy**: Data may be used for model training unless opted out (Enterprise tier required)
  • **Users**: Estimated 40+ employees across Sales, Support, Marketing
  • **Action required**: Enforce Enterprise tier with DPA, or block personal account access
  • **Last reviewed**: March 2026

The Cost of Not Having an Inventory

The consequences of inadequate AI governance aren't theoretical. In 2025, multiple European companies received GDPR enforcement actions related to employee AI tool usage that had not been assessed under their data protection frameworks. Australian regulators have signalled that AI-related privacy breaches will be treated with the same seriousness as traditional data breaches under the forthcoming Privacy Act reforms.

Beyond regulatory risk, there's the operational risk: an employee pasting customer data into an AI tool that retains and trains on that data creates a data breach obligation. Without an inventory, you won't know this happened until after the damage is done.

How Aona Helps

Aona was built to solve the AI system inventory problem. Instead of relying on self-reporting or procurement records, Aona automatically discovers AI tool usage across your organisation through network-level visibility - including personal accounts, browser extensions, and AI features embedded in existing SaaS tools.

Aona provides a continuously updated AI system inventory showing:

  • Every AI tool in active use across your organisation
  • Which employees are using each tool
  • What data categories are being processed
  • Risk classifications aligned to EU AI Act, ISO 42001, and Australian frameworks
  • Policy enforcement to block or alert on non-compliant usage

For organisations that need to demonstrate AI governance to regulators, customers, or boards, Aona's inventory output provides the audit-ready documentation required.

Ready to see what AI systems are actually running in your organisation? [Book a demo](/book-demo) to get your AI system inventory in under 24 hours.

Ready to Secure Your AI Adoption?

Discover how Aona AI helps enterprises detect Shadow AI, enforce security guardrails, and govern AI adoption across your organisation.