A practical guide defining what data can and cannot be used with AI tools. 4-level classification system with definitions, examples, and explicit AI usage rules for each level.
Updated March 2026 · 4 classification levels · GDPR, ISO 27001, PCI DSS aligned
Employees cannot make good data handling decisions with AI tools if they don't know what data is allowed where. Most AI-related data incidents are not the result of malicious behaviour — they are the result of employees not knowing that the data they are pasting into an AI tool is sensitive, or not understanding which AI tools are approved for which data types. A clear data classification guide is the foundation of enforceable AI governance.
Each classification level below includes a definition, representative examples, and explicit AI usage rules. Customise the examples for your organisation's specific data types and systems.
Information that is intentionally made available to the public or that would cause no harm if disclosed. This is the only classification level that can be freely used with any AI tool without additional controls.
Examples of Level 1 — Public Data
AI Tool Rules — Level 1
Note: Even with Public data, do not submit information that is not yet publicly released (upcoming announcements, embargoed content) — classify embargoed content as Internal or above until the embargo lifts.
A data classification guide only reduces risk if employees understand it and technical controls enforce it. Follow these steps to implement classification effectively.
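Technical enforcement can start small: scan outgoing text for sensitivity markers before it reaches an AI tool, and block anything above Public. Below is a minimal, illustrative sketch of that idea in Python. The level names match this guide, but the detection patterns, function names, and blocking logic are assumptions for illustration; production systems use far richer detectors (named-entity recognition, document fingerprinting, exact-match hashes of known records).

```python
import re

# Illustrative detection patterns mapped to classification levels.
# These regexes are examples only, not a complete or reliable detector.
PATTERNS = {
    "Restricted": [
        re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like digit runs (PCI DSS scope)
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN format
    ],
    "Confidential": [
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses (personal data under GDPR)
        re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # explicit document markings
    ],
}

def classify(text: str) -> str:
    """Return the highest classification level whose patterns match."""
    for level in ("Restricted", "Confidential"):  # most to least sensitive
        if any(p.search(text) for p in PATTERNS[level]):
            return level
    return "Public"

def check_submission(text: str) -> bool:
    """Allow only Public-classified text through to an AI tool."""
    level = classify(text)
    if level != "Public":
        print(f"Blocked: prompt appears to contain {level} data")
        return False
    return True
```

A sketch like this catches the obvious cases (a pasted card number, an explicit CONFIDENTIAL marking) but misses context-dependent sensitivity, which is why pattern matching is a floor for enforcement, not a substitute for employee understanding of the classification levels.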
A classification policy requires technical enforcement to be effective. Aona detects when employees submit Confidential or Restricted data to AI tools, blocks prohibited interactions in real time, and provides the visibility to know whether your data classification rules are actually working in practice.