Your employees are using AI tools with EU personal data. Is it lawful?
AI tools processing personal data without a lawful basis, a DPIA, or a data processing agreement put your organisation at risk of GDPR enforcement. Aona discovers Shadow AI, blocks personal data in prompts, and provides the audit trail regulators expect.
GDPR applies to all processing of EU personal data — including when employees use AI tools that were never assessed by your DPO.
Article 6 of GDPR requires a lawful basis for every instance of personal data processing. When employees use AI tools with personal data, the organisation must identify the applicable legal ground, such as consent, legitimate interests, or contractual necessity. Most consumer AI tools do not provide the contractual or technical framework needed to establish a valid lawful basis.
Article 35 mandates a DPIA when processing is likely to result in high risk to individuals. AI tools that process personal data at scale, involve profiling, or use new technologies trigger this requirement. A DPIA must assess the necessity and proportionality of the processing, evaluate risks to data subjects, and identify measures to mitigate those risks.
Article 22 grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Organisations using AI for hiring, credit scoring, or customer segmentation must ensure human oversight, provide explanations of the logic involved, and allow individuals to contest automated decisions.
Article 5(1)(c) requires that personal data be adequate, relevant, and limited to what is necessary. When employees paste customer records, emails, or support tickets into AI tools, they often share far more personal data than required for the task. Data minimisation principles must be enforced technically, not just through policy.
Chapter V of GDPR restricts transfers of personal data to countries outside the EU/EEA that do not provide adequate protection. Many AI tools — including ChatGPT, Claude, and Gemini — are operated by US-based companies. Organisations must ensure appropriate safeguards such as Standard Contractual Clauses (SCCs) or adequacy decisions are in place before personal data is processed by these services.
Employees are adopting AI tools faster than privacy teams can assess them. These tools frequently process EU personal data without the safeguards GDPR requires.
Employees routinely use ChatGPT, Gemini, and other US-based AI tools to process customer emails, support tickets, and internal documents containing EU personal data — often without Standard Contractual Clauses or any data processing agreement in place.
By default, ChatGPT stores conversation data for model training. When employees enter personal data of EU residents into ChatGPT, that data may be retained, processed for purposes beyond the original intent, and stored outside the EU — all without a lawful basis.
Many AI tools used by employees lack proper Data Processing Agreements (DPAs) as required by Article 28. Without a DPA, the organisation has no contractual control over how the AI vendor processes personal data, no audit rights, and no guarantees on data deletion or sub-processor management.
Purpose-built AI security that addresses the GDPR compliance challenges of enterprise AI adoption.
Aona maps every AI tool in use across your organisation and identifies which ones are processing EU personal data. Get a complete inventory of AI tools, their data processing activities, and whether they have appropriate DPAs and SCCs in place.
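To make the idea concrete, here is a minimal sketch in Python of what one inventory entry might capture. The record type, field names, and the transfer_gap check are illustrative assumptions, not Aona's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """Illustrative inventory entry for one discovered AI tool."""
    name: str                         # e.g. "ChatGPT"
    vendor_region: str                # where the vendor processes data, e.g. "US"
    processes_personal_data: bool     # observed from prompt traffic
    has_dpa: bool                     # Article 28 data processing agreement on file
    has_sccs: bool                    # Standard Contractual Clauses for Chapter V transfers
    dpia_status: str = "not_started"  # "not_started" | "in_progress" | "complete"

def transfer_gap(tool: AIToolRecord) -> bool:
    """Flag tools sending EU personal data outside the EU/EEA without safeguards."""
    return (tool.processes_personal_data
            and tool.vendor_region not in ("EU", "EEA")
            and not tool.has_sccs)
```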
Aona flags AI tools that require a DPIA based on the type and volume of personal data they process. Track DPIA completion status, link assessments to specific AI tools, and ensure no high-risk AI processing begins without a completed impact assessment.
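The triage and gating logic can be read as two simple rules, sketched below under simplified assumptions. This is neither legal advice nor Aona's actual rule set; real Article 35 analysis weighs many more factors.

```python
def dpia_required(processes_personal_data: bool, large_scale: bool,
                  involves_profiling: bool, novel_technology: bool) -> bool:
    """Rough Article 35 trigger: personal data processed at scale,
    with profiling, or via new technologies calls for a DPIA."""
    return processes_personal_data and (
        large_scale or involves_profiling or novel_technology)

def can_enable(needs_dpia: bool, dpia_status: str) -> bool:
    """Gate: a high-risk tool stays blocked until its DPIA is complete."""
    return not needs_dpia or dpia_status == "complete"
```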
Aona's real-time data loss prevention detects personal data in AI prompts before they are submitted — and blocks or redacts that data when it is destined for a tool without appropriate GDPR safeguards. Enforce data minimisation technically, not just through training.
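As a rough illustration of the block-versus-redact decision, consider the sketch below. The two regex patterns and the screen_prompt function are hypothetical stand-ins; production DLP relies on far broader detection than this.

```python
import re

# Illustrative patterns only; real DLP detects many more data categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(prompt: str, tool_has_safeguards: bool) -> tuple[str, str]:
    """Return (action, text): allow, redact, or block before submission."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    if not hits:
        return "allow", prompt
    if tool_has_safeguards:  # DPA and SCCs on file: redact, then allow
        for name in hits:
            prompt = PII_PATTERNS[name].sub(f"[{name.upper()} REDACTED]", prompt)
        return "redact", prompt
    return "block", ""       # no GDPR safeguards: stop the submission
```

Redaction keeps the employee's workflow intact where the tool has safeguards in place; a hard block is reserved for tools with none.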
Every AI interaction involving personal data is logged with full context. Generate audit reports that demonstrate GDPR compliance to supervisory authorities, track data processing activities for your Article 30 records, and support breach investigations with complete AI interaction history.
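A single audit record might carry context like the sketch below. The field names are assumptions for illustration; note that it stores a hash of the prompt rather than the personal data itself, so the audit trail does not become another copy of the data.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(user: str, tool: str, action: str,
                prompt: str, categories: list[str]) -> str:
    """One illustrative audit record with enough context for Article 30
    records of processing and for breach investigations."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,                # "allow" | "redact" | "block"
        "data_categories": categories,   # e.g. ["email"]
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    })

# Example: log a redacted submission
print(audit_event("j.doe", "ChatGPT", "redact",
                  "Summarise the ticket from anna@example.com", ["email"]))
```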
Discover Shadow AI, prevent personal data exposure, and demonstrate GDPR compliance for AI — all from one platform.