Every prompt your employees type into ChatGPT, Gemini, Claude, Mistral or DeepSeek leaves your network. Watch, in real time, which countries it ends up in, and under whose laws.
Each entry links to the original reporting. Use these sources in your next CISO briefing.
Prohibited-AI practices (social scoring, workplace emotion recognition, untargeted facial-image scraping) are live. Penalties run up to 7% of global turnover — the highest ceiling of any digital regulation on the books.
Source: European Commission

The Information Commissioner's Office publishes enforcement guidance clarifying that inputting personal data into third-party AI tools without a lawful basis is a UK GDPR violation — putting every unmanaged ChatGPT prompt in scope.
Source: ICO

Transparency, copyright, and systemic-risk obligations for general-purpose AI models (ChatGPT, Claude, Gemini, Llama) become binding for providers serving the EU.
Source: European Commission

Within days of the Wiz disclosure, national regulators and US state governors block DeepSeek on government devices over PRC Data Security Law exposure. The Italian Garante orders a full processing halt.
Source: Al Jazeera

Italy's data-protection authority blocks DeepSeek from processing Italian users' data — the first regulator to act on the emerging evidence, citing GDPR transparency and lawful-basis failures.
Source: Garante (IT DPA)

Wiz researchers find a publicly accessible DeepSeek ClickHouse instance leaking chat history, API keys, and operational metadata — discovered within minutes of probing.
Source: Wiz Research

The world's first horizontal AI law goes live. Article 5 prohibitions took effect Feb 2025, GPAI rules Aug 2025, full enforcement Feb 2026. Up to 7% of global turnover at stake.
Source: European Commission

A British Columbia tribunal rules Air Canada must honour a refund its customer-support chatbot invented — the first major precedent on enterprise liability for AI-generated output.
Source: BBC

Three internal incidents in three weeks: confidential semiconductor code and meeting notes pasted into ChatGPT. Samsung issues a company-wide ban — still the canonical case study cited in every CISO briefing.
Source: The Verge

Network blocks push staff to personal devices, mobile, and the next 5,000 AI tools you've never heard of. Governance fixes what bans can't.
Once a prompt or file leaves your network, it lands in a data centre governed by foreign law. Here are the destinations Aona sees most often.
A note on the locations. Countries and jurisdictions reflect each provider's public infrastructure footprint (e.g. OpenAI is hosted on Microsoft Azure US regions; DeepSeek on PRC-based infrastructure; YandexGPT on Russian infrastructure). City-level markers are representative default regions pulled from each provider's own documentation — Azure/AWS/Google Cloud region listings, OpenAI's trust portal, Microsoft's EU Data Boundary docs, Anthropic's trust centre. Actual per-request routing varies by load, tenant region, and Enterprise residency selection — switch the Subscription Tier above the globe to see how Enterprise tiers reroute Western providers into EU regions.
Aona inspects egress in real time and maps every AI tool your workforce touches, sanctioned or shadow.
Inline guardrails redact PII, secrets, and confidential files before they ever leave the device.
Employees get a contextual nudge, not a help desk ticket, the second they try to paste regulated data.
Aona's Gen AI Risk Discovery shows you every tool, every prompt and every export. No agents on user devices required to start.
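The inline-redaction idea above can be sketched as a pattern-matching pass over the outgoing prompt before it leaves the device. Here is a minimal illustration in Python; the patterns, labels, and placeholder format are hypothetical examples for this sketch, not Aona's actual detection logic, which covers far more data types.

```python
import re

# Illustrative detectors only: a real guardrail uses much broader
# and more precise detection than these three example patterns.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected PII/secrets with typed placeholders before egress."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@corp.com, key AKIA1234567890ABCDEF"))
# Contact [EMAIL REDACTED], key [AWS_KEY REDACTED]
```

The point of redacting inline, rather than blocking, is that the prompt still reaches the AI tool in usable form: the employee keeps working while the regulated data never leaves the device.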