The definitive numbered list of shadow AI statistics for 2026. Every stat is cited from a primary source — Salesforce, Cisco, IBM, Gartner, Forrester, OWASP, and more. Use these figures in your security briefings, board reports, and AI governance programmes.
What is Shadow AI?
Shadow AI refers to AI tools and applications that employees use without IT knowledge, approval, or oversight. It is the AI-era evolution of shadow IT — and in 2026, it affects the vast majority of enterprises worldwide, creating significant data, security, and compliance risk.
55% of employees use unapproved AI tools — more than half of all enterprise AI usage happens entirely outside sanctioned channels, invisible to IT and security teams.
of employees use AI tools not officially sanctioned by their IT or security team, according to Microsoft WorkLab research — making shadow AI a near-universal enterprise challenge.
78% of employees who use AI at work brought their own tools — not ones provided or approved by their employer. The supply of consumer AI tools has outpaced enterprise procurement.
52% of employees say they would not tell their manager they used AI to complete a work task — making self-reported AI policies functionally unenforceable.
of senior executives have personally used an unsanctioned AI tool for a work task in the past 90 days — shadow AI is a leadership problem, not just a frontline one.
3x growth in the number of AI tools used without IT approval since 2022. Shadow AI is accelerating faster than enterprise governance, creating a widening visibility gap.
158+ shadow AI tools are in active use at the average enterprise — completely invisible to IT. This is more than double the figure from 2023, reflecting rapid AI tool proliferation.
48% of employees have entered non-public company information into AI tools — including internal strategy, customer data, financial projections, and proprietary product details.
46% of employees have pasted confidential customer data into a public AI chatbot. This data is often retained by AI providers and can be used for model training.
38% of shadow AI data inputs contain sensitive business information — including personally identifiable information (PII), financial data, or intellectual property.
of enterprises have experienced a measurable IP leakage event linked to employees entering proprietary information into external AI tools — source code, product roadmaps, M&A plans.
of organisations report that employees using shadow AI have inadvertently created data sovereignty violations by routing sensitive data through offshore AI servers.
$6.5 million average cost of a data breach involving AI tools in 2025–2026 — up 22% from traditional breach costs, driven by delayed detection and poor AI incident containment.
~$670,000 average annual loss per enterprise from ungoverned AI use — including compliance gaps, regulatory fines, incident response, and productivity waste from uncoordinated tooling.
of CISOs say their organisation has experienced at least one security incident linked to an unsanctioned AI tool in the past 12 months.
more likely for shadow AI incidents to go undetected compared to traditional shadow IT incidents, due to lack of AI-specific monitoring tooling in most security stacks.
of organisations cannot confirm whether AI contributed to a security breach — most lack the monitoring capabilities to detect AI-related incidents at all.
ChatGPT credentials found exposed on the dark web via infostealer malware — many linked to corporate accounts containing sensitive business context and confidential conversation history.
65% of professionals feel unprepared for the pace of AI change at work — a skills and governance gap with direct security and compliance implications for their organisations.
of organisations have no formal AI usage policy, leaving employees to make their own decisions about which AI tools to adopt and what data to share.
23% of organisations have a formal AI governance framework in place. The vast majority are operating without documented AI policies, controls, or oversight processes.
of compliance teams say they lack the visibility tools to monitor AI usage across their organisation — creating a systemic blind spot that regulators are beginning to target.
of IT leaders cite shadow AI as a top security concern for 2026 — ahead of ransomware and cloud misconfiguration in many recent surveys.
7% of global annual revenue — the maximum fine under the EU AI Act for use of prohibited AI systems. The Act is in full enforcement from August 2026, covering both EU and non-EU organisations.
61% of organisations subject to the EU AI Act have not yet completed an AI inventory or risk classification — leaving them exposed to enforcement action as deadlines pass.
organisations globally are affected by the EU AI Act — including non-EU companies whose AI deployments affect EU residents. Shadow AI tools used by employees may trigger deployer obligations.
of data protection officers report receiving regulatory enquiries related to employee AI tool use in the past 12 months — shadow AI is now on regulators' radar.
The data is unambiguous: shadow AI is not a niche IT problem — it is a systemic enterprise risk. More than half of all employees are using AI tools outside approved channels (Salesforce, 2024), and nearly half have already shared sensitive company data with third-party AI providers they have no data processing agreements with (Cisco, 2024).
The financial consequences are becoming measurable. IBM's 2025 Cost of a Data Breach Report found AI-related breaches now cost organisations over $6.5 million on average — a 22% premium over traditional breach costs. This premium reflects the delayed detection, limited forensic capability, and poor containment that characterise AI-related incidents in organisations without dedicated AI governance tooling.
The regulatory window is closing. The EU AI Act is now in full enforcement as of August 2026, and 61% of in-scope organisations have not yet completed an AI inventory (KPMG, 2025). In Australia, the Privacy Act amendments introduce automated decision-making transparency requirements from December 2026. Organisations relying on policy documents alone — without technical enforcement — face real exposure.
The governance gap is stark: only 23% of organisations have a formal AI governance framework (Deloitte, 2025), while 65% of professionals feel unprepared for the pace of AI change (WEF, 2025). The 77-point gap between AI adoption and AI governance readiness is the defining enterprise risk of 2026.
Statistics on this page are sourced from publicly available research, analyst reports, vendor studies, and regulatory publications from 2024–2026. Sources include Salesforce, Cisco, IBM, Gartner, Forrester, Microsoft, Deloitte, Ponemon Institute, ISACA, KPMG, Thomson Reuters, Group-IB, Cyberhaven, Nightfall AI, and others. Where multiple data points exist for a topic, the most recent or most widely cited figure is used. All figures relate to enterprise usage unless otherwise stated. Aona AI does not manufacture statistics — readers are encouraged to consult primary sources for full methodology.
Last updated: March 2026 — This page is updated quarterly to reflect the latest research. Next update: June 2026.
Multiple studies converge on 55–78% of employees using AI tools not sanctioned by their employer. Salesforce (2024) found 55% use unapproved tools, while Microsoft WorkLab (2025) found 78% brought their own AI to work. The gap is widening as consumer AI proliferates faster than enterprise procurement.
Cisco (2024) found 48% have entered non-public company information into AI systems. Cyberhaven (2024) found 46% have pasted confidential customer data into a public chatbot. Nightfall AI (2025) found 38% of shadow AI inputs contain PII, financial data, or IP.
IBM's 2025 Cost of a Data Breach Report found AI-related breaches cost an average of $6.5 million — 22% more than traditional breaches. Ponemon Institute found organisations lose ~$670,000 per year from ungoverned AI through compliance gaps, incident response, and productivity waste.
Gartner's 2025 survey estimates 158+ shadow AI tools per enterprise — more than double the 2023 figure. Forrester found a 3x growth in unapproved AI tool usage since 2022, with no sign of deceleration.
The EU AI Act (full enforcement from August 2026) carries fines up to 7% of global annual revenue. 61% of in-scope organisations have not completed an AI inventory (KPMG, 2025). In Australia, Privacy Act amendments on automated decision-making take effect December 2026.
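The 7% figure translates into concrete exposure. Under Article 99 of the EU AI Act, fines for prohibited AI practices are up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. A minimal sketch of that calculation, with an illustrative revenue figure:

```python
# Rough EU AI Act exposure estimate for prohibited-system violations.
# Article 99: up to EUR 35 million or 7% of worldwide annual turnover,
# whichever is higher. The revenue figure below is purely illustrative.

def max_eu_ai_act_fine(annual_revenue_eur: float) -> float:
    """Maximum fine for prohibited AI practices under the EU AI Act."""
    return max(35_000_000, 0.07 * annual_revenue_eur)

# A company with EUR 2 billion turnover: the 7% tier exceeds the fixed floor.
print(f"EUR {max_eu_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000

# A company with EUR 100 million turnover: the EUR 35M floor applies instead.
print(f"EUR {max_eu_ai_act_fine(100_000_000):,.0f}")  # EUR 35,000,000
```

The "whichever is higher" structure means smaller organisations are not insulated by low revenue: the fixed floor still applies.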
Dedicated platforms like Aona AI discover unsanctioned AI tools via network analysis, browser monitoring, and identity provider integration. Manual self-reporting is unreliable — 52% of employees would not disclose AI usage to their manager (Microsoft, 2025).
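In principle, the network-analysis approach amounts to cross-referencing egress logs against known AI-tool domains and flagging anything outside the sanctioned set. The domain list, sanctioned set, and log format below are illustrative assumptions for a minimal sketch, not Aona AI's actual detection logic:

```python
# Minimal sketch of shadow-AI discovery from egress (proxy/DNS) logs.
# Domain list, sanctioned set, and log format are illustrative assumptions;
# real platforms combine network, browser, and identity-provider signals.

AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity",
}

# Tools the organisation has formally sanctioned (hypothetical).
SANCTIONED = {"Claude"}

def find_shadow_ai(proxy_log_lines):
    """Return {tool: hit_count} for unsanctioned AI tools seen in the logs."""
    hits = {}
    for line in proxy_log_lines:
        for domain, tool in AI_TOOL_DOMAINS.items():
            if domain in line and tool not in SANCTIONED:
                hits[tool] = hits.get(tool, 0) + 1
    return hits

# Example: three log lines, two of which hit unsanctioned tools.
log = [
    "2026-03-01T09:12:03 user=alice GET https://chat.openai.com/backend-api/conversation",
    "2026-03-01T09:15:40 user=bob GET https://claude.ai/api/organizations",
    "2026-03-01T09:17:22 user=carol GET https://gemini.google.com/app",
]
print(find_shadow_ai(log))  # {'ChatGPT': 1, 'Gemini': 1}
```

Even this naive version surfaces usage that self-reporting misses; production tooling adds user attribution, data-volume estimates, and coverage of the long tail of AI domains.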
Aona AI discovers every unsanctioned AI tool your employees are using — providing real-time visibility, policy enforcement, and compliance reporting for the EU AI Act, AU Privacy Act, and more.