One company gets breached. Seven hundred others follow. That's not a hypothetical — it's what happened when attackers compromised Drift, Salesloft's AI chat platform, and used stolen OAuth and refresh tokens to quietly walk into Salesforce instances at more than 700 of its customers.
The "Great SaaS Breach" of 2025 became a masterclass in how AI tooling has created a new class of attack chain. And if your organisation is using any modern AI-integrated platform — which, statistically, it almost certainly is — this story is about you.
The Credential Problem Nobody Wants to Talk About
The "State of Secrets Sprawl 2026" report dropped this month with a number that should stop any security leader in their tracks: AI service secrets on public GitHub surged 81% year-over-year in 2025. Nearly 29 million secrets were exposed in total. LLM infrastructure secrets leaked five times faster than those from core model providers.
Here's what that actually means in plain English. When developers wire up AI tools — ChatGPT, Copilot, Claude, internal LLM APIs — they need API keys. Those keys end up in code. That code ends up in repositories. Repositories get pushed to GitHub. And GitHub is, as it turns out, comprehensively indexed and searchable by anyone with a browser.
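The leak path is mechanical enough that you can sketch the defence in a few lines. Below is a minimal pre-commit-style scan; the patterns are illustrative only (real scanners such as gitleaks or GitGuardian ship hundreds of tuned rules), and the sample key is made up.

```python
import re

# Illustrative patterns only -- real secret scanners use far larger rule sets.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_bearer": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, match) pairs for anything that looks like a secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# A fabricated key of the kind that routinely ends up in committed code:
sample = 'OPENAI_KEY = "sk-abc123def456ghi789jkl012"'
for rule, value in scan_text(sample):
    print(f"{rule}: {value[:12]}...")
```

Running a check like this in CI catches the commit before it reaches a public remote — which is the only point in the chain where the leak is still cheap to fix.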
This isn't carelessness. It's a structural problem. Development teams are integrating AI tools at a pace that governance has no hope of matching. The average enterprise is now running dozens of AI integrations across product, sales, marketing, and ops — each one generating credentials, OAuth tokens, service account keys, and webhook secrets that live somewhere in the codebase.
And unlike traditional software secrets, AI service credentials often come with something extra dangerous: broad scope. An API key for an LLM integration might have access to file uploads, email content, browsing history, and tool calls. It's not just a database password. It's a skeleton key to your team's entire AI-augmented workflow.
How the OAuth Chain Breaks Everything
The Salesloft/Drift incident illustrates a specific and underappreciated attack pattern. The attackers didn't need to break into 700 companies individually. They just needed to break into one — and let the OAuth trust chain do the rest.
Modern SaaS integrations are held together by tokens. When you connect your CRM to your email platform to your AI writing tool to your analytics stack, you're creating a web of delegated trust. Each connection mints a token that allows System A to act on behalf of System B. Refresh tokens, in particular, are long-lived and often indefinitely valid — they're designed to keep your tools connected without constant re-authentication.
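Why is a stolen refresh token so much worse than a stolen password? A toy model of the OAuth 2.0 refresh grant (RFC 6749 §6) makes it concrete — everything here is simplified and the class is hypothetical, but the core dynamic is real: whoever holds the refresh token can mint fresh, fully valid access tokens with no user interaction and no alarm.

```python
import secrets

class ToyAuthServer:
    """Minimal sketch of the OAuth 2.0 refresh grant (RFC 6749, section 6)."""

    def __init__(self):
        self.refresh_tokens = {}  # refresh_token -> granted scopes

    def authorize(self, scopes):
        # Issued once, when the integration is first connected.
        rt = secrets.token_urlsafe(32)
        self.refresh_tokens[rt] = scopes
        return rt

    def refresh(self, refresh_token):
        scopes = self.refresh_tokens.get(refresh_token)
        if scopes is None:
            raise PermissionError("invalid refresh token")
        # Access tokens are short-lived; the refresh token itself
        # never expires in this model -- as in many real deployments.
        return {"access_token": secrets.token_urlsafe(32),
                "expires_in": 3600,
                "scope": scopes}

server = ToyAuthServer()
stolen = server.authorize(["salesforce.read", "salesforce.export"])
# An attacker who exfiltrates `stolen` can keep minting valid tokens, days or months later:
print(server.refresh(stolen)["scope"])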
When an attacker steals those tokens, they don't just access one app. They impersonate the integration itself. From inside Drift, they could issue requests that looked like legitimate Drift-to-Salesforce API calls. The victims' security tooling had no reason to flag it — the token was valid, the source was trusted, the behaviour looked normal.
This is what shadow AI risk actually looks like in practice. It's not just an employee pasting customer data into ChatGPT. It's the invisible web of authorisations your AI-integrated SaaS stack has accumulated over time, sitting quietly, waiting to be exploited.
What Your Security Team Is Missing
Most enterprise security programs are built around a mental model that no longer maps to reality. The assumption: humans make authenticated decisions, and you audit those decisions. The reality: in a modern AI-integrated environment, a significant and growing fraction of your "decisions" are made by non-human identities — APIs, agents, automation workflows — that operate at speeds and scales that make traditional monitoring almost irrelevant.
Gartner estimates that by 2027, nearly 40% of enterprise workflows will be automated using AI agents. By the end of 2026, they project over 1,000 legal claims for harm caused by AI agents against enterprises that failed to implement sufficient guardrails and oversight.
The security gap isn't just about external attackers exploiting your AI integrations. It's about internal visibility. Most organisations cannot tell you:
- Which AI tools are actively connected to their production systems
- What data those tools have access to
- Which integrations are using credentials that were created by employees who have since left
- Whether any of those credentials have been rotated in the past six months
That last point deserves a moment. The average enterprise SaaS estate has hundreds of integrations. The credentials underpinning those integrations are typically managed by whoever set them up originally — which might be a developer who left the company, a manager who automated their own workflow, or a vendor who was given "temporary" access two years ago.
This is shadow AI's most boring and most dangerous face: not rogue chatbots, but orphaned authentication.
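Finding orphaned authentication is, at its core, a join between two lists most organisations already have: the integration inventory and the HR directory. The sketch below assumes both exist in queryable form — the field names and rows are hypothetical, and a real inventory would come from your secrets manager or SaaS management platform.

```python
from datetime import date

# Hypothetical inventory rows: (integration_name, owner_email, created_on)
integrations = [
    ("drift-salesforce-sync", "alice@example.com", date(2023, 3, 1)),
    ("llm-summariser", "bob@example.com", date(2024, 11, 5)),
]

# Current staff, e.g. pulled from the HR directory or identity provider.
current_staff = {"alice@example.com"}

# Any credential whose creator has left is a rotation/revocation candidate.
orphaned = [name for name, owner, _ in integrations
            if owner not in current_staff]
print(orphaned)
```

The interesting part isn't the code — it's that most organisations can't produce either input list on demand.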
The OWASP Signal
In February 2026, OWASP released its "Top 10 for Agentic Applications" — the first dedicated security framework for AI agent risks. The fact that OWASP needed to publish this at all tells you something. We're past the point where AI security can be handled by retrofitting existing controls.
The list includes prompt injection, excessive agency, and privilege compromise — but what ties them together is a common thread: these risks don't exist in isolation. They compound. An AI agent with excessive permissions that's vulnerable to prompt injection and connected to an OAuth chain is not one problem. It's three problems that multiply each other.
Singapore's IMDA published its own agentic AI governance framework in January. The EU AI Act's high-risk obligations become fully enforceable in August 2026. Security and compliance leaders are about to find themselves in a very uncomfortable position: regulators expecting governance they haven't implemented yet, and attack surfaces that grew while everyone was focused on the productivity gains.
Getting Ahead of the Chain Reaction
The Salesloft/Drift cascade breach wasn't inevitable. A handful of controls would have broken the chain at multiple points.
First: credential hygiene for AI integrations. Treat every API key and OAuth token connected to an AI-enabled service as a high-value credential. Rotate them. Scope them minimally. Audit them quarterly. If a credential was created by someone who's no longer in the organisation, it's a liability.
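The quarterly-audit part of that advice is straightforward to automate once rotation dates are recorded anywhere at all. A minimal staleness check, assuming a hypothetical mapping of credential names to last-rotation dates:

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)  # quarterly rotation policy

# Hypothetical records: credential name -> date it was last rotated.
creds = {
    "openai-prod-key": date(2025, 1, 10),
    "drift-oauth-app": date(2025, 11, 2),
}

def stale(creds: dict[str, date], today: date) -> list[str]:
    """Return credentials that have not been rotated within MAX_AGE."""
    return [name for name, rotated in creds.items()
            if today - rotated > MAX_AGE]

print(stale(creds, date(2025, 12, 1)))
```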
Second: visibility before governance. You can't govern what you can't see. Most security teams discover their AI tool inventory through reactive channels — a vendor notification, a breach report, an employee offboarding that surfaces surprising access. Getting ahead requires proactive discovery: an ongoing process of identifying what AI tools are running in your environment and what they're connected to.
Third: treat OAuth chains as attack surfaces. Map your integration dependencies. Know which of your critical systems are accessible via tokens held by third-party SaaS providers. When a provider has a security incident, your response time to revoke delegated access shouldn't be measured in days.
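Fast revocation is easier when the mechanics are scripted in advance. OAuth 2.0 token revocation is standardised in RFC 7009: a POST to the provider's revocation endpoint with the token and an optional type hint. The sketch below builds that request body; the endpoint URL is hypothetical, since each provider publishes its own, and the POST itself (with client credentials attached) is left out.

```python
from urllib.parse import urlencode

def revocation_request(token: str, hint: str = "refresh_token") -> tuple[str, bytes]:
    """Build an RFC 7009 token-revocation request.

    The endpoint here is a placeholder -- real providers publish theirs
    in their OAuth metadata. Returns (endpoint_url, form-encoded body).
    """
    endpoint = "https://idp.example.com/oauth2/revoke"
    body = urlencode({"token": token, "token_type_hint": hint}).encode()
    return endpoint, body

url, body = revocation_request("stolen-refresh-token")
print(body)  # POST this to the endpoint with the integration's client credentials
```

Having a script like this per provider, tested and ready, is the difference between revoking delegated access in minutes and spending the first day of an incident reading vendor documentation.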
Fourth: monitor non-human identity behaviour. AI agents and automated integrations generate logs. Those logs contain the signals of compromise — unusual API call patterns, off-hours access, requests that don't match normal integration behaviour. The question is whether anyone is watching.
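Even a crude baseline catches the obvious cases. The sketch below flags off-hours API calls from an integration identity; the log rows and business-hours window are hypothetical, and a production system would baseline each identity's own historical behaviour rather than use a fixed window.

```python
from datetime import datetime

# Hypothetical API audit-log rows: (identity, timestamp of call)
log = [
    ("drift-integration", datetime(2025, 8, 12, 14, 5)),
    ("drift-integration", datetime(2025, 8, 12, 14, 6)),
    ("drift-integration", datetime(2025, 8, 13, 3, 12)),  # 03:12 -- off-hours
]

# Fixed window for illustration; a real baseline is learned per identity.
BUSINESS_HOURS = range(7, 20)  # 07:00 - 19:59

def off_hours_calls(log):
    """Return log entries that fall outside the expected activity window."""
    return [(who, ts) for who, ts in log if ts.hour not in BUSINESS_HOURS]

for who, ts in off_hours_calls(log):
    print(f"review: {who} called the API at {ts:%H:%M}")
```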
The 81% surge in leaked AI secrets isn't a trend that's going to reverse itself. Every new AI integration is a potential exposure point. The organisations that navigate 2026 without becoming a node in someone else's cascade breach will be the ones that got serious about AI governance before the incident — not after.
That's the job now.