300,000 Stolen ChatGPT Credentials: Why Shadow AI Is Now a Hacker's Favourite Entry Point
Meta description: IBM's 2026 X-Force Threat Index revealed over 300,000 ChatGPT credentials stolen by infostealer malware. Here's what that means for enterprise AI security — and why shadow AI just got a lot more dangerous.
---
Yesterday, IBM released its 2026 X-Force Threat Intelligence Index. It's worth reading cover to cover, but one number jumped out immediately: over 300,000 ChatGPT credentials were exposed by infostealer malware in 2025 alone.
Let that land for a moment. Not a breach of OpenAI's infrastructure. Not some sophisticated zero-day. Just infostealer malware — the kind that silently harvests saved credentials from browsers and apps — running quietly on enterprise devices while employees go about their day.
And this isn't really a story about password hygiene. It's a story about shadow AI.
The Part Most Organisations Are Missing
When a company's Salesforce credentials get compromised, there's usually a clear paper trail. IT knows Salesforce is deployed. There's an SSO integration. There are access logs. The security team has a fighting chance of catching it.
ChatGPT is different. So is Perplexity, Claude, Gemini, Midjourney, and the dozens of other AI tools your employees almost certainly downloaded last Thursday afternoon without telling anyone. These tools live outside your identity stack. There's no SSO. No SIEM integration. No data loss prevention policy that was written with them in mind. They're running on personal accounts (or corporate email, which is somehow even worse) with zero visibility from the security team.
IBM's report makes it plain: AI platforms have now reached the same credential risk profile as core enterprise SaaS. The attackers have noticed. Your security architecture hasn't caught up yet.
What a Stolen ChatGPT Credential Actually Enables
This is where it gets specific — and where most commentary glosses over the detail that actually matters.
When an attacker gets your Outlook credentials, they read your emails. Bad enough. When they get your ChatGPT credentials, they inherit your conversation history. Every prompt you've sent. Every document you've pasted in. Every internal strategy you've discussed with the AI because it felt safer than sending a Slack message.
IBM's report flags three distinct attack paths from compromised AI credentials: output manipulation, sensitive data exfiltration, and prompt injection. The third one is particularly underappreciated. If an attacker can access a shared AI account — say, a team ChatGPT Plus subscription someone put on their corporate card — they can inject malicious instructions that persist in the conversation history and influence outputs for anyone who picks up the thread later.
In practice, this might look like subtly bad contract language appearing in AI-drafted documents. Or an AI assistant that starts steering employees toward particular vendors. It's the kind of attack that doesn't trigger any existing alert. It just slowly poisons decisions.
The 44% Problem
The credential finding isn't the only number worth noting from IBM's report. X-Force also recorded a 44% increase in attacks that began with exploitation of public-facing applications — largely because missing authentication controls made it trivially easy.
That phrase — "missing authentication controls" — is doing a lot of work. It's a polite way of saying that companies are deploying AI tools without first asking: who can access this, what can they do with it, and how would we know if something went wrong?
This is exactly the gap that the shadow AI problem creates. Employees adopt tools faster than governance keeps up. The tools accumulate access to sensitive data — customer information pasted into prompts, financial projections uploaded as context, HR details shared with an AI coach — and none of it is tracked, classified, or protected by the controls that cover everything else.
The attack surface expands invisibly.
"But We Have Policies"
Most organisations do. They have an AI use policy, probably written sometime in 2024, that says something like: "Employees should not share confidential information with external AI tools." Full stop.
The problem is that policies without visibility are just wishes. If you don't know which AI tools are running on your network, you can't enforce anything. And according to every piece of research published in the past twelve months, the gap between what employees are actually using and what IT knows about is substantial — typically 3-5x more AI tools in active use than appear on any approved list.
The IBM report found that vulnerability exploitation is now the leading cause of enterprise breaches, accounting for 40% of X-Force incidents. The most common entry points? Misconfigured access controls and poor credential hygiene. Both are direct consequences of deploying tools (AI or otherwise) without governance infrastructure to match.
What Good Actually Looks Like
The answer isn't banning AI tools. Teams that do that just go underground harder — using personal devices, personal accounts, and leaving no trail at all. You've solved nothing; you've just made the problem invisible.
What works is a layered approach:
Discovery first. You need to know what's actually running. Not what's approved — what's actually in use, across which teams, at what frequency, and with what data flows. This baseline is the foundation for everything else.
Guardrails, not blockades. Policy enforcement that trips up employees constantly breeds workarounds. The goal is to guide behaviour at the moment it happens — flagging when someone's about to paste something sensitive into an unsanctioned tool, offering an approved alternative, logging the incident. Fast and frictionless for safe usage; clear and informative for risky behaviour.
Credential governance for AI tools. This one's specific to the IBM finding. AI platforms should be treated like any other SaaS in your identity stack — managed accounts, MFA enforced, access reviewed, offboarded when employees leave. The ChatGPT subscription on the team Amex card needs to become a managed enterprise account with proper controls.
Continuous monitoring. The threat landscape isn't static. The 49% surge in active ransomware groups that IBM documented means the pool of actors looking to exploit these gaps is growing fast. What you got away with last year won't necessarily hold in 2026.
The Governance Window Is Narrowing
IBM's report landed yesterday. The EU AI Act's high-risk provisions come into full force in August 2026. NIST launched its AI Agent Standards Initiative last month. Regulators and threat actors are both intensifying focus on enterprise AI at the same time.
The organisations that will handle this well aren't the ones with the most restrictive policies. They're the ones who built visibility and governance infrastructure before the breach — not in the incident review meeting afterwards.
300,000 stolen credentials is a data point. Whether it becomes a cautionary tale about your organisation is still up to you.
