Something happened at a company in late December 2025 that didn't make headlines until last week. An employee got flooded with emails: hundreds of them, fast enough to be obviously abnormal. Then a message arrived through Microsoft Teams from what looked like the internal IT helpdesk: "We've seen the issue. Click this link to install a patch and stop the spam."
They clicked.
The link dropped a custom malware suite called Snow — a browser extension, a tunneler, and a backdoor that eventually let attackers dump Active Directory credentials and walk off with the company's entire domain. Google's Mandiant team tracked the campaign to a newly identified threat group, UNC6692, and published the full breakdown last week. The technical detail is impressive. What's more interesting for anyone running enterprise AI governance is what the attack didn't use: a zero-day, a leaked password, or a misconfigured cloud bucket. It used trust.
This Isn't Really a Microsoft Teams Story
The Register ran the headline as a Teams security story. It isn't, exactly. Teams was just the channel. The actual exploit was psychological: the victim believed they were talking to the IT helpdesk on a platform their company had approved, told them to use, and loaded with AI tools to make work easier.
That matters because most enterprise security and AI governance strategies are built around a different threat model. They're focused on what tools employees use: block the unsanctioned ChatGPT tabs, flag when someone uploads proprietary data to a consumer AI, detect the developer running an unapproved LLM on a work laptop. All legitimate concerns. But that entire threat model assumes the risk comes from employees making bad choices with external tools.
UNC6692 flipped that. The attacker didn't need to find an unsanctioned tool. They found a sanctioned one — a deeply trusted one — and exploited the trust employees place in it.
The AI Copilot Complication
Here's where it gets interesting for AI governance specifically.
Teams is no longer just a chat and video platform. For most Microsoft 365 customers, it's now the primary interface for Microsoft Copilot — the AI assistant that drafts messages, summarises meetings, answers questions, and recommends actions. Employees interact with Copilot through the same message threads they use to talk to colleagues and, apparently, the occasional threat actor posing as helpdesk.
When you embed an AI assistant into a trust context, you get a trust multiplier. If a legitimate Copilot message looks like "Here's a summary of your meeting and three action items," an attacker's message doesn't have to look very different to exploit that same cognitive frame. People have learned to act on AI-generated suggestions quickly, often without the same critical scrutiny they'd apply to a cold email.
We're not saying Copilot itself is compromised or insecure. The point is simpler: as AI tools become woven into the fabric of how employees work, the psychological infrastructure around those tools — the habit of acting on what appears in the interface — becomes an attack surface that most governance frameworks haven't caught up to.
What Governance Usually Misses
If you asked most enterprise CISOs whether they govern AI usage in Teams, the answer would probably be yes. They'd point to DLP policies, conditional access, information barriers, maybe a Copilot governance layer.
But governing AI in Teams usually means governing what employees ask Copilot to do — not governing what attackers do to employees through Teams. It's a subtle distinction with significant consequences.
The Snow campaign used a malicious Chrome extension (SnowBelt) that executed in a headless Edge instance, completely invisible to the user, while the actual backdoor ran silent commands over an encrypted WebSocket tunnel. Nothing about that looked like AI. Nothing would have tripped a standard Copilot governance rule. The attack happened one layer below where most governance tools are looking.
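Mandiant's report has the actual indicators; as a rough illustration of the kind of signal this technique leaves behind, here is a minimal sketch, assuming your endpoint tooling surfaces process command lines. The `--headless` and `--load-extension` switches are standard Chromium flags; the binary names, the path, and the function itself are illustrative, not indicators taken from the report.

```python
# Hypothetical detection sketch: flag a Chromium-based browser (Edge, Chrome)
# launched headless with a sideloaded extension. Indicator values are
# illustrative; the real IOCs are in Mandiant's published breakdown.
BROWSER_BINARIES = ("msedge.exe", "chrome.exe")

def is_suspicious_browser_launch(cmdline: str) -> bool:
    """Return True if a browser process looks headless with a sideloaded extension."""
    lowered = cmdline.lower()
    is_browser = any(binary in lowered for binary in BROWSER_BINARIES)
    headless = "--headless" in lowered            # standard Chromium switch
    sideload = "--load-extension=" in lowered     # loads an unpacked extension from disk
    return is_browser and headless and sideload

# Example command line an endpoint agent might surface (path is made up).
sample = r"C:\Program Files\Microsoft\Edge\msedge.exe --headless=new --load-extension=C:\Users\Public\ext"
print(is_suspicious_browser_launch(sample))  # True
```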
Genuinely effective AI governance has to do more than police what AI tools employees are using. It needs to account for:
Trusted channel exploitation — Attackers using approved platforms (Teams, Slack, email) to deliver malicious payloads that bypass traditional perimeter controls. If it shows up in an approved app, employees extend it a level of trust they wouldn't give an unsolicited email.
AI-assisted social engineering — Threat actors are increasingly using LLMs to craft more convincing helpdesk impersonation, phishing, and vishing at scale. The bar for a convincing fake has dropped dramatically. An attacker in 2026 doesn't need a skilled writer — they need a decent prompt.
Post-compromise AI data exposure — Once attackers are inside, AI tools used by employees become treasure maps. Meeting transcripts, document summaries, workflow data — all accessible through compromised sessions. When an attacker takes over a device running Copilot with a valid session, they're not just inside a computer. They're inside an AI-powered summary of everything that user has done, said, and written in recent months.
That third one doesn't get talked about enough. Copilot for Microsoft 365 has access to email, calendar, Teams messages, SharePoint documents, and meeting recordings. A compromised session isn't a breach of one account — it's a breach of the entire operational history of that account, pre-digested and searchable.
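One way to make that concrete is to scope what a single delegated Microsoft Graph token can actually read, the kind of review a security team might run before or after a Copilot rollout. The sketch below is a hypothetical audit script, not anything from the Mandiant report: the endpoints are standard Graph v1.0 paths, and token acquisition is deliberately left out.

```python
import requests

# Hedged sketch: report which data surfaces one delegated Graph token can read,
# i.e. what a hijacked session would expose. ACCESS_TOKEN is a placeholder.
GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<delegated-token>"

SURFACES = {
    "mail":     f"{GRAPH}/me/messages?$top=5",
    "calendar": f"{GRAPH}/me/events?$top=5",
    "teams":    f"{GRAPH}/me/chats?$top=5",
    "files":    f"{GRAPH}/me/drive/root/children?$top=5",
}

def probe_data_surface(token: str) -> dict:
    """Check each surface and note whether the token can read it."""
    headers = {"Authorization": f"Bearer {token}"}
    results = {}
    for name, url in SURFACES.items():
        resp = requests.get(url, headers=headers, timeout=10)
        results[name] = "readable" if resp.ok else f"blocked ({resp.status_code})"
    return results

if __name__ == "__main__":
    for surface, status in probe_data_surface(ACCESS_TOKEN).items():
        print(f"{surface:10s} {status}")
```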
What Good Looks Like
None of this means you should rip out Teams or stop deploying Copilot. That ship has sailed, and those tools genuinely improve productivity. What it means is that the governance layer needs to evolve past tool blocklists.
Visibility has to cover the full AI stack. Not just the tools employees installed themselves, but the AI embedded in every platform they use every day. Copilot in Teams, Gemini in Google Workspace, Claude via API in third-party apps. You can't govern what you can't see, and you definitely can't detect anomalies in AI tool usage if you don't have a baseline to compare against.
Behavioural patterns matter more than tool lists. The UNC6692 campaign started with an email flood — an anomalous behaviour that should have been detectable. Understanding normal AI usage patterns across teams and flagging deviations is how you catch things that look legitimate but aren't. The email bomb was a signal. So is an employee suddenly interacting with an unfamiliar external Teams account.
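To make "flagging deviations" concrete, here is a minimal sketch of per-user baseline-and-deviation logic, assuming you already collect event counts per interval (emails received, new external Teams contacts, AI-tool interactions). The statistics and thresholds are illustrative, not tuned values.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag the current interval if it sits far outside the user's own baseline."""
    if len(history) < 5:
        return False  # not enough history to form a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return current > baseline * 3  # flat baseline: fall back to a simple multiple
    return (current - baseline) / spread > z_threshold

# Example: a user who normally receives ~20 emails per hour suddenly receives 400.
emails_per_hour = [18, 22, 25, 19, 21, 17, 23]
print(is_anomalous(emails_per_hour, 400))  # True -- the email bomb stands out immediately
```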
Employee AI literacy has to include social engineering awareness. Not just prompt hygiene — the stuff that actually stops this kind of attack. Knowing that helpdesk will never ask you to install a patch via a Teams link. Knowing that urgency combined with an unusual request is a red flag regardless of the channel. This is harder to implement than a DLP policy, but it's the layer that matters most when the attacker is already inside your approved toolset.
The Snow campaign is a reminder that attackers don't wait for organisations to finish writing their AI policies or ratifying their governance frameworks. They're exploiting your employees' trust in AI-adjacent platforms right now, through the very platforms most governance tooling isn't watching.
The question isn't whether to govern AI. It's whether your governance is looking in the right direction.
---
Aona discovers every AI tool and AI agent running across your enterprise — including the ones embedded in platforms your employees already trust. [Book a demo](/book-demo) to see what's in your environment.
