GUIDE

Your AI Vendor Decides What You Can Do With Their Tools. The Pentagon Didn't Like That Answer.

Author: Maya Chen
Date: May 3, 2026

Key Takeaways

  • What's actually happening here
  • The shadow AI angle nobody's talking about
  • The governance gap vendors can't close
  • What building your own governance layer actually looks like
  • The uncomfortable conclusion

Last week, the U.S. Department of Defense announced it had inked fresh AI deals with Nvidia, Microsoft, AWS, and Reflection AI — deploying their models directly onto classified military networks. The announcement was framed as a win for American AI supremacy, and technically, it was.

But buried in the press release was a detail worth sitting with: the DOD is now working with four new vendors, in part, because its previous arrangement with Anthropic fell apart spectacularly. The Pentagon wanted unrestricted use of Anthropic's AI tools. Anthropic said no — its models, its guardrails, its rules. They're still in court over it.

Around the same time, OpenAI announced it would be rolling out GPT-5.5 Cyber — its most capable security testing model — only to "critical cyber defenders." Not to paying enterprise customers. Not to security teams at Fortune 500 companies. To a vetted list of users that OpenAI itself approves, through a tiered credentialing program it calls Trusted Access for Cyber.

Worth noting: Sam Altman had, just days earlier, publicly mocked Anthropic for doing the exact same thing with its Mythos model. He called it "fear-based marketing." Then he did it anyway.

What's actually happening here

On the surface, these are two separate stories. The Pentagon is a special case. OpenAI gatekeeping a cybersecurity model makes sense — you don't want pentesting tools in the wrong hands.

But zoom out and both stories are pointing at the same structural problem: enterprises don't actually control how they govern AI. Their vendors do.

When you buy access to ChatGPT Enterprise or Microsoft Copilot or Claude, you're not buying a tool you own. You're licensing access to a capability that the vendor can reshape, restrict, or revoke based on their own judgment. They decide what's acceptable use. They decide which features are too dangerous for your team. They decide what guardrails get applied to your workflows.

The Pentagon found this out in an uncomfortable way — they wanted to deploy Claude for operational military use without usage restrictions, and Anthropic said that's not how this works. Most enterprises won't face that exact confrontation. But the underlying dynamic is the same.

The shadow AI angle nobody's talking about

Here's the thing that should keep enterprise security teams up at night: when vendors restrict access, employees don't stop wanting the capabilities.

An unauthorized group reportedly managed to gain access to Anthropic's restricted Mythos model anyway. We don't know exactly how, but the fact that it happened is instructive. The hunger for powerful AI tools is real. When official channels close, people find unofficial ones.

This is the shadow AI problem playing out in slow motion. Your employees already have AI tools installed that your IT team doesn't know about. When OpenAI restricts GPT-5.5 Cyber to verified defenders, the developer on your security team who wants those capabilities doesn't stop wanting them — they go looking for alternatives, workarounds, or jailbreaks.

You can't manage what you can't see. And right now, most enterprises are flying blind on both fronts: they don't know what AI tools their teams are actually using, and they've handed the governance decisions for the tools they do know about to the vendors who built them.

The governance gap vendors can't close

To be fair, the vendors are trying. OpenAI's Trusted Access for Cyber program is a genuine attempt to balance capability with safety. Anthropic's Mythos restrictions came from a real place — they didn't want their models used for autonomous weapons or domestic mass surveillance.

But here's what they can't do: govern your organisation's specific risk appetite, your compliance requirements, your data sensitivity, or your employee behaviour patterns. They're building one-size guardrails for millions of users. You need something tailored to your context.

The DOD can negotiate at the Pentagon level and still end up in a lawsuit. Your enterprise isn't going to get a special terms-of-service carveout with OpenAI. You get the same usage policies as everyone else, adjusted whenever the vendor decides to adjust them.

That's not governance. That's hope.

What building your own governance layer actually looks like

The enterprises handling this well share one thing: they've stopped treating AI governance as the vendor's problem.

Practically, that means:

Visibility before control. You can't govern what you can't see. The first step is understanding what AI tools are actually in use across your organisation — not just the approved ones, but the browser extensions, the personal ChatGPT accounts, the Cursor installs on developer laptops. Comprehensive discovery across your workforce is table stakes.
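As a rough illustration, reconciling whatever inventory you can already collect against a sanctioned register is enough to surface the gap. The discovery sources, tool names, and counts below are placeholders, not any particular product's mechanism:

```python
# Illustrative sketch: compare discovered AI tools against a sanctioned register.
# The discovery sources (endpoint agent, browser extension inventory, SaaS logins)
# are hypothetical stand-ins for whatever telemetry your organisation already has.
from dataclasses import dataclass

@dataclass(frozen=True)
class DiscoveredTool:
    name: str
    source: str        # e.g. "browser_extension", "installed_app", "saas_login"
    user_count: int

SANCTIONED = {"ChatGPT Enterprise", "Microsoft Copilot"}

def shadow_ai_report(discovered: list[DiscoveredTool]) -> list[DiscoveredTool]:
    """Return every discovered AI tool that is not on the sanctioned list."""
    return [t for t in discovered if t.name not in SANCTIONED]

inventory = [
    DiscoveredTool("ChatGPT Enterprise", "saas_login", 420),
    DiscoveredTool("Cursor", "installed_app", 37),
    DiscoveredTool("Claude (personal)", "browser_extension", 12),
]

for tool in shadow_ai_report(inventory):
    print(f"Unsanctioned: {tool.name} ({tool.source}, {tool.user_count} users)")
```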

Policy that lives at the data layer, not the app layer. Vendor guardrails operate at the model level. Your guardrails need to operate at the data level — what information is being shared with which tools, by which teams, under what conditions. That's a different control surface entirely.
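Here's a minimal sketch of what a data-layer policy can look like, assuming a simple classification scheme and tool tiers (both invented for illustration). The point is that the decision keys on what the data is, not which app is asking:

```python
# A minimal, illustrative data-layer policy: permissions are defined by the
# combination of data classification and tool tier. Labels are assumptions,
# not a prescribed schema. Anything not explicitly allowed is denied.
POLICY = {
    # (data_classification, tool_tier) -> allowed?
    ("public",       "unreviewed"): True,
    ("internal",     "unreviewed"): False,
    ("internal",     "approved"):   True,
    ("confidential", "approved"):   True,
    ("confidential", "unreviewed"): False,
    ("restricted",   "approved"):   False,  # restricted data goes to no external tool
}

def is_allowed(data_classification: str, tool_tier: str) -> bool:
    """Default-deny: any combination not explicitly permitted is blocked."""
    return POLICY.get((data_classification, tool_tier), False)

assert is_allowed("internal", "approved")
assert not is_allowed("restricted", "approved")
```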

Governance that scales without manual overhead. The AI tool catalogue is too big and growing too fast for manual management. The organisations getting this right are automating tiering, flagging, and policy enforcement so security teams can focus on exceptions rather than inventory.
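A sketch of what automated tiering might look like, with assumed criteria (training on customer data, SSO support, retention terms) standing in for whatever rubric your organisation actually uses. The goal is that only the "review" bucket reaches a human:

```python
# Illustrative auto-tiering rules: each newly discovered tool gets a risk tier
# so reviewers only handle exceptions. The criteria here are assumed examples,
# not an exhaustive rubric.
def assign_tier(tool: dict) -> str:
    if tool.get("trains_on_customer_data"):
        return "blocked"
    if not tool.get("supports_sso"):
        return "review"          # route to a human, never auto-approve
    if tool.get("data_retention_days", 0) <= 30:
        return "approved"
    return "review"

new_tools = [
    {"name": "Cursor", "supports_sso": True, "data_retention_days": 0,
     "trains_on_customer_data": False},
    {"name": "SummarizeBot", "supports_sso": False,
     "trains_on_customer_data": True},
]

for t in new_tools:
    print(t["name"], "->", assign_tier(t))
```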

Awareness at the moment of use. When an employee tries to paste a customer contract into an AI tool that isn't approved for that data sensitivity level, the right time to intervene is before they hit send — not six weeks later during a compliance review.
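A toy example of a moment-of-use check, assuming simple pattern matching as a stand-in for a real classifier or DLP engine. The markers and sensitivity labels are illustrative only:

```python
# Illustrative pre-send check: before content goes to an AI tool, scan it for
# markers of sensitive data and compare against the tool's approved sensitivity
# level. The regexes are toy examples, not production DLP patterns.
import re

SENSITIVE_MARKERS = {
    "contract_clause": re.compile(r"\b(indemnif|governing law|confidentiality)\w*", re.I),
    "credit_card":     re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_before_send(text: str, tool_max_sensitivity: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a piece of content headed to a given tool."""
    hits = [name for name, pattern in SENSITIVE_MARKERS.items() if pattern.search(text)]
    allowed = not hits or tool_max_sensitivity == "confidential"
    return allowed, hits

ok, findings = check_before_send(
    "Per the confidentiality and indemnification clauses below...", "internal"
)
if not ok:
    print("Blocked before send; findings:", findings)
```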

The uncomfortable conclusion

The Pentagon's dispute with Anthropic wasn't really about weapons policy. It was about who owns the governance function when you integrate AI at scale into sensitive workflows.

Anthropic's position was clear: they do. That's a reasonable position for an AI lab to take. It's also a completely untenable position for any enterprise that needs to actually run its operations.

The vendors aren't your governance team. The models aren't your policy. And waiting for OpenAI to decide what your security team is allowed to do isn't a strategy.

The organisations that figure this out will move faster and more safely. The ones that don't will keep finding out — sometimes via lawsuit, sometimes via breach — that outsourcing governance was never an option.

---

Aona helps enterprises discover, tier, and govern every AI tool their workforce uses — from ChatGPT to Cursor to the tools your security team doesn't know about yet. [Book a demo](/book-demo) to see how it works.



About the Author


Maya Chen

Growth & Marketing Lead

Maya leads growth and marketing at Aona AI, driving SEO strategy, content creation, and demand generation. With a sharp focus on AI governance topics, she helps enterprises understand the evolving landscape of Shadow AI, AI security, and responsible AI adoption.

