Your SBOM Is Missing Half the Picture. Welcome to the Age of AI-BOMs.
For the past decade, SBOMs - software bills of materials - have been the gold standard for tracking what's actually inside enterprise software. After a string of high-profile supply chain attacks (SolarWinds being the canonical example), maintaining a comprehensive inventory of every software component in your environment went from "good practice" to regulatory expectation in many sectors.
Now there's a new term security teams need to add to their vocabulary: AI-BOM. And it's not just a rebrand. It signals a fundamental shift in what enterprise security actually needs to track.
Shadow IT Grew Up
Cast your mind back to 2015. The shadow IT problem was employees using Dropbox instead of the corporate file share. IT would block it at the proxy. Employees would switch to Google Drive. Cat and mouse, mostly low-stakes.
Shadow AI operates at a different threat level entirely. When an employee uses an unsanctioned AI tool today, they're not just storing a file in the wrong place - they're potentially feeding customer data, internal source code, financial projections, or HR records to a third-party model they know nothing about. The data goes in. Whether it stays there, gets used for training, or ends up accessible to others is largely determined by terms of service most employees never read.
According to reporting this week from The Register, AI Bills of Materials are gaining serious traction among enterprise security teams - and for good reason. A traditional SBOM covers software packages and dependencies. But the AI stack in a typical enterprise looks nothing like a traditional software environment.
What Actually Needs to Go in an AI Inventory
Here's where the complexity sneaks up on you. When security teams think about AI risk, they tend to picture the obvious consumer-grade tools - ChatGPT, Copilot, Gemini. The ones that show up in DLP alerts. But the actual AI surface area in a mid-sized enterprise in 2026 looks more like this:
- Dozens of SaaS applications with embedded AI features - Notion AI, Salesforce Einstein, Zendesk's AI agents, HubSpot's content tools - many enabled by default without IT's involvement
- Developer tools like Cursor, GitHub Copilot, and increasingly autonomous coding agents that have read/write access to production codebases
- Internal tools built directly on model APIs - the quiet ones nobody announced to security because "it's just an internal script"
- MCP servers connecting AI agents to live production systems with real credentials
- Agentic workflows that chain multiple models together, passing context (and potentially sensitive data) between them
- Fine-tuned or locally deployed models that may have been trained on company data
An AI-BOM tries to capture all of this: the models, the datasets, the SDK libraries, the agentic skills, the MCP integrations, the prompts used at inference time, and - crucially - how all of these components interact with each other and connect to business workflows.
That last part is what most teams underestimate. It's not just what AI tools exist. It's how they're wired together. An AI coding agent with read access to your codebase and write access to your CI/CD pipeline carries a fundamentally different risk profile than ChatGPT being used to draft email responses.
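To make the wiring point concrete, here's a minimal sketch of what a single AI-BOM entry might record. The field names here are invented for this post, not a published schema (emerging formats like CycloneDX's ML-BOM profile define their own); the design choice that matters is recording the edges - what each component reads and writes - and not just the nodes:

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative component types - not a published AI-BOM standard.
class ComponentType(Enum):
    MODEL = "model"            # hosted or locally deployed model
    DATASET = "dataset"        # training or fine-tuning data
    SDK = "sdk"                # client library calling a model API
    MCP_SERVER = "mcp_server"  # tool/credential bridge for agents
    AGENT = "agent"            # autonomous or semi-autonomous workflow
    PROMPT = "prompt"          # system or inference-time prompt template

@dataclass
class AIComponent:
    name: str
    component_type: ComponentType
    provider: str                   # vendor name, or "internal"
    data_classification: list[str]  # what data it can see
    credentials: list[str]          # secrets or scopes it holds
    # The part most inventories miss: the edges, not just the nodes.
    reads_from: list[str] = field(default_factory=list)
    writes_to: list[str] = field(default_factory=list)

# The coding-agent example from the text, recorded as an entry.
coding_agent = AIComponent(
    name="cursor-agent",
    component_type=ComponentType.AGENT,
    provider="Cursor",
    data_classification=["source_code", "secrets_in_repo"],
    credentials=["github_token", "ci_deploy_key"],
    reads_from=["monorepo"],
    writes_to=["monorepo", "ci_pipeline"],
)
```

Recorded this way, the risk is legible at a glance: this agent holds CI credentials and can write to production paths, which no flat list of approved tools would surface.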
"If You Don't Have Visibility, You Can't Understand What to Protect"
That quote comes from researchers working on AI-BOM frameworks this week, and it's blunt but accurate. It's the same argument that drove SBOM adoption after Log4Shell in late 2021: organizations couldn't patch what they couldn't find. Teams spent weeks just trying to figure out which of their systems used the vulnerable library.
The parallel with AI is close enough to be uncomfortable. When a vulnerability in an AI model or agent framework is discovered, or when a data exposure incident happens, the organizations that will struggle most are the ones that have no clear picture of what's deployed.
And there are a lot of those organizations right now. ServiceNow this week announced an AI control tower that includes agent kill switches - essentially circuit breakers for when an AI workflow goes rogue. The Five Eyes intelligence agencies (US, UK, Canada, Australia, New Zealand) issued a joint advisory warning that rapid deployments of agentic AI are creating security risks organizations aren't prepared for.
These signals all point the same way: AI deployment is outpacing governance by a significant margin.
Why Building This Inventory Is Harder Than It Sounds
The challenge with AI-BOMs isn't the concept - it's the execution. AI tools enter the enterprise through multiple vectors simultaneously, and each vector requires a different discovery approach.
Officially sanctioned tools are the easy category. IT approved them, legal reviewed the data processing agreement, security signed off. These should be in your inventory already. They often aren't documented particularly well, but at least someone knows they exist.
Shadow AI tools are the harder problem. These arrive via browser extensions installed on personal accounts, AI features buried inside SaaS tools (often enabled by default with a product update), and direct API access by developers who bypass procurement entirely because spinning up an API key takes five minutes.
Agent-built pipelines are the hardest category of all. When a developer uses an AI coding agent to build an internal automation that connects to your CRM or data warehouse, who reviews that? The developer almost certainly didn't think of it as "deploying AI infrastructure." They were finishing a ticket faster. But that agent-built integration may now be running in production with credentials, data access, and external model calls that nobody mapped, documented, or assessed.
This is the core of the AI-BOM problem: the inventory doesn't just need to include tools your company bought. It needs to include AI-assisted outputs that are now running in your environment.
What Mature AI Governance Actually Looks Like
The good news is that practical frameworks are emerging quickly. The approach borrows heavily from supply chain security thinking: continuous discovery rather than point-in-time audits, risk classification based on data access and blast radius, and policy enforcement at the integration layer.
For most enterprises, a workable starting point covers three layers:
Discovery - Know what's running. This means monitoring browser extension behavior, API calls to known AI providers, SaaS OAuth grants, and network traffic to model endpoints. This layer catches the majority of shadow AI activity, including tools that didn't go through any procurement process.
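A sketch of what this layer can look like at its simplest - assuming you can export proxy or DNS logs as "timestamp user destination" lines, and using a deliberately tiny, illustrative domain list (real tooling maintains far larger ones):

```python
from collections import Counter

# Illustrative, incomplete list of model API endpoints.
AI_ENDPOINT_HINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
}

def find_ai_traffic(log_lines):
    """Count (user, ai_host) pairs seen in proxy log lines."""
    hits = Counter()
    for line in log_lines:
        try:
            _ts, user, host = line.split()
        except ValueError:
            continue  # skip malformed lines
        if any(host == h or host.endswith("." + h) for h in AI_ENDPOINT_HINTS):
            hits[(user, host)] += 1
    return hits

sample = [
    "2026-02-03T10:01:02 jdoe api.openai.com",
    "2026-02-03T10:01:05 jdoe intranet.example.com",
    "2026-02-03T10:02:11 asmith api.anthropic.com",
]
for (user, host), n in find_ai_traffic(sample).items():
    print(f"{user} -> {host} ({n} requests)")
```

Log matching alone won't catch everything - AI features inside an already-sanctioned SaaS tool never touch a new endpoint - which is why OAuth grants and extension inventories belong in this layer too.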
Classification - Not all AI tools carry equivalent risk. A grammar checker presents a very different threat profile than a coding agent with production database access. Risk tiering lets security teams focus governance effort where the actual exposure is, rather than treating every AI tool as equally dangerous (which leads to either paralysis or blanket blocking - neither of which works).
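The shape of the idea fits in a few lines. The weights and thresholds below are invented for illustration; the point is that the tier falls out of three things you've already inventoried - data access, credentials, and write paths:

```python
def risk_tier(data_access: set[str], credentials: set[str], writes_to: set[str]) -> str:
    """Toy heuristic: score a tool by what it sees, holds, and can change."""
    sensitive = {"source_code", "customer_pii", "financials", "secrets_in_repo"}
    score = 0
    if sensitive & data_access:
        score += 2  # what it can see
    if credentials:
        score += 1  # what it can authenticate as
    if writes_to:
        score += 2  # blast radius: it can change things
    if score >= 4:
        return "high"
    return "medium" if score >= 2 else "low"

# The two examples from the text land in different tiers:
print(risk_tier({"source_code"}, {"ci_deploy_key"}, {"ci_pipeline"}))  # high
print(risk_tier({"draft_emails"}, set(), set()))                       # low
```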
Continuous monitoring - AI tools change constantly. Models get updated, new integrations get added, permissions drift over time as employees add features. A static inventory you compile once and update annually is nearly useless. The goal is a living picture of what's in your environment, updated continuously.
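In code terms, continuous monitoring is a diff, not an audit. A minimal sketch, assuming each snapshot maps a tool name to its current permission set:

```python
def inventory_drift(previous: dict[str, set], current: dict[str, set]):
    """Compare two inventory snapshots and surface what changed."""
    added = current.keys() - previous.keys()
    removed = previous.keys() - current.keys()
    changed = {
        name: current[name] - previous[name]  # newly gained permissions
        for name in current.keys() & previous.keys()
        if current[name] != previous[name]
    }
    return added, removed, changed

yesterday = {"notion-ai": {"read:docs"}, "copilot": {"read:repo"}}
today = {"copilot": {"read:repo", "write:repo"}, "cursor-agent": {"read:repo"}}

added, removed, changed = inventory_drift(yesterday, today)
print("new tools:", added)           # {'cursor-agent'}
print("gone:", removed)              # {'notion-ai'}
print("permission drift:", changed)  # {'copilot': {'write:repo'}}
```

Run on a schedule, this turns "permissions drift over time" from an abstract worry into a daily alert.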
The Trajectory Is Clear
Shadow IT governance took roughly a decade to mature after the cloud era began. Organizations went from reflexive blocking to "understand, assess, then approve" - and that shift made them simultaneously more productive and more secure.
AI governance is on the same arc, compressed into a much shorter timeframe. The organizations that build the visibility layer now - before they need it - will be in a fundamentally better position than those scrambling to reconstruct their AI exposure after something goes wrong.
An AI-BOM isn't the finish line. It's the foundation. You can't govern what you can't see, and right now, most enterprises are blind to some meaningful portion of their AI stack.
That's the problem worth solving first.
---
Aona gives security teams continuous visibility across all AI tool usage - sanctioned and shadow - with automatic risk tiering and policy enforcement. [Book a demo](/book-demo) to see how it works in your environment.
