Yesterday, OpenAI quietly dropped something that should have enterprise security teams paying attention. Workspace Agents in ChatGPT — available now to Business, Enterprise, and Edu plan customers — aren't just another chatbot feature. They're autonomous AI workers that can access your company's files, Slack channels, email, and CRM. They run in the cloud. They keep working when your employees log off.
And anyone on your team can build one.
## What ChatGPT Workspace Agents Actually Do
Let's be clear about the scope here. These aren't the simple GPT plugins from a few years ago. OpenAI's own announcement describes agents that can "prepare reports, write code, respond to messages" — and crucially, "run in the cloud, so they can keep working even when you're not."
The examples OpenAI shared give you a sense of the access involved:
- A **Lead Outreach Agent** that pulls from call notes and account research, qualifies leads, and drafts follow-up emails directly in a rep's inbox
- A **Product Feedback Router** that monitors Slack, support channels, and public forums — continuously
- An **Accounting Agent** that handles month-end close, pulling data across financial systems, generating journal entries and balance sheet reconciliations
- A **Third-Party Risk Manager** that researches vendors and assesses sanctions exposure and reputational risk
These aren't hypothetical examples. OpenAI says these are agents their own internal teams have already built and deployed.
The part that should make any CISO sit up: "Agents do more than answer a prompt: they can write or run code, use connected apps, remember what they've learned, and continue work across multiple steps."
Memory. Code execution. Connected apps. Running in the background.
## The Governance Gap That Just Opened Up
Here's the challenge. When an employee uses ChatGPT to ask a question, that's a one-time interaction. IT can observe it, policy can govern it, and if something goes wrong, it's a discrete event. An agent is different. An agent is an ongoing relationship between OpenAI's infrastructure and your company's data.
Think about what the accounting agent scenario actually means. To prepare month-end close, that agent needs access to financial systems — real numbers, real accounts. OpenAI hosts it. It runs on a schedule. It generates documents that go back into your systems. Who approved those integrations? What data did it access last Friday at 2am? If a number is wrong in the monthly report, can you audit the agent's decisions?
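If you can't answer those questions today, the missing piece is usually structured audit events. As a minimal sketch (this is not an OpenAI feature; every field name below is a hypothetical illustration), this is the shape of record a governance team would want for each agent action:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAuditEvent:
    """One structured record per agent action. Illustrative only;
    these field names are assumptions, not an OpenAI schema."""
    agent_id: str   # which agent acted
    owner: str      # the employee who built and owns it
    trigger: str    # "schedule" | "mention" | "manual"
    connector: str  # e.g. "netsuite", "slack", "crm"
    action: str     # "read" | "write" | "send"
    resource: str   # the record, channel, or inbox touched
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# With records like this, "what did it access at 2am Friday?"
# becomes a log query instead of a guess:
event = AgentAuditEvent(
    agent_id="accounting-close-v3",
    owner="jsmith",
    trigger="schedule",
    connector="netsuite",
    action="write",
    resource="journal_entry/2025-10",
)
```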
Most enterprise security tools weren't built for this. They were built for humans using software, not AI systems operating as the software.
And the broader issue is that this is now mainstream. It's not a shadow tool — it's baked into your existing ChatGPT Enterprise subscription. The employees who've been waiting for permission to use more AI? They now have legitimate, IT-sanctioned access to build agents that touch your most sensitive workflows.
The "Build Once, Share Across the Org" Problem
One of the features OpenAI highlights is that workspace agents can be shared across an organisation. "Build once, use it together in ChatGPT or Slack, and improve it over time."
That sounds efficient. From a security perspective, it's a multiplier on whatever risk the agent carries.
A sales rep builds a lead outreach agent that has access to your CRM and sends emails on behalf of the team. It works well. The rep shares it with the whole sales org. Now you have 40 people using an AI agent with CRM write access and email-sending capability that was configured by someone who wasn't thinking about data residency, PII handling, or what happens if the agent misidentifies a contact.
This is the pattern that makes AI governance hard: the tools are genuinely useful, the adoption is organic and fast, and by the time security catches up, the agent has been quietly working for two months.
What "Runs in the Cloud" Actually Means for Data
Enterprise teams tend to focus on what employees are typing into AI tools. That's still important, but workspace agents introduce a different risk surface: the question is no longer just what your employees send to ChatGPT, but what ChatGPT's agents can pull from your systems.
When you connect a workspace agent to Salesforce, it doesn't just answer questions about your CRM — it reads records, potentially writes records, and those data flows run on OpenAI's infrastructure. For companies under APRA CPS 234, the EU AI Act, or Australia's Privacy Act, that's not a trivial consideration. Third-party AI systems that access personal or financial data trigger notification, audit, and data residency obligations.
This is also where the "runs even when you're not there" feature cuts both ways. It means more productivity for teams — and it means data flowing to an external AI provider at times when no human is present to verify the interaction was appropriate.
## What Security Teams Should Do Right Now
This isn't an argument against using workspace agents. They're genuinely powerful, and enterprises that deploy them thoughtfully will move faster than those that don't. But "thoughtfully" requires some baseline controls that most organisations don't yet have.
**Inventory what's already running.** Before rolling out formal governance, find out what agents your teams have already built. OpenAI shipped this as a research preview, and early adopters rarely wait for a policy before experimenting.
**Treat agents like vendors, not tools.** An agent with persistent access to your CRM or financial systems is functionally equivalent to a third-party SaaS integration. It should go through your vendor risk management process.
**Define data access tiers.** Not every workflow needs an agent with access to sensitive data. Work with business teams to define what types of information agents can access, and apply that consistently across your AI policy.
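As a concrete starting point, here is a minimal sketch of what a tiered policy can look like in code. The tier names, the connector classifications, and the `can_connect` rule are all assumptions to adapt to your own data map, not a standard:

```python
# Hypothetical data-access tiers for agent approval decisions.
TIERS = {
    "public": 0,        # marketing pages, published docs
    "internal": 1,      # wikis, general Slack channels
    "confidential": 2,  # CRM records, support tickets (PII)
    "regulated": 3,     # financial systems, HR data
}

# Illustrative classification of connectors by the data they expose.
CONNECTOR_TIER = {
    "public_docs": "public",
    "slack": "internal",
    "salesforce": "confidential",
    "netsuite": "regulated",
}

def can_connect(agent_clearance: str, connector: str) -> bool:
    """An agent may only use connectors at or below its approved tier."""
    return TIERS[CONNECTOR_TIER[connector]] <= TIERS[agent_clearance]

# A lead-outreach agent approved for "confidential" data:
assert can_connect("confidential", "salesforce")
assert not can_connect("confidential", "netsuite")  # escalate for review
```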
**Get visibility into what's connected.** The agents that pose the most risk are the ones with broad integration access. Knowing which agents are connected to which systems, and what they've been doing, is the foundation of any governance approach.
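In practice that starts with whatever export your admin console or governance tooling gives you. Assuming you can get agent configurations out as JSON (the file name, field names, and connector labels below are hypothetical, not an OpenAI export schema), even a small script can surface the risky tail:

```python
import json

# Hypothetical export of workspace agents; the structure is an assumption:
# [{"name": ..., "owner": ..., "connectors": [...], "shared_with": N}, ...]
with open("agent_export.json") as f:
    agents = json.load(f)

# Connectors your team considers high-risk; adjust to your environment.
HIGH_RISK = {"salesforce", "netsuite", "gmail", "workday"}

for agent in agents:
    risky = HIGH_RISK & set(agent["connectors"])
    if risky:
        print(f"{agent['name']} (owner: {agent['owner']}) "
              f"touches {sorted(risky)}; shared with {agent['shared_with']} users")
```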
**Brief your compliance team today.** If you're subject to APRA, the EU AI Act, or Australian Privacy Act obligations, your legal and compliance teams need to know that employees may already be running agents with access to regulated data.
The tools to manage this — visibility into AI usage, policy enforcement, data controls — exist. But they need to be in place before the agent sprawl starts, not after an incident prompts an urgent audit.
The AI agent era isn't coming. For enterprise ChatGPT customers, it arrived yesterday.
---
Aona helps enterprise security teams discover, govern, and control AI tool usage across their organisations — including autonomous agents running in the background. [Start a free 90-day AI risk discovery trial.](/book-demo)
