Compliance frameworks were not written with agentic AI in mind. GDPR was drafted before autonomous AI systems were a mainstream enterprise concern. SOC 2's trust service criteria were designed to assess static infrastructure, not self-directing software agents. ISO 27001 controls assume human-initiated data processing, not AI systems that operate asynchronously under human-provided goals.
These frameworks still apply to agentic AI—but their application to agents creates specific compliance gaps that organizations need to address proactively. This post identifies the highest-priority gaps under each framework and provides concrete remediation paths.
GDPR and Agentic AI
Gap 1: Lawful Basis for Agent-Initiated Processing
GDPR requires a lawful basis for every processing activity involving personal data. Human-initiated processing is typically covered by a combination of legitimate interests, contract performance, and consent. But when an AI agent—operating on a broad goal like 'analyze customer behavior patterns'—decides to access personal data as part of its autonomous reasoning, who authorized that specific processing activity?
The answer is often unclear. The user who deployed the agent authorized the agent to achieve a goal, not to perform each specific data processing step. If the agent's approach to achieving the goal involves processing personal data in ways the user did not explicitly anticipate, there is a colorable argument that the lawful basis does not clearly apply to that specific processing.
Remediation: Define the scope of personal data access for each agent deployment at the approval stage. The lawful basis documentation should specify not just that the agent may process data, but what categories of data, for what purpose, under what legal basis. This constrains the agent's data access to what is covered by the documented lawful basis.
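As a sketch of what defining scope at the approval stage can look like in practice, the record below captures data categories, purpose, and legal basis per agent deployment, and gates data access against the documented scope. All names and fields here are hypothetical, not taken from any specific tool or framework:

```python
from dataclasses import dataclass

# Hypothetical scope record captured at the agent approval stage.
@dataclass(frozen=True)
class LawfulBasisScope:
    agent_id: str
    data_categories: frozenset  # categories the documented basis covers
    purpose: str
    legal_basis: str            # e.g. "legitimate_interests", "contract"

    def permits(self, category: str) -> bool:
        """True only if the requested category falls inside the documented scope."""
        return category in self.data_categories

scope = LawfulBasisScope(
    agent_id="behavior-analysis-agent",
    data_categories=frozenset({"purchase_history", "session_events"}),
    purpose="analyze customer behavior patterns",
    legal_basis="legitimate_interests",
)

# The agent's data-access layer consults the scope before each retrieval,
# so processing stays within what the lawful basis documentation covers.
assert scope.permits("purchase_history")
assert not scope.permits("health_data")  # outside the documented lawful basis
```

The key design choice is that the scope is immutable and authored at approval time, so the agent cannot expand its own data access mid-task.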
Gap 2: Data Residency and Cross-Border Transfer
GDPR restricts transfers of personal data to countries outside the EEA unless adequate protection exists. AI agents create new cross-border transfer vectors that are easy to miss: LLM API calls that include personal data in prompts, MCP servers hosted outside the EEA that process personal data, agent orchestration infrastructure running in US-based cloud regions.
When personal data is included in a prompt sent to an LLM API, that data is transferred to wherever the API endpoint is hosted. If the endpoint is in the US and no transfer mechanism—Standard Contractual Clauses or an EU–US Data Privacy Framework certification—is in place, this is a GDPR cross-border transfer violation. Most enterprise AI deployments have not fully mapped the data flows involved in agent operation to their cross-border transfer obligations.
Remediation: Map every data flow in agent operations to its physical transfer path. Include LLM API endpoints in your data transfer mapping. Ensure SCCs or equivalent mechanisms are in place for any transfer of EU personal data to third-country endpoints. Consider data residency controls at the agent gateway layer that filter personal data from prompts destined for non-compliant endpoints.
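One way to sketch the gateway-layer control, assuming a hypothetical endpoint registry and a deliberately naive email-only check—a real deployment would use a proper PII detection service and a maintained register of transfer mechanisms:

```python
import re

# Hypothetical registry: endpoints covered by an adequate transfer
# mechanism (EEA-hosted, SCCs in place, or adequacy-covered).
COMPLIANT_ENDPOINTS = {"https://eu.api.example.com/v1"}

# Illustrative detector only: matches email addresses, nothing else.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def gateway_check(endpoint: str, prompt: str) -> str:
    """Forward prompts to covered endpoints; for uncovered endpoints,
    block any prompt that appears to contain personal data."""
    if endpoint in COMPLIANT_ENDPOINTS:
        return prompt  # transfer mechanism on file; forward as-is
    if EMAIL_RE.search(prompt):
        raise PermissionError(
            f"Personal data blocked: no transfer mechanism on file for {endpoint}"
        )
    return prompt
```

Blocking (rather than silently redacting) makes the control visible to the agent's operator, which is usually preferable for auditability.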
Gap 3: Automated Decision-Making Under Article 22
GDPR Article 22 restricts fully automated decision-making that produces legal or similarly significant effects. As agents are deployed for use cases like loan triage, hiring screening, credit assessment, and content moderation, Article 22 applicability becomes a live question. Organizations need to determine whether agent decisions in these contexts constitute 'fully automated' processing, document the human oversight involved, and provide the required information to data subjects.
SOC 2 and Agentic AI
Gap 1: Logical Access Controls (CC6.1)
SOC 2 CC6.1 requires logical access to be restricted to authorized individuals. Agents present a challenge here because they often operate under user credentials, blurring the boundary between 'user access' and 'agent access.' When an agent performs actions using a user's OAuth tokens, the access is attributed to the user in access logs—but the decisions were made by the agent.
Auditors reviewing SOC 2 controls are increasingly asking: how do you ensure that AI agents are not being used to circumvent intended access restrictions? If a user cannot access a database directly, but can deploy an agent that can—because the agent uses a service account with broader permissions—the logical access control has a gap.
Remediation: Ensure agent identities are distinct from user identities in your access control architecture. Agent service accounts should have scoped permissions that are no broader than necessary for the agent's approved use case. Include agent identities in user access reviews.
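A minimal sketch of agent-distinct authorization, with hypothetical agent principals and permission strings—the point is that agents authenticate as their own identities, carrying scopes no broader than the approved use case, rather than inheriting the invoking user's access:

```python
# Hypothetical per-agent permission grants, reviewable like any
# service-account entry in a user access review.
AGENT_SCOPES = {
    "agent:support-triage": {"tickets:read", "tickets:comment"},
    "agent:report-builder": {"analytics:read"},
}

def authorize(principal: str, permission: str) -> bool:
    """Agents present their own principal ('agent:...'), never the invoking
    user's, so every grant is attributable and reviewable per agent."""
    return permission in AGENT_SCOPES.get(principal, set())
```

With this split, a user who lacks database access cannot gain it by deploying an agent: the agent's own grant list is what the access review scrutinizes.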
Gap 2: Audit Trail Integrity (CC7.2)
SOC 2 CC7.2 requires the organization to detect and address security events using audit trail analysis. Agents operating under user credentials generate audit logs attributed to the user, not the agent—making it difficult to reconstruct what actions an agent took versus what the human user took.
This attribution problem is not just a security issue; it is a SOC 2 audit issue. If an auditor asks 'who accessed this record on March 15?', the answer 'the user's AI agent, not the user directly' is a significant finding if the audit logs cannot distinguish between the two.
Remediation: Implement agent-specific logging that captures agent identity, the user context under which the agent operated, and the specific tools and data accessed. This supplementary log should correlate with standard audit logs to provide the complete picture required for CC7.2 compliance.
Gap 3: Change Management (CC8.1)
SOC 2 CC8.1 requires changes to infrastructure and applications to go through an authorized change management process. AI agents that can modify code, configuration, or infrastructure—common in developer-facing agent deployments—represent a change management bypass risk. An agent executing a code change on behalf of a user is making a change to the information system. Is that change going through your change management process?
ISO 27001 and Agentic AI
Gap 1: Asset Management (A.8)
ISO 27001 Annex A.8 requires identification and management of information assets. AI agents are information assets—they process data, take actions on organizational systems, and represent a risk surface. Most organizations' asset inventories do not include deployed AI agents. This is a direct ISO 27001 compliance gap.
Remediation: Add AI agent deployments to your information asset register. For each agent, document: classification, owner, data it processes, systems it can affect, and applicable risk treatment. Include agent assets in your ISMS scope.
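An illustrative register entry for one agent deployment—the field names are hypothetical and should follow whatever schema your ISMS asset register already uses:

```python
# Hypothetical asset-register entry for a deployed agent.
agent_asset = {
    "asset_id": "AI-AGENT-0042",
    "name": "support-triage-agent",
    "classification": "internal",
    "owner": "head-of-support",
    "data_processed": ["ticket contents", "customer contact details"],
    "systems_affected": ["ticketing", "crm"],
    "risk_treatment": "scoped service account; human approval for destructive actions",
}

# A register-completeness check: every agent entry must document the
# fields the remediation above calls for.
REQUIRED_FIELDS = {"classification", "owner", "data_processed",
                   "systems_affected", "risk_treatment"}
assert REQUIRED_FIELDS <= agent_asset.keys()
```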
Gap 2: Supplier Relationships (A.15)
ISO 27001 A.15 requires security requirements to be established with suppliers and service providers. AI model providers (OpenAI, Anthropic, Google) and MCP server vendors are suppliers for the purposes of A.15. Organizations deploying agentic AI should have supplier agreements in place that address: data processing terms, subprocessor restrictions, security incident notification, and audit rights.
Many organizations have basic DPAs with AI vendors but have not addressed the agentic use case specifically—which changes the data processing relationship materially (the data is now being sent as part of autonomous agent operations, not just user-initiated queries).
Gap 3: Incident Management (A.16)
ISO 27001 A.16 requires a structured incident management capability. Agentic AI incidents—a prompt injection attack, an agent exfiltrating data through a tool call, an agent taking unauthorized destructive action—are a new incident category that most incident response playbooks do not address.
Remediation: Update your incident classification framework to include agentic AI-specific incident types. Define containment procedures (how do you stop a running agent mid-task?), investigation procedures (how do you reconstruct what an agent did?), and communication procedures (when is an agent incident a notifiable breach?).
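A sketch of what extending the incident taxonomy might look like, paired with a hypothetical triage rule—the actual notifiability determination depends on your regulator's criteria, not on a one-line check:

```python
from enum import Enum

# Illustrative agentic-AI incident types extending an existing
# incident classification framework.
class AgentIncidentType(Enum):
    PROMPT_INJECTION = "agent manipulated via untrusted input"
    DATA_EXFILTRATION = "agent moved data out through a tool call"
    UNAUTHORIZED_ACTION = "agent took a destructive or out-of-scope action"

def needs_breach_assessment(incident: AgentIncidentType,
                            personal_data_involved: bool) -> bool:
    """Hypothetical triage rule: exfiltration involving personal data is
    escalated as a candidate notifiable breach for legal review."""
    return (incident is AgentIncidentType.DATA_EXFILTRATION
            and personal_data_involved)
```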
A Practical Compliance Roadmap
The compliance gaps above can be addressed in parallel with the governance framework described in our policy framework post. The critical path items:
- GDPR: Map agent data flows to transfer paths, document lawful basis for each agent type, update Article 22 assessment process to include agent decisions.
- SOC 2: Implement agent-distinct identities for audit attribution, include agents in access reviews, extend change management to cover agent-initiated changes.
- ISO 27001: Add agents to asset register, update supplier agreements for agentic use cases, add AI agent incident types to playbooks.
Compliance frameworks will evolve to address agentic AI more explicitly over the next 12–24 months—expect updated regulatory guidance, revised audit criteria, and amended standards. Organizations that address the gaps now will be well-positioned when those updates arrive, rather than scrambling to retrofit controls into established workflows.