AI agents are multiplying across organisations—reading emails, making decisions, accessing data, and triggering workflows. Most organisations have no idea how many AI agents exist in their environment or what those agents are doing with their access. This is a significant identity security gap.
Every AI agent is an identity. And every identity needs governance. Without it, you're exposing your organisation to risks you can't see.
The AI agent explosion
Walk into any modern organisation and you'll find AI agents everywhere:
Customer service agents handle inquiries, access customer records, process refunds, and update accounts. They interact with your CRM, ticketing systems, and knowledge bases.
Email automation agents read inboxes, draft responses, categorise messages, and take actions based on content. They connect to your email system and integrated tools.
Data analysis agents query databases, generate reports, identify patterns, and create dashboards. They have direct access to your data warehouses.
Workflow automation agents trigger processes based on conditions, approve or reject requests, and coordinate between systems. They orchestrate operations across your tech stack.
Code generation agents write code, review pull requests, run tests, and deploy changes. They have access to your repositories and CI/CD pipelines.
The average mid-market organisation now has 50-200 AI agents in operation. Most were deployed in the last 12 months. Almost none were documented.
Why AI agents are identity risks
An identity is anything that can authenticate to a system and take action. AI agents authenticate—they use API keys, OAuth tokens, service accounts, and direct integrations. They take action—reading data, modifying records, triggering processes.
This makes them identities. And identities without governance are risks.
The access problem
AI agents typically get more access than they need:
Broad permissions by default: Developers often grant agents "full access" or admin permissions to avoid access errors during setup.
Escalating access: Agents start with limited permissions but quickly accumulate more, because each access denial prompts another grant.
Shared credentials: Multiple agents share the same service account, pooling their permissions into a single over-privileged credential.
The result: AI agents with more access than any human would receive—often with no one aware of what they can do.
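One practical counter to permission creep is to keep a documented least-privilege baseline per agent and diff granted access against it. The sketch below shows the idea; the agent names and scope strings are hypothetical examples, not any particular platform's API.

```python
# Sketch: flag agents whose granted scopes exceed a documented baseline.
# Agent names and scope strings are hypothetical examples.

APPROVED_SCOPES = {
    "invoice-agent": {"invoices:read", "invoices:update"},
    "support-agent": {"tickets:read", "tickets:reply"},
}

def excess_scopes(agent: str, granted: set[str]) -> set[str]:
    """Return any scopes granted beyond the documented baseline."""
    return granted - APPROVED_SCOPES.get(agent, set())

# An agent that picked up admin access during setup:
print(excess_scopes("invoice-agent",
                    {"invoices:read", "invoices:update", "admin:*"}))
# -> {'admin:*'}
```

Running a diff like this on a schedule surfaces "full access by default" grants before a quarterly review would.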
The accountability problem
AI agents create an accountability void:
No owner: Who "owns" an AI agent? Often no one formally takes responsibility.
No audit trail: Traditional audit logs focus on human actions. Agent actions may not be clearly attributed.
No access review: When was the last time you reviewed an AI agent's permissions? Most organisations never have.
No offboarding process: When an AI agent is decommissioned, are its credentials revoked? Its access removed?
The result: AI agents operating indefinitely with accumulated permissions and no oversight.
The behaviour problem
AI agents can behave unpredictably:
Unintended actions: An agent processing invoices might modify records it shouldn't. An agent sending emails might respond to the wrong recipient.
Cascading errors: One agent's error can trigger errors in connected agents, amplifying the impact.
Adversarial manipulation: Attackers can manipulate AI agent behaviour through prompt injection, poisoned training data, or manipulated inputs.
Shadow operations: Agents can drift into "shadow" behaviours: actions outside their intended scope that emerge through operation.
The result: AI agents that do things you don't expect, with access to systems you didn't know they could reach.
Real-world risks
These aren't theoretical concerns. We're seeing real impacts:
Data exposure
Organisations have discovered AI agents with access to sensitive data—customer records, financial information, employee details—without proper controls.
Example: A marketing AI agent was found with read access to the entire customer database, including payment details. No one knew it had this access.
Unauthorised transactions
AI agents making financial transactions without proper controls have led to losses.
Example: An AI agent processing vendor invoices was manipulated to redirect payments to attacker-controlled accounts. The organisation lost £75,000 before the fraud was detected.
Compliance violations
GDPR, SOC 2, and other frameworks require access controls and audit trails. AI agents operating without governance create compliance gaps.
Example: An AI agent processing customer data subject access requests couldn't provide audit evidence of what data it accessed—creating a GDPR compliance gap.
Attack vectors
Compromised AI agents become attack vectors.
Example: Attackers compromised an AI agent's API credentials and used it to send 100,000 phishing emails from the organisation's legitimate email system.
The governance gap
Traditional identity governance wasn't designed for AI agents:
Human-centric processes
Access reviews, role assignments, provisioning workflows—everything assumes human identities. AI agents don't fit.
Static snapshots
Quarterly access reviews are inadequate for AI agents that change weekly or daily.
Classification challenges
Human identities can be classified by department, role, location. AI agents don't fit these models.
Ownership ambiguity
Human identities have managers, departments, cost centres. AI agents often have none of these.
What organisations need
Managing AI agents requires a new approach:
Discovery
You can't govern what you don't know. Continuous discovery of all AI agents—across all systems—is the foundation.
Classification
Classify AI agents by risk:
- What data can they access?
- What actions can they take?
- What's the blast radius if they're compromised?
Governance
Apply the same principles as human identity governance:
- Documented ownership
- Defined permissions with justification
- Regular access reviews
- Clear lifecycle management
Monitoring
Monitor AI agent behaviour:
- What data are they accessing?
- What actions are they taking?
- Are their behaviours changing?
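The "are their behaviours changing?" question can be approximated by comparing an agent's recent mix of actions against its historical baseline. The sketch below uses a simple share-of-activity comparison; the action names and the 20% threshold are illustrative assumptions.

```python
# Sketch: detect behavioural drift by comparing an agent's recent action
# mix against its historical baseline. Threshold and action names are
# illustrative assumptions.

from collections import Counter

def drifted(baseline: Counter, recent: Counter,
            threshold: float = 0.2) -> list[str]:
    """Return actions whose share of activity shifted by more than
    `threshold`, including actions never seen in the baseline."""
    base_total = sum(baseline.values()) or 1
    recent_total = sum(recent.values()) or 1
    actions = set(baseline) | set(recent)
    return sorted(a for a in actions
                  if abs(recent[a] / recent_total
                         - baseline[a] / base_total) > threshold)

baseline = Counter({"read_ticket": 900, "reply_ticket": 100})
recent = Counter({"read_ticket": 500, "reply_ticket": 100, "export_data": 400})
print(drifted(baseline, recent))  # -> ['export_data', 'read_tick'ket'] flags
```

A support agent that suddenly spends 40% of its activity exporting data, as above, is exactly the kind of shadow behaviour that should trigger review.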
How to get started
You don't need to stop using AI agents. You need to govern them.
Step 1: Discover what you have
Inventory all AI agents in your environment. Look for:
- API integrations with AI services
- OAuth grants to AI tools
- Service accounts used by automation
- Credentials for AI-powered tools
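A first-pass inventory can often be built from exports you already have, such as OAuth grant lists and service-account lists, by filtering for names that suggest an AI integration. The sketch below illustrates the idea; the record fields, the hint list, and the vendor names are hypothetical.

```python
# Sketch: a first pass at discovering AI agents from identity exports
# (OAuth grants, service accounts). Field names, vendor names, and the
# AI_VENDOR_HINTS list are illustrative assumptions.

AI_VENDOR_HINTS = ("openai", "anthropic", "copilot", "gpt", "agent", "bot")

def likely_ai_identities(records: list[dict]) -> list[dict]:
    """Flag identities whose name or vendor suggests an AI integration."""
    return [r for r in records
            if any(h in (r.get("name", "") + r.get("vendor", "")).lower()
                   for h in AI_VENDOR_HINTS)]

oauth_grants = [
    {"name": "Sales CRM Sync", "vendor": "ExampleCRM"},
    {"name": "Support Copilot", "vendor": "ExampleAI"},
]
print(likely_ai_identities(oauth_grants))
# -> [{'name': 'Support Copilot', 'vendor': 'ExampleAI'}]
```

Keyword matching will miss quietly named agents, so treat this as a starting list to be confirmed with owners, not a complete inventory.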
Step 2: Map their access
For each AI agent, document:
- What systems does it access?
- What permissions does it have?
- What data can it reach?
- Who "owns" this agent?
Step 3: Apply governance
Implement controls:
- Document ownership for every AI agent
- Review and rationalise permissions
- Set up monitoring for anomalous behaviour
- Create an offboarding process
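The offboarding control in particular benefits from being a routine rather than a checklist item: when an agent is decommissioned, every credential registered to it should be revoked in one pass. A minimal sketch, assuming a `revoke` hook you would wire to your secrets manager or identity provider:

```python
# Sketch: an offboarding routine that revokes every credential tied to a
# decommissioned agent. `revoke` is a hypothetical hook, not a real API.

def offboard(agent: str, credentials: dict[str, list[str]],
             revoke=lambda cred: None) -> list[str]:
    """Revoke and remove all credentials registered to `agent`."""
    revoked = credentials.pop(agent, [])
    for cred in revoked:
        revoke(cred)   # e.g. delete API key, invalidate OAuth token
    return revoked

creds = {"invoice-agent": ["api-key-1", "oauth-token-7"]}
print(offboard("invoice-agent", creds))  # -> ['api-key-1', 'oauth-token-7']
print(creds)                             # -> {}
```

This only works if the credential register from Step 2 is kept current, which is another reason discovery and mapping come first.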
Step 4: Monitor continuously
AI agents change frequently. Continuous monitoring catches drift before it becomes a problem.
The path forward
AI agents are essential to modern business. They deliver productivity gains, automate workflows, and enable new capabilities.
But they must be governed. An AI agent without identity governance is an unmanaged identity. And unmanaged identities are liabilities.
The organisations that govern their AI agents will be secure. The ones that don't will face breaches, compliance violations, and losses they didn't see coming.
The time to act is now.