The IAM You Once Trusted Can’t Protect You Anymore, Here’s Why
Imagine running a business where every worker has a dynamic schedule, constantly shifting responsibilities, and the ability to delegate tasks to coworkers on the fly. That’s essentially what modern AI agents are doing. They aren’t static service accounts that perform the same API call over and over. They are dynamic, context-aware actors that respond to prompts, learn from previous interactions, and seek out the best way to achieve a goal.
Analysts predict that within the next couple of years, a significant share of enterprises will depend on these agents to carry out routine tasks, from booking travel to provisioning infrastructure. Many companies have already started deploying these digital helpers across customer service, IT, marketing, and finance.
But while the agents are getting smarter, the systems that govern their identities are still rooted in a world where humans log in with passwords and stay within well-defined roles. In cloud environments, it’s common for machine identities—API keys, service accounts, certificates—to outnumber human users by an order of magnitude or more. Each of those non-human identities opens a potential door for attackers, because they often come with long-lived credentials and broad privileges. When you add autonomous agents on top of that, the attack surface grows exponentially. It’s like handing out master keys to a swarm of robots and hoping none of them falls into the wrong hands.
Traditional IAM systems were never designed for this. They revolve around creating roles and entitlements, assigning them to users, and issuing long-term credentials. Once a user is approved, they can perform a wide range of actions until someone manually revokes access. This model works reasonably well when people’s job duties are stable and the number of accounts is manageable. It crumbles when you need to manage thousands of short-lived identities that change what they’re doing every hour.
The shortcomings of human-centric IAM are clear in the context of agentic AI:
These weaknesses aren’t hypothetical. Many security breaches trace back to forgotten service accounts, unrotated API keys, or over-privileged machine identities. As AI agents proliferate, the number of machine identities will dwarf the number of human users, and the consequences of mismanagement will be proportionally larger.
To address these challenges, we must rethink what an “identity” means in the context of autonomous software. For people, an identity is typically a username, a password, and some attributes like job title or department. For an AI agent, identity needs to convey far more:
An “agentic identity”, therefore, looks less like a user account and more like a capability profile—a bundle of metadata, policies, and context that evolves over time. It is dynamic and contextual. If the agent’s behavior changes, its identity (and corresponding permissions) must change too. Treating agents as first-class digital citizens means they get unique identifiers, detailed auditing, and proper lifecycle management, including decommissioning when they are no longer needed.
To illustrate the difference, consider a simple comparison:
| Lifetime | Typically long-lived; created at hire and retired at termination. | Often short-lived; created on demand for a specific task and destroyed afterward. |
| Scope of access | Tied to a job role or department, rarely changes day to day. | Tied to a purpose or objective; may change mid-session as tasks evolve. |
| Autonomy | Acts based on direct user input; decisions are deliberate. | Acts autonomously; can plan multi-step tasks and delegate to other agents. |
| Oversight | Human user is directly responsible for actions. | Requires human oversight for sensitive operations; actions should be logged and reviewable. |
| Risk profile | Behavior is predictable and deterministic. | Behavior can be non-deterministic; context and state influence outcomes. |
If agents need capability-based identities, we need an infrastructure capable of issuing, governing, and monitoring them at scale. This leads to the concept of an identity control plane for agentic AI. Think of it as a dynamic trust layer that sits between agents and the systems they access. Its job is to determine whether an agent should be allowed to perform an action, based on its current identity profile, risk context, and declared purpose.
At a high level, a modern identity control plane should provide:
Adopting a new control plane addresses many risks, but it doesn’t eliminate them. Security teams still need to anticipate and counter emerging threats:
AI agents are autonomous. They carry state from one interaction to the next and adapt as they learn. That creates unpredictability. An agent might discover a new tool or API during a session and decide to use it. It might recall a previous prompt and apply it out of context. This flexibility is valuable, but it means that monitoring must account for evolving behavior. Input sanitization and output vetting become essential to prevent prompt injection or data leakage.
Agentic AI has exploded into the mainstream so quickly that security frameworks haven’t kept pace. Standards like OAuth and OpenID Connect work well for human users, but they don’t gracefully support scenarios where agents act on behalf of multiple users or spawn sub-agents. Today, many organizations build custom solutions, resulting in fragmentation. In the long run, unified standards for agent identity and delegation will be necessary to reduce complexity and enable interoperability.
When agents delegate tasks to other agents, the chain of authority can get tangled. A root agent might spawn several sub-agents, each acting in different contexts. Without careful scope attenuation—clearly defining what each delegate can do and ensuring that delegated permissions don’t accumulate—we risk privilege escalation. New authorization patterns, such as explicit “on-behalf-of” flows, will be needed.
As developers experiment with AI, agents may show up in unexpected places: a proof-of-concept script in a private Git repository, a marketing tool that suddenly offers “smart workflows,” or a third-party SaaS platform that quietly enables agents for customer support. Discovering and inventorying these agents requires continuous scanning, code analysis, and integration with vendor APIs. Generating an AI bill of materials—an inventory of AI models, frameworks, and plugins—can aid in risk assessment and compliance.
Attackers are already adapting to AI. Prompt injection can trick agents into revealing secrets or executing unauthorized actions. Tool poisoning can compromise the external APIs agents rely on, causing them to behave maliciously. Vulnerabilities in the underlying model or plugin can allow an attacker to take over an agent. Defending against these threats requires not only control over identity and authorization but also a deep understanding of the AI stack, from model training to runtime execution.
Technology can only go so far. People must understand the risks of agentic AI and follow safe practices. Developers should avoid hardcoding secrets and should document the dependencies and capabilities of the agents they build. Business leaders need to define acceptable use cases and ensure that sensitive data isn’t exposed. Regulators will likely demand transparency and explainability in AI systems, especially when decisions impact consumers. Auditable identity trails will help prove compliance..
In my view, the question isn’t whether we need new identity controls for AI agents—it’s whether we can rethink identity in a way that supports responsible autonomy. AI agents are more than scripts; they’re decision-makers that learn, plan, and act. If we treat them as faceless processes with static API keys, we invite misuse and erode trust. If we assign them unique identities, track their lineage, bind their purpose, and monitor their actions, we can harness their potential without sacrificing security.
That said, we must avoid repeating the mistakes of the early cloud days, where every provider invented its own proprietary IAM solution. Fragmentation slows adoption and creates blind spots. Open standards for agent identity, delegation, and credential management will be crucial. So will collaboration across vendors, researchers, and regulators. We have an opportunity to build a robust foundation before agentic AI becomes ubiquitous. Let’s not waste it.
Finally, remember that identity is not just a technical problem; it’s a cultural one. Security teams, developers, and business leaders must work together. Training is essential. Adapting policies and processes to reflect the realities of autonomous software will take time. But the benefits—faster workflows, deeper insights, and the ability to delegate routine tasks to machines—are worth the investment.
Human-centric IAM has served us well for decades, but it is ill-equipped for a future dominated by AI agents. A new identity control plane, built around dynamic identities and purpose-bound access, is essential to govern agentic AI responsibly. Coupled with an identity security fabric that unifies governance across humans and machines, this control plane will enable organizations to embrace autonomy without compromising security or compliance.The journey begins with awareness: understand your machine identities, separate agents from generic service accounts, and start experimenting with short-lived credentials. From there, build the monitoring, policies, and human oversight needed to keep agents honest. Engage with standards groups to help shape the protocols that will define agentic identity. And above all, treat AI agents not as black boxes but as members of your digital workforce. Only then can we unlock their true potential—safely, ethically, and efficiently.