
Identity security is the discipline of reducing identity-related risk by identifying, governing, and protecting every identity within an organization. The discipline is growing more complex.
In the past, security teams focused on human identities and ensuring they had the right level of access to the resources they needed to do their jobs. In recent years, this focus has expanded to securing machine identities to protect secrets, certificates, and workloads.
The latest source of identity complexity is agentic AI. Now that businesses are rolling out AI agents, the challenge is securing an identity that inherits the security challenges of both humans and machines.
Are AI agents a new identity class?
AI agents are machines by definition, but their ability to make decisions and to learn is closer to human capability. Agentic AI uses advanced algorithms and machine learning to perform tasks and make decisions on behalf of people.
Agents in complex agentic AI systems can perceive their environment, process information, make decisions, and even learn and improve over time. That makes these agents more than machine identities. They can also work independently with minimal human prompts and oversight.
Challenges of the new identity class
Scale and oversight are significant challenges with AI identities, just as they have been with machine identities. Traditional machine identities now outnumber human identities 82:1, and by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to Gartner.
Organizations must onboard these identities, give them appropriate access, manage them, and eventually deprovision them. Taking those steps would be challenging enough with human or machine identities. Introducing AI identities adds much more complexity.
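The lifecycle above — onboard, grant access, manage, and eventually deprovision — can be sketched as a simple state machine. The class, state, and entitlement names here are illustrative assumptions, not any specific identity product's API:

```python
from enum import Enum, auto


class LifecycleState(Enum):
    """Illustrative identity lifecycle states (hypothetical names)."""
    ONBOARDED = auto()
    ACTIVE = auto()
    DEPROVISIONED = auto()


class AgentIdentity:
    """Minimal sketch of an AI-agent identity record."""

    # Allowed transitions: onboard -> active -> deprovisioned
    _TRANSITIONS = {
        LifecycleState.ONBOARDED: {LifecycleState.ACTIVE, LifecycleState.DEPROVISIONED},
        LifecycleState.ACTIVE: {LifecycleState.DEPROVISIONED},
        LifecycleState.DEPROVISIONED: set(),
    }

    def __init__(self, name: str):
        self.name = name
        self.state = LifecycleState.ONBOARDED
        self.entitlements: set[str] = set()

    def _move_to(self, new_state: LifecycleState) -> None:
        if new_state not in self._TRANSITIONS[self.state]:
            raise ValueError(f"invalid transition {self.state} -> {new_state}")
        self.state = new_state

    def grant(self, entitlement: str) -> None:
        """Grant access only to an identity that has not been retired."""
        if self.state is LifecycleState.DEPROVISIONED:
            raise ValueError("cannot grant access to a deprovisioned identity")
        if self.state is LifecycleState.ONBOARDED:
            self._move_to(LifecycleState.ACTIVE)
        self.entitlements.add(entitlement)

    def deprovision(self) -> None:
        """Remove all access when the agent is retired."""
        self.entitlements.clear()
        self._move_to(LifecycleState.DEPROVISIONED)
```

The point of the guarded transitions is that a deprovisioned agent can never silently reacquire access — exactly the failure mode that makes orphaned identities dangerous at scale.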
AI agents need access to enterprise resources. But how do you manage this access and determine what level of privilege these agents require? And how do you manage these access challenges without adding human intervention to your teams’ workload—potentially negating the productivity you’re trying to gain by using AI agents?
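One way to manage that access without adding human toil is just-in-time credentials that expire on their own, so nobody has to remember to revoke them. The sketch below is a hypothetical illustration of the pattern, not a real IAM API:

```python
import secrets
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class EphemeralCredential:
    """Hypothetical short-lived credential for an AI agent."""
    token: str
    scope: tuple       # actions the agent may perform
    expires_at: float  # Unix timestamp after which the token is dead

    def is_valid(self, now=None) -> bool:
        """The credential revokes itself: validity is just a clock check."""
        return (now if now is not None else time.time()) < self.expires_at


def issue_credential(scope: tuple, ttl_seconds: int = 300) -> EphemeralCredential:
    """Mint a narrowly scoped token that expires automatically,
    removing the manual-revocation step from the workflow."""
    return EphemeralCredential(
        token=secrets.token_urlsafe(32),
        scope=scope,
        expires_at=time.time() + ttl_seconds,
    )
```

Because expiry is enforced by the credential itself, the team's workload stays flat as the agent population grows — the automation the agents were meant to deliver is not spent policing them.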
Some organizations that are eager to introduce AI agents may grant broad permissions to speed up implementation. But if the agent is compromised as a result, it could do a lot of damage.
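The alternative to broad permissions is a least-privilege policy that names explicit actions on explicit resources, so a compromised agent's blast radius stays small. The policy shape below is a hypothetical sketch, not any vendor's policy format:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentPolicy:
    """Hypothetical least-privilege policy: an explicit allow-list of
    (action, resource) pairs, with everything else denied by default."""
    agent: str
    allowed: frozenset  # of (action, resource) tuples

    def permits(self, action: str, resource: str) -> bool:
        return (action, resource) in self.allowed


# Narrow policy: this agent may only read one dataset (names are illustrative).
narrow = AgentPolicy(
    agent="invoice-bot",
    allowed=frozenset({("read", "datasets/invoices")}),
)
```

With deny-by-default, an attacker who compromises `invoice-bot` cannot write to the dataset or move laterally to other resources — the damage is bounded by the allow-list, not by the attacker's imagination.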
AI agents also lack security awareness, and they do not understand right and wrong. They can be programmed to detect anomalies and flag unusual behavior—which is immensely helpful in preventing fraud. However
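The anomaly-flagging idea can be illustrated with a minimal statistical baseline: track an agent's recent activity (say, requests per minute) and flag readings far from the historical mean. The z-score threshold and metric here are arbitrary illustrative choices, not a production detection method:

```python
import statistics


def is_anomalous(history: list, current: float, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it deviates from the historical mean by more than
    `z_threshold` standard deviations (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # flat baseline: any change is unusual
    return abs(current - mean) / stdev > z_threshold
```

A flagged reading is a signal for review, not a verdict — which is precisely why the agent's lack of judgment matters: the flag still needs a policy, human or automated, to decide what happens next.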