CISA and Global Cyber Agencies Push Identity-First Security Model for Agentic AI as Enterprise Risks Accelerate
A new wave of government-backed cybersecurity guidance is reshaping how enterprises approach agentic AI, warning that autonomous systems are rapidly evolving from passive tools into active participants inside corporate environments. The shift is forcing security leaders to rethink identity, accountability, and access control as foundational elements of AI governance.
The guidance, developed jointly by CISA, the NSA, and allied international cyber agencies, emphasizes that agentic AI systems are no longer confined to generating content. They are increasingly capable of making decisions, interacting with infrastructure, and executing tasks across enterprise environments with minimal human intervention.
That expanded capability comes with a sharp increase in risk.
Agentic AI systems operate through interconnected components such as large language models, external tools, memory stores, and APIs, creating a significantly larger attack surface than traditional software. These systems can autonomously plan, execute, and even spawn sub-agents, introducing new layers of complexity and potential failure points.
Security experts warn that without strict controls, these systems can quickly become a liability.
Kevin Surace, Chair at TokenCore, framed the issue in stark terms:
"This guidance matters because agentic AI is no longer just software. Once an agent can access systems, move data, call tools, or make decisions, it becomes a digital actor inside the enterprise. That actor needs identity, limits, monitoring, and accountability."
At the core of the new guidance is a clear message. Identity is no longer just a user problem. It is now the control plane for AI.
As enterprises deploy AI agents into workflows like procurement, customer support, and infrastructure management, the risk of overprivileged or poorly governed agents grows. The report highlights scenarios where compromised agents with excessive permissions can approve payments, modify contracts, or exfiltrate sensitive data while appearing legitimate in audit logs.
Surace emphasized that accountability must be explicit and enforced at the human level:
"The most important word here is accountability. Every agent should have a human owner. Someone needs to sign off on what that agent can do, what systems it can touch, and what risks it is allowed to take."
This aligns closely with the guidance’s recommendation that every agent be treated as a distinct identity with tightly scoped privileges, backed by strong authentication and continuous verification. Organizations are urged to adopt least privilege access, enforce cryptographic identity controls, and maintain a trusted registry of all active agents.
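In practice, those recommendations amount to a deny-by-default registry of agent identities, each bound to a human owner and an explicit set of scopes. The sketch below is a minimal, hypothetical illustration of that pattern; the class and scope names are illustrative assumptions, not taken from the guidance itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of a least-privilege agent registry.
# All names (AgentIdentity, scopes like "purchase_orders:read") are
# illustrative assumptions, not from the published guidance.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str          # distinct identity for this agent
    owner: str             # accountable human sponsor
    scopes: frozenset      # tightly scoped, explicitly granted privileges

class AgentRegistry:
    """Trusted registry of all active agents."""
    def __init__(self):
        self._agents = {}

    def register(self, agent: AgentIdentity) -> None:
        # Enforce accountability: no agent without a human owner.
        if not agent.owner:
            raise ValueError("every agent must have a human owner")
        self._agents[agent.agent_id] = agent

    def is_authorized(self, agent_id: str, scope: str) -> bool:
        # Deny by default: unknown agents and ungranted scopes fail closed.
        agent = self._agents.get(agent_id)
        return agent is not None and scope in agent.scopes

registry = AgentRegistry()
registry.register(AgentIdentity(
    "procure-bot-01", "alice@example.com",
    frozenset({"purchase_orders:read"})))
```

The key design choice is that authorization fails closed: an agent absent from the registry, or an "orphaned" agent whose scopes were never granted, can do nothing, rather than everything.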
The stakes are high. One of the most concerning risks outlined in the report is the emergence of “orphaned agents” operating without clear ownership or oversight. These agents can persist in environments with broad permissions, creating silent entry points for attackers.
Surace warned:
"We should not allow orphaned agents to run around the enterprise with broad privileges and no accountable sponsor. That is how you automate your next breach."
Beyond identity, the guidance outlines a broader set of risks unique to agentic AI. These include prompt injection attacks, privilege escalation, tool exploitation, and cascading failures across multi-agent systems. Because agents can interact with external data sources and third-party tools, attackers can manipulate inputs or compromise dependencies to influence agent behavior at scale.
Another critical challenge is visibility. Agentic systems often operate through long chains of reasoning and distributed decision-making, making it difficult to trace actions back to a single source. This creates significant challenges for auditing, compliance, and incident response.
To address this, agencies recommend continuous monitoring, comprehensive logging, and human-in-the-loop controls for high-risk actions. Enterprises are also encouraged to deploy agents progressively, starting with low-risk tasks and expanding autonomy only as security controls mature.
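A human-in-the-loop gate for high-risk actions can be as simple as refusing to execute certain action types without a named approver, while logging every decision for audit. The following is a minimal sketch under those assumptions; the action names and function signature are hypothetical.

```python
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical set of actions requiring explicit human sign-off.
HIGH_RISK = {"approve_payment", "modify_contract", "delete_data"}

def execute_action(agent_id: str, action: str,
                   approver: Optional[str] = None) -> str:
    """Route high-risk actions through a human approval gate and log
    every decision so it remains attributable after the fact."""
    if action in HIGH_RISK:
        if approver is None:
            # Fail closed: no named human approver, no execution.
            log.warning("blocked %s: %s requires human approval",
                        agent_id, action)
            return "blocked"
        log.info("%s executed %s, approved by %s",
                 agent_id, action, approver)
        return "executed"
    log.info("%s executed low-risk action %s", agent_id, action)
    return "executed"
```

The audit log ties each high-risk action to a named approver, which is the attributability the guidance asks for: the record shows not just what the agent did, but which human authorized it.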
Surace underscored the need for stronger identity assurance mechanisms:
"And that sign off cannot just be a password, an email approval, or a checkbox. It should be tied to biometric assured identity, so the enterprise knows exactly who authorized the agent and who is responsible for it."
The broader takeaway for CISOs and security teams is clear. Traditional perimeter defenses and static access controls are insufficient for a world where AI systems can act independently. Instead, organizations must adopt a zero trust approach where every agent action is verified, monitored, and attributable.
Surace summed it up:
"Identity is the control plane for agentic AI. If you cannot prove who authorized the agent, you have not secured the agent. You have just given automation a badge and hoped for the best."
As agentic AI adoption accelerates across industries, the line between human and machine actors continues to blur. For enterprises, the challenge is no longer just deploying AI. It is ensuring that every digital actor operates within clear boundaries, with accountability that can stand up under scrutiny.