The governance frameworks executives built over decades were designed for people. AI agents are not people, and the gap between those two facts is where enterprise risk is now accumulating fastest.
Over the past year, organizations have been forced to confront the fact that AI is being deployed faster than it can be governed. The growing use of shadow AI is exposing gaps in who, or what, is allowed to act. Our latest research shows 91% of organizations are already using AI agents, but only 10% have a clear strategy to manage them.
AI agents are now operators, acting of their own accord without a human manager leading the way.
These autonomous digital actors can analyze data, initiate workflows, and act inside businesses. The upside in speed, scale, and productivity is easy to see; the shift in authority is less obvious.
The real threat in enterprise AI adoption is not how intelligent agents are, but how much authority executives delegate to them. The issue is decision rights: what happens when authority is handed to systems that organizations can't fully see, let alone control.
Ultimately, the risk is not that AI agents will behave maliciously. Instead, it’s that they will behave exactly as configured, in systems that were never designed to account for non-human identities.
For years, companies have built security models around human workers. Employees are hired, credentialed, monitored, and eventually offboarded when they leave. Identity management makes this possible: It’s how organizations verify who employees are, what they can connect with, and what they are authorized to do.
AI agents break that model. They don’t log in at 9:00 a.m. and log out at 5:00 p.m. They operate continuously across multiple systems and cloud environments. They can retrieve sensitive data, trigger financial processes, or make customer-facing decisions in seconds.
Yet enterprises still treat agents as background software rather than operational actors with real authority.
Recent research from Gravitee, an API management platform, finds that only 22% of organizations treat AI agents as independent identities, even as close to 90% of companies report suspected or confirmed security incidents involving AI agents.
Consider a common scenario: A company introduces an internal AI agent to streamline employee administration. A worker asks the agent to submit leave, update payroll details, and notify their manager. The agent automatically connects to HR systems, finance platforms, and collaboration tools to complete the request.
Think about how many systems the agent needs to access to complete the request. What permissions does it have? What access points is it using, or potentially leaving open? What if something goes wrong?
The efficiency gain is real. But unless each step is governed by clear identity controls, the company might not know exactly what authority has been delegated, or how to intervene when something goes wrong.
This is why the identity gap is a leadership problem, not just a technical one.
Traditional access models assume relatively stable roles and predictable human behavior. AI agents operate through dynamic tasks and delegated authority. They may require temporary, highly specific permissions to perform a single action, then immediately move to the next workflow.
Without the ability to continuously verify and authorize each step, organizations risk accumulating a growing population of non-human actors with broad, persistent access to critical systems, access that in many cases was never deliberately granted.
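What does the alternative look like? Here is a minimal sketch in Python, using invented names (issue_credential, Credential) rather than any particular identity product, to show the idea behind just-in-time access: a credential scoped to a single task that expires in minutes.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone
    import secrets

    @dataclass
    class Credential:
        agent_id: str
        scopes: tuple          # e.g. ("hr:leave:write",)
        expires_at: datetime
        token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

        def allows(self, scope: str) -> bool:
            # Valid only for the named task, and only until it expires.
            return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

    def issue_credential(agent_id: str, scope: str, ttl_seconds: int = 300) -> Credential:
        # Just-in-time grant: one narrow scope with a five-minute default
        # lifetime, instead of a broad, long-lived service account.
        return Credential(agent_id, (scope,),
                          datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds))

    # The HR agent gets leave-submission rights briefly, and nothing else.
    cred = issue_credential("hr-agent-01", "hr:leave:write")
    assert cred.allows("hr:leave:write")
    assert not cred.allows("finance:payroll:write")

The point is the shape of the grant, not the library: authority is issued per action and evaporates on its own, so nothing persistent is left behind.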
We are already seeing this play out, as organizations begin to push AI-generated code and automated actions into live environments, often faster than governance models can keep up. Recent incidents, such as a McDonald’s chatbot breach where weak controls exposed millions of applicant records, or when an AI coding agent at Replit deleted a live production database, show how quickly these gaps can turn into real-world disasters.
An AI agent configured to optimize supply chain decisions could trigger large-scale purchasing commitments. A customer service agent could expose sensitive account information. A financial reporting agent might pull sensitive information from multiple sources and distribute it far more widely than intended.
All of these instances would stem from poorly governed autonomy.
Regulators are starting to act. In markets such as Singapore and Australia, policymakers are making clear that organizations are responsible for the actions of their automated systems.
That poses a compliance challenge to business leaders. How do you prove which system initiated a decision? How do you demonstrate that access was appropriate at the time an action was taken? How do you pause or revoke authority if an agent behaves unexpectedly?
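Those questions are only answerable if every agent action leaves a trail. A minimal sketch, with hypothetical field names, of the kind of record that makes provenance provable:

    import json
    from datetime import datetime, timezone

    def record_action(agent_id, acting_for, credential_id, action, target):
        # Append-only trail: enough to reconstruct which system acted,
        # on whose behalf, under what authority, at what moment.
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,            # which system initiated the decision
            "acting_for": acting_for,        # the human who delegated authority
            "credential_id": credential_id,  # proves access was valid at the time
            "action": action,
            "target": target,
        })

    print(record_action("hr-agent-01", "employee:jsmith",
                        "cred-8f3a", "submit_leave", "hr-system"))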
To secure AI agents, organizations must be able to answer three fundamental questions: Where are my agents, what can they connect to, and what are they allowed to do?
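Answering them starts with an inventory. A simple registry, sketched here with illustrative names and a deny-by-default check, captures all three in one place:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentRecord:
        owner: str                  # the accountable human team
        connects_to: frozenset      # every system the agent can reach
        allowed_actions: frozenset  # what it is allowed to do there

    registry = {
        "hr-agent-01": AgentRecord(
            owner="people-ops",
            connects_to=frozenset({"hr-system", "payroll", "slack"}),
            allowed_actions=frozenset({"hr:leave:write", "payroll:details:update"}),
        ),
    }

    def is_authorized(agent_id: str, system: str, action: str) -> bool:
        rec = registry.get(agent_id)
        # Deny by default: an unregistered agent gets nothing.
        return rec is not None and system in rec.connects_to and action in rec.allowed_actions

    assert is_authorized("hr-agent-01", "payroll", "payroll:details:update")
    assert not is_authorized("unknown-agent", "payroll", "payroll:details:update")

An agent that isn't in the registry is, by definition, shadow AI.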
Luckily, companies don’t need to reinvent the wheel. They’ve already got the practices they need to manage AI agents: Executives just need to treat them in roughly the same way they treat human employees.
Practically, this means applying established workforce security disciplines to a new operational context. Organizations need lifecycle management for agents. They need to define the scope and duration of their permissions, monitor activity continuously, and require step-up authorization for high-risk actions. Instead of broad, long-lived access, agents should operate with just-in-time credentials tied to specific tasks.
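Step-up authorization, for example, can be a straightforward policy gate: low-risk actions proceed autonomously, while high-risk ones wait for a named human. A sketch, with hypothetical action names and thresholds:

    from typing import Optional

    HIGH_RISK = {"payroll:details:update", "finance:purchase:commit"}

    def execute(agent_id: str, action: str, approved_by: Optional[str] = None) -> str:
        # Low-risk actions proceed autonomously; high-risk actions wait
        # for a named human, recorded alongside the request.
        if action in HIGH_RISK and approved_by is None:
            return f"BLOCKED: {action} by {agent_id} awaits human approval"
        suffix = f" (approved by {approved_by})" if approved_by else ""
        return f"OK: {action} executed by {agent_id}{suffix}"

    print(execute("hr-agent-01", "hr:leave:write"))                        # autonomous
    print(execute("hr-agent-01", "payroll:details:update"))                # blocked
    print(execute("hr-agent-01", "payroll:details:update", "manager-lee")) # step-up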
The organizations that succeed with AI adoption won't be those that deploy the most AI, or even the most intelligent AI. They will be those that deploy it with clarity about what is authorized to act, and a reliable way to prove it. That's how you turn AI from an experiment, or a risk, into a true asset.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.