
Why CISOs Must Rein In Agentic AI Before It Runs The Enterprise



David Schiffer is the CEO of RevBits and formerly of Safe Banking Systems (SBS). RevBits develops cybersecurity software for organizations.

If generative AI is like giving every employee a calculator, agentic AI is more like handing them a fleet of self-driving cars. At first, it feels impressive: tasks finish on their own, workflows speed up and the organization gains new momentum.

But as any CISO knows, once a machine starts steering instead of just suggesting, the stakes are higher. Now, you’re not just managing tools. You’re managing actors, and those actors have access to your digital kingdom.

For the past decade, security leaders have dealt with identity sprawl, cloud fragmentation and a surge in machine-to-machine communication. Just as the industry began to address these issues, a new challenge emerged: agentic AI.

These systems do more than generate content or answer questions. They take action, start workflows and act as high-privilege digital identities within the company. For security leaders, this is a major change in the threat landscape that requires better visibility, governance and discipline.

The pressure is real, and it’s coming from every direction.

Boards want more efficiency. Business units want automation. Developers want faster results. The market rewards companies that move quickly. But security leaders know something often overlooked: Every new identity, whether human or machine, can be an attack surface. Agentic AI doesn’t just add new identities; it multiplies them.

Generative AI was disruptive but mostly limited to content creation and analysis. Agentic AI is different; it’s operational. These systems can trigger deployments, change configurations, move data and interact with production environments. They act like service accounts with high privileges, but they can also reason, adapt and link actions together. This is powerful, but also risky.

The machine identity problem just got much bigger.

Most companies already have trouble managing machine identities. Ratios of 40:1 or even 80:1 (machine-to-human) are common. With agentic AI, these numbers could grow into the hundreds or thousands. Each agent might have its own credentials, access patterns and activity. Without strong controls, organizations risk losing track of which agents exist, what they can access and how they act.

This is not just a possible risk; it’s a structural one. When identities grow faster than governance can keep up, blind spots appear. Attackers thrive in these blind spots. A forgotten agent with excessive permissions is like an unmonitored service account, except it can now act on its own, make decisions and potentially exacerbate a breach.

Governance is a security control.

Security leaders have long said that governance is a security function, not just a compliance task. With agentic AI, this is now impossible to ignore.

Before deploying autonomous agents, organizations need:

• Clear ownership for every agent

• Defined scopes and permissions

• Lifecycle policies for creation, rotation and retirement

• Audit trails that distinguish human actions from machine actions

• Approval workflows for new AI use cases

Skipping these steps doesn’t just make operations less efficient. It creates gaps that attackers can exploit. They don’t need to break into your most valuable systems if they can take over an agent that already has access.
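The ownership, scope and lifecycle requirements above can be sketched as a minimal agent registry. This is an illustrative sketch, not any particular product's API; every field and function name here is an assumption about how such a registry might be structured:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    """One entry in an AI-agent registry: ownership, scope and lifecycle."""
    agent_id: str
    owner: str                                   # accountable human or team
    scopes: list = field(default_factory=list)   # explicit permissions only
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    expires: datetime = None                     # every agent gets an end date

    def is_expired(self) -> bool:
        return self.expires is not None and datetime.now(timezone.utc) >= self.expires

registry = {}

def register_agent(agent_id, owner, scopes, ttl_days=90):
    """Refuse to register agents without an owner or an explicit scope list."""
    if not owner or not scopes:
        raise ValueError("every agent needs an owner and defined scopes")
    record = AgentRecord(
        agent_id=agent_id,
        owner=owner,
        scopes=scopes,
        expires=datetime.now(timezone.utc) + timedelta(days=ttl_days),
    )
    registry[agent_id] = record
    return record

def retire_expired():
    """Lifecycle policy: remove agents that are past their expiration date."""
    expired = [aid for aid, rec in registry.items() if rec.is_expired()]
    for aid in expired:
        del registry[aid]
    return expired
```

The design choice that matters is the refusal path: an agent with no owner or no defined scopes simply cannot be registered, which is how governance becomes a control rather than a document.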

The operating model will change, whether you prepare for it or not.

Agentic AI will change how engineering, operations and security teams work. Developers may soon work with hundreds of automated coworkers doing code reviews, testing and deployment. Security teams will need new ways to monitor, audit and, if needed, isolate AI agents.

Security leaders should prepare for:

• A dramatic increase in machine-initiated actions

• New categories of misconfiguration and drift

• Faster incident propagation when agents behave unexpectedly

• Accountability questions when machine-generated work causes impact

Organizations that succeed will treat AI agents as critical identities, not just as new tools or experiments. Those that struggle will be the ones that underestimate how fast these systems can change their operations.

Visibility is the first battle, and the one most organizations are losing. You can’t secure what you can’t see. Right now, many companies don’t have a reliable list of their current machine identities, let alone the next wave of autonomous agents.

A real-time inventory is essential. Without it, you can’t enforce least privilege, monitor behavior or respond effectively when problems arise. Agentic AI makes this even more urgent because these systems can create new agents, start new processes and interact with environments in ways traditional tools can’t track.
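One way to make "you can't secure what you can't see" concrete is a reconciliation check: compare the identities actually observed acting in your environment against the approved inventory, and flag anything unknown. A minimal sketch, assuming identity lists pulled from activity logs and the registry (the identifiers are hypothetical):

```python
def find_unknown_agents(observed_identities, approved_inventory):
    """Flag identities seen acting in the environment but absent from
    the approved inventory -- the blind spots attackers exploit."""
    return sorted(set(observed_identities) - set(approved_inventory))

# Identities pulled from activity logs vs. the registry of approved agents
observed = ["deploy-bot", "test-runner", "shadow-agent-7"]
approved = ["deploy-bot", "test-runner"]
unknown = find_unknown_agents(observed, approved)  # ["shadow-agent-7"]
```

Run on a schedule, a check like this turns the inventory from a static list into an enforcement point: any agent acting outside the registry is surfaced before it becomes a forgotten, over-permissioned identity.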

Follow the CISO’s guide to securing agentic AI.

Security leaders don’t need to fear agentic AI, but they do need to take it seriously. Here’s a practical guide to building the guardrails required to use these systems safely and at scale:

• Establish a real-time inventory of all AI agents. If you don’t know what exists, you can’t secure it. Inventory is the first and most important control.

• Enforce least‑privilege access. Treat agents like high-risk service accounts. Minimize permissions and review them frequently.

• Implement identity lifecycle management. Automate provisioning, rotation and deactivation. Require ownership and expiration dates for every agent.

• Build an AI governance council. Security, engineering, legal and risk must jointly approve new use cases. This isn’t optional; it’s structural.

• Require full auditability. Log every action and tag machine-generated activity. Make attribution fast and unambiguous.

• Conduct adversarial testing. Simulate prompt injection, privilege escalation and misuse scenarios. Don’t assume agents will behave as expected.

• Segment AI workloads. Isolate agents to prevent lateral movement and limit blast radius. Treat them like any other highly privileged identity.

• Train teams on AI accountability. Make it clear that humans remain responsible for machine-generated output. Accountability cannot be automated.
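The auditability step above can be sketched as structured log entries that always carry an explicit actor type, so human and machine actions stay distinguishable at query time. Field names and the human/agent split are illustrative assumptions, not a standard schema:

```python
import json
from datetime import datetime, timezone

def audit_event(actor_id, actor_type, action, target):
    """Emit a structured audit record; actor_type must be explicit."""
    if actor_type not in ("human", "agent"):
        raise ValueError("actor_type must be 'human' or 'agent'")
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor_id": actor_id,
        "actor_type": actor_type,   # distinguishes human from machine actions
        "action": action,
        "target": target,
    })

def machine_actions(events):
    """Attribution query: filter the log down to agent-initiated actions."""
    return [e for e in map(json.loads, events) if e["actor_type"] == "agent"]
```

Requiring the actor type at write time, rather than inferring it later, is what makes attribution "fast and unambiguous": the question of who acted is answered when the event is logged, not during an incident.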

The future belongs to the prepared.

Agentic AI will change how companies work. It will streamline workflows, drive innovation and improve efficiency. But it will also introduce new risks that require visibility, identity and access control, as well as strong governance.

Security leaders who act early will help their organizations innovate safely. Those who wait may find themselves trying to control a system that’s already moving too fast.

Machines are no longer just answering questions; they’re taking action. It’s up to security leaders to make sure they’re heading in the right direction.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.




