Before You Deploy Agentic AI, Read This: Why Zero Trust Must Come First

Trevor Dearing, Director of Critical Infrastructure Solutions

Agentic AI is the new darling of the AI world. I’m hearing it everywhere. But like most shiny new things in cybersecurity, it’s more complicated than it looks and more dangerous if left unchecked.

In simple terms, agentic AI refers to autonomous software agents that make decisions and act on your behalf. Think ServiceNow agents handling IT tasks, or Waymo cars navigating traffic in real time.  

These aren’t just apps running commands. They’re interpreting environments, learning, and acting independently.

That’s powerful and exciting, but if compromised, it’s also dangerous.

Smart security leaders should explore how agentic AI can enhance operations while building Zero Trust protections to contain its risks from day one.

Why agentic AI should be on every CISO’s radar

These agents behave more like humans than machines, and that creates new security challenges.  

Do we treat them like users with credentials, access rights, and audit trails? Do we teach them ethics? And how many of them will we end up managing?

In a typical enterprise, you might have 5,000 employees. But with agentic AI, you could easily have tens or hundreds of thousands of autonomous agents running across your environment. That’s an explosion of risk.
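To picture what treating agents like users could mean in practice, here is a minimal sketch in Python, with illustrative names and no particular vendor's API assumed: each agent gets its own identity, a least-privilege scope, short-lived credentials, and an audit trail of every decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str                      # each agent gets its own identity, like a user account
    allowed_actions: set               # least-privilege scope for this agent
    expires_at: datetime               # credentials are short-lived by default
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Check scope and expiry, then record the decision in the audit trail."""
        now = datetime.now(timezone.utc)
        allowed = action in self.allowed_actions and now < self.expires_at
        verdict = "ALLOW" if allowed else "DENY"
        self.audit_log.append(f"{now.isoformat()} {self.agent_id} {action} {verdict}")
        return allowed

# Example: a ticket-triage agent may read and update tickets, nothing else.
triage_agent = AgentIdentity(
    agent_id="agent-ticket-triage-042",
    allowed_actions={"tickets:read", "tickets:update"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(triage_agent.authorize("tickets:read"))     # True
print(triage_agent.authorize("firewall:modify"))  # False, and logged for audit
```

Multiply that pattern by tens of thousands of agents and the need for automated identity, scoping, and audit becomes obvious.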

Worse yet, these agents are often built on smaller, specialized models called small language models (SLMs), which are easier to manipulate. A security gap in an SLM can spread instantly across every agent built on it, whereas a comparable flaw in a large language model (LLM) would still be critical but more isolated.

This makes agentic AI a high-value target. Right now, too many organizations are racing to adopt it without thinking through the security implications.

Let’s not reverse-engineer security (again)

We’ve seen it before. A new tech gets rolled out, security is bolted on after the fact, and we spend years trying to clean up the mess. The internet is a great example.

Agentic AI is too powerful, and too risky, for that kind of approach. CISOs can't afford to retrofit security into their AI strategy after the fact. We need to embed it from the beginning.

That means recognizing that agentic AI isn’t just another IT asset. It’s a new kind of entity. It learns, adapts, and interacts with other agents. It can be co-opted by attackers just as easily as it can help your help desk.

Zero Trust is the only sensible path forward

If we want to secure agentic AI, we need to stop treating it like magic and start treating it like what it is: code running on infrastructure.

That’s where a Zero Trust strategy comes in.

Zero Trust gives us the framework to contain and control AI agents without making assumptions about their behavior. It lets us verify every agent's identity before it acts, grant it only the least privilege it needs, segment what it can reach, and continuously monitor and log what it does.

In short, Zero Trust gives you a way to build secure, scalable agentic AI infrastructure before the risks become unmanageable.
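As a rough illustration of that mindset (my own sketch, not Illumio's or any specific product's policy engine), a default-deny check can sit in front of every agent request: the identity must be verified and the agent-to-resource flow must be explicitly allowed, or the request is blocked.

```python
# A default-deny policy check: no agent-to-resource flow is trusted unless the
# agent's identity is verified and the flow has been explicitly allowed.
ALLOWED_FLOWS = {
    ("agent-ticket-triage-042", "ticketing-api"),
    ("agent-patch-bot-007", "patch-repository"),
}

def enforce(agent_id: str, resource: str, identity_verified: bool) -> bool:
    """Block everything by default; allow only verified, explicitly permitted flows."""
    if not identity_verified:
        return False
    return (agent_id, resource) in ALLOWED_FLOWS

# A compromised agent reaching for a resource outside its segment is simply denied.
print(enforce("agent-ticket-triage-042", "ticketing-api", True))   # True
print(enforce("agent-ticket-triage-042", "hr-database", True))     # False
```

The point is not the specific code; it's that containment is enforced by policy, not by trusting the agent to behave.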

Zero Trust: the safety net for AI’s new frontier

Zero Trust creator John Kindervag likes to say, “If you can’t quantify it, it’s not a risk — it’s a danger.”

That’s where we are with agentic AI. We don’t fully understand the scope of the threat yet.

As AI agents become more embedded in everything from IT operations to critical infrastructure, we can’t just hope for the best. We need to assume these agents will be targeted and build our Zero Trust architecture accordingly.

Agentic AI is here. It’s powerful and helpful, but it’s vulnerable. Zero Trust is how we make sure it stays on our side.

George Billman

Strategic Sales & Business Development Executive currently focusing on Data Security Governance, Compliance & Privacy


Within the broad realm of Zero Trust, it is Zero Trust Data Access (ZTDA) that becomes paramount: protection of access to training data sets so that they stay pristine, protection of inferred LLM model weights so that they are not stolen or tampered with, and protection of access by agents to sensitive data (both by access control and by de-identification), e.g. no agent should be able to access PII or PHI without fine-grained access control.
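To make that concrete, one minimal, hypothetical sketch of such fine-grained control (illustrative field and entitlement names only): an agent without an explicit PII entitlement only ever receives a de-identified view of a record.

```python
SENSITIVE_FIELDS = {"name", "ssn", "date_of_birth"}

def fetch_record(record: dict, entitlements: set) -> dict:
    """Return the full record only to agents explicitly entitled to PII;
    all other agents get a de-identified view with sensitive fields masked."""
    if "pii:read" in entitlements:
        return record
    return {k: ("REDACTED" if k in SENSITIVE_FIELDS else v) for k, v in record.items()}

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis_code": "E11.9"}
print(fetch_record(patient, {"records:read"}))               # de-identified view
print(fetch_record(patient, {"records:read", "pii:read"}))   # full view
```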

Mauricio Ortiz, CISA

Great dad | Inspired Risk Management and Security | Cybersecurity | AI Governance & Security | Data Science & Analytics | My posts and comments are my personal views and perspectives, not those of my employer


Insightful post, Illumio. The example of the Replit agent shows that organizations still cannot rely on these autonomous agents, as they do not have enough guardrails and security measures to prevent terrible outcomes. Not all bad outcomes from using AI agents can be prevented with Zero Trust, but it definitely can help.
