Supervising the Future: Cybersecurity Roles Are Becoming Agentic by Design


This isn’t an alarmist narrative. This is pattern recognition. It’s about reading the signals and tracking the early tremors of change in how cybersecurity roles inside classified information systems managed by defense contractors are evolving. What I see coming isn’t the erasure of ISSMs, ISSOs, ISSEs, or Secure Area IT professionals. It’s their expansion. We’re entering a phase where human professionals won’t just manage tasks; they’ll manage agents.

I call it vertical control with horizontal execution.

And it’s more than a clever phrase. It’s a framework that’s quietly emerging in classified enclaves as AI moves from copilots and chatbots into structured, context-aware task agents that adhere to security boundaries, classification rules, and mission constraints.

For those of us working on systems that handle Controlled Unclassified Information (CUI), Special Access Programs (SAP), or Secret and Top Secret data, this evolution has real implications. It will change how we govern risk, validate controls, track policy drift, and secure infrastructure. It won't happen overnight, but it will happen. And those who prepare for it now will gain a distinct advantage in mission assurance, audit readiness, and operational agility.

It also aligns, sometimes urgently, with emerging governance frameworks and government policy:

  • NIST AI Risk Management Framework (AI RMF) — with core functions of Map, Measure, Manage, and Govern, perfectly suited to define trust boundaries for Supervisor Agents.

  • NIST AI 100 Series, the companion publications that extend the AI RMF with more detailed technical guidance.

  • OWASP Top 10 for LLMs, offering guidance on vulnerabilities relevant to AI agents such as prompt injection, training data leakage, and insecure output handling.

  • OWASP AI Exchange + Testing Guide, actionable practices for evaluating agent reliability, behavior, and attack surface in secure environments.

  • NIST SP 800-53 Rev. 5, JSIG, DAAPM, and ICD 503, which continue to serve as mandatory control baselines for classified information systems.

  • Executive Orders, including the now-revoked EO 14110 and the more recent 40-page AI governance directive, which signal where federal guidance is heading.

  • 32 CFR and 48 CFR (including DFARS), governing national security systems, contractor performance, and safeguarding classified information.

The implications? We need agent-based systems to be:

  • Explainable

  • Testable

  • Policy-constrained

  • Bounded by mission-classified access control

We’re not just automating work; we’re enforcing policy through delegated cognition.
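To make "policy-constrained" concrete, here is a minimal sketch of an agent action gate. It is an illustration, not a real framework API: `AgentPolicy`, `run_task`, and the action names are assumptions. The idea is simply that every agent action passes through a check that enforces both an action whitelist and classification dominance before anything executes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: AgentPolicy, run_task, and the action names are
# illustrative assumptions, not an existing agent framework's API.

@dataclass
class AgentPolicy:
    clearance: str                       # highest classification the agent may touch
    allowed_actions: set = field(default_factory=set)

# Ordering used for the dominance check (low to high)
LEVELS = ["CUI", "SECRET", "TOP SECRET"]

def permitted(policy: AgentPolicy, action: str, data_level: str) -> bool:
    """Allow an action only if it is whitelisted AND the data's
    classification does not exceed the agent's clearance."""
    return (action in policy.allowed_actions
            and LEVELS.index(data_level) <= LEVELS.index(policy.clearance))

def run_task(policy: AgentPolicy, action: str, data_level: str) -> str:
    if not permitted(policy, action, data_level):
        return f"DENIED: {action} on {data_level} data"   # auditable refusal
    return f"EXECUTED: {action} on {data_level} data"

conmon_agent = AgentPolicy(clearance="SECRET",
                           allowed_actions={"collect_logs", "scan_patch_status"})
print(run_task(conmon_agent, "collect_logs", "CUI"))         # EXECUTED
print(run_task(conmon_agent, "collect_logs", "TOP SECRET"))  # DENIED
print(run_task(conmon_agent, "exfiltrate", "CUI"))           # DENIED
```

Real deployments would hang this gate on the environment's actual access-control model (e.g., SP 800-53 AC-family controls), but the shape, deny by default and log every refusal, is the point.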


Mapping Agent Roles to Frameworks and Guidance

To illustrate the regulatory alignment of these roles, here's a visual mapping of common subagents to their respective compliance and governance references:

[Table: Agent Role | Mapped Governance/Compliance Alignment]

This mapping highlights how each agent’s tasking is not only functional but also fully embedded within recognized cybersecurity policy, federal regulations, and AI risk guidance.


What I Mean by Vertical Control

In the traditional security construct, ISSMs, ISSOs, ISSEs, and Secure Area IT staff operate in tightly defined lanes inside closed environments. Each has domain authority. But under an agentic architecture, each role becomes a supervisor of a corresponding Supervisor Agent, a policy-bound, access-scoped digital proxy capable of orchestrating tasks, issuing signals, and assigning workflows to subagents that understand classification boundaries and network controls from inception.

Here’s how I’m framing the model:

ISSM → Governance Overseer

ISSO → Operational Security Enabler

ISSE → System Integrity Architect

Secure Area IT → Infrastructure Compliance Anchor

Each Supervisor Agent executes based on the guidance of its human lead. That’s the vertical control structure: human → supervisor agent → task agents. And within that framework, the diversity of subagent roles is nearly limitless. Teams can develop specialized agents for highly nuanced, SME-driven functions like forensic analysis agents, insider threat signal detection agents, incident response agents, or system misconfiguration auditors. These are not theoretical. They’re practical extensions of existing classified workflows, giving domain experts the ability to embed their expertise into agent behavior and scale it across hardened systems.

What makes this model even more powerful is that these subagents are context-aware by default. Unlike a new SME hire who must learn your secure environment, classification rules, and operational nuances before becoming effective, subagents are instantiated with embedded knowledge of the exact parameters, guardrails, and policies that govern their tasks. They operate within your cross-domain and compartmentalization boundaries from the moment they’re deployed, accelerating insight and execution without increasing risk.
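The vertical chain above, human lead → Supervisor Agent → task subagents, can be sketched in a few lines. This is a hypothetical illustration under assumed names (`SupervisorAgent`, `Subagent`, `delegate`); the key property it shows is that a Supervisor Agent can only dispatch subagents its human lead has explicitly registered, so delegation never exceeds the role's scope.

```python
# Hypothetical sketch of vertical control: human lead -> Supervisor Agent
# -> task subagents. Class and method names are illustrative assumptions.

class Subagent:
    def __init__(self, name: str, task: str):
        self.name, self.task = name, task

    def execute(self) -> str:
        return f"{self.name}: {self.task} complete"

class SupervisorAgent:
    """Policy-bound proxy: dispatches only subagents its human lead registered."""
    def __init__(self, role: str):
        self.role = role
        self.subagents = {}

    def register(self, sub: Subagent) -> None:
        # The human lead explicitly scopes the roster; nothing else is reachable.
        self.subagents[sub.name] = sub

    def delegate(self, name: str) -> str:
        if name not in self.subagents:
            raise PermissionError(f"{self.role} has no subagent '{name}'")
        return self.subagents[name].execute()

issm = SupervisorAgent("ISSM")
issm.register(Subagent("conmon", "collect readiness indicators"))
print(issm.delegate("conmon"))
# Delegating an unregistered subagent raises PermissionError by design.
```

The design choice worth noting is the registry: the human stays the authority over *which* subagents exist, while the Supervisor Agent handles *when* they run.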


Why Horizontal Execution Matters

Where things get interesting is when Supervisor Agents attached to different roles and functions begin governed, secure, agent-to-agent collaboration across system boundaries, domains, or classification levels to fulfill a shared operational goal.

Let me give a practical illustration:

During a quarterly ConMon review, the ISSM tasks their Supervisor Agent to collect readiness indicators. The ConMon subagent identifies a system lagging in patch compliance and triggers the ISSE’s Supervisor Agent. That agent coordinates with subagents to validate the risk, issue remediation guidance, and alert Secure Area IT. At the same time, the ISSO’s Supervisor Agent updates audit logs and flags the system for follow-up.

No humans were displaced. But time-to-detection and time-to-response were slashed, because the agents executed horizontally, collaborating laterally across classified functions under strict policy constraints.

This isn’t science fiction. It’s the near-term trajectory of cybersecurity augmentation inside secure programs. Companies like IBM, Microsoft, and Palantir are already investing in agentic frameworks. According to Forrester’s 2025 Cybersecurity Futures report, “AI agents embedded in enterprise security workflows will become a strategic differentiator by 2026.”


A Phased, Pragmatic Path Forward

We’re not flipping a switch. We’re orchestrating a transition. I see this playing out in three practical phases:

Phase 1: Task-Level Augmentation

  • Execution: Agents assist with data preparation, control monitoring, and log analysis. They remain in passive mode, surfacing trends but not acting on them.

  • Challenges: Trust in the output, limited integration with legacy classified systems.

  • Culture Shift: Teams must stop viewing automation as a threat and start treating agents as productivity tools.

  • Leadership Buy-In: Requires strategic investment in agent infrastructure and a clear understanding of measurable time savings.

Phase 2: Supervisor Delegation

  • Execution: Human leads assign tasks to Supervisor Agents that orchestrate subagents across their security role.

  • Challenges: Governance complexity, access control granularity, and explainability expectations increase.

  • Culture Shift: Role redefinition begins; professionals move from task executors to strategic decision-makers.

  • Leadership Buy-In: Must address compliance concerns and demonstrate control effectiveness within classified enclaves.

  • Risk vs Reward: The reward is speed, consistency, and documentation. The risk is over-reliance before proper validation.

Phase 3: Collaborative Orchestration

  • Execution: Supervisor Agents initiate workflows across security roles, triggering multi-agent collaboration based on mission-driven objectives.

  • Challenges: System interoperability, cross-domain data integrity, and policy harmonization.

  • Culture Shift: Requires high-trust environments and clarity on escalation paths. Teams must embrace fluid digital workflows.

  • Leadership Buy-In: Executive leadership must support a multi-domain AI strategy aligned with classification boundaries and enterprise policy.

  • Risk vs Reward: The reward is rapid mitigation and enterprise-level risk insights. The risk is fragmentation if agent behavior isn’t governed tightly.


What Security Teams Should Start Doing Now

  1. Map Your Repetitive Tasks: Identify what could be offloaded to agents without compromising classification rules or risk posture.

  2. Align Agents to Security Profiles: Treat them like cleared digital employees, scoped by need-to-know, least privilege, and bounded behavior.

  3. Simulate Agent-Driven Workflows: Run tabletop exercises. If your ISSO had a Supervisor Agent inside a SCIF, what could it accomplish by end of day?

  4. Build a Digital Trust Framework: Define explainability, override controls, audit logs, and kill switches — especially in secure enclaves.

  5. Engage AI Governance Now: Don’t wait for policy to catch up. As one Gartner analyst said last month, "Policy will lag the agents it tries to control."
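Item 4 above names the concrete controls a digital trust framework needs: audit logs, override controls, and kill switches. As a hedged sketch of what that could look like, here is an agent wrapper with an append-only audit trail and an operator kill switch. `GovernedAgent` and its methods are assumptions for illustration, not a real product API.

```python
# Illustrative sketch of a digital trust wrapper: append-only audit log
# plus an operator kill switch. GovernedAgent is a hypothetical name.

import datetime

class GovernedAgent:
    def __init__(self, name: str):
        self.name = name
        self.audit_log = []    # append-only trail of every decision
        self.killed = False

    def kill(self, operator: str) -> None:
        """Human override: halts all future actions, and logs who pulled it."""
        self.killed = True
        self._log(f"KILL by {operator}")

    def act(self, action: str):
        if self.killed:
            self._log(f"BLOCKED: {action}")   # refusals are logged too
            return None
        self._log(f"ACT: {action}")
        return f"{self.name} performed {action}"

    def _log(self, entry: str) -> None:
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {entry}")

agent = GovernedAgent("conmon-agent")
agent.act("collect_logs")
agent.kill(operator="ISSO")
assert agent.act("collect_logs") is None   # kill switch halts execution
print("\n".join(agent.audit_log))
```

The detail that matters for secure enclaves: the log records denials and overrides, not just successes, so the trail supports audit readiness rather than just telemetry.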


Why This Isn't a Threat: It's an Opportunity

This shift doesn’t flatten the classified security workforce; it sharpens it.

It won’t replace the ISSM. It will let them see patterns across dozens of secure systems with near-zero latency. It won’t make the ISSO obsolete. It will give them faster visibility, better forensics, and task coverage they can’t achieve alone.

And let's not kid ourselves: our adversaries are already building their agentic models. If we don't shape ours, we'll be reacting to theirs inside our own cleared environments.


Final Thought: Don’t Wait for Permission to See the Future

As cybersecurity leaders inside the DIB, we have a responsibility to not just react, but to forecast. If you sense the ground shifting beneath our profession, trust that instinct. Validate it with research. Pressure-test it with use cases. And most of all, prepare your classified teams for what’s coming.

Put yourself in the path of this opportunity. Because if we lead this right, agentic collaboration won’t diminish our roles, it’ll magnify them.


Disclaimer: The opinions and content creation expressed in this article are my own and do not reflect those of my employer. This content is intended for informational purposes only and is based on publicly available information.

Dave Balroop

CEO of TechUnity, Inc., Artificial Intelligence, Machine Learning, Deep Learning, Data Science


Cybersecurity is shifting from reactive controls to proactive delegation. Vertical control with horizontal execution isn’t a theory—it’s a necessity.

Dr. Mark Stanley, DM, CISSP/ISSEP, CCSP, PMP

Information Technology and Cybersecurity Leadership and Engineering


💡 Great insight

Alexander Hubert, CISSP, CMP, MBA/ITM

Regional Mission Director - Authorizing Official (AO) Field Operations, NISP Cybersecurity, Eastern Region, Defense Counterintelligence and Security Agency (DCSA)


Nice work! Also consider DFARS restraints. AI cannot cross program boundaries unless the impacted programs agree in writing and need-to-know (NTK) is established. Think about the “mess” when AI crosses program boundaries and the output divulges information to someone without the NTK or worse, aggregated to another classification level. Governance is key here and getting industry to constrain AI will be even more of a challenge.
