Security Considerations in the Advancing Tide of Agentic AI (3 min read)
Cybersecurity teams are increasingly embracing the new agentic world as AI-driven tools promise to reduce workload and improve response times.
However, these tools also introduce risk: any misfire could expose systems to serious threats. Unlike traditional chatbots, which simply respond to prompts, agentic AI goes further, taking approved actions based on its own findings. Adopting AI in security takes time and education, and building confidence in these tools is critical. The challenge is that if an AI-enabled security tool makes an error, it hands cybercriminals and spies an opening to exploit.
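To make that distinction concrete, here is a minimal, purely hypothetical Python sketch of the kind of approval gate the "approved actions" model implies. The ProposedAction type, the risk labels, and the approval policy are my own illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: an agent proposes an action, but a human (or policy)
# must approve it before anything touches production systems.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "Isolate host WS-042 from the network"
    risk_level: str    # "low", "medium", "high" -- assumed labels

def requires_human_approval(action: ProposedAction) -> bool:
    # Assumed policy: anything above low risk waits for an analyst.
    return action.risk_level != "low"

def execute(action: ProposedAction) -> None:
    # Placeholder for the real response playbook (quarantine, block, etc.).
    print(f"Executing: {action.description}")

def handle(action: ProposedAction, analyst_approved: bool) -> None:
    if requires_human_approval(action) and not analyst_approved:
        print(f"Held for review: {action.description}")
        return
    execute(action)

handle(ProposedAction("Isolate host WS-042 from the network", "high"),
       analyst_approved=False)  # held for review
handle(ProposedAction("Tag alert as benign test traffic", "low"),
       analyst_approved=False)  # low risk, executes automatically
```

The point of the sketch is simply that "agentic" does not have to mean "unsupervised": the riskier the action, the more the workflow can lean on a human sign-off.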
Microsoft recently announced plans to preview 11 new AI agents in Security Copilot next month. CrowdStrike integrated agentic AI into its security tools last month, while Trend Micro introduced autonomous agents and its AI-powered security system to customers last year.
This shift marks a significant departure from just two years ago, when many corporations banned employees from using ChatGPT due to concerns about data leaks. Now, the landscape has changed, and cybersecurity is emerging as one of the strongest use cases for generative AI, particularly in an industry plagued by worker shortages and high burnout rates. A survey conducted last summer found that over 70% of CISOs identified their organisations as "innovators," "early adopters," or "early majority" adopters of AI technologies, reflecting a growing level of trust.
Additionally, half of the CISOs surveyed reported that their organisations had developed AI use cases or were piloting new AI-driven security projects. For many security teams, agentic AI is valuable for sifting through thousands of daily threat notifications and identifying real risks. Initially, Microsoft customers using Security Copilot focused on straightforward tasks such as summarising incidents, according to Dorothy Li, corporate VP of Microsoft Security Copilot. Over time, as confidence in the tool grew, users started automating larger portions of their workflow, which led Microsoft to introduce autonomous agents. These AI-driven capabilities are now being used to handle phishing alerts and vulnerability notifications across security stacks.
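As an illustration of what that kind of triage automation might look like under the hood, here is a hedged Python sketch. The score_alert heuristic below is a stand-in for whatever model a real product such as Security Copilot actually uses, and every name in it is an assumption of mine rather than a documented interface.

```python
# Hypothetical sketch of agentic triage: score incoming alerts, auto-close
# obvious noise, and escalate the rest to a human analyst queue.
from typing import Iterable

def score_alert(alert: dict) -> float:
    # Assumed stand-in: a real system would call a trained model or LLM here.
    indicators = ("credential", "lateral movement", "exfiltration")
    hits = sum(term in alert["summary"].lower() for term in indicators)
    return hits / len(indicators)

def triage(alerts: Iterable[dict], escalate_threshold: float = 0.3):
    escalated, auto_closed = [], []
    for alert in alerts:
        if score_alert(alert) >= escalate_threshold:
            escalated.append(alert)      # goes to a human analyst
        else:
            auto_closed.append(alert)    # logged and closed as low risk
    return escalated, auto_closed

alerts = [
    {"id": 1, "summary": "Possible credential dumping followed by lateral movement"},
    {"id": 2, "summary": "Routine software update completed"},
]
escalated, closed = triage(alerts)
print("Escalated:", [a["id"] for a in escalated], "Closed:", [a["id"] for a in closed])
```

Even in this toy form, the value proposition is visible: the agent burns through the long tail of routine notifications so that analysts only see the alerts worth their time.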
CrowdStrike has also expanded its AI capabilities, incorporating an agentic feature into its security-focused large language model that automatically triages notifications for security teams. The company estimates that this tool can save more than 40 hours of manual work per week. Before deploying these AI-driven features, CrowdStrike rigorously tests them against its own analysts' findings to ensure accuracy and prevent inappropriate actions. Trust is a crucial factor for security teams in major corporations, and proving the reliability of AI tools through extensive internal testing is a key part of that process. Elia Zaitsev, CrowdStrike’s chief technology officer, noted that generative AI is being adopted at an unprecedented pace, outstripping nearly every other technological advancement in recent memory.
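That style of pre-deployment checking can be pictured with a small, hypothetical sketch: compare the agent's verdicts against analysts' verdicts on the same historical alerts, and only promote the feature if agreement clears a bar. The 95% threshold and the sample data below are illustrative assumptions, not CrowdStrike's published methodology.

```python
# Hypothetical sketch of pre-deployment validation: measure how often the
# agent's triage verdicts match the human analysts' verdicts on past alerts.
def agreement_rate(agent_verdicts: list[str], analyst_verdicts: list[str]) -> float:
    matches = sum(a == b for a, b in zip(agent_verdicts, analyst_verdicts))
    return matches / len(analyst_verdicts)

agent   = ["escalate", "close", "escalate", "close", "escalate"]
analyst = ["escalate", "close", "close",    "close", "escalate"]

rate = agreement_rate(agent, analyst)
print(f"Agreement with analysts: {rate:.0%}")
if rate < 0.95:  # assumed bar; real programmes would also weigh false negatives
    print("Below the bar -- keep the feature in shadow mode, not in production.")
```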
Despite these advancements, skepticism remains among security professionals. Many still require hard, quantifiable metrics to justify the investment in AI-driven security tools and demonstrate a clear return on investment. As organisations continue to integrate AI into their cybersecurity systems, the industry can expect an influx of announcements from cyber vendors showcasing their own agentic capabilities.
I think context will always be critically important when pairing agentic AI with SOC services, or with any further integration of AI into our core systems. Is it safe right now to let algorithms make all of the decisions? I think not, but as I have said in previous posts, we are in the first decade of a fundamental shift in the workplace use of technology that will define generations to come, and you can't hold back the tide!