The AI Agent Arms Race: Why Human-AI Collaboration Is Your Best Defense
‘If you know the enemy and know yourself, you need not fear the result of a hundred battles.’
Business leaders are not the only ones to have experienced bootstrapping their way up only to find themselves struggling to scale. Until now, hackers had been limited by the number of targets they could manage and the expertise required to pull off complex attacks. Thankfully, that limitation allowed the cybersecurity industry to keep pace with attackers. Enter AI agents. Not only can they be weaponized by bad actors to reason and make decisions autonomously, but they are about to obliterate the problem of scale.
“The risk of agentic attackers is that it could make ‘big game’ attacks an everyday norm, overwhelming security teams,” says cybersecurity “evangelist” Mark Stockley from Malwarebytes. Stockley reckons the security disruption many expected with the arrival of ChatGPT is now here. While criminals struggled as much as anyone else to get the most out of the new generative AI tools, agentic AI is poised to do the thinking and innovation for them. Here’s what might happen next and what we can do about it.
How We Got Here
AI agents may seem to have come out of nowhere in 2025, but developers were waiting for a breakthrough before they could go mainstream. That came recently when retrieval-augmented generation (RAG) allowed AI agents to learn from more diverse sources and make autonomous decisions with fewer errors. That was effectively the green light for mass adoption, not just for businesses but cybercriminals, too.
AI agents are built on top of generative AI but go beyond just creating content on demand. While generative AI waits for instructions to produce text, images, or code, agents can autonomously decide what to create, when to deploy it, and how to adapt their strategy. As Stockley says, it’s a shift from technology that “knows things” (responds to commands) to technology that “does” things (acts autonomously).
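That shift from responding to acting can be sketched in a few lines of Python. Everything below is a conceptual stand-in (the tool names, goal strings, and "success" check are invented for illustration); the point is the control flow: a generative model produces one response when asked, while an agent loops, choosing an action, observing the result, and deciding what to do next until its goal is met.

```python
# Conceptual sketch: "knows things" vs. "does things".
# All function names, goals, and tools here are illustrative stand-ins.

def generative_model(prompt: str) -> str:
    """One-shot: produces output only when asked, then stops."""
    return f"response to: {prompt}"

def agent(goal: str, tools: dict, max_steps: int = 5) -> list:
    """Agentic loop: repeatedly picks its own next action until done."""
    history = []
    for _ in range(max_steps):
        # Decide what to do next (stand-in for a real planning step)
        action = min(tools)
        observation = tools[action](goal)
        history.append((action, observation))
        if "done" in observation:  # stand-in for a real success check
            break
    return history

# The generative model acts once; the agent acts until it succeeds.
print(generative_model("summarize the network logs"))
print(agent("audit logins", {"scan": lambda g: f"scanned for {g}: done"}))
```

The difference is not intelligence but autonomy: the second function keeps running, and keeps deciding, without further human input.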
This is important because both sides were previously constrained by a skilled labor shortage. Cybersecurity still needs human oversight, but criminals have now been largely freed from that requirement.
The Anatomy of an AI Agent Attack
Ransomware victims paid more than $1 billion to gangs after attacks in 2023, with organizations in the U.S. accounting for 47% of victims globally. To get a sense of the stakes, the DarkSide gang’s ransomware attack on the Colonial Pipeline in 2021 disrupted fuel supplies across the eastern U.S. for days, causing panic buying and price spikes.
But in this new era, the hands-on role of the hacker is greatly diminished. Here’s what an agentic AI attack might look like:
A ransomware gang deploys an AI agent to target a midsize energy company.
The agent works 24/7 scanning LinkedIn to identify staff, then crafting personalized phishing emails.
It adjusts tactics in real time, constantly learning what works and what does not.
When an employee clicks a malicious link, the agent maps the network, identifies critical systems, and deploys encryption software.
It could even customize and draft the ransom demand based on the company’s financial data.
This same autonomous approach could transform other attack types: data theft, supply chain infiltrations, credential harvesting, distributed denial-of-service attacks, and corporate espionage. Human attackers set the objective and leave AI agents to execute every step without further guidance.
The Security Backdoor
Of course, AI agents can also be used to target individuals through social media phishing campaigns and using stolen data sold on the dark web. As concerning as this is alone, businesses need to be aware that compromised employees can become backdoors into otherwise secure networks.
With 59% of employers allowing staff to access company applications from unmanaged personal devices, the boundary between personal and corporate security has weakened. An AI agent could monitor an employee’s personal accounts, gather intelligence on their habits, then gain access to corporate systems through their remote connections.
Previously, hackers tended to target only high-value employees (such as executives or IT administrators) and were selective about how many resources they spent hunting for password reuse across employees’ social media and work accounts. What was once a manageable risk from personal devices becomes a much larger attack surface once autonomous AI agents are deployed without constraints.
Fighting Back
One of the most famous lines in Sun Tzu’s Art of War is: “If you know the enemy and know yourself, you need not fear the result of a hundred battles.” Part of the problem is that autonomous attacks may make it harder to identify the human actors behind them. But there are two ways this maxim still applies to cybersecurity:
1) As Malwarebytes suggests, organizations need to implement “security by design” principles that prepare for AI agent capabilities rather than just responding to known attack patterns. Anticipation, in this sense, is the best form of defense.
This means using AI agents as “digital guardians” that continuously monitor networks for suspicious activity at machine speed. Just as malicious agents may find and exploit vulnerabilities we don’t yet know about, defensive agents can match that speed to intercept novel threats.
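A minimal sketch of what machine-speed monitoring could look like, assuming a simple statistical baseline (real deployments would use far richer models and telemetry; the metric, sample values, and threshold here are invented for illustration):

```python
from statistics import mean, stdev

def build_baseline(samples: list) -> tuple:
    """Learn normal behavior (e.g., login attempts per minute) from history."""
    return mean(samples), stdev(samples)

def is_anomalous(value: float, baseline: tuple, z: float = 3.0) -> bool:
    """Flag readings more than z standard deviations above the norm."""
    mu, sigma = baseline
    return value > mu + z * sigma

# Hypothetical telemetry: login attempts per minute during a quiet week
normal_rates = [4, 5, 6, 5, 4, 6, 5, 5]
baseline = build_baseline(normal_rates)

print(is_anomalous(5, baseline))   # typical traffic -> False
print(is_anomalous(60, baseline))  # possible automated probing -> True
```

The value of running this continuously, rather than on a human schedule, is that a malicious agent probing at 3 a.m. gets flagged in the same second it acts.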
2) Businesses must turn the weakness of relying on human oversight into a strength by implementing human-in-the-loop protocols. Put simply, while attackers must succeed at every step of their attack chain, defenders need only break one link, and strategic human oversight can focus on the most critical vulnerabilities. Humans can also interpret the “why” behind attack patterns and make preemptive changes to security architecture while malicious agents, limited in their understanding of context, vainly persist in trying to unlock an already secured door.
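One way such a protocol might be structured, assuming a hypothetical alert format and severity scale: routine findings are remediated automatically at machine speed, while anything touching critical systems waits in a queue for a human decision before any action is taken.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    severity: int  # 1 (low) .. 5 (critical) -- illustrative scale

@dataclass
class ResponsePipeline:
    """Auto-remediate low-risk alerts; escalate the rest to a human."""
    human_queue: list = field(default_factory=list)
    auto_handled: list = field(default_factory=list)
    threshold: int = 4  # severity at which a human must decide

    def triage(self, alert: Alert) -> str:
        if alert.severity >= self.threshold:
            self.human_queue.append(alert)  # hold for analyst approval
            return "escalated"
        self.auto_handled.append(alert)     # e.g., block IP, quarantine file
        return "auto-remediated"

pipeline = ResponsePipeline()
print(pipeline.triage(Alert("phishing-filter", severity=2)))    # auto-remediated
print(pipeline.triage(Alert("domain-controller", severity=5)))  # escalated
```

The design choice is the threshold: it decides where machine speed ends and human judgment begins, and it can be tuned as trust in the automated layer grows.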
Outflanking Malicious Agents
One final suggestion: Companies need to evaluate every potential entry point for hackers through the lens of AI agent capabilities. To address employee vulnerabilities, they should implement more robust authentication beyond passwords, strictly segment networks, and/or provide company-managed secure devices for remote work.
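“Authentication beyond passwords” most commonly means time-based one-time passwords (TOTP, per RFC 6238). A compact sketch of the underlying mechanism, using only the Python standard library (the demo secret is the shared test key from the RFC’s test vectors, not a production value):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    return hotp(secret, int(time.time()) // step)

# A login check now requires something an attacker's agent cannot
# scrape from social media: a code from a device the employee holds.
secret = b"12345678901234567890"  # well-known demo key from the RFC test vectors
print(totp(secret))
```

Even if an AI agent harvests a reused password, the code above changes every 30 seconds and never transits the employee’s social accounts, which is precisely what shrinks the personal-device attack surface.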
As much as the attack surface has grown, AI agents working for the defense, allied with human insight and strategic vision, can outflank the opposition. Organizations that preempt the shift to agentic AI and adapt now will maintain their defensive edge.