Impressive! Our AI is Approaching “One 9” of Accuracy.
Large language models can do a lot of things. But they are hardly infallible. We continue to see stories emerge of organizations relying on the output of LLMs without any human review, resulting in consequences ranging from humorous to disastrous. So, how do we build in human-in-the-loop verification when the scale of LLM output can be so daunting?
This week’s episode is hosted by David Spark, producer of CISO Series, and Andy Ellis, principal of Duha. Joining them is their sponsored guest, Kevin Tian, co-founder and CEO of Doppel.
AI fraud gets on the juice
If you thought ad fraud was bad, we're only starting to see the impact of generative AI in the space. Increasingly, content farms are stealing publishers' ads.txt files to mass-produce fake news sites that perfectly mimic legitimate publishers, almost at the push of a button. We're seeing sophisticated deepfake celebrity endorsements, AI-generated articles that mirror authentic content, and entire networks of fake websites created at an unprecedented scale, as pointed out in a Forbes article by Ashish Bhardwaj of Google.
The tools being deployed to combat this fraud are also AI-powered, raising the question of whether this is simply the latest chapter in cybersecurity's eternal cat-and-mouse game or something genuinely different. What makes this iteration unique is that we're dealing with machine intelligence that can operate autonomously without pre-programming. This marks a fundamental change in scale and effort that extends far beyond just security.
Agentic AI demands a new security mindset
"If we don't rethink our role now from blockers to enablers, from rule makers to copilots, we risk becoming the bottleneck in a machine-speed world," warns Rinki Sethi of Upwind Security about the coming wave of agentic AI. This goes beyond the simple question of how to secure your org from employees using a chatbot. It's about preparing for when AI acts on our behalf. Look at all the companies getting in trouble when GenAI publishes autonomously. Now imagine that across your business.
The key is implementing proper frameworks that determine where AI can take action versus where it should only assist with decision-making. Getting it right 99% of the time sounds good. But it still means catastrophic failures when AI systems are taking thousands or millions of actions daily. Safeguards and human-in-the-loop processes are still critical for a successful deployment.
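To make that point concrete, here's a quick back-of-the-envelope sketch. The action counts below are illustrative, not figures from the episode:

```python
def expected_failures(accuracy: float, actions: int) -> int:
    """Expected number of failed actions, given a per-action accuracy rate."""
    return round((1 - accuracy) * actions)

# At "one 9" short of perfect, errors pile up fast at machine speed.
for actions in (1_000, 100_000, 1_000_000):
    print(f"{actions:>9,} actions/day at 99% accuracy -> "
          f"~{expected_failures(0.99, actions):,} failures/day")
```

A million automated actions a day at 99% accuracy is roughly ten thousand mistakes a day, which is why safeguards and human review matter even when the headline accuracy number sounds strong.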
The new frontier for social engineering
It turns out threat actors are really motivated to find security's blind spots. They are increasingly targeting employees through channels that security teams don't monitor: LinkedIn, phone calls, and personal devices. It's tempting to focus heavily on email security and awareness training. But attackers are building comprehensive dossiers across multiple touchpoints. These kinds of persona-based attacks are tough to monitor.
Security teams must connect the dots across channels, correlating suspicious LinkedIn accounts with phishing emails, tracking phone numbers used in social engineering campaigns, and mapping entire networks of fake accounts. Effective defense means not just detecting these multi-channel attacks but actively disrupting them.
We still need human verification
Law firms using AI to draft legal filings keep getting caught with hallucinated case citations that never existed. This isn't a design flaw in LLMs. At their core, they're regurgitators, not researchers, designed to produce output that looks structurally correct based on patterns they've seen before. The problem isn't that attorneys are being lazy by not verifying citations. They're asking AI to solve problems it's fundamentally not designed for.
Successful AI deployment requires frameworks that validate outputs, implement safeguards, and maintain human oversight processes. Efficiency is great, but without this, we're creating efficient ways to be wrong more quickly. Organizations need to understand exactly where AI is at its best. Applying reason and critical thinking is still not within its purview.
Listen to the full episode on our blog, where you can read the entire transcript, or on your favorite podcast app. If you haven’t subscribed to the CISO Series Podcast via your favorite podcast app, please do so now.
Listen to the full episode here.
Thanks to Louis Zhichao Zhang of AIA Australia for providing this week's "What's Worse" scenario.
Thanks to our podcast sponsor, Doppel
Subscribe to CISO Series Podcast
Please subscribe via Apple Podcasts, Spotify, YouTube Music, Amazon Music, Pocket Casts, RSS, or just type "CISO Series Podcast" into your favorite podcast app.
What I love about cybersecurity…
“What I love about cybersecurity is that it seems like all of us vendors are trying to solve some of the same technical problems around false positives versus false negatives, precision versus recall. How do we make sure that we’re showing teams the right information while making sure we’re not missing anything as well? As a software engineer, that’s always the fun part about cybersecurity here.” - Kevin Tian, co-founder and CEO, Doppel
Listen to the full episode of “Impressive! Our AI is Approaching “One 9” of Accuracy.”
Cybersecurity Has a Prioritization Problem
"Most security professionals working in tech and software… they don’t ask for a risk register. They ask, what are the top five things I need to worry about? What keeps you up at night?" - Terry O'Daniel, former CISO at Amplitude
Listen to the full episode of “Cybersecurity Has a Prioritization Problem”
Subscribe to our newsletters on LinkedIn!
CISO Series Newsletter - Twice every week
Cyber Security Headlines Newsletter - Every weekday
Security You Should Know Newsletter - Weekly
Cyber Security Headlines - Week in Review
Make sure you register on YouTube to join the LIVE "Week In Review" this Friday for Cyber Security Headlines with CISO Series reporter Richard Stroffolino. We do it this and every Friday at 3:30 PM ET/12:30 PM PT for a short 20-minute discussion of the week's cyber news. Our guest will be Steve Zalewski, co-host, Defense in Depth. Thanks to our Cyber Security Headlines sponsor, Vanta.
Thanks to our sponsor, Vanta
Join us Friday, August 15, for “Hacking Burnout”
Join us on Friday, August 15, 2025, for Super Cyber Friday: “Hacking Burnout: An hour of critical thinking about how security teams get overwhelmed and how to manage it.”
It all kicks off at 1 PM ET / 10 AM PT, when David Spark will be joined by Jonathan Waldrop, former CISO, The Weather Company, and Terry O'Daniel, former CISO, Amplitude, for an hour of insightful conversation and engaging games. And at 2 PM ET / 11 AM PT, stick around for our always-popular meetup, hosted right inside the event platform.
Thank you for supporting CISO Series and all our programming
We love all kinds of support: listening, watching, contributions, What's Worse?! scenarios, telling your friends, sharing on social media, and most of all, we love our sponsors!
Everything is available at cisoseries.com.
Interested in sponsorship? Contact me, David Spark.