When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore?

Let TAPE3 read this edition of the newsletter to you 🎧 🤖 ⇩


At Black Hat USA 2025, it became clear that artificial intelligence (AI) is no longer a novelty in security — it’s the baseline. Out of more than 60 vendor announcements in the past month, the overwhelming majority led with “AI-powered,” “agentic AI,” or “autonomous SOC.”

The novelty has worn off. We’ve reached the lowest common denominator stage, where the mere presence of AI is assumed. It’s the table stakes of our industry narrative, not the differentiator. And just like in my earlier musings on virtual reality, when the extraordinary becomes a commodity, the real value shifts somewhere else — to the premium layer that’s harder to fake and harder to deliver.

The question is: when the baseline itself is built on opaque models, blurred decision pathways, and marketing noise, will we even know what reality is? And if we can’t clearly define it, how can we possibly measure success?

Model Poisoning — In Code and in Conversation

In the Black Hat USA 2025 Lock Note session (with Jeff Moss, Founder, Black Hat and DEF CON; Daniel Cuthbert, Global Head of Security Research, Santander; Heather Adkins, Security Engineering, Google; Aanchal Gupta, Chief Security Officer, Adobe; and Jason Haddix, CEO, Hacker & Trainer, Arcanum Information Security), the panel talked, among other things, about poisoning the model in the technical sense: introducing bad data into AI training pipelines. But there’s a parallel here in our industry discourse.

We can poison our collective model of reality when we focus on the wrong problems with the wrong solutions. When we implement AI for scale and speed without asking whether we’re solving meaningful challenges — or just ticking the “AI inside” checkbox — we set the floor too low.

At that point, the lowest common denominator becomes the industry’s operational standard. And attackers, regulators, and customers will all judge us by that bar.

When Automation Eats the Human Layer

One of the week’s unspoken tensions is whether the drive for scale and speed will ultimately erase the human role. Autonomous SOC agents, AI copilots adjusting zero-trust policies on the fly, detection pipelines without human review — these are pitched as efficiency wins.

But if AI takes over decision-making without oversight, we risk not only operational collapse when the model fails, but legal consequences for negligence. You can’t claim plausible deniability when you’ve intentionally let the system run without guardrails.
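To make "guardrails" concrete, here is a minimal, hypothetical sketch of what one can look like in practice: a policy gate that refuses to execute high-impact actions without a human sign-off. The action names, risk scores, and threshold below are illustrative assumptions, not taken from any specific product or framework.

```python
# Illustrative only: a hypothetical approval gate for agent-initiated actions.
# Action names, risk scores, and the threshold are assumptions, not from any real product.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str          # e.g., "isolate_host"
    target: str        # e.g., a user or host identifier
    risk_score: float  # model-estimated impact, 0.0 to 1.0

HIGH_RISK_THRESHOLD = 0.7  # above this, the system must wait for a human decision

def execute_with_guardrail(action: ProposedAction, human_approved: bool) -> str:
    """Run low-risk actions automatically; hold high-risk ones for review."""
    if action.risk_score >= HIGH_RISK_THRESHOLD and not human_approved:
        return f"HELD for human review: {action.name} on {action.target}"
    return f"EXECUTED: {action.name} on {action.target}"

# Example: the agent wants to isolate a production database host on its own.
print(execute_with_guardrail(
    ProposedAction("isolate_host", "prod-db-01", risk_score=0.92),
    human_approved=False,
))
```

The point isn’t the code; it’s that someone has to decide, explicitly and in advance, which calls the machine is allowed to make alone.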

As Jennifer Granick reminded us (and as she discussed more directly with Marco Ciappelli on an ITSPmagazine On Location podcast recorded ahead of her keynote), we also face a transparency problem. If we can’t look back and see how a system arrived at a conclusion — if it’s a black box even to its owner — then neither security teams nor courts can reliably defend its actions.

The Game of Tetris

Mikko Hypponen's opening keynote gave us the perfect metaphor: security is like Tetris. Successes disappear quietly. Failures stack up for everyone to see.

Bring in the ever-present AI tech stack and the game becomes even more dangerous. A human analyst might make an error that affects a single case. An autonomous system, once poisoned or miscalibrated, can make thousands of bad calls at machine speed. The stack builds fast — and if you don’t have the ability to trace, explain, and reverse those calls, you’re playing a potentially unwinnable game with serious consequences.

Not just for the organization, but perhaps for the executives — including the CISOs — who made the decision to cut entry-level roles and functions in favor of the cheaper alternative, AI. As Granick underscored, without transparency and governance, you may not just lose control of your security reality; you could also face legal and regulatory scrutiny for letting the black box take over.

Reality: Can It Be Redefined and Successfully Measured?

Here’s the uncomfortable truth: in an endlessly AI-saturated market, “reality” is whatever the model says it is — unless we make a deliberate effort to challenge that.

Defining success means:

  • Knowing exactly what problem we’re trying to solve, without letting the model itself define what success is or should look like (it could be leading us astray)
  • Setting measurable, verifiable outcomes
  • Building systems (human and machine) that can explain their reasoning
  • Maintaining the ability to audit the past so we can learn from it (a rough sketch of what such a record might look like follows this list)
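What might that auditable record look like? Here’s a minimal, hypothetical sketch of the kind of decision log that makes looking back possible. The field names are assumptions for illustration, not a standard or any vendor’s schema.

```python
# Hypothetical sketch of an auditable AI decision record.
# Field names are illustrative assumptions, not a standard schema.
import json
from datetime import datetime, timezone

def record_decision(model_id, inputs, output, rationale, reviewer=None):
    """Capture enough context to reconstruct how a conclusion was reached."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,        # which model and version made the call
        "inputs": inputs,            # what the model actually saw
        "output": output,            # what it decided
        "rationale": rationale,      # whatever explanation the system can give
        "human_reviewer": reviewer,  # who, if anyone, signed off
    }
    # Append-only log, so the past stays auditable.
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_decision(
    model_id="triage-model-v3",
    inputs={"alert_id": "A-1042", "severity": "high"},
    output="auto_closed",
    rationale="Matched known benign pattern",
    reviewer=None,  # None here is exactly the gap an auditor would flag
)
```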

Measuring success means:

  • Capturing metrics that reflect business and mission outcomes, not just technical performance ... and not just toeing the line drawn by the AI model(s)
  • Testing the model’s accuracy and its governance under real-world conditions
  • Comparing results against human judgment, not in place of it ... verifying that we aren’t being misguided by the model along the way (a simple sketch of that comparison follows below)
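One simple way to keep the model honest is to routinely score its calls against human analyst judgment on the same cases. The sketch below is purely illustrative; the verdicts and the 80% agreement bar are made-up assumptions, not a recommended benchmark.

```python
# Hypothetical comparison of model verdicts against human analyst judgment.
# The sample data and the 80% agreement bar are illustrative assumptions.

model_verdicts = {"case-1": "malicious", "case-2": "benign", "case-3": "benign"}
human_verdicts = {"case-1": "malicious", "case-2": "malicious", "case-3": "benign"}

shared_cases = model_verdicts.keys() & human_verdicts.keys()
agreements = sum(1 for c in shared_cases if model_verdicts[c] == human_verdicts[c])
agreement_rate = agreements / len(shared_cases)

print(f"Model/human agreement: {agreement_rate:.0%} across {len(shared_cases)} cases")
if agreement_rate < 0.8:
    print("Disagreement this high warrants a review of the model's recent calls.")
```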

Breaking Out of the Lowest Common Denominator Trap

From the Lock Note to Granick’s call for legal awareness, from the CISO conversations Marco and I had in the hallways to the outlier vendor announcements that weren’t about AI at all, the message is clear:

  • Transparency is non-negotiable. We must be able to see how we got to a decision — not just the output.
  • Governance beats gimmicks. Agentic AI without guardrails isn’t innovation; it’s liability at scale.
  • Humans remain essential. Speed and scale mean nothing if we strip away adaptability, creativity, and ethical judgment.
  • Value is proven, not proclaimed. The vendors who will stand out are those who can tie their technology directly to program goals, provide measurable impact, and acknowledge their own limitations.

The new reality is this: AI has taken over the headlines and the product sheets. That’s no longer impressive in itself. What matters now — and what will command a premium — is the ability to define what success actually looks like, measure it honestly, and prove that the systems we trust to protect us are not just fast and scalable, but accurate, explainable, and worthy of that trust.

If we can’t do that, we won’t just lose track of reality. We’ll stop recognizing it altogether.


📒 Resources

The Future of Cybersecurity Article: How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber at Black Hat 2025: https://www.linkedin.com/pulse/how-novel-novelty-security-leaders-try-cut-through-sean-martin-cissp-xtune/

Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA

Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25

Article: When Virtual Reality Is A Commodity, Will True Reality Come At A Premium? https://sean-martin.medium.com/when-virtual-reality-is-a-commodity-will-true-reality-come-at-a-premium-4a97bccb4d72

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/

ITSPmagazine Webinar: What’s Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year’s Hacker Conference — An ITSPmagazine Thought Leadership Webinar: https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference


💬 Join the Conversation

In a world where AI is becoming the default in security, how do you define success — and how do you know when you’ve achieved it? Can you trust the “reality” your systems present, and can you prove it? 🤔

Drop a comment below or tag us in your posts! 💬

What's your perspective on this story? Want to share it with Sean on a podcast? Let him know!


ⓘ About Sean Martin

Sean Martin is a lifelong musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and the co-host of both the Random and Unscripted Podcast and the On Location Event Coverage Podcast. These shows are all part of ITSPmagazine, which he co-founded with his good friend Marco Ciappelli to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.
