Blog 16 – The AI Blame Game: When Algorithms Go Rogue, Who’s Accountable?

Spoiler: AI won’t face the media. It won’t write the apology. But someone will, and it won’t be the algorithm.

CyberWest 2025: Where the Future of AI Got Uncomfortably Real

CyberWest Summit 2025. Two days of bold ideas, honest reflection, and thought-provoking speakers.

Over two jam-packed days, Perth became the epicentre of conversations that matter: cyber resilience, AI disruption, and the uncomfortable questions that come with handing decision-making power to machines. As MC, I had a front-row seat to bold ideas, sharp debates, and the occasional unscripted moment.

From sunrise keynotes to late-night sidebar strategy sessions, the energy was electric and the message clear: AI isn’t coming, it’s here. And we’re deploying it faster than we’re governing it.

Across the summit, one question kept echoing for me, from the main stage to the side rooms: Are we ready for the systems we’re building?

Nowhere was that challenge clearer than in Dr. Catherine Ball’s powerful opening keynote: “Humans on the loop: biology beats fakery in future systems.” She laid out a future where human-machine symbiosis isn’t theoretical, it’s operational. Where digital twins, AI-powered decision engines, and hybrid human systems are already reshaping work, leadership, and risk.

Dr. Ball’s keynote hit a nerve for me. “Biology beats fakery” wasn’t just a tagline, it was a reminder that no matter how fast AI evolves, human oversight, judgement, and ethics must remain in the loop. Because while algorithms can generate answers at speed, they still can’t generate responsibility.

As LTGEN Michelle McGuinness later put it, “Cybersecurity is everyone’s business.” But in 2025, cybersecurity isn’t just about stopping breaches. It’s about interrogating the decisions made by models behind the scenes, models that decide who gets hired, who gets flagged, and who gets ignored.

Take a moment to ponder: When the algorithm goes rogue, makes a biased call, delivers a flawed diagnosis, or quietly spirals into error, you won’t see it testifying at a Senate inquiry. It won’t write an apology. It won’t take the hit. You will. Because in the AI era, accountability isn’t automated. It’s still very much a human responsibility.

From Assistant to Authority: When AI Stops Asking and Starts Deciding

We used to call AI “assistive tech”, a digital sidekick. A faster spreadsheet. A smarter search. But that label doesn’t cut it anymore.

Today, AI isn’t just supporting human decision-making, it’s replacing it. Entire processes are being outsourced to models that no longer just analyse data, but interpret it, act on it, and shape real-world outcomes.

AI is now:

  • Screening résumés and rejecting candidates before a human even glances at their name
  • Diagnosing patients based on medical imaging, with no bedside manner and limited context
  • Flagging threats in critical infrastructure systems based on pattern recognition, not intent
  • Approving or declining financial transactions, insurance claims, or loan applications in milliseconds

These aren’t edge cases. They’re quietly becoming business as usual.

And the scary part? In many of these systems, no one really knows how or why the model reached its decision. Not the end user. Not the affected person. Sometimes not even the developers.

We’ve become enamoured with speed and scalability, chasing optimisation like it’s the end goal. But the faster we go, the blurrier the accountability becomes.

What happens when the algorithm gets it wrong? When a qualified candidate is filtered out, a diagnosis is missed, or a security system triggers a false positive that locks down critical operations?

You can’t cross-examine a black box. You can’t appeal to a neural network. You can’t ask a machine to reflect.

We’ve engineered efficiency at scale. Now it’s time to engineer responsibility to match.

The Accountability Black Hole

This is where things get murky. Fast.

When a human makes a bad decision, there’s usually a name, a role, a title on the line. But when an AI system makes the wrong call, rejects a loan application unfairly, delivers a flawed medical recommendation, or wrongfully flags an employee for termination, who takes the fall?

  • The data scientist who trained it, but never saw the final implementation?
  • The vendor who built the model, but insists they just “provide the tools”?
  • The executive who signed off on its deployment, trusting the sales deck more than the risk report?
  • The regulator, still playing catch-up with technology that outpaces legislation?

Good luck getting a straight answer. AI doesn’t slot neatly into our legal or ethical frameworks. It isn’t a person. It doesn’t understand context. It can’t be cross-examined. It doesn’t break rules out of malice, it follows logic, often flawed, and sometimes dangerously so.

And here’s the kicker: it does all of this relentlessly, without pause or second thought. When it gets it wrong, it gets it wrong confidently. And it never apologises.

AI doesn’t do crisis comms. It doesn’t “step down effective immediately.” It just keeps executing logic whether flawed, biased, or dangerous.

It doesn’t sit in front of a royal commission or write a LinkedIn post reflecting on lessons learned. That responsibility? That fallout? That reputational firestorm?

That’s all yours.

Until we build AI governance with the same rigour we apply to financial reporting or workplace safety, we’re walking a compliance tightrope, with no safety net and no one clearly holding the rope.

Explainability Isn’t Optional (Anymore)

In my view, this is where things are heading fast, and it’s long overdue.

We can’t keep deploying AI systems that make decisions affecting people’s jobs, finances, health, or safety without being able to explain how they work. If we don’t understand the logic behind an algorithm’s outcome, then we have no business using it in critical settings.

The regulators are starting to agree.

Here in Australia, we’re seeing a sharp shift. The Privacy Act Review is turning the screws on high-impact automated decision-making, with proposed rights for individuals to challenge algorithmic decisions. The Office of the Australian Information Commissioner is already expecting organisations to provide clear, human-readable explanations, not vague technical jargon, when AI is involved.

And if you’re in financial services, critical infrastructure, or government? The APRA CPS 230 and CPS 234 standards are a signal flare. These aren’t niche frameworks; they’re telling boards and executives that operational risk includes machine-made decisions. That algorithms must be as accountable as any other system or process under your remit.

The Security of Critical Infrastructure (SOCI) Act puts a spotlight on operational risk across 11 critical sectors. Under its Risk Management Program rules, organisations are now expected to identify and mitigate all risks, including those arising from AI-based systems, especially where they impact availability, integrity, reliability, or confidentiality.

This isn’t theoretical. We’ve already seen what happens when automation is deployed without explanation, Robodebt being the obvious cautionary tale. Systems were trusted blindly. No one could explain the decisions. And in the end, real people paid the price.

To me, it’s simple: If your AI can’t explain itself, it has no place in a system that governs lives. And if you can’t explain it on its behalf, you’ve got a governance problem, not a technology one.
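For what that can look like in practice, here’s a minimal sketch, assuming a simple linear model and invented loan-style feature names, of a human-readable explanation for a single automated decision: the outcome plus the factors that pushed it there, in plain language. Real systems and non-linear models need more sophisticated techniques, but the artefact you owe the affected person is roughly this shape.

```python
# A minimal sketch, assuming scikit-learn and a linear model. The feature
# names and toy data are illustrative only; non-linear models need
# model-appropriate techniques (e.g. SHAP-style attributions).
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "years_employed", "existing_debt", "missed_payments"]

# Toy data standing in for a real lending dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
true_weights = np.array([1.5, 1.0, -1.2, -2.0])
y = (X @ true_weights + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> None:
    """Print the decision and each feature's contribution to the log-odds."""
    contributions = model.coef_[0] * applicant
    approved = bool(model.predict(applicant.reshape(1, -1))[0])
    print("Decision:", "approved" if approved else "declined")
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        direction = "pushed towards approval" if c > 0 else "pushed towards decline"
        print(f"  {name}: {direction} (contribution {c:+.2f})")

explain(np.array([0.2, -0.5, 1.1, 1.8]))
```

Even a crude breakdown like this is closer to "clear and human-readable" than a confidence score and a shrug.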

AI as Attack Surface: When Your Risk Register Needs a Rewrite

In my view, the biggest threat with AI isn’t what it does intentionally, it’s what it can be manipulated into doing. And that’s where the cybersecurity implications really kick in.

We’ve spent years building stronger perimeters, hardening endpoints, and segmenting networks. But AI introduces a completely different kind of vulnerability, one that lives in the data, the model, and the logic itself.

  • Poisoned training data can silently warp how a model behaves, and often no one notices until the damage is done (see the sketch after this list)
  • Adversarial prompts can trick even the most “secure” generative models into producing responses or actions they were explicitly programmed to avoid.
  • Model inversion attacks can extract sensitive or proprietary training data even when it’s been anonymised or aggregated.
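To make the first point less abstract, here’s a minimal sketch using scikit-learn on synthetic data: flip a small slice of training labels and compare how the model behaves before and after. The dataset, model, and poison rate are all invented for illustration; real poisoning attacks are subtler, which is exactly the point.

```python
# A minimal sketch of label-flipping "poisoning" on synthetic data.
# Everything here (data, model, 10% poison rate) is illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Targeted poisoning: relabel a slice (here 10%) of the positive class as
# negative, the kind of change that rarely shows up in headline accuracy.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
positives = np.flatnonzero(y_tr == 1)
flipped = rng.choice(positives, size=int(0.10 * len(positives)), replace=False)
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# Compare how often each model still catches the positives. Depending on the
# data the gap may be small, which is why this is hard to spot without
# deliberate testing.
print("recall on positives, clean:   ", recall_score(y_te, clean_model.predict(X_te)))
print("recall on positives, poisoned:", recall_score(y_te, poisoned_model.predict(X_te)))
```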

These aren’t science fiction scenarios. They’re here. Now. And most organisations don’t have the tools, or the mindset, to defend against them.

At CyberWest, this became a recurring theme. Time after time, we heard examples where AI wasn’t just a productivity tool, it had become the new attack surface. Organisations with well-established cyber programs (risk registers, policies, incident response playbooks) were now facing exposure from AI systems they didn’t even realise had been embedded into third-party tools, SaaS platforms, or outsourced processes.

This is a blind spot I see across industries: We treat AI as an innovation function, but we’re not assessing it like any other core system. We're not red-teaming models. We're not mapping the full attack surface introduced by AI-powered workflows. In many cases, we’re not even logging what models are running and where.
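Even a crude model register closes part of that gap. The sketch below is a minimal, hypothetical example in plain Python; the field names are illustrative, not a standard or a vendor schema.

```python
# A minimal sketch of an AI model register using only the standard library.
# Field names (owner, vendor, decision_scope, etc.) are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str                 # e.g. "resume-screening-v3"
    owner: str                # the accountable human, not a team alias
    vendor: str               # "in-house" or the third party providing it
    decision_scope: str       # what the model decides or influences
    data_sources: list[str]   # where its training / input data comes from
    last_reviewed: date       # when a human last assessed its behaviour
    explainable: bool         # can we produce a human-readable explanation?

register: list[ModelRecord] = [
    ModelRecord(
        name="claims-triage",
        owner="Head of Claims",
        vendor="third-party SaaS",
        decision_scope="prioritises and auto-declines low-value claims",
        data_sources=["historical claims", "customer profile"],
        last_reviewed=date(2025, 3, 1),
        explainable=False,
    ),
]

# Even this crude list answers questions many organisations can't today:
# what models are running, where, who owns them, and which ones are opaque.
for m in register:
    if not m.explainable:
        print(f"Review needed: {m.name} ({m.decision_scope}), owned by {m.owner}")
```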

AI brings incredible capability, but it also brings entirely new classes of risk. And if we keep pretending cybersecurity and AI are separate domains, we’ll get blindsided.

For me, this is a turning point. If AI is going to live at the centre of our businesses, then it needs to live at the centre of our security strategies too.

What Boards, CISOs, and Legal Counsels Should Be Asking

In my view, this is where the conversation needs to shift, urgently.

We’ve spent years telling boards to take cybersecurity seriously. Now we need to help them understand that AI risk isn’t separate, it’s an evolution of the same challenge, amplified by speed, scale, opacity, and unpredictability.

This isn’t about futureproofing, it’s about managing clear and present risk. And yet, too many boardrooms are still stuck asking whether they’re “using AI” rather than understanding where it’s already making decisions on their behalf.

The stakes are real, and the questions need to be sharper than ever:

  • Who actually owns AI risk in our organisation? Is it the CIO? The CISO? The Chief Risk Officer? The answer can’t be “all of them” or “none of them.” If no one’s clearly accountable, that’s the first red flag.
  • Do we have an AI-specific incident response plan? Because a data breach is one thing, but what happens when your model is manipulated into making bad decisions at scale? That’s not a technical fault. That’s an operational crisis.
  • Are we auditing AI decisions with the same rigour as financial statements? If an algorithm makes a flawed decision that impacts customers, clients, or citizens, can you trace it? Can you explain it? And can you stand behind it?
  • Can we connect every AI outcome back to its data, logic, and owner? Governance isn’t about knowing a model exists, it’s about knowing exactly how it behaves, why it behaves that way, and who signed off on it (see the sketch after this list).
  • Are we embedding explainability and accountability into our AI systems from day one, or waiting for the lawyers to ask for it later?
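To show what that traceability can look like, here’s a minimal, hypothetical sketch of a per-decision audit record: every automated outcome carries its model version, an input fingerprint, the explanation given at the time, and a named accountable owner. The fields are illustrative, not a compliance schema.

```python
# A minimal sketch of a per-decision audit record. Field names and the
# claims-triage example are illustrative only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DecisionRecord:
    model_name: str         # which model made the call
    model_version: str      # the exact version that was running
    input_hash: str         # fingerprint of the inputs (not the raw data)
    outcome: str            # what was decided
    explanation: str        # the human-readable reason given at the time
    accountable_owner: str  # the person who answers for this decision
    timestamp: str

def record_decision(model_name: str, model_version: str, inputs: dict,
                    outcome: str, explanation: str, owner: str) -> DecisionRecord:
    digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        input_hash=digest,
        outcome=outcome,
        explanation=explanation,
        accountable_owner=owner,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

entry = record_decision(
    "claims-triage", "2.4.1",
    {"claim_value": 1800, "prior_claims": 0},
    "auto-declined",
    "claim value below auto-review threshold with no prior history",
    "Head of Claims",
)
print(json.dumps(asdict(entry), indent=2))
```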

This is what I believe boards and executives need to understand: AI risk isn’t theoretical. It’s happening now, in real systems, in real decisions, with real consequences.

Because when the breach happens, or the bias hits the fan, ignorance won’t save you. Preparation might.

Final Thought: AI May Be the Future, But Responsibility Isn’t Automated

CyberWest 2025 left me energised, but cautious.

We’re not entering the AI era; we’re already neck-deep in it. AI is optimising workflows, accelerating analysis, and giving us insights we couldn’t imagine a decade ago. But while the tools have changed, the responsibility hasn’t. Bold innovation without clear accountability? That’s not transformation. That’s risk: strategic, legal, operational, and reputational.

Because when the algorithm fails, and it will, it won’t write the apology. It won’t answer the Senate inquiry. It won’t front the media, calm the customers, or face the board.

You will.

So, let’s stop treating AI like a magic trick. Start treating it like the enterprise risk it is. And let’s make sure that as our systems evolve, our governance evolves faster.

Because in the age of machines, accountability doesn’t vanish. It becomes more important than ever.

AI can do many things: predict, process, and perform at scale. But it can’t accept blame. It can’t explain failure. It can’t make ethical decisions in a moral vacuum.

That job still belongs to us.

So yes, let’s harness the power of AI. But let’s not lose sight of the fact that what we automate, we still own.

Dr Glenn, signing off. (Still asking inconvenient questions. Still skeptical of black-box governance. Still reminding leaders: If your systems can’t explain themselves, neither can you.)
