Unf*cking Your CX #78 - Contact Center AI Isn’t Magic; It’s Amplified Bullsh*t If Your Data Sucks

Before you plug in a bot or train a model, ask yourself this: Would you trust the decisions it’s making? If the answer is no, it’s not artificial intelligence. It’s artificial incompetence.

Let’s get something straight:

AI isn’t the hero of your contact center. It’s not a savior. It’s not a shortcut. And it’s definitely not a stand-in for leadership.

What AI is, in the real world of frontline operations, overloaded agents, and budget-strained support orgs, is an amplifier.

It turns whispers into megaphones. It takes whatever system you’ve already built - your workflows, your data hygiene, your tagging logic, your definition of “resolved” - and scales it at speed.

If your system is sharp, coherent, and customer-centered, AI makes it stronger. If your system is confused, bloated, or misaligned?

AI becomes a bullhorn for dysfunction.

And that’s the part nobody wants to admit.

Because while executives scramble to bolt LLMs onto legacy tech stacks… And vendors flood your inbox with “efficiency at scale” promises… And headlines scream about bots replacing humans…

Almost no one is asking the one question that actually determines whether AI improves your experience or ruins it:

Is your data good enough to trust at scale?

If the answer is no (and for most companies, it is), then congratulations: You’re not investing in innovation. You’re institutionalizing incompetence.

Let’s break down exactly why.

1. AI Doesn’t Fix Broken Processes. It Scales Them.

Here’s the myth that’s taken hold in boardrooms:

“If we add AI to our contact center, we’ll fix the inefficiencies.”

Let me be clear: AI doesn’t fix anything. It reveals everything.

It reveals every shortcut you’ve taken in building your IVR logic. It reveals how outdated your ticket tagging structure really is. It reveals how siloed your escalation paths have become. And then, like an industrial-scale printing press, it reproduces those flaws at scale.

Let me give you an example.

A global retail brand I advised recently launched an AI-based assistant inside their contact center. It was supposed to deflect low-value contacts, speed up handling times, and create a more modern customer experience.

Instead, it misrouted high-value VIPs. It escalated simple questions to Tier 2. And it drove abandonment up by accident, because customers gave up before even finishing their request.

Why? Because the bot was “smart.” But the system it was plugged into wasn’t.

It was trained on seven years of miscategorized ticket tags. It relied on keywords, not intent. And it followed a workflow built to protect team capacity, not to serve the customer.

So what happened?

The AI didn’t solve the problem. It scaled it. It accelerated bad decisions with more confidence, more speed, and less human intervention.

That’s the danger.

AI doesn’t ask: “Is this the right thing to do?”

It asks: “Based on what you’ve taught me, what’s the fastest thing to do?”

And if what you’ve taught it is flawed, biased, or incomplete… You don’t get a futuristic customer experience. You get a faster failure loop, delivered with polish and precision.

Executives often forget: AI doesn’t rewrite your playbook. It runs it faster.

If your playbook is misaligned with customer outcomes, no amount of LLMs or NLP models will help. The technology will simply codify inefficiency and make it look intelligent.

True transformation doesn’t start with automation. It starts with architecture.

Fix the system. Then scale the outcome.

2. Garbage In, Garbage Out: The Signal Problem

The most dangerous lie in the AI arms race is this:

“If we just have enough data, the model will figure it out.”

No, it won’t.

Volume doesn’t equal value. And AI doesn’t generate truth. It predicts patterns.

If the underlying data is flawed, biased, outdated, or incomplete, the AI will confidently generate answers that are exactly wrong. It will hallucinate. It will misfire. And it will do so with the smooth tone of a well-trained assistant convincing your teams and customers that bad logic is good service.

Let’s zoom in on what this looks like in real life.

In most contact centers, data pipelines weren’t designed to train intelligence. They were designed for reporting: compliance dashboards, SLA tracking, CSAT snapshots.

That means your AI is likely being fed:

  • Tagging data that was never validated and often applied retroactively.

  • Survey scores that measure perception, not root cause.

  • Agent notes that vary wildly in quality and tone depending on burnout, shift load, or whether it’s Friday at 4:59 p.m.

What you're giving the AI isn’t a structured learning environment. You’re handing it a stack of messy, unlabeled notebooks and hoping it writes a strategy guide.

And then you wonder why it sends a furious loyalty member in year 6 of tenure the same refund policy script you use for a first-time buyer trying to return a broken item.

This isn’t an intelligence problem. It’s a signal problem.

Most AI in the contact center today is trying to manufacture insight from broken signal.

That’s like asking a GPS to find the fastest route through a city when the traffic sensors are offline, the roads are mislabeled, and half the map is still from 2014.

If your data foundation is weak, the AI can’t help you. It can only hurt you faster.

Before you even think about automation, you need to redesign your signal model: what you capture, how it’s structured, how it’s scored, and whether it’s aligned to actual behavior, not just perceived satisfaction.

Performance Tip: Every signal you collect (survey, behavior, system timestamp, intent, emotion) should be scored against friction, not just tagged for reporting. If it doesn’t help you act, it shouldn’t shape what the AI learns.
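To make that tip concrete, here’s a minimal sketch in Python of what “scored against friction, not just tagged for reporting” could look like. The field names are hypothetical; the point is that a signal without a friction score and an actionability flag never reaches the training set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    # One captured signal; field names are illustrative, not a real schema.
    source: str                      # e.g. "survey", "behavior", "system_timestamp"
    intent: str                      # what the customer was trying to do
    friction_score: Optional[float]  # 0.0 (no friction) to 1.0 (severe), scored at capture
    actionable: bool                 # does this signal tell us what to fix?

def training_worthy(signals: list[Signal]) -> list[Signal]:
    # Keep only signals that are scored against friction AND help you act.
    # Everything else is reporting noise and should not shape the model.
    return [s for s in signals if s.actionable and s.friction_score is not None]

signals = [
    Signal("behavior", "return_item", friction_score=0.7, actionable=True),
    Signal("survey", "unknown", friction_score=None, actionable=False),
]
print(len(training_worthy(signals)))  # -> 1: the unscored survey snippet is filtered out
```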

3. Amplification Without Accountability = Chaos

One of the most seductive promises of contact center AI is scale.

Faster resolution. Smarter routing. Automated summaries. 24/7 support without ballooning headcount.

But here’s what most CX leaders miss in the rush to “scale efficiency”:

AI doesn’t come with built-in accountability.

It doesn’t stop to ask, “Is this aligned to our customer promise?” It doesn’t pause to validate, “Did the customer actually get what they needed?”

It just does what it was trained to do... over and over, at high velocity, with zero introspection.

And if what it was trained on is broken?

Welcome to AI-powered chaos.

Let me give you a real-world example.

A telco we reviewed had implemented an AI-driven deflection system to reduce Tier 1 call volumes. The goal? Cut costs. Increase self-service. Boost containment.

On paper, it worked: Call volumes dropped. Average handle time went down. Containment rates were up.

But revenue? Flat. Customer retention? Worse. Escalations to executive support? Tripled.

Why? Because the AI was optimizing for containment, not customer outcome. It was trained on legacy resolution tags, not friction resolution scores. And no one (no one) was actively pressure-testing whether those AI-handled interactions actually solved the customer's problem.

It scaled silence, not satisfaction. Efficiency, not effectiveness. And in doing so, it made the contact center look like it was performing while it quietly hemorrhaged trust.

This is what happens when leaders confuse automation with impact.

If there’s no human-led validation loop, your AI is just a confidence machine with no conscience.

It will act decisively. It will report “resolved.” And it will mislead your leadership team into thinking the customer experience is improving when, in fact, it’s being hollowed out from the inside.

This is the core failure point in most AI deployments today: They amplify outputs without verifying outcomes.

Performance Tip: For every AI decision (routing, summarization, escalation) you need a feedback loop that compares:

  • What the system decided to do

  • Against what the customer actually needed

If those two don’t match, you're not scaling performance. You're automating misalignment.
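Here’s a rough sketch of that loop in Python, assuming you log both the AI’s decision and a later ground-truth label of what the customer actually needed. The field names and the 10% threshold are assumptions, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    # One AI-handled contact; field names are illustrative.
    ai_decision: str    # what the system decided, e.g. "close_resolved", "deflect_to_faq"
    customer_need: str  # ground truth from QA sampling, follow-up, or repeat-contact review

def misalignment_rate(interactions: list[Interaction]) -> float:
    # Share of interactions where what the system decided != what the customer needed.
    if not interactions:
        return 0.0
    mismatches = sum(1 for i in interactions if i.ai_decision != i.customer_need)
    return mismatches / len(interactions)

batch = [
    Interaction(ai_decision="close_resolved", customer_need="escalate_tier2"),
    Interaction(ai_decision="deflect_to_faq", customer_need="deflect_to_faq"),
]

rate = misalignment_rate(batch)
if rate > 0.10:  # the 10% threshold is an assumption - set it with your QA team
    print(f"WARNING: {rate:.0%} of AI decisions diverge from customer need")
```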

4. AI Is Only As Good As the Signal Model You Feed It

Let me give it to you straight:

You don’t need more AI. You need a better signal model.

That’s the part no vendor pitch will tell you, because the hardest, least sexy, most essential part of making AI actually work is the thing most leaders skip entirely:

Designing a system that feeds the machine good signal.

Not just sentiment. Not just survey snippets. Not just tagged intent.

Real, structured, behavior-rich signal.

Let me explain.

In most contact centers today, AI models are being trained and activated on a Frankenstein data stream. It’s a patchwork of unverified agent notes, post-call survey scores, time stamps from systems that don’t talk to each other, and arbitrary tags based on how fast someone needed to close the case.

You wouldn’t build a house on this kind of foundation. You wouldn’t train a pilot with this kind of feedback. But we’re fine launching AI on it?

That’s not transformation. That’s malpractice.

Every great AI deployment in the contact center starts with a disciplined signal architecture. And here’s what that includes:

1. Behavioral Signals

What the customer actually did:

  • Time between interactions

  • Abandonment rate

  • Repeat contacts

  • Time to resolution

  • Conversion drop-off points

2. Friction Scoring

Where effort, confusion, or failure occurs in the journey:

  • Did they have to repeat themselves?

  • Were they transferred without resolution?

  • Did the resolution require escalation?

  • Was the task completed without assistance?

3. Intent + Outcome Alignment

Not just what they asked for, but whether they got it.

4. Operational Context

Layer in who the customer is:

  • Tenure

  • Value tier

  • Recent purchases

  • Account risk profile

  • SLA triggers

This is how you move from “AI that responds” to AI that understands. From static call deflection to dynamic friction resolution. From automating scripts to orchestrating outcomes.
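As a rough sketch (hypothetical field names, not a vendor schema), a single record in that kind of signal model might bundle all four layers together, so the model never sees an intent without its outcome and its context:

```python
from dataclasses import dataclass

@dataclass
class SignalRecord:
    # One customer interaction structured across the four layers above.
    # Every field name here is an illustrative assumption, not a standard schema.

    # 1. Behavioral signals - what the customer actually did
    repeat_contacts_7d: int
    minutes_to_resolution: float
    abandoned: bool

    # 2. Friction scoring - where effort, confusion, or failure occurred
    had_to_repeat_info: bool
    transfers_without_resolution: int

    # 3. Intent + outcome alignment - what they asked for vs. whether they got it
    stated_intent: str
    outcome_matched_intent: bool

    # 4. Operational context - who the customer is
    tenure_years: float
    value_tier: str
    open_sla_breach: bool

def friction_score(r: SignalRecord) -> float:
    # Toy composite score (0 = smooth, 1 = severe friction); the weights are assumptions.
    score = 0.0
    score += 0.3 * min(r.repeat_contacts_7d, 3) / 3
    score += 0.2 * (1 if r.had_to_repeat_info else 0)
    score += 0.2 * min(r.transfers_without_resolution, 2) / 2
    score += 0.3 * (0 if r.outcome_matched_intent else 1)
    return round(score, 2)
```

If you can’t assemble a record like this for a given journey, that’s a signal-model gap to close before any training run.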

But you can’t do any of that if your signal model looks like a broken game of telephone.

Performance Tip: If you can’t explain what data your AI is pulling, how it’s being scored, and how outcomes are being validated, you’re not running a contact center system. You’re running a contact center experiment.

The future of CX performance isn’t just about listening harder. It’s about listening smarter—through a signal model that feeds your AI with clarity, context, and consequence.

5. What to Fix Before You Automate

Everyone wants to talk about what AI can do.

Nobody wants to talk about what you have to do first to make it work.

That’s why so many contact center AI initiatives stall, embarrass, or flat-out implode: They chase capability before readiness. They automate tasks before they align outcomes. They install tech without ever confronting the real question:

Is your operating system ready to be scaled?

Because once you flip the switch, once the model starts predicting, routing, summarizing, or deflecting, there’s no “we’ll fix it later.” There’s just: You’re now moving 10x faster in the wrong direction.

AI isn’t a product; it’s a mirror.

It reflects the system you’ve built. The logic you’ve codified. The values you’ve operationalized, whether you meant to or not.

So don’t ask,

“Can we use AI to save money?”

Ask:

“What are we about to amplify?”

If the answer makes you nervous, good. That means you’re finally being honest.


The 3 Player Tips to Stop Scaling Dysfunction and Start Designing for Performance

You don’t need more AI features. You need better system decisions.

These aren’t thought exercises—they’re the next three moves any serious CX, Ops, or Tech leader should be making before another AI tool gets deployed.

Player Tip #1: Build a Cross-Functional AI Readiness Scorecard - Before you go live, audit the foundation.

Create a scorecard across five dimensions—Signal Quality, Data Hygiene, Tagging Accuracy, Outcome Validation, and Feedback Loops.

Use a simple Red / Yellow / Green rating for each dimension.

If you’ve got more red than green? You’re not ready. AI will scale confusion, not clarity.
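A minimal sketch of how that go/no-go check could run, using the five dimensions from the tip; the ratings shown are placeholders your audit would replace:

```python
# Red / Yellow / Green readiness check across the five dimensions above.
scorecard = {
    "Signal Quality": "yellow",
    "Data Hygiene": "red",
    "Tagging Accuracy": "red",
    "Outcome Validation": "green",
    "Feedback Loops": "yellow",
}

reds = sum(1 for rating in scorecard.values() if rating == "red")
greens = sum(1 for rating in scorecard.values() if rating == "green")

# The rule from the tip: more red than green means you are not ready.
if reds > greens:
    print("NOT READY: AI will scale confusion, not clarity.")
else:
    print("Proceed, but keep every red or yellow dimension on a remediation plan.")
```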

Player Tip #2: Name Your Bots by Business Outcome, Not Function - Don’t label it “Chat Assistant v2.” Label it “Cart Recovery Bot” or “Delivery Disruption Resolver.”

Why? Because naming forces intent.

If your bot’s name doesn’t directly connect to a measurable business outcome, it’s probably just noise in a new interface.

Once you lock the name, define:

  • ✅ Its success metric (e.g., % of saved carts, NPS delta post-resolution)

  • ✅ Its target user state (e.g., first-time buyer hitting a delivery delay)

  • ✅ Its escalation threshold (when it hands off to a human)

If the bot can’t be measured like a frontline employee, it shouldn’t be deployed like one.
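For illustration, a bot spec could be captured as something like the sketch below (the names, metrics, and thresholds are hypothetical), so the bot gets reviewed against its target the same way a frontline employee would be:

```python
# Hypothetical spec for an outcome-named bot - every value here is illustrative.
cart_recovery_bot = {
    "name": "Cart Recovery Bot",           # named for the business outcome, not the function
    "success_metric": "saved_cart_rate",   # share of at-risk carts recovered
    "success_target": 0.25,
    "target_user_state": "first-time buyer hitting a delivery delay",
    "escalation_threshold": {"turns_without_progress": 2, "sentiment_floor": -0.5},
}

def monthly_review(bot: dict, observed: float) -> str:
    # Review the bot against its target the same way you'd review a frontline employee.
    status = "meeting target" if observed >= bot["success_target"] else "below target"
    return f'{bot["name"]}: {bot["success_metric"]} = {observed:.0%} ({status})'

print(monthly_review(cart_recovery_bot, observed=0.19))
```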

Player Tip #3: Pilot AI Where It Hurts the Most, Not Where It’s Easiest - Stop launching pilots in low-stakes use cases just to check the box.

Instead, start where the customer pain is visible and costly:

  • High repeat contact areas

  • Where escalation rates spike

  • Journeys with known CLV or conversion drop-off

Map the journey. Score the friction. Use AI as a removal mechanism, not a gimmick.

Pro tip: Start with a single friction statement. Example: “Customers abandon support when they get rerouted twice without resolution.” Now test if AI reduces that.

If it doesn’t? It’s not intelligent; it’s just busy.


2 Frameworks to Pressure-Test Your AI Before It Wrecks Your CX

If you’ve made it this far, you already get it:

AI isn’t the strategy. It’s an accelerant. And if your systems, signals, or friction prioritization are broken, all AI will do is scale the mess.

But knowing that isn’t enough. You need a way to operationalize the insight to separate what’s ready from what’s risky, and to make sure your AI deployments don’t become executive-stage theater pieces with no performance underneath.

That’s why I’m giving you these two EPS-built frameworks - battle-tested, friction-first, and designed to do what most AI tools won’t:

Expose the gaps before they go live.

These aren’t for your next vendor demo. They’re for your next leadership session when someone says “Let’s just automate that,” and you’ve got five minutes to decide whether that’s genius… or malpractice.

Let’s go.

Framework 1: The EPS Signal Integrity Checklist™

Use this to verify whether the signals feeding your AI models are actually trustworthy and performance-ready.

Most contact center AI tools rely on signal inputs that were never built for automation. The Signal Integrity Checklist helps you audit those inputs through the EPS lens and determine whether they’re helping or hurting.

How to use it:

  • Any signal marked ❌ should not be feeding any AI workflow.

  • ⚠️ = needs cleanup, structure, or scoring design before automation.

  • ✅ = candidate for model training and performance measurement.

Remember: AI only works when fed structured, contextual, friction-aware signal. If you can’t score the outcome, don’t scale the input.
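One way to make the checklist operational is a simple gate like this sketch, where each signal source carries a ❌ / ⚠️ / ✅ status and only ✅ sources are allowed anywhere near model training. The sources and statuses shown are examples, not a prescribed list:

```python
# ❌ = never feeds an AI workflow, ⚠️ = needs cleanup or scoring design first, ✅ = train on it.
signal_audit = {
    "retroactive_ticket_tags": "❌",
    "post_call_csat_scores": "⚠️",
    "freeform_agent_notes": "⚠️",
    "validated_intent_labels": "✅",
    "resolution_outcome_flags": "✅",
}

train_on = [name for name, status in signal_audit.items() if status == "✅"]
block = [name for name, status in signal_audit.items() if status == "❌"]

print("Candidates for model training:", train_on)
print("Must not feed any AI workflow:", block)
```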

Framework 2: The EPS Friction-to-AI Deployment Filter™

Use this before launching any AI initiative to ensure it’s solving meaningful problems—not automating noise.

This filter forces the org to start with friction, not tech. It ensures AI is applied where it matters - to the customer blockers that hurt business performance - not just where it’s easy to “plug in a bot.”

Step 1: Identify Friction

Ask:

  • Where are repeat contacts happening?

  • Where are NPS/CSAT drops clustered by intent or journey?

  • What frustrates high-value customers?

Step 2: Quantify Business Impact

  • What’s the CLV, revenue, or cost-to-serve implication of this friction?

  • What’s the behavioral consequence (abandonment, churn, escalation)?

Step 3: Validate Friction Type

Label it using EPS friction categories:

  • Effort-based: “I had to do too much”

  • Clarity-based: “I didn’t understand what to do”

  • Timing-based: “The process was too slow/late”

  • Outcome-based: “I didn’t get what I needed”

Step 4: Determine AI Fit

AI is appropriate only if:

  • The friction is predictable, repeatable, and patterned

  • There is structured signal available to detect and act on it

  • There’s a clear resolution path that AI can execute or triage

Step 5: Define the Performance Loop

Set pre-launch metrics:

  • Friction delta: Was it reduced?

  • Resolution %: Was the intent successfully closed?

  • Behavior shift: Did it lower calls, improve conversion, reduce time?

If you can’t measure the friction delta, you’re not ready to automate the solution.
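A minimal sketch of measuring that friction delta before and after launch; the metric names and the bar for a “meaningful” reduction are assumptions:

```python
def friction_delta(before: dict, after: dict) -> dict:
    # Compare pre-launch and post-launch friction metrics for one journey.
    # Both dicts map metric name -> observed rate.
    return {metric: round(after[metric] - before[metric], 3)
            for metric in before if metric in after}

# Friction statement under test: "Customers abandon support when they get
# rerouted twice without resolution." Rates below are illustrative.
before = {"rerouted_twice_rate": 0.18, "abandonment_rate": 0.12, "repeat_contact_rate": 0.22}
after = {"rerouted_twice_rate": 0.17, "abandonment_rate": 0.13, "repeat_contact_rate": 0.21}

delta = friction_delta(before, after)
print(delta)  # negative values mean the friction was reduced

# The -0.05 bar for a "meaningful" reduction is an assumption - set your own.
if delta["rerouted_twice_rate"] > -0.05:
    print("Targeted friction barely moved: the AI isn't intelligent here, it's just busy.")
```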


1 Thought-provoking Question:

“If this AI made the wrong decision for a high-value customer… would anyone inside our system even know?”


Stop Automating Dysfunction. Start Performing Experience.

Let’s be real:

AI isn’t going to save your contact center. Not if it’s fed junk signal. Not if it’s optimizing for deflection over resolution. Not if it’s scaling processes that should’ve been killed, not automated.

You don’t need more bots. You need better system design. You need signal clarity. Friction visibility. Outcome accountability.

Because AI doesn’t ask permission. It just acts. And if what it’s acting on is built on faulty assumptions, misaligned data, and a broken definition of “performance,” then what you’ve launched isn’t innovation.

It’s negligence at scale.

This isn’t about artificial intelligence. It’s about experience intelligence. And that doesn’t start with code. It starts with the courage to fix your system before you try to scale it.

Steven Papaioannou 🔬

I help complex businesses find & fix their churn


Awesome stuff Zack Hamilton. An important shift for the support experience that we need to consider as AI infrastructure gets more deeply embedded.


100% Zack Hamilton . The foundation is so important. If you haven’t got the right base data then AI will give you distorted insights. The other challenge I’m often seeing is a Change Management challenge. There’s a rush to jump at the latest tech without preparing the people for the changes in processes. A successful AI rollout really needs to address Tech, Process and People.

Audrey CHATEL

Chief Experience Strategist | Speaker | Building CX Strategies that CEOs actually care about | Expertise in Lean Startup


Brilliant breakdown Zack Hamilton. The “garbage in, garbage out” problem hits hard when you’re watching AI confidently deliver wrong answers to frustrated customers. I’ve seen too many CX teams rush to deploy bots without auditing their foundational data first. The result? AI that amplifies every broken process, bad tag, and misaligned workflow at lightning speed. The signal integrity piece is everything. If your historical data wouldn’t help a human agent make good decisions, why would it help an AI?

Michael Lowenstein, PhD CMC

Senior Advisor, Customer Value Creation International


And, companies need to understand that, if the data leveraged in the contact/service center via AI is poor, it's likely to negatively impact the remainder of the overall customer experience: https://customerthink.com/the-difference-between-customer-service-and-the-customer-experience/

Doug Rabold ITIL®, HDI-CI

🔟x Top 25 Thought Leader and CX Influencer, International Keynote Speaker, Author, & Certified Trainer who delivers exceptional experiences through cultural transformation


Love this, Zack. Fully aligns with a decades-old quote that has recently become one of my most used...

