Why Doesn’t Great Support Always Feel Great?

The dashboards say we’re doing well:

✓ Response times are tight

✓ AI deflection is up

✓ The team is moving fast

✓ CSAT hasn’t dropped

So why does support still feel... off?

That’s the question I’ve been wrestling with, and I know I’m not alone.

📉 When the metrics say “great,” but it doesn’t land that way

On paper, we’re hitting targets. But zoom in and the cracks show:

  • Fast replies, without real relevance
  • Resolved tickets, where nothing’s actually resolved
  • Help that shows up, but doesn’t help

And if you lead this work, you know the trap: the metrics look clean, but the experience doesn’t match.


👉 Run this audit; it changed how I think

Here’s what I did: Pulled 10 resolved tickets at random. No filters. No cherry-picking.

Then I asked:

  • “Would I feel good if this landed in my inbox?”
  • “Was this complete and clear, or just closed?”
  • “Did it feel like we showed up?”
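The sampling step is simple enough to script. Here is a minimal sketch in Python, assuming a hypothetical ticket export (in practice the data would come from your helpdesk tool; the field names are illustrative):

```python
import random

# Hypothetical ticket export; in practice this comes from your helpdesk tool.
tickets = [{"id": i, "status": "resolved" if i % 4 else "open"} for i in range(200)]

# Pull 10 resolved tickets at random. No filters. No cherry-picking.
resolved = [t for t in tickets if t["status"] == "resolved"]
sample = random.sample(resolved, 10)

AUDIT_QUESTIONS = [
    "Would I feel good if this landed in my inbox?",
    "Was this complete and clear, or just closed?",
    "Did it feel like we showed up?",
]

# Print a review sheet: each sampled ticket with the three audit questions.
for ticket in sample:
    print(f"Ticket {ticket['id']}:")
    for question in AUDIT_QUESTIONS:
        print(f"  - {question}")
```

The point of `random.sample` over hand-picking is exactly the “no cherry-picking” rule: you audit what the customer actually got, not the tickets you’re proud of.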

It was... humbling.

Some were technically accurate but lacked warmth. Some didn’t quite follow through. Others felt more like handoffs than genuine assistance.

Here’s what we need to change

Short-term:

We should update our rubrics to score tone, clarity, and effort required by the customer, not just speed and handle time.
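As a sketch of what such a rubric could look like in code: the dimensions (tone, clarity, customer effort, speed) come from the point above, but the 1–5 scale, weights, and names are purely my illustrative assumptions, not an actual scoring system:

```python
from dataclasses import dataclass

# Hypothetical rubric. The dimensions come from the article; the weights
# and 1-5 scale are illustrative assumptions only.
@dataclass
class RubricScore:
    tone: int             # 1-5: warm and human, not just accurate
    clarity: int          # 1-5: complete and clear, not just closed
    customer_effort: int  # 1-5, reversed: 5 = customer did almost no work
    speed: int            # 1-5: still tracked, but no longer the whole story

    def overall(self) -> float:
        # Weight the experience dimensions above raw speed.
        return round(
            0.35 * self.tone
            + 0.35 * self.clarity
            + 0.20 * self.customer_effort
            + 0.10 * self.speed,
            2,
        )

score = RubricScore(tone=4, clarity=5, customer_effort=3, speed=5)
print(score.overall())  # -> 4.25
```

Notice the design choice: a fast reply with a weak tone score can no longer carry the overall number, which is the behavioral shift the rubric is meant to drive.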

Long-term:

Let's redefine “quality” across AI and humans. Fast is fine, but it has to be useful. Automated is great, but it still needs to feel understood.


For CX/Care/Service leaders: A real question

AI can scale. Ops can optimize. But if your bar is just “resolution,” you’ll miss what matters.

Relevance is the bar.

So I’ll leave you with this: What signal tells you that support actually felt good?

Let’s build from there.

Best, Guneet


Disclaimer: The views expressed in this newsletter are solely mine. I am not a spokesperson for my employer, nor do I represent my employer's opinion.


Patrick Martin

CCO/CX & Service Executive/AI and Agentic/Speaker/Advisory Board Member


I have run into this several times in my career, and the reality is that operational metrics are great, but you have to be careful about which metrics you use to drive accountability. FCR is great, but if your re-opens/callbacks are high, it's a signal that your team is not resolving issues. The metrics you use need to drive the behaviours you expect from your team. You want relevance? Measure that.

In my previous life, we had built a case quality guide that was used as a QA mechanism for agents. There were 3 main sections: structure, quality of interactions, and resolution. Specific criteria were set for each section, and the agents were scored on it. That is what we held them accountable on. First replies required relevance, not just an acknowledgement of case assignment. Resolution needed to be confirmed by customers. With high Case Quality scores, we knew that CSAT and CES would go up, even if that meant sacrificing FCR. Better to take more time and get it right than rush through it and have to start over. Food for thought.


Mark Pruett

Business Leader by Day | Writer/Creator/Editor by Passion | Crafting Performance Cultures and Soul-Stirring Stories | Kentucky Colonel


Love this.
