What Customers Really Think About Your AI: Academic Research Reveals the Truth.

Listen to the podcast for deeper insights

I have noticed myself saying "please" and "thank you" when I use ChatGPT, treating it like a human. Do you do that? According to Sam Altman, the CEO of OpenAI, all that politeness is costing the company millions in computing costs. So why do we do it?

Recently, my podcast co-host, Professor Ryan Hamilton, and I spoke at the SOCAP Conference, sponsored by IA Solutions. Ryan, being the clever academic, gave a cracking talk about the psychology behind how customers respond to AI.

Trust Issues: Customers Are Suspicious of AI and Rightly So

Ryan kicked things off with a stat that drew a collective sharp intake of breath from the audience.

👉 50% of customers say they trust companies LESS when they know those companies use AI.

Yes, just the knowledge that AI is in the mix is enough to knock a brand down a few pegs in the trust department.

Now, some of you might be thinking: “That’ll change as AI becomes more common.” And you’d be right. But today, trust in AI still feels like handing your wallet to a magician and hoping your credit card comes back unscathed.

Ryan explained that this mistrust isn’t just paranoia. It’s grounded in something psychologists call “outgroup attribution.” Customers treat AI like a suspicious stranger at a family barbecue. Sure, they might smile politely, but they’re definitely not going to reveal any deep, dark secrets.

AI Feels Human—But Not in a Good Way

Here’s the twist. We humans are funny creatures. We know AI isn’t a person, and yet we interact with it like it is.

  • We say “please” and “thank you” to AI.

  • We tend to trust AI more for practical (utilitarian) tasks, like picking a good flight.

  • But we trust people more for fun (hedonic) tasks, like choosing chocolate or judging how soft a pillow is.

It’s what researchers call the “Word-of-Machine” effect (Longoni and Cian 2022). We trust AI for logic. We trust people for taste and more subtle things.

The Irony of AI Literacy

Here’s a brilliant (and deeply ironic) insight uncovered in the research (see below):

🧠 The more customers understand AI, the less they trust it.

This is quite profound. According to the research, people who truly understand how AI works are less likely to use it. Why? Because ignorance is bliss. When AI is a magical black box, it’s easy to trust. But once you learn how the sausage is made (data scraping, algorithmic bias, hallucinations), it suddenly feels a lot less charming. I can attest to this.

I recently asked AI to appraise a piece of sporting memorabilia I own. It told me it was worth between $30,000 and $60,000! Wow! I had bought it for $3,500 some twenty years ago, and my wife had nearly divorced me over it. With this assessment, I could now prove to her what a great investment it was. I was over the moon. To confirm the valuation, I sent it to two auction houses for their estimates. They informed me it was worth about $6,000. Adjusting for inflation, it hasn't gone up in value at all. My wife was right! I should have invested the money elsewhere. I was very disappointed.
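To sanity-check that "hasn't gone up at all" claim, here is the back-of-the-envelope math. This is a minimal sketch in Python, assuming roughly 2.7% average annual US inflation over those twenty years (an illustrative figure; the actual rate varied year to year):

```python
# Back-of-the-envelope inflation check on the memorabilia purchase.
# Assumption (illustrative): ~2.7% average annual US inflation over 20 years.
purchase_price = 3_500        # what I paid, roughly twenty years ago
years = 20
avg_inflation = 0.027

# What $3,500 then is worth in today's dollars, compounded annually.
inflation_adjusted = purchase_price * (1 + avg_inflation) ** years
print(f"Inflation-adjusted purchase price: ${inflation_adjusted:,.0f}")
# -> about $5,960, i.e. roughly the $6,000 the auction houses quoted.
```

In other words, the auction houses' $6,000 estimate is almost exactly the original $3,500 carried forward by inflation: no real gain at all.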

So if your customers are tech-savvy, guess what? They’re more skeptical.

Which means: don’t assume everyone wants more AI. Some want less; it depends on the customer segment. Or, at the very least, they want it explained properly.

The Spillover Effect: Once Bitten, Twice Wary

Now here’s where it gets juicy. Research shows that if a customer has one bad experience with AI, they carry that resentment into future experiences, even with different companies. My experience with sports memorabilia has made me very skeptical. It's like getting food poisoning at one sushi place and swearing off raw fish forever.

This is what psychologists refer to as outgroup homogenization: we lump all AI together. If one bot fails, they are all bad. So if your chatbot fumbles a customer interaction, don’t just worry about the lost sale. Worry about how that customer will react the next time any company puts AI in front of them. It’s effectively guilt by digital association.

Remember, too, that AI use is goal-driven. People engage with AI because they want something done. That could be booking a flight, finding a recipe, or even just a bit of reassurance.

So, what is the key takeaway?

Understand the customer’s goal—and how AI helps (or hinders) it.

Don't just roll out AI for efficiency. Roll it out for experience. AI should solve customer problems, not just your operational headaches.

What You Should Do About It

Here’s a quick list of actionable takeaways from this research:

🔹 Transparency is your friend. Tell customers when AI is involved and what it’s doing. Don’t make it a mystery.

🔹 Design AI for trust. Give customers some control. Offer opt-outs. Show how decisions are made.

🔹 Segment, segment, segment. Tech-savvy Gen Z might love your AI assistant. But your 70-year-old customer segment may not. 

🔹 Measure trust. This is key. Ask your customers how they feel about interacting with your AI tools, and track changes over time (see the sketch after this list).

🔹 Build bridges, not black boxes. Treat AI as part of your team, not your replacement. And never lose sight of the human touch.
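For that "measure trust" step, here is a minimal sketch of what tracking a trust score over time could look like. Everything in it is an illustrative assumption rather than a prescribed method: the `TrustResponse` record, the 1-to-5 survey scale, and the quarterly grouping are all hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

# Hypothetical survey record: "How much do you trust our AI tools?" on a 1-5 scale.
@dataclass
class TrustResponse:
    quarter: str   # e.g. "2024-Q1"
    score: int     # 1 (no trust at all) to 5 (complete trust)

def trust_trend(responses: list[TrustResponse]) -> dict[str, float]:
    """Average trust score per quarter, so changes are visible over time."""
    by_quarter: dict[str, list[int]] = defaultdict(list)
    for r in responses:
        by_quarter[r.quarter].append(r.score)
    return {q: round(mean(scores), 2) for q, scores in sorted(by_quarter.items())}

# Example: trust dips after a hypothetical Q2 chatbot rollout, then recovers.
responses = [
    TrustResponse("2024-Q1", 4), TrustResponse("2024-Q1", 5),
    TrustResponse("2024-Q2", 2), TrustResponse("2024-Q2", 3),
    TrustResponse("2024-Q3", 4), TrustResponse("2024-Q3", 4),
]
print(trust_trend(responses))  # {'2024-Q1': 4.5, '2024-Q2': 2.5, '2024-Q3': 4.0}
```

The point is not the code itself but the habit: put a number on trust, re-ask the same question regularly, and watch how the trend moves when you change your AI experience.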

My Take: Keep AI Friendly, but Keep It Real

I’ve said it before, and I’ll say it again: Customer Experience is about feelings.

You can’t just slap AI on your front page and expect everyone to swoon. It has to feel helpful, honest, and transparent. It needs to feel human. Customers want to trust you. They’re looking for signals, subconscious and otherwise, that say: “We’ve got your back.” AI can help. But only if you design it that way.

So as you start (or continue) your AI journey, ask yourself the same questions we ask on The Intuitive Customer podcast:

  • What experience are we trying to deliver?

  • What emotions are we trying to evoke?

  • And how do we make sure the machines don’t mess it up?

Thanks to Ryan for shining a much-needed spotlight on this. He may not be British, but he is brilliant. (Just don’t tell him I said that.)

This article is sponsored by SOCAP International and IA Solutions, who are both as passionate about improving customer experience as we are.

Thanks to all the people who took part on the day:

Ian Tempro: https://www.linkedin.com/in/ian-insta-answer/

Steve Samuels: https://www.linkedin.com/in/steve-samuels-7b942a200/

Tim Austrums: https://www.linkedin.com/in/timaustrums/

Nicole Nutile: https://www.linkedin.com/in/nicole-nutile/

References:

Castelo, Noah, Maarten W. Bos, and Donald R. Lehmann (2019), “Task-Dependent Algorithm Aversion,” Journal of Marketing Research, 56 (5), 809-825.

Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey (2015), “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General, 144 (1), 114.

Hermann, Erik, and Stefano Puntoni, (2024), “Artificial intelligence and consumer behavior: From predictive to generative AI,” Journal of Business Research, 180, 114720.

Ipsos (2022), “Global Opinions About AI – January 2022,” https://t.ly/qyyEI.

Longoni, Chiara, and Luca Cian (2022), “Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The “Word-of-Machine” Effect,” Journal of Marketing, 86 (1), 91-108.

Puntoni, Stefano, Rebecca W. Reczek, Markus Giesler, and Simona Botti (2021), “Consumers and Artificial Intelligence: An Experiential Perspective,” Journal of Marketing, 85 (1), 131-151.

Santoro, Erik, and Benoît Monin (2023), “The AI Effect: People Rate Distinctively Human Attributes as More Essential to Being Human After Learning About Artificial Intelligence Advances,” Journal of Experimental Social Psychology, 107, 104464.
