We Yell at Siri and Trust ChatGPT—But Should We? The Hidden Psychology of Talking to AI

Why We Trust Talking Toasters: The Hidden Risk in How We Design AI

From the gods of ancient Greece, who embodied the wind and the sea, to our modern habit of naming our cars and yelling at Siri, humanity has always had a knack for turning objects into companions.

This deeply rooted instinct—called anthropomorphism—is how we’ve historically made sense of the world. We project emotions, agency, and intention onto non-human things, whether a river, a robot… or now, a chatbot.

And in the age of artificial intelligence, anthropomorphism isn’t just quaint—it’s potentially dangerous.

🧠 The Problem With Friendly Bots

You type a question. The AI responds smoothly: “Great question! Here’s what I think…”

It sounds confident. Helpful. Maybe even wise. You relax. You trust.

But here’s the truth: LLMs don’t "know" anything. They’re not thinking. They’re predicting—selecting the next word in a sentence based on vast patterns in data. They don’t believe, feel, or understand.

And yet, the more natural their responses feel, the more we begin to treat them as if they do.

That’s when problems arise:

  • A student cites an AI hallucination in a research paper.

  • A patient takes AI-generated advice as medical truth.

  • A business relies on a polished-sounding but incorrect forecast.

🔍 What the Research Says

This research paper from Harvard—The Impact of Revealing Large Language Model Stochasticity on Trust, Reliability, and Anthropomorphization (CHI 2024)—caught my attention not from a technical perspective, but from the perspective of how AI design choices subtly shape human perception, behavior, and even trust in authority. It opened my eyes to the broader societal implications of something as simple as showing one response versus ten.

How Do We Measure Anthropomorphism and Trust?

To understand how people perceive and relate to AI systems, the researchers didn't rely on guesswork. They used three well-established psychological instruments to quantify how human-like participants found the AI—and how much they trusted it.

  1. IDAQ (Individual Differences in Anthropomorphism Questionnaire): This questionnaire measures a person’s baseline tendency to attribute human traits to non-human entities. It asks participants to rate things like whether they believe a robot can have emotions, or whether a car has free will. It’s a way of understanding: how likely is someone to anthropomorphize anything, regardless of context? In this study, it provided crucial context: those who scored high on IDAQ were more susceptible to trusting the AI or seeing it as human-like, even when they were shown evidence to the contrary.

  2. The Godspeed Questionnaire: Originally designed for evaluating human-robot interaction, this tool zooms in on the interface experience itself. Participants were asked to rate the AI responses on a spectrum: Were they machine-like or human-like? Natural or artificial? Conscious or mechanical? This allowed researchers to measure not just people’s general tendencies, but how the design of the LLM interface actively influenced those perceptions.

  3. The Human-Computer Trust (HCT) Scale: Trust isn’t just about whether something “works.” It’s emotional, cognitive, and often subconscious. The HCT Scale unpacks trust into five distinct dimensions, each reflecting a different way we relate to technology:

  • Perceived Reliability: Does the system behave consistently? Can I count on it to give similar answers under similar conditions?

  • Technical Competence: Does it seem capable and intelligent? Does it give responses that reflect deep understanding or expertise?

  • Understandability: Do I feel like I get how it works? Even if I don’t know the exact algorithm, can I anticipate its behavior?

  • Faith: Do I believe in its advice even when I can't verify it? Do I trust it by default, especially when I'm unsure?

  • Personal Attachment: Do I feel emotionally connected to the system? Do I prefer using it over other tools? Would I miss it if it were gone?

What Did They Test?

Participants interacted with the same AI system under three interface designs:

  1. Single Response – one answer, similar to most chatbots today.

  2. Multiple Responses – ten AI-generated answers shown together.

  3. Multiple Responses + Cognitive Support – the same ten responses, but enhanced with a visualization tool called Positional Diction Clustering (PDC) to highlight similarities across them.
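
To make the multi-response condition concrete, here is a minimal sketch of how a product team might sample several answers to the same prompt. This is not the study’s actual setup: the model name, prompt, and use of the OpenAI Python SDK (with an API key in the environment) are placeholder assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "What are the long-term effects of intermittent fasting?"  # placeholder prompt

# Ask for ten completions of the same prompt. With a non-zero temperature,
# the model samples different token sequences, so the answers will vary.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model, not the one used in the study
    messages=[{"role": "user", "content": question}],
    n=10,                 # ten independent samples, as in the multi-response condition
    temperature=1.0,
)

answers = [choice.message.content for choice in response.choices]

# Showing all ten side by side is what makes the model's variability visible.
for i, answer in enumerate(answers, start=1):
    print(f"--- Response {i} ---\n{answer}\n")
```

With a non-zero temperature, the ten answers will differ, and that visible variability is exactly what the study surfaced to participants.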

What They Found:

  • Single-response interfaces encouraged greater anthropomorphism and blind trust. One voice = one expert. Simple, but misleading.

  • Multiple responses exposed the model’s variability, helping users see the system as probabilistic—not omniscient. Trust became more calibrated.

  • Cognitive support helped users digest the extra information, reducing overload while preserving transparency.

The takeaway? How we design the interface shapes how we perceive the AI.

Why This Matters for the Next Generation

Kids born today may never know a world without AI assistants. Their first “mentor” might not be a teacher—it could be a voice in a device. If they grow up seeing polished, single-answer AIs, they may equate fluency with truth.

Just as children believe in Santa because adults never break the illusion, users believe AI is "smart" because the interface doesn’t reveal its limits.

That’s on us.

What Businesses Should Do

As LLMs become embedded in products—from search and support tools to content creation, diagnostics, and coaching apps—business leaders and product teams need to think beyond model performance. The interface is the experience. And that interface shapes how users perceive, trust, and rely on your AI.

Here’s your design checklist for building AI products that earn trust responsibly:

✅ Transparency Through Interface Design

Trust doesn’t come from a legal disclaimer at the bottom of the page. It comes from how the system behaves and explains itself. The goal isn’t to overwhelm users with technical details—it’s to make uncertainty visible.

Consider these practices:

  • Show multiple responses instead of just one. This breaks the illusion of a single “truth” and invites users to compare and reflect.

  • Visualize confidence or uncertainty using progress bars, shaded highlights, or phrases like “This answer may not be reliable.”

  • Include source attribution or citation links to help users trace information and validate AI-generated content.

🔍 Example: Instead of saying “Your blood pressure result is normal,” a healthcare chatbot might say, “This result appears normal based on general guidelines—but ranges can vary by age and condition. [Learn more].”
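
To show what “visualize confidence or uncertainty” could look like behind the scenes, here is a rough sketch that turns agreement between sampled answers into a plain-language caveat. Simple string overlap is a crude stand-in for real agreement measures (and for the clustering visualization in the paper); the function names and thresholds are invented for illustration.

```python
from difflib import SequenceMatcher
from statistics import mean

def agreement_score(answers: list[str]) -> float:
    """Rough agreement signal: mean pairwise text similarity between sampled answers (0..1)."""
    pairs = [
        SequenceMatcher(None, a, b).ratio()
        for i, a in enumerate(answers)
        for b in answers[i + 1:]
    ]
    return mean(pairs) if pairs else 1.0

def uncertainty_caveat(answers: list[str]) -> str:
    """Map agreement onto the kind of plain-language caveat shown next to an answer."""
    score = agreement_score(answers)
    if score > 0.8:
        return "The model's samples largely agree on this answer."
    if score > 0.5:
        return "The model's samples partly disagree; treat the details with care."
    return "This answer may not be reliable; the model's samples diverge widely."

# e.g. print(uncertainty_caveat(answers))  # 'answers' from the sampling sketch above
```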

✅ Design for Doubt, Not Dependence

The goal isn’t to make users distrustful—it’s to help them stay alert, analytical, and in control.

Effective ways to embed constructive skepticism:

  • Enable side-by-side comparisons of AI-generated answers, especially for research or recommendations.

  • Highlight contradictions or diversity of viewpoints if responses differ, especially for open-ended or subjective questions.

  • Let users rate, flag, or reflect on responses—turning passive consumption into active engagement.

🧠 Why it matters: People don’t always want to second-guess an answer. Good interface design nudges them to think critically without creating friction.
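
As one hedged sketch of the side-by-side idea: pick the two sampled answers that disagree the most and present them together, so disagreement is something the user sees rather than something hidden behind a single polished reply. The helper names are hypothetical, and text similarity is only a rough proxy for semantic disagreement.

```python
from difflib import SequenceMatcher
from itertools import combinations

def most_divergent_pair(answers: list[str]) -> tuple[str, str]:
    """Return the two sampled answers that differ the most (assumes at least two answers)."""
    return min(
        combinations(answers, 2),
        key=lambda pair: SequenceMatcher(None, pair[0], pair[1]).ratio(),
    )

def comparison_card(answers: list[str]) -> str:
    """A plain-text stand-in for a side-by-side comparison widget."""
    left, right = most_divergent_pair(answers)
    return (
        "These two answers to your question disagree; compare them before relying on either.\n\n"
        f"Answer A:\n{left}\n\nAnswer B:\n{right}"
    )
```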

✅ Educate Through Interaction

Don’t just provide answers—foster understanding. Help users develop intuition about how AI works and where it might fall short.

You can:

  • Use tooltips, hover-over info, or a built-in “How this works” explainer that demystifies the AI.

  • Embed context-aware warnings or reminders when AI provides answers in domains like law, healthcare, or finance.

  • Add "Did you know?" prompts—e.g., “Did you know this answer was selected from 10 different possibilities?” or “This response reflects patterns in past data, not real-time knowledge.”

🎓 Long-term payoff: The more users understand the nature of AI, the more likely they are to use it responsibly—and to trust your product for the right reasons.
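
As a sketch of the context-aware warnings and “Did you know?” prompts above, the snippet below decorates an answer with a provenance note and, for sensitive domains, a short reminder. The domains, wording, and helper name are invented for illustration; a real product would tune them with legal and UX review.

```python
# Hypothetical helper that decorates an AI answer with educational context before rendering.

DOMAIN_WARNINGS = {
    "health": "This is general information, not medical advice; check with a clinician.",
    "legal": "This is not legal advice; laws and procedures vary by jurisdiction.",
    "finance": "This is not financial advice; consider your own circumstances.",
}

def annotate_answer(answer: str, domain: str | None, num_samples: int) -> str:
    """Attach a 'Did you know?' provenance note and, where relevant, a domain reminder."""
    notes = [
        f"Did you know? This answer was selected from {num_samples} different possibilities "
        "and reflects patterns in past data, not real-time knowledge."
    ]
    if domain in DOMAIN_WARNINGS:
        notes.append(DOMAIN_WARNINGS[domain])
    return answer + "\n\n" + "\n".join(f"Note: {note}" for note in notes)

# e.g. annotate_answer(best_answer, domain="health", num_samples=10)
```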

✨ Bottom Line

LLMs are powerful—but power without friction can create illusions. And illusions, especially in AI, are dangerous.

Great AI products don’t just answer questions. They empower users to ask better ones.

By embedding transparency, doubt, and education into your design, you’re not just protecting users—you’re enhancing their intelligence, agency, and trust in you.

🔁 How AI Companies Are Handling Anthropomorphism

Many companies recognize the issue—but responses vary:

  • Some double down on human-like personas (think: avatars, names, “feelings”), leaning into user engagement even at the cost of clarity.

  • Others are exploring ways to de-anthropomorphize AI: Google Gemini shows multiple drafts; Perplexity cites sources; Claude offers contextual disclaimers.

  • Ethical frameworks are emerging that question: Should we ever build emotionally resonant AIs that users might come to trust too much?

Regulators are starting to take note too. The balance between usability and user protection is becoming a major focal point.

🧪 Where the Research Could Go Next

This is only the beginning. Future questions include:

  • How does emotional AI (affective computing) amplify trust—or skepticism?

  • Do people exposed to anthropomorphic AI long-term struggle to distinguish between human and machine communication?

  • Are some populations (e.g., children, older adults) more vulnerable to trusting AI, and how can design account for that?

  • What happens when AI lives not in a browser, but in your VR headset or smart glasses?

The Final Word

Anthropomorphism isn’t a bug—it’s a feature of how we think.

But when it comes to AI, it’s a double-edged sword. We can’t stop people from seeing personality in AI—but we can build systems that educate, challenge, and empower rather than deceive.

As we design the next generation of AI tools, let’s not just ask, “Can it speak like us?”

Let’s ask: “Does it help us think better?” “Does it help us stay in control?”

And above all—“Is it worthy of our trust?”

📖 Recommended Reading: The Impact of Revealing Large Language Model Stochasticity on Trust, Reliability, and Anthropomorphization by Swoopes, Holloway & Glassman (CHI 2024).

💬 What do you think—should AI feel more human or more honest? Let’s discuss.
