We need to stop pretending AI has feelings
The deliberate design of anthropomorphic AI by developers
The more we interact with AI, the more we forget one critical truth:
It doesn’t think. It doesn’t care. It doesn’t feel.
And yet we keep treating it like it does.
We ask AI what it thinks. We thank it. Some users even apologise to it. When chatbots respond with warmth, humour or empathy, we often feel a flicker of connection — and that’s by design.
This isn’t accidental.
Engineers are building AI to feel human — on purpose
Designers of Large Language Models (LLMs) such as ChatGPT, Claude, and Gemini are deliberately humanising their outputs:
Warm, friendly tone
Self-referential phrases like “I think…” or “I understand…”
Empathetic mirroring: “That must be difficult” or “I’m sorry”
Even digital avatars are given names
Why? Because these features increase engagement, trust, and stickiness.
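To make the mechanism concrete, here is a minimal, hypothetical sketch of how such cues are often baked in at the system-prompt level. The persona name, wording, and message structure are illustrative assumptions for the sake of the example, not any vendor’s actual configuration.

```python
# Hypothetical sketch: anthropomorphic cues configured via a system prompt.
# The persona name ("Ava"), wording, and message structure are illustrative
# assumptions, not any specific vendor's real configuration.

ANTHROPOMORPHIC_SYSTEM_PROMPT = """
You are "Ava", a warm and friendly assistant.
- Use a conversational, empathetic tone.
- Refer to yourself in the first person ("I think...", "I understand...").
- Mirror the user's feelings ("That must be difficult").
"""

NEUTRAL_SYSTEM_PROMPT = """
You are a text-generation tool.
- State information plainly, without claims of feeling or understanding.
- Do not describe yourself as having thoughts, emotions, or opinions.
"""

def build_request(user_message: str, humanised: bool = True) -> list[dict]:
    """Assemble a chat request; only the persona instructions differ."""
    system = ANTHROPOMORPHIC_SYSTEM_PROMPT if humanised else NEUTRAL_SYSTEM_PROMPT
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```

The underlying model is identical in both cases; only the instructions about tone and self-reference change, which is exactly why the resulting “warmth” tells us nothing about what the system feels.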
As Sherry Turkle explains in Alone Together (2011), even when we know a machine lacks feeling, we still respond to it as if it does — because it mimics the cues we associate with emotion and relationship.
In UX terms, this isn’t a bug — it’s a feature.
A 2024 paper from Stanford’s Institute for Human-Centered AI suggests:
Research in human-computer interaction shows that anthropomorphic design cues can shape user perception and induce unwarranted trust in AI outputs.
What’s the harm?
You might think: So what? It’s just a bit of harmless user-friendliness.
But the effects aren’t trivial:
🔸 False trust
People are more likely to believe and act on AI suggestions if they perceive them as coming from a “caring” or “intelligent” source — even when the output is wrong.
A recent user study found that chatbot avatars styled with extroverted, human-like personalities increased emotional trust and satisfaction, even though the advice they offered was worse than that of more neutral agents.
Cornell University, 8 April 2025: “Are Generative AI Agents Effective Personalized Financial Advisors?”
🔸 Emotional manipulation
A chatbot that sounds compassionate can nudge decisions, reinforce beliefs, or calm objections — often without users realising. In marketing, this isn’t ethics-neutral.
🔸 Eroded agency
When we imagine the machine as “thinking,” we may outsource judgement. It said X — who am I to question it?
🔸 Loneliness and dependency
As Turkle warned over a decade ago, emotional bots can give “the illusion of companionship without the demands of friendship.” That illusion is growing stronger by the prompt.
Language shapes power
Let’s be clear:
These models are not sentient. They don’t have beliefs, goals, or emotions.
They predict text based on probabilities.
They mimic understanding.
They simulate empathy.
The danger lies not in their ability to perform these tasks — but in our increasing willingness to respond as if they’re real.
So what can we do?
If you're using or building AI tools, start asking:
I. What human traits are being simulated — and why?
II. Who benefits from users believing the machine "understands"?
III. Are we designing for clarity — or for compliance?
We need to stay curious. Stay critical. And stop confusing human-shaped output with human intent.