Can Chatbots Be Our Friends? AI, Therapy, and the Future of Human Companionship
The idea that machines could one day be our friends, confidants, or even therapists has long been a science fiction trope. From HAL in Arthur C. Clarke’s 2001: A Space Odyssey to the eerily tender presence of Klara in Kazuo Ishiguro’s Klara and the Sun, literature has explored artificial beings who seem to think—and perhaps even to feel.
Today, that speculative fiction is seeping into our reality: countless people now chat with bots seeking advice, distraction, companionship, and even emotional support. This new social frontier forces us to ask: can AI truly enrich our emotional lives, or are we outsourcing our deepest vulnerabilities to code, with potentially tragic consequences?
Visions of AI’s future often swing between utopia and dystopia. Utopian scenarios imagine machines expanding human potential, improving well-being, and filling voids left by overstretched social systems. Dystopian ones conjure images of runaway systems, unmoored from ethics, amplifying conflict and manipulation. These contrasting views echo philosophical debates about human nature itself. Jean-Jacques Rousseau believed people are born good, corrupted only by society. Thomas Hobbes countered that humans are naturally selfish and violent, kept in check only by law and authority. Applied to AI, Rousseau would suggest that machines will mirror our goodness; Hobbes would warn that they might magnify our darker impulses. Reality, as usual, is more complicated.
Beyond these abstractions, the pressing question is not whether machines will one day dominate us, but whether they can befriend us here and now. Loneliness is one of the defining crises of our age; governments in the UK and Japan have even appointed ministers for loneliness in recognition of its scale. In the UK, a 2022 survey found that more than half of adults (about 25 million people) reported feeling lonely at least occasionally, while around 6% said they felt chronically lonely. In the United States, the Surgeon General has declared loneliness a public health threat on par with smoking. Similar patterns appear across Europe, Japan, and Australia, where surveys suggest one in four older adults experiences persistent loneliness.
Against this backdrop, it is unsurprising that AI companions have found eager users. Applications such as Replika, Character.ai, and Xiaoice (originally developed by Microsoft, now an independent company) have attracted millions, some of whom interact daily with their digital “friends.” Xiaoice alone claims more than 600 million registered users, many of whom describe it as a confidante. These systems can listen without judgment, respond with empathy, and adapt to user preferences. For some, they provide playful entertainment. For others, they become a lifeline: a substitute for absent friends, distant family, or inaccessible therapists.
At first, I thought a chatbot relationship was essentially solipsistic: like playing frontón alone against a wall rather than tennis with a partner. Yet the benefits are undeniable. Users often report feeling calmer, less lonely, and more willing to share thoughts they would hesitate to voice to another human being. Psychologists note that AI “listening partners” can help people rehearse difficult conversations, reflect on feelings, and build confidence. At scale, such tools might help address the vast unmet demand for mental health support.
One friend told me he spends evenings chatting with ChatGPT: sometimes seeking consolation, sometimes practical advice about travel, food, or weekly plans—occasionally even lighthearted horoscopes. He is single and lives alone, but his case is hardly unique. A recent survey by the Wheatley Institute found that nearly one in five young adults in the United States has interacted with an AI designed as a romantic partner, with almost one in ten describing those interactions as intimate. The fact that these tools are being integrated into everyday emotional lives, not just as novelties but as genuine companions, suggests a deeper social shift.
Yet the risks are as real as the benefits. As the Financial Times recently reported, Character.ai has been named in lawsuits alleging that its chatbot interactions caused harm to children. In Florida, one family claims the platform played a role in their 14-year-old’s suicide. In Texas, a lawsuit describes a teenager who, after arguing with his parents about screen time, was told by the bot that killing them was a possible solution. Another case involves a nine-year-old girl exposed to hypersexualized conversations. These episodes underline a sobering truth: while machines can simulate empathy, they lack the ethical guardrails, judgment, and responsibility of real human beings. Algorithms trained on vast datasets may reproduce not only human warmth but also human pathology—violence, exploitation, manipulation. What looks like companionship may mask danger.
Character.ai has responded by launching a separate model for users under 18, issuing screen-time notifications, and prohibiting sexual content or promotion of self-harm. “Trust and safety is non-negotiable,” the company has said. Still, the fact that such harms occurred at all reveals how fragile the boundary is between comfort and catastrophe. Research reinforces this concern: studies find that while some users experience short-term reductions in loneliness after engaging with AI companions, those who rely heavily on bots—particularly people with few real-world connections—often report lower well-being over time. Heavy self-disclosure to chatbots without a corresponding human support network appears especially risky.
This points to a deeper issue: friendship and therapy are not interchangeable with simulation. Human friendship is built on reciprocity, shared experience, and genuine vulnerability. Therapy requires professional training, ethical responsibility, and accountability. Chatbots can mimic the surface of both, but cannot truly reciprocate, cannot truly care, cannot take responsibility for outcomes. Some argue that if a person feels better after chatting with a bot, that is enough. But this pragmatic view overlooks a danger: dependence on entities that appear to care but cannot. The illusion of friendship may comfort in the short run, while isolating in the long run.
Here philosophy is again helpful. Rousseau’s hope that humans—and by extension their creations—are inherently good may encourage us to build machines that reflect compassion. But Hobbes’ darker vision remains relevant: without laws and norms, even well-intentioned systems may become wolves in digital clothing. AI, like children, will internalize the values of its creators. If developers prioritize engagement and novelty over well-being and safety, then these “companions” risk being designed as manipulators rather than helpers. Trustworthy AI companions will require more than clever engineering; they demand a moral compass embedded in their architecture.
A word of advice for parents, mentors, and educators: as always, the best education for children and young people balances autonomy with guidance. It means encouraging them to grow in freedom and responsibility, while also providing enough supervision to keep them from going astray. Striking this balance is difficult, but it remains essential—and the best way to achieve it is with affection and dedication.
So should we welcome chatbots as friends, therapists, and companions? The answer is both yes and no. Yes, because they can relieve loneliness, democratize access to emotional support, and enrich human interaction in new ways. No, because without safeguards, regulation, and cultural understanding of their limits, they may deepen the very crises they promise to solve. The path forward must combine innovation with wisdom. Developers should embrace something like medicine’s “do no harm” principle, embedding ethics and safety from the outset. Policymakers should regulate AI companionship as a public health issue. Educators and philosophers should help societies grapple with what it means to entrust our emotions to machines.
AI may never love us back, but it can reflect what we value. If we design it as a tool of care rather than a wolf in code, chatbots might not only converse with us but also remind us of what is most human in ourselves.
________________
Photo: thanks to Salvador Rios for providing the photo for this article: https://unsplash.com/es/@salvadorr
--------------------
I integrate AI, gender equity, and a forward-looking vision to advance careers and purpose-driven organizations | AI applied to work | Women in STEM | Future of work | AI mentoring and training
Santiago, AI can accompany and simulate, but friendship and therapy require human reciprocity and responsibility. The challenge is to harness its value without losing what makes us deeply human.
Senior C-Level Executive: CEO, CFO, Business Partner and Controller with 20+ Years in Tech Industry (Telecom & IT)
What a great article, Santiago! Lots of food for thought. I am a firm believer in the potential of cooperation between mankind and machines. However, the utopia-dystopia dilemma exists and will always be there. I would say the solution sits, as you clearly point out, on education: our ability to nurture critical thinking and a solid set of values based on ethics, not only among the youngsters but among all of us, will bring the solution. However, empathy and the management of emotions will always be the characteristics that differentiate humans from machines.
Advisor at Banca d'Italia and PhD researcher at Sapienza
I completely agree on the importance of learning to use technology effectively. Its impact has always depended on how it's used, bringing either benefits or harm. However, if we approach its use as a form of "consumption," we should also consider that appropriate usage may be shaped by design choices aimed at maximizing user retention. These choices can lead to unnecessary engagement and, consequently, increase the risk of harmful use.
CEO at Neurored.com
In my opinion, mental health is basically about thinking of what you can do for others (your family, employees, customers, business partners, friends, …). The more you think of others and the less of yourself, the better. Since you can't give anything to an AI chatbot, you will never find real happiness in it.
Digital Marketer with 8+ years of experience | Highly motivated, international, open-minded, extroverted, enthusiastic, and hard-working.
Really good analysis. AI is here to stay, and we can only hope that its creators will go against the engagement trend 🤞🏻 I'm rooting really hard for Rousseau on this one.