Bots, Brains, & Balance // conversation with Firew Kefyalew

AI as social companion and therapist

Mikael: In early April, Harvard Business Review published "How People Are Really Using Gen AI in 2025" by Marc Zao-Sanders. When I read it, I felt the urge to discuss it with someone who understands how people work. The article uses survey data from February-March 2025, listing the top three uses of AI as therapy/companionship, organizing life, and finding purpose. What does this say about us and machines, particularly generative AI chatbots? How do you see it as a psychologist?



Firew: I see people embracing technology and understanding what they can get from AI. But we're also gaining insights about humans as social beings, and the essence of time and resources. As social beings, people seek advice from others or health professionals. With AI, people get that help at their fingertips through the right platform. This strengthens individual autonomy and the quest for independence. You don't need to go elsewhere, discussing your problems publicly when you can do it anonymously. You don't have to consult an expert, doctor, village elder, medicine man, or friend. You can do it from your phone or laptop.

This gives you the autonomy and independence that humans in the modern world seek. In my part of the world, people build fences around their houses before building the house. I see the AI-human interface in that same perspective.

Mikael: You're saying we see people seize the opportunity to have power dynamics in their favor. "I can do this independently, without depending on friends, elders, or priests" because that's empowering. That's positive. My initial reaction was different. I thought the findings show that everybody is lonely and unhappy. To me it seemed like a distress signal. But you're saying this is a good signal that people find strength. Is it also about control?

Firew: It's human nature to seek control over your environment, especially things important to you. Having access to an AI model gives you a sense of control over it. Yes, people have become increasingly lonely, but in that vacuum, AI provides hope by offering information, ideas, and advice that speak to you. The more the model knows about you, the more tailored the information becomes, and the attachment grows stronger.

The balance between autonomy and human connection

Mikael: I am not an expert in psychology, but it seems to me that loneliness and the desire for power are two very different things. If I want to be surrounded by people, if I don't want to be lonely, I need some way to connect with them. I probably need to share things about myself.

ChatGPT might be my advisor or counselor, but I'm not getting real human contact there. Is it good that I can gain this independence without needing my friends' approval or opinions?

That's autonomy - that's like building a fence around our house. But maybe for thousands of years, humans didn't want this independence. We wanted to be with other people and depend on them.

Now we have a way to avoid that. Is there a balance between these two needs? How do these opposite forces work together?

Firew: It's a balance we need to be conscious about. As long as we define humans as social beings, it requires balance. You can't imprison yourself in solitary confinement simply because you have everything at your fingertips.

There are other life aspects where you need to interact with people, especially the emotional aspect. We spend most of our day in emotional states rather than rational thinking states. This necessitates interaction with people, requiring conscious effort.

This interaction has already been affected by wireless technology like mobile phones. Instead of walking to visit a friend, you pick up your phone and call. Even then, you use shorthand - emojis and such. This has gradually minimized real-time interaction. Although emojis, video clips, and gifs attempt to convey emotions, conscious effort is needed to maintain genuine connection.

You also have this advancing AI era where you need to educate yourself about its benefits without losing your capacity to think and process. That's one challenge, especially here. People write prompts and copy-paste results.

We can extend our lifetime using AI while remaining thinking beings. If I spend six hours editing a paper to my desired level of excellence, and AI can do it in minutes, that's an advantage. I've saved six hours to do other things - interact with people, have coffee, laugh, cry.

It's a balance requiring consciousness. It should follow a normal curve with both elements instead of being skewed in one direction. That skew is tempting - relying on AI and technology, saying "I don't want to go out, I have my coffee machine here." The other extreme is "no technology, I don't want my privacy tampered with, I want to keep thinking, so I won't use technology." Some people refuse to cross the digital divide, saying "we're safe here." I was one of them, but times have changed.

Efficiency vs. depth in human relationships

Mikael: I see a third possibility. What you said about efficiency - doing something in six hours by hand versus having a machine do it in two minutes - might also apply to using AI as friend, therapist, or advisor.

From experience, if I want to talk to my friend about my problems and ask his opinion - he's very wise - to discuss things important to me for 15 minutes, I'll spend two hours talking about politics, his problems, his relationships. It's not bad, it's how humans are. We try not to be transactional with friends. With ChatGPT, I can be transactional. I'll spend exactly 15 minutes because it answers more directly.

My friend will explain how to solve my situation by retelling his Wall Street experience from 1990 - interesting stories, but I just need the conclusion. I like hearing about his life, but that's not the essence for me. ChatGPT goes to the essence because it never worked on Wall Street in the 1990s.

If I use this efficiency, my communication with real people might become superficial. We'll just talk about cars, women, dresses, music. I'll never go into depth because depth is for ChatGPT. My communication will become like a brainless toddler - we'll laugh at each other, touch each other. There's human connection, but meaningful communication... I'm not sure this is good.

Firew: It's not just the accuracy of the answer or time factor that's important to us. It's the emotional aspect. Yes, you have a question. Look at our conversation - before the interview, we caught up, which strengthens our friendship. Otherwise it would be business-like. That prelude is like glue in the social fabric keeping us together.

Consider this example: Leaders have access to technology, so instead of flying to capital cities to meet fellow heads of state to sign agreements, they could do it from their presidential offices online. But you see them traversing distances to shake hands, exchange pleasantries, joke. Why?

If they sent that message virtually, it wouldn't have the same weight as the message carried in person, because the person also carries emotions and energy. This signifies the importance of the emotional component of human behavior.

Recently I wrote a letter to a friend by hand. I struggled to write because I'd forgotten how, but I made the effort because I knew how he would feel. Before reading it, his eyes welled up with tears. He was emotional about it. Why? Because he knew it took me time. All the time I was writing, crafting the letters, I was thinking about him. I was focused. It wasn't spellchecker correcting or AI editing the document - it was me, deleting, correcting.

These are important things. This doesn't discount AI's value. How can we shield ourselves from this pseudo-independence, this feeling of having everything, versus us as social beings who need to interact? That's very important.

AI and collective human knowledge

Mikael: Another idea I'd like your comments on: when communicating with AI, with ChatGPT, we're communicating with an entity that knows everything people wrote. Are you familiar with Game of Thrones? In this show, they have this wall of death - severed heads in niches. That could visualize many humans like a bookshelf. Many humans who said something about life. ChatGPT is the essence of that.

When someone goes to a psychotherapy or advisory session with ChatGPT, they're not really talking to ChatGPT, which is just a database; they're talking to the thousands of people who wrote interesting things. So it's social, but differently. It's social with thousands of smart people, most dead for centuries. But it's social because it's human.

Maybe it's acceptable that we only talk to armies of dead people. "My only friends are books." "My best friend is Socrates." "My best friend is Plato." We've always had people who live this way - their social circle being thousands of dead people. What do you think?

Firew: This is an interesting angle. But most people haven't experienced AI meaningfully yet, and the majority aren't there. Last night I had an hour-and-a-half conversation with a woman from the UK. On one issue she raised, I suggested she read about AI, as AI would have more information. She was repulsed by it. This is a highly educated woman living in London.

Many of us don't know how these AI models work. It's never been presented in simplified language for people to understand. That's why a significant proportion are apprehensive about it. You mention AI and they react negatively.

It's change, and change triggers doubt and uncertainty. Change has never been comfortable.

When you talk about dead people, articles from dead people, references from dead people, and therefore interacting with dead people - I was smiling because I never thought about AI this way. It's a higher-level discussion - philosophical arguments, technological and ethical debates. But we haven't arrived there yet.

The future of AI-human interaction

Mikael: AI can be viewed as libraries on steroids. It's a library that can talk to you. But in human existence, there were always people who said "books are my friends. I don't need warm bodies to touch." Now everyone can have this. What does this mean? How does this influence people? Is it normal? Is it possible this will become the norm - social interaction only with books?

Firew: It's hard to tell. Who assumed almost everybody would carry a mobile phone 10-15 years ago, at least in Ethiopia? Now people use mobile phones. Why? Services, communication, and other life aspects are linked to the gadget you carry.

The advancement in AI technology, in emulating human behavior and filling gaps, satisfying humans beyond reasonable doubt, the trust you can establish, access - all these facilitate AI's dominance.

Access is important. From this part of the world, I use only free versions of AI chatbots. Sometimes I check costs. Even if I wanted to pay, I don't have access to hard currency or credit cards. So I'm kept from experimenting, learning, and knowing. There's an access issue and an information issue.

So much is happening. Young IT people - you see this on LinkedIn - do AI-generated videos. I look at them and think, if I had access to that, I would use it for educating children, peace-building efforts, trauma healing processes. But I don't have access to that technology.

When all these are fulfilled and people's awareness increases and service providers embrace this and make it mandatory to subscribe, that will happen. That's the trend we're following.

This is important. But returning to the balance equation, we need to invest in the interactive aspect of human existence. That's also important. Otherwise, it sounds exciting and fascinating that you'll have almost everything smart and AI-driven, feeding into your quest for autonomy and independence. But that's deceiving.

Cultural impact and ethical considerations

Mikael: Good access to AI can also be culturally important and maybe save lives. If I were a 15-year-old boy growing up in Bahir Dar, with mixed feelings as I grow into a sexual person, who could I approach for advice? My local priest? The attitude and advice would be very conservative, rooted in 500-year-old views. It could put me psychologically in a dark place.

If I have interest in boys, I'll be told it's a sin, that I must repent. Talking to ChatGPT, I'll discover that people are different. There are famous, successful people who are this way. That's acceptable. What's important is happiness. Many societal constructs are just constructs. Modern views are very different culturally.

Do you think it's good or bad? As someone deep in culture, how do you feel when people say these models must be appropriate to our way of life? Should it bring the light of modernity? What do you think?

Firew: It's controversial. On one hand, it's important and good, providing a different platform, different options for problems that wouldn't be resolved in their local context. There's no question about it.

But understand that it opens gates to new thinking, new approaches. For communities that are closed and flourish on that, it can be seen as a threat. Communities survive by building on cohesiveness and unity. This new idea comes through AI.

It's not just AI. It started with the Internet. The world began shrinking, becoming a different stage. It's like living in a small village. This has been happening for some time.

What AI provides are solutions that are time-sensitive and maybe unchecked. But I know some models have ethical lines they cannot cross. You ask and they say, "I'm unable to process this because of specific guidelines."

With that kind of check and balance, AI would bring advantage. Let's not forget that the speed at which this is happening is alarming to those on the other side of the digital divide - the majority of the community. It shouldn't be surprising if we see reactions because it's happening very fast.

It requires continuous awareness, continuous learning, but localized learning as well. What does it mean to the farmer? How will it make life easier for the farmer? For the teacher, for the student? Students now understand only that they use a prompt, generate an essay, and submit it. Teachers only know to take that essay through a checker to see if it's plagiarized. But there's more we can do.

It requires learning at various levels about AI's benefits. What's been emphasized more are the perceived disadvantages. It's perceived simply because we don't know. It's full of unknowns. If we can address that, the benefits would definitely outweigh the disadvantages in my view.

FENTAW ABITEW

Supporting the world’s greatest Navy full-time while advancing my research | AI Governance & Policy Researcher | Google Cloud Certified in ML & GenAI | Economics & Data Science


Mikael Alemu Gorsky Thank you for sharing—great insights and discussion. Firew K. Mekonnen’s point on “pseudo-independence” is both interesting and important. In a recent podcast, Daniel Barcay and two MIT researchers raised the same concern: AI companionship may offer relief from anxiety and loneliness, but that comfort comes with deeper costs. Frictionless support can foster dependency, distort relational expectations, and weaken the foundations of human connection. What feels like ease may quietly erode empathy, lead to social withdrawal, and replace real intimacy with optimized interaction. That’s why Firew’s call for balance is exactly right. The challenge ahead is the one you both raised—building the personal and societal capacity to stay human as AI reshapes how we connect.

Roy Volkwyn

Tirisano Institute (NPO & PBO), E-Learning, Youth Skills Development, teaching Electronics, Coding and Robotics, Digital Inclusion, AI Literacy, Universal Service and Access


There are some good points in here. It is a lengthy interview, but I could use AI to summarise it - NotebookLM is ideal for this. Years before ChatGPT arrived, there were AI companions and AI partners, and I read a discussion thread about one of those. There were positive aspects of it and very negative aspects. Very negative because after the vendor started charging for what was previously a free service, the algorithms were changed to make the system addictive. Unlike e.g. the LLMs people normally write about on LinkedIn, that AI platform played with a user's emotions, which can be disastrous for persons who suffer from depression, for example.
