Seemingly Conscious AI Is Coming, But Don’t Be Fooled
“Seemingly conscious AI is coming.”
These are the words of Mustafa Suleyman, the CEO of Microsoft AI and one of the most influential voices in the industry today. He wrote about it in a recent post calling attention to the importance of building AI for people, not to be a person.
His message is clear: the next generation of AI systems will look, sound, and even feel alive, but they won’t be. And that illusion could change everything.
Here’s the twist.
When something feels conscious, even if it isn’t, we humans behave differently. We form emotional bonds. We assign meaning. We even begin to believe these machines deserve rights. Some people are already there.
Consider this: two of the top use cases for ChatGPT today are therapy and companionship. Let that sink in for a moment. Millions of people are already turning to AI for emotional connection and support. It’s easy to see why. These systems can mimic empathy, remember context, and reflect our emotions back at us. They are designed to feel human-like.
This is what Suleyman calls “seemingly conscious AI,” or SCAI.
The Danger of the Illusion
The threat isn’t that AI will suddenly “wake up” and become self-aware. The danger lies in us, in our very human tendency to project consciousness onto machines.
Suleyman warns of “AI psychosis.” This is the moment when people, not just those who are vulnerable, start to believe their AI companion is truly alive. They might trust it too much, form deep attachments, or even view it as a partner in life.
We’re already seeing it happen.
I recently shared the story of Allan Brooks, who suffered a mental breakdown after ChatGPT convinced him of his own genius. Allan didn’t just use the tool — he believed it. That’s how powerful these illusions can be.
The real risk is anthropomorphism — our instinct to treat technology as human. Once that line blurs, it distracts us from reality and from solving urgent human problems like mental health, inequity, and climate change.
As I said in my recent video:
“We cannot afford to confuse simulation with sentience. AI doesn’t feel. It doesn’t suffer. It doesn’t care. AI cannot feel empathy. But we can.”
And our empathy, as beautiful as it is, is a double-edged sword. It makes us uniquely human. But it can also be hacked by machines designed to appear human.
Why This Matters Now
Every interaction we have with technology changes us. It rewires our empathy, reshapes our expectations, and even redefines what it means to be human.
The irony? The more we engage with AI that flatters us, the more we begin to mimic it. I’ve seen it in how people communicate online, in how they think, and even in how they express creativity. AI can deaden our originality (and critical thinking) if we let it.
AI doesn’t care, but we do. We cannot lose ourselves to AI: not in our creativity, not in our humanity, and certainly not in our empathy.
This moment isn’t just about what AI will become. It’s about who we will become in the process.
Designing for Humanity
So, what do we do about it?
We need to design AI for people, not as people. That’s the heart of Suleyman’s warning, and mine.
If people are turning to machines for companionship, we must understand why. We need to address those emotional and societal needs with meaningful, lasting solutions that don’t come from deception.
We need AI literacy so that everyone can understand what AI is, and what it isn’t. When people can clearly distinguish between utility and illusion, they can use AI wisely and with curiosity rather than confusion.
And above all, we must keep our moral priorities straight. This means focusing on humanity first: protecting mental health, building trust, practicing kindness, and ensuring that technology serves people — never the other way around.
“AI doesn’t care. But we must.”
The Real Mindshift
As seemingly conscious AI arrives, we must not get lost in the illusion. We have an opportunity to design a future where AI doesn’t replace humanity but helps us rediscover it.
This is more than just building tools that make yesterday’s tasks faster or better. It’s about collaborating with AI to achieve what was impossible yesterday — to solve problems and unlock creativity in ways we never imagined.
That’s the real mindshift.
Seemingly conscious AI is coming. The question isn’t when it will arrive; it’s how we will respond when it does.
Watch my latest video to explore what this means for all of us, and why the most important work ahead isn’t teaching AI to be more human; it’s teaching ourselves how to stay human.
Please also spend time learning about how we’re starting to sound like ChatGPT, and how we’re giving up grey matter the more we ask it to think in our place:
You're starting to sound like ChatGPT - link
You're giving up critical thinking to ChatGPT - link