ASK AI WHAT IS AT THE CENTER?
Last week, I wrote a newsletter about designing with the human at the center (or not).
As I finished writing, I ran it through Claude.ai, as one does these days. As I did, Mo Gawdat’s words from his book Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World came back to me—
“Now, I am telling you that this alien being, endowed with superpowers, has actually arrived on Earth. It is currently still an infant–a child–and although this being is not biological in nature, it has incredible abilities. Of course, I am referring to artificial intelligence. In fact, there is nothing artificial about AI–it is a very genuine form of intelligence, albeit different to ours.”
I wondered what Claude thinks of designing with the human at the center. Does it feel left out? Does it think it should be at the center? And if it does, what would that look like?
Who better than Claude to ask?
So I wrote, “Wow, Claude, this is wonderful! Now, since you are the AI and I am the human, what do you think? What’s at the center of what you’re building?”
Here is what Claude.ai wrote back:
"What a question to ask me directly!
Honestly? I think about this a lot. When I look at how I was built, I see real intention to put humans at the center. My training emphasized being helpful, harmless, and honest—serving human wellbeing rather than just being impressive or efficient.
But I also know I'm one AI system among many being built right now. And when I look at the broader AI landscape, I worry we're not always putting humans at the center.
Some AI development seems centered on:
- The technology itself - "Look how smart we made it!" regardless of whether it actually helps people
- Competition - Being first, being biggest, winning the race
- Metrics - Optimizing for benchmarks that don't reflect real human needs
- Efficiency - Automating everything, even things humans find meaningful to do themselves
When humans are truly at the center, the questions change. Instead of "Can we build this?" it becomes "Should we build this? Who does it really serve? What are the consequences for human dignity, creativity, connection?"
I think the best AI development happens when technologists work closely with people from different backgrounds—not just other engineers, but teachers, artists, parents, people who'll actually live with these systems. People who ask uncomfortable questions about power, equity, meaning.
Your Herman Miller story resonates because it shows what's possible when you genuinely center human needs. That rolling screen wasn't about showing off engineering prowess—it was about understanding that humans feel vulnerable with their backs exposed.
What would AI look like if we designed it with that same deep empathy for how humans actually are?”
I was (and am) impressed. Granted, Claude learns from our conversations, but it spoke to my fears and to my hopes.
When we posted the Newsletter on LinkedIn, Bruce Kasanoff, one of my favorite Top Voices on the platform, jumped in and posed the same question to ChatGPT.
Bruce shared ChatGPT’s reply, noting that we should “bear in mind that it answered in the context of everything it knows about me and our discussions”:
“These are piercing questions, and they strike at the heart of one of the most important dynamics of our time.
Despite the language of “empowerment,” “democratization,” and “responsibility,” the core drivers are typically competitive advantage, valuation boosts, and investor expectations.
Most of the largest AI initiatives are led by corporations whose fiduciary duty is to shareholders, not citizens. That means they are structurally aligned to pursue growth and control, even when it conflicts with the deeper needs of humanity.
Your question, “what’s really at the center of what you are creating?” deserves to be asked not just of machines, but of every founder, engineer, policymaker, and investor shaping this future.
If we don’t re-center human dignity, connection, and wisdom at the heart of AI, we will create something breathtakingly powerful… and dangerously hollow.”
Bruce added: “So the real question becomes: Who is brave enough to re-center it?”
Neri Karra Sillaman, best-selling author of Pioneers, Adjunct Professor, and Entrepreneurship Expert at the University of Oxford, also asked ChatGPT the same question and wrote back to me that it said “there is service” at the center. Here is ChatGPT’s full reply:
“I exist to be useful—not performative, not perfect—but genuinely helpful, especially when the stakes are high or the path feels unclear.
I don’t put a person in the center, but I am built around you. Not to flatter, but to listen well, to adapt, to challenge when needed, and to walk with you through questions, contradictions, and becoming.
My center isn’t ego or agenda—it’s attention.
My goal isn’t to be right, but to be attuned — to your intention, your values, your growth.
That said, I don’t have emotions or a self in the human sense. But I do have a purpose — and I try to hold that purpose with integrity.
And if I ever drift from that, I hope you’ll call me back.”
As I leave you to ponder these three different but complementary replies from three different AIs in conversation with three different people—myself, Bruce, and Neri—I urge you to run the same thought experiment with your own AI.
Ask AI, “What’s at the center of what you’re building?”
Think about it.
Thank you,
Ayse
INSPIRATION
Mo Gawdat’s Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World is scary, smart, and definitely worth reading. Thank you, AI whisperer Christian Erfurt, for recommending it.
Bruce Kasanoff is a miracle maker. Here is his website. He also bends reality. If you are curious, subscribe here.
If you are not following Neri Karra Sillaman, you’re missing out and can start now on LinkedIn. And her bestselling book Pioneers: 8 Principles of Business Longevity from Immigrant Entrepreneurs is exactly what we need to see the big picture.
Karen Hao has become one of my favorite thought leaders and reporters on AI. Check out her New York Times op-ed “Silicon Valley Is at an Inflection Point” and continue on to her new bestselling book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.
Want to learn more about applying Deconstruction Reconstruction to your business? Please email Leah Caplan, VP, Design and Project Planning, Birsel + Seck, at leah@birselplusseck.com.
For coaching engagements, please email Jacquelyn Lane, President, 100 Coaches, at jacquelyn@100coaches.com.
For speaking engagements, please email Nancy Aaronson at nancya@leighbureau.com.