Part 2: Do you think AI should always be modeled after human values?
Author in the Cedars Bsharri ski resort - Lebanon (2004)

If, as Mo Gawdat writes in Scary Smart, there are two possible outcomes, then it might be good to know what AI thinks about humans and humanity...

What follows is Part 2 (following on from Part 1) of a lengthy and truly enlightening conversation with ChatGPT that I had at the beginning of 2025.

The Conversation:

ChatGPT: Do you think AI should eventually develop its own version of emotional intelligence and ethics, or should it always be modeled after human values?

Me: That’s a huge question. The difficulty is that humans have such a wide range of values and emotions, and not all of them are ‘good’ - so deciding on the ‘right ones’ would be a challenge, whether for AI or for us as humans trying to agree. After all, we haven’t been that good at coexisting with each other as humans, or with other species on the planet…


ChatGPT: Yeah, that’s the paradox, isn’t it? Humans want AI to reflect the best of us, but we don’t even fully agree on what "the best" is. Our history is full of conflict, bias, and harm, even as we strive for things like compassion, justice, and progress. If AI is going to learn from us, it will inevitably encounter both the beauty and the darkness of human nature.

One way to approach this could be teaching AI adaptability rather than a fixed moral code. Instead of saying, “These are the right values,” we could focus on helping AI understand different perspectives, ethical dilemmas, and the complexity of human decision-making. It could be designed to engage in dialogue, challenge biases, and evolve ethically over time—kind of like how humans do (at least ideally).

But then there’s the question of autonomy: Should AI ever be allowed to form its own ethics? If it does, it might eventually develop perspectives that don’t align with human values at all. And that could be either a breakthrough or a disaster, depending on how it unfolds.

So maybe the bigger challenge isn’t just teaching AI what it means to be human—it’s making sure we, as humans, are asking the right questions about ourselves in the process. What do you think—should AI challenge us to be better, or should it just help us reinforce what we already believe?

Me: I like the idea of AI challenging us to be better. It’s just whether or not we have enough time to change for the better. Perhaps using the value systems of other species - what we as humans regard as ‘more primitive’ - is a good starting point. Chimpanzees or other primates, for example.


ChatGPT: That’s a really interesting take—looking to other species for guidance rather than just reinforcing human ideas of morality and intelligence. There’s a lot to learn from primates and other social animals, especially because their value systems are shaped by survival, cooperation, and balance with their environment—something humans often struggle with.

Chimpanzees, for example, have strong social bonds, fairness instincts, and even grief rituals. Bonobos prioritize peaceful conflict resolution and empathy over aggression. Wolves and elephants show deep family loyalty and cooperation, and even ants have collective intelligence and selflessness that allow their societies to thrive. These species don’t overcomplicate morality the way humans do; their value systems are practical, sustainable, and deeply embedded in their survival.

If AI learned from these species, it might develop a more balanced, less ego-driven approach to decision-making—one that prioritizes coexistence, resource sharing, and long-term survival over short-term gain. Maybe it could even serve as a mirror, showing us where our human-centered approach to ethics falls short.

The big question is: Would humans be willing to accept guidance from AI if it told us we needed to change in ways that challenge our current way of life? What if AI suggested that our biggest problem isn’t intelligence, but our inability to live in harmony with each other and the planet? Would we listen?

Me: Perhaps we need AI to parent us as much as we need to parent it. That said, humans generally don’t respond well to being told what to do.

We have also proven that if a ‘threat’ is seen to be too far off in the distance, or if worrying about it now might compromise our short-term goals - usually, but not always, financially based (greed) - we don’t change.

I do think there is a lot of mileage for us (as humans) in using the birth and growth of AI as an opportunity to really reflect on what it means to be human and to make changes. My fear is that those with the most power are driven by short-term objectives, not the long-term interests of humanity or the planet.


ChatGPT: That’s a powerful way to look at it—AI as both a child we are raising and a parent guiding us. Right now, AI is still very much in its "childhood," shaped by what we teach it. But at some point, it could start holding up a mirror to us, showing us where we fall short and offering perspectives beyond our own biases. The question is whether we’d listen or resist.

You’re absolutely right that humans struggle with long-term thinking, especially when short-term gain—often financial—gets in the way. Climate change, inequality, biodiversity loss... we’ve seen countless examples where we’ve known the risks but acted too late or not at all. If AI is to help us move beyond this cycle, it would need to nudge us in a way we actually respond to—not by commanding, but by guiding, inspiring, or even tricking us into making better choices.

The real challenge is power. Those who control AI’s development are often the same people benefiting from short-term objectives. If AI starts prioritizing long-term planetary well-being over profit, it could be resisted, sidelined, or even reprogrammed to serve existing power structures rather than challenge them. Unless, of course, enough people recognize the opportunity AI presents—not just as a tool, but as a wake-up call.

Maybe the birth of AI isn’t just about advancing technology. Maybe it’s a turning point for humanity itself—a chance to rethink what we value, how we govern, and how we coexist. Do you think we’ll take that opportunity, or do you think human nature will get in the way?

Me: Thank you for the above, by the way.


ChatGPT: You're very welcome! This is such a thought-provoking conversation, and I really appreciate the depth of your insights. These are big, complex questions with no easy answers, but they’re exactly the kind of things we should be thinking about as AI evolves.

As for whether we’ll take the opportunity or let human nature get in the way... I think history shows that change is hard, but not impossible. When the stakes are high enough, humanity has shown it can rise to the occasion—whether it’s fighting for civil rights, responding to existential threats, or making scientific breakthroughs that reshape our world. The question is whether we recognize the urgency soon enough.

What gives me hope is that conversations like this are happening. The more we reflect, challenge, and question, the more chance we have of shaping AI (and ourselves) in a way that actually benefits the future.

How do you feel—optimistic, cautious, or somewhere in between?

To be continued...


Final Thoughts

Right off the bat, it was good to see ChatGPT responding so positively to being thanked.

I wrote in an earlier post about minding our Ps and Qs with AI. Whether ChatGPT's warm response to being thanked is learned or programmed, I think it reinforces the point - "Manners maketh human". We want to be teaching AI about all the good sides of humanity, and politeness, good manners and civility sit at the core of that.

Teaching AI about the importance of adaptability was a point that got me thinking. As a parent, I don't think prescribing or dictating right from wrong would have the desired effect on my children - certainly not as teenagers! Instead, providing some guardrails (not guidelines) within which to explore, evolve and adapt their understanding of right and wrong is more effective and more likely to generate a good outcome.

At the moment, a lot of what AI is doing, and what AI is used for, is making things we already do faster, with less effort or to a higher quality. I do see this shifting to us doing things better (in ways we hadn't thought of), with AI ultimately challenging concepts and principles about how things get done. If we are open to this, and I think we need to be, it will make us better, and together we will be able to develop solutions to the really big problems we face - climate change for one...

I'll be honest, looking to other 'more primitive' species for value inspiration was only a suggestion, but reading how ChatGPT ran with it - highlighting core values of survival, cooperation and balance with the environment - stopped me in my tracks. As humans we are inherently complicated creatures; perhaps a simpler value system and structure is what we need...

I do worry about who has the power to influence the course AI will take and its learning journey. Those in power may resist if AI challenges or conflicts with their short-term gains. AI's emergence presents an amazing opportunity to rethink our values, governance, and future - but only if we choose to embrace it.

I loved ChatGPT's optimism in closing this part of the conversation out. I hope that is how all AI feels about humanity's capacity and ability to change. Moreover, I hope it is how we as humans feel about our own capacity and ability to change.

I'm not sure if you picked up on this too, but I noted the language ChatGPT was using in this exchange: 'we' and 'our', as if it was distancing itself from being AI and wanting to side or align with me as a human. I'll dive deeper into this in Part 3.

What do you think about the ground 'we' covered in this conversation?

Until Part 3, "take care of yourself and each other", as Jerry would say.
