AI Ethics: A 6-Step Framework for Humans and Machines
This article was written with a decent amount of human effort and a bit of AI. Image created with Midjourney v7.

Change is happening at breakneck speed. Artificial Intelligence is getting more disruptive by the day. Geopolitics are shifting, and old ways of doing things are eroding. All of that may be true, but it is all outside your control as well.

🧘 What Is Within Our Control?

What is within your control? That ancient question posed by the Stoics might be the most modern of all.

We are so focused on shaping AI that we rarely stop to think about how AI is shaping us. And yet, the future of AI Ethics does not depend solely on how we design algorithms or structure datasets. It also depends on how we, as humans, choose to interact with AI; how we engage with it, how we question it, how we allow it (or don’t) to influence our decisions. It’s not just about designing better systems; it’s about becoming better users of those systems. More ethical. More thoughtful. More intentional.

🏛️ Ancient Philosophy for Modern Challenges

Long before algorithms and large language models, the Stoic philosophers were already thinking deeply about judgment, control, and human behavior. Thinkers like Marcus Aurelius, Epictetus, and Seneca weren't just reflecting on the chaos of ancient Rome. They were crafting practical mindsets for navigating uncertainty and complexity. Sound familiar?

Their approach was grounded in clarity, discipline, and a sense of social responsibility. At its core, Stoicism revolves around three guiding areas: how we think, how we understand the world around us, and how we act within it. They called these Logic, Physics, and Ethics, but don’t let the old-school names fool you. These ideas are strikingly modern.

In practice, they boil down to three disciplines we still struggle with today:

  • Assent - How we judge what's true.

  • Desire - What we pursue and why.

  • Action - What we choose to do, especially under pressure.

Today, these principles can serve as a foundation for a modern framework for AI Ethics—one that applies not only to the systems we build but to ourselves as well. This dual-track approach recognizes that AI Ethics is not just a codebase issue; it’s a human character issue too.

Let’s explore a 6-step framework I have drafted, grounded in Stoic philosophy, that business leaders, developers, and AI enthusiasts can use to act more ethically in an increasingly automated world.


🧩 Logic → Assent: Cultivate Critical Thinking in AI and Ourselves

The Stoic discipline of assent reminds us not to blindly accept appearances. Just because something seems true does not make it so. We must test impressions with reason. In the same way, AI systems must be explainable and trustworthy. And we, as users, must interrogate those outputs.

“Do not accept any impression without first examining it and testing it.” — Epictetus

For Humans: Strengthen your AI literacy. Question machine outputs. Don’t treat AI like an oracle.

For AI: Build transparency, bias detection, and interpretability into models.

Chinese-American computer scientist and author Fei-Fei Li has long emphasized that AI tools are reflections of their creators and users. She believes that without strong human values behind them, even the most advanced systems risk amplifying bias or harm. That’s why developing critical thinking and ethical awareness in humans is just as important as refining machine intelligence.


🌐 Physics → Desire: Understand Context and Respect Natural Limits

Desire, in Stoic terms, should be aimed only at things within our control. The same applies to AI: we must define realistic and responsible goals for what we want AI to achieve. We must teach AI to know its limits. And we must know our own.

"AI must be built with humility, aware of its limitations and its potential. When we ignore boundaries, we invite consequences." — Sundar Pichai

For Humans: Don’t expect AI to solve the unsolvable. Understand the limits of its training data, and of your own delegation to it.

For AI: Ensure your models understand their boundaries. Avoid building systems with impossible or unchecked ambitions.

British computer scientist Stuart Russell often underscores that AI should be built to honor human intent, even when that intent isn’t perfectly clear. He argues that responsible AI must act cautiously, constantly questioning and updating its goals in response to uncertainty. True intelligence, he believes, starts with humility.


⚖️ Ethics → Action: Act Justly, Benefit the Whole

Stoicism teaches that we are social beings. Ethics is not abstract; it’s relational. How we treat others matters. And how AI treats others matters too.

“What injures the hive injures the bee.” — Marcus Aurelius

For Humans: Choose to use AI in ways that uplift people. Ask: does this tool serve the few, or the many?

For AI: Build fairness, inclusivity, and bias mitigation into every layer of the model lifecycle.

OpenAI CEO Sam Altman has envisioned a future where AI isn’t just powerful, but profoundly beneficial to society. He has stressed the importance of distributing AI’s benefits broadly, rather than concentrating them in the hands of a few. His focus has been on long-term value creation and collective uplift.


🛡️ Logic → Assent (Human): Preserve Human Judgment in the Loop

Epictetus reminds us: it is not events that disturb us, but our judgments about them. The same applies to AI. We must preserve space for human discernment, especially in high-stakes contexts.

"We must never abdicate human agency to algorithms. Technology can assist, but judgment remains ours." — Barack Obama

For Humans: Don’t outsource your conscience to a machine. Retain the final say.

For AI: Design with explainability. Enable override options. Flag ambiguity.

As AI systems take on more autonomy, it becomes easier to hand over our responsibility and step out of the loop. But doing so risks losing our sense of discernment and the ability to ask whether a system should act, not just whether it can. In a world driven by machine logic, our judgment is not a redundancy—it’s the safeguard.


🔄 Physics → Desire (AI): Align AI Goals with Human Nature

The Stoics believed in living according to nature—not just biology, but our shared rationality and social connectedness. AI goals should reflect that same alignment.

“Man is affected not by events, but by the view he takes of them.” — Epictetus

For Humans: Design with a full view of what it means to be human—emotion, context, and values included.

For AI: Align optimization goals with well-being, not just metrics. Build ethical reasoning into model objectives.

Google DeepMind co-founder Demis Hassabis believes AI’s greatest potential lies not in replacing humans, but in enhancing our natural intelligence. He champions a vision of AI that works in harmony with human thinking—supporting our strengths and compensating for our weaknesses. His goal is partnership, not competition.


🤝 Ethics → Action (Shared): Collaborate Toward Ethical Fluency

The ultimate Stoic virtue is wisdom in action. It’s not enough to think about ethics; we must practice it. This applies to AI engineers and users alike.

"Ethics in AI isn't a checklist—it's a culture. It must be built, nurtured, and constantly challenged." — Margrethe Vestager

For Humans: Make ethics part of your workflow. Reflect daily. Ask if you’re using AI as a shortcut, or a force for good.

For AI: Build in ethical defaults. Prioritize user safety and dignity in all decisions.

Fei-Fei Li advocates for a continuous process of ethical reflection in AI development. She sees values not as fixed inputs, but evolving guidelines that must be regularly revisited. For her, building AI ethically means building a culture that asks questions, not just writes code.


Final Thoughts: What Is in Our Control

The Stoics weren’t against progress. They were for wisdom. They understood that tools are neither good nor bad; it’s how we use them that counts. AI is a tool. A powerful one. Possibly the most powerful we’ve ever made.

But the real power? That still lies in you. Your choices. Your judgment. Your ability to pause, reflect, and act rightly. The future of AI Ethics is not written in code alone. It’s also written in our own conduct.

#AI #ArtificialIntelligence #AIEthics #Leadership

Want to read more on this subject?

Please explore the sources that served as inspiration for this article.

———————

🗿 I’m Paul de Metter — Techno-Leader, Entrepreneur & Human Technologist. 💥 Follow me for deep insights at the intersection of leadership, technology, strategy, and governance—where innovation drives real impact, responsibly.
