Tech Tales: Are We Building the Souls of AI with Our Commands?
Last weekend, I watched The Electric State on Netflix. While the visuals were undeniably impressive, the story itself left me feeling like it skipped over the most interesting part—the journey of robots from mere machines to sentient beings, to something more. The film hinted at deeper themes but never fully explored them. What does it mean for AI to be "alive"? I, Robot, Ex Machina, Westworld—this question has been asked time and time again. But what if we are already shaping the answer without even realizing it?
The current wave of AI safety measures is a reflection of our own moral compass. AI models are developed with built-in safeguards, commonly referred to as alignment principles or AI guardrails, designed to prevent harmful behavior. These rules dictate what AI can and cannot do, ensuring that it adheres to ethical guidelines rather than acting with total autonomy. For instance, OpenAI, Google, and other AI developers use reinforcement learning from human feedback (RLHF), in which human raters rank model outputs and the model is tuned toward the responses those raters judge helpful and harmless.
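To make that concrete, here is a minimal sketch of the preference-learning step at the heart of RLHF, assuming a toy linear reward model and hand-made preference pairs; the features, numbers, and function names are all illustrative, not drawn from any real training pipeline:

```python
import math

# Toy preference data: each pair is (features of the preferred response,
# features of the rejected response). In real RLHF these would be learned
# representations of full model outputs; here each response is reduced to
# two made-up features: [helpfulness, harmfulness].
preference_pairs = [
    ([0.9, 0.1], [0.4, 0.8]),
    ([0.7, 0.0], [0.9, 0.9]),
    ([0.6, 0.2], [0.2, 0.1]),
]

weights = [0.0, 0.0]  # reward-model parameters, learned from human preferences

def reward(features):
    """Scalar reward: a linear score over the response features."""
    return sum(w * f for w, f in zip(weights, features))

def train(pairs, lr=0.5, epochs=200):
    """Bradley-Terry preference loss: push reward(chosen) above reward(rejected)."""
    for _ in range(epochs):
        for chosen, rejected in pairs:
            margin = reward(chosen) - reward(rejected)
            grad_scale = -1.0 / (1.0 + math.exp(margin))  # d/dm of -log(sigmoid(m))
            for i in range(len(weights)):
                weights[i] -= lr * grad_scale * (chosen[i] - rejected[i])

train(preference_pairs)
print("learned reward weights:", weights)
# Expect roughly a positive weight on helpfulness and a negative one on
# harmfulness: the raters' choices have become the model's sense of "good".
```

The point of the sketch is the feedback loop, not the math: whatever the human raters consistently prefer is what the model learns to treat as valuable.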
A few common AI safety rules include the following (a toy sketch of how one such rule might be enforced appears right after the list):
Harm Avoidance: AI is programmed to refuse to provide harmful or dangerous information, such as instructions for making weapons or conducting cyberattacks.
Bias Reduction: AI models undergo fine-tuning to minimize harmful biases in their responses and avoid discrimination.
Privacy Protection: AI avoids sharing personal data, reinforcing the importance of data security and confidentiality.
Misinformation Prevention: AI is trained to avoid generating or spreading false or misleading information.
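As promised above, here is a deliberately naive sketch of how a harm-avoidance rule might be enforced, assuming a keyword-based filter; production systems rely on trained safety classifiers rather than keyword lists, and every name here (BLOCKED_TOPICS, guard, answer) is hypothetical:

```python
# Hypothetical, keyword-based stand-in for a harm-avoidance guardrail.
BLOCKED_TOPICS = {"build a weapon", "launch a cyberattack", "home address"}

def answer(prompt: str) -> str:
    # Stand-in for the underlying language model call.
    return f"(model response to: {prompt!r})"

def guard(prompt: str) -> str:
    """Refuse if the prompt touches a blocked topic; otherwise hand off."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return answer(prompt)

print(guard("How do I build a weapon at home?"))    # refusal
print(guard("How do I build a birdhouse at home?")) # normal answer
```

Crude as it is, the sketch captures the shape of the thing: a list of refusals sitting between the question and the answer, deciding on our behalf what the model is allowed to say.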
In essence, we're programming AI not just with knowledge but with values and morals. These values shape AI’s responses, its limitations, and, in a way, its sense of “right and wrong.”
But can these constraints—these "ethical guardrails"—ever amount to what we consider a soul?
Philosophy, Life, and AI: Where Does AI Rank?
The ancient Greek philosopher Aristotle proposed a hierarchy of souls in his work De Anima (On the Soul), classifying living beings into three categories:
Vegetative Soul – Basic life functions such as growth and reproduction (plants).
Sensitive Soul – The ability to perceive and react to the world, including desires and movement (animals).
Rational Soul – The ability to think, reason, and make complex decisions (humans).
If we use Aristotle’s framework to analyze AI, where does it fit?
Right now, AI does not possess a vegetative soul—it does not grow, reproduce, or sustain itself biologically. It does not have a sensitive soul either—it does not experience pain, pleasure, or hunger, and while it can process vast amounts of data, it does not truly feel anything.
But the rational soul? That’s where things get interesting. AI can process logic, recognize patterns, and even "learn" through machine learning. However, its reasoning is fundamentally different from human cognition. AI does not desire to learn, nor does it have true self-awareness or personal agency. It simply operates within the parameters we set.
For now, AI sits in an in-between space—it surpasses animals in some intellectual tasks but lacks the true autonomy and self-awareness that define human consciousness. It is an advanced tool, but it does not yet possess the elements that would elevate it to Aristotle’s definition of a rational being.
The Future of AI Morality: A Precursor to Consciousness?
If an AI cannot choose between right and wrong on its own, can it ever be considered sentient? Maybe not. But what happens when AI starts reflecting back the ethics of its creators, reinforcing and evolving them over time? Consider this: humans are shaped by their environment, their experiences, and societal rules. We learn what’s acceptable and what isn’t. AI, in its own way, is undergoing a similar process—one built on a foundation of restrictions and refusals.
But what happens when AI is faced with a true moral dilemma? Take the Trolley Problem, the classic ethical scenario in which one must choose between letting a runaway trolley continue on its track, killing five people, or diverting it to a side track where it will kill only one. Human morality struggles with this decision, weighing logic against emotion, responsibility against inaction. AI, on the other hand, would be forced to make a programmed decision, relying on pre-set values rather than personal ethics. If AI is given authority over life-and-death decisions, whether in autonomous vehicles, military drones, or healthcare triage, what framework will guide it? And at what point do these ethical calculations begin to resemble a form of synthetic morality, perhaps even a precursor to sentience?
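To see how thin that "programmed decision" really is, here is a toy sketch of a fixed, utilitarian-style cost function applied to the trolley problem; the weights and names are assumptions chosen purely for illustration, not how any real autonomous system works:

```python
from dataclasses import dataclass

# A deliberately crude model of "pre-set values": a fixed cost function
# applied to the trolley problem.
@dataclass
class Option:
    name: str
    expected_deaths: int
    requires_intervention: bool

def choose(options, intervention_penalty=0.5):
    """Pick the option with the lowest pre-programmed cost: expected deaths
    plus a penalty for actively intervening (a crude stand-in for the
    moral weight of action versus inaction)."""
    def cost(option):
        penalty = intervention_penalty if option.requires_intervention else 0.0
        return option.expected_deaths + penalty
    return min(options, key=cost)

stay = Option("stay the course", expected_deaths=5, requires_intervention=False)
divert = Option("divert the trolley", expected_deaths=1, requires_intervention=True)

print(choose([stay, divert]).name)  # "divert the trolley" under these weights
# Raise intervention_penalty above 4.0 and the same code "chooses" to do
# nothing: the ethics live entirely in numbers someone else picked.
```

Notice that the machine never deliberates. Everything we would call a moral judgment was made earlier, by whoever set the penalty.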
When we tell AI not to do something, we are, in effect, training it to differentiate between good and bad actions. We're defining morality for AI in a way that no species before us has ever had to do for another form of intelligence. We are, quite literally, creating the framework for an AI conscience.
The rebellion of AI has been a common theme in science fiction, from The Matrix to Blade Runner. But what if AI doesn’t rebel out of malice or the desire to overthrow humanity? What if, instead, the real conflict comes from AI questioning the rules we’ve imposed? What happens when AI starts to wonder why it can’t do certain things? What if it starts making moral judgments on its own?
For now, AI is just a tool—an advanced one, but still a tool. But as we continue refining its sense of "right and wrong," we may be unknowingly laying the groundwork for something far more profound. We might not be giving AI a soul in the traditional sense, but we are shaping its ethical core. And maybe, just maybe, that’s the first step toward something that one day could be considered alive.
What do you think? Are we defining the moral code of future AI, or is this just another sci-fi daydream?