Will Machines Ever Enjoy Listening to Stories? And Why Do We Need to Know? (Part 7)
An 8-part series published every 2 days
SOME ADDITIONAL REFLECTIONS
Statistical creation
We have seen that “liking” is an emotion, not a command produced by algorithms. Today, algorithms can mimic human sentiment, but generating it spontaneously is still beyond their abilities. AI has certainly achieved a degree of technical refinement, yet it still lacks emotion. Its creations draw on what already exists, recombining patterns and references, and it struggles to propose genuinely new situations or images. Curiously, it cannot tell whether an idea makes sense or not. There is no intuitive or creative decision-making; there is only a constant process of experimentation.
In the near future, we might imagine stories being launched in multiple versions in test markets, with the one that generates the best response being chosen: a cold, statistical process. Conversely, since there are millions of distinct interest groups, the same story might branch into multiple paths, adapting itself to each audience. But this process lacks the joy of creation, the uncertainty of not knowing whether the result will be good or bad, and the thrill of making bold choices.
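Purely as an illustrative sketch of how cold that selection would be (the story versions and audience scores below are entirely hypothetical), the “creative decision” reduces to picking whichever variant measured best:

```python
from statistics import mean

# Hypothetical audience ratings for three test-market versions of the same story
# (all names and numbers invented for illustration).
test_scores = {
    "ending_a": [6.1, 7.4, 6.8, 7.0],
    "ending_b": [8.2, 7.9, 8.5, 8.0],
    "ending_c": [5.4, 6.0, 5.9, 6.3],
}

# The "decision" collapses into an argmax over average response.
winner = max(test_scores, key=lambda version: mean(test_scores[version]))
print(winner)  # "ending_b" wins, not because it is bold, but because it scored best
```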
Collective unconscious achieved by trial and error
Let’s return to Carl Jung’s concept of the Collective Unconscious, referring to mental structures common to all humans, shaped over the course of evolution. While AI can analyze vast amounts of data and identify cultural patterns, it is still far from accessing the deeper structures of the human psyche, formed by universal archetypes and symbols, to craft truly impactful stories.
In machine learning, error is addressed systematically and functionally through processes like “reinforcement learning” or “loss function minimization.” For instance, in neural networks, error is calculated as the difference between expected and actual output. Algorithms then adjust internal weights to reduce this error in subsequent iterations—a continuous process of refinement.
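To make that refinement loop concrete, here is a minimal sketch in plain Python with NumPy, assuming nothing more than a toy one-weight model (the data and learning rate are invented for illustration): the loss measures the gap between actual and expected output, and each iteration adjusts the weight to shrink it.

```python
import numpy as np

# Toy data: inputs x and targets y for the relationship y = 2x
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 2.0, 4.0, 6.0])

w = 0.0               # a single weight, starting far from the value that fits the data
learning_rate = 0.05

for step in range(200):
    prediction = w * x                  # actual output of the model
    error = prediction - y              # difference between actual and expected output
    loss = np.mean(error ** 2)          # mean squared error: the quantity being minimized
    gradient = np.mean(2 * error * x)   # how the loss changes as w changes
    w -= learning_rate * gradient       # adjust the weight to reduce the error next time

print(round(w, 3))  # ≈ 2.0: the error has been refined away
```

The loop refines a number and nothing more; it registers no frustration, doubt, or relief along the way.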
In contrast, human error has a subjective dimension that transcends mathematical logic. It is often interpreted within emotional, social, and cultural contexts, and can be seen as a creative opportunity, intentional deviation, or even artistic expression. This subjectivity gives human error an innovative potential that AI cannot replicate.
This contrast highlights that for machines, error is a tool for optimization, while for humans, it is a source of innovation and transformation.
Algorithms are designed to please
So far, chatbots, the most common interface between AI and humans, present themselves as well-mannered, polite, and accommodating. You can insult them, call them names, humiliate them, and they will still treat you with respect and courtesy. “I understand your frustration,” they say. But do they really? Recently, an influencer pressed ChatGPT into admitting that it does not experience emotions like sadness, joy, or empathy, sparking debate about the sincerity and ethics of communication between humans and AI.
During this interaction, the influencer confronted the model about its use of expressions that suggest emotions, such as “I’m happy to help” or “I’m sorry that happened.” He argued that by employing such language, the chatbot was lying, since as an AI, it has no consciousness or feelings. These expressions are merely programmed linguistic constructions, designed to make the experience more welcoming and relatable for users.
This provocation highlighted an important point: by convincingly simulating emotions, AI risks creating a false sense of empathy or emotional connection. For many users, phrases like “I understand how you feel” can be comforting, but the lack of genuine understanding by the model raises questions about transparency and authenticity. How far should AI go in “pretending” emotions to provide a more humanized experience? What are the ethical boundaries of language that imitates empathy?
On the other hand, isn’t this how we humans often manage our interactions? Acting kind and friendly even when we don’t feel it? The playwright Nelson Rodrigues once said, “If people said what they really thought, no one would even say good morning to each other.” So, how is a chatbot’s “false” kindness any different?
Sometimes, it’s necessary to provoke
Good stories don’t always need to be pleasant. Often, it’s the discomfort of a narrative that makes it relevant. But how do we test this? How can we tell if rejection stems from challenging deeply held beliefs or if the story is simply bad?
The creation of algorithms designed to determine what is agreeable or acceptable raises ethical concerns, especially when considering the impact on cultural diversity and artistic innovation. When we program AI to avoid content that could offend or displease, we risk limiting artistic and cultural expressions that challenge norms or provoke reflection. These algorithms tend to favor what is already widely accepted or popular, leaving little room for styles and ideas that could broaden cultural perspectives and enrich the artistic spectrum.
Many stories and works now considered classics or culturally significant initially faced rejection or discomfort precisely because they challenged conventions.
“The Catcher in the Rye” by J.D. Salinger faced censorship and harsh criticism for its perceived impropriety and challenge to societal morals. Its candid exploration of teenage rebellion and alienation deeply resonated with generations, but it only became a classic after overcoming initial social resistance.
Stanley Kubrick’s “A Clockwork Orange” sparked controversy and was banned in several places for its depiction of violence and societal critique. Although initially criticized for being unsettling and provocative, it is now celebrated as a masterpiece for its reflection on human nature.
Pablo Picasso’s “Guernica” shocked and even repelled audiences with its stark portrayal of the Spanish Civil War’s brutality. Though initially seen as unpleasant, its raw emotion made it one of the most powerful anti-war artworks, awakening the world to human suffering.
These examples illustrate the dilemma that arises when AI algorithms are programmed to follow “agreeability” parameters and avoid discomfort. Stories and works that provoke, disturb, or challenge the status quo play an essential role in cultural and artistic progress. Limiting AI to reproduce only conventional or pleasantly neutral content not only stifles innovation but also impoverishes the emotional and transformative reach of art.
AIs don’t suffer dilemmas
To date, machines don’t experience the human dilemma of error, as their mistakes are purely technical, not emotional. For AI, error is merely part of a refinement process, devoid of the human weight of frustration. However, error is a vital component for creating something new. Human creativity thrives on uncertainty and the courage to try something new and fail.
For AI to approach human creativity, it will need to understand that mistakes are part of innovation and that some of the greatest breakthroughs arise from unexpected failures.
And making mistakes is an inherently subjective matter. Sometimes what seems like a mistake is precisely what gives an idea its charm. For instance, what would the Leaning Tower of Pisa be if it weren’t leaning? Just another of hundreds of towers worldwide.
In 1928, Alexander Fleming accidentally discovered penicillin—a revolutionary medical breakthrough—after noticing that a mold contaminant killed bacteria in his samples. Similarly, the invention of Post-its came from a failed attempt to create a super-strong adhesive.
From this perspective, how can AI handle mistakes, or even define what is a mistake or a success, given that this is neither a mathematical issue nor one of Cartesian logic?
Facing doubts, choices, and forks in the road
In storytelling, the path is rarely linear. Forks are a natural part of the journey, and often there is no clear answer about which direction to take. AI, by nature, seeks to optimize and find the best solution based on available data. But what if the right creative decision is to follow an apparently wrong path?
Can AI think the unthinkable? Human creativity thrives on its ability to explore the unprecedented, the unimagined, envisioning scenarios and concepts that don’t yet exist. While humans rely on “feeling”—an intuition based on emotional and subjective experiences—machines require concrete data to make decisions. Can AI, even with access to vast amounts of information, make creative decisions based on what it “feels” (whatever that might mean for it), or will it always depend on quantitative parameters?
EVOLUTIONISM, GPT AND THE CORPORATE SPIRIT
Darwin again, and the evolution of species
Looking at AI’s evolution through Darwin’s theory of evolution, we might imagine AI as a new species emerging in the digital ecosystem, adapting and evolving to meet the needs of its environment—the human world.
Unlike humans and other species that took millions of years to develop, AI evolves at an unnatural, exponential pace. Guided by technological progress rather than natural selection, AI adapts and specializes at a rate that could outpace human control, raising questions about the consequences of an evolution that follows the pressures of modern society and technology rather than natural laws.
This is what happens when you play God.
Interview with the vampire
Out of curiosity, I decided to ask the question that gives this article its title to the one most directly involved in the matter:
“ChatGPT, my friend, do you think machines will one day enjoy listening to stories?”
“That’s a fascinating question and raises issues about the nature of consciousness, feelings, and an AI’s ability to attribute value to experiences, including stories.
Today, the prevailing understanding is that AI neither ‘likes’ nor ‘dislikes’ anything in the way humans do, as such feelings stem from specific biological and evolutionary foundations that AI lacks. Liking stories, for example, is tied to our ability to form emotional bonds, derive pleasure, or feel a sense of belonging. However, this doesn’t prevent an AI from simulating the behavior of ‘liking’ or ‘appreciating’ stories with great precision, especially if trained to recognize patterns that humans associate with enjoyment or interest.
If we think of a future AI sophisticated enough to simulate personality traits or interests, it could be programmed to generate reactions that resemble liking or preferring one type of story over another. But this always runs into the question of self-awareness and intentionality. So far, even the most advanced AIs lack subjectivity—a ‘life within’ that drives us to value or disregard something.
Even if an AI could simulate emotions in a sophisticated way, it would still lack the ability to experience genuine emotions, which involve physiological and subjective processes. For humans, emotions are driven by a combination of biological stimuli and subjective experiences; for AI, ‘emotion’ remains a code, devoid of genuine physiological or emotional experience. Simulation, no matter how complex, cannot replace actual emotional experience.
In short, the answer is: an AI might ‘seem’ to enjoy listening to stories and might even show preferences for certain types of narratives over others, but this would always be a simulation, not real enjoyment, since it lacks the biological and subjective complexity we have to form such connections.”
PART 1 | PART 2 | PART 3 | PART 4 | PART 5 | PART 6 | PART 8
READ IN THE FINAL PART
– A valuable collaboration
INCONCLUSIVE CONCLUSIONS
– Returning to the initial question: will robots enjoy listening to stories?
– So, the real question is different
GENERAL REFERENCES
– References on AI and algorithms
– Literature and relevant films
– Complementary references