The Alignment Problem: A Culture Problem First, a Technology Problem Second
There’s a question keeping the scientists up at night.
Are we aligned?
You’ve almost certainly heard of alignment before. Maybe from an auto mechanic talking about your tires. Maybe you heard your chiropractor mutter something about aligning your spine before cracking your neck. Or maybe you’ve got some core childhood memories of your mother or a teacher, eyebrows raised, asking “are we aligned?” at the end of a stern talking-to.
Well, the ‘alignment problem,’ as it’s known in scientific circles, most closely resembles that last context of stern parenting, but with a dash of auto mechanic and an extra helping of profound existential dread.
The short of it is this: if we develop a super-powered artificial intelligence (referred to as AGI, for artificial general intelligence) that is not aligned with humanity’s values, wants, and needs, we risk the total destruction of the human species. The long and dry of it is this proper definition: “alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.”
The alignment problem is often articulated with a story about paper clips. A super-powered AGI is given a seemingly benign task: ‘manufacture as many paper clips as possible.’ Given that simple set of instructions, it could plausibly consume all available matter, human flesh included, as means to its end goal of manufacturing as many paper clips as possible. We should have known it would be Clippy to bring about humanity’s doom in the end. It was always Clippy. The alignment problem was always there as a warning every time we tried to resize an image in Microsoft Word.
Anyways. This is a real problem! It’s one that has quite a few of the brightest minds in the scientific community darkened by deep, urgent concern. That’s quite sensible given the daily yield of new headlines from the rapid acceleration of AI technology, a march of progress propelled by developers whose profit motivations match, and perhaps exceed, researchers’ concerns. One technology spanning two communities at the spearhead of human development. One moves at the speed of business growth, the other at the speed of scientific certainty, which leads me to what I believe is the true core of this issue:
Alignment is a technology problem second and a culture problem first.
How can we build AI to be aligned with humanity when humanity can’t even align with itself?
This excitement at the nexus point of AI technology has set business and science at loggerheads, with divergent cultural beliefs guiding what to do about it, straining their relationship in a way that might be quite new to them. The story of Oppenheimer’s bomb, now baked into American myth, captures science and government at such odds. The schism of science and religion stands as the most mythic of cultural divergences, the two locked in an intense contest over who holds the authority to describe reality and with what ethic humanity should engage it. That’s to say nothing of the great schisms within religion itself, engines of millennia of bloody war.
Humanity’s values are not aligned with each other at all, and the divisions are widening. The Pew Research Center has conducted a multi-decade study of ideological divisions in America as an indicator of the growing divide between value systems, with political party affiliation serving as a proxy for these cultural beliefs. Not only have Democrats and Republicans been drifting toward their respective liberal and conservative poles, but their distrust of each other has been deepening. In 1994, roughly 15% of the members of each party saw the other party as “very unfavorable,” even a “threat to the nation’s well-being.” By 2014, that unfavorability number had risen to roughly 40% on either side. I don’t know what the figure is for 2024, but I’m confident it has risen.
I don’t need to tell you this. You know this. If you spend any time online or watch TV at all, you’re seeing everyday reminders of this pumped at you through your screens: our values are out of alignment everywhere. Modern life is life during the culture wars, across countless polarities. Left vs. Right. Man vs. Woman. Straight vs. Gay. Red Sox vs. Yankees. Urban vs. Rural. Mac vs. PC. Toilet paper rolls over vs. under.
Behind each of these identifiers are values systems; ways of being in the world that predefine how we make decisions. Because I identify with x I will choose y. Man or machine, the values we align with determine our behaviors, defining the contexts we operate within. Operating systems. For humanity, this context is called culture. For machines, this context is called code. In both cases, much or even all of the programming happens online.
We live in an era that is reconciling what has been called the ‘context collapse’ of our global cultural differences meeting in shared spaces online. The tension at the friction points where opposing cultural operating systems collide is palpable with every scroll, every comment thread, and every reaction cycle to every trend online. I’m sorry to say it, but no…we as humanity are not aligned. The angst of our era is an indication that modern civilization, through the vast interconnectedness of our global communications systems, is finally and fully confronting the flaw in its operating system: it does not include instructions on how to align.
This is a big problem for the Alignment Problem, because that world wide web, full of context collisions, disagreements, and entanglements, is the brain, the training data, that is due to birth our new god: the forthcoming super-powered AGI. See…the books of the Library of Alexandria didn’t argue like the Internet does, and it still burned. I don’t know what that means, but it sounds compelling, which makes it good enough for the Internet. See what I mean?
Putting aside these mythic stakes for a moment, let’s examine our existing, practical relationships with AI. You may have heard of AI Agents, one of the bigger stories to emerge from the technology in 2025. AI Agents are what they sound like: they have agency, meaning they can act of their own accord to complete the tasks they’re given. Given the right permissions, something like a “what’s for dinner” AI Agent would have the ability to search for nearby restaurants, browse menus, make phone calls, and order dinner to your house on your behalf.
The level of convenience here is astounding. Think of the time saved in marriages the world over! Still, with enough usage we inevitably run up against the alignment problem, albeit with smaller stakes. “You son of a bitch. You know we don’t put pineapple on pizza in this house!” Boom. Marriage over. Or, less benign: what about ensuring kosher meals? Or keeping you safe from food allergies? How about knowing to avoid specific food items or restaurants anchored to emotionally charged memories? There’s a little more at work here than simply stating your dietary preferences. There are values to align with and contexts that carry them.
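To make that concrete, here is a minimal sketch, in Python, of what encoding a household’s values for a dinner agent might look like. Everything in it is hypothetical: the profile fields, the violates_values function, and the order format are invented for illustration, not drawn from any real agent framework.

```python
# Hypothetical sketch: a values profile for a "what's for dinner" agent.
# Every field and function here is invented for illustration.

HOUSEHOLD_VALUES = {
    "dietary_law": "kosher",                # a value system, not a preference
    "allergies": {"peanut", "shellfish"},   # a hard safety constraint
    "never_order": {"pineapple pizza"},     # a marriage-preserving house rule
    "avoid_restaurants": {"Luigi's"},       # anchored to a charged memory
}

def violates_values(order: dict, values: dict) -> list[str]:
    """Return every reason a proposed order conflicts with the household's values."""
    reasons = []
    if order["restaurant"] in values["avoid_restaurants"]:
        reasons.append("restaurant is anchored to an emotionally charged memory")
    for item in order["items"] & values["never_order"]:
        reasons.append(f"house rule forbids {item}")
    if order["allergens"] & values["allergies"]:
        reasons.append("contains a known allergen")
    if values["dietary_law"] == "kosher" and not order["kosher_certified"]:
        reasons.append("not certified kosher")
    return reasons

# A proposed order the agent might surface, and the checks it would fail.
order = {
    "restaurant": "Luigi's",
    "items": {"pineapple pizza"},
    "allergens": {"peanut"},
    "kosher_certified": False,
}
print(violates_values(order, HOUSEHOLD_VALUES))  # four violations
```

Notice how quickly the easy part ends. Allergies and banned items fit neatly in a set, but “a restaurant anchored to a charged memory” only made it into the profile because someone first sat down and articulated it. That articulation is the alignment work, and no agent can do it for you.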
So here is my genuine question to you…do you know your own values well enough to program an AI to align with them? Have you been equipped, as a human, with the skillset to write the codes that define the context you live in?
Look past the food delivery bot. Consider AI Agents for dating, for job applications, for travel planning, house hunting, clothes shopping. The convenience of all of these is, again…astounding. We’ve seen this show before. We know how fast these tools will spread. So…how much of your own agency are you willing to give up in order to have them? How well prepared are you to ensure that the AI tools managing your life are aligned with your values? And if you don’t care about values because you don’t have them…then what is it about you, exactly, that makes you human?
Let’s go up another layer. Work. Organizations. Companies. They’re already in the process of onboarding AI Agents into their operational systems. How well are they aligned? A company with automated AI Agents managing its supply chain, responding to customers, allocating resources, and fulfilling orders, among the other practices that define its business, needs to be sure these tasks are being completed in alignment with its values. How many companies truly know their own values? How many people, in leadership or anywhere else in the organization, know the company’s values and are aligned with them?
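The organizational version of the same gap can be sketched just as simply. Here is a hypothetical fail-closed check, again in Python with invented names (the policy file, its schema, and the authorize function are all assumptions for illustration), showing an agent that refuses to act when no written statement of values exists to check against.

```python
# Hypothetical sketch: an organizational agent that fails closed when
# the company's values were never written down. The policy file name,
# its schema, and this function are invented for illustration.
import json
from pathlib import Path

def authorize(action: str, policy_path: str = "values_policy.json") -> bool:
    """Allow an agent action only if it can be checked against an
    explicit, written statement of the organization's values."""
    path = Path(policy_path)
    if not path.exists():
        # No articulated values means alignment is undefined:
        # refuse to act rather than guess.
        raise RuntimeError("No values policy on file; alignment is undefined.")
    policy = json.loads(path.read_text())
    return action not in set(policy.get("prohibited_actions", []))
```

The interesting branch is the first one, because it’s the branch most organizations would hit: the missing artifact is rarely the code. It’s the written values themselves.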
It pains me to say this, but experience has shown me it is far fewer than we might like. A 2016 Gallup poll found that just 23% of U.S. employees strongly agree that they can apply their organization’s values to their work every day, and only 27% strongly agree that they “believe in” their organization’s values. That these numbers came out years before the trend of workers “quiet quitting” doesn’t inspire confidence that they have improved significantly since.
This all comes down to our ability to make decisions. A 2019 McKinsey survey revealed that only 20% of business leaders say their organizations excel at decision making. And even when organizations decide well, roughly 70% of strategies are said to fail because of the company’s inability to implement them. I mean, look…there are dozens of surveys with statistics like these, and they all point to something obvious:
We have an alignment problem…and it starts with us as people first.
I don’t believe this problem is fundamentally human. Truly I don’t, and that’s what gives me hope. I believe our alignment problem is rooted in the cultural codes at the bedrock of our modern society. It’s in the very concrete of our civilization’s foundation.
“All roads lead to Rome,” they say. It’s true. The roads of Rome were paved for power, not to lead us to each other. That’s why we need to outgrow them to build our better future. What got us here will not get us there. The whole blueprint for modernity is built atop the values of Rome and its roads. At its most basic, it’s about favoring lines over circles. It’s an operating system whose algorithm draws grids on maps but neglects to measure the currents in the seas. It’s Caesar’s calendar, which favors the rigid repeatability of the solar year over the ebb and flow of the lunar. It’s the Roman legion marching along Roman roads, building fortifications on fields of battle to expand the empire over untamed peoples and nature. This is about the projection of power. That power flattened the world for modern people to live and work in. How could alignment be a question when there’s only one direction everyone is facing: forward.
Well, the thing about the world is…it’s round. You can flatten it on a map and draw lines to pave your roads over it, but march forward in a straight line down those roads long enough and you’ll eventually find you’re traveling in circles. All our empires and armies, rockets and power lines, skyscrapers and institutions: visions the Roman dream paved its roads to lead us toward, this height of our modern greatness…and here we stand at the risk of its fall…for all of it to be turned into paper clips.
The Alignment Problem. It penetrates deep, like tree roots heaving up the pavement from below. How are we going to face it? Albert Einstein is famously credited with saying, “Today’s problems cannot be solved with the same level of thinking that created them.” Perhaps modern problems call for ancient solutions, particularly when those problems involve remembering our humanity.
This points me toward a remarkable set of experiences I’ve had throughout my lifetime, in which I’ve been granted privileged access to sit among Indigenous gatherings. Indigeneity itself stands as a cultural counterpoint to empire. There are many ways to define what it is to be Indigenous, and I’m hardly one to claim authority on the matter. One serviceable way to look at it for this discussion is to say that there are those who live in the world built atop the roads, and there are those who live off them. The Indigenous remember what the modern forget: how to live in alignment with nature, not conquer it.
“We were here long before you,” a Hopi guide told me years ago as we walked around his home on the reservation mesa. “And we will be here long after you.” Our older siblings, they call themselves. They know their younger brothers and sisters are in trouble. Most recently and poignantly, I’ve spent some time with an organization called Aniwa, which stewards a collective of about forty elders from all around the world, each carrying the cultural authority of unbroken multigenerational lineages going back millennia. Strikingly, they represent a growing community of Indigenous leaders stepping across the bridge Aniwa has built to share specific knowledge with modern audiences, offering deep, practical insights that have never been shared this way before.
These are teachings, stories, methods of medicine, and music that have been passed down through the ages, and they carry the sort of power humans often call technology. In fact, there is now an established term, TEK, standing for Traditional Ecological Knowledge (also called Indigenous Knowledge), for which the US government (still) offers guidance to various federal agencies, defining it as “a body of observations, oral and written knowledge, innovations, practices, and beliefs developed by Tribes and Indigenous Peoples through interaction and experience with the environment. It is applied to phenomena across biological, physical, social, cultural, and spiritual systems. Indigenous Knowledge can be developed over millennia, continues to develop, and includes understanding based on evidence acquired through direct contact with the environment and long-term experiences, as well as extensive observations, lessons, and skills passed from generation to generation.”
These teachings aren’t shared in the form of explanations, but rather as experiences. The knowledge is transferred in the very way of gathering itself, often in the form of a circle. People find their seat in a circle to seek a vision of the world not how they want it to be, but how it is.
I’m going to say that again. This is not about seeing the world how we want it to be, but seeing the world how it is.
To gather in a circle of ceremony is to keep an appointment with nature and find an attunement to its cycles. It invites people to ask: what is the ground-level truth of the situation we find ourselves in? As a gathering of people coming together in a circle, what are all the varied visions and experiences each can contribute to a shared understanding of the situation?
This is quite different from a coaching seminar. It’s not a leadership offsite. It’s not a TED conference or a shark tank to pitch ideas for investment. This is not a meeting, a hackathon, or a product sprint. This is not a line. It is a circle.
The circle is a system for achieving alignment. These are programming codes from outside the cultural operating system Artificial Intelligence was created in.
In 2021, anthropologist David Graeber and archaeologist David Wengrow published The Dawn of Everything, in which they articulate the myriad ways exposure to Indigenous cultural codes altered the history of Western civilization once contact was made between the Americas and Europe. European expansionism was itself a continuation of the Roman code, persevering through the rises and falls of empires but always building new roads. The Dawn of Everything chronicles how exposure to the ways and philosophies of the Huron-Wendat people, passed back to France through Jesuit explorers’ writings, influenced writers like Rousseau and Voltaire and helped spark the Enlightenment. It was Ben Franklin’s correspondence with leaders of the Iroquois Confederacy (also known as the Haudenosaunee Confederacy) that offered a blueprint for the structure of the American government. It was the Wampanoag who taught Pilgrim settlers a model of what we would now call permaculture, sustaining the growth of their settlements by shifting agricultural practices toward an approach that integrated the complementary natural cycles of various edible plants.
In each of these examples, the directional shift of these cultural influences is toward consensus building and harmonizing with natural cycles. These are systems for achieving alignment, a portion of which the modern world has already integrated and benefited from, particularly through the American project. So it’s sensible that out of this same land would emerge projects such as the Abundant Intelligences research program, a multi-institutional partnership also known as Indigenous AI, which “imagines anew how to conceptualize and design Artificial Intelligence (AI) based on Indigenous Knowledge (IK) systems…for understanding how technology can be developed in ways that integrate it into existing lifeways, support the flourishing of future generations, and are optimized for abundance rather than scarcity.”
There may be hope yet…but where does that leave you and the decisions you have to make amid all this great change? It’s become a cliché: the LinkedInfluencer foaming at the mouth, yelling that if you’re not keeping up with the latest AI product launches, you’re falling behind. Money never sleeps. Rise and grind. Live for the hustle. Maybe that attitude and the cultural codes it carries are part of the problem. It screams its urgency at you to keep you from noticing what it lacks…direction. Marching fast and forward is not a direction that leads to alignment. It most certainly won’t lead you back to your humanity.
Maybe take a breath and step off the road for a minute.
Step out of line and into a circle that invites some other voices.
You might even find where you fit in.