We have fewer problems than you might realize
Grand Canyon, 2012


I’m currently juggling an interesting mix of projects: a full-time job engaging philanthropists on the effectiveness of their strategies to shift society toward more equitable outcomes; a consulting project to help build consensus among philanthropies toward ensuring the benefits of AI accrue to everyone; a career-path exploration to better understand the mechanisms that drive behavior within for-profit markets toward or away from pro-social and -ecological outcomes; and helping run ops for a responsible AI startup. 

From these pursuits, a consistent throughline has emerged in how I think we can best move the needle on each of these fronts: the alignment of incentives with values. 

There are incredibly smart and thoughtful people in all of the fields I’m working in who are trying to find ways for us to make progress toward a more just and sustainable future. But many of the conversations I witness strike me as not going far enough upstream to diagnose the main problem underlying everything from social inequities to climate change and existential risk from AI (whether one thinks that risk might result from “misuse” of it, or from an emergent “misalignment” with human interests).

Much of the escalating public discourse on AI safety over the past year has centered on a fear of AI gaining superhuman intelligence that we can no longer control - one that overtakes us, making decisions that don’t align with human interests (see HAL 9000 or a hundred other dystopian sci-fi characters). Plenty of very intelligent people fear this could happen - but that fear misses the fact right in front of us: we already live with such a system today. It’s not named HAL; it’s called the economy.

Left-of-center folks often say: Capitalism - that’s our problem! This analysis is headed in the right direction, but it is a glib assessment that fundamentally misunderstands capitalism. I see a lot of well-intentioned folks stop there and simply accept that capitalism is the boogeyman behind all our issues. 

Capitalism is a schema for transacting which leverages competition to produce better and better outputs, with greater and greater efficiency. I appreciate David Brin’s view that capitalism is an arena - governed by rules - that maximizes positive-sum outcomes via markets, akin to how the arena of science maximizes knowledge and the arena of democracy maximizes the interests of its citizens.* 

Capitalism has produced enormous benefits for humanity, spurring innovation and lifting the standard of living for people around the globe. Brin quips: “Reciprocal competition is both how nature evolved us and how we became the first society creative enough to build AI.” But clearly, the wins and losses within our current arena have not been evenly distributed among all participants. The key is in the rules - when properly regulated (and regulations actively observed by all participants), a competitive arena can result in positive-sum outcomes. 

Setting aside the label “capitalism” (which seems increasingly useless for understanding anyone in public discourse these days) for a moment, let’s look at incentives. Incentives are the helm that steers the ship of our global economy. Currently, we incentivize cheap and efficient, and we assign value based on scarcity. With these bearings, our economy sails toward extraction, exploitation, and the externalization of costs.

The problem is not capitalism, but the underlying principles that - lacking sufficient guardrails - point us in a pathological direction. Current incentives push each person to maximize short-term gains for themselves. This results in a spiraling race to the bottom - a classic tragedy of the commons.
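The race-to-the-bottom dynamic can be made concrete with a toy model. Below is a minimal, illustrative sketch (my own construction, with arbitrary numbers - not from any source cited here) of a shared commons: when each harvester’s take stays below the regrowth rate, the stock sustains itself; when everyone maximizes short-term gain, the commons collapses for all.

```python
# Toy tragedy-of-the-commons model (illustrative sketch; all numbers
# are arbitrary assumptions, not empirical parameters).

def simulate(per_agent_take, n_agents=10, stock=100.0, regrowth=0.25, years=50):
    """Return the commons' stock after `years` of harvesting.

    Each year, every agent removes up to `per_agent_take` units,
    then the remaining stock regrows by `regrowth` (25% by default).
    """
    for _ in range(years):
        harvested = min(stock, n_agents * per_agent_take)  # can't take more than exists
        stock -= harvested
        stock += regrowth * stock  # regrowth is proportional to what's left
    return stock

# Modest harvesting leaves a thriving commons; individually "rational"
# over-harvesting destroys the resource for everyone.
sustainable = simulate(per_agent_take=1.0)
greedy = simulate(per_agent_take=3.0)
```

Note that the agents and the resource are identical in both runs; the only lever that changes the outcome is the incentive structure around how much each participant takes.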

Kat Taylor (an exemplary philanthropist, in my professional opinion) says, “If you can’t solve a problem, then make it bigger.”**

From where I’m sitting, we have fewer problems than you might realize - namely, one big one: misaligned incentives. 

When we zoom out to see this as our problem, it becomes very simple. Even obvious. And yet of course this problem is so big that it can seem like it’s not worth naming - because it’s at a point that feels beyond our ability to intervene. 

Accepting defeat on this front is likely a death sentence for most of the living things on the planet.


All systems produce what they are perfectly optimized to produce (Deming). And what we’re getting today is widening inequality, lapsing social institutions, a loneliness epidemic, rising authoritarianism, and anti-knowledge discourse (i.e., cancel culture on the Left and “alternative facts” on the Right). This is what our system is currently optimized for.

We could decide to design for other outcomes. We could make other choices. 

If incentives were turned toward producing outcomes that allow humans and the biosphere to flourish, we would unleash the limitless creativity that is our human birthright. Then, buoyed by the competitive zeal of our capitalist arena, we could ride the waves of our market economy toward progress. 

As neoliberal ideology wanes, it is less and less likely to be controversial that we should be putting our thumb on the scales in the direction we want our economy to flow. But what ought we aim at? 

This question is at once simple and impossible to answer. I would love to simply apply my own moral lens; off the bat, I can easily name several specific ends (e.g., carbon neutrality, biodiversity, scientific discovery, accessibility, physical health and mental wellbeing!). Or, more broadly, I’d love to put forth my personal values: incentivize curiosity, rationality, and compassion.

But those would just be my values. I loved how James Manyika responded when asked to unpack his conception of “AI alignment.”  

“There’s also a question of alignment, which many in the field have thought about for a long time, going all the way back to Alan Turing. When we say alignment, we mean: How does society make sure that these [AI] systems do what we want? That they are aligned with our goals, with our preferences, with our values? The issue is as much about us as it is about how we build these systems. For example, you and I can say we want a system aligned with our values. But which ones? Especially in a complex and global world, involving many people and places with varying cultures and views and so on—these are the classic problems of normativity. These are questions about us.”***

We are going to have to start answering these critical questions as a collective: What do we value?

Shifting the foundational principles on which our society is built is anything but simple. But we must be clear-eyed about what we are actually aiming to change - anything less significant will not dislodge us from the deep, deep ruts (canyons, even!) our patterns of behavior have dug us into. Singularly focused on our problem, we can be ruthless - while also pragmatic - about the steps we take to get out. The climb out will be anything but easy, but it is possible. The economy is a human-created system; it is humanity’s responsibility to redesign it. If we are interested in solving the multiple crises we’re facing (and the multitude more looming on the horizon), this is the aperture for our focus. Call it our “f/16 approach.”

As I’ve previously written, in order to shift systems at such a massive level, we’re going to have to use the lever of collaboration. The AI ethics vs. AI safety debate is a distraction. Yes, ethics, AND safety. AND climate resilience, AND economic and racial justice, AND and and…

What is the world we want to create? 

This question evokes for me a different “AI” - Appreciative Inquiry: an affirmative approach to taking on challenges in organizations, rather than the traditional, deficit-based approach. The founder of Appreciative Inquiry, David Cooperrider, describes the method’s basic assumption: “that an organization is a mystery that should be embraced as a human center of infinite imagination, infinite capacity, and potential.” And he describes organizations as “creative centers of human relatedness, alive with emergent and unlimited capacity.”****

We could sub-in “society” for “organization.” Here we are, limitless creative potential and ridiculously powerful tools - which are only getting more powerful by the day - at our collective disposal. What an incredible moment for self-authorship as a species.


Following Manyika’s questions, deciding where to start regulating is another question of “how”: how do we go about the massive task of rearranging the underlying principles of our economic structures so that incentives are re-aligned with human values?

Coercing people to do something isn’t the angle we typically choose because we value freedom and autonomy. But we are quite capable of disincentivizing things that we know will harm us. We can set new incentives by adding new guardrails: how must people not behave? 

The recent FCC ban on AI-generated voices in robocalls seems to have missed this point. A tool (“what” is being used) is agnostic. People will find beneficial and harmful ways to use any tool; all software fits into this category. On one hand, I’m incredibly excited to support a startup that is creating new tools to help people focus deeply on solving our most complex problems. Yet I’ve lamented that those tools will also be used by people whose worldviews and aims are completely opposite my own, as they leverage the same world-class tools to achieve their goals. Regulating the tool itself is a fool’s errand, particularly at the rate that new tools are becoming available. 

What we can put in place are guardrails around how tools are allowed to be used. Some basic (obvious) potential boundaries would include: you must not use these new tools to ____________

  • deceive.
  • discriminate.
  • do anything that exceeds a human’s ability to turn off the tool.
  • profit from others using your tools in these ways when you could have foreseen and prevented it.

At least a couple of these boundaries might sound familiar - because we already have them - not with regard to a specific tool, but as general rules about what is acceptable behavior within our society. (Many lapses in behavior notwithstanding.)

Regulating AI tools is not the task before us. It does not make sense to slow the progress of innovation when the thing we fear is not the tools themselves, but the ways tools will be used and the emergent second- and third-order effects of having those tools. We have little hope of controlling how tools will be used/misused. 

AI presents, at its core, a challenge of managing changes in efficiency - the rapidity of change within society will continue to compound at mind-boggling speed. Efficiency on steroids requires a substantial upgrade to our governance to keep up. We need to dramatically accelerate our sensing and responding: understanding the outcomes our incentives and guardrails are producing, and updating them when we don’t like those outcomes. We need an economic operating system that gets updated at least as frequently as the laptop I’m writing this on.

Boundaries or rules are mechanisms for governing how we should be using the tools we have. On the what front - i.e. what we ought to build - we can also provide positive incentives to concentrate the energy of technologists on developing tools that are useful in solving actual problems. Tools with purpose, not tools made just so someone could exploit a market opportunity.*****

We can also incentivize creating those tools in ways that increase their likely benefits and reduce their likelihood to do accidental harm. For instance, provide incentives to incorporate diverse voices into the design and development of tools so that we can anticipate and preemptively solve for more of the potentially emergent qualities of that new technology.

Disincentivize bad behaviors. Remove friction to prosocial behaviors. Monitor emergent outcomes. Further refine the guardrails and incentives. 🔁

This will be an ongoing cycle which requires drastically more agile governance mechanisms than exist today. To meet the challenge - I mean, make the most of this opportunity - we will need to: 

  • Clarify and make explicit our collective values.
  • Adopt far more adaptive mechanisms of governance.
  • And, democratize financial and intellectual resources, enabling everyone to adopt best practices proactively.


If you have been worried about the challenges brought on by this new technology, you’re not actually afraid of AI, or even of capitalism. You’re afraid of an exponentially more-efficient version of our current, pathologically incentivized economy.

We are at a critical moment to invest in this economy-wide OS upgrade. By consciously directing our economic system into guardrails that align with our collective values, we can unleash creativity and innovation towards outcomes that prioritize the well-being of humanity and the planet. Once incentives are aligned with collective human values, innovation, climate, and a host of other issues will resolve themselves.

If we manage to reengineer our incentives - from cheap ➡️ to holistic value, from efficiency ➡️ to deliberate mindfulness, and from valuing based on scarcity ➡️ to based on effectiveness at generating human flourishing… 

Can you even imagine the world we would be living in? 




*"Give Every AI a Soul - or Else"

**Shift Happens: A Conversation with Taj James and Kat Taylor

***Google Dialogues: On AI, society, and what comes next, p. 9

****Appreciative Inquiry Handbook, 2nd Ed. p. 16-17. // Another favorite Cooperrider quote: “We live in the worlds our questions create.”

*****Though I will note, the Father of Modern Management, Peter Drucker, said: “Every single social and global issue of our day is a business opportunity in disguise.”
