Is it ever justifiable to sacrifice one life to save many?

This is the latest in a series of questions posed to me by Microsoft Copilot when asked: What are some of the most profound and complex questions philosophers grapple with?

By Geoffrey Moore

Author – The Infinite Staircase: What the Universe Tells Us About Life, Ethics, and Mortality


For me, the question itself is more interesting than the answer.  As to the latter, I think the answer is, only if it is voluntary.  That is, it is justifiable to ask for the sacrifice, and it is justifiable to make it, but only if the person so doing willingly consents.  That said, there is material embedded in the question that calls out for our attention, beginning with, why are we even asking it?

For starters, note that this is a secular question, not a religious one.  It has its origins in a utilitarian philosophy that equates morality with actions that create the greatest good for the greatest number of people.  As a guideline for public policy, this is a very reasonable idea.  It does not work for moral decision-making, however, for two important reasons.

First, it flattens all human relationships into a single common denominator—the sheer number of those who are affected.  That is not how we live our lives, nor how we are expected to.  It is important to care for oneself first and foremost because, frankly, who is better suited than we are to do so?  More importantly, it is important to care for our family and loved ones with a level of love and respect we do not give to others, meaning their good is more important to us than that of those others, and should be.  This is core to our mammalian heritage.  The principle can be extended to communities in which we participate, again meaning their good is more important to us than that of other communities.  In short, we live within a series of concentric circles of trust and commitment, and moral action must take this landscape into account.

The other reason utilitarianism does not work for moral decision-making is that it is algorithmic.  It suppresses the differentiating elements in any given situation in order to support a deterministic decision.  As such, it represents an attempt by analytics to divorce its decision-making from narratives.  This is a mistake.  In the infinite staircase, narratives precede analytics.  They power our ethical commitments.  The proper role of analytics is to critique these narratives, not to displace them.  Reducing moral decision-making to rule-based processing is to miss a step in the staircase.  It abdicates our fundamental responsibility to authentically engage with the narratives in play, to experience their issues and concerns before coming to a judgment.  Moral algorithms are still useful as checkpoints in this process, but they cannot be used as a substitute for it.

That’s what I think.  What do you think?


Follow Geoff on LinkedIn | Geoffrey Moore Main Mailing List | Infinite Staircase Mailing List


Geoffrey Moore | Infinite Staircase Site | Geoffrey Moore X | Infinite Staircase X | Facebook | YouTube


Fernando Dias

I turn complexity into sharp strategies.

1w

💡Nuno Reis there’s the Utilitarian thinking again :)

Aman Monga

Senior Solar Consultant | Helping AU businesses cut power bills 80–90% with solar and battery rebates

2w

Morality is far from a simple calculation Geoffrey Moore

Mathias Carvalho, DBA - MSc

Digital media consultant, MBA visiting professor - Doctorate Candidate at ESDi - DBA, MSc, PMP, PRINCE2, SFC

2w

This could be compared to the Trolley Problem thought experiment (Foot, 1967), but in this case the decision maker poses as the victim, not an external actor to the situation. Still, it makes me wonder how individuals come to enact decision making. Is it really a personal choice, or does it build toward a self-inflicted "yes / no" moment out of a series of shared dynamics performed before and after the fact (the sacrifice does not necessarily come right after consent)? I use an interesting TED video in my classrooms, dealing with a similar thought experiment about autonomous vehicles crashing; who is to blame? The decision making does not happen at the moment of the (potential) incident, but rather beforehand. Still, the victim, at some point, chose to use the AV and potentially volunteered him/herself "out of time". Does it count? Dealing with shared decision making (are we really ever alone?) concerns how we think, trust, and act in society.

Glenn Inn

Collaboration | Full Stack | Medtech | Inspirational Speaker ►Full Stack | Embedded Systems

2w

Geoffrey Moore a variant of this question could be applied to the Vax/No-Vax debate. If a parent opts to Vax, they are exposing their child to a small but real risk of fatality. If they don't Vax, they are potentially exposing other children to fatality. Is the (risk of) sacrificing one child justifiable to save many others here?

Arthur Koenig

I Sell Speed (No, not that kind!) to Geotechnical Engineers

2w

So... who is making the decision to sacrifice? Do I get to choose whether I sacrifice, or are you going to choose that I get killed so that you can call it a sacrifice? If you choose, it is not a sacrifice; it is only a sacrifice if you are choosing for yourself.
