2nd Law of Software Dynamics: Uncertainty (Entropy) Can't Be Decreased

Last week, I drove from Dallas, Texas to Ely, Minnesota, leading a group of boys on a northern canoe expedition. As I departed Kansas City, I relied on my mapping app to guide the route. The logical path from Kansas City to Minnesota is a straight shot up Interstate 35, yet my app charted a bizarre 20+ hour course through South Dakota. Assuming it would correct itself, I followed I-35 North and checked periodically. But for the next hour it persistently insisted I reroute through South Dakota, even as I approached Des Moines. Occasionally it corrected itself, only to revert minutes later and urge a U-turn back south. We joked that it was some new Wall Drug marketing campaign, but as a system developer, I was left with deeper questions about uncertainty.

The Second Law of Thermodynamics tells us that the entropy of an isolated system never decreases; it either increases or remains constant. In software development, this appears at odds with our commitment to structure, clarity, and SOLID principles aimed at reducing system chaos. But software systems aren’t isolated; they’re closed. An isolated system allows no exchange of energy or matter, while a closed system allows energy to move but keeps matter fixed. In a closed system, entropy can be reduced internally, but only by displacing it elsewhere.

Think of the blue screen of death or the ubiquitous 404 error. These aren’t just failures; they’re moments where a system expels its entropy, offloading uncertainty onto the user.

In early software systems, the burden of uncertainty was handed to a human almost immediately. We retried, rebooted, reinstalled, or reformatted: the "Four Rs" of tech support. Over time, our tolerance for this shifted. We built intermediary systems to absorb entropy: logging, alerting, monitoring. Each added system became a new bucket where entropy could be deposited before it finally escalated to a human operator.
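As a minimal sketch of that layering (the alerter and pager objects and their notify/page methods are hypothetical stand-ins, not any real library), each handler below is one bucket deeper in the escalation chain:

```python
import logging

logger = logging.getLogger("orders")

class TransientError(Exception):
    """A failure the system can absorb on its own."""

class KnownFailure(Exception):
    """A failure the system understands but cannot resolve."""

def handle(order_id, process, alerter, pager):
    """Run one operation; each except clause is a bucket where
    uncertainty is deposited before escalating any further."""
    try:
        return process(order_id)
    except TransientError as exc:
        # Bucket 1: the log absorbs the entropy; nobody is interrupted yet.
        logger.warning("retrying order %s after transient error: %s", order_id, exc)
        return process(order_id)  # a single retry
    except KnownFailure as exc:
        # Bucket 2: an alert; the on-call dashboard holds the entropy for now.
        alerter.notify(f"order {order_id} failed: {exc}")  # hypothetical interface
    except Exception:
        # Bucket 3: page a human operator, the bucket of last resort.
        logger.error("unhandled failure on order %s", order_id, exc_info=True)
        pager.page(team="ops", context={"order_id": order_id})  # hypothetical interface
```

None of the entropy disappears in this chain; each layer merely holds it until the layer above must take over.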

Now we face a new evolution. Robotic systems, such as autonomous vehicles, automated flights, and other physical agents, are open systems capable of acting on the physical world. They receive entropy from digital systems and convert it into physical change.

In Kansas City, I knew the route was wrong. The system pushed entropy to me through its interface, and I corrected the course manually. In the near future, these decisions may fall to machines. Will they follow erroneous instructions? Will they question them? When entropy emerges from a cascade of subtle failures, will they even detect it?

This isn’t about one bug or one bad input. It’s about emergent failure: entropy distributed across many interconnected systems. Try-catch blocks won’t catch it, because nothing throws an exception when every subsystem returns a well-formed but wrong answer. In many systems, the current paradigms simply don’t account for this scale of complexity.
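No try-catch in the navigation stack would have flagged my South Dakota route; every service returned a valid-looking answer. Detecting that kind of entropy takes a cross-check against an independent baseline. Here is a minimal sketch (the average speed and slack factor are invented for illustration) of an agent that questions guidance instead of blindly following it:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two coordinates."""
    r = 3959.0  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_is_plausible(route_hours, origin, destination, avg_mph=55.0, slack=1.5):
    """Suspect any route whose duration vastly exceeds a straight-line
    baseline, even though every subsystem that produced it reported success."""
    baseline_hours = haversine_miles(*origin, *destination) / avg_mph
    return route_hours <= baseline_hours * slack

# Kansas City -> Ely, MN is roughly 620 miles as the crow flies.
kc, ely = (39.10, -94.58), (47.90, -91.87)
print(route_is_plausible(20.5, kc, ely))  # False: the South Dakota detour is questioned
print(route_is_plausible(12.0, kc, ely))  # True: a direct run up I-35 passes
```

The heuristic itself matters less than where it lives: outside any single component, comparing two sources of truth that each look healthy on their own.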

We are entering a time where managing entropy is no longer a backend concern but a system architecture concern. As we build complex, autonomous networks, the critical question becomes: where is the entropy going?

Every time we simplify a system or shield a user from uncertainty, we are pushing that uncertainty somewhere else. Are we aware of where it’s going? Do we understand what entropy looks like in that context? And most importantly—can that system handle it?

In the end, reducing uncertainty in one place is only meaningful if we know where we’ve moved it.
