Two “Gotchas” that Make Predictions Go Crazy-Wrong

There’s an old saying, “Predicting the future is easy. Getting it right is hard.” As a former research analyst at IDC, a leading market intelligence firm, I wrote predictions about the marketing profession each year. I can tell you from experience that just getting a prediction into the ballpark of accuracy takes a lot of work, and getting it right is nearly impossible.

Since leaders base business decisions (in part) on predictions, it’s useful to understand two “gotchas” that frequently cause predictions to careen horribly off the mark: first, many predictions start out already biased, and second, the future is nonlinear.

Before I talk about these gotchas, here’s a fun reminder of how crazy-wrong some predictions can be:

In 1876, William Preece of the British Post Office said, “Americans have need of the telephone, but we do not. We have plenty of messenger boys.”

In 1906, composer John Philip Sousa lamented that mechanically produced music would wipe out all interest in amateurs making music. In fact, today about 1 in 5 Americans play a musical instrument.

In 1946, Darryl Zanuck of 20th Century Fox, said, “Television won’t be able to hold onto any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.” (What would Darryl think about our tiny phone screens?!)

While most failed predictions tend to be of the doomsday variety, some are hopeful.

In 1930, economist John Maynard Keynes projected that by 2030, increased prosperity would allow people in industrialized societies to enjoy a 15-hour work week.

In 2004, Bill Gates told the BBC that “[Email] spam will be a thing of the past in two years.”

The First Gotcha: Predictions Reveal Cognitive Biases

If a prediction starts out on the wrong foot, the chances that it will right itself are slim, and starting out wrong is super easy to do because human thinking is subject to many unconscious biases. A common one is confirmation bias, our tendency to seek information that confirms our existing beliefs while blocking information that discredits our worldview. This bias motivates us to predict futures that prove how right we are. For example, a company investing heavily in a new technology will forecast gains from its benefits while downplaying its problems. Wishful thinking may add further positive spin. “Our revenue will grow 45% next year!” Maybe.

Negativity bias (our tendency to register and dwell on negative stimuli more readily than the good stuff) produces those doomsday forecasts, as in “things are awful, and they are going to get worse.” In 1970, Life magazine projected that “in a decade, urban dwellers will have to wear gas masks to survive air pollution.” Fortunately, that didn’t happen.

To counter the effects of bias, forecasters can seek out views that complement their own. At IDC I was privileged to get peer reviews from expert colleagues across a range of arenas, and I learned to honor the skeptics. Wise forecasters also leverage base rates, which are historical trends. For example, say a company has gained 10% more customers in each of the past five years. That is a solid base rate for forecasting next year’s growth, and it makes far more sense than a wishful projection that the company will gain 25% more. Well-constructed analytical models can reduce bias by leveraging data-supported trends.
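To make the base-rate idea concrete, here’s a minimal sketch in Python. It’s my illustration, not an IDC model; the growth rates and customer count are made up to mirror the example above.

```python
# A minimal, illustrative base-rate forecast. All numbers are hypothetical
# and simply mirror the example in the text (five years of roughly 10% growth).

historical_growth = [0.09, 0.11, 0.10, 0.10, 0.10]  # yearly customer growth rates

# Base rate: assume next year looks like the historical average.
base_rate = sum(historical_growth) / len(historical_growth)

current_customers = 50_000                       # hypothetical current customer count
base_rate_forecast = current_customers * (1 + base_rate)
wishful_forecast = current_customers * 1.25      # the "we'll grow 25%" projection

print(f"Base-rate forecast: {base_rate_forecast:,.0f} customers (+{base_rate:.0%})")
print(f"Wishful forecast:   {wishful_forecast:,.0f} customers (+25%)")
```

The point isn’t the arithmetic; it’s that the base-rate number is anchored in what has actually happened, while the wishful number is anchored in what we hope will happen.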

The Second Gotcha: The Future is Not Linear

But even with unbiased beginnings, virtually all forecasts miss the mark to some degree because people make linear predictions. We tend to assume that today’s trend will continue in the same direction, unchanged. In the real world, linearity doesn’t exist. Every trend gets buffeted by unforeseen interactions that send it careening in unexpected directions. No matter how good we are, we can’t envision every possibility.

[Image: Predictions go wrong because the future is nonlinear.]

Consider what was missed in the crazy-wrong predictions highlighted above. Zanuck missed how watching TV releases feel-good hormones, helps us relax, and would evolve into effortless entertainment. Keynes did not account for how prosperity would generate a treadmill of personal spending, driving people to work even longer hours. Gates didn’t anticipate how easy and cheap email blasting would become.

Chris Anderson, author and entrepreneur, reportedly said, “It’s interesting when you look at predictions made during the peak of the boom of the 1990’s, about ecommerce, or internet traffic, or broadband adoption, or internet advertising, they were all right – they were just wrong in time.” I’ve noticed that the technical or built aspects of predictions (things that are under our control) are more accurate than the human aspects (which are out of our control). Forecasters frequently miss the influence of psychology and social dynamics, such as how long it takes for change to happen (sometimes generations) or how irrational humans can be.

“Unknown-unknowns” can cause trends to veer into completely unexpected territory. The accidental discovery of penicillin is one example. In 1928, Alexander Fleming noticed how mold in a petri dish prevented bacteria from growing around it, kicking off a journey that eventually resulted in antibiotics. Any forecast of deaths from infection made before that surprise would have run far too high.

Could AI Help Make Better Forecasts?

AI could reduce forecast bias by looking at data more dispassionately. However, AIs inherit the attributes of their underlying data: if an AI system is trained on biased data, it will make biased forecasts. AI could also help identify the sources and impact of the interactions that might redirect trends. By juggling the multidimensionality of complex situations, AI could spot patterns that human brains would never see. But even AI won’t help with unknown-unknowns. For those, perhaps we can seek insight from the imaginations of artists and fiction writers; they’ve predicted some amazing things.

Decision-makers must hold predictions flexibly.

Because of bias and nonlinearity, it’s best to operate as though every prediction lacks some piece of vital information that could change everything. Use forecasts, but don’t fall in love with them or become angry when they go awry (because they will). Ferret out as many important elements as possible that could influence trends. Develop analytical models that consider multidimensionality. And keep our squishy, goofy, beautiful humanity in mind.

Predict all you wish, but the black swan has a nasty bite.

Comment from Mark Stouse:

One thing I’d add is that most people think of #Bias very differently— and more consciously— than it often is. Most of what changes things is the introduction of something we hadn’t thought of versus something that we over-emphasized. It’s the old saying about “it’s the bullet you never heard that kills you.” The biggest factor re nonlinearity that throws stuff into a new direction is that bullet thing again. It’s like space junk that collides, sending pieces out on new trajectories to ultimately impact other space junk. It’s the “ripple of time and effect” that makes knowing the future impossible, but calculating the odds on the bet you want to make both very possible and very necessary.
