False Precision and Framing Houses


My first real job, at 15 years old, was framing houses. As the new guy and the kid on the job, I spent the first few days carrying lumber around, fetching tools, and cleaning up the job site. At $11 an hour back then, it wasn’t bad.

 

But I wanted to learn some actual carpentry skills, and soon I was knocking out 2-by-4 walls using a 25-ounce framing hammer and 16-penny nails. I never got to use the nail gun, but I did get so proficient with the hammer that I could drive two nails into each stud with just two swings per nail.

 

(Humble brag alert: last fall I won the Hammerschlagen at Oktoberfest against a bunch of younger local tradesmen who probably use nail guns. Still got it!)


[Photo: Author displaying his Hammerschlagen skills]


Among the many things I learned, one important lesson was that sometimes good enough is good enough. When you are framing an internal wall, it must be square and the studs all need to be crowned in the same direction, sure. But if you cut it a touch long, you don’t take the wall down, disassemble it, and shave a hair off each stud before putting it back up.

 

You just bang it into square with the framing hammer and nail it in place. Dents from an errant hammer strike on finish trim are a big no-no. But those same donkey tracks on the stud aren’t going to matter once the drywall is hung.

 

Sometimes, greater precision doesn’t yield better outcomes.


False Precision

 

False precision, also known as overprecision, occurs when numerical data is presented with an unjustified level of precision. One example of this would be stating the population of a town is exactly 35,765, when the sampling error is plus or minus 100 people. Another example might be a company that claims a 95.82% efficacy in something based upon a self-reported sample of a few thousand respondents.
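To put a number on that second example, a quick back-of-the-envelope confidence interval shows how little those extra decimal places are worth. This is a minimal sketch in Python; the sample size of 2,500 is an assumption, since all we know is “a few thousand respondents.”

```python
import math

# Back-of-the-envelope check on the "95.82% efficacy" example.
# Sample size is an assumption; the text only says "a few thousand."
p_hat = 0.9582          # reported efficacy
n = 2_500               # assumed number of self-reported respondents

# Standard error of a sample proportion and an approximate 95% interval
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se

print(f"point estimate: {p_hat:.2%}")
print(f"approx. 95% CI: {p_hat - margin:.1%} to {p_hat + margin:.1%}")
# -> roughly 95.0% to 96.6%, before even considering self-reporting bias,
#    so the "0.82" in 95.82% is noise dressed up as information.
```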

 

The real problem this creates is precision bias. When numerical information is presented with the appearance of more precision than the underlying data justifies, it creates a greater sense of accuracy and confidence in the result than is warranted. Decisions based upon this flawed logic are then likely to be sub-optimal.

 

I like to say that any statement is 75% more believable with a statistic attached to it. A VC on a panel with me once responded – without missing a beat – “I 100% believe that!” Funny, but it’s true – a higher degree of perceived precision means the argument is more likely to be believed, even if it's based on flawed data or logic. 

 

Obviously, in an industry based on making decisions about an unknowable future with imperfect information using mathematical models (and always trying to look right!), this problem is seriously underappreciated.

 

I know I struggle with it all the time — getting lost in the weeds in a new, impressive dataset only to ultimately experience some unforeseen real-world consequence that overwhelms the significance of whatever effect I just uncovered.

 

It’s important to remember that in finance, mathematical models are just approximations of reality; they are not reality. There is a big difference between physical sciences, where mathematical laws are reality, and social sciences, which are probabilistic in nature. But people don’t think probabilistically; we think deterministically.

 

And this makes the false precision fallacy even stronger. Let’s look at a few scenarios drawn from real-world experience where this bias impacts real investment decisions.

 

Imagine the investment committee at a small non-profit foundation. Often, these are volunteers from the local community. They are usually professionals, but not always investment professionals. In this scenario, let’s assume this IC is presented with two choices of strategic asset allocation.

[Table: the two strategic asset allocation options, with estimates shown to two decimal places]

First of all, extending these outputs to two decimal places when they are based upon estimates on top of estimates is itself an example of precision bias. More insidious, however, is the fact that realized returns and volatility will certainly be very different from these estimates. In fact, the most accurate statement we could make is that volatility for both strategies will probably land somewhere between 8% and 14%.
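As a rough sanity check on that range, here is a minimal simulation of how far realized volatility can drift from a point estimate through sampling noise alone. The true volatility of 11%, the 10-year horizon, and monthly observations are illustrative assumptions, and the real-world range would be wider still once estimation error in the inputs and regime changes are layered on.

```python
import numpy as np

# Minimal sketch: dispersion of realized volatility from sampling noise
# alone. True annual vol of 11%, a 10-year horizon, and monthly returns
# are assumptions for illustration.
rng = np.random.default_rng(42)

true_annual_vol = 0.11
months = 120                                   # 10 years of monthly data
monthly_vol = true_annual_vol / np.sqrt(12)

# Simulate many 10-year paths and measure each path's realized annual vol
sims = 10_000
returns = rng.normal(0.0, monthly_vol, size=(sims, months))
realized_vol = returns.std(axis=1, ddof=1) * np.sqrt(12)

lo, hi = np.percentile(realized_vol, [5, 95])
print(f"90% of realized volatilities land between {lo:.1%} and {hi:.1%}")
# -> roughly 10% to 12% here, from sampling noise alone; add the estimation
#    error in the forward-looking inputs and regime changes, and quoting
#    volatility to two decimal places is false precision.
```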

 

Selecting one of these allocations based on this insignificant difference in volatility – and I’ve seen a committee do just that, arguing that 11.2% is just “too much risk” – is a bit like measuring with a micrometer and then cutting with a chainsaw.



It’s silly, really. We just can't be that precise.

 

And the problem gets worse the less relevant the data point is. When mixing public and private assets in an allocation, for example, illiquidity becomes another dimension of risk entirely, one that is missing from the decision if you are focused purely on volatility.

 

It’s a short trip from precision bias to framing bias, where how the data is presented is as important as – or perhaps more important than – what is actually presented. In this case, if you wanted to sell the investment committee on the first asset allocation, you would try to show how irresponsibly risky the second portfolio is, given the much higher volatility.


[Chart: the two volatility estimates plotted on a truncated axis]

On the other hand, if you showed the entire x-axis, you would see the chart below. Zooming out puts it into the context of a bigger picture, and the difference looks quite a bit less imposing.

[Chart: the same two volatility estimates with the full x-axis shown]
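The framing trick is easy to reproduce. The sketch below plots the same pair of numbers twice, once on a truncated axis and once on a full 0–100% axis; the 11.2% figure is from the committee example above, while 10.8% for the other allocation is a hypothetical stand-in.

```python
import matplotlib.pyplot as plt

# Sketch of the framing effect: the same two numbers on a truncated axis
# and on a full 0-100% axis. The 11.2% figure comes from the example above;
# 10.8% for the other allocation is a hypothetical stand-in.
vols = {"Allocation 1": 0.108, "Allocation 2": 0.112}

fig, axes = plt.subplots(1, 2, figsize=(9, 3))
settings = [
    (axes[0], (0.105, 0.115), "Truncated axis: looks alarming"),
    (axes[1], (0.0, 1.0), "Full axis: barely distinguishable"),
]
for ax, xlim, title in settings:
    ax.barh(list(vols.keys()), list(vols.values()))
    ax.set_xlim(*xlim)
    ax.set_title(title)
    ax.set_xlabel("Estimated volatility")

plt.tight_layout()
plt.show()
```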


Of course, these two charts plot the same two data points. In many investment contexts, the data can paint entirely different pictures depending on the time frame or data set selected, not merely on how you choose to frame it. For example, let’s examine which of two companies is more highly levered.


Assume both companies have $10 million of EBITDA. However, that is where the similarities end. Company 1 has a total enterprise value of 10x EBITDA, while Company 2 is valued at 25x EBITDA. Company 1 has $50 million of debt, but Company 2 has $63 million.


[Table: EBITDA, enterprise value, and debt for Company 1 and Company 2]

 

Now, if I wanted to argue that Company 1 was more highly levered, I would point to its debt-to-enterprise-value ratio of 50% versus just 25% for Company 2. Clearly, the second capital structure is less levered.

[Chart: debt as a percentage of enterprise value for the two companies]

Or is it? If instead I wanted to focus on debt-to-EBITDA, I would highlight that Company 2 is levered 6.3 times EBITDA, a full 1.3 turns more leverage than Company 1.

 

[Chart: debt-to-EBITDA multiples for the two companies]
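For completeness, both framings are simple to compute directly from the figures given above; the short sketch below just does the arithmetic both ways.

```python
# The two leverage framings, computed from the figures in the example.
companies = {
    "Company 1": {"ebitda": 10.0, "ev": 10 * 10.0, "debt": 50.0},  # $ millions
    "Company 2": {"ebitda": 10.0, "ev": 25 * 10.0, "debt": 63.0},
}

for name, c in companies.items():
    debt_to_ev = c["debt"] / c["ev"]            # framing 1: debt / enterprise value
    debt_to_ebitda = c["debt"] / c["ebitda"]    # framing 2: debt / EBITDA
    print(f"{name}: debt/EV = {debt_to_ev:.0%}, debt/EBITDA = {debt_to_ebitda:.1f}x")

# Company 1: debt/EV = 50%, debt/EBITDA = 5.0x
# Company 2: debt/EV = 25%, debt/EBITDA = 6.3x
# Each metric, taken alone, "proves" a different company is the riskier one.
```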


Now, this may not be a terribly realistic example. And I’ll leave it to the reader to determine which of the arguments has more merit. The crux remains: it is hard to be precise when different data points can tell you different stories.

 

But what about when data flips to tell a different story entirely?

 

Put selling is a strategy that generates small, consistent gains: puts sold with strikes well below the market price generally expire worthless, leaving the seller with the premium collected. These small gains add up over time, and such option-writing or short-volatility strategies can be very profitable, right up until they are not.

 

Using a random number generator to create a series of 500 daily returns between -0.08% and 0.23%, we get the cumulative return profile below. Not completely realistic, but not out of the realm of possibility for put selling, which can be consistently profitable for a long time.


[Chart: cumulative return over the 500 simulated trading days]
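Here is a minimal sketch of that simulation: 500 daily returns drawn uniformly between -0.08% and +0.23%, compounded into a cumulative return. The random seed is arbitrary, so exact figures will vary.

```python
import numpy as np

# Minimal sketch of the simulated put-selling return stream: 500 daily
# returns drawn uniformly between -0.08% and +0.23%. The seed is arbitrary.
rng = np.random.default_rng(7)
daily = rng.uniform(-0.0008, 0.0023, size=500)

cumulative = np.cumprod(1 + daily) - 1
print(f"cumulative return after 500 days: {cumulative[-1]:.1%}")
# -> a smooth, steadily rising equity curve of roughly +45%
```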


However, all it takes to wipe out those gains is one period where the reference asset trades through the strike price, which happens when the stock market sells off. In that scenario – say a 7% market drop against a short put struck 5% below the market – losses on the short put position can escalate rapidly. To illustrate, after 500 mostly profitable trading days, I made day 501 a loss of 25%.

[Chart: cumulative return including the 25% loss on day 501]


This is often called picking up pennies in front of a steamroller, for obvious reasons. What looked like a safe, consistent strategy changes instantly, as the descriptive statistics below show. The annualized return plunges from over 20% to just 4% – still profitable, but now merely the cash rate – and look how the risk statistics change. Volatility explodes from 1.4% to 18%, and skew actually flips, from slightly positive to massively negative. The Sharpe ratio plunges from 12 to essentially zero.

 

Here are the descriptive statistics from the two periods:


[Table: descriptive statistics for days 1–500 versus days 1–501]
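For anyone who wants to reproduce the flip, the sketch below regenerates a series of the same kind, appends a single -25% day, and recomputes the summary statistics. Exact values depend on the seed, but the pattern – a collapsed return, exploding volatility, and sharply negative skew – is the point.

```python
import numpy as np

def describe(returns, label):
    """Annualized return, volatility, skew, and Sharpe (cash rate ignored)."""
    years = len(returns) / 252
    ann_ret = np.prod(1 + returns) ** (1 / years) - 1
    ann_vol = returns.std(ddof=1) * np.sqrt(252)
    z = (returns - returns.mean()) / returns.std(ddof=1)
    skew = (z ** 3).mean()
    print(f"{label}: return {ann_ret:6.1%}  vol {ann_vol:5.1%}  "
          f"skew {skew:+6.1f}  Sharpe {ann_ret / ann_vol:5.1f}")

rng = np.random.default_rng(7)
calm = rng.uniform(-0.0008, 0.0023, size=500)   # days 1-500
full = np.append(calm, -0.25)                   # day 501: the steamroller arrives

describe(calm, "days 1-500")
describe(full, "days 1-501")
# One additional observation collapses the annualized return, blows up
# volatility, and flips skew from slightly positive to sharply negative.
```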


Now, this isn’t the classic case of false precision like the earlier example. However, it highlights how data can convey a sense of unfounded accuracy. Someone who looked only at the risk-return characteristics before day 501 would have a completely inaccurate picture of the true profile. Instead of what looked like risk-free return, they actually had return-free risk. And all it took was a single data point to flip the picture!



This is why it’s important not to be seduced by the false sense of security that hard data provides, especially when one data point can invert the conclusion. That’s not to say data isn’t important; it’s just that the math is a model, not reality. Often, we have to look past limited, imperfect data to see the bigger picture.

 

Ironically, there’s an excellent mathematical analogy here.

Local versus global maxima

 

A maximum is the highest value produced by a mathematical function; conversely, a minimum is the lowest. For complex, non-linear functions, there can be multiple maxima and minima depending upon the range over which the function is evaluated.

 

Local maxima and minima are the highest and lowest values, respectively, within a constrained range of the function. Iterative solutions trained on a limited data set often arrive at these local extremes.


[Chart: local maximum and minimum over a constrained range of the function]


However, when you zoom out, the global – or absolute – maximum and minimum can be very different from the values calculated over local ranges. I think portfolio optimizers do something similar, falling into the trap of solving over specific, limited ranges of outcomes.


[Chart: global maximum and minimum over the full range of the function]
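A toy example makes the distinction concrete. The function and search ranges below are illustrative assumptions: a bounded local search settles on the smaller peak, while zooming out over a wider range finds the larger one.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy function with a modest local peak near x = 1 and a larger global
# peak near x = 4; the function and ranges are illustrative assumptions.
def f(x):
    return np.exp(-(x - 1) ** 2) + 2 * np.exp(-(x - 4) ** 2)

# Constrain the search to [0, 2]: the optimizer settles on the local maximum
local = minimize_scalar(lambda x: -f(x), bounds=(0, 2), method="bounded")

# Zoom out over a much wider range: the global maximum sits somewhere else
xs = np.linspace(-10, 10, 100_001)
x_best = xs[np.argmax(f(xs))]

print(f"local  maximum: x ~ {local.x:.2f}, f(x) ~ {f(local.x):.2f}")  # ~ (1.0, 1.0)
print(f"global maximum: x ~ {x_best:.2f}, f(x) ~ {f(x_best):.2f}")    # ~ (4.0, 2.0)
```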


Now, I cringe whenever I hear someone say investing is more art than science. I’m not sure that isn’t a euphemism for “ignore what the data says and buy what I’m selling instead.” However, being skeptical of incomplete information, short time periods, and biased databases – and layering empirical analysis with experiential perspective – is critical to avoiding some of these mistakes.


The Everything Store Missed Something

 

I recently came across another great story that illustrates this problem.

 

Years ago, executives at Amazon.com were receiving numerous complaints about long wait times on the customer service line, but they couldn’t seem to figure out the problem. Each week, Jeff Bezos gathered his leadership team for a business review meeting to cover every aspect of running the firm.

 

In one such meeting, the head of customer service proudly produced a graphic showing an average phone wait time of just 59 seconds before clicking through to the next slide. Bezos knew something wasn’t right, so he stopped the presenter right there and, in the middle of the meeting, called the customer service line on speakerphone.

 

The room went silent as the phone rang and rang. Sixty seconds passed, then two minutes. Then five minutes, and still no answer. After more than ten minutes of sitting on hold listening to Muzak, the illusion of precision was completely shattered.

 

So, what was the problem? The customer service team had been calculating the average wait time only for calls that were answered. Calls that went unanswered were never measured at all. The 59-second figure was both perfectly precise and completely wrong.
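The arithmetic of that mistake is easy to reproduce. In the hedged sketch below, the wait times are made-up numbers chosen only to show how excluding unanswered calls yields a precise but misleading average.

```python
import statistics

# Made-up wait times to show the survivorship problem: calls that were
# never answered don't show up in the "average wait" at all.
answered_waits = [35, 48, 52, 61, 70, 88]      # seconds, calls that got through
abandoned_waits = [610, 745, 900, 1200]        # seconds callers waited before giving up

reported = statistics.mean(answered_waits)
actual = statistics.mean(answered_waits + abandoned_waits)

print(f"reported average wait: {reported:.0f} seconds")     # 59 seconds, perfectly precise
print(f"average across all callers: {actual:.0f} seconds")  # several minutes, the lived experience
```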

 

Of course, Amazon fixed that problem, but the situation crystallized the importance of viewing data through a lens of experienced perspective in how Bezos ran his firm. Recalling that meeting years later, he reflected, “When the data and the anecdotes disagree, the anecdotes are usually right.”

 

I certainly love geeking out over a cool data set, scrubbing it and analyzing it in Excel to see what stories it can tell. But there are a few key lessons I try to keep in mind:


1. Be careful with automated output and mathematical models. They are models, not reality.

2. There should always be a strong fundamental reason behind a relationship, not just empirical data.

3. Take “optimizers” with a grain of salt. Estimates based on estimates are just rough estimates, and sometimes rules of thumb are just as good. Aim for robust rather than optimal.

4. Scenario and sensitivity analysis can help insulate our conclusions from the errors of point estimates. They also help build that broader perspective.

5. Look for effects across multiple data sets, multiple periods, and multiple asset classes. An effect that shows up everywhere is less likely to be a statistical artifact.

6. Use error margins or confidence intervals where you can, and think about the real significance of the effect.

7. Round appropriately and use “approximately” more often. Being humble is a great hedge against false precision.

Math in finance is just an approximation, because we are measuring results that depend on the behavior of market participants. Few things in our industry have the fixed effects found in the physical sciences. We need to view results with a healthy dose of skepticism and humility.

 

That’s not to say that we shouldn’t be measuring our cuts at all. In fact, another carpentry lesson is instructive here – it’s wise to measure twice and cut once.

 

Our job as fiduciaries is to help our clients meet their financial objectives, full stop. If we remain focused on achieving that, then sometimes good enough is good enough.

 

The opinions expressed in this article are solely those of the author, and do not necessarily represent those of any entities or organizations. This content is of a general nature and for information only, and you should not construe any such information or other material as legal, tax, investment, financial, or other advice. Past performance is not indicative of future results and should not be taken as an indication or guarantee of any future performance or prediction.
