Bayesian ≠ Causal: Why GTM Metrics Still Miss the Mark (Razor #55)
Hey there, happy Wednesday! Glad you're here.
Last week’s Razor got into Bayesian modeling and how it can help prove the impact of effective brand marketing. But it also raised some valid concerns in the comments about whether or not Bayesian can model cause.
Bayesian models estimate what likely played a role. But they don’t explain what caused the result, or what might have happened if something had changed. Causal models test and measure what contributed and why. This week’s Razor is the follow-up.
Precision and Truth Are Not the Same
As mentioned in last week’s Razor, Bayesian models can help marketers get out of the last-touch attribution trap. They give us a way to estimate how likely it is that something contributed to a particular result.
But that’s not the same as knowing what caused the result and why.
Too many GTM teams still confuse probability with proof, and correlation with causation. More precision does not mean more truth.
Causal models answer a different question: what would have happened if we had done something else? That’s the question your CFO wants answered. And it’s the one your current model can’t touch.
We need to ask better questions instead of defending bad math.
Much of this discussion was sparked by Mark Stouse on LinkedIn. He clarified a common misconception: that Bayesian modeling is the same as causal inference. It’s not. And that distinction is what we’re getting into.
“Past is not Prologue.” - Mark Stouse, CEO, ProofAnalytics.ai
What Most GTM Teams Still Get Wrong
Most attribution models are shortcuts, not models.
Rule-based. Last-touch. “Influenced” revenue. They’re easy to run. Easy to explain. But disconnected from real buying behavior.
Attribution measures who gets credit, not contribution.
Bayesian modeling doesn’t rely on a single touchpoint or fixed credit rule. It estimates how likely it is that something played a role, like a channel, sequence, or delay.
Bayesian models give you a better approximation of influence than rule-based methods. But they stop short of answering the causal question: What made this happen and why?
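As a toy illustration of that estimation step (all numbers here are hypothetical, invented for the example): Bayes’ rule updates the probability that a channel played a role once a conversion is observed.

```python
def bayes_update(prior, p_data_if_role, p_data_if_no_role):
    """Posterior P(played a role | observed data) via Bayes' rule."""
    evidence = prior * p_data_if_role + (1 - prior) * p_data_if_no_role
    return prior * p_data_if_role / evidence

# Hypothetical inputs: a 30% prior that the channel influenced this buyer;
# conversions like this occur 60% of the time when the channel played a
# role and 20% of the time when it did not.
posterior = bayes_update(prior=0.30, p_data_if_role=0.60, p_data_if_no_role=0.20)
print(f"P(channel played a role | conversion) = {posterior:.2f}")  # 0.56
```

Useful, but notice what it delivers: an updated likelihood of involvement, not a statement about what the conversion would have looked like without the channel.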
Most attribution models never get past basic association (correlation).
As Jacqueline Dooley explains in this MarTech article, rule-based methods don’t reflect how buying actually works. They measure what happened, not why it happened.
In other words, most GTM teams are still stuck in Level 1 of the causal ladder (Judea Pearl’s ladder of causation: association, then intervention, then counterfactuals).
What Bayesian Models Are Good At
Bayesian models help you estimate whether something played a role. Not how much credit to assign.
That’s why they help measure things like brand recall, ad decay, and media saturation. They estimate influence, but they don’t explain the cause.
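The ad-decay idea, for instance, is often captured with a geometric adstock transform, a standard building block in media-mix modeling. A minimal sketch (the decay rate here is illustrative, not estimated from data):

```python
def adstock(spend, decay=0.5):
    """Geometric adstock: each period's ad effect carries over,
    shrinking by `decay` per period (illustrative value)."""
    carried = 0.0
    effects = []
    for x in spend:
        carried = x + decay * carried
        effects.append(carried)
    return effects

# A single burst of spend keeps influencing later periods.
print(adstock([100, 0, 0, 0]))  # [100.0, 50.0, 25.0, 12.5]
```

A model like this describes how influence persists. It still doesn’t say what sales would have been had you never run the ads.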
Bayesian vs. Causal Models: What They Can and Can’t Tell You
Mark isn’t the only one pushing for clarity here.
Quentin Gallea wrote an excellent article on Medium that details how machine learning models are built to predict outcomes, not explain them. They’re correlation engines. And when teams mistake those outputs for causal insight, bad decisions follow.
If your model only shows what happened under existing conditions, it can’t tell you what would’ve happened if something changed. That’s the whole point of causal reasoning.
Causal AI tools like Proof Analytics help teams run “what if” scenarios at scale. They combine machine learning, which handles the messiness of the data, with causal logic that explains what can actually make an impact.
What Causal Models Tell Us That Bayesian Models Can’t
Causal modeling shows what might have happened if you changed something, like timing, budget, or message.
That’s the question your CFO is already asking.
As Mark pointed out, Bayesian models can’t answer that. Unless you impose a causal structure, they just update likelihoods based on what already occurred.
If you’re only predicting what’s likely under existing conditions, you’re stuck in correlation.
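A minimal sketch of the difference, using a toy structural causal model in which seasonality drives both ad spend and sales (every number here is invented for illustration): the observed regression slope overstates the ad effect, while simulating an intervention recovers the true contribution.

```python
import random

random.seed(0)

# Toy structural causal model, all values invented for illustration:
#   season -> ad spend, season -> sales, ad spend -> sales (true effect = 2.0).
# Seasonality confounds the observed spend/sales relationship.

def draw(intervened_spend=None):
    season = random.gauss(0, 1)
    if intervened_spend is None:
        spend = season + random.gauss(0, 0.1)   # spend follows the season
    else:
        spend = intervened_spend                # do(spend): the arrow is cut
    sales = 2.0 * spend + 3.0 * season + random.gauss(0, 0.1)
    return spend, sales

n = 20000

# 1) Observational: regress sales on spend (what a correlation engine sees).
data = [draw() for _ in range(n)]
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
cov = sum((x - mx) * (y - my) for x, y in data) / n
var = sum((x - mx) ** 2 for x, _ in data) / n
print(f"observed slope: {cov / var:.2f}")   # inflated by seasonality (~5)

# 2) Interventional: set spend directly, breaking the confounding path.
y1 = sum(draw(1.0)[1] for _ in range(n)) / n
y0 = sum(draw(0.0)[1] for _ in range(n)) / n
print(f"causal effect:  {y1 - y0:.2f}")     # close to the true 2.0
```

Same data-generating world, two very different answers. The first number is what a dashboard or pure prediction model reports; the second is the one a budget decision actually needs.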
Click Path vs. Causal Chain
What to Measure
As mentioned, GTM dashboards show you what happened, like clicks. They don’t tell you what contributed to those clicks and why.
Bayesian models help you spot patterns.
How often something showed up.
How long it stuck.
How likely it played a role.
That’s useful. But it’s not enough.
Why? Because even though Bayesian models are probabilistic, they don’t model counterfactuals unless a causal structure is added. They estimate likelihoods, not outcomes under different conditions.
If you want to know whether something made a difference (or what would’ve happened if you did it differently) you need a model that can test it.
So instead of chasing more data, ask better questions of the data you already have.
An Aside on NBD-Dirichlet Modeling
If you’ve never looked at buyer behavior through a statistical lens, Dale Harrison’s 10-part LinkedIn series on the NBD-Dirichlet model is worth bookmarking. This series will help you understand how buyers typically behave in a category:
how often most people buy (70% of all purchases are made by light buyers, not heavy ones)
how rarely they buy from the same brand twice
why brand growth depends more on reaching more buyers than retaining the same ones
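The mechanics behind the NBD part of that model can be simulated in a few lines: each buyer has a purchase rate drawn from a Gamma distribution, purchases are Poisson given that rate, and the population counts then follow a negative binomial. The shape and scale values below are illustrative, not fitted to any real category.

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Poisson draw via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def nbd_purchases(n_buyers=50000, shape=0.5, scale=2.0):
    """Each buyer's rate ~ Gamma(shape, scale); purchases ~ Poisson(rate).
    The resulting population counts follow a negative binomial (the NBD)."""
    return [poisson(random.gammavariate(shape, scale)) for _ in range(n_buyers)]

counts = nbd_purchases()
light = sum(1 for c in counts if c <= 1) / len(counts)
print(f"mean purchases per buyer: {sum(counts) / len(counts):.2f}")
print(f"buyers with 0 or 1 purchase: {light:.0%}")
```

Even with an average of about one purchase per buyer, the bulk of the population buys rarely or not at all in a period, which is the skew Harrison’s series explores in depth.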
Final Thoughts
Rule-based attribution like first/last-touch only tracks what happened. It doesn’t explain what mattered.
Bayesian modeling gets you closer by helping you see patterns. But it doesn’t explain cause.
Causal models let you test what could make an impact, what may not, and why.
And as Mark Stouse pointed out, this only works if you’re using a proper causal framework. Bayesian models can’t tell you what caused something unless that structure is built in.
If you like this content, here are some more ways I can help:
Follow me for bite-sized tips and freebies throughout the week.
Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
Subscribe to my blog to get ongoing insights and strategies sent to your inbox. You can also subscribe here on LinkedIn if you prefer this format instead.
As always, jump in with any feedback.
Thanks again for subscribing. I appreciate you.
Cheers!
This article is AC-A and originally appeared on Achim’s Razor at KLOR Consulting. If you’d like to share it or refer to it, consider using the original. Thank you!