DoWhy: An End-to-End Library for
Causal Inference
Amit Sharma (@amt_shrma), Emre Kıcıman (@emrek)
Microsoft Research
Causal ML group @ Microsoft:
https://www.microsoft.com/en-us/research/group/causal-inference/
Code: https://github.com/microsoft/dowhy
Paper: https://arxiv.org/abs/2011.04216
From prediction to decision-making
Decision-making: Acting/intervening based on analysis
• Interventions break correlations used by conventional ML
• The feature with the highest importance score in a prediction model,
• Need not be the best feature to act on
• May not even affect the outcome at all!
For decision-making, need to find the features that cause the outcome &
estimate how the outcome would change if the features are changed.
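To make this concrete, here is a minimal stdlib-only simulation (the data-generating process is invented for illustration): a hidden variable z drives both a feature x and the outcome y, so x is an excellent predictor of y, yet intervening on x does nothing.

```python
import random

random.seed(0)

# Hypothetical toy model: z causes both x and y; x has no effect on y.
n = 20000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.1) for zi in z]      # x closely tracks z
y = [2 * zi + random.gauss(0, 0.1) for zi in z]  # y is caused by z alone

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# x is an excellent *predictor* of y ...
print(corr(x, y))        # close to 1

# ... but setting x by fiat (an intervention, breaking its link to z)
# leaves y untouched, so x is a useless feature to act on.
x_new = [random.gauss(0, 1) for _ in range(n)]
print(corr(x_new, y))    # close to 0
```

A prediction model would rank x as its most important feature, yet acting on x changes nothing.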
Many data science Qs are causal Qs
• A/B experiments: If I change the algorithm, will it lead to a higher
success rate?
• Policy decisions: If we adopt this treatment/policy, will it lead to a
healthier patient/more revenue/etc.?
• Policy evaluation: knowing what I know now, did my policy help or
hurt?
• Credit attribution: are people buying because of the
recommendation algorithm? Would they have bought anyway?
In fact, causal questions form the basis of almost all scientific inquiry
Prediction
• Assume: P_train(W, A, Y) = P_test(W, A, Y)
• Estimate: min L(ŷ, y)
• Evaluate: cross-validation
Causation
• Assume: P_train(W, A, Y) ≠ P_test(W, A, Y)
Two Fundamental Challenges for Causal Inference
1. Assumptions: multiple causal mechanisms and estimates
can fit the same data distribution.
2. Evaluation: estimation is about different data
distributions than the training distribution
(no easy “cross-validation”).
Real World: do(A=1) vs. Counterfactual World: do(A=0)
We built the DoWhy library to make assumptions front-and-center
of any causal analysis.
- Transparent declaration of assumptions
- Evaluation of those assumptions, to the extent possible
Most popular causal library on GitHub (>2.4k stars, 300+ forks)
Taught in third-party tutorials and courses: O’Reilly, PyData, Northeastern, …
An end-to-end platform for doing causal inference
Existing libraries (EconML, CausalML, CausalImpact, tmle, …) focus on estimation.
A full analysis must also:
• Formulate the correct estimand based on causal assumptions
• Estimate the causal effect
• Check robustness
Input: data <action, outcome, other variables> + domain knowledge → output: causal estimate
Model causal mechanisms
• Construct a causal graph based on domain knowledge
Identify the target estimand
• Formulate the correct estimand based on the causal model
Estimate causal effect
• Use a suitable method to estimate the effect
Refute estimate
• Check robustness of the estimate to assumption violations
Input: data <action, outcome, other variables> + domain knowledge → output: causal effect
[Diagram: DoWhy pipeline over an example graph with action w, outcome, and variables v1, v2, v3, v5]
DoWhy provides a general API for the four
steps of causal inference
1. Modeling: Create a causal graph to encode assumptions.
2. Identification: Formulate what to estimate.
3. Estimation: Compute the estimate.
4. Refutation: Validate the assumptions.
We’ll discuss the four steps and show a code example using DoWhy.
I. Model the assumptions using a causal graph
Convert domain knowledge to a formal
model of causal assumptions
• A → B or B → A?
• A causal graph implies conditional statistical
independences
• E.g., A ⫫ C, D ⫫ A | B, …
• Identified by d-separation rules [Pearl 2009]
• These assumptions significantly impact the
causal estimate we’ll obtain.
[Example graph over nodes A, B, C, D]
Key intuitions about causal graphs
• Assumptions are encoded by missing edges, and direction of edges
• Relationships represent stable and independent mechanisms
• Graph cannot be learnt from data alone
• Graphs are a tool to help us reason about a specific problem
Example Graph
Assumption 1: User fatigue does not
affect user interests
Assumption 2: Past clicks do not
directly affect outcome
Assumption 3: Treatment does not
affect user fatigue.
..and so on.
[Graph: treatment T → outcome Y, with nodes User Interests, User Fatigue, Past Clicks]
Intervention is represented by a new graph
[Intervention graph: same nodes (User Interests, User Fatigue, Past Likes, T, Y), with the incoming edges to treatment T removed]
Want to answer questions about data that would be
generated by the intervention graph;
the observed data is generated
by the original graph.
II. Identification: Formulate desired quantity
and check if it is estimable from given data
Trivial Example: Randomized Experiments
• Observed graph is same as intervention graph
in randomized experiment!
• Treatment T is already generated independent of
all other features
• ⇒ P(Y | do(T)) = P(Y | T)
• Intuition: Generalize by simulating a randomized
experiment
• When treatment T is caused by other features Z,
adjust for their influence to simulate a randomized
experiment
[Graph: randomized treatment T → outcome Y]
Adjustment Formula and Adjustment Sets
Adjustment formula:
p(Y | do(T)) = Σ_Z p(Y | T, Z) p(Z)
Where 𝑍 must be a valid adjustment set:
- The set of all parents of 𝑇
- Features identified via backdoor criterion
- Features identified via “towards necessity” criterion
Intuitions:
- The union of all features is not necessarily a valid adjustment set
- Why not always use parents? Sometimes parent features are unobserved
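A tiny worked example of the adjustment formula on made-up binary distributions, contrasting the adjusted quantity with naive conditioning:

```python
# Toy binary world (all probabilities invented): Z confounds T and Y.
P_z = {0: 0.7, 1: 0.3}                       # P(Z = z)
P_t1_given_z = {0: 0.2, 1: 0.8}              # P(T = 1 | Z = z)
P_y1_given_tz = {(t, z): 0.2 + 0.4 * t + 0.3 * z
                 for t in (0, 1) for z in (0, 1)}  # P(Y = 1 | T = t, Z = z)

# Adjustment formula: p(Y=1 | do(T=1)) = sum_z p(Y=1 | T=1, z) p(z)
p_do = sum(P_y1_given_tz[(1, z)] * P_z[z] for z in (0, 1))

# Naive conditional: p(Y=1 | T=1) = sum_z p(Y=1 | T=1, z) p(z | T=1)
p_t1 = sum(P_t1_given_z[z] * P_z[z] for z in (0, 1))
P_z_given_t1 = {z: P_t1_given_z[z] * P_z[z] / p_t1 for z in (0, 1)}
p_naive = sum(P_y1_given_tz[(1, z)] * P_z_given_t1[z] for z in (0, 1))

print(p_do, p_naive)  # 0.69 vs ~0.79: plain conditioning overstates the effect
```

The gap arises because treated units disproportionately have Z = 1, which itself raises Y; weighting by p(z) instead of p(z | T=1) removes that distortion.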
Many kinds of identification methods
Graphical constraint-based
methods
• Randomized and natural
experiments
• Adjustment Sets
• Backdoor, “towards necessity”
• Front-door criterion
• Mediation formula
Identification under additional
non-graphical constraints
• Instrumental variables
• Regression discontinuity
• Difference-in-differences
Many of these methods can be used through DoWhy.
III. Estimation: Compute the causal effect
Estimation uses observed data to compute the target
probability expression from the Identification step.
For common identification strategies using adjustment sets,
E[Y | do(T=t), W=w] = E[Y | T=t, W=w]
assuming W is a valid adjustment set.
• For binary treatment,
Causal Effect = E[Y | T=1, W=w] − E[Y | T=0, W=w]
Goal: Estimating conditional probability Y|T=t when all
confounders W are kept constant.
[Illustration: control group vs. treatment group (cycling)]
Simple Matching: Match data points with the same
confounders and then compare their outcomes
Identify pairs of treated (𝑗) and
untreated individuals (𝑘) who are
similar or identical to each other.
Match := { (j, k) : Distance(W_j, W_k) < ε }
• Paired individuals have almost
the same confounders.
Causal Effect = Σ_{(j,k) ∈ Match} (y_j − y_k)
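A stdlib-only sketch of simple matching on an assumed data-generating process (true effect of t is 5), contrasted with the biased naive difference in means:

```python
import random

random.seed(1)

# Assumed DGP: w confounds t and y; the true effect of t on y is 5.
n = 2000
data = []
for _ in range(n):
    w = random.random()
    t = 1 if random.random() < w else 0        # high-w units more likely treated
    y = 5 * t + 10 * w + random.gauss(0, 0.1)
    data.append((w, t, y))

treated = [d for d in data if d[1] == 1]
control = [d for d in data if d[1] == 0]

# Naive difference in means is badly biased: treated units have higher w.
naive = (sum(y for _, _, y in treated) / len(treated)
         - sum(y for _, _, y in control) / len(control))

# Matching: pair each treated unit with the nearest control on the confounder.
eps = 0.01
diffs = []
for wj, _, yj in treated:
    wk, _, yk = min(control, key=lambda d: abs(d[0] - wj))
    if abs(wk - wj) < eps:                      # Distance(Wj, Wk) < epsilon
        diffs.append(yj - yk)
matched_effect = sum(diffs) / len(diffs)

print(naive, matched_effect)  # naive ~8.3, matched close to 5
```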
Challenges of building a good estimator
• Variance: If we have a stringent matching criterion, we may obtain
very few matches and the estimate will be unreliable.
• Bias: If we relax the matching criterion, we obtain many more
matches but now the estimate does not capture the target estimand.
• Uneven treatment assignment: If very few people receive treatment,
the estimate suffers from both high bias and high variance.
Need better methods to navigate the bias-variance tradeoff.
Machine learning methods can help find a
better match for each data point
Synthetic Control: If a good match
does not exist for a data point, can we
create it synthetically?
Learn ŷ = f_{t=0}(w) and ŷ = f_{t=1}(w).
Assuming f approximates the true
relationship between Y and W,
Causal Effect = Σ_i [ t_i (y_i − f_{t=0}(w_i)) + (1 − t_i) (f_{t=1}(w_i) − y_i) ]
[Plot: outcome Y vs. confounder W, with fitted curves for the T=1 and T=0 groups]
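The same idea as a sketch: fit one (here, linear) outcome model per treatment arm on an assumed data-generating process and plug the predictions into the formula above, averaged over units:

```python
import random

random.seed(2)

# Assumed DGP (same flavor as the matching example): true effect of t is 5.
n = 2000
rows = []
for _ in range(n):
    w = random.random()
    t = 1 if random.random() < w else 0
    y = 5 * t + 10 * w + random.gauss(0, 0.1)
    rows.append((w, t, y))

def linfit(xs, ys):
    """Least-squares line; returns a callable prediction function."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

# Fit one outcome model per treatment arm: f_{t=0} and f_{t=1}.
f0 = linfit([w for w, t, _ in rows if t == 0], [y for _, t, y in rows if t == 0])
f1 = linfit([w for w, t, _ in rows if t == 1], [y for _, t, y in rows if t == 1])

# The slide's formula, averaged over units to give the average effect.
effect = sum(t * (y - f0(w)) + (1 - t) * (f1(w) - y)
             for w, t, y in rows) / n
print(effect)  # close to 5
```

For treated units the model imputes the missing control outcome, and vice versa, so every data point contributes a synthetic comparison.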
Other ML methods generalize estimation to
continuous treatments
The standard predictor ŷ = f(t, w) + ε may
not provide the right estimate for ∂y/∂t.
Double-ML [Chernozhukov et al. 2016]:
• Stage 1: Break conditional estimation
into two prediction sub-tasks:
y = g(w) + ỹ
t = h(w) + t̃
ỹ and t̃ refer to the unconfounded variation in
Y and T respectively after conditioning on w.
• Stage 2: A final regression of ỹ on t̃ gives
the causal effect:
ỹ ~ β t̃ + ε
[Plots: outcome Y vs. confounder W and treatment T; residual outcome Ỹ vs. residual treatment T̃]
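A minimal sketch of the two stages using plain linear regressions and no cross-fitting (which a full Double-ML implementation would add), on an assumed linear data-generating process with true effect 5:

```python
import random

random.seed(3)

# Assumed DGP: y = 5*t + 10*w + noise, with treatment t confounded by w.
n = 4000
w = [random.random() for _ in range(n)]
t = [1.0 if random.random() < wi else 0.0 for wi in w]
y = [5 * ti + 10 * wi + random.gauss(0, 0.5) for wi, ti in zip(w, t)]

def slope_intercept(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

# Stage 1: predict y from w and t from w; keep the residuals.
by, ay = slope_intercept(w, y)
bt, at = slope_intercept(w, t)
y_res = [yi - (ay + by * wi) for wi, yi in zip(w, y)]
t_res = [ti - (at + bt * wi) for wi, ti in zip(w, t)]

# Stage 2: regress residual outcome on residual treatment.
beta, _ = slope_intercept(t_res, y_res)
print(beta)  # close to the true effect of 5
```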
Depending on the dataset properties,
different estimation methods can be used
Simple Conditioning
• Matching
• Stratification
Propensity Score-Based [Rosenbaum & Rubin 1983]
• Propensity Matching
• Inverse Propensity Weighting
Synthetic Control [Abadie et al.]
Outcome-based
• Double ML [Chernozhukov et al. 2016]
• T-learner
• X-learner [Kunzel et al. 2017]
Loss-Based
• R-learner [Nie & Wager 2017]
Threshold-based
• Difference-in-differences
All these methods can be called through DoWhy.
(directly or through the Microsoft EconML library)
IV. Robustness Checks: Test robustness of
obtained estimate to violation of assumptions
Obtained estimate depends on many (untestable) assumptions.
Model:
Did we miss any unobserved variables in the assumed graph?
Did we miss any edge between two variables in the assumed graph?
Identify:
Did we make any parametric assumption for deriving the estimand?
Estimate:
Is the assumed functional form sufficient for capturing the variation in
data?
Do the estimator assumptions lead to high variance?
Best practice: Do refutation/robustness tests
for as many assumptions as possible
UNIT TESTS
Model:
• Conditional Independence Test
Identify:
• D-separation Test
Estimate:
• Bootstrap Refuter
• Data Subset Refuter
INTEGRATION TESTS
Test all steps at once.
• Placebo Treatment Refuter
• Dummy Outcome Refuter
• Random Common Cause Refuter
• Sensitivity Analysis
• Simulated Outcome Refuter / Synth-validation [Schuler et al. 2017]
All these refutation methods are implemented in DoWhy.
Caveat: They can refute a given analysis, but cannot prove its correctness.
Example 1: Conditional Independence Refuter
Through its edges, each causal graph
implies certain conditional independence
constraints on its nodes. [d-separation, Pearl
2009]
Model refutation: Check if the observed
data satisfies the assumed model’s
independence constraints.
• Use an appropriate statistical test for
independence [Heinze-Deml et al. 2018].
• If not, the model is incorrect.
[Example graph over nodes W, T, Y, A, B]
Conditional independencies:
A ⫫ B, A ⫫ T | W, B ⫫ T | W
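A sketch of such a check on simulated data (the graph structure, with A and B as independent causes of W and W alone driving T, is assumed for illustration), using linear residualization as a crude conditional-independence test:

```python
import random

random.seed(4)

# Assumed structure: A and B are independent causes of W; W alone drives T.
n = 5000
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]
w = [ai + bi + random.gauss(0, 0.5) for ai, bi in zip(a, b)]
t = [1.0 if wi + random.gauss(0, 0.5) > 0 else 0.0 for wi in w]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((yv - my) ** 2 for yv in ys)
    return cov / (vx * vy) ** 0.5

def residual(xs, zs):
    """Remove the linear effect of z from x (for a partial-correlation check)."""
    mz, mx = sum(zs) / len(zs), sum(xs) / len(xs)
    bcoef = (sum((z - mz) * (x - mx) for z, x in zip(zs, xs))
             / sum((z - mz) ** 2 for z in zs))
    return [x - mx - bcoef * (z - mz) for z, x in zip(zs, xs)]

print(corr(a, b))                            # A ⫫ B: near 0
print(corr(a, t))                            # A and T are dependent (via W)
print(corr(residual(a, w), residual(t, w)))  # A ⫫ T | W: near 0
```

If the assumed graph were wrong (say, A also affected T directly), the third number would stay clearly away from zero, refuting the model.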
Example 2: Placebo Treatment (“A/A”) Refuter
Q: What if we can generate a dataset where
the treatment does not cause the outcome?
Then a correct causal inference method should
return an estimate of zero.
Placebo Treatment Refuter:
Replace treatment variable T by a randomly
generated variable (e.g., Gaussian).
• Rerun the causal inference analysis.
• If the estimate is significantly away from zero,
then analysis is incorrect.
[Graphs: original treatment T → Y, vs. a randomly generated “placebo” treatment whose edge to Y should be absent]
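The refuter can be sketched in a few lines: run an analysis (here, a stand-in residual-on-residual estimator on an assumed data-generating process with true effect 3), then rerun the identical analysis with a randomly generated placebo treatment:

```python
import random

random.seed(5)

# Assumed DGP: y = 3*t + 2*w + noise, with treatment t confounded by w.
n = 5000
w = [random.gauss(0, 1) for _ in range(n)]
t = [1.0 if wi + random.gauss(0, 1) > 0 else 0.0 for wi in w]
y = [3 * ti + 2 * wi + random.gauss(0, 0.5) for wi, ti in zip(w, t)]

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def residual(xs, zs):
    b = slope(zs, xs)
    mz, mx = sum(zs) / len(zs), sum(xs) / len(xs)
    return [x - mx - b * (z - mz) for z, x in zip(zs, xs)]

def estimate_effect(treat):
    """Stand-in estimator: residual-on-residual slope, adjusting for w."""
    return slope(residual(treat, w), residual(y, w))

print(estimate_effect(t))        # the real analysis: close to 3

# Placebo refuter: replace t with pure noise and rerun the same analysis.
placebo = [random.gauss(0, 1) for _ in range(n)]
print(estimate_effect(placebo))  # should be close to 0
```

An estimator that reported a nonzero effect for the placebo treatment would be picking up bias rather than a causal signal.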
Example 3: Add Unobserved Confounder to
check sensitivity of an estimate
Q: What if there was an unobserved confounder
that was not included in the causal model?
Check how sensitive the obtained estimate is
after introducing a new confounder.
Unobserved Confounder Refuter:
• Simulate a confounder based on a given
correlation 𝜌 with both treatment and
outcome.
• Maximum Correlation 𝜌 is based on the maximum
correlation of any observed confounder.
• Re-run the analysis and check if the
sign/direction of estimate flips.
[Graphs: observed confounders W only, vs. the same graph with a simulated unobserved confounder U of both T and Y]
Walk-through of the 4 steps using
the DoWhy Python library
A mystery problem of two correlated variables:
Does x cause y?
[Scatter plot of x vs. y, and a dataset with columns x, y, w]
You can try out this example on Github:
https://github.com/microsoft/dowhy/blob/master/docs/source/example_notebooks/dowhy_confounder_example.ipynb
What else is in DoWhy?
• A unified, extensible API for causal inference that allows external
implementations for the 4 steps
• Can use estimation methods from external libraries such as EconML and CausalML.
• A convenient CausalDataFrame (contributed by Adam Kelleher)
• Pandas DataFrame extension with inbuilt functions for calculating causal effect.
Summary: DoWhy, a library that focuses on
causal assumptions and their validation
Growing open-source community: > 30 contributors
• Roadmap: More powerful refutation tests, counterfactual prediction.
• Please contribute! Would love to hear your ideas on Github.
Resources
• DoWhy Library: https://github.com/microsoft/dowhy
• arXiv paper on the four steps: https://arxiv.org/abs/2011.04216
• Upcoming book on causality and ML: http://causalinference.gitlab.io/
Goal: A unified API for causal inference problems, just like
PyTorch or TensorFlow for predictive ML.
Thank you – Amit Sharma
(@amt_shrma)