== Understanding ==

The central obstacle is what statistician Donald Rubin called the '''Fundamental Problem of Causal Inference''': we can never observe both potential outcomes for the same unit at the same time. Either a patient received the drug (Y(1) observed, Y(0) unobserved) or they did not (Y(0) observed, Y(1) unobserved). We can never know what would have happened to the same person under the alternative treatment, i.e. the counterfactual.

Judea Pearl's '''Ladder of Causation''' describes three levels of causal reasoning:

# '''Association''' (rung 1): "What is?" Observing and predicting correlations. Standard ML lives here.
# '''Intervention''' (rung 2): "What if I do X?" Reasoning about the effect of deliberate actions. Requires a causal model.
# '''Counterfactual''' (rung 3): "What if I had done X instead?" Imagining alternate histories. Requires a complete structural causal model.

Most ML systems operate only on rung 1. To make reliable decisions and avoid discrimination, AI systems often need rung 2 or 3.

'''Why this matters for AI''':
* '''Spurious correlations''': A model that classifies pneumonia as lower risk may have learned that pneumonia patients sent to the ICU have lower final mortality, confusing treatment effect with baseline risk.
* '''Fairness''': Is a model discriminating based on race, or is it using variables that are correlated with race but causally related to the outcome? Causal fairness criteria give precise answers.
* '''Policy decisions''': If we deploy an AI to recommend interventions, we must understand the causal effect of those interventions, not just their correlation with past outcomes.
* '''Robustness''': Models that learn causal relationships rather than spurious correlations generalize better when the environment changes (distribution shift).
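As an illustration of the gap between rung 1 and rung 2, the sketch below simulates a confounded treatment. The variables, the linear data-generating process, and the effect sizes are hypothetical and chosen only for illustration: the naive associational comparison overstates the treatment effect, while adjusting for the observed confounder recovers something close to the true interventional effect.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder Z (e.g., baseline severity) that affects
# both who gets treated and the outcome.
z = rng.normal(size=n)

# Sicker units (higher Z) are more likely to receive the treatment.
t = (rng.normal(size=n) + z > 0).astype(float)

# True causal effect of the treatment on the outcome is +1.0;
# the confounder independently shifts the outcome by +2.0 per unit of Z.
y = 1.0 * t + 2.0 * z + rng.normal(size=n)

# Rung 1 (association): naive difference in means between treated and
# untreated units. Biased upward, because treated units have higher Z.
naive = y[t == 1].mean() - y[t == 0].mean()

# Rung 2 (intervention): back-door adjustment for the observed confounder,
# here via ordinary least squares on an intercept, T, and Z.
X = np.column_stack([np.ones(n), t, z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = coef[1]  # coefficient on T

print(f"naive (associational) estimate:     {naive:.2f}")     # well above 1.0
print(f"adjusted (interventional) estimate: {adjusted:.2f}")  # close to 1.0
</syntaxhighlight>

In this toy setup the adjustment works only because the single confounder Z is observed and the data-generating process is linear; with unobserved confounding, no amount of rung-1 data alone identifies the interventional effect.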