Long-Termism and the Ethics of the Deep Future
How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain.
Long-Termism and the Ethics of the Deep Future is the study of the distant tomorrow: a strand of ethical and political philosophy (~2000s–present) which argues that the most important moral consideration for humanity is the **long-term future**, the vast number of potential future people (or beings) who may exist over billions of years, and that our actions today have profound moral implications for this deep future. While traditional ethics focuses on present suffering and near-term welfare, **long-termism** focuses on existential risk, civilizational trajectory, and the moral weight of the future. From Toby Ord's *The Precipice* and existential risk research to population ethics and the Repugnant Conclusion, the field explores the ethics of deep time: why the future may carry **far more moral weight** than the present, and why some disagree profoundly with this view.
Remembering[edit]
- Long-Termism — The ethical view that positively influencing the long-term future is a key moral priority, because the expected number of future people is so large.
- Existential Risk — (Nick Bostrom, Toby Ord). A risk that could permanently curtail humanity's (or Earth's) long-term potential, through either extinction or permanent lock-in to a bad state.
- The Precipice — (Toby Ord, 2020). The book arguing that the current moment is the most dangerous in human history due to existential risks.
- Population Ethics — The branch of ethics dealing with questions about future people (Is a world with more people better? Does non-existence harm?).
- The Repugnant Conclusion — (Derek Parfit). The counter-intuitive result that a sufficiently large population of people with lives just barely worth living is better than a small population of very happy people (a worked comparison appears after this list).
- Trajectory Change — The idea that small actions today that nudge the long-term trajectory of civilization matter more than large actions that only help the present.
- Lock-In — A scenario in which the world becomes permanently fixed in a particular state (e.g. authoritarian AI governance), closing off all better futures.
- Strong Long-Termism — The view that, because the future is so large, **almost everything** of moral importance is about the future.
- Presentism (Ethical) — The counter-view that present people's suffering should take priority over speculative future populations.
- The Effective Altruism (EA) Movement — (See Article 630). The primary institutional home of long-termist ethics, directing resources toward existential risk reduction.
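The Repugnant Conclusion is easiest to see with numbers. Below is a minimal sketch under a simple total-utilitarian view; the population sizes and welfare values are arbitrary illustrative assumptions, not figures from Parfit. <syntaxhighlight lang="python">
# Illustrative only: population sizes and welfare units are arbitrary assumptions,
# chosen to show why totalist aggregation produces the Repugnant Conclusion.

def total_welfare(population, avg_welfare):
    """Total view: the value of a world is population size times average welfare."""
    return population * avg_welfare

world_a = total_welfare(population=1e7, avg_welfare=100.0)   # small world of very happy people
world_z = total_welfare(population=1e12, avg_welfare=0.01)   # vast world of lives barely worth living

print(f"World A (small, very happy):         {world_a:.2e}")
print(f"World Z (vast, barely worth living): {world_z:.2e}")
print("World Z ranked higher:", world_z > world_a)  # True -- the 'repugnant' result
</syntaxhighlight>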
Understanding[edit]
Long-termism is best understood through three ideas: scale, leverage, and uncertainty.
1. The "Astronomical" Stakes (Scale Argument): "A quadrillion future beings outweigh 8 billion present ones."
- (See Article 665). "If" "Humanity" (or "Its Successors") "Survive" for "1 Billion Years" and "Spread" "Through" "The Galaxy," "The Number" of "Future Beings" could be **"10^23."**
- "If" "Each" has "Moral Worth," "Their" "Combined" "Welfare" **"Dwarfs"** "That" of "All" "Current" "Living" "Beings."
- "Therefore," "Actions" that "Improve" **"The Long-Term Trajectory"** have "Enormous" "Expected Value."
- "The Future" is **"The Weight."**
2. The "Leverage" Opportunity (Current Moment): "We live at the hinge of history."
- (See Article 699). "Long-Termists" "Argue" that "The Current" "Era" is **"Unusually Important"** — "We Are Developing" "Technologies" (AI, Biotech, Space) that will "Shape" "The Trajectory" of "All" "Future" "Civilization."
- "Small" "Investments" in "Existential Risk Reduction" **"Now"** have "Enormous" "Expected Returns."
- "This Is" "The Logic" of "The Effective Altruism" "Movement."
- "Leverage" is **"Temporal."**
3. The "Uncertainty" Challenge (Critics): "We can't know the long-term effects of our actions."
- (See Article 665). "Critics" "Argue" that "Long-Termism's" "Calculations" are "Based" on "Speculative" "Numbers" (How "Many" "Future People"? "How Happy"?).
- "Prioritizing" "Speculative" "Future Populations" over "Certain" "Present" "Suffering" (e.g. "Preventing" "Malaria" vs. "Funding" "AI Safety") "Is" "Ethically" "Contested."
- **"Epistemic Humility"** (We "Cannot" "Know" "Long-Term Effects") "May Counsel" **"Focusing" on "The Present."**
- "Certainty" is **"The Counter-Argument."**
The 'Toby Ord' Career Move (2009): **Toby Ord** (Oxford philosopher) pledged **10% of his income** to effective charities for life, founding Giving What We Can. His subsequent work on long-termism influenced **billions of dollars** of philanthropic funding toward existential risk reduction. It showed that philosophy can have **real-world impact**.
Applying[edit]
Modeling 'The Long-Term Stakes' (calculating the expected value of existential risk reduction): <syntaxhighlight lang="python">
def calculate_xrisk_value(baseline_extinction_prob_pct, reduction_pct,
                          future_beings, qaly_per_being):
    """
    Shows the enormous expected value of even tiny reductions in existential risk.
    """
    baseline_p = baseline_extinction_prob_pct / 100
    # Absolute reduction in extinction probability (a relative cut of the baseline)
    reduction_p_absolute = baseline_p * (reduction_pct / 100)
    # Expected beings saved = reduction in extinction probability * future population
    expected_beings_saved = reduction_p_absolute * future_beings
    expected_value_qalys = expected_beings_saved * qaly_per_being
    return (f"Baseline X-Risk: {baseline_extinction_prob_pct}% | "
            f"Reduction: {reduction_pct}% of that\n"
            f"  Expected beings saved: {expected_beings_saved:.2e}\n"
            f"  Expected value: {expected_value_qalys:.2e} QALYs\n"
            f"  (For reference: eradicating malaria = ~2e8 QALYs total)")

# Reducing a 20% existential risk by 1% (relative), over a galactic timescale
# with 1e23 future beings at 80 QALYs each
print(calculate_xrisk_value(20, 1, 1e23, 80))
</syntaxhighlight>
- Philosophical Landmarks
- Parfit's Reasons and Persons (1984) → (See Article 732). Founded **population ethics**, including the Repugnant Conclusion.
- Bostrom's Superintelligence (2014) → (See Article 724). The **existential risk** framework; founding text of AI-focused long-termism.
- Ord's The Precipice (2020) → The comprehensive **case** for long-termism as a priority ethical framework.
- MacAskill's What We Owe the Future (2022) → The popular introduction to **strong long-termism**; a New York Times bestseller.
Analyzing[edit]
| For Long-Termism | Against Long-Termism |
|---|---|
| Astronomical numbers of future beings have moral weight | We cannot reliably affect the long-term future |
| The current moment is unusually leveraged (AI, biotech) | Prioritizes speculative beings over certain suffering |
| Existential risk reduction has enormous expected value | Can justify neglecting present injustice |
| Trajectory change compounds like interest over time | Uncertain discount rates make the calculations meaningless |
| Some interventions clearly help both present and future | Elite bias: focuses on AI safety over poverty |
The Concept of "The Moral Discount Rate": Analyzing "The Controversy." (See Article 665). "Should" "Future" "People" "Count" **"Less"** than "Present" "People" simply because "They" "Don't Yet Exist"? "Most Economists" "Apply" "A" "Positive Discount Rate." "Most" "Ethicists" "Say" **"Zero Discount Rate"** — "A Future Person's" "Suffering" "Is Just As Real." "The Discount Rate" chosen "Determines" whether "Long-Termism" "Has" "Overwhelming" "Force" or "None" "At All." "This" is **"The Most Important" "Number" in "Ethics."**
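A short sketch of why the discount rate dominates the conclusion; the 10,000-year horizon and the candidate rates are illustrative choices, not claims about the correct values. <syntaxhighlight lang="python">
# How much one unit of welfare, experienced far in the future, counts today
# under standard exponential discounting. All numbers are illustrative.

def present_value(future_welfare, annual_discount_rate, years_from_now):
    """PV = FV / (1 + r)^t."""
    return future_welfare / (1 + annual_discount_rate) ** years_from_now

for rate in (0.0, 0.001, 0.01, 0.03):
    pv = present_value(1.0, rate, 10_000)
    print(f"r = {rate:.3f}: one unit of welfare in 10,000 years counts as {pv:.2e} today")

# At r = 0 the future counts fully; at any conventional positive rate it counts
# as essentially nothing, which is why the chosen rate decides the whole debate.
</syntaxhighlight>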
Evaluating[edit]
Evaluating Long-Termism:
- Priority: Should long-termist ethics **redirect** resources from present poverty to AI safety?
- Power: Has long-termism become a justification for **powerful tech elites** to pursue their own visions?
- Discount: Is a zero moral discount rate for future people **defensible**, or should present people have priority?
- Impact: How does long-termist thinking change the way governments and businesses **plan**?
Creating[edit]
Future Frontiers:
- The 'Future Impact' AI: (See Article 08). An AI that estimates the **long-term impact** of any proposed policy across civilizational timescales.
- VR 'Deep Future' Exploration: (See Article 604). A walkthrough of a **Year 1,000,000 AD civilization**, showing the stakes of decisions made today.
- The 'Long-Term Impact' Ledger: (See Article 533). A blockchain for **tracking** the long-term trajectories of major policy decisions.
- Global 'Long-Term Future' Treaty: (See Article 630). A planetary agreement requiring **long-term impact assessments** for all major technologies before deployment.