Future Ethics and Longtermism
How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain.
Future Ethics and Longtermism is the "Study of the Deep Future"—the investigation of the "Philosophical Framework" (~2010s–Present) that "Argues" that "Positively Influencing" the "Lives" of "Billions" (or Trillions) of "Future Humans" is a "Primary Moral Priority" for "People Living Today." While "Traditional Ethics" (see Article 114) "Focuses" on the "Present," **Longtermism** "Focuses" on the "Vast Potential" of the "Eons to Come." From **Derek Parfit’s** "Non-Identity Problem" and **William MacAskill** to "Existential Risk" (X-Risk) and "Intergenerational Justice," this field explores the "Expansion of the Moral Circle." It is the science of "Legacy," explaining why "We" "Should" "Care" about someone living **10,000 Years** from now—and how "Our Actions" "Shape" the "Whole" of "Human History."
== Remembering ==
- Longtermism — The "View" that "Improving" the "Long-term Future" is a "Key Moral Priority" of our time.
- Existential Risk (X-Risk) — (Nick Bostrom). A "Risk" that "Threatens" to "Permanently" "Destroy" "Human Potential" (e.g., 'Nuclear War,' 'Super-Pandemic,' 'Misaligned AI').
- Derek Parfit — (See Article 112). The "Father" of Future Ethics: author of 'Reasons and Persons' (1984), who "Changed" how we think about "Identity" and "Future People."
- The Non-Identity Problem — Parfit's "Puzzle": if we "Change" our "Policy" (e.g. 'Climate Action'), the "People" who are "Born" in the future will be **"Different"** people. Can we "Harm" someone who "Wouldn't have existed" otherwise?
- William MacAskill — The "Modern Face": author of 'What We Owe the Future' (2022), who "Argues" for "Effective Altruism" (see Article 618) on a "Longtermist" scale.
- Moral Circle Expansion — (See Article 661). The "Historical Process" of "Including" "More Entities" (Slaves, Women, Animals, and now 'Future People') into our "Moral Consideration."
- Greatest Potential — The "Idea" that if humanity "Survives" for **1 Million Years**, the "Trillions" of "Lives" yet to be born "Dwarf" the "Billions" alive today.
- Intergenerational Justice — (See Article 664). The "Legal and Moral Obligation" to "Not" "Pass on" a "Destroyed Planet" to "Future Generations."
- Great Stagnation — The "Risk" that "Human Progress" (Social/Scientific) "Stops" "Permanently," "Preventing" a "Better Future."
- S-Risk (Suffering Risk) — A "Risk" that "Leads" to "Vast Amounts" of "Future Suffering" (e.g. 'Eternal Totalitarianism').
== Understanding ==
Future Ethics is understood through Probability and Scale.
1. The "Vastness" of Time (Scale): "We are at the "Beginning."
- If "Human History" is a **"Book,"** we are on the **"First Page."**
- (See Article 133). Most species "Live" for **1 Million Years.**
- We have "Only" been "Civilized" for **10,000 Years.**
- **Longtermism** "Argues" that if we "Fail" **Now**, we "Kill" **"Trillions"** of "Potential Lives." (The back-of-the-envelope sketch after this list shows where "Trillions" comes from.)
- "The Future" is the **"Moral Majority."**
2. The "Silent" Sufferer (Temporal Neutrality): "Distance in Time" is like "Distance in Space."
- (See Article 114). If you see a "Child" "Drowning" in front of you, you "Help."
- If a child is "Drowning" **1,000 Miles away**, you "Still Care."
- **Future Ethics** "Argues" that a child "Suffering" **1,000 Years from now** "Matters" **"Exactly the Same"** (the discounting sketch after this list makes the point concrete).
- "When" someone lives is **"Irrelevant"** to their **"Value."**
- "Justice" is **"Time-Blind."**
3. The "Hinge" of History (Existential Risk): "Our Century" "Matters" "More."
- (See Article 133). We are the **"First Generation"** with the "Power" to "Destroy" "Everything" (Nuclear/AI).
- This makes our "Actions" **"Uniquely Important."**
- If we "Protect" the "Future" **Now**, it "Can" "Last" for **Millions of Years.**
- We are the **"Hinge"** on which "Human Destiny" "Turns."
- "Responsibility" is **"Temporal."**
The 'Non-Identity Problem' (1984): Parfit’s "Logical Trap." If a "Young Girl" has a baby "Now" rather than "Waiting 5 Years," the babies are **"Different"** people (different DNA). If she "Chooses Now" and the child has a "Hard Life," has she "Harmed" him? If she had "Waited," **HE** "Wouldn't Exist." It showed that "Traditional Ethics" "Fails" when "Dealing" with "The Future."
== Applying ==
Modeling 'The Future Impact' (Calculating 'Moral Value' of X-Risk Reduction):

<syntaxhighlight lang="python">
def calculate_x_risk_value(risk_reduction_pct, future_lives_trillions):
    """
    Shows why 'Saving the Future' can dominate the moral ledger: even a tiny
    cut in extinction risk protects an enormous number of expected lives.
    """
    # Expected lives saved = (risk reduction) * (total potential future lives)
    lives_saved = (risk_reduction_pct / 100.0) * (future_lives_trillions * 1e12)
    if lives_saved > 8e9:  # more than everyone alive today (~8 billion)
        return (f"IMPACT: COSMIC. {lives_saved / 1e9:,.0f} billion expected lives "
                "'Protected'. (This is the 'Moral Priority'.)")
    return f"IMPACT: LOCAL. {int(lives_saved):,} lives 'Protected'."

# Case: reducing the risk of AI war by 0.01%, assuming 1,000 trillion future lives
print(calculate_x_risk_value(0.01, 1000))
# -> IMPACT: COSMIC. 100 billion expected lives 'Protected'. (This is the 'Moral Priority'.)
</syntaxhighlight>
Longtermist Landmarks:
- The 'Global Seed Vault' (Svalbard) → A "Physical" "Longtermist Project": "Storing" "Seeds" (see Article 589) to "Protect" "Agriculture" for **10,000 Years.**
- The 'Clock of the Long Now' → A "Clock" "Built" to "Tick" for **10,000 Years**, "Designed" to "Help" humans "Think" in "Deep Time."
- AI Alignment → (See Article 13). The "Technical Challenge" of "Ensuring" that "Super-AI" (see Article 08) "Shares" "Human Values" to "Avoid" "Existential Catastrophe."
- Effective Altruism → (See Article 618). The "Sister Movement" that "Aims" to "Use" "Data and Reason" to "Do the Most Good" (often by 'Focusing' on the 'Long-term Future').
== Analyzing ==
| Feature | Short-termism (The Voter) | Longtermism (The Architect) |
|---|---|---|
| Horizon | "Next Election / Quarterly Profit" | "1,000 to 1,000,000 Years" |
| Priority | "Current Human Suffering" | "Existential Risk / Human Potential" |
| Goal | "Economic Stability / Comfort" | "Permanent Survival / Flourishing" |
| View of Duty | "To the Living" | "To the Potential" |
| Analogy | 'Fixing a Leak' | 'Planting a Cathedral Forest' |
The Concept of "Moral Over-demandingness": Analyzing "The Burden." Critics argue that if we "Truly Care" about **"Trillions"** of future people, we "Should" "Spend" **"Every Cent"** and **"Every Moment"** "Saving" them, "Ignoring" the "Suffering" of **"People Alive Today."** (See Article 114). Longtermists "Counter" that we "Only Need" to "Pivot" a "Small Part" of "Global GDP" to "Make a Massive Difference." "Balance" is **"Sustainability."**
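To put the "Small Part" counter-argument in numbers, here is a rough illustration. The GDP figure is an assumption (world GDP is roughly $100 trillion per year), and the 1% share echoes the Global 'Existential' Safety-Net proposal in the Creating section below:

<syntaxhighlight lang="python">
# Illustrative arithmetic for the 'small pivot' counter-argument.
# Assumption: world GDP of roughly $100 trillion per year.
world_gdp_usd = 100e12
longtermist_share = 0.01  # the 1%-of-GDP figure proposed below

budget = world_gdp_usd * longtermist_share
print(f"${budget / 1e9:,.0f} billion per year for X-risk reduction")  # -> $1,000 billion
</syntaxhighlight>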
== Evaluating ==
Evaluating Future Ethics:
- Uncertainty: How can we "Predict" what "People 1,000 Years" from now will "Actually Need"? (Maybe they 'Hate' our 'Cathedrals'?).
- Democracy: Is "Longtermism" "Anti-Democratic"? (Since 'Future People' 'Cannot Vote' for their 'Interests').
- Inequality: (See Article 560). Does "Focusing" on "Existential Risk" "Ignore" the "Injustice" "Facing" "Billions" of "People in Poverty" **Right Now**?
- Impact: How did "Longtermism" "Influence" the **"UN's Our Common Agenda"** (2021)?
== Creating ==
Future Frontiers:
- The 'Future' AI-Sim (The Oracle): (See Article 08). An AI that "Simulates" **1 Million Future Paths** to "Identify" "Hinges" (Decisions) we "Make Today" that "Affect" "History" the most.
- VR 'Deep Time' Walkthrough: (See Article 604). An "Experience" where you "See" the "Rise and Fall" of "Civilization" over **100,000 Years** to "Build" **"Temporal Empathy."**
- The 'Legacy' Blockchain Ledger: (See Article 533). A "Platform" where "People" "Deposit" "Knowledge and Resources" "Locked" for **1,000 Years**, "Enforced" by "Smart Contracts" (a minimal sketch of the lock logic follows this list).
- Global 'Existential' Safety-Net: (See Article 630). A "Planetary Agreement" to "Dedicate" **1% of Global GDP** to "Preventing" **X-Risks** "Forever."
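A minimal sketch of the time-lock logic such a 'Legacy' ledger would need. This is plain Python for illustration only; the class name, lock period, and API are invented here, and this is not an actual blockchain smart contract:

<syntaxhighlight lang="python">
import time

# ~1,000 years, expressed in seconds (365.25-day years)
LOCK_SECONDS = 1000 * 365.25 * 24 * 3600

class LegacyVault:
    """Holds a deposit that cannot be withdrawn until the unlock time passes."""

    def __init__(self, payload: str):
        self.payload = payload
        self.unlock_at = time.time() + LOCK_SECONDS  # fixed at deposit time

    def withdraw(self) -> str:
        # Enforce the lock: refuse any withdrawal before the unlock time.
        if time.time() < self.unlock_at:
            raise PermissionError("Locked: this deposit is reserved for future generations.")
        return self.payload

vault = LegacyVault("Seed data, archives, and an endowment for the year 3025")
# vault.withdraw()  # -> PermissionError for the next ~1,000 years
</syntaxhighlight>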