Long-Termism and the Ethics of the Deep Future
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> {{BloomIntro}} Long-Termism and the Ethics of the Deep Future is the "Study of the Distant Tomorrow"βthe investigation of the "Ethical and Political Philosophy" (~2000sβPresent) that "Argues" "The Most" "Important" "Moral" "Consideration" for "Humanity" is "The" **"Long-Term Future"** β "The" "Vast" "Number" of "Potential Future People" (or "Beings") who "May" "Exist" "Over" "Billions of Years" β and "That" "Our Actions" "Today" have "Profound" "Moral Implications" for "This" "Deep Future." While "Traditional Ethics" "Focuses" on "Present" "Suffering" and "Near-Term" "Welfare," **Long-Termism** "Focuses" on "Existential Risk," "Civilizational Trajectory," and "The Moral Weight" of "The Future." From "Toby Ord's 'The Precipice'" and "Existential Risk Research" to "Population Ethics" and "The Repugnant Conclusion," this field explores "The Ethics of Deep Time." It is the science of "Temporal Morality," explaining why "The Future" "May" "Have" **"Far More" "Moral Weight"** than "The Present"βand why "Some" "Disagree" "Profoundly" with "This" "View." </div> __TOC__ <div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Remembering</span> == * '''Long-Termism''' β "The Ethical" "View" that "Positively Influencing" "The Long-Term Future" is "A Key" "Moral Priority" β "Because" "The Expected" "Number" of "Future" "People" is "So Large." * '''Existential Risk''' β (Nick Bostrom, Toby Ord). "A Risk" that "Could" "Permanently" "Curtail" "Humanity's" (or "Earth's") "Long-Term Potential" β "Either Extinction" or "Permanent Lock-In" to "A Bad State." * '''The Precipice''' β (Toby Ord, 2020). "The Book" "Arguing" that "The Current" "Moment" is "The Most" "Dangerous" in "Human" "History" due to "Existential Risks." 
* '''Population Ethics''' – the branch of ethics dealing with questions about future people (Is a world with more people better? Does non-existence harm?).
* '''The Repugnant Conclusion''' – (Derek Parfit). The counter-intuitive result that a sufficiently large population of people with lives just barely worth living is better than a small population of very happy people.
* '''Trajectory Change''' – the idea that small actions today that nudge the long-term trajectory of civilization matter more than large actions that only help the present.
* '''Lock-In''' – a scenario in which the world becomes permanently fixed in a particular state (e.g. authoritarian AI governance), closing off all better futures.
* '''Strong Long-Termism''' – the view that, because the future is so large, '''almost everything''' of moral importance concerns the future.
* '''Presentism (Ethical)''' – the counter-view that present people's suffering should take priority over speculative future populations.
* '''The Effective Altruism (EA) Movement''' – (See Article 630). The primary institutional home of long-termist ethics, directing resources toward existential-risk reduction.
</div>
<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
Long-termism is understood through '''scale''' and '''uncertainty'''.

'''1. The "Astronomical" Stakes (Scale Argument)''': "A quadrillion future beings outweigh 8 billion present ones."
* (See Article 665). If humanity (or its successors) survives for a billion years and spreads through the galaxy, the number of future beings could be on the order of '''10^23'''.
* If each has moral worth, their combined welfare '''dwarfs''' that of all currently living beings.
* Therefore, actions that improve '''the long-term trajectory''' have enormous expected value.
* The future is '''the weight'''.

'''2. The "Leverage" Opportunity (Current Moment)''': "We live at the hinge of history."
* (See Article 699). Long-termists argue that the current era is '''unusually important''': we are developing technologies (AI, biotech, space) that will shape the trajectory of all future civilization.
* Small investments in existential-risk reduction '''now''' have enormous expected returns.
* This is the logic of the Effective Altruism movement.
* Leverage is '''temporal'''.

'''3. The "Uncertainty" Challenge (Critics)''': "We can't know the long-term effects of our actions."
* (See Article 665). Critics argue that long-termism's calculations rest on speculative numbers (How many future people? How happy would they be?).
* Prioritizing speculative future populations over certain present suffering (e.g. preventing malaria vs. funding AI safety) is ethically contested.
* '''Epistemic humility''' (we cannot know long-term effects) may counsel '''focusing on the present'''.
* Certainty is '''the counter-argument'''.

'''The 'Toby Ord' Career Move (2009)''': '''Toby Ord''' (Oxford philosopher) pledged '''10% of his income''' to effective charities for life, founding Giving What We Can. His subsequent work on long-termism influenced '''billions of dollars''' of philanthropic funding toward existential-risk reduction.
It proved that philosophy can have '''real-world impact'''.
</div>
<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Modeling the Long-Term Stakes (calculating the expected value of existential-risk reduction):'''
<syntaxhighlight lang="python">
def calculate_xrisk_value(baseline_extinction_prob_pct, reduction_pct,
                          future_beings, qaly_per_being):
    """
    Shows the enormous expected value of even tiny reductions in existential risk.
    """
    baseline_p = baseline_extinction_prob_pct / 100
    # reduction_pct is relative: "reduce the baseline risk by this percentage of itself"
    reduction_p_absolute = baseline_p * (reduction_pct / 100)
    # Expected beings saved = reduction in P(extinction) * potential future population
    expected_beings_saved = reduction_p_absolute * future_beings
    expected_value_qalys = expected_beings_saved * qaly_per_being
    return (f"Baseline X-Risk: {baseline_extinction_prob_pct}% | "
            f"Reduction: {reduction_pct}% of that\n"
            f"  Expected beings saved: {expected_beings_saved:.2e}\n"
            f"  Expected value: {expected_value_qalys:.2e} QALYs\n"
            f"  (For reference: eradicating malaria = ~2e8 QALYs total)")

# Reducing a 20% existential risk by 1% (relative), over galactic timescales
print(calculate_xrisk_value(20, 1, 1e23, 80))
</syntaxhighlight>
; Philosophical Landmarks
: '''Parfit's ''Reasons and Persons'' (1984)''' – (See Article 732). Founded '''population ethics''', including the Repugnant Conclusion.
: '''Bostrom's ''Superintelligence'' (2014)''' – (See Article 724). The '''existential risk''' framework; a founding text of AI-focused long-termism.
: '''Ord's ''The Precipice'' (2020)''' – The comprehensive '''case''' for long-termism as the priority ethical framework.
: '''MacAskill's ''What We Owe the Future'' (2022)''' – The popular introduction to '''strong long-termism'''; a ''New York Times'' bestseller.
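The decisiveness of the moral discount rate can also be sketched numerically. The function below is illustrative rather than drawn from the literature (its name and parameters are assumptions): it computes the present value of a constant annual welfare stream using the closed-form geometric series, so a zero rate can be compared directly against a tiny positive one.
<syntaxhighlight lang="python">
def discounted_value(annual_welfare, years, discount_rate_pct):
    """Present value of a constant annual welfare stream under an annual discount rate."""
    r = discount_rate_pct / 100
    if r == 0:
        return annual_welfare * years  # zero discount: every future year counts fully
    # Closed form of sum over t = 1..years of annual_welfare / (1 + r)**t
    return annual_welfare * (1 - (1 + r) ** -years) / r

# One unit of welfare per year, for a billion years:
print(discounted_value(1, 10**9, 0))    # zero discount rate: the future dominates
print(discounted_value(1, 10**9, 0.1))  # 0.1% per year: capped near 1/r
</syntaxhighlight>
Under a zero discount rate the billion-year future is worth a billion present-years of welfare, but at just 0.1% per year its total value is capped near 1/r = 1,000 present-years: even a minuscule positive rate erases almost all long-term moral weight, which is why the discount-rate controversy in the Analyzing section is so consequential.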
</div>
<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Long-Termism: For vs. Against
! For Long-Termism !! Against Long-Termism
|-
| "Astronomical number of future beings have moral weight" || "We cannot reliably affect the long-term future"
|-
| "Current moment is unusually leveraged (AI, Biotech)" || "Prioritizes speculative beings over certain suffering"
|-
| "Existential risk reduction has enormous expected value" || "Can justify neglecting present injustice"
|-
| "Trajectory-change has compound interest over time" || "Uncertain discount rates make calculations meaningless"
|-
| "Some interventions clearly help both present and future" || "Elite bias: focuses on AI Safety over poverty"
|}
'''The Concept of the "Moral Discount Rate"''': analyzing the controversy. (See Article 665). Should future people count '''less''' than present people simply because they do not yet exist? Most economists apply a positive discount rate; most ethicists argue for a '''zero discount rate''' – a future person's suffering is just as real. The discount rate chosen determines whether long-termism has overwhelming force or none at all. It is arguably '''the most important number in ethics'''.
</div>
<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Evaluating long-termism:
# '''Priority''': Should long-termist ethics '''redirect''' resources from present poverty to AI safety?
# '''Power''': Has long-termism become a justification for '''powerful tech elites''' to pursue their own visions?
# '''Discount''': Is a zero moral discount rate for future people '''defensible''', or should present people have priority?
# '''Impact''': How does long-termist thinking change the way governments and businesses '''plan'''?
</div>
<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Future frontiers:
# '''The 'Future Impact' AI''': (See Article 08). An AI that estimates the '''long-term impact''' of any proposed policy across civilizational timescales.
# '''VR 'Deep Future' Exploration''': (See Article 604). A walkthrough of a '''Year 1,000,000 AD civilization''', showing the stakes of decisions made today.
# '''The 'Long-Term Impact' Ledger''': (See Article 533). A blockchain for '''tracking''' the long-term trajectories of major policy decisions.
# '''Global 'Long-Term Future' Treaty''': (See Article 630). A planetary agreement requiring '''long-term impact assessments''' for all major technologies before deployment.
[[Category:Arts]]
[[Category:Science]]
[[Category:Philosophy]]
[[Category:Ethics]]
[[Category:History]]
[[Category:Political Philosophy]]
[[Category:Future Studies]]
[[Category:Universal Ethics]]
[[Category:Existential Risk]]
[[Category:Moral Philosophy]]
</div>