BloomWiki: Utilitarianism
[[Category:Ethics]]
[[Category:Politics]]

Latest revision as of 02:01, 25 April 2026

How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain.

Utilitarianism is a philosophy of ethics that says the "Right" action is the one that produces the greatest amount of "Good" for the greatest number of people. Developed by Jeremy Bentham and John Stuart Mill, it is based on the idea of "Consequentialism"—the belief that the morality of an action depends only on its results, not on its intentions or rules. It is the math of morality: if an action causes 10 units of pain but 100 units of joy, a utilitarian would say it is the correct choice. It is a powerful tool for public policy and law, but it raises difficult questions about the rights of individuals and the value of "Justice" vs. "Happiness."

Remembering

  • Utilitarianism — The ethical theory that determines right from wrong by focusing on outcomes (Utility).
  • Utility — A measure of happiness, pleasure, or well-being.
  • Jeremy Bentham — The founder of utilitarianism who created the "Hedonistic Calculus."
  • John Stuart Mill — The philosopher who refined utilitarianism to include "Higher" and "Lower" pleasures.
  • Consequentialism — The broad family of ethical theories that judge actions by their consequences.
  • The Greatest Happiness Principle — The core rule: "Act always so as to produce the greatest happiness for the greatest number."
  • Act Utilitarianism — Judging every single action individually based on its results.
  • Rule Utilitarianism — Creating general rules (e.g., "Don't Lie") that tend to produce the most happiness in the long run.
  • Eudaimonia — A Greek word for "Flourishing" or deep happiness, often used to define "Good" in modern utility.

Understanding

Utilitarianism is understood through Calculation and Aggregation.

1. The Hedonistic Calculus: Bentham believed that morality could be treated like math. To decide if an action is right, you measure:

  • Intensity: How strong is the pleasure?
  • Duration: How long does it last?
  • Certainty: How likely is it to happen?
  • Propinquity: How soon will it occur?
  • Extent: How many people are affected?
  • Fecundity: How likely is it to be followed by further pleasures?
  • Purity: How unlikely is it to be followed by pains?
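The calculus above can be sketched as a simple scoring function. This is an illustrative toy model, not Bentham's own notation: the 0-to-1 scales for certainty and propinquity, and the example numbers, are assumptions made for this sketch, which scores only some of the dimensions.

```python
def felicific_score(intensity, duration, certainty, propinquity, extent):
    """Illustrative Bentham-style score for one action.

    intensity:   strength of the pleasure (negative for pain)
    duration:    how long it lasts (e.g., in hours)
    certainty:   probability it actually happens, 0.0 to 1.0
    propinquity: discount for how soon it occurs, 0.0 to 1.0
    extent:      number of people affected
    """
    return intensity * duration * certainty * propinquity * extent

# A modest, near-certain, immediate pleasure shared by many people...
picnic = felicific_score(intensity=2, duration=3, certainty=0.9,
                         propinquity=1.0, extent=20)

# ...versus an intense but very unlikely, distant pleasure for one person.
lottery_win = felicific_score(intensity=100, duration=10, certainty=0.001,
                              propinquity=0.5, extent=1)

print(picnic, lottery_win)  # the picnic scores far higher
```

The point of the sketch is Bentham's central claim: once every dimension is a number, moral comparison reduces to arithmetic.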

2. Higher vs. Lower Pleasures (Mill's Refinement): Mill disagreed with Bentham that "Pushpin is as good as poetry."

  • Lower Pleasures: Physical pleasures (eating, sleeping, sex).
  • Higher Pleasures: Intellectual and moral pleasures (reading, friendship, helping others).

Mill argued that "It is better to be a human being dissatisfied than a pig satisfied."
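One way to model Mill's refinement is to weight pleasures by quality as well as quantity. The multipliers below are invented for illustration; Mill never assigned numbers to higher pleasures (his test was the verdict of judges who had experienced both kinds).

```python
# Hypothetical quality multipliers: higher pleasures count for more per unit.
QUALITY_WEIGHT = {"lower": 1.0, "higher": 3.0}

def mill_utility(pleasures):
    """pleasures = list of (amount, kind), where kind is 'lower' or 'higher'."""
    return sum(amount * QUALITY_WEIGHT[kind] for amount, kind in pleasures)

satisfied_pig = mill_utility([(10, "lower")])                     # lots of lower pleasure
dissatisfied_human = mill_utility([(1, "lower"), (4, "higher")])  # less pleasure, higher quality

print(satisfied_pig, dissatisfied_human)  # 10.0 vs 13.0: the human comes out ahead
```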

3. Impartiality: In utilitarianism, your own happiness counts exactly as much as anyone else's. You cannot "Favor" your family or yourself. You must look at the world from the perspective of an "Ideal Observer" who wants the total sum of happiness to be as high as possible.

The Trolley Problem: A classic utilitarian test. A trolley is headed for 5 people. You can pull a lever to switch it to a track with 1 person. A utilitarian would say you MUST pull the lever (1 death is better than 5).
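The trolley calculation is deliberately trivial, and that is the point: an act utilitarian simply picks whichever option minimizes expected harm. A minimal sketch (the option names are invented for this example):

```python
def act_utilitarian_choice(options):
    """options = dict mapping an option name to its expected deaths.
    Choose the option with the fewest expected deaths."""
    return min(options, key=options.get)

trolley = {"do nothing": 5, "pull the lever": 1}
print(act_utilitarian_choice(trolley))  # 'pull the lever'
```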

Applying

Modeling 'The Utility Score' (Deciding on a public policy):

<syntaxhighlight lang="python">
def calculate_utility(policy_name, effects):
    """
    effects = list of (happiness_gain, population_size)
    """
    total_utility = sum(h * p for h, p in effects)
    if total_utility > 0:
        verdict = "Ethically Good"
    elif total_utility == 0:
        verdict = "Neutral"
    else:
        verdict = "Ethically Bad"
    return {
        "Policy": policy_name,
        "Total Utility": total_utility,
        "Verdict": verdict,
    }

# Policy: Build a new park.
# Costs 100 neighbors 5 units of 'Quiet'.
# Gives 1000 kids 20 units of 'Play'.
park_effects = [(-5, 100), (20, 1000)]
print(calculate_utility("City Park", park_effects))

# Policy: Tax everyone $1 to give one person $1,000,000.
lottery_effects = [(-1, 1000000), (1000000, 1)]
print(calculate_utility("Person-specific Lottery", lottery_effects))
# Note: Utilitarianism scores this as neutral (0), but many people feel it is unfair!
</syntaxhighlight>

Utilitarian Landmarks

  • The Panopticon → Bentham's design for a "Perfect Prison" that would produce the most safety (utility) with the least staff.
  • Animal Rights → Bentham was one of the first to argue that animals deserve moral consideration because they can "Suffer," even if they can't "Reason."
  • Effective Altruism → A modern movement that uses utilitarian math to find the most efficient ways to save lives (e.g., "Donating $1,000 to malaria nets saves more lives than $1,000 to a local museum").
  • Triage in Medicine → During a disaster, doctors use utilitarian logic to treat the patients they can save, rather than the ones who are the most hurt but likely to die anyway.
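The Effective Altruism comparison is, at bottom, a cost-effectiveness ranking. A minimal sketch, where the cost-per-life figures are placeholders invented for illustration, not real charity data:

```python
def rank_by_impact(budget, causes):
    """causes = dict mapping a cause name to its cost per life saved.
    Return (name, expected_lives_saved) pairs, best value first."""
    return sorted(((name, budget / cost) for name, cost in causes.items()),
                  key=lambda pair: pair[1], reverse=True)

# Hypothetical costs, for illustration only.
causes = {"malaria nets": 5000, "local museum": 1000000}
print(rank_by_impact(1000, causes))
# malaria nets come out first: 0.2 expected lives per $1,000 versus ~0.001
```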

Analyzing

{| class="wikitable"
|+ Act vs. Rule Utilitarianism
! Feature !! Act Utilitarianism !! Rule Utilitarianism
|-
| Focus || The specific action right now || The general rule for society
|-
| Decision || "Should I lie to this person right now?" || "Is 'Honesty' a good rule for everyone?"
|-
| Flexibility || High (everything depends on context) || Lower (follow the rules)
|-
| Problem || Can justify "Unjust" acts (like killing 1 to save 5) || Can be "Rule-bound" even if it causes pain
|}

The Concept of "Aggregation": Analyzing why we "Sum up" happiness. This is the biggest strength and weakness of the theory. It allows for clear decisions, but it means that the "Minority" can be sacrificed for the "Majority."
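The aggregation worry can be made concrete: two societies with the same total utility can treat a minority very differently, and a pure sum cannot tell them apart. A minimal sketch with invented numbers:

```python
def total_utility(population):
    """Aggregate welfare is just the sum of individual welfare scores."""
    return sum(population)

equal_society = [10, 10, 10, 10, 10]      # everyone at 10
unequal_society = [16, 16, 16, 16, -14]   # majority better off, one person sacrificed

print(total_utility(equal_society), total_utility(unequal_society))
# Both sum to 50: by aggregation alone, the two societies are morally identical.
```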

Evaluating

Evaluating utilitarianism:

  1. Justice: Would a utilitarian allow the police to frame and execute an innocent person if doing so stopped a massive riot? (Standard utilitarianism might say yes).
  2. Measurement: Can you really "Measure" happiness in numbers? Is my "5 units of joy" the same as yours?
  3. Demands: Is it "Too Hard"? If I can save a life by giving all my money to charity, am I "Evil" for buying a coffee?
  4. Integrity: Does it force us to throw away our personal values and commitments to follow the "Math"?

Creating

Future Frontiers:

  1. AI Alignment: Programming AI to be "Utilitarian" to ensure it helps humanity, while also building in "Constraints" so it doesn't do anything horrifying to reach a goal.
  2. Neuro-Utility: Using brain scans (fMRI) to actually measure the "Pleasure" of different policies in real-time.
  3. Planetary Ethics: Expanding utilitarian math to count "Future Generations" and the "Environment" as objects of moral concern.
  4. Automated Justice: Using utilitarian algorithms to distribute public funds or organ transplants in the most efficient way possible.
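An allocator like the one described in point 4 can be sketched as a greedy assignment: give each unit of a scarce resource to whoever gains the most expected utility from it. The patient data below is invented for illustration; real allocation systems weigh many more factors.

```python
import heapq

def allocate(resource_units, candidates):
    """candidates = dict mapping a recipient to expected utility gain per unit.
    Greedily assign units to the highest expected gains."""
    best = heapq.nlargest(resource_units, candidates.items(), key=lambda kv: kv[1])
    return [name for name, _ in best]

# Hypothetical expected life-years gained per organ, for illustration only.
patients = {"A": 30, "B": 5, "C": 22, "D": 12}
print(allocate(2, patients))  # ['A', 'C']: maximizes total expected benefit
```

Note how directly the sketch inherits the aggregation problem from the Analyzing section: patient "B" will never be chosen, however long they wait.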