Ethics

From BloomWiki
Revision as of 01:50, 25 April 2026 by Wordpad (talk | contribs) (BloomWiki: Ethics)

How to read this page: This article maps the topic from beginner to expert across six levels — Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain. Learn more about how BloomWiki works.

Ethics — also called moral philosophy — is the systematic study of morality: what is right and wrong, good and bad, virtuous and vicious. It asks: How should we live? What do we owe each other? What makes an action morally permissible, required, or forbidden? Ethics operates at three levels: metaethics (what is the nature of morality itself?), normative ethics (what are the correct moral principles?), and applied ethics (how do those principles apply to specific situations — abortion, euthanasia, war, AI, climate?). Far from merely theoretical, moral philosophy shapes law, public policy, professional codes, and individual conscience.

Remembering[edit]

  • Ethics — The philosophical study of morality, including what is right and wrong, good and bad, and how we should live.
  • Metaethics — Philosophical inquiry into the nature of moral facts, moral language, and moral knowledge.
  • Normative ethics — The study of moral principles and theories that prescribe how we ought to act.
  • Applied ethics — Application of ethical theory to specific moral issues: bioethics, environmental ethics, business ethics, AI ethics.
  • Moral realism — The view that objective moral facts exist independently of minds; moral claims can be true or false.
  • Moral anti-realism — The view that there are no objective moral facts; moral judgments express attitudes or conventions.
  • Consequentialism — The view that actions are right or wrong based on their consequences; outcomes determine morality.
  • Utilitarianism — A form of consequentialism holding that the right action maximizes overall happiness/utility; Bentham, Mill.
  • Deontology — The view that actions are right or wrong intrinsically, independent of consequences; duties and rights are primary; Kant.
  • Categorical Imperative — Kant's supreme moral principle: act only on maxims you could will to be universal laws.
  • Virtue Ethics — The view that ethics is about character; the right action is what a virtuous person would do; Aristotle.
  • Eudaimonia — Aristotle's term for the goal of human life: human flourishing or wellbeing.
  • Social Contract Theory — Moral and political obligations derive from an (actual or hypothetical) agreement among persons; Hobbes, Locke, Rousseau, Rawls.
  • The Veil of Ignorance — Rawls's device: choose principles of justice without knowing your place in society.
  • Trolley Problem — A famous moral dilemma: pull a lever to divert a trolley, killing one person instead of five?

Understanding[edit]

The three major normative theories each capture important moral intuitions while facing distinctive challenges:

Utilitarianism: Jeremy Bentham and John Stuart Mill argued that the right action is the one that produces the greatest happiness for the greatest number. The appeal: it takes everyone's interests equally into account; it gives a clear decision procedure. The challenge: it seems to permit — or even require — horrifying acts if they maximize aggregate welfare. The utility monster (Nozick): a being that gets enormous pleasure from consuming resources should, by utilitarian logic, receive everything. Jim and the Indians (Williams): utilitarianism seems to demand that Jim kill one innocent person to prevent twenty deaths — violating personal integrity.
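The utility-monster objection turns on simple aggregation arithmetic, which a toy calculation can make concrete (all numbers below are invented for illustration, not part of Nozick's argument):

```python
# Toy illustration of Nozick's utility monster under simple utilitarian
# aggregation. All numbers are invented for illustration.

def total_utility(allocation: dict[str, float], utility_per_unit: dict[str, float]) -> float:
    """Sum of each person's utility: units received x utility gained per unit."""
    return sum(allocation[p] * utility_per_unit[p] for p in allocation)

# Five ordinary people gain 1 util per unit of resources; the monster gains 100.
utility_per_unit = {"monster": 100.0, "a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0, "e": 1.0}

equal_split = {"monster": 10.0, "a": 10.0, "b": 10.0, "c": 10.0, "d": 10.0, "e": 10.0}
monster_takes_all = {"monster": 60.0, "a": 0.0, "b": 0.0, "c": 0.0, "d": 0.0, "e": 0.0}

print(total_utility(equal_split, utility_per_unit))        # 1050.0
print(total_utility(monster_takes_all, utility_per_unit))  # 6000.0
# Maximizing the sum hands everything to the monster -- which is the objection's point.
```

Whatever numbers are chosen, as long as one being converts resources to utility far more efficiently than everyone else, sum-maximization directs all resources to it; that structural feature, not the particular figures, is what the objection targets.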

Kantian deontology: Kant held that morality derives from reason, not experience. The supreme principle — the Categorical Imperative — has several formulations:

  1. Universalizability: act only on principles you could consistently will to be universal law.
  2. Humanity formula: treat persons always as ends in themselves, never merely as means.

Kant's insight: persons have a dignity that cannot be traded off for aggregate welfare. The challenge: Kant's strict rule against lying seems to require telling a murderer where your friend is hiding — an implication most people find monstrous.
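The universalizability test is often explained through Kant's false-promise example: a maxim of making lying promises defeats itself once universalized, because a world where everyone lies is a world where promises are no longer believed. A toy model of that self-defeat (the 0.5 trust threshold is an invented simplification, not Kant's):

```python
# Toy model of Kant's false-promise test: a maxim fails universalization when,
# once everyone adopts it, the very practice it exploits collapses.
# The trust threshold below is an invented simplification.

def promise_believed(share_of_liars: float) -> bool:
    """Assume (for illustration) promises stop being believed once most are lies."""
    return share_of_liars < 0.5

def maxim_self_defeating(share_of_liars_when_universal: float) -> bool:
    """A lying promise works only if promises are believed; universalized,
    they are not, so acting on the maxim becomes impossible."""
    return not promise_believed(share_of_liars_when_universal)

print(maxim_self_defeating(1.0))   # True: universalized lying destroys the practice
print(maxim_self_defeating(0.01))  # False: a lone liar can still free-ride on trust
```

The contrast between the two calls is the heart of the test: the lone liar succeeds only by free-riding on a practice that universal lying would destroy, which is the "contradiction in conception" Kant identifies.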

Virtue ethics: Aristotle rejected the search for a single moral rule, arguing that ethics is about character. The virtuous person — courageous, just, temperate, prudent — acts well because they have developed excellent habits of feeling and action. The right act is what the virtuous person would do. The challenge: without a decision procedure, virtue ethics can seem to give insufficient guidance in novel dilemmas. Contemporary virtue ethicists (Foot, MacIntyre, Hursthouse) have developed more sophisticated accounts.

Metaethics and the nature of moral facts: Even granting a normative theory, metaethical questions remain. Are moral claims genuinely true or false? Moral realists say yes — there are objective moral facts about what is right. Error theorists (Mackie) say we act as if there are such facts, but there aren't — all moral claims are false. Expressivists (Hare, Blackburn) say moral claims don't describe facts but express attitudes or commitments. Each position carries implications for moral knowledge, moral progress, and moral disagreement.

Applying[edit]

Implementing utilitarian and Kantian decision frameworks:

<syntaxhighlight lang="python">
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    consequences: dict[str, float]  # stakeholder → utility change
    uses_person_as_means: bool      # Kantian criterion
    universalizable: bool           # Categorical Imperative criterion


def utilitarian_evaluate(action: Action) -> dict:
    """Greatest happiness principle: choose the action maximizing total utility."""
    total_utility = sum(action.consequences.values())
    return {
        'framework': 'Utilitarianism (act)',
        'total_utility': total_utility,
        'verdict': 'Permissible' if total_utility > 0 else 'Impermissible',
        'rationale': f"Net welfare change: {total_utility:+.1f} utils across all stakeholders"
    }


def kantian_evaluate(action: Action) -> dict:
    """Categorical Imperative: universalizability + treating persons as ends."""
    if not action.universalizable:
        verdict = "Impermissible"
        rationale = "Maxim cannot be universalized without contradiction"
    elif action.uses_person_as_means:
        verdict = "Impermissible"
        rationale = "Treats a person merely as a means, violating human dignity"
    else:
        verdict = "Permissible"
        rationale = "Maxim is universalizable and respects persons as ends"
    return {'framework': 'Kantian Deontology', 'verdict': verdict, 'rationale': rationale}


def virtue_evaluate(action: Action, virtuous_agent_would: bool) -> dict:
    """Virtue ethics: what would a person of excellent character do?"""
    return {
        'framework': 'Virtue Ethics',
        'verdict': 'Permissible' if virtuous_agent_would else 'Impermissible',
        'rationale': 'Assessed by reference to a person of practical wisdom (phronesis)'
    }


# 1. The trolley problem
trolley_pull_lever = Action(
    name="Pull lever (divert trolley, kill 1 to save 5)",
    consequences={"five_people": +50.0, "one_person": -10.0},  # net +40 utils
    uses_person_as_means=True,   # contested: many Kantians treat the one as collateral, not a means
    universalizable=True         # "divert threats to minimize harm" can be universalized
)

for evaluate in [utilitarian_evaluate, kantian_evaluate]:
    result = evaluate(trolley_pull_lever)
    print(f"\n{result['framework']}: {result['verdict']}")
    print(f"  → {result['rationale']}")

# 2. Footbridge variant: push a large person off a bridge to stop the trolley
push_person = Action(
    name="Push person off bridge to stop trolley",
    consequences={"five_people": +50.0, "one_person": -10.0},  # same consequences
    uses_person_as_means=True,   # the person IS the means of stopping the trolley
    universalizable=False        # "use people as obstacles" cannot be universalized
)

print("\n--- Footbridge variant ---")
print("Same utilitarian calculation → pull lever = push person (different intuitions!)")
print(f"Kantian: {kantian_evaluate(push_person)['verdict']}")
</syntaxhighlight>
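The block above defines virtue_evaluate but never calls it. A minimal usage sketch follows (the relevant definitions are repeated so the snippet runs standalone; note that the judgment flag is supplied by the user, since virtue ethics offers no mechanical decision procedure):

```python
# Standalone usage sketch for the virtue-ethics evaluator above.
# The definitions are repeated here so this snippet runs on its own.
from dataclasses import dataclass


@dataclass
class Action:
    name: str


def virtue_evaluate(action: Action, virtuous_agent_would: bool) -> dict:
    """Virtue ethics: what would a person of excellent character do?"""
    return {
        'framework': 'Virtue Ethics',
        'verdict': 'Permissible' if virtuous_agent_would else 'Impermissible',
        'rationale': 'Assessed by reference to a person of practical wisdom (phronesis)'
    }


# Footbridge case: pushing plausibly expresses callousness, not courage,
# so we (the users) judge that the virtuous person would not do it.
footbridge = Action(name="Push person off bridge to stop trolley")
print(virtue_evaluate(footbridge, virtuous_agent_would=False)['verdict'])  # Impermissible
```

The design choice is deliberate: the hard work (what the person of practical wisdom would actually do) happens outside the function, which mirrors the article's point that virtue ethics lacks a decision procedure.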

Key texts and thinkers:

  • Consequentialism — Bentham (An Introduction to the Principles of Morals and Legislation), Mill (Utilitarianism), Singer (Practical Ethics)
  • Deontology — Kant (Groundwork of the Metaphysics of Morals), Ross (prima facie duties), Scanlon (What We Owe to Each Other)
  • Virtue Ethics — Aristotle (Nicomachean Ethics), MacIntyre (After Virtue), Foot (Natural Goodness)
  • Social Contract — Hobbes (Leviathan), Locke, Rousseau, Rawls (A Theory of Justice)
  • Metaethics — Moore, Mackie (Ethics: Inventing Right and Wrong), Blackburn, Parfit (On What Matters)

Analyzing[edit]

The three major ethical theories on key cases:

  • Trolley problem (pull lever) — Utilitarianism: pull (saves four lives net); Kantian deontology: may be permissible; Virtue ethics: probably pull (the right response to crisis)
  • Trolley (push person) — Utilitarianism: push (same calculus); Kantian deontology: never (uses a person as a means); Virtue ethics: never (shows vice, not virtue)
  • Lying to save a friend — Utilitarianism: lie (maximizes welfare); Kantian deontology: never lie (strict Kant); Virtue ethics: lie (compassion outweighs the honesty rule)
  • Torture for information — Utilitarianism: sometimes permissible (ticking bomb); Kantian deontology: never (violates dignity); Virtue ethics: never (corrupts character)
  • Euthanasia (consensual) — Utilitarianism: permissible if preferred; Kantian deontology: disputed; Virtue ethics: depends on the virtues of compassion and respect
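The lying-to-save-a-friend case can be traced computationally with the evaluators from the Applying section, restated here in compressed, self-contained form (the utility numbers are invented placeholders):

```python
# Compressed, self-contained restatement of the Applying-section evaluators,
# run on the lying-to-save-a-friend case. Utility numbers are invented.

def utilitarian(consequences: dict[str, float]) -> str:
    """Permissible iff the net welfare change is positive."""
    return 'Permissible' if sum(consequences.values()) > 0 else 'Impermissible'

def kantian(universalizable: bool, uses_person_as_means: bool) -> str:
    """Impermissible if either Categorical Imperative test fails."""
    if not universalizable or uses_person_as_means:
        return 'Impermissible'
    return 'Permissible'

# Lying to the murderer at the door to protect a hiding friend.
consequences = {'friend': +100.0, 'murderer': -1.0}  # friend's life vs. thwarted aim
print(utilitarian(consequences))  # Permissible: net welfare is clearly positive

# Kant: the lying maxim fails universalization (lies work only where truth is expected).
print(kantian(universalizable=False, uses_person_as_means=False))  # Impermissible
```

The divergence matches the table's row: the utilitarian verdict tracks outcomes, while the Kantian verdict is fixed by the maxim's failure under universalization regardless of how good the outcome is.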

Moral dilemmas and thought experiments:

  • The Experience Machine (Nozick) — if we could plug into a machine giving perfect simulated experiences, should we? (Challenges hedonistic utilitarianism.)
  • Heinz's Dilemma (Kohlberg) — should Heinz steal a drug to save his wife? Used to study stages of moral development.
  • The Drowning Child (Singer) — if you can save a drowning child at minimal cost, you must; so why not donate to prevent distant deaths at equal cost?

Evaluating[edit]

Evaluating ethical theories:

  1. Moral intuitions: do the theory's implications match strong, widely shared intuitions? Counterintuitive implications are prima facie evidence against a theory.
  2. Coherence: is the theory internally consistent? Does it give consistent verdicts across similar cases?
  3. Scope: does the theory cover all morally relevant cases, or does it have blind spots?
  4. Practical action-guidance: can people actually use the theory to guide decisions, or is it too abstract?
  5. Reflective equilibrium: can we find wide equilibrium between theory and intuitions through mutual adjustment?
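The five criteria can be applied side by side as a rough scorecard. The sketch below shows only the comparison structure; the numeric scores are placeholders, not a settled philosophical verdict, and any serious evaluation would have to argue for each cell:

```python
# Rough scorecard over the five evaluation criteria above. Scores (0-5) are
# placeholders meant to show the comparison structure, not a settled verdict.

criteria = ['intuitions', 'coherence', 'scope', 'action_guidance', 'reflective_equilibrium']

scores = {
    'Utilitarianism':     {'intuitions': 2, 'coherence': 5, 'scope': 5,
                           'action_guidance': 4, 'reflective_equilibrium': 3},
    'Kantian Deontology': {'intuitions': 3, 'coherence': 4, 'scope': 4,
                           'action_guidance': 3, 'reflective_equilibrium': 3},
    'Virtue Ethics':      {'intuitions': 4, 'coherence': 3, 'scope': 3,
                           'action_guidance': 2, 'reflective_equilibrium': 4},
}

for theory, s in scores.items():
    total = sum(s[c] for c in criteria)
    detail = " ".join(f"{c}={s[c]}" for c in criteria)
    print(f"{theory:20s} total {total}/25  {detail}")
```

A structure like this makes trade-offs explicit (utilitarianism trades intuition-matching for coherence and scope; virtue ethics the reverse), which is the comparison the criteria list is meant to support.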

Creating[edit]

Applied ethics in practice:

  1. AI ethics: apply the trolley problem structure to autonomous vehicle decision algorithms; apply Kant's humanity formula to manipulative AI systems.
  2. Climate ethics: how does each theory assign responsibility for historical emissions and obligations to future generations?
  3. Global poverty: Singer's argument for substantial redistribution; objections from libertarian and communitarian perspectives.
  4. Bioethics: organ markets, genetic enhancement, the ethics of CRISPR in humans — each theory gives different verdicts and reasons.
  5. Institutional design: Rawlsian veil of ignorance as a design tool for fair policy: what rules would you choose not knowing your position?
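The veil-of-ignorance design tool can be sketched as a choice among income distributions when you do not know which position you will occupy. The distributions below are invented, and the maximin-vs-expected-value contrast is one standard way of modeling Rawls's argument, not his own formulation:

```python
# Choosing between societies from behind a veil of ignorance: you don't know
# which position you will land in. Distributions are invented for illustration.

societies = {
    'laissez_faire': [1, 2, 5, 20, 100],    # high average, miserable worst-off
    'egalitarian':   [10, 11, 12, 13, 14],  # lower average, protected floor
}

def expected_value(incomes: list[float]) -> float:
    """Gamble on the average: equal chance of landing in any position."""
    return sum(incomes) / len(incomes)

def maximin(incomes: list[float]) -> float:
    """Rawls-style rule: judge a society by its worst-off position."""
    return min(incomes)

print(max(societies, key=lambda s: expected_value(societies[s])))  # laissez_faire
print(max(societies, key=lambda s: maximin(societies[s])))         # egalitarian
```

The two decision rules pick different societies from the same data, which is why the choice of rule behind the veil (maximin versus expected-utility maximization) carries so much weight in debates over Rawls's argument.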