Decision Making
How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain.
Decision Making and Rationality are the study of how individuals choose between alternatives, the biases that influence those choices, and what it means to make a "good" or "rational" decision. While classical economics assumes humans are "Econs"—perfectly rational agents who maximize utility—cognitive science reveals that we are "Humans"—beings with limited information, finite processing power, and a suite of "heuristics" (mental shortcuts) that lead to predictable errors. Understanding these processes is critical for everything from individual financial planning to the design of public policy and the development of ethical AI.
Remembering
- Rationality — The quality of being based on or in accordance with reason or logic; often divided into instrumental and epistemic.
- Utility — A measure of the total satisfaction or value received from consuming a good or service.
- Heuristic — A mental shortcut or "rule of thumb" used to solve problems or make decisions quickly.
- Cognitive Bias — A systematic pattern of deviation from norm or rationality in judgment.
- Bounded Rationality — Herbert Simon's idea that rationality is limited by available information, time, and the mind's processing power.
- Prospect Theory — Kahneman and Tversky's theory describing how people choose between probabilistic alternatives that involve risk.
- Loss Aversion — The tendency to prefer avoiding losses to acquiring equivalent gains (the "pain" of losing $100 is greater than the "joy" of winning $100).
- Anchoring — The tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions.
- Availability Heuristic — Judging the frequency or probability of an event based on how easily examples come to mind.
- Confirmation Bias — The tendency to search for, interpret, and recall information in a way that confirms one's prior beliefs.
- Sunk Cost Fallacy — Continuing an endeavor as a result of previously invested resources (time, money, effort) even when it's no longer optimal.
- Framing Effect — People react to a particular choice in different ways depending on how it is presented (e.g., as a loss or as a gain).
- Expected Value — The sum of all possible values for a random variable, each multiplied by the probability of its occurrence.
- System 1 vs. System 2 — Dual-process theory: System 1 is fast/intuitive; System 2 is slow/deliberative.
Understanding
The central tension in this field is between "Normative" models (how we should decide) and "Descriptive" models (how we actually decide).
Dual-Process Theory: Popularized by Daniel Kahneman in Thinking, Fast and Slow.
- System 1: Operates automatically and quickly, with little or no effort and no sense of voluntary control. It's the source of intuition and biases.
- System 2: Allocates attention to the effortful mental activities that demand it, including complex computations. It's the "rational" part but is "lazy" and often accepts System 1's suggestions.
Prospect Theory: Overturned the "Expected Utility" model by showing that:
1. We evaluate outcomes relative to a reference point, not in absolute terms.
2. We are risk-averse for gains but risk-seeking for losses (trying to "break even").
3. We overweight small probabilities (why people play the lottery) and underweight large but uncertain ones, which makes guaranteed outcomes disproportionately attractive (the certainty effect).
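These claims can be written down as two short formulas: an S-shaped value function and an inverse-S probability-weighting function. The sketch below is a minimal illustration in the simpler 1979 (non-cumulative) form, using the median parameter estimates from Tversky and Kahneman's 1992 paper; the exact values are illustrative, not canonical.
<syntaxhighlight lang="python">
# Prospect theory's two building blocks: an S-shaped value function and an
# inverse-S probability-weighting function. Parameters (alpha, lam, gamma)
# are the Tversky & Kahneman (1992) median estimates; illustrative only.

def value(x, alpha=0.88, lam=2.25):
    """Value of outcome x relative to the reference point (x = 0):
    concave for gains, convex and lam-times steeper for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Decision weight: overweights small p, underweights large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# A fair coin flip: win $100 or lose $100. Expected value is zero,
# but the prospect value is negative, so most people decline.
prospect = weight(0.5) * value(100) + weight(0.5) * value(-100)
print(f"Prospect value of the coin flip: {prospect:.1f}")  # about -30
</syntaxhighlight>
Because losses are scaled by lam = 2.25, the fair coin flip has negative prospect value despite a zero expected value: loss aversion in one line.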
Nudge Theory: Proposed by Thaler and Sunstein. Because humans have predictable biases, "choice architects" can design environments (nudges) that steer people toward better decisions without forbidding any options or significantly changing their economic incentives (e.g., making organ donation the "default" option).
Applying
Calculating Expected Value vs. Utility (Bernoulli's resolution of the St. Petersburg Paradox):
<syntaxhighlight lang="python">
import math

def expected_value(p_win, win_amount, p_loss, loss_amount):
    """Calculates the mathematical expected value of a gamble."""
    return (p_win * win_amount) + (p_loss * loss_amount)

def logarithmic_utility(amount):
    """Bernoulli's proposal: utility increases logarithmically with wealth."""
    return math.log(amount) if amount > 0 else -float('inf')

# Gamble: 50% chance to win $200, 50% chance to lose $50
ev = expected_value(0.5, 200, 0.5, -50)
print(f"Expected Value: ${ev}")  # +$75

# Now consider utility for someone with $1000 base wealth
base_wealth = 1000
u_no_gamble = logarithmic_utility(base_wealth)
u_gamble = 0.5 * logarithmic_utility(base_wealth + 200) + \
           0.5 * logarithmic_utility(base_wealth - 50)

print(f"Utility of not gambling: {u_no_gamble:.4f}")
print(f"Utility of gambling: {u_gamble:.4f}")

# If u_gamble > u_no_gamble, a rational agent (in Bernoulli's sense) takes the bet.
</syntaxhighlight>
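With these numbers the gamble is worth taking: the expected log-utility of gambling (about 6.9733) beats standing pat (about 6.9078). Shrink the base wealth to $60, though, and the same agent refuses the same bet (about 3.9316 vs. 4.0943), which is Bernoulli's point: the value of a gamble depends on the utility of wealth, not on expected dollars.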
Practical Applications:
- Finance → Avoiding panic-selling during market dips (loss aversion).
- Medicine → Understanding how "80% survival" vs "20% mortality" frames patient choices.
- Public Policy → Designing "Save More Tomorrow" programs that work with, rather than against, hyperbolic discounting by committing future pay raises (not current income) to savings.
- Management → Overcoming the "planning fallacy" (underestimating time/cost) in projects; one corrective, reference class forecasting, is sketched below.
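Reference class forecasting replaces the optimistic "inside view" estimate with the distribution of outcomes from similar past projects. A minimal sketch, using hypothetical overrun data:
<syntaxhighlight lang="python">
# Reference class forecasting: correct an "inside view" estimate using the
# distribution of overruns in a reference class of similar past projects.
# The overrun ratios below are hypothetical placeholder data.

past_overruns = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.8, 2.2]  # actual / estimated

def adjusted_estimate(inside_view_weeks, overruns, percentile=0.8):
    """Scale the naive estimate by the overrun ratio at the chosen
    percentile of the reference class (80th percentile = cautious plan)."""
    ranked = sorted(overruns)
    idx = min(int(percentile * len(ranked)), len(ranked) - 1)
    return inside_view_weeks * ranked[idx]

print(adjusted_estimate(10, past_overruns))  # a 10-week plan -> 18.0 weeks
</syntaxhighlight>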
Analyzing
| Heuristic | Function | Potential Bias |
|---|---|---|
| Availability | Use ease of recall as proxy for frequency | Overestimating rare, dramatic events (e.g., plane crashes). |
| Representativeness | Judge by similarity to a prototype | Ignoring base rates (e.g., the "Linda problem"). |
| Anchoring | Start from a given value and adjust | Insufficient adjustment from the initial number. |
| Affect | Use feelings as a guide to judgment | Decisions based on emotion rather than data. |
The Base Rate Fallacy: People often ignore the overall probability of an event (the base rate) in favor of specific, descriptive information. For example, if a test for a rare disease (1 in 10,000) is 99% accurate, and you test positive, your chance of having it is still only about 1%—yet most people (and many doctors) would guess 99%.
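The arithmetic behind that "about 1%" is a direct application of Bayes' theorem. A quick check, assuming "99% accurate" means both 99% sensitivity and a 1% false-positive rate:
<syntaxhighlight lang="python">
# Bayes' theorem for the rare-disease example above. Assumes "99% accurate"
# means 99% sensitivity and a 1% false-positive rate.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test)."""
    true_pos = sensitivity * prior
    false_pos = false_positive_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)

p = posterior(prior=1 / 10_000, sensitivity=0.99, false_positive_rate=0.01)
print(f"P(disease | positive) = {p:.4f}")  # ~0.0098, i.e. about 1%
</syntaxhighlight>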
Evaluating
Competing standards for judging whether a decision is rational:
- Ecological Rationality: Gerd Gigerenzer argues that heuristics aren't "errors" but adaptations that are "fast and frugal" and work well in the specific environments they evolved for.
- Coherence vs. Correspondence: Is a decision rational if it's internally consistent (coherence), or only if it leads to the best real-world outcome (correspondence)?
- Intertemporal Choice: How do we evaluate decisions that benefit the current self at the expense of the future self (e.g., climate change or retirement)?
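The standard way to frame that last question is to compare exponential discounting, which is time-consistent, with the hyperbolic discounting people actually exhibit. A minimal sketch; the discount parameters are illustrative:
<syntaxhighlight lang="python">
# Exponential vs. hyperbolic discounting of a $100 reward.
# The parameter values (r, k) are illustrative, not empirical estimates.

def exponential_discount(value, delay_days, r=0.01):
    """Time-consistent: every extra day shrinks value by the same factor."""
    return value * (1 - r) ** delay_days

def hyperbolic_discount(value, delay_days, k=0.05):
    """Time-inconsistent: steep discounting at short delays, shallow later."""
    return value / (1 + k * delay_days)

for days in (0, 1, 30, 365):
    print(f"{days:>3} days: exponential {exponential_discount(100, days):6.1f}, "
          f"hyperbolic {hyperbolic_discount(100, days):6.1f}")
</syntaxhighlight>
With these parameters the hyperbolic curve drops much faster at first but flattens out later; that early steepness is what makes immediate rewards loom so large and drives preference reversals as a distant reward draws near.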
Creating
Future Directions:
- Augmented Rationality: Developing AI tools that act as "external System 2s," identifying and correcting human biases in real-time.
- Neuroeconomics: Using fMRI to identify the brain regions (like the amygdala and prefrontal cortex) that compete during risky decisions.
- Algorithmic Fairness: Ensuring that automated decision systems (in hiring, lending, or law) don't encode or amplify human biases.
- Collective Intelligence: Designing voting and prediction market systems that aggregate individual "irrational" guesses into a "rational" collective prediction.
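The last item can be illustrated with a toy simulation of the "wisdom of crowds": averaging many noisy guesses yields an estimate far closer to the truth than a typical individual gets. The true value, noise level, and crowd size below are arbitrary:
<syntaxhighlight lang="python">
# Toy "wisdom of crowds" simulation: the mean of many noisy, unbiased
# guesses is far closer to the truth than a typical individual guess.
# True value, noise level, and crowd size are arbitrary choices.

import random
import statistics

random.seed(42)
true_value = 800  # e.g., jellybeans in a jar
guesses = [random.gauss(true_value, 150) for _ in range(500)]

crowd_error = abs(statistics.mean(guesses) - true_value)
typical_error = statistics.mean(abs(g - true_value) for g in guesses)
print(f"Crowd error: {crowd_error:.0f}, "
      f"typical individual error: {typical_error:.0f}")
</syntaxhighlight>
The caveat is that this only works when errors are roughly independent and unbiased; a shared bias, such as everyone anchoring on the same number, is not averaged away, which is why aggregation mechanisms try to keep individual judgments independent.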