Decision Theory and Expected Utility Explained

This is a retelling of Chapter 6 Part 1 (sections 6.1 through 6.6) from “Behavioral Finance for Private Banking” by Thorsten Hens, Enrico G. De Giorgi, and Kremena K. Bachmann (Wiley, 2018).

Chapter 6 is where the book stops talking about biases and starts talking about how decisions should be made. Or at least, how mathematicians over the last 350 years have tried to figure that out. Here’s the thing. This chapter is dense. But it matters, because everything about portfolio construction later in the book builds on this foundation.

Why Decision Theory Matters for Your Money

There’s a famous study by Ibbotson and Kaplan (2000) finding that strategic asset allocation explains more than 90% of the variability of a portfolio’s performance over time. Not stock picking. Not timing. The split between stocks, bonds, and cash.

But here’s the problem. That only works if you actually stick with your allocation through the full market cycle. And most people don’t. When losses hit, investors panic and abandon their plan.

So the question becomes: how do you design an allocation that people will actually hold through bad times? That is what this chapter is about. Building a framework for making investment decisions that accounts for how humans actually behave.

A Very Short History: From Prayer to Math

Humans always had to make risky decisions. Hunt a mammoth or a deer? Fight or surrender? For most of history, people prayed to gods before making these choices. Nobody thought math could help.

Then in 1670, Blaise Pascal had an idea. For any risky choice, write down all possible outcomes, estimate their probabilities, multiply each outcome by its probability, and add them up. Pick the option with the highest expected value.

Simple example. Coin toss 1: heads you get $6, tails you get $2. Coin toss 2: heads you get $9, tails you get $1. Pascal says pick coin 2 because its expected value is $5 versus $4 for coin 1.
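Pascal’s rule is simple enough to write in a few lines. A minimal sketch of the coin example:

```python
# Pascal's rule: weight each outcome by its probability and sum.
def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

coin_1 = [(0.5, 6), (0.5, 2)]  # heads $6, tails $2
coin_2 = [(0.5, 9), (0.5, 1)]  # heads $9, tails $1

print(expected_value(coin_1))  # 4.0
print(expected_value(coin_2))  # 5.0 -> Pascal picks coin 2
```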

For 68 years, that was the standard approach.

Bernoulli’s Insight: It’s Not About the Money, It’s About the Use of Money

In 1738, Daniel Bernoulli broke Pascal’s framework with a thought experiment called the St. Petersburg game.

Here are the rules. Flip a coin until it lands heads. If heads shows on the first flip, you get $1. If first heads is on flip two, you get $2. Flip three, $4. Each extra tails doubles the prize. The math says the expected value of this game is infinite.
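You can see the divergence directly: each possible flip adds another $0.50 to the expected value, so a game capped at n flips is worth n/2, and with no cap the sum never stops growing.

```python
# St. Petersburg game: payoff 2**(k-1) if the first heads lands on flip k,
# which happens with probability (1/2)**k. Each term contributes exactly 0.5.
def st_petersburg_ev(max_flips):
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, max_flips + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_ev(n))  # EV = n / 2, unbounded as n grows
```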

But nobody would pay infinite money to play it. Why?

Because, as Bernoulli wrote, what matters is not the money itself but the use of that money. And the use depends on how rich you already are. In his words: “A gain of one thousand ducats is more significant to the pauper than to a rich man.”

This is the birth of expected utility theory. Instead of just multiplying outcomes by probabilities, you first convert the outcomes into “utility” using some function that reflects how useful that money actually is to you. Then you calculate the expected value of that utility.

The key property is what economists call decreasing marginal utility. The first $1,000 matters a lot. The same $1,000 added on top of a million you already have? Not so much.
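Bernoulli’s own suggestion was logarithmic utility. A short sketch (starting wealth of $10,000 vs. $1,000,000 is my illustrative choice, not the book’s) shows both points: the same $1,000 is worth far less to the rich man, and the St. Petersburg game has finite expected utility even though its expected value is infinite.

```python
import math

# Log utility: each extra dollar matters less the richer you already are.
def extra_utility(wealth, gain):
    return math.log(wealth + gain) - math.log(wealth)

print(extra_utility(10_000, 1_000))     # ~0.0953: $1,000 means a lot
print(extra_utility(1_000_000, 1_000))  # ~0.0010: same $1,000, barely felt

# Expected *utility* of the St. Petersburg game (starting wealth $1,000)
# is finite even though the expected *value* is infinite.
eu = sum((0.5 ** k) * math.log(1000 + 2 ** (k - 1)) for k in range(1, 200))
print(eu)  # converges to a modest number
```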

Von Neumann and Morgenstern: The Rationality Rules

For 200 years after Bernoulli, mathematicians kept proposing different utility functions. It became a mess.

In 1944, John von Neumann and Oskar Morgenstern cleaned it up. They asked a simple question: what basic rules should any rational decision maker follow? They came up with three:

  1. Transitivity. If you prefer A over B and B over C, you must prefer A over C. No circular preferences.
  2. Independence. If two choices share identical parts, your decision should depend only on the parts that are different.
  3. Monotonicity. When comparing two sure things, prefer the bigger one.

Then they proved something important. Expected utility is the only decision framework that satisfies all three rules. So if you want to be “rational” in the mathematical sense, expected utility is your only option.

But here’s the catch. They did not restrict what the utility function looks like. It just has to be increasing. In theory, a rational person could have a very complicated utility function, which makes practical application difficult.

The Allais Paradox: Where Rational People Act “Irrational”

Despite the elegant math, people violate expected utility all the time. French Nobel laureate Maurice Allais showed this with a famous experiment.

Choice 1: Would you take a guaranteed $3,000 or an 80% chance of getting $4,000 (with 20% chance of nothing)? Most people take the sure $3,000.

Choice 2: Would you take a 10% chance of $3,000 or an 8% chance of $4,000? Most people switch and take the 8% shot at $4,000. The reasoning is: if my chances are small either way, I might as well go for the bigger prize.

Makes total sense intuitively. But it violates expected utility theory. The math behind the two choices is contradictory if you use the same utility function. This is called the Allais paradox, and it shows that the independence axiom breaks down in practice.
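The contradiction is easy to make explicit. Normalize u(0) = 0. Choice 1 says u(3000) > 0.8·u(4000). Choice 2 says 0.08·u(4000) > 0.10·u(3000), which, dividing by 0.1, says 0.8·u(4000) > u(3000). No single utility function can deliver both. A quick check over a family of power utilities (my illustrative choice):

```python
# With u(0) = 0, "sure $3,000 over 80% of $4,000" means
#   u(3000) > 0.8 * u(4000),
# while "8% of $4,000 over 10% of $3,000" means
#   0.08 * u(4000) > 0.10 * u(3000), i.e. 0.8 * u(4000) > u(3000).
# Expected utility always ranks the two pairs consistently:
for a in (0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    u = lambda x, a=a: x ** a
    prefers_sure = u(3000) > 0.8 * u(4000)
    prefers_small_3000 = 0.10 * u(3000) > 0.08 * u(4000)
    assert prefers_sure == prefers_small_3000  # preferences must match
print("expected utility can never produce the Allais pattern")
```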

Markowitz and Mean-Variance: Reward vs. Risk

In 1952, Harry Markowitz took a different approach. Instead of computing utility from every possible outcome, just focus on two numbers: the expected return (reward) and the variance (risk).

The idea is straightforward. First, throw out any option that gives less return for the same risk. What’s left is the “efficient frontier.” Then pick the point on that frontier that matches your personal comfort with risk.

More risk averse? Pick something on the lower left of the frontier. Less return, less volatility. More aggressive? Move to the upper right. Higher return, higher variance.
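Here is a minimal two-asset sketch of the reward/risk tradeoff. The return, volatility, and correlation numbers are illustrative assumptions, not figures from the book:

```python
# Expected return and variance of a stock/bond mix (illustrative numbers).
def portfolio(w_stock, mu=(0.08, 0.03), sigma=(0.20, 0.05), corr=0.2):
    mu_s, mu_b = mu
    s_s, s_b = sigma
    w_b = 1 - w_stock
    ret = w_stock * mu_s + w_b * mu_b
    var = ((w_stock * s_s) ** 2 + (w_b * s_b) ** 2
           + 2 * w_stock * w_b * corr * s_s * s_b)
    return ret, var

# Sliding from bonds to stocks moves you up and right along the frontier.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    r, v = portfolio(w)
    print(f"{w:.0%} stocks: return {r:.1%}, volatility {v ** 0.5:.1%}")
```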

This became the standard textbook method for portfolio construction. Every finance student learns it. Every robo-advisor uses some version of it.

Prospect Theory: How People Actually Decide

In 1979, two psychologists, Daniel Kahneman and Amos Tversky, published a new decision theory based not on axioms but on experiments. They watched how real people make choices. The work eventually earned Kahneman a Nobel Prize.

Prospect theory has two phases.

The Editing Phase

Before you evaluate anything, your brain reorganizes the choices. You frame them, simplify them, code them as gains and losses relative to some reference point. This is where biases sneak in.

Here’s a classic experiment. Group 1 gets $1,000 and must choose between a sure gain of $500 or a 50/50 chance of gaining $1,000 or nothing. Group 2 gets $2,000 and must choose between a sure loss of $500 or a 50/50 chance of losing $1,000 or nothing.

From a total wealth perspective, both groups face identical options. But Group 1 mostly takes the sure thing. Group 2 mostly gambles. Because people think in terms of gains and losses, not total wealth.
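It takes two lines of arithmetic to confirm the framings are identical in total-wealth terms. A minimal sketch:

```python
# Final-wealth distributions for the two framings.
# Group 1: start with $1,000; sure +$500, or 50/50 +$1,000 / nothing.
# Group 2: start with $2,000; sure -$500, or 50/50 -$1,000 / nothing.
g1_sure = {1500: 1.0}
g1_gamble = {2000: 0.5, 1000: 0.5}
g2_sure = {1500: 1.0}
g2_gamble = {1000: 0.5, 2000: 0.5}

assert g1_sure == g2_sure      # same sure outcome: $1,500
assert g1_gamble == g2_gamble  # same gamble: 50/50 $1,000 or $2,000
print("in total-wealth terms the two groups face identical choices")
```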

The Value Function

The value function in prospect theory is shaped like an S. Concave for gains (each extra dollar of gain matters less). Convex for losses (each extra dollar of loss also matters less, which means you become risk-seeking). And steeper on the loss side than the gain side.

That steepness is loss aversion. Losses hurt roughly 2.25 times as much as equivalent gains feel good, according to Kahneman and Tversky’s experiments.

There’s a nice example in the book. Imagine you have two credit cards and two wallets. There’s a 25% chance of losing each wallet. A traditional expected utility person (let’s call him “Bernoulli”) puts one card in each wallet to spread the risk. A prospect theory person (let’s call him “Kahneman”) puts both cards in one wallet. Why? Because he would rather have a higher chance of losing nothing than spread out the pain. He’s risk-seeking in the domain of losses.
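The wallet example can be checked with a standard S-shaped value function. The sketch below uses Tversky and Kahneman’s 1992 parameter estimates (curvature ≈ 0.88, loss aversion ≈ 2.25); the book may use different numbers, but the ranking is the point:

```python
# S-shaped value function: concave for gains, convex and steeper for losses.
ALPHA, LAMBDA = 0.88, 2.25  # Tversky-Kahneman (1992) estimates

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

def prospect_value(lottery):
    return sum(p * value(x) for x, p in lottery.items())

# Each wallet is lost independently with probability 25%; outcomes in cards lost.
spread = {0: 0.75 * 0.75, -1: 2 * 0.25 * 0.75, -2: 0.25 * 0.25}  # one card per wallet
concentrate = {0: 0.75, -2: 0.25}                                # both cards in one wallet

print(prospect_value(spread))       # lower: spreading spreads the pain
print(prospect_value(concentrate))  # higher: "Kahneman" concentrates the risk
```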

Probability Weighting

The second key feature of prospect theory is that people don’t treat probabilities at face value. Small probabilities get overweighted. Large probabilities get underweighted.

A 1% chance feels bigger than 1% in your head. A 60% chance feels smaller than 60%.
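A common one-parameter weighting function (the Tversky-Kahneman 1992 form, with γ ≈ 0.61 for gains; an illustrative sketch, not necessarily the book’s exact curve) shows both distortions:

```python
# Probability weighting: small p is inflated, large p is deflated.
GAMMA = 0.61  # Tversky-Kahneman (1992) estimate for gains

def weight(p):
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

print(weight(0.01))  # > 0.01: small chances loom large
print(weight(0.60))  # < 0.60: likely outcomes feel less certain
```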

This explains why people buy lottery tickets (overweighting the tiny chance of a big win) and why people buy insurance against rare disasters (overweighting the tiny chance of a big loss). In traditional theory, these two behaviors are contradictory. Prospect theory explains both.

The book calls this the “fourfold pattern of risk taking”:

|                           | Losses                                  | Gains                               |
|---------------------------|-----------------------------------------|-------------------------------------|
| Small probability         | No risk taking (buy insurance)          | Risk taking (buy lottery tickets)   |
| Moderate/high probability | Risk taking (gamble to avoid sure loss) | No risk taking (take the sure gain) |

Are These Theories Actually Rational?

Here’s where it gets interesting.

Mean-variance has problems. It can violate the independence axiom. It can also lead to the “mean-variance paradox” where an investor prefers an asset that never makes money over one that sometimes does, just because the first has zero variance. Variance treats upside volatility the same as downside volatility. But a real person would never consider the possibility of a big gain as “risk” to avoid.
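The paradox is easy to reproduce with the textbook mean-variance score, mean minus risk aversion times variance (the payoffs and risk-aversion parameter below are my illustrative choices):

```python
# Mean-variance score: mean - risk_aversion * variance.
def mv_score(lottery, risk_aversion=0.5):
    mean = sum(p * x for p, x in lottery)
    var = sum(p * (x - mean) ** 2 for p, x in lottery)
    return mean - risk_aversion * var

never_pays = [(1.0, 0)]                 # always returns 0: zero variance
sometimes_pays = [(0.5, 0), (0.5, 10)]  # never worse, sometimes +10

print(mv_score(never_pays))      # 0.0
print(mv_score(sometimes_pays))  # 5 - 0.5 * 25 = -7.5: scored as "worse"
# Mean-variance prefers the asset that can never make money, even though
# the other one dominates it outcome by outcome.
```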

Two investments can have the same mean and variance but look completely different. One might have capital protection with unlimited upside. The other might have limited upside with unlimited downside. Mean-variance says they’re equal. No sane investor would agree.

Prospect theory without probability weighting is actually rational. If you set the probability weighting parameter to 1 (meaning you don’t distort probabilities), prospect theory becomes a special case of expected utility. It satisfies all three von Neumann-Morgenstern axioms. The S-shaped value function is just a different utility function, and the axioms don’t restrict which utility function you use.

With probability weighting turned on, prospect theory can violate the “more is better” rule. But the normalized version fixes this issue.

And here’s the interesting connection. Mean-variance analysis is actually a special case of prospect theory. Take prospect theory, remove probability weighting, remove loss aversion, make the value function the same shape for gains and losses, and you get mean-variance. So mean-variance is what you get when you strip out everything that makes human decision-making human.

My Take

This chapter is the math backbone of the book. Three theories, each building on the previous one.

Expected utility says: people care about the usefulness of money, not just the amount. It’s rational but doesn’t match how people actually behave.

Mean-variance says: simplify everything to reward and risk. Very practical but has serious theoretical holes. It treats good volatility the same as bad volatility.

Prospect theory says: people think in gains and losses, they hate losses more than they enjoy gains, and they distort probabilities. It matches human behavior but at the cost of some rationality.

The practical takeaway for investors? If your financial advisor builds your portfolio using only mean-variance optimization, they might miss important aspects of how you actually experience risk. Your real risk is not variance. Your real risk is losses. And how you feel about losses is personal.

Next part covers how these theories translate into actual portfolio construction. That’s where the rubber meets the road.

