Utility Theory: How Much Risk Can You Handle?

Would you rather have a guaranteed $5 million or a 50/50 shot at $10 million? Most people take the sure thing. Mathematically the expected value is the same. But something inside you says the safe option just feels better. That feeling is exactly what utility theory tries to capture, and Chapter 62 of Wilmott’s book lays down the framework for it.

Why Utility Matters in Finance

Most of derivatives theory is about hedging and killing risk. You delta hedge, you eliminate uncertainty, life is good. But perfect hedging is not always possible. Sometimes there is residual risk you cannot get rid of. What do you do with it?

One approach is to just ignore it and look at expected outcomes. Another is to look at both the average and the standard deviation. We have seen those before. But there is a third way: measure how “happy” each outcome makes you.

That is utility. You assign a number to each outcome that represents how much satisfaction it gives you. And here is the key insight: $10 billion is twice as much money as $5 billion, but it does not make you twice as happy. Both numbers are so huge that the difference in your personal happiness is small. This simple observation has deep consequences for how we make investment decisions.
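To see this numerically, here is a sketch using logarithmic utility, a standard (though by no means the only) concave choice:

```python
import math

# Logarithmic utility: a standard example of diminishing marginal utility.
def utility(wealth):
    return math.log(wealth)

u_5b = utility(5e9)    # "happiness" at $5 billion
u_10b = utility(10e9)  # "happiness" at $10 billion

# Doubling your wealth adds only log(2) ≈ 0.69 to utility,
# a tiny bump on top of the ~22.3 you already had.
print(u_10b - u_5b)
print(u_10b / u_5b)  # nowhere near 2
```

Twice the money, barely three percent more utility. That gap between dollars and satisfaction is what the rest of the chapter builds on.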

Ranking Your Preferences

Before we get to utility functions, we need a way to rank things. You prefer an E-type Jaguar over a BMW. You cannot decide between peanut butter cheesecake and pumpkin pie. These preferences follow some basic rules that Wilmott calls axioms.

Completeness says that given any two outcomes, you can always decide which you prefer, or declare them equal. You are never stuck with “I literally cannot compare these two things.”

Reflexivity is obvious: any outcome is at least as good as itself.

Transitivity is the big one. If you prefer A to B, and B to C, then you must prefer A to C. Without this, your preferences would go in circles and nothing would make sense.

There is one more axiom called continuity. If you prefer A to B to C, then somewhere between A and C there is a lottery (a mix of the two) that you find exactly equal to B. This last axiom is what allows us to represent preferences with a mathematical function.
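Under expected utility, the continuity axiom even pins down the mixing probability explicitly. A tiny sketch with made-up utility numbers (1.0, 0.6, and 0.0 are arbitrary illustrations):

```python
# Suppose U(A) = 1.0, U(B) = 0.6, U(C) = 0.0, so A is preferred to B to C.
u_a, u_b, u_c = 1.0, 0.6, 0.0

# Continuity: there is a probability p such that the lottery
# "A with probability p, C with probability 1 - p" is exactly as good as B.
p = (u_b - u_c) / (u_a - u_c)

lottery_utility = p * u_a + (1 - p) * u_c
print(p)  # here 0.6: a 60/40 mix of best and worst matches the middle outcome
```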

The Utility Function

A utility function U(W) takes your wealth W and spits out a number representing your happiness. Different investors have different utility functions because everyone has a different attitude toward risk.

Two properties are almost universal. First, more wealth is better: the utility function goes up as wealth increases (U’(W) > 0). Second, each additional dollar of wealth makes you a little less happy than the previous one: the function is concave, curving downward (U’’(W) < 0). Economists call this diminishing marginal utility.

That downward curvature is what makes you risk averse. If the utility function were a straight line, you would be indifferent between a sure thing and a fair gamble with the same expected value. But because the curve bends down, the pain of losing $1 million is bigger than the joy of gaining $1 million. So you prefer certainty.
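A minimal check, with square-root utility standing in as one simple concave choice:

```python
import math

def u(w):
    return math.sqrt(w)  # concave: each extra dollar adds less utility

sure_thing = u(5)                        # utility of a guaranteed 5
fair_gamble = 0.5 * u(0) + 0.5 * u(10)   # 50/50 shot at 0 or 10, same expected value

print(sure_thing)   # ≈ 2.236
print(fair_gamble)  # ≈ 1.581
```

Same expected wealth, but the gamble's expected utility is lower, so a risk-averse investor takes the sure thing.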

Measuring Risk Aversion

There are two standard ways to measure how risk averse someone is.

The absolute risk aversion function is A(W) = -U’’(W)/U’(W). This tells you how much you dislike risk at a given level of wealth. The negative sign makes it positive for risk-averse investors.

The relative risk aversion function is R(W) = -W * U’’(W)/U’(W). This scales the absolute version by your wealth, giving a dimensionless measure.

These are not just theoretical curiosities. Specifying either one completely determines how an investor ranks risky investments.
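A quick numerical check of both definitions, using finite differences and logarithmic utility as an illustrative choice:

```python
import math

def absolute_risk_aversion(u, w, rel_h=1e-4):
    # Finite-difference estimates of U'(W) and U''(W), with a step
    # proportional to wealth to keep the numerics stable.
    h = rel_h * w
    u1 = (u(w + h) - u(w - h)) / (2 * h)
    u2 = (u(w + h) - 2 * u(w) + u(w - h)) / h ** 2
    return -u2 / u1

def relative_risk_aversion(u, w, rel_h=1e-4):
    return w * absolute_risk_aversion(u, w, rel_h)

# For log utility, A(W) = 1/W shrinks as wealth grows, while R(W) stays at 1.
for w in (10.0, 100.0, 1000.0):
    print(w, absolute_risk_aversion(math.log, w), relative_risk_aversion(math.log, w))
```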

When it comes time to actually use utility theory, a few standard choices keep showing up.

Constant Absolute Risk Aversion (CARA) uses U(W) = -e^(-aW). Your risk aversion stays the same no matter how rich you are. A billionaire and a millionaire with the same CARA utility would have the same dollar amount they are willing to risk. This is not very realistic for most people, but it is mathematically convenient.

Constant Relative Risk Aversion (CRRA) uses U(W) = W^(1-gamma) / (1-gamma). Your risk aversion scales with your wealth. A billionaire would bet more dollars than a millionaire, but the same fraction of their wealth. The special case where gamma goes to one gives you U(W) = log(W), the logarithmic utility that keeps popping up in finance.

Hyperbolic Absolute Risk Aversion (HARA) is a broad family that includes both CARA and CRRA as special cases. It is the most flexible, but also the most complicated.
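As a sanity check, the closed-form derivatives confirm the “constant” in the first two names (the parameters a and gamma below are arbitrary illustrations):

```python
import math

a, gamma = 0.5, 2.0  # illustrative risk-aversion parameters

# CARA: U(W) = -exp(-a W); derivatives in closed form.
def cara_A(w):
    u1 = a * math.exp(-a * w)        # U'(W)
    u2 = -a * a * math.exp(-a * w)   # U''(W)
    return -u2 / u1                  # = a, independent of wealth

# CRRA: U(W) = W^(1-gamma) / (1-gamma); derivatives in closed form.
def crra_R(w):
    u1 = w ** (-gamma)               # U'(W)
    u2 = -gamma * w ** (-gamma - 1)  # U''(W)
    return -w * u2 / u1              # = gamma, independent of wealth

print([cara_A(w) for w in (1.0, 10.0, 100.0)])  # all equal to a
print([crra_R(w) for w in (1.0, 10.0, 100.0)])  # all equal to gamma
```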

Certainty Equivalent Wealth

Here is a concept that makes utility theory practical. Suppose you face a risky investment with random outcomes. What amount of guaranteed cash would make you equally happy?

That number is called the certainty equivalent wealth, and you find it by solving U(W_c) = E[U(W)]. For a risk-averse investor facing genuine risk, the certainty equivalent is always less than the expected outcome. Always. The difference between the two tells you how much the risk “costs” you in terms of happiness.

Wilmott walks through a nice example. You can win or lose one dollar on a coin toss. The expected outcome is zero. But the expected utility is less than U(0), which means the certainty equivalent is negative. You would actually need to be paid something to take this bet, even though it is perfectly fair on average.
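A sketch of that calculation, assuming log utility and a starting wealth of 10 (both choices are illustrative; any concave utility gives the same qualitative answer):

```python
import math

W = 10.0      # assumed starting wealth (illustrative)
u = math.log  # assumed concave utility function

# Fair coin toss: end with W - 1 or W + 1, each with probability 1/2.
expected_wealth = 0.5 * (W - 1) + 0.5 * (W + 1)     # = W: the bet is fair
expected_utility = 0.5 * u(W - 1) + 0.5 * u(W + 1)

# Certainty equivalent: solve U(W_c) = E[U(W)]. For log utility, W_c = exp(E[U]).
W_c = math.exp(expected_utility)

print(expected_wealth)  # 10.0
print(W_c)              # ≈ 9.95: below current wealth
```

The certainty equivalent sits about five cents below your current wealth, so you would need roughly that much compensation to accept the bet.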

For small bets around your current wealth W, the certainty equivalent turns out to be approximately the expected wealth minus a correction term that depends on the variance and the absolute risk aversion function. This is where A(W) earns its keep. Higher risk aversion means a bigger penalty for uncertainty.
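You can check the small-bet approximation W_c ≈ E[W] - (1/2) A(W) Var[W] against the exact answer; this sketch assumes log utility, for which A(W) = 1/W:

```python
import math

W, spread = 10.0, 1.0  # illustrative wealth and bet size
u = math.log           # concave utility; A(W) = 1/W for log utility

# Exact certainty equivalent for a fair ±spread coin toss.
expected_utility = 0.5 * u(W - spread) + 0.5 * u(W + spread)
exact_ce = math.exp(expected_utility)

# Approximation: W_c ≈ E[W] - 0.5 * A(W) * Var[W].
variance = spread ** 2  # variance of ±spread with p = 1/2
approx_ce = W - 0.5 * (1.0 / W) * variance

print(exact_ce)   # ≈ 9.9499
print(approx_ce)  # 9.95
```

For a bet this small relative to wealth, the approximation lands within a fraction of a cent of the exact value.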

Maximizing Expected Utility

The main practical use of utility theory is choosing the best portfolio. You have N risky assets, each with random returns. You pick the weights of each asset to maximize your expected utility.

If you add a risk-free asset earning interest rate r, the problem becomes cleaner because you automatically satisfy the budget constraint. Whatever you do not invest in risky assets earns the risk-free rate.

The math is standard optimization with constraints. For specific utility functions like CRRA or CARA, you can sometimes get explicit solutions. For general utility functions, you solve numerically.
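A minimal sketch of the numerical route, assuming a toy market with one risky asset (two equally likely returns), a risk-free asset, and log utility — all the numbers are illustrative:

```python
import math

# Risk-free rate r; risky asset returns +20% or -10% with equal odds.
r, up, down = 0.02, 0.20, -0.10

def expected_log_utility(phi):
    # phi = fraction of wealth in the risky asset; the rest earns r.
    good = 1 + r + phi * (up - r)
    bad = 1 + r + phi * (down - r)
    return 0.5 * math.log(good) + 0.5 * math.log(bad)

# Crude numerical optimization: grid search over allocations from 0 to 2.
candidates = [i / 1000 for i in range(2001)]
best = max(candidates, key=expected_log_utility)
print(best)  # ≈ 1.42: for these numbers, log utility favors modest leverage
```

In practice you would use a proper optimizer and many assets, but the structure is the same: pick the weights that maximize expected utility.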

Ordinal vs Cardinal Utility

There is a subtle but important distinction here. If you only care about ranking outcomes (chocolate beats vanilla beats strawberry), then any monotonically increasing transformation of your utility function works just as well. This is ordinal utility.

But when you face uncertain outcomes and need to take expectations, the actual shape of the function matters. Two investors might rank certain outcomes the same way but make completely different decisions when facing gambles. This is cardinal utility, also called the von Neumann-Morgenstern utility function.

Wilmott illustrates this with a clean example. An investor with U(W) = W faces a choice between a 50/50 gamble on 0 or 9, versus a sure 4. The expected utility of the gamble is 4.5, which beats 4. Take the gamble. Now switch to U(W) = W^(1/2). The expected utility of the gamble becomes 1.5, which loses to the sure thing’s utility of 2. Take the safe option. Same ranking of 0, 4, 9. Different decisions under uncertainty.
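That example is easy to reproduce:

```python
import math

# The same 50/50 gamble on wealth 0 or 9, versus a sure 4.
outcomes, sure = (0.0, 9.0), 4.0

def expected(u):
    return 0.5 * u(outcomes[0]) + 0.5 * u(outcomes[1])

linear = lambda w: w   # risk-neutral investor
sqrt_u = math.sqrt     # risk-averse investor

print(expected(linear), linear(sure))  # 4.5 vs 4.0 -> take the gamble
print(expected(sqrt_u), sqrt_u(sure))  # 1.5 vs 2.0 -> take the sure thing
```

Both utility functions order the certain outcomes 0 < 4 < 9 identically, yet they disagree the moment an expectation is involved — the cardinal shape, not the ordinal ranking, drives the decision.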

Key Takeaways

Utility theory gives us a rigorous way to think about risk preferences. Three things to remember. First, diminishing returns means risk aversion. The curvature of your utility function determines how much you dislike uncertainty. Second, the certainty equivalent is always less than the expected outcome for risk-averse people, and the gap measures the cost of risk. Third, cardinal utility matters more than ordinal utility when decisions involve uncertainty, because the shape of the function, not just its ranking, drives the choice.

Wilmott admits utility theory is not popular outside of academic economics. But it forms the foundation for the asset allocation and investment models we will see in later chapters. Understanding how individuals value risk differently is the starting point for understanding how markets price risk collectively.

