Volatility Modeling: The Big Picture
You cannot see volatility. You cannot touch it. You cannot even measure it precisely at any given instant. And yet, it is the single most important input in options pricing. Get volatility wrong and nothing else matters. Get it right and you can make a lot of money.
Chapter 49 is a roadmap. Wilmott steps back from the math and gives you the big picture of volatility modeling before diving into the details in the chapters that follow. If you are going to spend your career worrying about volatility (and many quants do exactly that), this chapter tells you what you are in for.
The Different Flavors of Volatility
There are several things people call “volatility,” and they are all different. Understanding the distinctions is half the battle.
Actual volatility is the real, instantaneous amount of randomness in an asset at any moment. This is what goes into the Black-Scholes equation as a coefficient. The problem? You cannot observe it directly. It exists at each instant and might vary from moment to moment. It is the thing we actually need, and the thing we can never truly know.
Historical (realized) volatility is a backward-looking measure. Take some period of past data, compute the standard deviation of returns. This gives you a number. But it depends on your choice of time period and calculation method. The 30-day historical vol and the 90-day historical vol can give very different answers, and neither one tells you what volatility will be tomorrow.
Implied volatility is the number you plug into Black-Scholes to match a market price. It is often described as “the market’s view of future volatility.” Wilmott is skeptical of this interpretation, and so should you be. Implied vol is also influenced by supply and demand, fear and greed, and the simple fact that out-of-the-money puts are expensive because people need downside protection, not because they have sophisticated views about future variance.
Forward volatility is the predicted volatility for some future period. You can get this from implied vol data or from models like GARCH. It is what you actually need for pricing, and it is the hardest to get right.
Measuring Historical Volatility
The simplest approach: take a window of N days, compute the standard deviation of daily returns. But this has a nasty artifact. If one day has a huge return, the volatility estimate jumps up immediately and stays elevated for exactly N days until that return falls out of the window. The resulting “plateauing” effect is completely spurious.
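To see the plateau artifact concretely, here is a minimal sketch of the rolling-window estimator. The window length, shock size, and shock date are all illustrative choices, not from the text:

```python
import numpy as np

def rolling_vol(returns, window):
    """Annualized close-to-close volatility over a trailing window of returns."""
    returns = np.asarray(returns)
    out = np.full(len(returns), np.nan)
    for i in range(window - 1, len(returns)):
        out[i] = np.std(returns[i - window + 1 : i + 1], ddof=1) * np.sqrt(252)
    return out

# Quiet ~1%-a-day returns with a single 10% shock on day 50: the estimate
# jumps the day the shock enters the window and stays elevated until it
# drops out exactly `window` days later -- the spurious plateau.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 200)
r[50] = 0.10
vol = rolling_vol(r, window=30)
```

Plot `vol` and you see a step up at day 50 and a step back down at day 80, neither of which reflects anything real about the asset.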
A better approach is the exponentially weighted moving average (EWMA):
sigma_n^2 = lambda * sigma_(n-1)^2 + (1-lambda) * return_n^2
Recent returns get more weight. Old returns fade away gradually. No plateauing. This is what RiskMetrics uses, and it is a solid workhorse method.
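A minimal EWMA sketch, using the RiskMetrics daily decay factor lambda = 0.94 and the same illustrative shock as before:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA variance: sigma_n^2 = lam * sigma_{n-1}^2 + (1 - lam) * r_n^2.

    lam = 0.94 is the classic RiskMetrics choice for daily data.
    """
    returns = np.asarray(returns)
    var = np.empty(len(returns))
    v = returns[0] ** 2          # seed with the first squared return
    for n, ret in enumerate(returns):
        v = lam * v + (1.0 - lam) * ret ** 2
        var[n] = v
    return var

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 200)
r[50] = 0.10                     # same illustrative shock as before
sigma = np.sqrt(ewma_variance(r) * 252)   # annualized
# The estimate jumps at day 50, then decays geometrically at rate lam
# per day -- no plateau, because old returns fade rather than fall off a cliff.
```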
Take this one step further and you get GARCH (Generalized Autoregressive Conditional Heteroscedasticity). Yes, the name is ridiculous. The idea is simple: volatility tends to revert to a long-term mean. GARCH combines the EWMA concept with mean reversion:
sigma_n^2 = alpha * long_term_variance + beta * sigma_(n-1)^2 + gamma * return_n^2
GARCH is useful because it lets you forecast future volatility. With EWMA, the expected future variance is just today’s value forever. With GARCH, it gradually reverts to the long-term mean. This makes more economic sense, since periods of extreme volatility do not last forever.
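The forecasting property follows directly from the recursion above: with alpha + beta + gamma = 1, expected variance k days ahead reverts geometrically toward the long-term level at rate (beta + gamma) per day. A sketch, with parameter values chosen purely for illustration:

```python
import numpy as np

def garch_forecast(current_var, long_term_var, beta, gamma, horizon):
    """k-step-ahead variance forecast for GARCH(1,1).

    Under sigma_n^2 = alpha * V_L + beta * sigma_{n-1}^2 + gamma * r_n^2
    with alpha + beta + gamma = 1, taking expectations (E[r_n^2] = sigma_n^2)
    gives E[sigma_{n+k}^2] = V_L + (beta + gamma)^k * (sigma_n^2 - V_L).
    """
    phi = beta + gamma
    k = np.arange(1, horizon + 1)
    return long_term_var + phi ** k * (current_var - long_term_var)

# Illustrative numbers: vol is currently 40% annualized, long-run 20%.
fc = garch_forecast(current_var=0.40 ** 2, long_term_var=0.20 ** 2,
                    beta=0.90, gamma=0.05, horizon=250)
# fc decays monotonically from near 0.16 toward 0.04. An EWMA is the
# special case beta + gamma = 1: the forecast stays at 0.16 forever.
```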
Beyond Closing Prices
Here is a practical gem. The standard volatility estimate uses only closing prices. But every day you also have the open, the high, and the low. Throwing away all that information seems wasteful. And it is.
Wilmott presents several estimators that use intraday price data:
Parkinson (1980) uses the daily high and low. It is about five times more efficient than close-to-close, meaning that for the same amount of data your estimate has roughly one-fifth the variance.
Garman and Klass (1980) uses open, high, low, and close. Seven times more efficient.
Rogers and Satchell (1991) is similar but does not assume zero drift. Important in trending markets.
If you are estimating volatility from daily data and not using range-based estimators, you are leaving accuracy on the table. The improvement is significant and essentially free.
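As a sketch of the simplest of these, here is the Parkinson estimator checked against a simulated path with known volatility. The step counts and day counts are illustrative assumptions, not from the text:

```python
import numpy as np

def parkinson_vol(high, low, periods_per_year=252):
    """Parkinson (1980) range-based volatility from daily highs and lows."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    daily_var = np.mean(hl ** 2) / (4.0 * np.log(2.0))
    return np.sqrt(daily_var * periods_per_year)

# Sanity check against simulated geometric Brownian motion with a known
# 20% annualized volatility; 390 intraday steps per day approximate the
# continuous high/low.
rng = np.random.default_rng(2)
true_vol, n_days, steps = 0.20, 60, 390
dt = 1.0 / (252 * steps)
highs, lows = [], []
for _ in range(n_days):
    increments = true_vol * np.sqrt(dt) * rng.normal(size=steps)
    path = np.exp(np.concatenate(([0.0], np.cumsum(increments))))
    highs.append(path.max())
    lows.append(path.min())
est = parkinson_vol(highs, lows)   # should land near 0.20
```

With only 60 days of data, a close-to-close estimate would be noticeably noisier; that is the efficiency gain in action.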
Maximum Likelihood Estimation
Wilmott takes a detour into statistics with a delightful taxi example. You arrive in a city and get taxi number 20,922. How many taxis are in the city?
Using maximum likelihood: the probability of getting taxi 20,922 if there are N taxis is 1/N (and zero if N < 20,922). This is maximized by making N as small as the observation allows, so N = 20,922. You would estimate there are exactly 20,922 taxis. Simple but powerful idea: choose the parameter value that makes the observed data most likely.
Applied to volatility: if you have a series of returns and assume they are normally distributed, the maximum likelihood estimate of volatility is exactly the standard deviation formula you already know. MLE gives you the “best” estimate in a well-defined statistical sense.
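A quick numerical sketch of that claim, assuming zero-mean normal returns (the grid bounds and sample size are arbitrary choices for illustration):

```python
import numpy as np

def neg_log_likelihood(sigma, returns):
    """Negative log-likelihood of zero-mean normal returns with std sigma
    (constant terms dropped, since they do not affect the maximizer)."""
    n = len(returns)
    return n * np.log(sigma) + np.sum(returns ** 2) / (2.0 * sigma ** 2)

rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.01, 1000)

# Scan candidate sigmas: the likelihood peaks at sqrt(mean(r^2)), which is
# exactly the usual (zero-mean) standard-deviation formula.
grid = np.linspace(0.005, 0.02, 2001)
sigma_mle = grid[np.argmin([neg_log_likelihood(s, r) for s in grid])]
sigma_std = np.sqrt(np.mean(r ** 2))
# sigma_mle and sigma_std agree to within the grid spacing.
```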
Wilmott even applies MLE to a survey of quant salaries from wilmott.com (mean salary: $133,284 with a lognormal distribution). A fun example that also shows MLE works for distributions beyond the normal.
Smiles and Skews
For a series of options with the same expiration but different strikes, plot implied volatility against strike. If Black-Scholes were correct and volatility were constant, this plot would be flat. It never is.
If the plot curves upward at both ends like a smile, it is called a volatility smile. If it slopes downward from left to right, so that low strikes carry higher implied volatilities, it is a (negative) skew, common in equity markets where downside protection is expensive. If it slopes the other way, it is a positive skew.
What do smiles and skews tell us? Several things, and Wilmott is refreshingly honest about the ambiguity:
- The market does not believe Black-Scholes assumptions
- Supply and demand push certain options higher or lower
- Traders need downside protection and are willing to overpay
If out-of-the-money puts cost 10 cents when they should theoretically cost 7 cents, is that irrational? Not really. If you need the protection, three extra cents is nothing. “Penny wise, pound foolish,” as the saying goes.
You can speculate on smiles and skews using specific structures. A straddle lets you bet on the level of volatility. A risk reversal lets you bet on the skew. A butterfly spread lets you bet on the smile. These are the building blocks of volatility trading.
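The payoffs at expiry make the connection clear. A minimal sketch, with strikes chosen purely for illustration:

```python
import numpy as np

def call(S, K):
    """Call payoff at expiry."""
    return np.maximum(S - K, 0.0)

def put(S, K):
    """Put payoff at expiry."""
    return np.maximum(K - S, 0.0)

S = np.linspace(50, 150, 5)   # a few terminal stock prices

# Straddle: long call + long put at the same strike.
# Pays off for a big move either way -> a bet on the level of volatility.
straddle = call(S, 100) + put(S, 100)

# Risk reversal: long OTM call, short OTM put.
# Gains if upside vol is cheap relative to downside -> a bet on the skew.
risk_reversal = call(S, 110) - put(S, 90)

# Butterfly: long the wings, short twice the body.
# Pays off when the stock stays near the middle strike -> a bet on the smile.
butterfly = call(S, 90) - 2 * call(S, 100) + call(S, 110)
```

Each structure isolates one feature of the implied volatility curve, which is what makes them the natural building blocks for trading it.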
The Six Approaches to Volatility Modeling
Here is the roadmap for the chapters ahead:
1. Deterministic volatility surfaces (Chapter 50). Assume volatility is a function of stock price and time: sigma(S, t). Calibrate this function to match all market prices. Wilmott hates this approach with “every atom of his being” because it assumes market prices contain perfect information about future volatility. They do not. If you calibrate today and come back next week, the surface will have changed. It never passes the simplest scientific test.
2. Stochastic volatility (Chapter 51). Volatility itself follows a random process. This makes sense because we genuinely cannot predict volatility. The problem: now you have two sources of randomness (stock and volatility) but only one stock to hedge with. You need another option to hedge volatility risk, and that introduces the “market price of volatility risk,” a concept that is wonderful in theory and unreliable in practice.
3. Uncertain parameters (Chapter 52). Volatility lies within a range, but you make no assumptions about its probability distribution within that range. This is uncertainty, not randomness. The result: you get a range of option values instead of a single number, and long and short positions take different values. The pricing equation turns out to be the same nonlinear equation from the transaction costs chapter.
4. Empirical analysis (Chapter 53). Analyze actual data to figure out what volatility does. How does it move? What is the volatility of volatility? This is the scientific approach: let the data tell you the model.
5. Static hedging (Chapter 60). Instead of dynamically hedging with the stock, replicate your exotic option’s payoff using a portfolio of vanilla options. This reduces exposure to model error because you match cash flows with traded instruments. Practical and increasingly popular.
6. Stochastic volatility with mean-variance analysis (Chapter 54). Accept that not all risk can be hedged. Build a framework that uses dynamic hedging with stock and static hedging with vanillas. This avoids the problematic market price of risk concept.
To Calibrate or Not?
This is the big philosophical question, and Wilmott has strong opinions.
Should your model reproduce market prices exactly? Calibration enthusiasts say yes. Wilmott says the market is not that smart. He compares it to a corner shop pricing milk: the shopkeeper does not do utility analysis of his customers, he just marks things up as much as he can. Similarly, out-of-the-money puts are expensive because of fear and demand, not because the market has perfect knowledge of future volatility.
He compares calibration to phlogiston theory, the disproven idea that burning materials release a substance called phlogiston. Calibrated volatility surfaces are financial phlogiston: they seem to explain what we observe, but the underlying model is wrong.
The pragmatic takeaway: use calibration as a consistency tool, not as truth. Make sure your exotic pricing is consistent with vanilla prices because you hedge exotics with vanillas. But do not believe the calibrated surface predicts anything about the future.
A Summary Table
Wilmott provides a useful comparison of the models:
| Model | Math Complexity | Popularity |
|---|---|---|
| Constant vol | Black-Scholes formulae | Very, especially for vanillas |
| Deterministic vol | sigma(S,t), Black-Scholes PDE | Very, for exotics |
| Stochastic vol | Higher dimensions, transforms | Very, for exotics |
| Jump diffusion | Poisson processes | Increasing |
| Uncertain vol | Nonlinear PDE | Not popular (unfortunately) |
| Stoch vol + mean-variance | Higher dimensions, nonlinearity | Not popular (unfortunately) |
Notice the “(unfortunately)” for the last two. Wilmott thinks these are the best approaches but they have not caught on in practice. The market prefers tractable models over correct ones. This is a recurring theme in quantitative finance: speed beats accuracy when traders need numbers in real time.
Key Takeaways
Derivatives are all about volatility. Every pricing model, every hedging strategy, every risk measure depends on getting volatility right. And volatility is the one thing we cannot observe directly, cannot measure precisely, and cannot forecast reliably.
The chapters ahead will present many different approaches, each with its own strengths and weaknesses. But the single most important lesson from this overview chapter: never forget that volatility is a property of the stock, not a property of the options market. It would still exist even if no derivatives were traded. Option prices may reflect expectations about volatility, but they also reflect fear, greed, and supply-demand dynamics. Taking implied volatility at face value is a mistake.
Previous post: Transaction Costs: The Hidden Tax on Every Trade
Next post: Volatility Surfaces: Smiles, Skews, and Local Vol