Calendar Effects in the Stock Market - January, Weekends, and Holidays
What if I told you that the day of the week affects your stock returns? Or that one specific month is consistently better than the other eleven? Sounds like astrology for finance people, right?
But here’s the thing. Researchers have been finding these patterns for decades. And they still can’t fully explain them. Chapter 17 of Burton and Shah’s book is about calendar effects. Patterns in the stock market tied to nothing more than the calendar.
If these patterns are real, they are a problem for the efficient market hypothesis. Because it would mean you can predict returns based on something as simple as “what month is it.” And that shouldn’t be possible in an efficient market.
The January Effect
This is the most famous calendar anomaly. Stocks tend to do better in January than in other months.
Richard Thaler brought this to public attention, drawing on a 1976 paper by Rozeff and Kinney. They found extraordinary January returns in an equally weighted stock index. But not in the Dow Jones Industrial Average, which tracks only 30 large companies. So the January effect is mostly a small-company thing.
And it’s stubborn. Haugen and Jorion checked the data again in 1996. They looked at NYSE stocks from 1926 through 1993. The January effect was still there, just as strong as when it was first discovered twenty years earlier. You’d expect the effect to disappear once people know about it. Smart money should trade it away. But it didn’t go away.
Hansen and Lunde checked 27 stock exchanges across 10 countries. Same result. Year-end calendar effects show up, but mostly in small-cap stocks.
So Why Does January Behave Differently?
The most popular explanation is tax-loss selling. Here is how it works.
Investors hold losing stocks all year. In December, they finally sell those losers to claim a tax deduction. All that selling pushes prices of beaten-down stocks even lower than they should be. Then January arrives, the selling stops, and those stocks bounce back. Simple supply and demand.
There is evidence for this. Odean found that December is unusual because investors sell losing stocks and winning stocks in almost equal proportions. The rest of the year, people tend to sell winners and hold losers (the disposition effect). But December is different because taxes motivate people to finally dump their losers.
A study from Finland adds more support. Finland has no “wash sale” rule, which means Finnish investors can sell a stock for the tax loss and immediately buy it back. Researchers found that Finnish investors do exactly that. They sell losers at year-end and repurchase the same stocks in January, especially the biggest losers.
But tax selling can’t be the whole story. The January effect shows up in countries that don’t have capital gains taxes. It shows up in countries where the tax year doesn’t end in December.
China is an interesting case. China’s year-end falls in January or February, and there are no capital gains taxes. Researchers found a March effect in Chinese stocks, not a January effect. This is consistent with the tax story in a weird way. The bounce happens after the year ends, regardless of which month that is.
Another Explanation: Window Dressing
There’s a second theory. Fund managers who are ahead of their benchmarks tend to play it safe near year-end. They shift their portfolios to match the index so they don’t mess up their annual numbers. This means they avoid buying small, interesting stocks until January.
But if window dressing were the cause, the January effect wouldn’t be concentrated in small-cap stocks. Fund managers hold all kinds of stocks. So this explanation doesn’t fit as neatly.
The Other January Effect
There’s a second January effect, and it’s completely different from the first one.
If January stock returns are positive, the rest of the year tends to be positive too. If January is negative, the rest of the year tends to be weaker.
Cooper, McConnell, and Ovtchinnikov confirmed this in 2006 using data from 1940 to 2003. January returns have real predictive power for the next eleven months. They noted that Wall Street traders had believed this for over 150 years before academics finally tested it.
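A minimal sketch of how such a test might look, run here on simulated noise rather than real market data (the drift and volatility numbers are assumptions, purely for illustration):

```python
# Sketch of a sign-agreement test for the "other January effect".
# The return series below is simulated, NOT real market data.
import random

random.seed(1)
# (january_return, feb_to_dec_return) pairs for 64 hypothetical years
years = [(random.gauss(0.01, 0.05), random.gauss(0.06, 0.15))
         for _ in range(64)]

# How often does January's sign match the sign of the rest of the year?
hits = sum((jan > 0) == (rest > 0) for jan, rest in years)
print(f"sign agreement: {hits} of {len(years)} years")

# Caveat: because both legs drift upward, even independent series agree
# more than half the time. A real test compares against that baseline,
# which is what Cooper, McConnell, and Ovtchinnikov do more carefully.
```

The point of the caveat in the last comment is that a raw agreement rate above 50% proves nothing on its own; the predictive claim only holds if January beats what the market's general upward drift would produce anyway.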
Both January effects are still in the data long after everyone knows about them. Why they persist is still an open question.
The Weekend Effect
Monday is the worst day of the week for stock returns. On average, Monday returns are negative. After 1952, Fridays had the best returns. Before 1952, when U.S. stocks also traded on Saturdays, Saturday was the best day.
Think about what the efficient market hypothesis would predict. Since the market earns positive returns on average, every trading day should have positive average returns. And Monday comes after two non-trading days, so it should have roughly three times the average return. Instead it has negative returns. That’s the opposite of what you’d expect.
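That back-of-the-envelope prediction takes only a few lines to write out (the 8% annual drift is an assumed number, just for illustration):

```python
# Sketch of the EMH arithmetic: if returns accrue with calendar time,
# Monday's close-to-close return spans three calendar days.
annual_return = 0.08          # assumed average annual market return
per_calendar_day = annual_return / 365

# Tuesday-Friday each span one calendar day; Monday spans Sat + Sun + Mon.
expected_midweek = per_calendar_day * 1
expected_monday = per_calendar_day * 3

print(f"expected mid-week daily return: {expected_midweek:.5%}")
print(f"expected Monday return:         {expected_monday:.5%}")
# EMH predicts a positive Monday roughly 3x the mid-week day;
# the data shows a NEGATIVE Monday instead.
```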
Is it Monday that’s bad, or is it the weekend? Rogalski dug into this in 1984. He compared Friday’s close to Monday’s open, and then Monday’s open to Monday’s close. The loss happened over the weekend, between Friday’s close and Monday’s open. Monday’s actual trading hours were fine.
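Rogalski's decomposition is easy to sketch with made-up prices (all three prices below are hypothetical, chosen only to show the mechanics):

```python
# Sketch of Rogalski's decomposition of the Monday return.
# Prices are hypothetical, for illustration only.
friday_close = 100.0
monday_open = 99.4    # price gaps down over the weekend
monday_close = 99.5   # Monday's trading hours themselves are roughly flat

weekend_return = monday_open / friday_close - 1    # Fri close -> Mon open
intraday_return = monday_close / monday_open - 1   # Mon open -> Mon close
total_monday = monday_close / friday_close - 1     # the usual "Monday return"

# The two legs compound to the close-to-close Monday return:
# (1 + weekend) * (1 + intraday) = 1 + total
print(f"weekend:  {weekend_return:+.3%}")
print(f"intraday: {intraday_return:+.3%}")
print(f"total:    {total_monday:+.3%}")
```

In this toy example the whole loss sits in the weekend leg, which is exactly the pattern Rogalski found in the real data.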
So something about weekends makes stock prices drop. Nobody really knows why.
Research from other countries is mixed. Weekend effects show up in Thailand and several Asian markets. But they don’t show up in Indonesia, Singapore (for Monday), or several Eastern European countries. In Malaysia, the worst day was Monday but the best day was Wednesday, not Friday.
Is it back-to-work depression affecting traders? Are people overly optimistic on Fridays and the weekend corrects for it? These are guesses. Nobody has a solid answer.
Pre-Holiday Effects
Stock returns are abnormally high on trading days before holidays. Lakonishok and Smidt calculated that the average pre-holiday return is 23 times larger than a regular daily return. And those pre-holiday trading days account for about 50 percent of the total price increase in the Dow Jones Industrial Average.
That’s a wild number. Half of all gains happen right before holidays. And nobody can really explain it.
But there is an update. A study by Chong, Hudson, Keasey, and Littler found that the pre-holiday effect has essentially disappeared from U.S. data since 1991. It actually reversed for a while, turning into a negative pre-holiday return from 1991 to 1997, and then vanished completely. The effect still showed up in UK and Hong Kong data, though.
The fact that it disappeared could mean traders arbitraged it away. Or it might mean the effect was never real in the first place.
The Data Snooping Problem
And this brings us to the most important part of the chapter. Sullivan, Timmermann, and White published a paper in 2001 that asks an uncomfortable question: what if all these calendar effects are just statistical noise?
Here is the logic. If you run enough statistical tests on the same data, you will find patterns, even if there are no real patterns to find. A test at the 5% significance level rejects a true null hypothesis 5% of the time. Run 100 such tests and you'll get roughly 5 false positives. Run thousands of tests over decades and you'll "discover" all kinds of effects that look real but aren't.
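That arithmetic is easy to check with a quick simulation: run many t-tests on pure noise and count how many come out "significant" at the 5% level.

```python
# Run t-tests on samples of pure noise (no real effect anywhere)
# and count the false positives.
import math
import random
import statistics

random.seed(42)
n_tests, n_obs = 1000, 250
false_positives = 0
for _ in range(n_tests):
    sample = [random.gauss(0, 1) for _ in range(n_obs)]  # mean is truly 0
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n_obs)
    if abs(mean / se) > 1.96:  # ~5% two-sided threshold for large samples
        false_positives += 1

print(f"{false_positives} of {n_tests} tests rejected a true null")
# Expect somewhere around 50 "discoveries" even though nothing is there.
```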
This is called data mining. Or data snooping. And calendar effects research is a prime candidate for this problem. Decades of researchers have tested every possible calendar pattern on the same stock market data.
Sullivan, Timmermann, and White applied corrections for this multiple-testing problem. Their conclusion was direct: “although nominal p-values for individual calendar rules are extremely significant, once evaluated in the context of the full universe from which such rules were drawn, calendar effects no longer remain significant.”
In plain language: the patterns look real if you test them one at a time. But when you account for the fact that people tested hundreds of calendar rules on the same data, the statistical significance melts away.
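The simplest way to see how accounting for the search destroys significance is Bonferroni scaling. This is a much cruder adjustment than the bootstrap "reality check" Sullivan, Timmermann, and White actually use, and the counts here are hypothetical, but it shows the same melting-away:

```python
# Bonferroni sketch: a p-value that looks impressive alone
# can be worthless once you count how many rules were tried.
nominal_p = 0.01       # very significant if this were the only test
rules_tested = 100     # hypothetical size of the universe of calendar rules

# Bonferroni: scale the nominal p-value by the number of hypotheses tried,
# capped at 1.
adjusted_p = min(nominal_p * rules_tested, 1.0)
print(f"nominal p = {nominal_p}, adjusted p = {adjusted_p}")
# adjusted p = 1.0: no evidence at all once the search is accounted for
```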
What to Make of All This
The jury is still out. The Monday effect has the strongest evidence and shows up internationally. The January effect for small stocks is persistent and has plausible explanations like tax-loss selling. But the Sullivan, Timmermann, and White critique is serious. Maybe these are just artifacts of too many researchers looking too hard at the same numbers.
The honest answer from Burton and Shah is that nobody knows for sure. More testing with out-of-sample data from different countries will help. But right now, there is no definitive conclusion.
And that might be the most useful takeaway. When someone tells you they’ve found a market pattern, always ask: how many other patterns were tested before this one was found? Because in a big enough data set, you can find patterns in pure randomness.