Some of us are kicking ourselves for not having slavishly followed the Almanac, which shows that November has historically been a bullish month. To what extent are the Almanac's observations predictive, and how would we have done over the years if we had used Almanac-like data, as it was available at the time, to guide our market bets? The following study takes a quick look at that question.

I looked at monthly returns of the Dow Jones Industrial Average (ignoring dividends) going back to January 1959. Each month, I paired that month's percent return ("Y") with the average percent return ("X") of the previous 10 instances of that same month. For example, I paired the October 2003 return with the average of the returns from October 2002, October 2001, October 2000…October 1993. An Almanacian approach would be to say that if the prior 10 Octobers have been good, then this one is likely to be good as well.
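The pairing described above can be sketched in Python. This is a minimal illustration, not the author's actual code; the synthetic random series below merely stands in for the Dow monthly-return data, and the function name `build_seasonal_pairs` is my own invention.

```python
import numpy as np
import pandas as pd

def build_seasonal_pairs(monthly_returns: pd.Series, lookback: int = 10) -> pd.DataFrame:
    """Pair each month's percent return (Y) with the average percent return (X)
    of the previous `lookback` instances of that same calendar month.
    `monthly_returns` is indexed by month dates."""
    rows = []
    for date, y in monthly_returns.items():
        # All prior instances of the same calendar month, in chronological order
        mask = (monthly_returns.index < date) & (monthly_returns.index.month == date.month)
        prior = monthly_returns[mask]
        if len(prior) >= lookback:
            rows.append({"date": date, "X": prior.iloc[-lookback:].mean(), "Y": y})
    return pd.DataFrame(rows).set_index("date")

# Hypothetical stand-in data: one synthetic return per month, Jan 1959 - Dec 2006
idx = pd.date_range("1959-01-01", "2006-12-01", freq="MS")
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.6, 4.0, len(idx)), index=idx)
pairs = build_seasonal_pairs(returns)
```

Note that the first usable (X, Y) pair only appears once a calendar month has 10 prior instances, i.e. ten years into the data.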

It turns out that there seems to be some value in this approach. The regression is of the form:

Y = m*X + b

(This month’s percent return) = m*(average of prior 10 instances of the month) + b, and the result is:

Y = 0.184*X + 0.495

There were 573 months under observation (that's about 48 years of data). The adjusted R-squared was about 0.2%, meaning that the prediction "explains" only about 0.2% of the observed variation. The observed slope has a t-score of about 1.5, which indicates that there's about a 14% chance that a slope this large or larger would come about through randomness alone. That doesn't meet most thresholds for "statistical significance", but it's not too far from it.
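The regression statistics above can be reproduced from first principles with ordinary least squares. The sketch below uses synthetic data (with the article's slope and intercept baked in) rather than the actual Dow series, so its outputs will only roughly resemble the reported figures; it shows how the slope, adjusted R-squared, and the slope's t-score are computed.

```python
import numpy as np

# Synthetic stand-in data: in the article, x is the average return of the
# prior 10 instances of each calendar month and y is that month's actual return.
rng = np.random.default_rng(1)
n = 573
x = rng.normal(0.6, 1.5, n)
y = 0.184 * x + 0.495 + rng.normal(0.0, 4.0, n)

# Ordinary least squares fit of y = m*x + b
sxx = np.sum((x - x.mean()) ** 2)
m = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b = y.mean() - m * x.mean()

# Goodness of fit: R-squared and its small-sample (adjusted) correction
resid = y - (m * x + b)
sse = np.sum(resid ** 2)
sst = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - sse / sst
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)

# t-score of the slope: estimate divided by its standard error
se_m = np.sqrt(sse / (n - 2) / sxx)
t_slope = m / se_m
```

A t-score near 1.5 corresponds to a two-sided p-value of roughly 0.13-0.14, which matches the "about a 14% chance" figure in the text.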

Below are the most recent rows of data.

Date         Month   Dow close   Dow % change   Avg Dow % change over prior
                                                10 instances of that month
08/31/2006     8     11381.150       1.748               -1.809
09/30/2006     9     11679.070       2.618               -2.162
10/31/2006    10     12080.730       3.439                2.965
11/30/2006    11     12221.930       1.169                3.750
12/31/2006    12        NA             NA                 1.489

For the month of December, the predicted return is 0.495+0.184*(1.489), or about 0.77%. This approach was correctly bullish in October and November but missed the rallies in August and September.

I’d conclude that it looks like there might be a little bit of value in this particular Almanac-like approach, but not much.

Stephan Kraus Responds:

Thanks for that interesting article. Since I find it hard to evaluate such effects on their own, I calculated the sum of absolute deviations between the seasonality-based forecast and the actual performance, using the same data set and 10-year period as you did. In addition, I calculated a non-seasonal forecast based on all monthly returns during the previous ten years, i.e. 120 observations. Surprisingly, the sum of absolute deviations for the non-seasonal forecast is smaller than that of the seasonal one, even though the difference is small and probably not statistically significant (though I haven't tested that yet). The seasonal forecast is the better one in only 210 out of 456 months, or 5.5 months per year. Using a 5-year lookback period, that spread widens, and the seasonal forecast is only better in 211 out of 516 months, or less than 5 months per year.
