Nov 27

We are looking at the Vanguard study that mentions Shiller favorably, and it is obviously flawed. The overlap, the part-whole correlations, and the selected starting and ending points come to mind, as well as the intrinsic illogic of a 10 year horizon forecasting well but not a 1 year one, which would mean that the previous 9 years were much more predictive than the last year, or that the last 5 years are correlated differently from the prior 5. Add to that the lack of degrees of freedom in 10 year data with all the overlap, to say nothing of the historical data that Shiller uses, which is retrospective and was not reported at the time. But of course one hasn't read it yet, and they purposely make their methodology opaque where one might have found the real problems with it.

Kim Zussman writes: 

It would seem that, in the face of most long-term historical market conclusions, the Japanese stock market must be considered an outlier, in terms of upward drift as well as P/E.

Alex Castaldo adds:

The study we are talking about can be found here [20 page pdf]. The problem I see is this. They evaluate forecasts over a 1 year horizon and over a 10 year horizon. The one year procedure makes sense to me: you make a forecast, you wait one year to see how it turns out, and then you make another forecast. The R**2 is a measure of the quality of the forecast, or more precisely the percentage of the variance of returns explained by the forecast. The R**2's for one year are small, as one would expect, and nothing to get excited about. But what is the meaning of R**2 in the 10 year case? You make a forecast in 1990, invest until 2000, and then go back (how? with a time-reversal machine?) to 1991 and make a forecast for 2001? I am not sure the procedure is meaningful from an investment point of view. And statistically the return for 1991-2001 is going to be very similar to the return for 1990-2000; so if you forecast the latter to some small extent, you will probably forecast the former as well. It seems to me there is a kind of double counting or artificial boosting of the R**2 going on.
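
To see the mechanical effect, here is a minimal simulation sketch of my own (not from the paper): annual returns are drawn i.i.d., so there is zero true predictability, yet regressing overlapping 10 year forward returns on a slow "valuation" proxy built from trailing returns routinely produces far larger R**2's than the same exercise at a 1 year horizon. All parameter choices (140 years of data, a 6%/18% annual return distribution) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_years, n_sims = 140, 2000

def spurious_r2(horizon):
    """R**2 from regressing forward returns on a trailing-return 'valuation' proxy,
    when annual returns are i.i.d. (zero true predictability)."""
    out = []
    for _ in range(n_sims):
        r = rng.normal(0.06, 0.18, n_years)                    # i.i.d. annual returns
        ts = range(10, n_years - horizon)                      # one (overlapping) forecast per year
        signal = np.array([-r[t - 10:t].mean() for t in ts])   # crude stand-in for a CAPE-like ratio
        fwd = np.array([r[t:t + horizon].mean() for t in ts])  # forward average return over the horizon
        out.append(np.corrcoef(signal, fwd)[0, 1] ** 2)
    return np.array(out)

for h in (1, 10):
    r2 = spurious_r2(h)
    print(f"{h:2d}-year horizon: median spurious R**2 = {np.median(r2):.3f}, "
          f"90th percentile = {np.percentile(r2, 90):.3f}")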

When the predicted variable has overlap, it is standard to use the Hansen-Hodrick t-statistic, which attempts to compensate for the serial correlation introduced by the overlap. But because the study only gives an R**2, and not a Hansen-Hodrick t, we get no adjustment for the overlap at all.
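
For what it is worth, here is a sketch of how such a correction is usually computed: the OLS slope with a uniform-kernel HAC covariance using horizon minus one lags. The function name and the toy inputs in the usage comment are mine, not the study's.

import numpy as np

def hansen_hodrick_tstat(x, y, horizon):
    """Slope t-statistic for y ~ a + b*x with Hansen-Hodrick standard errors:
    a uniform-kernel HAC covariance using horizon-1 lags, the usual correction
    when y is an overlapping multi-period return."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                                   # OLS residuals
    n = len(y)
    Xu = X * u[:, None]                                # score contributions x_t * u_t
    S = Xu.T @ Xu / n                                  # lag-0 term
    for lag in range(1, horizon):                      # equal weight on lags 1..horizon-1
        G = Xu[lag:].T @ Xu[:-lag] / n
        S += G + G.T
    XtX_inv = np.linalg.inv(X.T @ X / n)
    cov = XtX_inv @ S @ XtX_inv / n                    # sandwich covariance of beta-hat
    return float(beta[1] / np.sqrt(cov[1, 1]))

# Hypothetical usage: 'cape' is a predictor series and 'fwd10' the matching
# overlapping 10-year forward returns (neither comes from the paper):
# t_hh = hansen_hodrick_tstat(cape, fwd10, horizon=10)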

I am sure that the 10 year R**2's are not comparable to the 1 year R**2's; they are apples and oranges. Someone suggested to me that it may still be valid to compare the 10 year R**2's to each other, as a relative measure of forecasting power. I don't know whether that is true or not.


Comments

3 Comments so far

  1. vic on November 28, 2012 12:55 am

    the r2 are amazingly high. a r2 of 20% corresponds to an r of 0.45 and would give incredible increases in returns if it were predictive. the r's are often 3 to 4 times their standard error of 0.11. why, as rumpole would ask, "is it always the fed model they wish to discredit at the expense of adulation for the retrospective, contrived Yale professor?" and one would ask how 2 phds and a cfa could fail to note the random cyclical nature of rolling returns a la slutsky-yule. vic

  2. Ed on November 28, 2012 2:52 pm

    The first thought my simple mind reached upon perusing the document was that 20 page papers might be short by academic standards, yet are still almost never necessary. Give me brevity or give me a bic lighter and a metal trash can.

  3. Craig Bowles on November 29, 2012 4:02 pm

    Tom McClellan does a weekend post that’s always interesting. Here was an interesting heads up for stocks related to housing based on lumber prices (moved forward one year).

    http://www.mcoscillator.com/learning_center/weekly_chart/lumber_prices_call_for_housing_stocks_rally/
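
A short arithmetic note on the figures in the first comment: an R**2 of 20% corresponds to r = sqrt(0.20), about 0.45. The quoted standard error of 0.11 is presumably the usual null approximation SE(r) ≈ 1/sqrt(n), which works out to about 0.11 for roughly 80 to 85 annual observations; the sample size is my inference, not something stated in the comment.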
