Nov 27

We are looking at the Vanguard study that mentions Shiller favorably, and it is obviously flawed. The overlap, the part-whole correlations, and the selected starting and ending points come to mind, as well as the intrinsic illogic of a 10-year horizon forecasting well when a 1-year horizon does not, which would mean that the previous 9 years were much more predictive than the last year, or that the last 5 years are correlated differently from the prior 5. Add to this the lack of degrees of freedom in 10-year data with all the overlap, to say nothing of the historical data that Shiller uses, which is retrospective and was not reported at the time. But of course one hasn't read it yet, and they purposely make their methodology opaque, which is where one could have found the real problems with it.

Kim Zussman writes: 

It would seem that, in the face of most long-term historical market conclusions, the Japanese stock market must be considered an outlier, in terms of upward drift as well as P/E.

Alex Castaldo adds:

The study we are talking about can be found here [20 page pdf]. The problem I see is this. They evaluate forecasts over a 1-year horizon and over a 10-year horizon. The one-year procedure makes sense to me: you make a forecast, you wait one year to see how it turns out, and then you make another forecast. The R**2 is a measure of the quality of the forecast, or more precisely the percentage of the variance of returns explained by the forecast. The R**2's for one year are small, as one would expect, and nothing to get excited about. But what is the meaning of R**2 in the 10-year case? You make a forecast in 1990, invest until 2000, and then go back (how? with a time reversal machine?) to 1991 and make a forecast for 2001? I am not sure the procedure is meaningful from an investment point of view. And statistically the return for 1991-2001 is going to be very similar to the return for 1990-2000; so if you forecast the latter to some small extent, you will probably forecast the former as well. It seems to me there is a kind of double counting or artificial boosting of the R**2 going on, as the simulation sketched below illustrates.
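A minimal Monte Carlo sketch of that boosting effect, not taken from the study: the sample length (80 years), the AR(1) persistence of 0.9 for the "valuation" signal, and the return mean and volatility are all arbitrary assumptions chosen only for illustration. Annual returns are drawn with no true predictability at all, yet the in-sample R**2 at the 10-year overlapping horizon comes out far larger than at the 1-year horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(y, x):
    """In-sample R**2 from a univariate OLS regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

n_years = 80        # assumed sample length, roughly the scale of these historical studies
horizon = 10
n_sims = 2000
r2_1yr, r2_10yr = [], []

for _ in range(n_sims):
    # Annual returns with NO true predictability (i.i.d. draws)
    ret = rng.normal(0.07, 0.17, n_years)
    # A persistent "valuation" signal (AR(1)), statistically unrelated to returns
    sig = np.zeros(n_years)
    for t in range(1, n_years):
        sig[t] = 0.9 * sig[t - 1] + rng.normal()
    # 1-year horizon: signal at t vs return at t+1
    r2_1yr.append(r2(ret[1:], sig[:-1]))
    # 10-year horizon: signal at t vs overlapping cumulative return over t+1..t+10
    cum = np.array([ret[t + 1:t + 1 + horizon].sum()
                    for t in range(n_years - horizon - 1)])
    r2_10yr.append(r2(cum, sig[:n_years - horizon - 1]))

print("median 1-year  R**2:", round(float(np.median(r2_1yr)), 3))
print("median 10-year R**2:", round(float(np.median(r2_10yr)), 3))
```

Because adjacent 10-year windows share 9 of their 10 annual returns, and the signal itself is persistent, the overlapping regression fits the same few independent observations over and over, which is exactly the double counting described above.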

When the predicted variable has overlap it is standard to use the Hansen-Hodrick t-statistic, which attempts to compensate for the serial correlation introduced by the overlap. But because the study only gives an R**2, and not a Hansen-Hodrick t, we get no adjustment for overlap. A sketch of the adjustment follows.
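For concreteness, here is a sketch of the Hansen-Hodrick correction: an OLS sandwich covariance with equal weights on the residual autocovariances out to lag horizon-1, which is the usual treatment for overlapping h-period returns. The function and the synthetic data in the usage example are my own illustration, not anything taken from the study.

```python
import numpy as np

def hansen_hodrick(y, X, horizon):
    """
    OLS coefficients with Hansen-Hodrick standard errors: a sandwich
    estimator that puts equal weight on score autocovariances up to
    lag horizon-1 to account for overlapping h-period observations.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                      # regression residuals

    scores = X * u[:, None]               # per-observation score contributions
    S = scores.T @ scores                 # lag-0 term
    for lag in range(1, horizon):
        gamma = scores[lag:].T @ scores[:-lag]
        S += gamma + gamma.T              # equal weight on each lag below the horizon

    cov = XtX_inv @ S @ XtX_inv
    # In small samples this estimator can fail to be positive semidefinite;
    # clip the diagonal so the standard errors are not NaN.
    return beta, np.sqrt(np.clip(np.diag(cov), 0, None))

# Minimal usage on synthetic data (arbitrary numbers, purely illustrative)
rng = np.random.default_rng(1)
ret = rng.normal(0.07, 0.17, 80)          # unpredictable annual returns
sig = rng.normal(size=80)                  # an unrelated signal
h = 10
cum = np.array([ret[t + 1:t + 1 + h].sum() for t in range(80 - h - 1)])
X = np.column_stack([np.ones(len(cum)), sig[:len(cum)]])
b, se = hansen_hodrick(cum, X, h)
print("slope:", round(b[1], 3), " Hansen-Hodrick t-stat:", round(b[1] / se[1], 2))
```

The design choice worth noting is the uniform weighting on lags, which distinguishes Hansen-Hodrick from the Newey-West (Bartlett-weighted) correction; either way, the point is that an unadjusted R**2 or t-statistic on overlapping 10-year returns overstates the evidence.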

I am sure that the 10-year R**2 are not comparable to the 1-year R**2; they are apples and oranges. Someone suggested to me that it may still be valid to compare the 10-year R**2 to each other, as a relative measure of forecasting power. I don't know if that is true or not.

