May 1

I have reviewed the forthcoming HBR study predicting the next big thing. It is based on data from mid-2002 to mid-2005. In an era when markets can move 10% in a minute or two, with high-speed computers and printers available to even the most non-computer person, it is amazing to see results based on just three years of data, now a decade old. Certainly the question emerges as to why they didn't use more current data, or more than three years of it. The suspicion of a selectively chosen starting point, and of negative results thrown out, arises.

The measure used to test their thesis is an amazing one that I have never seen before, even though I worked with Zarnowitz on many of his forecasting studies and have kept up with subsequent work in the field: "(they) calculated for each forecaster, the proportion of forecasts that were more than 20% above or below the average prediction… When the average outcome was more than 20% above or below the average prediction". Huh? They find that the correlation between this measure and the average forecast error was 0.53. Naturally, by chance alone, when a forecaster's proportion of predictions far above or below the consensus is high, that forecaster's average outcome is going to be bad.
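The mechanical nature of that 0.53 correlation is easy to illustrate with a simulation. The sketch below is mine, not the study's: it invents forecasters whose only difference is the size of their idiosyncratic noise, with no skill involved. Forecasters who stray more than 20% from the consensus more often also rack up larger average errors, purely by construction, so the two quantities correlate strongly even though nothing but luck and noise is at work. All the numbers (50 forecasters, 200 events, noise scales) are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_events = 50, 200

# True values of the quantity being forecast (arbitrary range)
truth = rng.uniform(80, 120, n_events)

# Each forecast = truth + idiosyncratic noise; only the noise scale
# distinguishes forecasters -- there is no skill in this model
sigmas = rng.uniform(2, 30, n_forecasters)
forecasts = truth + rng.normal(0, 1, (n_forecasters, n_events)) * sigmas[:, None]

# Consensus ("average prediction") per event
consensus = forecasts.mean(axis=0)

# The study's measure: share of a forecaster's predictions that land
# more than 20% above or below the average prediction
deviation_share = (np.abs(forecasts - consensus) > 0.2 * consensus).mean(axis=1)

# Each forecaster's average absolute forecast error
avg_error = np.abs(forecasts - truth).mean(axis=1)

# Noisier forecasters deviate from consensus more AND err more,
# so the correlation is large by construction
r = np.corrcoef(deviation_share, avg_error)[0, 1]
print(f"correlation: {r:.2f}")
```

Both quantities are monotone in each forecaster's noise scale, so a strong positive correlation falls out of pure randomness, which is the point of the objection above.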

There are also several contrived experiments that the authors carry out to set a foundation for their empirical study, and a few appeals to Bayes rule for standard forecasting, contained in the opening pages. However, in view of the statistical biases in their results, the short and untimely period of their data collection, and the consonance of their results with the idea that has the world in its grip, that egalitarianism is the goal and that all variations in outcome are due to luck and unfair initial endowments, we can see why Harvard would find such a study propitious.

