Jan 24

Inspired by euphemism, misinformation, pseudospeak, and Delphic utterances in the market, I have decided to read through every scientific article on forecasting in the stock market available on the academic database JSTOR. I am reading them in a Galtonian fashion, after the manner of a non-parametric test from Siegel's volume: the oldest first, then the two newest, then the two next-oldest, and so on. The first was Money Supply and Stock Prices: A Probabilistic Approach by Manak C. Gupta, from the Journal of Financial and Quantitative Analysis, Jan. 1974. It's obviously out of date, and recent updates by the Fed and others have shown there is no relation in recent years. That's de rigueur for almost all the studies I have read. The data they use are out of date, and they usually stop right before the relation breaks down. For example, if they're bullish, they stop in 1999, and if they're bearish, they stop in 2002. There is never any attempt to update the results or to give timely results, and there is no consideration of the principle of ever-changing cycles.

What's worse about the Gupta paper is that there's no attempt even to consider the statistical significance of the results. In addition, his method of defining the leading indicator, the money supply, seems to use perfect knowledge of the two preceding months in both the dependent and independent variables. The turning points in money supply are somehow related to turning points in stock prices, without any consideration of the inertia of the money supply series or the statistical irregularities that a series with such macroscopic inertia would impart to the dependent variable. Multiple comparisons with every conceivable back and forward interval are considered. No attempt is made to compare with a random strategy or with buy-and-hold. This is the kind of paper that presumably would not appear in a modern journal; today it would be dressed up with many mathematical niceties that disguise its defects.
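
For what it's worth, here is a minimal sketch of the missing benchmark comparison, using invented monthly returns and an invented 0/1 money-supply signal rather than Gupta's data: the signal's cumulative return is set against buy-and-hold and against randomly shuffled versions of itself.

```python
# A minimal sketch (hypothetical data and signal) of the benchmark comparison
# the Gupta paper omits: test a timing signal against buy-and-hold and against
# randomly shuffled versions of the same signal.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly stock returns and a hypothetical 0/1 "in the market"
# signal standing in for a money-supply turning-point indicator.
returns = rng.normal(0.005, 0.04, size=240)          # 20 years of monthly returns
signal = (rng.random(240) < 0.6).astype(float)       # stand-in for the indicator

strategy = np.prod(1 + returns * signal) - 1          # follow the signal
buy_and_hold = np.prod(1 + returns) - 1               # never leave the market

# Permutation benchmark: how often does a random signal with the same number of
# "invested" months do at least as well as the actual signal?
n_perm = 2000
perm_results = np.array([
    np.prod(1 + returns * rng.permutation(signal)) - 1 for _ in range(n_perm)
])
p_value = (perm_results >= strategy).mean()

print(f"signal strategy: {strategy:.2%}, buy-and-hold: {buy_and_hold:.2%}, "
      f"permutation p-value: {p_value:.3f}")
```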

The second paper considered is On Style Momentum Strategies by Ferdi Aarts and Thorsten Lehnert in Applied Economics Letters, 2005. The idea here is that there is momentum in the stock market, and the question is whether the momentum resides in 'styles' of stock (value or growth) or just in particular stocks themselves. Their research is based on FTSE stocks. Like most authors, they take as given some work from the 70s and 80s on the U.S. markets, and make no attempt to verify whether the facts have changed or whether data errors or retrospection were involved.

These authors are well versed in statistical methods, but have no feel for what they are doing. They use the usual table with four back intervals and four forward intervals, classified by equal weighting or market-cap weighting, and they do this in a comparison of style portfolios classified by price-to-book value. They then give another 12×8 table in which they consider the results of momentum without regard to style. They conclude that book-value style doesn't matter, but that plain momentum strategies are better and less risky.
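
A rough sketch of the kind of formation/holding table involved, on simulated returns rather than the FTSE data the authors used: for each back interval J and forward interval K, buy last period's winners, short its losers, and record the average monthly spread.

```python
# A minimal sketch (invented data) of a formation/holding momentum table:
# four "back" intervals crossed with four "forward" intervals.
import numpy as np

rng = np.random.default_rng(3)

n_stocks, n_months = 100, 240
returns = rng.normal(0.01, 0.08, size=(n_months, n_stocks))   # invented monthly returns

formation = [3, 6, 9, 12]      # "back" intervals, in months
holding = [3, 6, 9, 12]        # "forward" intervals, in months

table = np.zeros((len(formation), len(holding)))
for i, J in enumerate(formation):
    for j, K in enumerate(holding):
        monthly = []
        for t in range(J, n_months - K):
            past = returns[t - J:t].sum(axis=0)               # formation-period return
            winners = past >= np.percentile(past, 90)
            losers = past <= np.percentile(past, 10)
            future = returns[t:t + K].mean(axis=0)            # holding-period avg monthly return
            monthly.append(future[winners].mean() - future[losers].mean())
        table[i, j] = np.mean(monthly)

print(np.round(table, 4))       # on random data these entries hover around zero
```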

The authors appear to have no concept of multiple comparisons, and it's amazing that a referee didn't make them account for it. Out of 192 comparisons of means, one would expect to find twenty or so significant results that would occur by chance anyway (a simulation below, after their quotation, illustrates the arithmetic). Indeed, none of the results within the years they study are independent, and they overstate the number of independent observations in each entry of the table by a tremendous amount. Furthermore, their results are meaningless and completely consistent with randomness. At least the authors seem to know this last point, as they conclude that:

"it is interesting the average month returns in the Chen and De Bondt studies (the ones that supposedly showed great momentum in the U.S.) are smaller than the few significant returns that we found."

It is a grave disappointment to come across a study like this after studying phenomena like these some 45 years ago myself.

The third study is A Neurofuzzy Model for Stock Market Trading by S.D. Bekiros in Applied Economics Letters, 2007. This is a typical study in which the author posits a method of prediction based on recent work in programming and artificial intelligence. In this case, the neurofuzzy model translates the numeric variables into fuzzy linguistic terms, e.g. low and high; each term is assigned a membership function, and the terms are combined through if-then rules.
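
For readers unfamiliar with the jargon, here is a toy sketch of the fuzzification step, with invented breakpoints and an invented rule rather than anything taken from Bekiros's model.

```python
# A minimal sketch of fuzzification: a numeric return is mapped onto linguistic
# terms ("low", "medium", "high") via triangular membership functions, and a toy
# if-then rule combines the memberships. All breakpoints are illustrative only.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify(value):
    # Membership of a daily change (in %) in each linguistic term.
    return {
        "low":    triangular(value, -3.0, -1.5, 0.0),
        "medium": triangular(value, -1.5,  0.0, 1.5),
        "high":   triangular(value,  0.0,  1.5, 3.0),
    }

# Toy rule: IF return is high AND volume change is high THEN signal "buy",
# with the rule's firing strength taken as the minimum of the two memberships.
ret_terms = fuzzify(0.8)            # today's return, in %
vol_terms = fuzzify(1.2)            # today's volume change, reusing the same scale
buy_strength = min(ret_terms["high"], vol_terms["high"])

print(ret_terms)
print(f"firing strength of the 'buy' rule: {buy_strength:.2f}")
```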

The next section of the paper describes the architecture in a way that no one but the author and the people who wrote the program could possibly understand. This is done in a description of five layers, with nodes, parameters, piecewise-linear interpolating functions, firing strengths, normalizations, weight vectors, singular value decompositions, and so on. It is wrong to scoff at technical language, but I find it hard to believe that more than a handful of readers could unravel the descriptive technique the author uses and differentiate it from the host of rival neural-network and fuzzification programs out there.

I have always found that similarity matrices give exactly the same results, with exactly the same lack of predictivity, as all the neural networks I have ever experimented with. Finally, empirical results forecasting the Nikkei from 1998 to 2002 are given, and it's hard not to raise an eyebrow when the author says that:

"additionally for the rrn. the best forecasting ability was derived empirically by a typology which incorporated ten neurons in the hidden layer , and the lags were based on Juun Box statistics, Schwarz information criterions, as well as empirically."

After all the training and fine-tuning of parameters, the author concludes that when the Nikkei went down, his model was better than buy-and-hold, and that when the Nikkei went up, his model gave results significantly worse than buy-and-hold. It came up with approximately 48% correct predictions of direction. I could have told him this would happen before he went through all that trouble. I also could have told him that it's completely improper to divide the sample up into bear and bull periods retrospectively, since it's impossible to tell whether it's a bull or a bear prospectively, and doing it retrospectively introduces a terrible bias that all the king's men and all the fuzzy neural techniques he used couldn't possibly circumvent. It is obvious in retrospect that you can find periods when your model beat buy-and-hold and periods when it didn't. The question is whether it will beat it prospectively, without retrospective subclassifications.
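
One elementary check the reported 48% figure invites is a comparison with a coin flip, sketched here with a hypothetical sample size since the paper's test-period length is not reproduced above.

```python
# Is a 48% directional hit rate distinguishable from pure chance? The number of
# out-of-sample predictions below is invented for illustration.
from scipy import stats

n_predictions = 250                    # hypothetical number of out-of-sample days
hits = round(0.48 * n_predictions)     # 48% correct direction calls

result = stats.binomtest(hits, n_predictions, p=0.5)
print(f"hits: {hits}/{n_predictions}, p-value vs. pure chance: {result.pvalue:.2f}")
```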

There is some ad hoc explanation that the author came up with to justify his random results, and to indicate that

"the profitability of trading models improves substantially in bear markets since they present higher volatility."

Excuse me, but that's guaranteed to happen if you retrospectively select the periods in which the market went down and call them bear markets.
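
A small simulation makes the guarantee plain: even a coin-flip timing model, with no forecasting ability whatever, "beats" buy-and-hold almost every time once the comparison is restricted to months the index happened to fall in. The data below are simulated, not the Nikkei sample from the paper.

```python
# Retrospective-selection bias: a pure coin-flip timing model looks good
# whenever performance is measured only over months chosen, after the fact,
# because the index fell in them.
import numpy as np

rng = np.random.default_rng(2)

n_trials, n_months = 2000, 60
wins_in_down_months = 0
for _ in range(n_trials):
    returns = rng.normal(0.0, 0.05, size=n_months)       # an index with no drift
    coin_flip = rng.integers(0, 2, size=n_months)        # in or out at random
    down = returns < 0                                    # "bear" months, chosen after the fact
    strategy = (coin_flip * returns)[down].sum()
    buy_and_hold = returns[down].sum()
    wins_in_down_months += strategy > buy_and_hold

print(f"coin flip beats buy-and-hold in retrospective bear months: "
      f"{wins_in_down_months / n_trials:.0%} of trials")   # close to 100%
```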

Regrettably, my first three papers reviewed do not give me much encouragement that the field has improved a great deal since the time I plied these seas. My long snooze has so far been awakened only by the pseudotalk and euphemisms in prediction and forecasting that are so prevalent today.

Alston Mabry comments: 

The Chair wrote:

… pseudotalk and euphemisms in prediction and forecasting that are so prevalent today.

One technique that jumps out is the use of wishy-washy language to hedge every statement:

If earnings waver, or if long-term rates keep rising, the Dow's long run could end.

And then again, maybe not.

