May 30

 After 5 years or so, I have finally reached a point of confidence in conducting basic quantitative studies. (Very basic…)

While re-reading Philip's book "Optimal Portfolio Modeling", I got stuck on the following sentences:

"Professor Niederhoffer was just such a divergent thinker.

His help and guidance taught me to see things at their simplest. That is the essence of his approach. His enlightenment also helped me to learn how to avoid the numerous pitfalls that can arise in quantitative studies. *In fact, one of the things he taught me was what not to do on a quantitative study*."

I couldn't help but wonder what such advice would be…

And what the Specs think one should avoid while performing any counting studies.

Steve Ellison writes: 

Be very careful to consider only information that was known at the time. For example, when doing a study that uses the high price of the day, you cannot know that any price will be the high of the day until after the close. Similarly, you cannot act on the closing price or anything based on the closing price, such as a moving average, until the next day.
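A minimal sketch of that point, assuming a hypothetical pandas DataFrame of daily prices: any quantity computed from today's close (such as a moving average) is only known after the close, so it must be lagged by one bar before it can drive a trade.

```python
import pandas as pd

# Hypothetical daily price data; a real study would use the actual instrument.
px = pd.DataFrame(
    {"high": [101.0, 103.0, 102.5, 104.0],
     "close": [100.0, 102.0, 101.0, 103.5]},
    index=pd.date_range("2011-05-02", periods=4, freq="B"),
)

# A 2-day moving average of the close is only known after today's close,
# so it cannot trigger a trade until the next session: lag it by one bar.
signal_known_at_close = px["close"].rolling(2).mean()
tradeable_signal = signal_known_at_close.shift(1)

# Next-day return, aligned with the lagged (actually tradeable) signal.
next_day_return = px["close"].pct_change()
print(pd.DataFrame({"signal": tradeable_signal, "return": next_day_return}))
```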

Beware of data mining bias. If you test the same set of data enough times, you will find some results that appear to have statistical significance, but occurred just by chance. For example, I analyzed the most favorable trading days of the year. There are an average of 252 trading days per year, so one would expect 12 days to have results with p<0.05 just by chance. You need to control for data mining bias either by setting a more stringent p threshold or testing out of sample. Any time you have considered multiple strategies and selected the one with the best results, you should assume that part of the good result was by luck and expect worse results going forward.
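The arithmetic (252 tests at p<0.05 gives roughly 12-13 false positives) can be checked with a quick simulation. A hedged sketch using simulated noise rather than real returns, with a stricter Bonferroni-style threshold as one of the controls mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_days, n_years = 252, 20

# Simulated daily returns with NO real calendar effect:
# 20 years of pure noise for each of the 252 trading days.
returns = rng.normal(0.0, 0.01, size=(n_years, n_days))

# Test each trading day of the year for a mean return different from zero.
p_values = np.array(
    [stats.ttest_1samp(returns[:, d], 0.0).pvalue for d in range(n_days)]
)

# Roughly 252 * 0.05 ~ 12-13 "significant" days appear by chance alone;
# a Bonferroni-style threshold removes nearly all of them.
print("days with p < 0.05:      ", int((p_values < 0.05).sum()))
print("days with p < 0.05/252:  ", int((p_values < 0.05 / n_days).sum()))
```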

Statistical significance is not necessarily predictive. In an era of much quantitative analysis, a regularity may not last long. It has happened more often than I would expect by chance that I found a pattern that was bullish or bearish with statistical significance, and the out of sample results were statistically significant in the opposite direction.
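One way to check for this, sketched here under the assumption of a hypothetical array of returns on the days a pattern fired (placeholder noise below, not real results): split the sample and compare the sign and significance of the effect in the two halves.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder returns on days when some pattern fired; a real study
# would use the actual pattern's returns here.
pattern_returns = rng.normal(0.0, 0.01, size=400)

half = len(pattern_returns) // 2
for label, sample in (("in-sample", pattern_returns[:half]),
                      ("out-of-sample", pattern_returns[half:])):
    res = stats.ttest_1samp(sample, 0.0)
    print(f"{label:>13}: mean={sample.mean():+.5f}  "
          f"t={res.statistic:+.2f}  p={res.pvalue:.3f}")
# A sign flip between the two halves, significant in both, is the
# reversal described above.
```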

Bruno Ombreux writes:

Data mining bias can be experienced in the most vivid manner with the new Google correlation engine. It can come up with some of the weirdest, actually impossible, correlations. Google correlation results are more illustrative and striking than any theoretical academic stuff about multiple comparisons.
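The same multiple-comparisons effect can be reproduced without Google: search enough unrelated random series and one of them will correlate impressively with whatever target you chose. A hedged sketch with purely simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs = 100
target = rng.normal(size=n_obs)                  # the series to be "explained"
candidates = rng.normal(size=(100_000, n_obs))   # 100,000 unrelated random series

# Vectorized Pearson correlation of each candidate row with the target.
t_z = (target - target.mean()) / target.std()
c_z = (candidates - candidates.mean(axis=1, keepdims=True)) \
      / candidates.std(axis=1, keepdims=True)
corrs = c_z @ t_z / n_obs

best = np.abs(corrs).argmax()
print(f"best |correlation| found: {abs(corrs[best]):.2f}")
# Typically around 0.45: striking to the eye, yet entirely meaningless.
```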

Phil McDonnell writes:

An incomplete list of things NOT to do on a quantitative study:

1. Avoid retrospective data. Many fundamental databases have retrospectively adjusted data. Sometimes the data is adjusted years after the fact and could not possibly have been known at the time.

2. Avoid retrospective price data. Many so-called quants pat themselves on the back for 'correcting' their data after the fact. Any valid study must use the data as it was known at the time.

3. Avoid the part-whole fallacy. There is more on this in the Chair and collab's book Practical Speculation.

4. Use non-parametric/robust statistics to avoid fat-tail issues (see the sketch after this list).

5. Simplify your studies to a very small number of variables.

6. Avoid looking at simultaneous relationships. They are descriptive and not tradeable. Instead concentrate on predictive relationships.

7. Avoid indexes; instead, use prices that actually trade.

This list covers only some of the pitfalls and traps to avoid in doing a proper quantitative study.
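On point 4, a hedged sketch of what a non-parametric substitute looks like in practice: a Wilcoxon signed-rank test run alongside the usual t-test on simulated fat-tailed returns (Student-t with 3 degrees of freedom). Neither the data nor the specific test choice comes from the post itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated fat-tailed daily returns (Student-t, 3 degrees of freedom).
returns = stats.t.rvs(df=3, scale=0.01, size=250, random_state=rng)

# Parametric test of zero mean: sensitive to the fat tails.
t_res = stats.ttest_1samp(returns, 0.0)

# Rank-based alternative testing a zero median: robust to outliers.
w_res = stats.wilcoxon(returns)

print(f"t-test    p = {t_res.pvalue:.3f}")
print(f"Wilcoxon  p = {w_res.pvalue:.3f}")
```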

Newton Linchen writes:

"It has happened more often than I would expect by chance that I found a pattern that was bullish or bearish with statistical significance, and the out of sample results were statistically significant in the opposite direction."

Isn't that annoying?

Doesn't it push us to the other side of the coin, toward pure "tape reading", etc.?

