Counting, from Adi Schnytzer

September 14, 2007

What theory exactly do I test, and how is it put together on the basis of history? In the physical sciences, the answer is, ironically enough, often gut feeling and intuition. Bohr's "crazy" ideas about atoms are an example. That is what makes counting so difficult: what the heck do I count? Statistics and econometrics are fabulous tools, but applying them to forecasting is tough!

Rod Fitzsimmons Frey adds:

I agree that is the crux of the issue. The inductive leap that all scientists must make is a mystery that is not itself explained by science. Francis Bacon, who convinced me to ditch philosophy and take up engineering, hand-waved it away by putting a philosopher-king at the head of the rational, scientific state, with all the other citizens scurrying about gathering data to test the hypotheses that he came up with.

Nigel Davies remarks:

The reason a chess player should practice analysing positions is in order to cultivate intuition. Many players wrongly believe that the idea is to find specific improvements from specific positions, but they rarely get the opportunity to spring their cooks.

I have come to believe that the same role is played by counting for traders, that the main goal should be to cultivate understanding and awareness rather than devise specific trades. And one can find many other examples in difficult human endeavours, such as the importance of kata to the martial artist.

Bill Rafter explains:

The answer to "what to test?" is "everything." You try to break everything down to its smallest components and test each. You keep records and their summaries on everything. If you learn of something new, you have to go back and test everything again using that new method.

Suppose you know with certainty that the market is headed up in the near future.  A simple and intuitive strategy would be to buy the high beta stocks.  But testing that strategy would prove you wrong.  You cannot know that unless you test.  Okay, now let's consider the reverse:  you know with certainty that the market is headed down.  What about selling the highest beta stocks?  Test and you will find out.

One of the big topics now is volatility.  How do most people define volatility?  Are there any other ways to define volatility?  Is there any symmetry to the various definitions of volatility?  That is, does it work the same way in up days/weeks as it does in down days/weeks?  If you define volatility as one-day rates of change, the answer is affirmative.  But not so with other definitions.
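The symmetry check described above can be sketched in a few lines. This is a minimal illustration, not the author's actual method: volatility is taken to be the absolute one-day rate of change, and the sample closing prices are invented for the example.

```python
# Sketch: define volatility as the absolute one-day rate of change,
# then compare its average size on up days versus down days.
# The price series and all names here are illustrative only.

def one_day_changes(closes):
    """Percent change from each close to the next."""
    return [(b - a) / a for a, b in zip(closes, closes[1:])]

def split_by_sign(changes):
    """Separate up-day moves from (absolute) down-day moves."""
    ups = [c for c in changes if c > 0]
    downs = [abs(c) for c in changes if c < 0]
    return ups, downs

closes = [100.0, 101.5, 100.2, 102.0, 101.1, 103.3, 102.0]
ups, downs = split_by_sign(one_day_changes(closes))

avg_up = sum(ups) / len(ups)
avg_down = sum(downs) / len(downs)
print(f"mean up-day move:   {avg_up:.4f}")
print(f"mean down-day move: {avg_down:.4f}")
```

Running the same comparison with a different definition of volatility (weekly ranges, say) is how one would test whether the symmetry the author mentions holds or breaks down.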

Most researchers make the mistake of testing their ideas against "the market".  Well, the market is just the average.  You are not going to find any leading indicators by looking at the average.  So let's say that instead of looking at the S&P 500, you do your research on the 10 Sectors.  The results are different.  So then you drill down a little more to the 24 Industry Groups, and then to the 60+ Industries. If you are "on to something" you will find that the results get better with additional focus.  Your universe is the same 500 stocks, but you are no longer averaging to mediocrity.  Note that I didn't say this was technical or fundamental research; it's just research rather than intuition.
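The drill-down idea can be made concrete with a toy example. The tickers, sector labels, and returns below are made up purely to show the mechanics: the same universe measured once as a single average and once per group, where grouping exposes dispersion the overall average hides.

```python
# Sketch: the same universe of stocks summarized two ways.
# All data and sector assignments are invented for illustration.

returns = {
    "XOM": 0.04, "CVX": 0.05,      # energy
    "AAPL": -0.02, "MSFT": -0.01,  # technology
    "JPM": 0.00, "GS": 0.01,       # financials
}
sectors = {
    "XOM": "energy", "CVX": "energy",
    "AAPL": "technology", "MSFT": "technology",
    "JPM": "financials", "GS": "financials",
}

# The "market" view: one number, everything averaged together.
overall = sum(returns.values()) / len(returns)

# The drill-down view: the same stocks grouped by sector.
by_sector = {}
for ticker, r in returns.items():
    by_sector.setdefault(sectors[ticker], []).append(r)
sector_means = {s: sum(rs) / len(rs) for s, rs in by_sector.items()}

print(f"overall mean: {overall:+.3f}")
for s, m in sorted(sector_means.items()):
    print(f"{s:>10}: {m:+.3f}")
```

The overall mean is mildly positive, yet one group is clearly rising and another clearly falling; drilling from index to sectors to industry groups repeats this step at finer granularity.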

Most people do research badly. Let me give some examples. (1) One of the major data suppliers (50,000 subscribers) gives its users the ability to construct their own portfolios. That's important, as you may just want to work with stocks of companies with positive cash flow. However, a call to the support department of that data supplier will inform you that virtually none of their subscribers make their own portfolios. (2) The research software platform with the highest number of users does not even allow users to construct their own portfolios. They give them pre-constructed portfolios of the S&P, Russell, Dow, etc. Take it or leave it. (3) One of the leading (at least by reputation) institutional and retail providers of fundamental research allows its users to screen stocks on the basis of certain factors. Their screening tool does not work correctly, giving the wrong results. It's been that way for the two years that we have had a comp account. No one has fixed it, most likely because no one has noticed. We noticed, but of course we're not going to tell them.

So if flocks of "counters" or "quants" did poorly in the recent selloff, it may not be because counting or quant research is a flawed concept. It may be because the researchers are not giving an honest day's work for their pay. They are pretending to do research. Their version of the scientific method is shoddy at best. But that's okay. To be a consistent winner, you need a supply of losers.

David Lamb writes:

"What to test" brings to mind the passages on counting in Vic and Laurel's books. In one, Artie, Vic's father, was writing on a yellow pad of paper while he was watching handball players. Upon the completion of a point, Artie would notate: OTWK (off the wall killer); KW (killer, winner); DW (drive killer); A (ace); AW (angle winner). He was trying to calculate "the chances of winning the next point after runs of winning and losing points of different magnitudes."
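The calculation Artie was after can be sketched as a tally: for each observed run of wins or losses, record whether the next point was won. The outcome sequence below is invented for illustration (W = point won, L = point lost); the function name is mine, not from the books.

```python
# Sketch: estimate the chance of winning the next point given the
# length of the current winning or losing run. Illustrative only.

from collections import defaultdict

def next_point_after_runs(points):
    """Map (outcome, run_length) -> P(next point is a win)."""
    tallies = defaultdict(lambda: [0, 0])  # key -> [wins_after, total_after]
    run_len = 1
    for i in range(1, len(points)):
        key = (points[i - 1], run_len)      # the run just completed
        tallies[key][0] += points[i] == "W"
        tallies[key][1] += 1
        # extend the run or start a new one
        run_len = run_len + 1 if points[i] == points[i - 1] else 1
    return {k: wins / total for k, (wins, total) in tallies.items()}

points = "WWLWWWLLW"
probs = next_point_after_runs(points)
for (outcome, run), p in sorted(probs.items()):
    print(f"after a run of {run} {outcome}: P(win next) = {p:.2f}")
```

With a real scorecard in place of the toy string, the same tally answers exactly the question quoted above.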

And Dr. Rafter's comments on not testing ideas against the market, the market being merely the average, are further demonstrated by Artie's note-taking during handball matches. He wasn't watching average players; he was watching a particular "sector" of players, in this case the best ones.

