A trader I speak with made a good observation/hypothesis about a year ago on the inverse $/S&P relationship — that the S&P would tend to stay constant in "global currency" terms.  Seems that is exactly the way it has been, and is, playing out.



On Sunday, July 29th, I revisited Saratoga Racetrack, roughly twelve years after the scene recounted in Vic's Education of a Speculator. Since I was meeting my old friend and former officemate, hedge fund manager Andy Goodwin, the visit elicited memories of days gone by.

The afternoon was clear and the track was fast. Based on my analysis of the Saratoga meet to date (then four days old), my initial hypothesis (which would stand unless I encountered clear data to the contrary) was that "early speed" horses running on or near the lead were at a clear advantage. Since my handicapping style is best suited to those conditions, I felt conservatively confident in our prospects. Andy, Seth Faler (insurance analyst and college roommate), and Nicole Carey (friend and drama teacher) were also feeling it, and all invested in a share of my wagers for the day.

The early results were not great. In the 1st, 2nd, and 4th we cashed no tickets. In the 3rd race we squandered a golden opportunity by swinging for the fences and investing 80% of our race allocation on two horses to finish exactly first and second. While our top choice won, our second choice was caught late from behind and finished third. Although we did have a "saving" trifecta ticket (1-2-3 in exact order), the second place finisher was well bet and therefore we made only a modest profit.

After the first four races were finished, I felt that my hypothesis had been confirmed: betting on horses with "early speed" on the dirt course was highly preferable to betting on horses that needed to come from "off the pace." The only problem was that my three losers (I had the winner in the 3rd) were further back in the early going than I had projected.

This thought was foremost in my mind while analyzing the 5th race, for Maiden (never having won a race) 2-year-old (the youngest racing age, roughly equivalent to the early teens in human physical development) fillies (females) bred in New York. Historically, these races are where I have my worst ROI (return on investment), because of the limited printed information available for such young horses, especially those running in their first career race.

In the case of a "first time starter," the primary information available to the racing public is a) characteristics of breeding, b) trainer profile, c) jockey profile, and d) the times of all recent morning workouts (which can be very deceiving, because it is hard to gauge the effort exerted in this type of "practice" run). Clearly, for a first timer, the main piece of relevant information is missing: how fast will the horse run in an actual race? Many horses look like champions in the morning and are colossal underperformers at the races.

Due to the huge unknown component in this category, and the fact that one often has very little racing history on which to base a decision, the betting public sometimes appears to engage in highly irrational speculative behavior. For example, some horseplayers actually employ a strategy that selects horses which seem over-bet (offering a much lower return than the printed information would suggest is fair), on the assumption that such overzealous betting activity must mean "somebody knows something" (information not available to the general public). The skeptics amongst us might suggest that this is like shopping the used-car classifieds for the cars that are grossly overpriced, and assuming that the information that is difficult to discern (whether the car is actually reliable or potentially a lemon) must be positive because the owners value their cars so highly.

My approach in these races has always been to stick to the facts, and one such fact caused me to eliminate the #3 in the 5th race as a potential contender immediately: her trainer was 1 for 44 in sending starters to the track for the first time, and betting those starters would have lost roughly 75 cents for each $1 wagered. When the filly in question has no previous racing history to cast doubt on that statistic, how can you bet on her? Prior to this race, I would guess that I had never bet on a trainer with a first-time winning percentage below 3%.

However, I pored over the race four or five separate times, asking myself the same question: which of the contenders I was considering had a reasonable chance, given the limited information available, to lead the race early (in line with the aforementioned hypothesis)? If I assumed the early fractions would be somewhere in the vicinity of the "par" times for similar races, then the answer was "none of them."

 My attention kept reverting to the #3. Amongst the other facts were:

1)    The father of the filly (Hook and Ladder) showed stakes caliber early speed as a racehorse AND was one of the most successful New York bred "first crop sires" in recent memory in terms of winning % for first time starters. In other words…ideal breeding for early speed in the first career start.

2)    The filly showed the two fastest morning "workouts" of the field (both were fast relative to "par" best times for other 2-year-old maiden fillies).

3)    The jockey named to ride the #3 (Ramon Dominguez) was unquestionably the best jockey in the country YTD for shorter "sprint" distances, winning on an incredible 30% of such mounts in 2007.

Yet, despite the overwhelming evidence that this filly could come out running, I couldn't overlook the 1 for 44 trainer stat (and neither could the betting public … she was 14-1 on the tote board) until I was fortunate enough to ask myself the following two questions:

"How much weight was I giving to the trainer variable if I was willing to eliminate a filly @ 14-1 who would have been either the favorite, or a close second choice (certainly 3-1 or less) based on the other three variables and adding a better performing trainer?" Well, the answer to this question is obvious considering that I wouldn't even consider a potential 4x return versus what I would have gotten in the case of a trainer who was more successful with first time starters. Thinking about it this way, my approach didn't make much sense. Therefore, I had to ask myself a second question:

"Why had I made it a fundamental, unbreakable rule that one should never bet a first time starter from a trainer that had such a poor winning % and ROI?" After some deliberation, I decided that the main reason was "intent." Simply put, some trainers consider the first career race to be a warm up of sorts. These trainers win at a low first time % because they are not as concerned with winning the race at hand as they are setting the horse up for a successful career. Obviously, it would be foolish to bet on a horse (filly) whose trainer viewed winning as a secondary goal, unless this indifference towards winning was more than reflected in the odds.

Suddenly, my opinion of the race changed dramatically. Outside of the 1 for 44, there was strong evidence that the trainer was interested in winning that day. Consider these points:

-    Would a trainer who didn't care about winning a race work his filly briskly (from the starting gate, nonetheless) on two separate occasions in preparation?

-    Would a trainer who didn't care about winning a race solicit the top sprint jockey in the country (one who intuitively cares about winning EVERY race given his 30% success rate)? I would expect that a trainer could only do that so many times before this jockey (a valuable resource) would avoid him altogether.

-    Finally, if you had a filly from a Sire whose progeny were winning at an amazing rate (57% according to my forms), would you throw away such a good opportunity and send the filly out for a jog without the intent of winning?

My answer to all three questions was …..of course not. As a result, my reasoning was that if the filly with the best breeding (to win that specific day), the best workouts, and the best jockey was saddled by a trainer who had come to the track that day intending to win………well then why couldn't she? In my eyes, she had a very realistic chance.

After returning from the betting window, and immediately prior to the race, I started to get excited. 14-1? This was madness. I grabbed Andy by both shoulders and started shaking him while laughing, "Ever-changing cycles, ever-changing cycles … hah, hah."

Sure enough ….ever-changing cycles, indeed. The filly came out running, went straight to the early lead, and held on to win. She paid $30 and was the key to what became a very profitable afternoon for all of us.

What is to be learned from this story? Does it have any relevance to us as financial markets practitioners? I think it does.

First of all, it reinforces one of my primary opinions about the markets: most of the best proprietary trades are the ones that are the most difficult to do. The ones you really need to dig for, and that carry an element of uncertainty that makes them emotionally uncomfortable upon first analysis.

My math super-genius colleague, who aspires to build successful algorithmic trading models (which we will call ALGOs for short), often bounces hypotheses off me for systematic trades. My reaction to his ideas is almost always the same: "This is too easy to do, and therefore I don't think you can make money at it." What can be visually observed by a programmer trading his personal account part-time is probably not representative of a systematic market inefficiency that can be modeled and exploited for profit … or that's how I see it, at least. My best trade ideas are usually a combination of a lot of "tinkering" quantitative analysis and observing the effectiveness of the best subset of those ideas (which become hypothetical models) through trial and error. I need to watch a lot of prices and make a lot of trades to stay on top of what is and isn't working. Any attempt to shortcut this process inevitably costs me money. Maybe I need to watch hundreds of horse races before encountering an opportunity like the one described above.

I am (and was) absolutely convinced that the filly in question was an excellent bet at 14-1, in that her pre-race chances were much better than 1 in 15. Interestingly, of all the serious horse players I showed the sheets to retrospectively, not a single one guessed that this filly could possibly have gone off at odds higher than her 6-1 morning line price. That is amazing given that all of them could probably have predicted the final odds of the winner in each of the other nine races that day within an error allowance of +/- 30%. Yet with this filly they were all more than 100% off.
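The 14-1-versus-1-in-15 arithmetic, and the expected value it implies, can be sketched in a few lines. This is a minimal illustration ignoring track takeout; the 25% win probability below is a hypothetical stand-in for a 3-1 "fair" price, not an estimate from the actual race:

```python
def implied_prob(odds_to_one):
    """Probability implied by tote odds of N-1 (takeout ignored)."""
    return 1.0 / (odds_to_one + 1.0)

def expected_value(stake, odds_to_one, win_prob):
    """Expected profit on a win bet: winnings when she wins, minus the stake lost otherwise."""
    return win_prob * stake * odds_to_one - (1.0 - win_prob) * stake

# A filly at 14-1 need only win about 6.7% of the time to break even:
print(round(implied_prob(14), 3))            # 0.067
# If the other variables make her a hypothetical ~3-1 (25%) chance, 14-1 is a huge overlay:
print(expected_value(1.0, 14, 0.25))         # 2.75 expected profit per $1 staked
print(expected_value(1.0, 3, 0.25))          # 0.0: break-even at the "fair" 3-1 price
```

The gap between the 6.7% implied by the tote board and any reasonable estimate of her true chances is exactly the "variable weighting" being discussed.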

Now, would these same handicappers have been able to overlook the 1 for 44 trainer stat in real time and bet on what seemed to be extraordinary value? In the majority of cases, I would say no. My guess is that they had similar concrete rules about NEVER betting on a trainer with such a low first-time success rate and ROI, and while they would have been disappointed after the fact for not having bet the filly at such a big price, I assume they would have eliminated her the same way I did in my first four passes.

On the car ride home from Saratoga to Vermont, I pondered at length the afternoon's experiences in relation to another theory of mine (one I often debate with my quant and ALGO trading associates): namely, that it will be a long time before computers can fully replace humans as decision makers in parimutuel gambling environments like horse racing, or even in financial markets. For now, and into the foreseeable future, I think there will continue to be opportunities for human "traders" to prosper in the markets.

A frequent argument by supporters of fully systematic trading (and, by inference, skeptics of the usefulness of humans in any trading role) is: "Provide me one concrete example where a reasonably well informed and highly competent human would surely make a better decision than the best ALGO model." My answer: "variable weighting" in cases of uncertainty or disequilibrium (as in our horse racing example). Coincidentally, a perfect example was provided for me in the same calendar week.

In support of the aforementioned, consider the events of August 3rd. Largely influenced by negative developments related to the housing market and sub-prime credit, the S&P 500 had declined roughly 5% over the prior two week period. Before the open, Bear Stearns had issued further negative guidance in a statement which was to be followed up with a conference call @ 2:00 PM.

Since sub-prime debt was experiencing a period of huge uncertainty (portfolios were losing huge chunks of their value), and the quality of this debt had such strong implications for many other parts of the economy (housing, banking, other financials, and all sectors sensitive to the purchasing power of the lower middle class), the market was completely focused on Bear Stearns, widely considered the biggest player on Wall Street in the credit arena. Few would have debated that Bear Stearns was by far the most influential stock in the market that morning.

Was it possible for an ALGO (unassisted by a human trader or analyst) to know that Bear Stearns was so important to the health of the market? Not in any way that I can imagine. At best, the ALGO could have had a news-reading component that noticed Bear was the most prominently mentioned company in the news that day (and on several other days over the preceding two weeks). However, this same ALGO would have had absolutely no way of understanding that this day's headline stock (as opposed to a "normal" day's) was so important and its influence so wide-reaching. Even a database of business-segment inter-relations would have been of limited value. While such a tool might have helped identify which stocks might move (due to good or bad performance in certain areas), the ALGO would still be missing the crucial "weighting" factor: how sensitive the market had become to the sub-prime issue in general, and to Bear Stearns in specific.

Although both the S&P 500 Index and Bear Stearns stock (down 6%) sold off sharply in the early stages of trading that day, both rallied to roughly unchanged by the time of the conference call based on a street-wide sentiment that "the cat was out of the bag" so to speak and the worst news was already priced into the market.

However, on the 2:00 PM conference call, instead of reassuring investors as anticipated, Bear Stearns representatives unexpectedly painted a picture of doom and gloom, describing the credit picture as "the worst in decades." Upon hearing this, most discerning humans who were immersed in the US equity market immediately recognized that the catalyst for the big rally had disappeared instantly and left a great deal of negative sentiment, and huge uncertainty, in its wake.

Immediately, Bear Stearns stock fell from the sky, erasing a good chunk of its retracement gains in a few minutes. Logically, the rest of the market, focused that day on sub-prime and Bear Stearns, followed suit and slowly but surely ground down 40 points (almost 3%) to the close.

While it could be argued that the market direction after the Bear Stearns call was not clearly predictable (although it was my strong impression that the market was heading down and I was short to the close), all of the following seemed relatively certain:

The huge change in sentiment regarding the credit market (immediately reflected by the sharp decline of Bear Stearns stock) signaled a new disequilibrium for the price of hundreds of stocks that were strongly influenced by the health of the sub-prime market ….or even credit market in general. Clearly, selling volatility in such an environment or applying mean reverting strategies (that might have been successful under normal conditions) could potentially be a dangerous activity for an ALGO that had no understanding of what had occurred or the implications thereof. In addition, any correlation relationships (the basis for many "pairs trading" or "relative value" ALGOs) for companies having reasonable sensitivity to sub-prime were also in danger of busting, depending on the relative exposure to sub-prime. Consider a homebuilder that sold to rich people versus one that sold to poor, sub-prime candidates. Although these companies might have traded at a very high correlation historically, that kind of relationship was in jeopardy of breaking down going forward.
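The correlation-breakdown risk for pairs ALGOs, as in the luxury versus sub-prime homebuilder example, can be monitored with a rolling correlation over the two names' returns. The sketch below uses invented series and a hypothetical window; it shows the mechanics only:

```python
def rolling_corr(xs, ys, window):
    """Pearson correlation over each trailing `window` of paired observations."""
    out = []
    for i in range(window, len(xs) + 1):
        a, b = xs[i - window:i], ys[i - window:i]
        ma, mb = sum(a) / window, sum(b) / window
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        var_a = sum((u - ma) ** 2 for u in a)
        var_b = sum((v - mb) ** 2 for v in b)
        out.append(cov / (var_a * var_b) ** 0.5)
    return out

# Two hypothetical "homebuilders" moving in lockstep, until one decouples on a shock:
quiet = [0.01, -0.01] * 10                                  # 20 calm days, moving together
luxury = quiet + [0.01, -0.01] * 5                          # stays in its normal pattern
subprime = quiet + [-0.01 - 0.005 * k for k in range(10)]   # grinds steadily lower
corr = rolling_corr(luxury, subprime, 10)
print(round(corr[0], 2))     # near 1.0 while the pair moves together
print(round(corr[-1], 2))    # far lower once the relationship breaks down
```

A pairs ALGO that keeps trading the historical relationship through the final windows is exactly the danger described above.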

Clearly, in such a disequilibrium or uncertainty period, many ALGOs are in significant danger of putting on a lot of really bad trades before figuring out the peril of the situation and modifying their behavior and either scaling back, shutting down, or changing their strategy altogether. This is reflected by the unusually high number of quant or ALGO trading hedge funds that blew up as a result of these events.

What's worse, this big disequilibrium move occurred in the wake of a year of extremely low volatility (relative to the mean for the prior 10 years). Since short-term ALGOs are often trained on limited data sets going back only three to six months, some of them knew little of high volatility environments (outside of a couple of big down days from the previous two weeks), and were surely caught somewhat off guard.
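The short-lookback problem is easy to make concrete: measure realized volatility on the calm stretch alone versus the full history. The return path below is invented purely for illustration, not data from any actual fund or model:

```python
def realized_vol(returns, window):
    """Annualized standard deviation of the trailing `window` daily returns
    (assumes roughly 252 trading days per year)."""
    tail = returns[-window:]
    mean = sum(tail) / len(tail)
    var = sum((r - mean) ** 2 for r in tail) / (len(tail) - 1)
    return (var ** 0.5) * 252 ** 0.5

# Hypothetical path: about five quiet months, then a turbulent two weeks.
quiet = [0.001, -0.001] * 50                            # ~100 calm days
turbulent = [0.02, -0.025, 0.015, -0.03, 0.01] * 2      # 10 wild days
path = quiet + turbulent

# An ALGO trained only on the quiet stretch sees a fraction of the true risk:
print(realized_vol(quiet, 60))    # low
print(realized_vol(path, 110))    # several times higher once turbulence is included
```

An ALGO whose parameters were fit only to the quiet window would size positions as if the low number were the truth.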

In summary, computers are extremely valuable tools both for analysis and execution. In many cases, computer programmers, despite little interaction with the markets, can build ALGO trading models that can outperform the great majority of human traders. However, it is my contention that 1) In some cases, many ALGOs could benefit from more (even if extremely limited) human involvement that could assist them, or at least keep them out of trouble, in times of disequilibrium or spontaneous uncertainty 2) There is still a place for human proprietary "traders" in the markets because they are able to identify and exploit occasional inefficiencies that simply can't be recognized by machines. Given the open ended nature of the problem, and the myriad of variables to be considered, I don't think that this is going to change for the foreseeable future.

Dan Murphy is the owner/CEO of Green Mountain Analytics

Jim Sogi comments:

Systems traders don't like to make the real time decisions, but the decisions are merely moved up a level. Is it time to change the system, change the parameter, adjust here, adjust there, keep it the same? Always decisions. The other issue, in addition to the human variable weighting, is: can an astute trader add value through execution to a system? This relates to trader performance and human foibles. We make stupid mistakes sometimes.

Clock's comment about the short training period for the algo systems is critical. The lookback needs to consider historical max drawdown for money management purposes, even if the algo system parameters are set for a shorter time. This brings up another issue: are algo systems and money management different? Can they be combined? I think money management is better accomplished by leverage than by stops.
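The max-drawdown statistic referred to above is a single pass over an equity curve. This is a generic sketch with an invented curve, not code from any of the systems discussed:

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline in an equity curve, as a fraction of the peak."""
    peak = equity[0]
    worst = 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, (peak - x) / peak)
    return worst

# Hypothetical equity curve: the worst decline is the 130 -> 80 leg, about 38.5%.
print(max_drawdown([100, 120, 90, 130, 80]))
```

Running this over the full history, rather than the short window the system was trained on, is the point: a six-month lookback simply never contains the few-years-apart worst case.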

James Tar remarks:

In my experience as a horse owner and racing speculator, the fall meet at Belmont offers the public a considerable amount of mean reversion regarding payouts. This follows the NYRA's blatant stealing from the public during the Saratoga meet.

Today in Race 4 there are some interesting horses entered into the race that are right up the speculator's alley:

#3 - Bearish
#8 - Moral Compass

You can bet my money is on the #8 horse. 

Vincent Andres mentions: 

In reference to Mr. Sogi's comments: from the few systems I tested, it seems obvious to me that money management (MM) cannot be considered external to the system being used. There is MM1 = MM/system1, MM2 = MM/system2, etc. MM has to be combined with, and adapted to, the system.

Of course, this does not prevent most MM schemes from sharing some common-sense points.

(And also if our edge appears to be really only 50/50 … then there's no need to worry about MM.)

Jim Sogi comments:

The problem with building risk into the trade system is that any system carries a bias from the time frame in which it is framed. That bias may not protect against long-term risk parameters, and a built-in risk system will not "see" the 8-10 sigma events that occur every few years. Bringing in an outside crash-protection risk system as an add-on module to a trade system is possible, and may require separate calculations. A robust example of this idea comes from Triumph of the Optimists: 1.9x leverage would have multiplied the 1.5 million percent return many times over through compounding, while 2x would have gone bust. A risk add-on might limit average exposure to less than 2x. An individual system may say "heck, go 20x, this arbitrage is foolproof" (see LTCM); the long-term plug-in says cut it at 2x.
