I often ask about ways to test a system to know when to stick with it or when to abandon it. Quality control/TCM is one way people mention of knowing how far removed your model or process is from the expected. I thought of another, semi-related way borrowed from blackjack.

In blackjack, counters use a system wherein certain cards are given a +1 value, others a -1, and some a 0. The idea behind this is that low cards (2, 3, 4, etc.) benefit the dealer while high cards (10, J, Q, K, etc.) benefit the player. If a player sees a lot of low cards come out successively, the count goes up by +1 per low card and the player's odds improve. When the count gets high, the players have a decided advantage.
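For readers unfamiliar with it, the counting scheme described above can be sketched in a few lines. The exact card boundaries (2-6 low, 7-9 neutral, 10 through ace high) are the standard Hi-Lo convention, an assumption beyond the rough groupings given above:

```python
# Hi-Lo card values (standard convention, assumed):
# 2-6 count +1, 7-9 count 0, 10/J/Q/K/A count -1.
HI_LO = {**{str(r): +1 for r in range(2, 7)},
         **{str(r): 0 for r in range(7, 10)},
         **{c: -1 for c in ["10", "J", "Q", "K", "A"]}}

def running_count(cards):
    """Sum the Hi-Lo values of all cards seen so far."""
    return sum(HI_LO[c] for c in cards)

print(running_count(["2", "3", "4", "K"]))  # three low cards out, one high: +2
```

A rising running count means the remaining shoe is rich in high cards, which is exactly the "decided advantage" condition.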

I think an interesting way to apply this to markets, mainly via systematic models, would be to line up the overall stats/odds of a given model. Then one could paper trade it, keeping a sort of count for a high-probability entry. So if, while paper trading, you see the model lose 6 straight trades, and this has only happened <5% of the time over a large N, and the historical info shows that the odds (in both frequency and magnitude terms) favor a win on trade 7, you go live. The "count" has given you a high-probability entry within a high-probability system.
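A quick way to sanity-check a "<5% of the time" figure is to simulate it. Here is a minimal sketch assuming i.i.d. trades with a fixed win rate (a strong assumption; note that under pure independence a streak does not itself change trade 7's odds, so in practice the streak frequency should be measured from the model's actual history):

```python
import random

def losing_streak_freq(p_win, streak_len, n_trades, n_sims=10_000, seed=0):
    """Estimate how often a model with per-trade win rate p_win shows
    at least one losing streak of streak_len within n_trades.
    A sketch under the (strong) assumption that trades are independent."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        run = 0
        for _ in range(n_trades):
            if rng.random() < p_win:
                run = 0            # a win resets the streak
            else:
                run += 1
                if run >= streak_len:
                    hits += 1      # streak observed in this simulated history
                    break
    return hits / n_sims

# e.g. a 55%-win-rate model: chance of a 6-trade losing skid within 100 trades
print(losing_streak_freq(0.55, 6, 100))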

An ultimate stop loss per model could be assigned. You could give the model maybe 10% total drawdown potential, subdivided into 4 parts of 2.5%. Enter when the odds or "count" go in your favor (as described above). If you start making money, great, let it ride. If, however, you lose 2.5% (even after entering at the high-probability point), you stop trading, wait some predetermined amount of time, and try again with the approach above. If the model experiences a 10% drawdown after 4 attempts with the count approach, then you shelve it.
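The tranche logic above could be tracked with a small state machine. This is a sketch only: the class name is invented, and the 2.5% loss is measured from each attempt's starting equity, which is one of several reasonable conventions:

```python
class TrancheStop:
    """Sketch of the tranche stop: total drawdown budget split into
    equal per-attempt tranches (10% / 4 attempts = 2.5% each, assumed)."""

    def __init__(self, total_dd=0.10, attempts=4):
        self.per_attempt = total_dd / attempts  # 2.5% per attempt
        self.attempts_left = attempts
        self.start_equity = 1.0                 # baseline of the current attempt
        self.equity = 1.0
        self.shelved = False

    def update(self, trade_return):
        """Apply one trade's fractional return. True means this attempt
        is stopped (wait, then re-enter on the next favorable count)."""
        if self.shelved:
            return True
        self.equity *= (1 + trade_return)
        if self.equity <= self.start_equity * (1 - self.per_attempt):
            self.attempts_left -= 1
            self.shelved = self.attempts_left == 0  # 4 failed attempts: shelve it
            self.start_equity = self.equity          # baseline for next attempt
            return True
        return False
```

Winning trades never trigger a stop here, which matches the "let it ride" treatment of the upside.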

The same could be applied to the winning side, but I would be more inclined to just let it run if in the black. If playing with the house's money, why not let it ride?

This logic could also be used to size trades, increasing size when the count gets high and decreasing when the count gets low.
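A minimal sketch of count-scaled sizing. The linear scaling rule, the cap, and all parameter values are illustrative assumptions, not a recommendation:

```python
def position_size(base_risk, streak, max_streak=6, max_mult=2.0):
    """Scale risk per trade linearly from base_risk up to
    base_risk * max_mult as the losing streak ("count") approaches
    the historically rare max_streak. All parameters are assumptions."""
    mult = 1.0 + (max_mult - 1.0) * min(streak, max_streak) / max_streak
    return base_risk * mult

print(position_size(0.01, 0))  # no streak: base 1% risk
print(position_size(0.01, 6))  # count maxed out: risk doubled to 2%
```

Any monotone mapping from count to size would fit the idea; the point is simply that size, not just entry timing, can respond to the count.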

Jim Sogi writes:

Since stops degrade performance, the alternative is to use trade size to prevent disaster. Yet smaller trade size decreases performance as well. Is there a study comparing stop systems vs. trade-size systems, with data on the sweet spot for lower drawdowns and ultimate returns? I think the long-term historical optimum was 1.9x leverage; 2.1x leverage went bust long term in the big crashes.
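Lacking a published study, the comparison can at least be sketched with a Monte Carlo. The Gaussian return model, the parameter values, and the crude per-period stop below are all assumptions, so this shows the shape of such a study rather than its answer:

```python
import random

def simulate(leverage, stop=None, n_periods=250, n_sims=2000,
             mu=0.0004, sigma=0.01, seed=1):
    """Monte Carlo sketch: size (leverage) vs. a stop rule.
    Returns (mean final equity, worst drawdown observed across sims).
    Gaussian i.i.d. returns and all parameters are assumptions."""
    rng = random.Random(seed)
    finals, worst_dd = [], 0.0
    for _ in range(n_sims):
        equity, peak = 1.0, 1.0
        for _ in range(n_periods):
            r = leverage * rng.gauss(mu, sigma)
            if stop is not None:
                r = max(r, -stop)      # crude per-period stop-loss cap
            equity *= (1 + r)
            peak = max(peak, equity)
            worst_dd = max(worst_dd, 1 - equity / peak)
        finals.append(equity)
    return sum(finals) / n_sims, worst_dd

# Compare, e.g., high leverage with a stop vs. lower leverage without one:
# simulate(2.0, stop=0.02) vs. simulate(1.0, stop=None)
```

Sweeping leverage and stop width over a grid of such runs would give the drawdown/return frontier Mr. Sogi asks about, at least under the assumed return distribution.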

Phil McDonnell writes: 

In my opinion trade size is the only reliable way to control risk. I am not sure why Mr. Sogi believes that reducing size reduces performance. If you have a stock picking method that gives you many stocks then the edge should be similar for each of them with no loss of edge. I suspect he is assuming something else.

