Sep 9
The scientific method has two parts. There is theory, which requires knowledge and intuition to posit a cause and effect, and there is testing, collecting data to determine whether the observations refute the theory. If I understand your point correctly, empiricism is necessary but not sufficient. There should be a theory that is not entirely based on the observed data. As an imaginary example, “The S&P 500 is likely to decline on Friday afternoon because day traders are biased to the long side and want to be out of the market before the weekend” is better than “The S&P 500 was down on 19 of the past 30 Friday afternoons”.

Ralph Vince responds: 

Steve, yes, but the premise, the cause, needs to be proven. “The S&P 500 is likely to decline on Friday afternoon because day traders are biased to the long side and want to be out of the market before the weekend” needs to be proven as causal, not merely posited as a possible cause.

Frankie Chui writes:

Yes, I always end up asking myself “why does it not work anymore after it has worked for so long?” the moment I trade a system and it stops working. It has also happened to me quite often that I backtest a strategy, everything seems OK, I trade it for 2–3 weeks, and that’s the end of that system. Therefore, I am now experimenting with re-optimizing parameters more frequently, perhaps once every two weeks on a rolling basis: optimize on two weeks of data, trade it for a week, optimize on the past two weeks again, trade it for another week. Of course the 2-week/1-week time frame may not be the best (I just chose it at random), but has anyone ever done anything with this kind of approach? I’m curious to see if it will work for day trading. I am new to mechanical trading, but I’m very curious to know whether re-optimizing quickly enough will allow a trading system to work better and longer (for day trading).
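The rolling scheme Frankie describes is usually called walk-forward optimization. A minimal sketch of the loop, using a toy momentum rule and made-up parameters (the strategy, the lookback grid, and the 10-bar/5-bar windows are all illustrative assumptions, not anyone's actual system):

```python
def strategy_pnl(prices, lookback, start, end):
    """Toy momentum rule: over bars [start, end), go long the next bar if
    the last `lookback` bars were net up, else go short.  Returns P&L in
    price points for a one-unit position (hypothetical example rule)."""
    pnl = 0.0
    for t in range(max(start, lookback), min(end, len(prices) - 1)):
        signal = 1 if prices[t] > prices[t - lookback] else -1
        pnl += signal * (prices[t + 1] - prices[t])
    return pnl

def walk_forward(prices, train_len=10, trade_len=5, lookbacks=(1, 2, 3)):
    """Frankie's rolling scheme: re-optimize on the trailing `train_len`
    bars, trade the winning parameter unchanged for the next `trade_len`
    bars, then slide the window forward and repeat."""
    total = 0.0
    for i in range(train_len, len(prices) - trade_len + 1, trade_len):
        # In-sample: pick the lookback that scored best on the window.
        best = max(lookbacks,
                   key=lambda lb: strategy_pnl(prices, lb, i - train_len, i))
        # Out-of-sample: trade that parameter on the next block only.
        total += strategy_pnl(prices, best, i, i + trade_len)
    return total
```

Only the out-of-sample blocks count toward the result, which is the point of the exercise: the in-sample fit is thrown away each cycle.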

Jeff Watson writes: 

Frankie, you’re running up against Bacon’s ever changing cycles, which tend to render systems obsolete.

Phil McDonnell adds: 

There is an insidious danger when you use optimization. The optimizer will fit the system to the data too well. It will never perform as well out of sample as in sample. It becomes especially important to use tests of statistical significance when you do optimizations.

The optimizer can also create a multiple comparison problem. For example, if you tested for seasonality and wanted to find which month was the best to buy, that search would create a multiple comparison bias, and any significance test would need a much higher threshold than if you had tested only September.
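The arithmetic behind Phil's warning is worth seeing. Under a true null, each of twelve independent month-tests still has a 5% chance of clearing the nominal 5% bar, so testing all twelve and keeping the best inflates the false-positive rate dramatically. A small sketch of the standard adjustment (Bonferroni, one of several possible corrections):

```python
def best_of_n_false_positive(n, alpha=0.05):
    """Chance that at least one of n independent null tests clears the
    nominal alpha threshold -- the hidden cost of picking the best month."""
    return 1 - (1 - alpha) ** n

def bonferroni(alpha, n):
    """Per-test threshold that keeps the family-wise error rate near alpha."""
    return alpha / n

# best_of_n_false_positive(12) is roughly 0.46: nearly a coin flip that
# the "best month" looks significant by luck alone.
# bonferroni(0.05, 12) is roughly 0.0042: the much higher bar Phil means.
```

This is why "September was significant at the 5% level" means something very different when September was chosen after looking at all twelve months.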

One way to judge a system and evaluate whether it will continue to work is to plot its equity curve. If your testing assumes an equal-sized investment each time, the curve can be plotted on an ordinary arithmetic scale; if you compound, it should be plotted on a log scale. Either way, the most desirable system looks like a smooth line going monotonically up to the right as time passes. If it starts to roll over, it may be a system about to fail.

Paolo Pezzutti writes: 

The system should be quite robust: it should work pretty well over a sufficiently wide range of parameter values. It should also have few parameters, to avoid curve fitting.
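One way to act on Paolo's advice is to score the system over a neighborhood of parameter values rather than at a single point. A minimal sketch (the `score_fn` and grid are hypothetical placeholders for whatever backtest metric you use):

```python
def parameter_plateau(score_fn, grid):
    """Score a system across a grid of parameter values.  A robust system
    shows a plateau: a good mean score with a small spread.  A lone spike
    at one value suggests that value was curve-fit to noise."""
    scores = [score_fn(p) for p in grid]
    mean = sum(scores) / len(scores)
    spread = max(scores) - min(scores)
    return mean, spread
```

Preferring the center of a plateau over the single best point trades a little in-sample performance for a parameter choice that is far less likely to have been chosen by luck.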

 

