I use an intuitive and unscientific rule of thumb derived from a law of cybernetics.

To forecast or control a system of degree N, one needs a system of at least degree N+1. Positing an arbitrary hierarchy of market systems (years > months > weeks > days) means that to forecast one day ahead, one needs to look at weekly anomalies.

That is, sets of five trading days. Since in a parametric setting one needs at least 20 to 30 data points to converge to a normal law, the minimum length of data to operate at the daily frequency is 5*20 to 5*30, that is, 100 to 150 days.
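The arithmetic of the rule of thumb above can be sketched as follows (a minimal illustration, not anyone's trading code; the function name and constants are mine):

```python
# Rule of thumb: to operate at the daily frequency, look one level up
# (weeks of 5 trading days) and require 20 to 30 observations at that
# level for a parametric (normal-law) estimate to stabilise.

DAYS_PER_WEEK = 5          # trading days in one calendar week
MIN_OBS, MAX_OBS = 20, 30  # observations for rough convergence to normality

def min_history(days_per_unit: int, n_obs: int) -> int:
    """Minimum daily history needed for n_obs observations one level up."""
    return days_per_unit * n_obs

low = min_history(DAYS_PER_WEEK, MIN_OBS)
high = min_history(DAYS_PER_WEEK, MAX_OBS)
print(low, high)  # -> 100 150
```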

Paolo Pezzuti replies:

My opinion on this issue is that the length of data should not be defined as a fixed number (e.g., 150 or 200 days). The data selected for the tests should reflect criteria of observed behavior in relation to your test objectives. In the ever-changing cycle process, you might recognize that a cycle has changed because of increased volatility, directionality, or whatever else you have identified as the guiding parameters of the market's "personality". In that case, the length could vary considerably. For example, if you assume that a new paradigm of low volatility began in 2003, and this is relevant to the assumption you want to demonstrate with your testing, then you could use a 600-day test set.
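Pezzuti's idea of letting the regime, rather than a fixed count, define the test set could be sketched like this (the rolling window, the volatility threshold, and the function itself are illustrative assumptions, not his method):

```python
# Hedged sketch: instead of a fixed test length, start the test set where
# the current regime begins, detected here by a rolling standard deviation
# of daily returns crossing an assumed threshold.
import statistics

def regime_start(returns, window=20, vol_threshold=0.02):
    """Index of the last bar at which rolling volatility crossed the
    threshold; bars after it are treated as the current regime."""
    start = 0
    prev_high = None
    for i in range(window, len(returns) + 1):
        vol = statistics.pstdev(returns[i - window:i])
        is_high = vol > vol_threshold
        if prev_high is not None and is_high != prev_high:
            start = i - 1  # regime flipped at this bar
        prev_high = is_high
    return start

# Synthetic example: 50 quiet bars, then 50 volatile bars; the detected
# regime start lands shortly after bar 50.
returns = [0.001 * (-1) ** i for i in range(50)] + \
          [0.05 * (-1) ** i for i in range(50)]
print(regime_start(returns))
```

In this framing, the "600-day test set" is not a magic number but simply the distance back to the last detected change in the market's personality.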







