Today I did a quick hand study to make a point and figure chart. I looked at swings of five or more points, only above an S&P futures level of 1400, and starting on the 15th of March. I defined a sequence as a plus plus (++) or minus minus (- -), a negative reversal as a plus followed by a minus (+-), and a positive reversal as a minus followed by a plus (-+).

There was a positive sequence nine long (++++++++++) up to the 1450 level, then a negative sequence of three (- - - -) down to 1434, and then a positive sequence of one (++) to close again at 1450. There has since been a negative reversal of length one, bringing us to 1439.

I hypothesize that the expectations after negative reversals of length 1 are negative.
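The taxonomy above is mechanical enough to code. Here is a minimal sketch of how one might classify adjacent swing pairs and then collect the swing that follows each negative reversal, to test the hypothesis. The swing directions used below are illustrative made-up data, not the actual March S&P futures swings.

```python
def classify_pairs(signs):
    """Label each adjacent pair of swing directions (+1 up, -1 down)."""
    labels = []
    for a, b in zip(signs, signs[1:]):
        if a > 0 and b > 0:
            labels.append("++")   # positive sequence step
        elif a < 0 and b < 0:
            labels.append("--")   # negative sequence step
        elif a > 0 and b < 0:
            labels.append("+-")   # negative reversal
        else:
            labels.append("-+")   # positive reversal
    return labels

def swings_following(signs, pattern):
    """Direction of the swing immediately after each occurrence of `pattern`.

    labels[i] covers (signs[i], signs[i+1]); the following swing is signs[i+2].
    """
    labels = classify_pairs(signs)
    return [signs[i + 2] for i, lab in enumerate(labels)
            if lab == pattern and i + 2 < len(signs)]

# Illustrative swing directions only.
swings = [+1, +1, +1, -1, +1, -1, -1, +1]
print(classify_pairs(swings))          # ['++', '++', '+-', '-+', '+-', '--', '-+']
print(swings_following(swings, "+-"))  # [1, -1]
```

Tallying `swings_following` over a long history of swings would give the empirical distribution after negative reversals of length one.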

Bill Rafter writes: 

Two years ago we wrote some code to test Point and Figure data. Our goal was to quantify some of the anecdotal claims going around. We were predisposed to like the concept of running data through a P&F filter, as the resulting data is non-linear.

It is also asymmetric (it usually takes more points to reverse than to continue) and in the classic version, adaptive. We found that smoothing the data with all varieties of P&F filters did produce better results than obtained by using raw data, by reducing "noise."

However, it was vastly inferior to most other filters, mainly because the P&F process introduced additional lag. So, the dog hunts, but he doesn't hunt as well as the other dogs.
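Rafter's actual code is not shown; the following is my own minimal sketch of a P&F-style filter with the asymmetry he describes, where a continuation needs only one box but a reversal needs `reversal` boxes. The box-quantization and parameter names are assumptions for illustration.

```python
def pnf_filter(prices, box=5.0, reversal=3):
    """Record prices only when they extend the current column by at least
    `box`, or reverse direction by at least `reversal * box`."""
    recorded = [prices[0]]
    direction = 0                      # 0 = undecided, +1 = up, -1 = down
    for p in prices[1:]:
        last = recorded[-1]
        if direction >= 0 and p >= last + box:
            # continuation upward, quantized to whole boxes
            recorded.append(last + box * int((p - last) // box))
            direction = 1
        elif direction <= 0 and p <= last - box:
            # continuation downward, quantized to whole boxes
            recorded.append(last - box * int((last - p) // box))
            direction = -1
        elif direction == 1 and p <= last - reversal * box:
            recorded.append(p)         # downward reversal
            direction = -1
        elif direction == -1 and p >= last + reversal * box:
            recorded.append(p)         # upward reversal
            direction = 1
    return recorded

print(pnf_filter([100, 106, 112, 104, 96, 101], box=5.0, reversal=2))
# [100, 105.0, 110.0, 96]
```

Note how the small pullback to 104 is absorbed (noise reduction), but the recorded reversal at 96 arrives well after the turn began at 112, which is the extra lag Rafter points to.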

Sushil Kedia writes:

Dr. Rafter's post has invigorated the ever hungry mind. Without trying to pry any profitable filtering ideas from list members, one is curious to learn how to think of suitable filters.

What properties, in general, should one look for in including a particular filter or filtering system on the list of "need to test further or develop further?" And which attributes would help put the rest on the other list?

The more learned could help neophytes sharpen their thought and creativity tools to focus better. Thanks in advance.

Bill Rafter replies:

By filters we mean filtering the data, not spam filters for email, although the principle is the same: separating the wheat from the chaff. By filtering we create a surrogate dataset. The goal is a dataset that keeps the good attributes of the original while shedding a few of its bad features. We hold two opinions that are not necessarily mutually exclusive, but that approach the problem differently:

1. It is not a question of separating the short-term noise from a long-term signal, which is the general consensus. According to our research, the true signal is in what most refer to as the noise. That is the counter-intuitive point of view. A good example is our breakdown of the put-call ratio by wavelet de-noising, reassembling only the shortest-time-frame components.
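Rafter's wavelet machinery isn't given, but the idea of splitting a series by scale and keeping only the finest components can be illustrated with a toy single-level Haar transform. This is a deliberate simplification of wavelet de-noising, not the actual put-call study.

```python
def haar_split(x):
    """One level of the Haar transform (x must have even length):
    pairwise averages (coarse) and pairwise half-differences (fine)."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_merge(approx, detail):
    """Inverse of haar_split."""
    out = []
    for s, d in zip(approx, detail):
        out.extend([s + d, s - d])
    return out

x = [4.0, 6.0, 10.0, 12.0, 11.0, 9.0, 5.0, 3.0]   # illustrative series
approx, detail = haar_split(x)
# Zero the coarse part and rebuild from the detail alone: the
# "shortest time frame" component of the series.
fine_only = haar_merge([0.0] * len(detail), detail)
print(fine_only)   # [-1.0, 1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0]
```

The point of view in (1) is that a series like `fine_only`, usually discarded as noise, is where the signal lives.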

2. Opposing that is the fact that we have found that some minor smoothing universally improves our results. I suggest that you look at 3-period medians and 2-period exponentials for a start. Medians have some great advantages. I definitely recommend against midranges (daily or otherwise), although that's what many "experts" tout.
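The two starting points Rafter suggests are easy to sketch. The smoothing-constant convention alpha = 2/(n+1) is the common one and an assumption on my part, as is the pass-through handling of the first points.

```python
def median3(x):
    """3-period rolling median; the first two points pass through unchanged."""
    out = list(x[:2])
    for i in range(2, len(x)):
        out.append(sorted(x[i - 2:i + 1])[1])
    return out

def ema(x, n=2):
    """n-period exponential moving average, alpha = 2 / (n + 1)."""
    alpha = 2.0 / (n + 1)
    out = [x[0]]
    for p in x[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

prices = [100.0, 103.0, 97.0, 105.0, 104.0]   # illustrative only
print(median3(prices))   # [100.0, 103.0, 100.0, 103.0, 104.0]
print(ema(prices, n=2))
```

Note the advantage of the median: the one-period spike down to 97 vanishes entirely rather than being merely dampened.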

The consensus refers to filtering as smoothing. That's not always the case. Some filtering will result in data that seems to be more erratic or discontinuous. A fairly inclusive "course" on this can be found at one of our sites.




