# Rankings and Cycles, from Victor Niederhoffer

March 9, 2009

One way to see if cycles occur is to rank closing prices and see if the distribution of ranks is consistent with independence.

Take 5 prices: 800, 801, 799, 804, 803.

Rank them from lowest to highest: 2, 3, 1, 5, 4.

The distribution of actual ranks over the last 10 years, 1/1/1999 to present, in S&P adjusted futures prices is as follows:

| rank | number of times |
|------|-----------------|
| 1    | 637 |
| 2    | 418 |
| 3    | 338 |
| 4    | 404 |
| 5    | 733 |

Queries: Is this consistent with independence? What does this show about the shape of the distribution? How can this be generalized? How does it compare to other markets? Is there any predictive value to such studies?
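The ranking step can be sketched in a few lines of Python. The rolling-window tally is my reading of the study (each day's close ranked within the trailing five closes, which matches the roughly 2,500 daily observations over ten years), not something the post spells out:

```python
from collections import Counter

def rank_in_window(closes, window=5):
    """Rank of each day's close among the trailing `window` closes (1 = lowest)."""
    ranks = []
    for i in range(window - 1, len(closes)):
        w = closes[i - window + 1:i + 1]
        # position of the most recent close in the sorted window, 1-based
        ranks.append(sorted(w).index(w[-1]) + 1)
    return ranks

# the article's toy example: ranking all five prices at once
prices = [800, 801, 799, 804, 803]
order = sorted(prices)
print([order.index(p) + 1 for p in prices])  # [2, 3, 1, 5, 4]

# distribution of trailing-window ranks over a longer (made-up) series
print(Counter(rank_in_window(prices + [805, 802])))
```

Run over the actual daily series, `Counter` would reproduce the rank-count table above.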

## Victor Niederhoffer adds:

| rank | S&P | Bonds | Bund | Yen | Gold | Crude |
|------|-----|-------|------|-----|------|-------|
| 1    | 637 | 612   | 624  | 672 | 395  | 430 |
| 2    | 418 | 328   | 354  | 409 | 277  | 294 |
| 3    | 338 | 382   | 352  | 405 | 265  | 257 |
| 4    | 404 | 406   | 404  | 431 | 318  | 295 |
| 5    | 733 | 781   | 789  | 709 | 596  | 529 |
| drift (per day) | -03 | 01 | 02 | 00 | 03 | 00 |
| sd   | 154 | 69    | 37   | 27  | 81   | 142 |

The starting point is 1/1/1999 to present, daily, for all except gold and crude. Note the unusual number of rank 2s for bonds. The surprise is the number of rank 5s for stocks, which exceeds the number of rank 1s even though there was a drift of -03 a day. This is consistent with big declines being larger than big rises.

## Charles Pennington comments:

You got 733 "fives" and 637 "ones", a difference of 96.

I looked at ten simulations of a 2500-step random walk, using numbers drawn from a normal distribution. In only one of the ten was the difference between the number of "fives" and "ones" greater than 96; in that one instance, the difference was 137. The standard deviation of the 10 differences was 42.

So an effect this big should show up from randomness alone on the order of 1 in 10 times. It's probably a non-random effect.
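Pennington's experiment can be sketched as follows. This is a minimal reading of his setup, assuming a trailing 5-day rank for each step of the walk and ten independent 2500-step paths; the exact windowing in the original is not specified:

```python
import random
import statistics

def fives_minus_ones(n_steps=2500, window=5, rng=None):
    """Simulate one n-step random walk with standard-normal increments and
    return (# of rank-5 days) - (# of rank-1 days), ranking each level
    within its trailing `window` levels."""
    rng = rng or random.Random()
    level, walk = 0.0, []
    for _ in range(n_steps):
        level += rng.gauss(0.0, 1.0)
        walk.append(level)
    fives = ones = 0
    for i in range(window - 1, n_steps):
        w = walk[i - window + 1:i + 1]
        rank = sorted(w).index(w[-1]) + 1
        if rank == window:
            fives += 1
        elif rank == 1:
            ones += 1
    return fives - ones

# ten seeded runs, as in the comment
diffs = [fives_minus_ones(rng=random.Random(seed)) for seed in range(10)]
print(max(diffs), min(diffs), round(statistics.stdev(diffs), 1))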

## Victor Niederhoffer asks:

Excuse me, Professor, but did you use the drift of -03 a day and the given standard deviation of 137?

## Charles Pennington replies:

That seems like false precision to me, with the market's own real-time estimate of volatility having varied by a factor of ten over the period, and with the drift flattish until the last year of the ten year period. If the point is that "the market has gone down rapidly over the past six months", then count me in. Even over that period, though, four of the top ten 1-day magnitude moves have been in the positive direction, including the biggest and the second biggest. And even from 1929-1939, the biggest 1-year magnitude move was the UP year of 1933 with the market up 78%. I think if you or I had lived through that, then..*, **

My friend, you would not tell with such high zest; To children ardent for short selling glory, "Teres quod tardus est, ut venalicium oriri".

* mangled version of poem "Dulce et decorum est",

** Online English-to-Latin translation of "Smooth and slow it is, when markets rise". I have no idea if the translation is right, but it looks nice.

## Victor Niederhoffer requests:

I'd appreciate it if someone would test whether the distribution of ranks for the S&P over 5-day periods is consistent with independence, using actual changes since 1/1/1999.

## An Artful Simulator writes in:

Using 1000 randomly resampled (with replacement) data series, I get the following, where

obs = observed distribution of ranks
exp = expected number of ranks from the simulations
95% conf = 95% empirical confidence interval from the simulations

| rk | obs | exp | 95% conf |
|----|-----|-----|----------|
| 1  | 658 | 702 | [644, 767] |
| 2  | 434 | 417 | [382, 454] |
| 3  | 349 | 366 | [332, 397] |
| 4  | 419 | 396 | [360, 432] |
| 5  | 741 | 720 | [656, 782] |

(I added some random noise to break ties.)

It doesn't look that non-random to me. Also, the chi-squared statistic for observed against expected is 6.28, with a p-value of 18%.
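For readers who want to reproduce this, the Pearson chi-squared statistic on the reported counts is a one-liner, and the bootstrap itself can be sketched as below. The trailing 5-day windowing is my assumption (the post does not spell it out), and `6.19` differs slightly from the reported 6.28 because the expectations in the table are rounded:

```python
import random

def chi_squared(obs, exp):
    """Pearson chi-squared statistic: sum of (obs - exp)^2 / exp."""
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

# the simulator's reported counts and (rounded) expectations
obs = [658, 434, 349, 419, 741]
exp = [702, 417, 366, 396, 720]
print(round(chi_squared(obs, exp), 2))  # 6.19 with these rounded expectations

def bootstrap_rank_counts(changes, n_sims=100, window=5, seed=0):
    """Resample daily changes with replacement, rebuild each price path,
    and tally trailing-window ranks (index 0 = rank 1) per simulation.
    The original added small noise to break ties; omitted here for brevity."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        level, path = 0.0, []
        for _ in range(len(changes)):
            level += rng.choice(changes)
            path.append(level)
        counts = [0] * window
        for i in range(window - 1, len(path)):
            w = path[i - window + 1:i + 1]
            counts[sorted(w).index(w[-1])] += 1
        results.append(counts)
    return results
```

Comparing the observed counts against the spread of `bootstrap_rank_counts` over the actual daily changes gives the empirical confidence intervals in the table.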

## Victor Niederhoffer comments:

The Professor was right.

# Comments


9 Comments so far

1. the boy on March 8, 2009 1:26 pm

what a complete waste of time

2. vic niederhoffer on March 9, 2009 12:08 am

there is perhaps more in heaven and earth than the ken of the boy who models himself after the shortened lifespan roué mariner. vic

3. douglas roberts dimick on March 9, 2009 3:59 am

Half The Equation

First, know, most of what is written here, I “know nothing” about – Sergeant Schultz from Hogan’s Heroes for a clue. Researching the questions presented an epileptic-like presentment of most of the issues that I have been stumbling about these past 8 years on SMART. That said…

Are we talking pseudorandom numbers?

Randomness and knowledge then become primary issues. Known distribution may be quantitatively related so as to form distribution cycles based on assimilated input.

Regarding knowledge of the data itself, what about inter/intra-market correlations? Ten years ago is 1999, when Clinton repealed Glass-Steagall – Rubin already with one foot in the door at Citi. Relative and potential trends of capital flows, liquidity, leverage and capital reserve ratios at industry as well as national and regional levels of the economy significantly shifted.

Thereafter, engineering was in place for expansion beyond pre-existing (Glass-Steagall Era) commercial/investment banking parameters. So the 10 year range appears correct for randomness and knowledge analysis.

Constancy? S&P is an external, physical process, so conditional probability would invite a one-way function (f) for any x in the defined domain.

At least one unknown, though, is highlighted in Russ's preceding article on FAS 157. In that this ecological change appears to directly impact systemic processing within markets, we do not know how it (the rule's application as well as its represented trend of economic globalization) may affect market cycling. Quantification of time relative to capital investment (e.g., mark to model) is an indication.

Polynomial time appears possible here as we can formulate a known size. We would have bound data sets by uniform quantification of price range action during relative time periods.

January of 2007, a fellow member on Tradestation’s Help Forum (a PhD on molecular something-or-other) introduced me to polynomials. The concept resolved my last issue for design of the market situation component to support a binary strategy design.

See Jim’s recent comment on curve fitting as to slicing time. Could such an approach marginalize randomness?

Could a HAVEGE-like methodology be employed?

Based on my recent state machine work, the number of system states here appears manageable. Also interval sequencing would permit modular arithmetic to achieve congruence.

If so, we may advance system design from strategy to execution. Reduction to modular relativity could facilitate binary processing.

Oddly, state diagramming required me to return to the strategy component to reconstruct state-input-state and state-input-output excitation into binary processing. By doing so, reduction demands could only be satisfied by modular application – or so I am working towards.

If simple linear function may allow modular reduction, then how do you determine the form? You could construct binary representation of the successively generated values for subsamples. Again, in the curve fitting article, I think Phil referenced this approach regarding independence.

Variates within runs would be stochastic from the modular processing. So although multivariate, there is (semi?) uniform distribution generated by the leaping ahead during correlation to generate uniform price ranges.

My approach at present for state machine design is forming relative types of regions for comparisons that allow sequential or combination correlations. Lags have thus become a central issue.

Batching is bad. That is all I have to say about it… except, the trader, who originally presented me with the idea that Relativity explains the physics of electronic exchange market systems, was big on batching. We split six months into collaboration over that issue. We also disagreed on benchmarking – another nonstarter for me.

He had dual masters – math and finance. I could not perceive how one may take a handful of gravel, even of the same grade (as stocks of one industry sector), throw “it” in a singular batch, and therefrom correlate. Yes, indexes are so constructed. Granted, you can correlate with physics; however, it has always appeared to me that you come round to begging the same questions presented here – only then without the systemic knowledge and a much greater degree of randomness relative to external processing albeit there being the equivalent of sanctioned, electronic systematics for rock throwing.

Right: volatility arbitrage. Wonder how that scene is working out these past five months?

Last year, pre-crisis, I observed that a noted arb-academic was quoted to confess… “At present, I don’t know of any good benchmarks.”

As for the queries, I do not profess to have the expertise to comment further on independence. The shape indicates a wave distribution accelerating in a downward trend, en masse.

Traders, depending on their religion, I suppose would be looking for disposition. The contrarians would start analyzing where to begin placing their bets.

Quantitative Relativity requires additional input for correlating generalizations and comparisons. From a risk management perspective, our objective would be to minimize predilections in favor of relative positioning. Accordingly, as the five prices are ranked irrespective of time, we await the data for computation of that second half of the formulation.

dr

4. Radek on March 9, 2009 6:27 am

Points well made, Victor. Yet, is there anything ‘better’?

5. Craig Bowles on March 9, 2009 7:50 am

How about ranking growth rates to make it more comparable? We’ve already moved below the 1973-75 recession but the pattern is similar with the start holding up better than a normal recession leading people to think it’s just an economic growth slowdown. Now this late deterioration makes people get hysterical. If we assume that stock prices and the economy are linked, it makes sense to compare now to other late cycle times when the lagging indicators are starting to roll over but still positive. Historically, the best time to buy stocks is when the lagging economic index has negative growth rates. Random Walk disregards stocks being linked to the economy.

6. Matt Johnson on March 9, 2009 2:02 pm

I think there is no predictive value to this first study. The second one, yes, to find the strongest and weakest markets.
I use a ranking system, too, in my trading but I compare products against each other, not against themselves, to find which is 'stronger' or 'weaker', like in your second point; but I drill down one more level.
I'm not an equity guy, but I think an example would be: The S&P's are in a downtrend (and say the weakest one of the portfolio) - my bias is lower (short only) — now, which stocks do I want to hit? The weakest ones, here is where the deeper level ranker comes into play. So, of the S&P 500 rank the sectors, then rank the weakest stocks in each sector, a rule might be to only hit the bottom 50% of the sector ranking and only hit the bottom 50% of the stocks in that sector.
So a ranker is very important in trading, it's an edge, and it increases the chances of a trader being long the stronger products and short the weaker ones.
PS You're just touching the tip of the iceberg on this study.
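Johnson's two-level drill-down can be sketched as follows. The input shape and the 50% cutoffs come from his description; the names and toy data are hypothetical:

```python
def shortlist_shorts(returns_by_sector):
    """Rank sectors weakest-first by average recent return, keep the
    bottom 50%, then keep the bottom 50% of stocks inside each
    surviving sector as short candidates."""
    def sector_mean(s):
        r = returns_by_sector[s]
        return sum(r.values()) / len(r)

    sectors = sorted(returns_by_sector, key=sector_mean)  # weakest first
    picks = []
    for s in sectors[:max(1, len(sectors) // 2)]:         # bottom 50% of sectors
        stocks = sorted(returns_by_sector[s], key=returns_by_sector[s].get)
        picks.extend(stocks[:max(1, len(stocks) // 2)])   # bottom 50% of stocks
    return picks

# hypothetical recent returns, grouped by sector
data = {"energy": {"A": -0.10, "B": 0.02},
        "tech":   {"C": 0.05, "D": 0.06}}
print(shortlist_shorts(data))  # ['A']
```

The same structure, sorted strongest-first, would give long candidates, which is the symmetry Johnson describes.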

7. the boy on March 9, 2009 2:44 pm

Call me whatever esoteric name you wish, but I'm still alive after a 20% return last year and starting this year nicely positive. And all done with 100% system trading (not trend following). How about you?

8. vic niederhoffer on March 9, 2009 4:26 pm

yes. mr. johnson. that's the whole point of daily spec. just to touch the tip. otherwise it would soon become useless. vic

9. douglas roberts dimick on March 10, 2009 12:10 pm

Relativity of time intervals between rankings of distributions is determinative, yes? We would have to be able to establish a point of origin (in time not just position) for each high/low cycle.

Drift would indicate constancy of the data runs. That correlation could be determinative of one-way function (f) in a given domain.

Thus, we are again presented with a time issue. How are we to determine the systemic (or cyclical) states if there is no time sequencing of data correlating the highest/lowest rankings (1 thru 5) to establish uniformity for both input (quantifying the states) and domains (or regions for output excitation)?

Absent that assimilation, we cannot construct modular runs of bound data sets, thereby accounting for randomness within the generalized trending input to then compare and predict (a la polynomial time). The alternate implied by presentment of the data is probability mapping by event occurrence?