# Correlation and Probability, from Philip J. McDonnell

Winning percentage = 50 + 32*(correlation)

Because the constant is 50 (~50%) it would appear that the numbers input to this were basic coin flip probabilities. To a good approximation most markets do obey coin flip odds so this is very useful. I would, however, conjecture that if the odds were different, say 20%, then a new approximation might be needed with different parameters.

## Charles Pennington notes:

I'm not sure what Dr. Phil means when he says that "it would appear that the numbers input to this were basic coin flip probabilities."

1. Generate 2 series, A and B, each having 10,000 random numbers from a normal distribution with average 0 and standard deviation 1.
2. To each element of B, add alpha times the corresponding element in A, to generate a new series C. (I will end up trying alpha values ranging from much less than one to much greater than one.)
3. For the first 5,000 elements of A and C, run a regression of C versus A. This gives a correlation, slope, and intercept.
4. For the remaining 5,000 elements of A and C, use the regression, and the A values, to predict the C values.
5. Count the fraction of instances in which the prediction gets the sign right; that's the "winning percentage".
6. Now you have a correlation, and a winning percentage.
7. Repeat for different values of alpha to generate a table of winning percentage versus correlation. (When alpha is small, much less than one, then alpha and the correlation that emerges are very close. When alpha becomes much greater than one, the correlation approaches 100%.)

For correlations approaching 0, the winning percentage approached 50, as of course it should. For large correlations, it approached 100%, as also it should. For small but non-zero correlations, I found this result, as stated earlier: Winning percentage = 50 + 32*(correlation).
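The recipe above is easy to reproduce. A minimal sketch of the simulation (NumPy; the alpha values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def winning_percentage(alpha, n=10_000):
    # Steps 1-2: two independent N(0,1) series, C = B + alpha * A
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    c = b + alpha * a
    # Step 3: regress C on A over the first half
    half = n // 2
    slope, intercept = np.polyfit(a[:half], c[:half], 1)
    corr = np.corrcoef(a[:half], c[:half])[0, 1]
    # Steps 4-5: predict the second half and count correct signs
    pred = slope * a[half:] + intercept
    win = 100 * np.mean(np.sign(pred) == np.sign(c[half:]))
    return corr, win

for alpha in (0.05, 0.2, 0.5, 5.0):
    corr, win = winning_percentage(alpha)
    print(f"corr={corr:5.2f}  win%={win:5.1f}  rule of thumb={50 + 32 * corr:5.1f}")
```

For jointly normal variables the exact relation is win% = 50 + 100*arcsin(corr)/pi, whose slope at zero correlation is 100/pi, about 31.8 per unit of correlation, which is presumably where the 32 comes from.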

Seems like a reasonable answer for a guy who wanted a rule-of-thumb mapping of correlation onto winning percentage. Obviously if there is a big drift term, or any other number of things are true, it could be significantly wrong.

# Trend Following Study, from Kim Zussman

Here is a fixed-system signal using a 10-month moving average, with a bonus quantification of risk using the Ulcer Index, which was not clearly diagnosed with endoscopy.

## Philip J. McDonnell writes:

Looking at exhibit two, it should be noted that the greatest differential between the 200-day moving average timing strategy and buy-and-hold was achieved in 1932. Since that time buy-and-hold has outperformed. The trend-following strategy appeared to do better mainly in the 1929 crash, the 1973-74 decline, and the 2000-2002 bear market. Given that it is 30 to 40 years between such events, one wonders how soon the next one will be.

# Fat Tails, from Philip McDonnell

James Sogi wrote:

I wonder why drops such as the last few weeks materialize out of a normal expectation as they seem to exceed the normal expectation of the current low volatility regime…

There are theorems in statistics showing that nearly any well-behaved distribution ultimately converges to the normal, given enough iterations. However, such theorems almost always require that the variance be stable and finite. Notwithstanding Professor Mandelbrot's assertion to the contrary, the variance of markets is finite: there has never been an infinite price quote, nor will there ever be. But that still leaves the issue of stable variance, and that is the catch in two ways. First, the variance as measured by the VIX, and even the realized standard deviation, has been known to swing wildly in a short period of time. It has also demonstrated varying regimes on the order of 14 years in length.
The other issue is slightly subtler and relates to the underlying process. In the traditional random walk the process is an additive one. Each day's net change is added to the previous to arrive at the new price. If the standard deviation were stable then such distributions would converge to normal.

But if the underlying distribution is multiplicative then that alone causes the standard deviation to grow with time as measured arithmetically. A multiplicative process is consistent with the long-term compounded growth found by Dimson, et al. The way to measure the standard deviation and variance is in the transformed (natural log) variable just as the usual option models do. This is reminiscent of the recent discussion on non-linearity.

From the above it is reasonable to expect large tails whenever the variance increases on a short-term basis. In effect it is like a new higher variability distribution is being superimposed on the more common low variability distribution.
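The difference between the additive and multiplicative pictures can be seen in a small simulation (the path count, horizon, and 10% per-step log volatility are illustrative, not fitted to any market): per-step log returns keep a constant dispersion, while the arithmetic price dispersion fans out faster.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: 20,000 paths, 100 steps, 10% per-step log volatility
n_paths, n_steps, sigma = 20_000, 100, 0.1

log_returns = sigma * rng.standard_normal((n_paths, n_steps))
log_price = np.cumsum(log_returns, axis=1)   # additive random walk in logs
price = np.exp(log_price)                    # multiplicative in levels

# Dispersion of the log price grows like sigma * sqrt(t) ...
print(np.std(log_price[:, 49]), np.std(log_price[:, 99]))   # ~0.71, ~1.00
# ... while dispersion of the arithmetic price grows faster still,
# because exponentiation stretches the upper tail.
print(np.std(price[:, 49]), np.std(price[:, 99]))
```

This is why the standard deviation is best measured in the transformed (natural log) variable, as the option models do.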

# Non-Linear Relationships, from Philip McDonnell

I would like to offer some simple thoughts on non-linear relationships. The usual way to study non-linear correlations is to transform one or more of the variables in question. For example if we have a reason to believe that the underlying process is multiplicative then we can use a log function to model our data. When we do a correlation or regression of y~x we can just take the transformed variables ln(y)~ln(x) as our new data set. We are still doing a linear correlation or a linear regression but now we are doing it on the transformed variables.
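A minimal illustration with synthetic data (the exponent 2.5, the scale 3, and the noise level are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical multiplicative relationship: y = 3 * x^2.5 * lognormal noise
x = rng.uniform(1, 10, 1_000)
y = 3 * x**2.5 * np.exp(0.1 * rng.standard_normal(1_000))

# An ordinary linear regression on the transformed variables recovers
# the exponent as the slope and the scale as exp(intercept).
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(slope, np.exp(intercept))   # close to 2.5 and 3
```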

Ideally we would know the form of the non-linear relationship from some theory. Absent that we could use a general functional form such as the polynomials. So our transform could be something like X^2, X^3, or X^4. Using one of these terms is usually pretty safe. But combining them in a multiple regression can be problematic. The reason is that the terms x^2 and x^3 are about 67% correlated. Using highly correlated variables to model or predict some third variable is a bad idea because you cannot trust the statistics you get.

One way around that is to use orthogonal polynomials or functions. We have previously discussed Fourier transforms and Chebychev polynomials. Both of these classes are orthogonal which also means that we can fit a few terms and add or delete terms at will. The fitted coefficients will not change if we truncate or add to the series. Each term is guaranteed to be linearly independent of the others.
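The coefficient stability under truncation holds whenever the regressors are orthogonal in the sample inner product. Sines and cosines on a uniform grid over a full period have this property; raw powers of x do not, as a quick check shows (the target function exp(sin(x)) is arbitrary):

```python
import numpy as np

# On a uniform grid over a full period, sines and cosines are orthogonal
# under the discrete inner product, so least-squares coefficients do not
# change when terms are added or removed.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
y = np.exp(np.sin(x))          # arbitrary smooth target

fourier = np.column_stack([np.ones_like(x), np.cos(x), np.sin(x),
                           np.cos(2 * x), np.sin(2 * x)])
c3 = np.linalg.lstsq(fourier[:, :3], y, rcond=None)[0]
c5 = np.linalg.lstsq(fourier, y, rcond=None)[0]
print(np.allclose(c3, c5[:3]))   # True: first coefficients unchanged

powers = np.column_stack([np.ones_like(x), x, x**2, x**3])
p2 = np.linalg.lstsq(powers[:, :3], y, rcond=None)[0]
p3 = np.linalg.lstsq(powers, y, rcond=None)[0]
print(np.allclose(p2, p3[:3]))   # False: adding x^3 shifts every coefficient
```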


I have a question.

One of the reasons for adding regressors is to take into account all possible reasons behind a move in the variable we are trying to explain. However, multicollinearity being prevalent in finance, it is a source of headaches.

If we could randomize and/or design experimental plans for empirical studies, as we do in biology, we could get rid of part of the problem.

Is it possible to randomize ex post? Let's say I want to study Y = aX + b + e. If instead of taking the full history of observed (Y, X) I take a random sample of (Y, X), that creates some kind of post-randomization, which should reduce the impact of other factors.

Does it make sense? Of course, we would lose all the information contained in the non-sampled (Y,X). That means even less data to work with, which is not nice with ever-changing cycles.

## Rich Ghazarian mentions:

And of course if you want a more powerful model, you fit a copula to your processes and now you are in a more realistic dependence structure. Engle has a nice paper on Dynamic Conditional Correlation that may interest dependence modelers on the list. The use of Excel correlation, Pearson correlation, linear correlation… these must be the biggest flaws in quant finance today.

With linear functions we can compute the eigenvectors to get an orthogonal representation. One problem that gets in the way of nonlinear models is that it isn't clear what the appropriate "distance" measure is. You need a formal metric of distance to model, compare, or optimize anything. How far apart are these points?

With linear axes, distance is determined by Pythagoras. But what is suggested for the underlying measure of distance if the axes aren't linear?

These remarks about correlation resonate with me, especially in the case of the stock market.

## From Vincent Andres:

If you replace your original axes X and Y by new axes X' = fx(X) and Y' = fy(Y), this is a transformation of the kind P = (x, y) -> P' = f(P) = (x', y') = (fx(x), fy(y)).

This transformation can be inverted without difficulty: P' = (x', y') -> P = (x, y), where x and y are the antecedents of x' and y' through the inverse functions fx^-1 and fy^-1.

A "natural" distance measure in this new universe is thus: dist(P1, P2) = dist(ant(P1), ant(P2)), where ant denotes the antecedent.

This works for any monotonic functions fx and fy (e.g., ln(x), or x^2 on a positive domain) because there is then a strict bijection between the two universes. It could even be extended to a larger class of functions.

Sorry for the awkward notation, but I hope the idea is clear.
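A tiny sketch of the suggestion, assuming the invertible transforms x' = ln(x) and y' = y^2 on positive values (the transforms and points are made up for illustration):

```python
import math

# Transformed universe: x' = ln(x), y' = y**2 (both monotonic for x, y > 0).
# The suggested metric: map transformed points back through the inverse
# functions and use ordinary Euclidean distance on the antecedents.
def antecedent(p):
    xp, yp = p
    return (math.exp(xp), math.sqrt(yp))   # inverses of ln and squaring

def dist(p1, p2):
    (x1, y1), (x2, y2) = antecedent(p1), antecedent(p2)
    return math.hypot(x2 - x1, y2 - y1)

p1 = (math.log(3.0), 4.0**2)   # image of (3, 4)
p2 = (math.log(6.0), 8.0**2)   # image of (6, 8)
print(dist(p1, p2))            # 5.0, the Euclidean distance between (3,4) and (6,8)
```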

# A Word Of Caution From The Almanatarian, by Phil McDonnell

Tim Hesselsweet wrote: "The almanatarian introduces a little low n counting."

Thanks to Tim for bringing this to our attention. Besides the low number of observations, the study seems to have another odd methodology: it averages the subsequent extreme low readings. To trade this one would have to catch that low precisely, and the study is moot on the subject of how to catch the annual low.

However, looking at the same data over the entire period there is a way to trade it. If we sell at the recommended point and see where we stand at the end of the year we find that we missed out on a 4.7% gain for all the periods. To see how it has done recently we can look at the last 12 years worth of signals. These recent sell signals would cause us to miss out on an average 10% gain for the rest of the year.

In this case it is best to ignore the signal.

# Conservation of Money, from Philip McDonnell

Two summers ago, at the Spec Party in Central Park, Victor said something to me which was at once profound yet seemingly too simple. "There is only so much money." To someone who did not understand, it would seem rather sophomoric, or even downright cryptic. But it was all he needed to say because I had read his books.

The statement referred to a simple conservation law much like the conservation laws of physics. In physics, energy and mass are the most significant variables in most mechanical systems. So we have laws such as Conservation of Energy, Conservation of Mass and Conservation of Momentum. In financial markets a similar law applies. Money is conserved. At any given time there is only so much money.

Let us imagine an island economy where there are only two stocks, X and Y. There is only so much money on the island. When the traders on the island decide they want to invest in X they need to figure out how to pay for the purchase. The only liquid source of money is stock Y. So they sell Y. The price of X goes up and Y goes down.

Let us draw this on an X-Y coordinate plot and assign some real numbers to it. The relationship between X and Y would show up as a line from high up on the Y axis sloping downward to some point of a large X value. Suppose the amount of money was \$100. If everyone wanted to own Y and no one wanted X then we would have Y=100, X=0.

Conversely if everyone wanted X and not Y then Y=0 and X=100. We can think of the distance of the current market valuations as the distance from the origin which is equal to the buying power of the money. It is a simple conservation law on our island. The \$100 defines a radius from the origin. It therefore defines a circle. It is easy to draw on a 2D chart or even in 3D. Drawing a 5000 dimensional sphere for the 5000 actively traded stocks is a project still in progress.

# Fundamental Laws, by Victor Niederhoffer

February 11, 2007

The moves in markets often seem to imitate the kinds of things we see in nature: in gas; in water; and in electricity. For example, the gentle back and forth of the stock market last week, gradually building up pressure and then exploding on the downside, is like a cork bursting from a bottle of champagne, or a volcano erupting.

In electronic circuits we often see a signal gently oscillating between set points, then gathering a slight bit of amplitude on one side or the other, and finally tripping the set point thereby triggering a major change in the output. In capacitor resistor circuits, we find the same buildup of charge, with little change in the output until the time constant of the capacitor is fulfilled and the output suddenly and dramatically changes.

The reason for these similarities is they are all results of various energy conservation laws. Energy coming into a system cannot just disappear. One major conservation law in electronics is Kirchhoff's current law. It holds that the current going into the junction of two wires equals the current coming out. Another major law is Kirchhoff's voltage law. It states that the voltage input to a closed circuit is equal to all the voltage used up in work in the circuit.

I find the major applications of conservation laws in markets relating to some input from outside a system. Usually, some information or money flow gets distributed to the various components, companies, and markets of the system. A major merger announcement affects not just one company but all companies related to it. An increase in liquidity in the system gets distributed according to the market's laws, similar to Kirchhoff's laws in electronics.

To be continued.


Is it not the beauty of Eurodollars that since there is no reserve requirement (being out of the country and not under the auspices of the Fed), foreign banks can create and loan as many dollars as they want?

Not quite. After the 1974 Eurodollar blow-up of Bankhaus Herstatt, central bankers convened at the behest of the Bank of England to put a lid on the runaway growth of the Eurodollar market. It was agreed that each central bank would be responsible for defaults of the banks it regulates, even if the default were in the Eurodollar market. Following that, each foreign central bank put reserve requirements on Eurodollar deposits. /Gregory van Kipnis/

## From George R. Zachar:


Given

1) That central banks are increasingly players themselves,

2) The clubby incestuous relationships within the govt/bank community in places like Italy,

3) The fact that one major central bank has had a high official murdered by someone he regulated (Russia),

4) The asset explosion in nations whose financial infrastructure hasn't been tested (the Gulf States),

5) The nil possibility that govt bankers grok the array and scope of derivatives…

I would not assume the central banking clerisy is on top of things. They might be, but there's reason for doubt.

## Easan Katir writes:

The moves in markets often seem to imitate the kinds of things we see in nature… VN

To continue the Chair's analogy, it would seem the next practical question is how do we predictively discover the impedance of that market capacitor which discharged on February 8, provided the "3 of a kind," then tripped another point of capacitance and surged in the opposite direction for the past 4 days? What voltmeter can we use to measure the current passing through?

Or is this market more like a big kid bouncing on a "40-day moving average" trampoline for the past seven months?

# Playing Cards with the Market Mistress, by Henrik Andersson

I was playing with the thought of analyzing the market with the help of a card game analogy. Assume you have a deck of cards with the daily returns for the last five years for the S&P 500. If you draw the cards one by one what would the optimal strategy be, that is, when would you stop in order to maximize your return?

This doesn't seem like an easy problem to solve by hand, so I performed some Monte Carlo simulations. With a simple fixed return-based stop, it seems like a maximum return can be achieved by stopping when you are at around 50% return for an average return of some 40%. The average waiting time for this game strategy is almost 3 years.

For the last five years the S&P 500 moved from 1090 to 1447, a 32.75% return. I believe a more complex analysis of this kind could possibly yield some interesting results. This was inspired by a question in "Heard on the Street: Quantitative Questions from Wall Street Job Interviews" by Timothy Falcon Crack, where one is asked to calculate the optimal stopping rule from 52 playing cards if red cards pay you a dollar and black cards fine you a dollar.
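The Crack interview question has an exact answer by dynamic programming rather than Monte Carlo. A short sketch for the 52-card game (red pays a dollar, black fines a dollar, stop whenever you like):

```python
from functools import lru_cache

# State (r, b): red and black cards remaining in the deck. For a deck that
# started even, stopping now pays (reds drawn) - (blacks drawn) = b - r.
@lru_cache(maxsize=None)
def value(r, b):
    stop = b - r
    if r + b == 0:
        return 0.0
    cont = 0.0
    if r:
        cont += r / (r + b) * value(r - 1, b)
    if b:
        cont += b / (r + b) * value(r, b - 1)
    return max(stop, cont)

print(value(26, 26))   # about 2.62 dollars of expected profit
```

Because you can always hold to the end for a net of zero, the value is never negative; that is the built-in, without-replacement mean reversion of the card analogy.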

## Philip McDonnell writes:

If the market truly has a long term upward drift then there is no good stopping point. In a sense this may be the wrong question. Perhaps the better question is when to enter the market with new money or when to increase one's leverage. The idea is that this implicitly recognizes that buy and hold is very difficult to beat.

The market also differs from the card deck in that the investment horizon is very long, not just 52 known cards. Another difference is that the card deck is a model without replacement. If you have seen unfavorable cards so far, you need only stay in the game to the end to be guaranteed at least a break-even outcome. One is guaranteed a form of mean reversion because cards are drawn without replacement. To assume that the market distribution acts in a mean-reverting fashion is a major assumption which should be tested first.

# Pain Frequency, by Kim Zussman

On 2/5/07, Andrea Ravano wrote:

Evidence from Capuchin Monkey Trading behavior: The study confirms for animals, what behavioral studies have shown for human beings; that to offset a loss of 1 you must have a profit 2.5 times as big. In other words the perception of your pain is greater than that of your pleasure.

That pain of loss is 2.5 greater than pleasure of gain, in absolute terms, has been bandied about in literature for a while. What is the nature of a trader's state of mind as a function of trading (or more specifically, position checking) frequency?

One check on this is to look at the effect of multiplying losses by 2 and comparing with gains scaled at 1. Using SPY returns since 1993, I checked average returns for daily, weekly, and monthly intervals:

|      | Daily  | Weekly | Monthly |
|------|--------|--------|---------|
| Ave  | -0.003 | -0.005 | -0.002  |
| Pos  | 1855   | 411    | 411     |
| Tot  | 3529   | 730    | 169     |
| %Pos | 52     | 56     | 65      |

When the "effect" of losses on your soul is double that of gains, you are suffering, on average, in all intervals. Therefore, it is no coincidence there are so many psychologists/psychiatrists involved in trading. Percentage of the positive, however, scales up with longer intervals, so you feel bad less often.

Consider what happens when you lose: How much is required to break even?

| Loss | Required Gain | Ratio |
|------|---------------|-------|
| -20% | 25%           | 1.25  |
| -25% | 33.3%         | 1.33  |
| -50% | 100%          | 2.00  |
| -75% | 300%          | 4.00  |

Average ratio: 2.15

The ratio of required gain to loss rises rapidly as the losses increase. Although the above unscientific data points appear to be in the ballpark of the putative 2.5 ratio, the underlying ratios are clearly non-linear and NOT well described by a simple number. In fact any simple ratio is far too simplistic to be a good measure.

I would argue that a log-linear utility function is what an investor, and any rational individual, would want. In their famous paper on Prospect Theory, Kahneman and Tversky identified what appeared to be irrational behavior on the part of university students and some faculty when presented with hypothetical bets. The Nobel Prize-winning professors concluded that the students chose irrationally as compared to the gold standard of statistical expectations based on an arithmetic utility of money.

But if money compounds, one would want a log utility of money. When the examples cited in the study were recalculated with a log utility based on the relative net worth of typical students the results showed that the student subjects were invariably quite consistent with a log utility function. This re-opens the question: Were the subjects or the professors the irrational ones?

If one expresses the gains and losses in the above table as the natural log of the price relative, then the negative logs of the losses exactly cancel the logs of the gains.
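Both the break-even arithmetic and the log cancellation are easy to verify (required gain to recover a fractional loss L is g = L/(1-L)):

```python
import math

# Required gain to recover from a fractional loss L: g = L / (1 - L)
losses = [0.20, 0.25, 0.50, 0.75]
ratios = []
for loss in losses:
    gain = loss / (1 - loss)
    ratios.append(gain / loss)
    print(f"loss {loss:.0%} -> required gain {gain:.1%}, ratio {gain / loss:.2f}")

print(sum(ratios) / len(ratios))   # about 2.15, the table's average ratio

# In logs the asymmetry vanishes: ln(1 - L) + ln(1 + L/(1-L)) = 0
loss = 0.50
print(math.log(1 - loss) + math.log(1 + loss / (1 - loss)))   # 0.0 (to fp precision)
```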

These experiments that psychology professors run on students invariably involve the students winning or losing maybe \$100 or less. That's a small amount by any reasonable metric. \$100 is very small, for example, compared with their first year's salary out of school. So it's quite reasonable for the professors to assume that the amount is in the limit of a "small" amount, in the sense that ln(1+x) is approximately x if x is "small."

Any reasonable person, offered the opportunity to bet with a 50% chance of winning \$250 and a 50% chance of losing \$100, should take the bet. That's true even if he only has \$250 to his name, because he also has prospects for future earnings.

In this case, the professors are more rational than the monkeys.

## J. T. Holley wrote:

"Could it be that all the bruised and battered hold-outs from '00 - '03 will finally join in, and we resume the incessant trek toward the summit of market-based capitalism?" kz

How about this simple fact: for the first time in recent years that I can remember, the Dow and S&P indexes (for headline purposes) outperformed the price appreciation of houses and real estate across America, by roughly a two-to-one ratio. Now for the sake of simplicity, how many of the '00 - '03 bruised and battered people are going to scratch their heads and say, "twice as much, huh?"

I think the "Confidence Index" mentioned by Carret has a ways to go fellas; but this must obviously be tested.

"Any reasonable person, offered the opportunity to bet with a 50% chance of winning \$250 and a 50% chance of losing \$100, should take the bet, and that's true even if he only has \$250 to his name, because he also has prospects for FUTURE earnings."

I would agree that future earnings can be, and perhaps should be, factored in. But to a freshman with \$100 (not \$250), the 50% chance of no beer, pizza, or dating for four years might seem an unacceptable risk. Losing it all results in a utility of ln(0), the way I look at things, and ln(x) approaches negative infinity as x approaches zero.

A few points:

1. KT did include some bets in the thousands of dollars.

2. Most of the KT bets were fairly close calls even viewed from an expected arithmetic value as opposed to a log utility.

3. KT never concluded that the indifference ratio was 2.5 or any other number in their ground-breaking paper.

# Consecutive Up Days and Down Days, by Alston Mabry

January 28, 2007

Here is a very interesting study by TradingMarkets on consecutive up days and down days in individual stocks.

Some concerns:

They state that mean 1-day, 2-day and 1-week moves are their "benchmarks," but they never again refer to the 1-day and 2-day moves, which makes one at least a little suspicious about data snooping. It's better to say something like, "We looked at 1-day and 2-day moves, but there weren't any significant differences."

Speaking of significance, they don't provide standard deviations for their benchmarks or counts for total stocks in the x-day up/down groups, so one cannot calculate the significance of the sub-groups' deviations from the means of the overall group. The means of the sub-groups look interesting and convincing, but one would want to know that the differences are not created by a small number of highly volatile outliers.

And that brings up a broader issue: Are the benchmarks appropriate at all? TradingMarkets states:

We looked at over seven million trades from 1/1/95 to 6/30/06. The table below shows the average percentage gain/loss for all stocks during our test period over a 1-day, 2-day, and 1-week (5-days) period.

Of what relevance is this average 5-day move for these seven million trades? This is neither a cap-weighted nor an equal-weighted index, nor any reproducible ex-ante yardstick, but just an artifact of their database, conflating the performance of all types of companies in all sorts of market conditions and creating an "average."

To paraphrase, TradingMarkets is saying that the stocks that went down five straight days in week[zero] did better in week[+1] than the overall "average" 5-day move. So what? The question needs to be: If stock XYZ went down five straight days in week[zero], and then went up in week[+1], how did this up move compare to some other benchmark for week[+1]?

It's easy to imagine that the sub-groups in the study select out highly-volatile stocks that are subject to wide swings in value and strings of up and down days. It might be more fruitful to compare these stocks to their own history, to determine whether multi-day moves are predictive for that individual stock. One might find differences between highly volatile stocks and relatively sedate stocks.

# Lunar Cycles, from Philip J. McDonnell

Not too many days ago the New York Times published an article by Mark Hulbert claiming some credibility for the idea that certain phases of the Moon's cycle are more profitable than others. In particular, the academic paper cited claimed to show evidence that buying the market during new Moons was more profitable than buying during full Moons.

The Moon has the following cycles besides the roughly 12-hour daily tidal cycles:

• 27.321-day sidereal (orbital) cycle: the Moon returns to the same place among the stars
• 29.530-day synodic cycle: the Moon returns to the same phase
• 6585-day (about 18.03 years) Saros cycle: the Earth-Moon-Sun trio returns to the same configuration

To evaluate whether this cycle was even plausible, the correlations between market days 18, 19, 20, and 21 days apart were considered. These correspond to calendar-day lags of 26 through 30 days. The data used were the natural logs of the Dow Industrials price relatives, based on adjusted closes going back to 1950, a total of a little over 14,000 daily observations.

| Lag | Correlation |
|-----|-------------|
| 18  | -0.24%      |
| 19  | -0.71%      |
| 20  | 0.11%       |
| 21  | -0.81%      |

All of the above correlations are smaller than 1% in magnitude and are clearly quite consistent with randomness.
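The lag-correlation computation itself is a one-liner; a sketch run here on placeholder Gaussian returns rather than the actual Dow series. With roughly 14,000 observations the two-sigma significance band for a sample correlation is about 2/sqrt(N), or ±1.7%, so correlations under 1% are well within noise:

```python
import numpy as np

rng = np.random.default_rng(3)

def lag_correlation(returns, lag):
    # Correlation of each day's return with the return `lag` trading days earlier
    return np.corrcoef(returns[lag:], returns[:-lag])[0, 1]

# Placeholder Gaussian series standing in for ~14,000 daily log price relatives
returns = rng.standard_normal(14_000)

for lag in (18, 19, 20, 21):
    print(f"lag {lag}: {100 * lag_correlation(returns, lag):+.2f}%")
```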

A review of the Saros cycle of 6585 days using trading day lags of 4545 through 4558 yielded the following correlations:

| Lag  | Correlation |
|------|-------------|
| 4545 | -0.84%      |
| 4546 | -0.36%      |
| 4547 | 1.21%       |
| 4548 | 0.85%       |
| 4549 | -0.64%      |

Again we have a result which is completely consistent with randomness. One wonders if studying moonbeams too long can lead to lunacy.

# Equity Curve Random Generator, from Rick Foust

I found a fun and educational Equity Curve Random Generator where you can enter values of win/loss ratios and win probabilities and see their effect on returns over time. Note that increasing the number of lines (third blank) will overlay multiple runs on the chart. Playing with this revealed rather quickly that ratcheting up the win/loss ratio in tenths only gradually improves the curve, but ratcheting up the win probability in tenths rapidly improves the curve. Even hundredths are important. Try ratcheting the probability .55, .56, .57, .58, .59, .60 and you will see. Improving odds even a tiny amount can dramatically improve returns.

I would caution all that the author of the web site mentioned by Mr. Foust uses the Average Win/Loss ratio as his characteristic criterion. As Rick found, that criterion did not seem to be the most helpful. Part of my caution comes from the author's apparent use of the Average Win/Loss ratio in conjunction with the Kelly criterion. The Kelly criterion applies only to gambling games with binomial outcomes.

Some people have tried to extend it to multinomial outcomes such as we have in investments. They try to use the average win and the average loss as though they were binomial outcomes. In so doing they commit a basic arithmetic mistake. Implicitly they are assuming that the distributive law applies to logarithms. It does not and that is where they go wrong.

This error has been repeated to the point of being a meme. Many books espouse it, much software is written to calculate it and articles proclaim it to the unwary. The simple fact is that the incorrect formula invariably leads to over trading and will CAUSE the ruin which it ostensibly promises to prevent.
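The mistake can be made concrete with a hypothetical three-outcome bet (the outcomes and probabilities below are invented for illustration). Collapsing the distribution to its average win and average loss and applying binomial Kelly roughly doubles the true log-optimal fraction here, and at that overbet fraction the expected log growth is negative:

```python
import numpy as np

# Hypothetical multinomial trade distribution (return per unit bet):
outcomes = np.array([0.5, -0.9, -0.1])     # one win size, two loss sizes
probs    = np.array([0.5,  0.1,  0.4])

# The flawed shortcut: pretend the bet is binomial, +avg_win with
# probability p and -avg_loss with probability q, then apply Kelly.
p = probs[outcomes > 0].sum()
q = 1 - p
avg_win  = np.average(outcomes[outcomes > 0], weights=probs[outcomes > 0])
avg_loss = -np.average(outcomes[outcomes < 0], weights=probs[outcomes < 0])
f_flawed = p / avg_loss - q / avg_win      # Kelly for the collapsed binomial

# The true log-optimal fraction maximizes E[ln(1 + f * X)] over the
# actual multinomial distribution.
f_grid = np.linspace(0.0, 0.99, 10_000)
growth = np.array([probs @ np.log1p(f * outcomes) for f in f_grid])
f_true = f_grid[np.argmax(growth)]

print(f_flawed, f_true)                        # ~0.92 vs ~0.48: a large overbet
print(probs @ np.log1p(f_flawed * outcomes))   # negative expected log growth
```

The averaging implicitly pushes the logarithm inside the expectation, which overstates growth on both the win and loss sides; the resulting fraction has negative expected log growth and so grinds toward ruin.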

# A Letter from the President of the Old Speculators Club

I was watching CNBC over a bowl of cereal a short while ago and Pisani said something along the following lines: "The S&P cash hit 1388 or 1389 again this morning and backed off. This is about the seventh time this has happened." I don't generally follow information like this but have seen lengthy List conversations about various indices at various levels (including "the round"). First, is Pisani's information accurate (or at least close)? Second, is it indicative of anything important? I've heard much of "levels of support and resistance" but am not sure that they, or double and triple tops, are significant (it wasn't that long ago that Dow 11350 was being watched as an area of resistance; obviously it has been overcome).

## John Bollinger replies:

I find repeated visits to a level, line, band, etc. useful as 'logical places' to make decisions. This is the sort of thing that is hard to 'count', but relatively easy to trade. Some of you may know Fred Wynia; it was he who taught me the importance of making decisions at 'logical places'. I put 'logical places' in quotes as it is Fred's term.

Dr. Phil McDonnell replies:

Pisani is accurate. The previous 6 dates and highs are:

`11/13  1387.61`
`11/8   1388.92`
`11/8   1388.61`
`11/7   1388.19`
`10/27  1388.89`
`10/26  1389.45`

So this morning’s high made the 7th such high in the 1387-1389 range.

Under the category of knowing one’s adversary I would note that there has been a long-term up channel which one can draw on a chart over the last few years. The tops of significant rallies appear to be collinear, so drawing a line through them gives an upper limit to the channel for chartists. We hit that line on 10/26 at 1389 (or so) on the S&P.

Note that the line is an up sloping line but it has a much more gradual slope than the recent advance from the June-July lows. I would put this in the same category as round numbers and other such things which cannot possibly work but do. Human beings are superstitious and those who look at charts may sell at such junctions ‘just to be safe’.

# Variations on a Theme of Greenblatt, from Prof. Charles Pennington

Victor and Laurel note: A heated debate regarding Joel Greenblatt’s “The Little Book That Beats the Market” recently cropped up among our colleagues. Below is some detailed follow-up work from one of our eminent researchers who is as adroit at analysis of single crystal NMR of high temperature superconductors as he is at uncorking the seemingly suggestive system work of hedge fund managers with putative 40% returns. Please note our response, which follows, as well as earlier intriguing commentary which began in early November and is found further down on the site.

I’ll report here the results of a study that I did that addresses the results in Joel Greenblatt’s book. This study focuses on the large cap stocks that make up the S&P 500 index. Just as in Greenblatt’s work, I used the Compustat Point-in-Time database, in which the fundamental data are listed as they were at the time, and not restated.

Greenblatt’s ranking method involved both “earnings to price” ratio and “return on capital”. For “earnings to price”, he actually uses “EBIT” (earnings before interest and taxes) divided by “enterprise value” (market cap + debt + preferred stock), and for “return on capital” (”ROC”) he uses EBIT/(working capital + property, plant, and equipment). All these items can be specified using Compustat Point-in-Time.

After ranking stocks separately by E/P and ROC, he then takes these two ranking numbers and literally adds them together, and then finally ranks again based on that sum. He finds that the stocks that have both high E/P and high ROC tend to do well.
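The two-stage ranking can be sketched in a few lines. The tickers and fundamental values below are invented for illustration:

```python
# Greenblatt's two-stage ranking in miniature. Tickers and (E/P, ROC)
# values are invented for illustration.
stocks = {
    "AAA": (0.12, 0.40),   # decent on both E/P and ROC
    "BBB": (0.20, 0.05),   # high E/P, low ROC
    "CCC": (0.04, 0.60),   # low E/P, high ROC
    "DDD": (0.02, 0.03),   # poor on both
    "EEE": (0.10, 0.35),   # middling on both
}

def ranks(metric_index):
    # rank 1 = best (highest value of the chosen metric)
    ordered = sorted(stocks, key=lambda t: stocks[t][metric_index], reverse=True)
    return {t: i + 1 for i, t in enumerate(ordered)}

ep_rank, roc_rank = ranks(0), ranks(1)
# Add the two rank numbers together, then rank again on the sum:
combined = sorted(stocks, key=lambda t: ep_rank[t] + roc_rank[t])
print(combined)  # most- to least-favored
```

The stock that is merely decent on both measures ends up ranked ahead of the stocks that are outstanding on one measure but poor on the other, which is the essence of the combined ranking.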

Here are the ground rules for my study. Stocks are ranked and then purchased at the end of each quarter, and held in that decile until the next quarter, when stocks are re-ranked. The most recent trailing four quarters of EBIT are summed to find the trailing yearly EBIT. In order to be purchased, stocks must have been components of the S&P 500 as of the start of the calendar year under consideration. As of the purchase dates, their share price must be greater than \$2.

I checked and found that yes, the study did include Enron and WorldCom. Enron was bought on 9/28/2001 at \$27.23 and sold at \$0.60 for a loss of 97%. It was not re-purchased the next quarter because its share price had fallen below \$2. At the time of purchase, Enron was near the middle of the rankings in terms of both E/P and ROC.

For each stock the “total return” was calculated, including dividends, using data from what we believe to be a reputable commercial vendor. However, I confess that I need to check on what the exact algorithm is for computing total return when there is something complicated, such as a merger or a spinoff.

At the start of each quarter the stocks were sorted into deciles according to Greenblatt’s ranking method. For each decile, the average of the forward 1-quarter fractional total returns for the approximately 50 stocks was calculated. Calling that number “R”, we then calculated 100*ln(1+R) for that decile and that quarter, and I’ll let Dr. Phil McDonnell (a frequent site contributor, trader and academic) explain why we did that. (As long as that number is not too big or small, it’s going to be pretty close to the percentage change in the portfolio.)
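The reason, in brief: log returns add across periods, so averaging them respects compounding, whereas averaging simple percentage returns does not. A minimal illustration:

```python
import math

# Two hypothetical quarterly returns: +10% then -10%.
r1, r2 = 0.10, -0.10

# Their simple average is zero, yet the compounded result is a loss:
wealth = (1 + r1) * (1 + r2)               # 0.99

# Log returns 100*ln(1+R) add across periods, so summing them and
# exponentiating recovers the true compounded wealth:
lr1 = 100 * math.log(1 + r1)
lr2 = 100 * math.log(1 + r2)
recovered = math.exp((lr1 + lr2) / 100)    # 0.99 again
print(wealth, recovered)
```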

Our study covers 1992 to present, 59 quarters of data. The reason that we went back to 1992 was simply that we happen to already have had a convenient file listing the S&P components year-by-year back to 1992.

For each decile there are 59 quarterly returns. Below we give the results of our study, the average and standard deviation of those 59 numbers for each decile.

Decile 1 is the one with high E/P and high ROC; decile 10 is the one with low E/P and low ROC. The last column is the average divided by the standard deviation. Multiply that number by two and you have the annualized “Sharpe ratio” for that decile, if I understand the definitions correctly.

`1    3.84    7.89     49%`
`2    3.33    8.57     39%`
`3    3.07    8.23     37%`
`4    3.69    7.46     49%`
`5    3.34    6.79     49%`
`6    3.04    7.40     41%`
`7    2.44    7.32     33%`
`8    2.47    7.46     33%`
`9    2.35    9.98     24%`
`10  2.51   13.27     19%`

The Greenblatt “favorites” portfolio averages 3.84% per quarter with a standard deviation of 7.89%, with an average/standard deviation of 49%. The Greenblatt “bad guys” decile, decile 10, averages 2.51% with a standard deviation of 13.27%. So this confirms that the Greenblatt strategy has worked reasonably well since 1992 on the kinds of large-cap stocks that make up the S&P 500.
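The "multiply by two" annualization works because, if quarterly returns are roughly independent, the annual mean scales by 4 while the annual standard deviation scales by sqrt(4) = 2. Using the decile-1 figures:

```python
# Decile-1 figures from the table above: average 3.84% per quarter,
# standard deviation 7.89%.
avg_q, sd_q = 3.84, 7.89
ratio = avg_q / sd_q              # ~0.49, the table's last column

# Under an i.i.d. assumption a year of 4 quarters has mean 4*avg and
# standard deviation sqrt(4)*sd = 2*sd, so the ratio doubles:
annualized = (4 * avg_q) / (2 * sd_q)
print(round(ratio, 2), round(annualized, 2))
```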

An investment of \$1 in decile 1 stocks grew to \$9.63; \$1 invested in decile 10 stocks grew to \$4.39, and it was more volatile along the way.
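These terminal-wealth figures follow directly from the table: each entry is an average of 100·ln(1+R), so wealth compounds as exp(59·avg/100). A quick check (the small discrepancies reflect rounding of the table's averages):

```python
import math

# Each table entry is an average of 100*ln(1+R), so terminal wealth over
# the 59 quarters is exp(59 * avg / 100). Check deciles 1 and 10 against
# the quoted $9.63 and $4.39:
for avg_log_ret, quoted in [(3.84, 9.63), (2.51, 4.39)]:
    wealth = math.exp(59 * avg_log_ret / 100)
    print(round(wealth, 2), "vs quoted", quoted)
```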

Greenblatt’s data end at the end of year 2004, so below I will show you how this S&P 500 version of Greenblatt has performed since then. However, first, I will show you how some other strategies fared during the same 59 quarter period since 1992.

First, here are the results for a ranking based solely on E/P:

`Avg      SD     Avg/SD`
`3.54    8.91     40%`
`3.88    8.20     47%`
`3.50    8.55     41%`
`3.01    7.00     43%`
`3.04    6.62     46%`
`2.77    6.78     41%`
`2.65    6.97     38%`
`2.40    7.76     31%`
`2.88   10.05     29%`
`2.36   13.78     17%`

(First row: Highest E/P, Last row: Lowest E/P)

The results are similar to Greenblatt’s, though perhaps not quite as good. That’s not surprising (if you believe Greenblatt’s thesis), since E/P is one of Greenblatt’s two ranking factors.

ROC is Greenblatt’s other ranking factor, and below is the performance of deciles sorted based on ROC alone:

`Avg      SD    Avg/SD`
`3.72    8.03       46%`
`3.65    7.92       46%`
`3.08    6.69       46%`
`2.97    7.68       39%`
`2.68    7.81       34%`
`2.77    7.72       36%`
`2.72    8.25       33%`
`3.09    8.16       38%`
`2.85    8.76       33%`
`2.62   13.02       20%`

(First row: Highest ROC; Last row: Lowest ROC)

Again, the highest ranked ROC deciles performed better than the lowest ROC deciles.

So it seems that both E/P and ROC have some independent value as ranking criteria (though we haven’t examined the extent to which E/P and ROC are correlated or anti-correlated).

Finally, here are a few other ranking methods.

First, here’s another “value” ranking method. Many value investors claim that it’s bullish if a company has a high ratio of cash-and-equivalent on hand to market-value-plus-debt. Below is the performance according to that ranking:

`Avg      SD   Avg/SD`
`3.96   10.40      38%`
`3.42   11.43      30%`
`3.14    9.68      32%`
`2.73    8.95      31%`
`3.44    7.72      45%`
`2.81    7.47      38%`
`2.91    6.73      43%`
`2.81    6.79      41%`
`2.49    6.26      40%`
`2.57    6.05      42%`

First row: Highest cash/(market value plus debt); Last row: Lowest.

Here the firms with the highest cash had the highest average return, but they also had a relatively high standard deviation, and there is no clear trend in the Sharpe ratio vs. decile number. I would argue therefore that this “cash” ranking did not have much value.

Others have suggested that the Greenblatt effect might be some artifact of share price and/or market capitalization. So here are studies of those factors.

First, share price:

`Avg      SD     Avg/SD`
`3.04   14.94    20%`
`3.14   10.12    31%`
`3.39    9.11    37%`
`2.62    7.43    35%`
`3.35    7.58    44%`
`3.19    7.14    45%`
`2.76    7.75    36%`
`2.75    6.37    43%`
`2.71    6.74    40%`
`3.03    6.76    45%`

First row: Lowest share price; Last row: Highest share price

This table shows no trend in return vs. share price. The lower share prices, however, do have higher standard deviations in their returns, so arguably one should focus on higher share priced stocks for a smoother ride.

Next here are the results for a decile ranking based on market capitalization:

`Avg      SD     Avg/SD`
`3.33   12.14    27%`
`3.52    9.94    35%`
`2.89    9.36    31%`
`3.35    8.21    41%`
`3.46    7.48    46%`
`3.36    6.49    52%`
`2.62    7.71    34%`
`2.45    7.19    34%`
`2.57    7.22    36%`
`2.66    7.67    35%`

First row: Lowest market cap; Last row: Highest market cap

The lowest market caps did outperform the highest market caps by a small amount. However, their volatility was much higher, and their Sharpe ratios were about the same or lower. So it is not plausible to think that the Greenblatt effect, as observed in this study, is an artifact of small market capitalization.

Victor and Laurel compliment and caution:

We would just add that the “Minister’s” study leaves out the performance since the retrospective data ran out and it ain’t pretty. The Minister is complimented on the perfect study for DailySpec: totally good methodology suggesting fruitful lines of inquiry, but nothing that violates his mandate as “Minister of Non-Predictive Studies”.

Professor Pennington returns with updated figures:

Here is an update of the recent performance of the Greenblatt ranking system applied to S&P stocks. Greenblatt’s book gives data through the end of 2004. Shown below is data since 2004.

`10      9        8        7        6        5        4       3        2       1`
`12/31/2004  -7.6   -3.4    -0.9    -5.3    -2.0     0.8    -0.7    -0.1    -2.2   -1.5`
`03/31/2005   5.6    0.9     4.7     1.1     3.0     2.6     3.1     0.9    -0.2    5.3`
`06/30/2005   7.1    7.8     3.9     6.3     3.7     0.8     4.8     4.0     4.7    3.5`
`09/30/2005  -2.9    2.1     1.6     2.4     1.6     3.6     3.2     4.8     1.9    5.9`
`12/31/2005  10.2    6.6     7.8     4.4     7.3     6.7     5.9     4.9     5.9    1.8`
`03/31/2006  -7.9   -4.5     1.4    -1.8    -0.1    -0.9    -1.2    -1.8     0.4   -1.4`
`06/30/2006   0.9    3.3     2.7     4.8     3.9     7.9     1.6     5.7     3.2    4.9`
`09/30/2006   3.5    3.8     3.6     4.5     2.6     4.3     4.4     3.6     3.6    1.4`
`Avg          1.1     2.1     3.1     2.0     2.5     3.2     2.7     2.7     2.2     2.5`
`SD           6.7     4.3     2.6     3.9     2.8     3.0     2.5     2.7     2.7     2.9`
`Avg/SD       17%     48%    120%     52%     89%    106%    104%    101%     80%     85%`

Short story is that the high ranked decile, decile 1 (high E/P, high ROC), gained an average 2.5% per quarter since 2005 with standard deviation 2.9%, and the least favored decile, decile 10 (low E/P, low ROC) returned an average 1.1% per quarter with standard deviation 6.7%.

In such a short time frame, this one’s probably a coin toss, but it looks like it did go in Greenblatt’s favor.

Dr. Phil McDonnell lauds and extends:

Kudos to Prof. Pennington for his thorough review of the Greenblatt study. His use of the log of the price relative is exactly the right way to go to take into account compounding.

In my opinion the best time period to study is the out of sample post publication time frame from 12/2004 to the present. Using this period eliminates most of the concerns and biases which I feared including the post publication bias.

Based upon that period I looked at the Spearman rank correlation coefficient for the mean and for the Sharpe Ratio(*). The basic idea is to see whether there is an overall correlation beyond just a differential between the top decile and the bottom. In this case we would expect a negative correlation simply because of the arbitrary ordering of the deciles by Prof. Pennington. The following R code gives us our answer:

`# Test the Pennington-Greenblatt data using the robust Spearman rank correlation`
`avg <- c(1.1, 2.1, 3.1, 2, 2.5, 3.2, 2.7, 2.7, 2.2, 2.5)`
`n <- c(10, 9, 8, 7, 6, 5, 4, 3, 2, 1)`
`sr <- c(17, 48, 120, 52, 89, 106, 104, 101, 80, 85)`
`cor.test(avg, n, method = "spearman")`
`cor.test(sr, n, method = "spearman")`

With respect to the average we get:

Spearman's rank correlation rho

`data:  avg and n`
`S = 226.3731, p-value = 0.2899`
`alternative hypothesis: true rho is not equal to 0`
`sample estimates: rho = -0.3719581`

Here rho is -37%, with an insignificant p-value of 29%.

With respect to the Sharpe Ratio(*) we get:

Spearman's rank correlation rho

`data:  sr and n`
`S = 218, p-value = 0.3677`
`alternative hypothesis: true rho is not equal to 0`
`sample estimates: rho = -0.3212121`

Here rho is -32% and the p-value is 37%, also non-significant.

(*) Minor quibble on the Sharpe Ratio: The usual formula for the Sharpe Ratio is:

SR = (average - tBillRate) / stdev

The idea is that it measures the excess return over and above the riskless T-bill rate; it is thus the extra return one receives for taking on risk. However, in the present case making this adjustment would not change the ranking of the deciles at all, since each average is adjusted by the same amount. Thus the Spearman rank correlation test is robust even to this factor.
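The robustness claim is easy to verify directly: Spearman's statistic depends only on rank orderings, and subtracting the same T-bill rate from every decile's average leaves the ordering unchanged. A small check in Python (the T-bill figure is an arbitrary placeholder, not a historical rate):

```python
# Spearman's statistic depends only on rank orderings. Subtracting a
# constant T-bill rate from every decile's average cannot change the
# ranks, hence cannot change rho. (The 1.2 is an arbitrary placeholder.)
av = [1.1, 2.1, 3.1, 2.0, 2.5, 3.2, 2.7, 2.7, 2.2, 2.5]
tbill = 1.2

def rank(xs):
    # 1-based ranks, averaging ranks for tied values
    s = sorted(xs)
    return [sum(i + 1 for i, v in enumerate(s) if v == x) / s.count(x)
            for x in xs]

assert rank(av) == rank([x - tbill for x in av])
print(rank(av))
```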

Victor and Laurel rejoin:

We suspect, as does Russell Sears, who ran a four minute mile and is always on target, that Greenblatt isn’t as careful with his data as he would lead us to believe, that a student did it for him, and that there are millions of multiple comparisons involved in his original work. It doesn’t make sense that you could make a profit without a forward earnings estimate, and that you would be paid just for assuming things so close to cost, with little risk.

I concur with the essence of your doubts (What!, no expectations!?) even with Prof. Pennington’s detailed validation. Haugen also re-did Greenblatt’s work verbatim on his (cleaner? better?) database (written up in Barron’s some time ago) and derived some different numbers — but not wildly different. But then Haugen is touting advice-for-profit of nearly the same kind, so there are caveats. But, Haugen is not dishonest, and the advice he sells also does carry expectational measures that help him squeeze more alpha with less variance (so he says), as we would both expect.

I am hesitant to disagree with you that the market rarely offers “freebies” for naively assuming risk, but I cannot help but ruminate upon the question: “Do the results make sense?” Bogus data, future information, dredging and questionable strategy heuristics aside, “loss-aversion” and “disposition effects” are powerful anomaly creators, especially in combination with feedback trading. I will grant you that “The Price Is (rather often) Right”, especially when conflicted with sparse non-price time-series data. Maybe elevated short-interest levels will soon make these disappear too, or at least delay gratification for a sufficiently demoralizing period of time.

One nagging thought: Is there really, as you suggest, such “little risk” in the undertaking? I think one might be surprised by the qualitative “risk”, when anecdotally assessed over time. Someone like Lakonishok might answer: “How can they be riskier if they produce more return?” But this seems insufficient. Risk, like HIV, can hide or remain dormant for extended periods (e.g. inflation in the 90s). I posit that there is risk being shouldered, but perhaps it’s different (i.e., a different array of factor risks) in each epoch, so it’s hard if not impossible to systematically isolate, let alone forecast. How can one measure the risk of buying a Chapter 11 candidate concurrent to potential deflation? It’s binary. Perhaps it’s just this embedded tail risk which, like reinsurance, is good business to write if properly priced (The Reversion Trader?). And perhaps one day the inherent risk will manifest itself and thereafter dissuade anyone from naively pursuing The Magic Formula. Then again, maybe there is just a preponderance of traders with differing forms of myopia.

By the way, Prof. Pennington’s high/low return spread numbers for RoC seem elevated. The E/P spreads look about right, but it remains the inferior value proxy. “Quality” in general seems more efficiently priced.

# Market Queries From a Northern Neighbor, by John Burckett

November 2, 2006 | 1 Comment

I am a 27-year-old professional equity derivatives trader with several questions and comments for Dr. Niederhoffer and Ms. Kenner. I just read Practical Speculation. I had previously read Joel Greenblatt’s The Little Book That Beats the Market. Needless to say, the two works propound extremely different views on the relative merits of growth versus value stocks and on the ideas of Benjamin Graham. I’m sure this is a debate that has been beaten to death since before I was born, and I’m sure you are entirely sick of the whole thing, but please bear with me. I am interested in reconciling the ideas of the two authors. I would like your opinion on Mr. Greenblatt’s work and his “system” for investing.

I wondered specifically what Dr. Niederhoffer and Ms. Kenner’s response would be to the data cited in Greenblatt’s book. Is this evidence entirely worthless due to statistical and sampling errors? Is it only since 1965 (the Value Line data in the book covers 1965-2002) that growth has overtaken value? What do Dr. Niederhoffer and Ms. Kenner think is the correct way to value a stock? Since it’s difficult to precisely ascertain current or even past “real” earnings for a single stock, let alone the market, how can one hope to accurately predict the level of future earnings (as one must for growth stocks)? What valuation model should be used? What valuation model works for both “growth” and “value” stocks? (It seems fairly silly to categorize all stocks into one of these two fairly arbitrary columns, but that is what seems to happen.)

Anyone can go to Mr. Greenblatt’s website and get a list of “value” stocks. He argues that his system (buy 20 or 30 of these value stocks and then sell them after a year and get new ones from an updated list on his website) will beat market returns over time. I am suspicious, but where is the logical flaw or statistical error in Mr. Greenblatt’s book? Will his method really work, and if not, why? Mr. Greenblatt posted excellent returns over many years at his hedge fund (I believe 10 years of returns are necessary to eliminate luck as the explanation of a trader’s returns). I’m sure he wasn’t simply applying the method from his book, but he is clearly a “value” investor.

To me, the strength of “value” investing, especially as described by Mr. Greenblatt, is its seeming logic. Even though you can’t buy a stock portfolio for 50% of its liquidation value as Graham suggested, the market and especially individual stocks can fluctuate fairly wildly even over short time frames, so clearly it is possible at times to buy good stocks or the whole market “cheaply.” As I write this, AMD has a 52-week range of 16.90 - 42.70… with roughly 485 million shares outstanding, that means in terms of market value AMD was (according to the market) “worth” almost \$21 billion in late January, and only \$8 billion or so in late July. Maybe some of this move was due to new (bad) information, but in all probability (since the stock subsequently recovered, then dropped again) it was due to the overtrading and ridiculous focus on short-term results that Dr. Niederhoffer and Ms. Kenner lambaste in their book. Take a look at the way retail stocks move around on monthly same-store sales numbers, or the way oil and gas move on weekly inventory numbers, for further examples of ridiculous overtrading and short-term focus.

Nevertheless, to ignore volatility (which is how I make my living) and keep your eyes firmly on the long-term potential of a stock leads to two pitfalls. First, you miss out on opportunities when the stock swings around in the short run (for example, you could have sold some medium-dated calls in AMD in Jan, then used the proceeds to buy additional stock in July). Second, you are ignoring risk; in the short-run, you could see such severe swings that you go broke instead of getting your 1.5million % a century return. Volatility might be much higher than it “should” be, it might be due to overtrading, and it certainly is the result of a focus on meaningless short-term information, but it is a fact of life. In my opinion, it’s better to take advantage of this fact than to ignore it.

One solution is to actually buy volatility itself. There are several studies showing that a portfolio containing a volatility component of 10% or so will outperform a similar portfolio with no volatility component (an example of a volatility component would be VIX futures or a similar instrument, essentially just a long option position). The general basis for this is that implied volatility in the options market usually increases when the market drops. You are diversifying your portfolio with a negatively correlated asset. Since the VIX hovers at a very cheap 10 or so these days, it seems like a great hedge.
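The diversification argument rests on the covariance term of two-asset portfolio variance, w1²s1² + w2²s2² + 2·w1·w2·ρ·s1·s2: a negative ρ makes the cross term subtract risk. A sketch with invented numbers (the weights, volatilities, and the -0.7 correlation are illustrative assumptions, not taken from the studies mentioned):

```python
# Portfolio variance: w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2.
# Hypothetical inputs: 90% equities (15% vol), 10% long-volatility
# sleeve (30% vol), correlation -0.7. All numbers are invented.
w1, w2 = 0.9, 0.1
s1, s2 = 0.15, 0.30
rho = -0.7

var_with = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
var_without = s1 ** 2          # 100% equities, for comparison
print(var_with ** 0.5, var_without ** 0.5)
```

With these inputs the blended portfolio's volatility comes out well below the all-equity volatility, despite the sleeve itself being the more volatile asset.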

Any reply or even a suggestion of further reading on the value/growth debate would be greatly appreciated. I have also emailed Mr. Greenblatt’s website with similar questions (you can find that email below).

Doc Castaldo illuminates:

He has so many inter-related questions it is hard to know where to begin. The Tim Loughran article “Do Investors Capture the Value Premium?” which some Spec (Dr. Zussman perhaps?) sent to Steve Wisdom recently seems relevant, and I sent it to him (the answer Loughran gives is no). I believe Prof. Pennington and Mr. Dude reviewed the Greenblatt book and found it well done; though some of us have doubts as to how well the results will hold up going forward.

I have studied this deeply, and although it is impossible to adequately reconcile this argument, my reply is that there is enough room in the world for value investors and growth investors. One is more of a science and the other is more of an art, and that which works for one will not work for another. And they tend to be complementary, in that when value investing is in favor, growth is out of favor, and vice versa.

Case in point: the late ’90s. Nobody, and I mean nobody, wanted to be a value investor. At the time I was with a regional brokerage firm and we had one of the best value fund managers around, and he was never asked to speak anywhere. Everybody wanted growth and hard chargers. He told me directly that the worm would turn and that which is hated will once again be loved. In 2001 and onward his style came back into vogue. His numbers became very good when the implosion of growth occurred and value turned to the good.

I feel that value investing is more of a quantitative approach to investing. It relies on metrics such as ROE, price to sales, and price to book. You can have value investors, deep value, vulture investors, etc. And it is very important with value investing that one be a patient investor with a longer-term time frame. I have referenced the Hennessy Funds as excellent quant funds. They have a very rigid stock selection process and rebalance their portfolio annually, a methodology they bought the rights to from James O’Shaughnessy, who introduced it in his book How to Retire Rich. Their long-term track record is very good and they did very well from 2000 onward, but this year the results have for the most part been flat. Martin Whitman is a deep value investor and his Third Avenue Fund has done very well over time, as have the Davis Funds. The First Eagle funds do excellent work with their global funds.

Growth investing is more of an art. It requires timing. Growth investing of the kind William O’Neil advocates can be very successful yet very volatile. Small-cap growth investing often requires a longer-term time horizon, as the swings in price can be quite hard to take. I have always liked Ralph Wanger (A Zebra in Lion Country) and Tom Marsico in this area.

It is very important that the style of investing one uses fit one’s financial education, character, and personality, among other things. The two styles most definitely require different knowledge and different wiring.

As to the trading that the Chair employs, I will let him speak for himself, but I am confident he would say that the methods one uses for value investing and growth investing would never work for his methods of day trading or swing trading.

To use a poker analogy (alas, it always comes down to poker), I liken value investors to people like Dan Harrington, Howard Lederer and Phil Hellmuth. They are percentage players, very methodical. They wait for premium hands and play those. These are the tight players.

On the other side of the ledger are the growth investors, such as Phil Ivey and Gus Hansen: aggressive, sometimes to a fault; they play many hands, often on feel.

Both styles and much more in between are effective and can bring one to the promised land, they just take different routes.

Dr. Phil McDonnell reminisces:

Many years ago I was engaged in fundamental research on stocks for a finance class at Berkeley. Upon showing my results to one of the rising young finance professors in the Business School, I had a rude awakening. He promptly but kindly pointed out to me the myriad biases which enter into such a study.

It prompts one to paraphrase the poem by Elizabeth Barrett Browning:

“How Do I Confound Thee?” Let me count the ways in which fundamental stock data can confound:

1. Stale Data. Data are not always reported on time. Some arrive late, and most studies do not account for this adequately.
2. Retrospective Bias. Most fundamental databases use the current ‘best’ information, believing that is what you want now. But for historical studies that means the data may have been retrospectively edited as much as several years after the fact. This is a form of knowledge of the future. If you analyzed Enron before its collapse, the fundamentals looked good and the stock was too cheap. If you analyzed it today with a retrospective database, you would know that the company had catastrophic losses. But the truth about the losses was not known at the time, and the adjusted numbers only came out years later.
3. Sample or Survivor Bias. Use of a current database often results in a sample bias, because only companies which continue to exist in the present are included in the sample. To avoid this one must go to a historical source in existence at the time and select the sample for each month by hand. Many companies are delisted or otherwise stop trading; for these, the data must be reconstructed from historically extant sources. Otherwise this bias translates into a strong bias in favor of value investing strategies. A strategy which buys out-of-favor, high-risk, or near-bankrupt companies will always do well with this bias: the bias guarantees that they will still be around years later, because they are still in the database.
4. Data Mining. There are many variables to choose from with fundamental data. There are countless more transformed ratios or composite variables which can be constructed. This leads to the ability to try many things. Thus the researcher may have inadvertently tried many hypotheses before coming to the one presented as the best. Because fundamental data are low frequency (quarterly at best) there are only 40 observations in a 10 year period. True statistical significance can quickly vanish in a study of many hypotheses.
5. Data Mining by Proxy. Everyone reads the paper and keeps up with current trends in investments, so our thoughts are always influenced by the findings of other researchers. Even if a researcher did a study which avoided the usual data mining bias, it may be simply because he took someone else’s results as a starting point. In effect he used their results as a form of data mining by proxy to rule out blind alleys.
6. Fortuitous Events. In the 1990’s F*** & Fr**** published papers about factor models to augment the Sharpe beta model. Their significant new factor was the Price to Book ratio. In James O’Shaughnessy’s book What Works on Wall Street one can see a sudden upward surge in value strategies in the early 1990’s, coincident with the publication of the F & F model. However that event was a single, one-time upward revaluation of value stocks in the 1990’s. Before and after it, the effect vanishes.
7. Post Publication Blues. After publication of any academic paper or book the money making method usually stops working. Sometimes it is due to data mining or some flaw in the study and the putative phenomenon was never really there. The market is efficient. If everyone knows something it will usually stop working even if the original study was valid.

Prof. Greenblatt’s book is a fun read and remarkably brief. In fact if someone wanted to just get the gist of it, each chapter ends with a very clear summary of the key points in that chapter. It would be possible to get all the main points in about 10 minutes simply by reading the summaries. Let me say that if one were to use a fundamentally oriented strategy then the profit margin and Book to Price are probably the first two on the list. To be fair to the author, reciting one’s efforts to avoid sample biases in a book intended for a popular audience probably would not help sales. Such discussion is usually reserved for academic papers but nevertheless its absence does not give reassurance that all possible bias was eliminated.

The best way to test this strategy is not to go to the library and do all the work yourself. Rather one could simply go to the web site and copy down all the stocks recommended. Then in 6 months and 12 months revisit them to see how they have done and to see if the performance was statistically significant.

Ever since those Berkeley days more than 30 years ago I have always been distrustful of fundamental studies. That lesson from then Prof. Niederhoffer has helped shape my market studies in many ways. The bias of fundamental data is yet another way the market can confound the research oriented trader.

Jaim Klein replies:

Let’s simplify. The market universe is large and diverse enough to accommodate different successful strategies. One catches fish with a net, another with bait. As for the intrinsic value of anything, there is no such thing: the value of a thing is the price it can fetch at a certain moment and place. At 27 I was also confused. Experience is the best (probably the only) teacher. He has to do his own work and reach his own conclusions. It is time consuming, but I know no other way. He can also observe what successful people are doing and try to copy them until he can do it too.

Prof. Charles Pennington rebuts:

Dr. Phil lists 7 things that can go wrong in research on stock performance and its relation to fundamentals. Oddly enough, the Greenblatt book itself also lists exactly 7 such reasons on page 146! They’re not exactly the same ones, but there is plenty of overlap. I’ll list Greenblatt’s 7 with my own paraphrasing:

1. Data weren’t available at the time (look-ahead bias)
2. Data “cleaned up”, bankruptcies, etc., removed (survivorship bias)
3. Study included stocks too small to buy
4. Study neglected transaction costs, which would have been significant
5. Stocks outperformed because they were riskier than the market
6. Data mining
7. Data mining by proxy

Greenblatt: “Luckily the magic formula study doesn’t appear to have had any of these problems. A newly released database from Standard and Poor’s Compustat, called ‘Point in Time’, was used. This database contains the exact information that was available to Compustat customers on each date tested during the study period. The database goes back 17 years, the time period selected for the magic formula study. By using only this special database, it was possible to ensure that no look-ahead or survivorship bias took place.”

To all the biases that we consider, I’ll add the “not invented here” bias. It’s too easy to assume that no one else out there can do rigorous research. I think Greenblatt’s is fine.

(He didn’t, however, produce any original results on jokes. His jokes are all out of the Buffett/value-school jokebook. Fondly recall “There are two rules of investing. 1. Don’t lose money. 2. Don’t forget rule number 1.” That one’s there along with all your other favorites.)

Dr. Phil McDonnell replies:

The way we all remember the late 1990s is the dot com bubble. It was the front page mega meme. The stealth meme was the value stock idea.

Rather than think of it as a single paper, consider the paper as the seminal idea of a meme. From the original paper there were follow-on papers by various academics as well as FF. From there the meme spread to the index publishers, who always want a new ‘product’ to generate marketing excitement. Naturally the index guys sold it to the funds and money managers, who promptly started new funds and rejiggered old funds along the lines of the new meme. The money management industry always wants new products, but each firm needs to act defensively as well. For example, Vanguard cannot eschew the new fad and leave the playing field open for Fidelity. As with all memes it grows slowly and diffuses through society.

In all fairness, one can never ‘prove’ causation, only correlation, using statistics. But it is clear to me that something happened which caused the value part (really just the Magic Formula stocks) of the market to triple during those years, albeit with only negligible public awareness early on.

For the sake of argument assume that the cause was not the FF paper and its impact on the value meme. Then what was Dr. Zussman’s ‘unseen factor(s)’ which caused a triple in value? Which factor or factors are more plausible?

My prediction for the end of the next meme is the collapse of the Adventurer’s bubble. To play it one needs to sell. But I would guess that it is only a one to three year collapse.

# Some Variations on Happening Again, by Victor Niederhoffer

On Sunday, Oct. 22, at 6:34 a.m., the S&P spiked up from 1374 to 1396 in one minute, shocking the sensibilities of all shorts, raising the hopes of all longs, and alerting all risk managers to the possibility of a squeeze of shorts the same way they are always attuned to the bust of longs, from the work of the doomsdayists and their academic legitimizers. Within a minute it had moved back to 1376, and it seemed like just a bad dream to those caught the wrong way and to longs who didn't take the profits. I was immediately reminded that the most important thing that the Palindrome taught me in the 16 years of our close relationship was always to use two cans of tennis balls when playing a practice match as it saves time, and the one and only thing that a personage who worked for me and then became a billion-dollar fundist specializing in wringing out 3% annual returns taught me was that when a terrible price against you appears on the screen, enough to take your breath away and give you a heart attack if you’re old and out of shape, and then you realize it’s just an error and forget about it — then you’re really in trouble, as shortly thereafter the market will inevitably go to that price and much worse. I believe that he called such events “Finnigans,” although when I took him out to dinner after giving him a drubbing in tennis recently he failed to remember the appellation. Such a “Finnigan” occurred on Oct. 22, as this week the S&P went from 1370 to 1395 in a gradual ascent over four straight days of rises — the same terrible distance as the misprint that was taken down by the authorities.

It’s Oct. 27 again today, the ninth anniversary of the only day that they closed the NYSE after the market declined the circuit-breaker limit of 550 Dow points, a day that will live in infamy as it was enough to bring me down, cause enormous losses to my customers and me, put an end to my customer business for many years, and appropriately humble me for the rest of my life (I was humble before also, but not enough). I have made it a point to remind everyone of how liable to error I am every week or so ever since, but it is always good to repent and reflect on the anniversary of such a tragedy (such tragedy being a source of great merriment, misrepresentation, and hoped-for recurrence by my enemies).

In the immediate aftermath of the Oct. 27 disaster, I received 50 copies of Tuesdays with Morrie, one of the most boring and depressing books I have ever read. Now, I receive many letters suggesting that I take the day off, and others inquiring indirectly and gently: “How are you doing this year, Vic? We were worried (hope, hope) that tragedy might have befallen you again. We heard you looked crestfallen at the Spec Party and you’ve stopped reporting your results.”

Yes, I have stopped reporting my results for the same reason that General George Washington didn't report his troop strength during the Revolutionary War. If the situation were bad, then the enemies would gather strength and confidence and be able to attack with renewed vigor and impunity. If the situation were good, then he wanted the enemy to be overconfident so that they would be asleep on holidays, especially around the end of the year when great victories can be won. I answered such correspondents with the Washington lesson, and added that the hoped-for reports of our demise, such as have appeared on the message boards and papers spawned by the enemies encamped with opposite positions and agendas (much as reports of Washington’s death were spawned by the French generals who came to America hoping for prestigious positions, only to be humiliated with token corporalships), may possibly have been exaggerated.

The Fantasticks, currently running as a revival on Broadway, is the perfect musical, as it has all the elements of the whole history of musicals stripped down to bare essentials: the boy hero climbs a wall to assure his father that there is nothing on the other side (actually, there is a beautiful damsel he plans to run away with). It was one of those moments that seem eerily familiar; I had just read Frankensteins of Fraud by Joseph T. Wells after a very educational visit to the Fraud Museum in Austin, Texas (which I’ll report on in detail later). In one part of the book, Wells describes how the Crazy Eddie team was able to engage in a hundred different inventory frauds. At the top of the list was the story of how one of the Antars climbed a ladder to report inventory that was actually empty boxes or vacant space. Here’s a sample:


When the auditors came to make their counts [the warehouse manager] climbed on to the product stacks himself and called the numbers down to the person below. If the auditor insisted on climbing up, the warehouse manager held the auditor’s notebook and marked the contents himself. He used a range of inflationary strategies: counting empty boxes as merchandise, listing cheap merchandise at premium prices, building tall dummy columns at the edge of a large shelf and claiming the containers were stacked three or four deep when the rear area was in fact empty…The warehouse also fiddled with what retail people call “the reeps,” which are repossessions — products that have to be returned to the manufacturer, who then refunds the wholesale cost of the merchandise to the store. How easy it was to do all this! Pulling it off is like playing with kids. The big firms use their audit detail as a training ground. It’s not their fault, but these auditors, they’re kids just out of college — nice ones with 3.5 to 4.0 grade point averages. The auditors only took inventories at a third of the stores anyway, and I helped them decide which stores to look at. The auditor would hand a warehouse clerk a sheet of paper and say, ‘Make me a copy of this, will you?’ The paper lists the test counts showing which parts of the inventory the auditor planned to do tests on and which parts they’d just take rough counts. Of course we’d make a copy for ourselves. We knew where they were counting and where we could do what we pleased.


Considering the ease and variety of the frauds that this retailer was able to perpetrate, the methods of inflating comp store sales were particularly ingenious. The above is merely the tip of the iceberg detailed by the author, with much help from the divorce and family feuds between the parties. I have a certain skepticism about reports of fantastic profit growth from many fledgling retail chains, especially when the audits are performed by auditors who rely strictly on recent graduates; and if the firms do rely on old codgers, they should at least make sure that the auditors do all the climbing of ladders and opening of boxes themselves, with reports and notations made to a member of their own staff rather than to the company’s representative.

How can one talk about the current bull move in stocks, from a low of 1223 on June 15, 2006, when the octogenarian Alan Abelson returned from his four months on leave to continue his humorous and acerbic bearish 40-year running column “Up and Down Wall Street”? He returned with the query at the market bottom, “The only question is whether the market is in a cyclical or a secular bear market,” and made witty remarks about how this time Chairman Ben Bernanke is truly serious about inflation. Since that time, he has been continuously bearish about the market, trotting out what seems like (I have not performed the content analysis here the same way I did in Prac Spec, where I analyzed all of his permanently bearish columns from 1966 to 2002 while the Dow moved from 800 to 10000) over a hundred reasons to be bearish, with nary a single bullish column and merely one nod to his cloudy crystal ball, as the S&P climbed continuously to its current level of 1389. What can one say about this documented record of what must be the least accurate but most influential forecaster since Cassandra and Laocoon of Troy? One can say that perhaps he is part of the necessary backdrop of pessimism for a bull market to occur, and that it is not chance that his return from leave coincided with this continuous increase in wealth.

Dr. Phil McDonnell responds:

The Chair writes of a mysterious event called a Finnegan. During such an event the market appears to hit an ephemeral number, but it is quickly nullified. Market participants are induced to think about possibilities previously unforeseen. In like manner, managers conducting audits are all too willing to believe inflated numbers. It feathers their own nest.

Such events should not be dismissed as mere urban legends. I have met Finnegan. He is very real.

When counting the tomatoes in my garden one would think that would determine how many I have. One would think that a bitch suckling her newborn pups would know how many mouths to feed. It is all a matter of counting and auditing. So one would think.

Finnegan has traveled about 10 miles from his home in Renton to our home. He never knocks or introduces himself. He doesn’t have to. I have seen him and know him from his mug shot. Every day when I check the garden there are more vegetables with little bite marks. Tomatoes which I had previously counted are now inedible. All my audits are completely useless. The only thing I am left with is a little green souvenir with bite marks from a tiny mouth. For that I am grateful. Without it, my story would be just another Bigfoot or UFO anecdote. After all, both stories started right here in the Northwest. Ostensibly the first UFOs were sighted in 1947 by Kenneth Arnold flying near Mt. Rainier. Mt. Rainier is the biggest and probably the most spectacular mountain in the lower 48 states. It is located near Renton, just a few miles south of here.

There is no doubt about which one Finnegan is. He is the one who built his nest in a particular tree with a vantage point so he could watch our dog. Of all the gray squirrels in that nest Finnegan stands out. When our dog barks at the squirrels Finnegan barks back. After all he was the oldest of his puppy litter and knows what it takes to be the alpha male.

For any doubters who think this may be an urban legend: I still have the souvenir.

# Bayesian Methods, by James Sogi

The debate on the relative merits of classical predictive statistical analysis versus Bayesian analysis, when applied to markets for which you have a prior probability computed for a given time frame, comes down to this: is it better to exit at the optimal time determined at entry of the trade, or to update your probabilities and trade on the arrival of new information (new ticks, changes in price, news, or announcements) while the trade is pending? The classical position asserts that you have to trade the original probabilities, and that altering course creates the danger of Bacon's "switches" and diminishes the favorable edge. The prior distribution has ups and downs in the returns, but overall the summed probabilities will be positive over the long run, and trying to time this distribution over the short run reduces the overall return. The Bayesian position argues that adjusting the position during the trade as new information arrives can increase the probabilities and returns and avoid the "switches". In practice the former seems to be beating the latter, but this may be due to lurking variables in execution. This could be tested easily enough on historical data. The problem in testing is which parameters to use for the posterior criteria.

Thomas Leonard and John Hsu's book Bayesian Methods, from the Cambridge Series in Statistical and Probabilistic Mathematics, has understandable definitions of the concepts for both the practitioner and the theoretician. The Bayesian paradigm investigates the inductive modeling process, where inductive thought and data analysis are needed to develop and check plausible models. Indeed, one of the main reasons for the spec list is inductive thought to develop models, and their testing. Mathematics and deductive reasoning are then used to test those models. Too much concentration on deduction can reduce insight, and too much concentration on induction can reduce focus. An iterative inductive/deductive modeling process has been suggested. Bayes' Theorem states, generally, that posterior information equals prior information plus sampling information.
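That last statement can be made concrete with the conjugate beta-binomial model (a sketch with made-up numbers, not an example from the book):

```python
# Beta-binomial update: posterior = prior + sampling information.
# With a Beta(a, b) prior on the probability of an up-day, observing
# k successes in n trials gives a Beta(a + k, b + n - k) posterior.
a, b = 2.0, 2.0   # assumed prior: weakly centered on 50%
k, n = 14, 20     # observed sample: 14 up-days out of 20 (illustrative)

post_a, post_b = a + k, b + (n - k)
posterior_mean = post_a / (post_a + post_b)
print(posterior_mean)  # shifted from the prior mean of 0.5 toward 0.7
```

The posterior parameters are literally the prior parameters plus the sample counts, which is the "prior plus sampling information" statement in its simplest form.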

The Expected Utility Hypothesis (EUH), a microeconomic framework, helps make rational decisions about money and might be used as a model to quantify decision making and risk, as an addition or alternative to Dr. McDonnell's risk formula, by considering the choices of the trader or client relative to the statistics, to determine whether the amount at risk and the decision framework being used is rational or will lead to losses or lower returns for given probabilities. This work parallels the work of Tversky and Kahneman, but is quantified in Bayesian terms. The basic idea is that people place a premium on certainty, which leads to irrational decisions about risk and to more losses than necessary. This quantifies the gambler-versus-speculator distinction just discussed: the gambler's probability expectation is negative while the speculator's is positive. Using examples such as the St. Petersburg Paradox, Allais' Paradox, and the risk aversion paradox, the EUH can be used to make better decisions. Some seek the premium for certainty and fall into the trap known as the "Dutch Book", resulting in certain losses over a series of iterations. This is the distinction between a gambler and a speculator. Formally, does your choice satisfy a utility function U such that, for any prospects P1 and P2, P1 <= P2 if and only if:

E(U(X) | P1) <= E(U(X) | P2)

The EUH measures whether you would choose a positive but more random expectation over a more certain but lesser return. The St. Petersburg paradox is a good example. A fair coin is tossed repeatedly until a head is obtained for the first time. If the first head is obtained after n tosses, you receive a reward of 2^n dollars. What certain reward would you accept as equivalent to the random reward? The paradox occurs because the expected winnings are infinite, yet most people would accept 6 or 8 dollars. The EUH can quantify an individual's utility function. This might be a good way to allocate money among funds or risk profiles: use an elicitation process to create a utility curve for allocating clients or moneys among funds or accounts with varying risk profiles, expectations, or leverage.
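The paradox is easy to reproduce numerically. A minimal simulation of the game, with a log-utility certainty equivalent added along the lines of the Bernoulli treatment (the specific utility convention is my assumption):

```python
import math, random

def play_once(rng):
    """Toss a fair coin until the first head; payoff is 2^n dollars,
    where n is the number of tosses."""
    n = 1
    while rng.random() < 0.5:  # tails with probability 1/2
        n += 1
    return 2 ** n

rng = random.Random(42)
payoffs = [play_once(rng) for _ in range(100_000)]

# Raw arithmetic mean: large and unstable (infinite in the limit).
mean_payoff = sum(payoffs) / len(payoffs)

# Certainty equivalent under log utility: exp(E[ln(payoff)]),
# i.e. the geometric mean of the payoffs.
cert_equiv = math.exp(sum(math.log(x) for x in payoffs) / len(payoffs))

print(mean_payoff, cert_equiv)
```

The certainty equivalent comes out near 4 dollars, in line with the modest sums people actually accept, while the raw mean keeps growing with the sample size.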

Back to the original point. Say you are in a trade whose optimum expected exit is tomorrow. What if the market goes up big today, all of a sudden? Do you wait it out because the system says so, or do you take the gift? The odds on the expectation have changed due to today's rise. What if the twin towers are bombed? Do you bail or ride the original trade? The answers to these seem simple, but as shades go, they are not so clear. The criterion for judging the posterior probability seems to be the crux of the issue, but it should follow a rational method.

Dr. Phil McDonnell responds:

The origins of Expected Utility Theory go back to The Theory of Games and Economic Behavior by von Neumann and Morgenstern (1944). They asserted that the expected utility is given by:

E(u(x)) = sum( p(i) * u( x(i) ) ), summed over all outcomes i

where: p(i) is the probability of outcome x(i) occurring. u(x) is presumed to be an unknown but monotonic increasing utility function which may be unique to each individual. Note that the expectation is a sum over all outcomes.

In their paper on Prospect Theory, Kahneman and Tversky (KT) define a prospect as essentially a set of outcomes as above, in which the sum of the probabilities is 1. The latter constraint simply means that all outcomes are included. On the first page KT say, "To simplify notation, we omit null outcomes". Null outcomes come from two sources. One is a probability of zero, which is innocuous because such terms would contribute nothing to the expectation in any case; the other is an outcome of zero, which is where the utility function matters. KT make an unsupported assumption about the nature of the utility function in the following two cryptic remarks from p. 266:

"with u(0)=0" (p. 266)
"set u(0)=0"

In both cases they are making an assumption about the utility function which is neither supported nor even explained. In addition, the paper may be using the wrong zero point.

Daniel Bernoulli made a very insightful analysis of the St. Petersburg Paradox mentioned by Jim Sogi. His key insight was that the utility of money is logarithmic, with the natural log ln() being the convenient choice. The compounded value of a dollar is given by (1+r)^t, where r is the rate and t is time. This is simply a series of multiplications of (1+r) by itself t times. Multiplication can be replaced by sums of logarithms, after which we take the antilog to restore the final answer. So if our goal in a sequence of investments, or even prospects (bets), is to maximize our long-term compounded net worth, we should look to the ln() function as our rational utility of a given outcome with respect to our current wealth level w. In particular, for a risk-indifferent investor, a given outcome x would be worth:

u(x) = ln( (1+r) * w )

This thinking is the basis for the optimal money management formulas and for what could be called a Rational Theory of Utility. Note that the formula depends only on wealth; it is nonlinear and concave.
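The multiplication-to-logarithms step can be checked in a few lines (rate, horizon, and starting wealth are illustrative numbers):

```python
import math

r, t, w = 0.08, 10, 1000.0  # hypothetical rate, periods, starting wealth

# Compounding by repeated multiplication of (1 + r)...
wealth = w
for _ in range(t):
    wealth *= (1 + r)

# ...equals summing logarithms and taking the antilog, as in the text.
log_wealth = math.log(w) + t * math.log(1 + r)
print(wealth, math.exp(log_wealth))  # the two agree
```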

Thus it is reasonable to ask what the average net worth was of the individuals in the Israeli, Swedish, Allais and KT studies. For the most part they were students, with some faculty. The average net worth of a student was probably about $100. A little beer and pizza night out was considered living high for most.

Using questions with numbers reduced in size from Allais, KT asked subjects to choose between:

A: 2500 with p = .33; 2400 with p = .66; 0 with p = .01 [18% chose this]
B: 2400 with certainty [82% chose this]

(N = 72)

The expected log utility for these choices under rational utility is:

A: E( u(x) ) = 3.1996
B: E( u(x) ) = 3.2189

Clearly the 3.2189 value chosen by 82% of the subjects was the better choice.

Looking at Problems 3 and 4, Choices A, B, C and D we find that the rational utility function agreed with the test subjects every time without exception. KT disagreed with both the rational utility function proposed herein and their test subjects. Based on this metric it would appear that the subjects are quite rational in their utility choices.

One can find the KT paper here.

Here is some code to calculate the expected log utility for a rational investor for each of the referenced KT problems:
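A minimal sketch in Python, assuming baseline wealth w = 100 so that the utility of an outcome x is ln((w + x)/w); this convention reproduces the two figures quoted above for Problem 1:

```python
import math

W = 100.0  # assumed baseline wealth of a typical student subject

def expected_log_utility(prospect, w=W):
    """Expected log utility of a prospect given as (probability, payoff)
    pairs. Utility of an outcome x is ln((w + x) / w): the log of final
    wealth relative to current wealth."""
    return sum(p * math.log((w + x) / w) for p, x in prospect)

# Kahneman & Tversky Problem 1 (amounts reduced in size from Allais):
A = [(0.33, 2500), (0.66, 2400), (0.01, 0)]
B = [(1.00, 2400)]

print(round(expected_log_utility(A), 4))  # 3.1996
print(round(expected_log_utility(B), 4))  # 3.2189
```

The remaining problems can be evaluated the same way, by listing each prospect's (probability, payoff) pairs and comparing the two expected log utilities.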

Jeremy Smith responds:

Note that there are at least two ways the distribution can change. It might change unpredictably, e.g. due to the arrival of new information, or it might change predictably, for example because of the approaching expiration of options. Even if Bayesian methods and Markov models aren't good at the first kind of change, they ought to be useful for the second.

# Mandelbrot, Fractals and Chaos Theory, by Dr. Phil McDonnell

October 10, 2006

In response to Professor Haave's query: "Mandelbrot does a good job attacking modern finance theory, and he does a good job explaining what the rest of us call "fat tails". But otherwise, well, is it the same merit as Elliot Waves?"

There are several things wrong with Fractal and Chaos Theory:

1. The world view is fundamentally and fatally pessimistic. Benoit Mandelbrot argues that the variance must be infinite. He drew that conclusion 40+ years ago based on the fact that cotton prices had changing volatilities over time. Based on that slim evidence he jumped to the conclusion that it must be infinite. I have yet to see a real world price of infinity. Despite the lack of even a single infinite data point, Mandelbrot completely dismisses the modern GARCH models as being too complex. In my opinion, dealing with the intractability of infinite variance is far more complicated.

2. It is non-predictive. Generally speaking, randomly generated numbers (fractals) are produced and usually graphed. Then the pretty pictures are compared to real-world phenomena in the past. The pictures do seem to resemble some real-world patterns to the human eye. In my opinion that is because the eye wants to see patterns in such things as snowflakes and turbulence swirls.

3. Lack of rigorous definition. Admittedly there are some valid mathematical proofs which have come out of this area. However in general the field is completely devoid of basic metrics. For example how does one define "similar to" or "close to" for a fractal? This lack of basic metrics comes out of the fundamental pessimism in point 1. The philosophy is that the real world has infinite variance so there is no point in measuring how far we are from something because the next event could be infinitely far away.

4. Lack of goodness of fit, statistics and feedback as to how well this theory fits the real world. You will never see a statistic of any kind in a paper on fractals or Chaos theory — no estimate of probability, no R squared, nothing. In his book, Didier Sornette performs numerous non-linear fits of various market crashes and yet never presents a single probability estimate or R squared value. It is always presented as "see how nicely the chart of the model overlays onto the actual market chart".

5. The theory is non-scientific. To be scientific a discipline must make testable falsifiable predictions. These theories are not predictive and therefore not testable nor falsifiable.

Chaos theory extends these fundamental issues one step further. Most mathematical definitions of a 'critical point' involve a term something like 1 / ( t0 - t ), where t is the current time and t0 (t zero) is the time of the critical event. As t approaches t0, the difference goes to zero, so at the critical juncture we are dividing by zero. The entire expression goes to infinity at that point (strictly speaking it is undefined, not infinite). The point is that even if BM is wrong about infinite variance, the models of Chaos Theory create their own asymptotic behavior, which means these models exhibit infinite variance even if the real-world data do not!
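The divergence of the 1/(t0 - t) term is trivial to see numerically:

```python
t0 = 10.0  # time of the hypothetical critical event

# 1/(t0 - t) blows up as t approaches t0 from below:
# roughly 1, 10, 100, 1000 for these values of t.
for t in [9.0, 9.9, 9.99, 9.999]:
    print(t, 1.0 / (t0 - t))
```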

About the only hope I have for these theories is that some of the older generation of thinkers will pass on and a new crop will come in and invent metrics and ways to measure goodness of fit which can turn this field into a predictive and testable science.

Russell Sears responds:

Could someone explain how these theories lead to the doomsday scenario of which Mandelbrot and the Derivatives Expert are so fond? In my mind I have worked out the following; tell me if I am on the right track:

First, the chaos part. You can often find "meltdowns" in markets: points in time when markets "jump" and are discontinuous, or at least "non-normal", for brief periods. A butterfly flaps its wings and you have a thunderstorm for 15 minutes, or perhaps even a hurricane for a couple of days. Say a 13 sigma event for a day in Oct. '87.

Second, the fractals. The market is a pattern that can be repeated and scaled up simply by time, say perhaps by the square root of time. Combining these: if a 13 sigma event happened in the past for a day, it's only a matter of time until it happens for, say, a month or a year. Likewise, if we have seen a 13 sigma event, it is only a matter of time until we have a 1000 sigma event.

However, to "prove" this fractal behavior they do the opposite. They take long periods of time and scale down. Of course these long time periods don't have the "chaos" pattern yet.

The problem is that these longer-period distributions are pretty much continuous. Therefore they follow a random walk pretty closely. Of course when you take a distribution that is stable, you see these "patterns". It is the same distribution, after all.

The real assumption concerning fractals involves the "energy" of the markets, i.e. that people can go infinitely into craziness, an unlimited nuclear chain reaction, fusion versus fission. They simply reject all evidence of extremes being contained (fission) by saying that fusion will happen.

As you said, it's not science, but it uses a lot of math terms. It's a belief system.

Dr. Phil McDonnell responds:

News can cause jumps. This may induce something like the Merton jump diffusion model. The idea is that markets are generally lognormal, but a few times a year some big news happens to cause an outlier event. Also remember that if the market follows a jump diffusion model, the 1987 crash should be counted in the model.

The modern GARCH and EGARCH models are better because they take into account shifts in the volatility regime. The 1987 event was, say, a 13 sigma event measured against a 50-year average volatility. But when you look at it in the time frame of that week's actual volatility, it was only about a three or four sigma event.
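The regime point can be made concrete in round numbers (both volatility figures below are illustrative assumptions, not estimates from the text):

```python
crash_return = -20.5  # approximate S&P 500 percent move on Oct. 19, 1987
long_run_sd = 1.6     # assumed long-run daily standard deviation, percent
crash_week_sd = 5.5   # assumed daily sd estimated from that week's volatility

print(crash_return / long_run_sd)    # about -13 "sigmas" vs the long-run sd
print(crash_return / crash_week_sd)  # only about -4 vs the local regime
```

The same return looks like an impossible outlier against one volatility baseline and merely a very bad day against the other.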

The concept of self similar behavior at both smaller scales and larger scales is probably one of the more interesting aspects of fractal theory. Lo has found that the square-root scaling of volatility over time does not quite hold but it is close. He derived his own test statistic specifically to test for this in markets in a non-parametric way.

How much time? There is considerable evidence that large negative jumps are mean reverting, which is a violation of the idea of self-similarity. As an aside, even the normal distribution is self-similar regardless of scale: the normal distribution for, say, 20 trading days can be decomposed into two periods of 10 days or four periods of five days, all of which are normal and scaled proportionally to the square root of the time. This has been known for something like 200 years.
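The square-root-of-time decomposition is easy to check by simulation (parameters are arbitrary):

```python
import math, random

rng = random.Random(0)
daily_sd = 0.01  # assumed 1% daily standard deviation

# Build 20-day moves as sums of 20 independent daily normal moves.
moves = [sum(rng.gauss(0.0, daily_sd) for _ in range(20))
         for _ in range(5000)]

# Their standard deviation should be close to daily_sd * sqrt(20).
sd = math.sqrt(sum(x * x for x in moves) / len(moves))
print(sd, daily_sd * math.sqrt(20))
```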

Generally speaking, a philosophy of pessimism pervades the study of fractals and chaos. I believe it is a direct consequence of the assumption that the variance is infinite. This requires the ineluctable conclusion that the big one is out there — hence the predilection for doomsday scenarios.

I went to a talk Benny Mandelbrot gave at NYU soon after his book came out. While it was interesting, it had very little relevance to anyone trading on a daily basis to actually make money, or to valuing financial instruments. He was completely uninterested in questions of underlying mechanisms or market behaviors that might underlie his models. Attempts to get from him information on how to plug actual market pricing into his models to give a predicted price were met by seemingly sincere shrugs of "the picture is enough, why make it more complicated?"

# Risk, from Scott Brooks

Risk is different depending on your perspective. Let's look at it in a non-academic manner. The risk is as follows:

Let's take a fictional person named Joe in 1975. Joe starts to get his life in order and his finances stabilized in his early thirties. He starts saving some money and follows the edict of investing in the stock market.

He experiences extreme volatility for the first few years, but since he's only got a small, nearly inconsequential nest egg, he is able to ignore the extreme swings in the market that occurred from 1975 - 1982. After 7+ years of investing, his nest egg has built to a tidy little amount and by the accident of timing, he now has his decent nest egg positioned to take off with the greatest bull market in history.

And we are off to the races.

1987 hurts him, but he's got such huge gains that he is able to overcome it and "hold the course". Less than 18 months later, he's back to even and off to the races again. The glitches along the way are still there, but they are quick, fairly painless, and it's off to the races again.

By the end of 1999, between his pension account and 401k, he's built up $1,000,000 in his nest egg. Since the market has taught him that he can easily get consistent double digit returns on his money, usually in the upper teens or lower 20s, he decides to roll over his pension and combine it with his 401k in a rollover IRA.

He has expanded his lifestyle through the greatest bull market in history, and thus needs about $100,000/year from his portfolio to live. But that's no problem: with the market "drift" yielding him 15% - 20%/year (heck, in the late '90s, he was getting a 30%+ return), pulling 10%/year off his principal to live each year is no problem.

Then comes 2000.

He finds out he was right about being able to get double digit returns. Unfortunately, he finds out that they come in the negative variety, too.

He pulls out his $100,000 to live on in year 1, and earns a negative 10% return. Now he's got around $750,000 in his nest egg. But that's only a glitch (and he's experienced these short term setbacks before). In a few years he'll easily be over $1,000,000 again.

Then comes 2001.

He pulls out his $100,000 and the market goes down again. Now he has got around $500,000. He begins to sweat. Then comes 9-11. It scares him badly, but he remembers 1987 and the big run-up that occurred afterwards and decides to hold the course (heck, what choice does he have if he wants to maintain his current lifestyle?).

Then comes 2002.

He pulls out his $100,000 and the bottom falls out of the market. The S&P 500 is at 755 (off from its 1550 high around the time he retired). Since he was an indexer to begin with, his nest egg is cut in half not counting withdrawals. Counting withdrawals, he is down to under $200,000.

He has to go find a job.

He has to stop taking withdrawals (which he was doing under reg. 72t since he retired early), and in some cases, the "Joe's" of the world mortgaged their houses to take advantage of the boom, and lost all or most of their equity.

Remember, when we talk about drift, and we talk about risk, we are doing so in a largely academic sense. We are discussing it in a manner that the average person cannot relate to.

If you're running a huge hedge fund with tens of millions of dollars under management from people who can afford to lose millions without being ruined, or if you are running your own money and can get another "job" if necessary, or have another means of support besides your portfolio, then you have a vastly different perspective on drift and risk than the "average Joes" of the world, who have no other means of support and lack the intellectual capital to figure out how to fix a problem as severe as seeing their entire life's work melt down 80%.

A few years of losses hurt Joe more than a lifetime of gains helped him.

## Dr. Kim Zussman responds:

Someone asked earlier "What risk I am compensated for when the drift is a certainty?" I have a few clarifications:

• That drift is not certain; or rather, drift is certain but only on an uncertain timescale
• The uncertainty of your own uncertainty (like the second derivative of certainty: first you are sure. Then unsure. Then unsure if you should be unsure)
• Pain and suffering: see above

The idea of the long term upward drift in the market is a stochastic reality. It would be inappropriate to apply words such as 'certain', 'always' or 'never' to a stochastic process.

As Dr. Zussman indicated, the drift is not guaranteed to be a certain amount, or even positive, in any one year. It is increasingly likely that your diversified returns will approach the long term drift rate if held long enough. However, even the phrase 'long enough' implies risk, for there is no guarantee that one will live 'long enough' to realize the drift rate, due to other risk factors.
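The point that drift is certain only on an uncertain timescale can be illustrated with a small Monte Carlo. This is a sketch, not anything from the original posts: the drift and volatility figures are assumed round numbers, and the spread between the 10th and 90th percentile of annualized return is used as the measure of uncertainty.

```python
import random

# Assumed illustrative parameters, not figures from the text.
DRIFT, VOL = 0.07, 0.18

def annualized(years, rng):
    """Annualized log return over a horizon of `years` yearly draws."""
    return sum(rng.gauss(DRIFT, VOL) for _ in range(years)) / years

def decile_spread(years, trials=2000, seed=7):
    """Width of the 10th-90th percentile band of annualized return."""
    rng = random.Random(seed)
    outcomes = sorted(annualized(years, rng) for _ in range(trials))
    return outcomes[int(0.9 * trials)] - outcomes[int(0.1 * trials)]
```

The spread narrows roughly as one over the square root of the horizon: a one-year holder faces several times the annualized uncertainty of a thirty-year holder, which is exactly why "long enough" carries its own risk.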

# Angles, from Bruno Ombreux

There are many angles to the markets.

There are Gann angles, which of course make no sense, because angles on a chart depend on the axis units.

There is also the Cauchy distribution, whose fat tails scare away those who can remember 1987 but have never traded smallcaps or electricity. This distribution can be generated from angles made by someone shooting randomly at a distant target, where "randomly" means uniform. Hence fat tails are a property of a certain distribution of angles.
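The target model is easy to simulate. A shot at a uniformly random angle in (-π/2, π/2) hits a wall one unit away at height tan(angle), which is exactly a standard Cauchy draw. A Python sketch (not from the original post):

```python
import math
import random

# Shooting at a distant wall: uniform angle, hit point = tan(angle).
rng = random.Random(3)
hits = [math.tan(rng.uniform(-math.pi / 2, math.pi / 2)) for _ in range(100_000)]

abs_hits = sorted(abs(h) for h in hits)
median_abs = abs_hits[len(abs_hits) // 2]   # near tan(pi/4) = 1 for a standard Cauchy
extreme = abs_hits[-1]                      # fat tails: huge outliers are routine
```

The typical hit lands within a point or so of the bullseye, yet the largest of 100,000 hits is routinely hundreds of units away, something a normal sample of that size essentially never produces.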

More interestingly, there seem to be a lot of statistical tests developed for circular data, that is, angles. I found out about them in 100 Statistical Tests by Gopal K. Kanji. Just for fun, I tried the V-test, or modified Rayleigh test. It is a test for randomness, checking whether observed angles tend to cluster around a given angle.

The data is JPY/USD monthly returns since 1965. One problem surfaced though — how to transform returns into angles?

I chose to project them on a vertical axis, in some reminiscence of the Cauchy target experiment. As a result, time is factored out of the study. This may turn it into what I think is called an "axial study", in which all angles must be doubled in the computations. This could be a mistake, but it does not affect the results: the conclusion is the same whether the angles are doubled or not.

And the conclusion is that we reject the null hypothesis that angles are random around the zero-line, at the 0.0001 significance level. Rayleigh's V is 5.255. There is some element of non-randomness in monthly JPY/USD, which still needs to be identified.

Here is the code for the test, except that V's significance had to be checked in a table, unavailable in any R package.

```r
Close <- 100*diff(log(YEN$Last))  # monthly log returns, in percent
Angle <- 2*atan(Close)            # project returns onto angles; doubled for axial data

xbar <- sum(cos(Angle))/length(Angle)
ybar <- sum(sin(Angle))/length(Angle)
r <- sqrt(xbar*xbar + ybar*ybar)  # mean resultant length
phi <- atan2(ybar, xbar)          # mean angle direction
theta0 <- 0                       # theoretical angle direction, i.e. the null hypothesis
nu <- r*cos(phi - theta0)
V <- nu*sqrt(2*length(Angle))     # modified Rayleigh (V) statistic
```
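Since the significance table for V was not available in any R package, one workaround is to approximate the null distribution by simulation. The Python sketch below (mine, not the original author's) recomputes the same statistic and draws uniform angles under the null; the sample size and simulation count are arbitrary choices.

```python
import math
import random

def rayleigh_V(angles, theta0=0.0):
    """Modified Rayleigh (V) statistic, mirroring the R computation above."""
    n = len(angles)
    xbar = sum(math.cos(a) for a in angles) / n
    ybar = sum(math.sin(a) for a in angles) / n
    r = math.hypot(xbar, ybar)          # mean resultant length
    phi = math.atan2(ybar, xbar)        # mean angle direction
    return r * math.cos(phi - theta0) * math.sqrt(2 * n)

# Null distribution: angles uniform on the circle.
rng = random.Random(1)
null_vs = sorted(
    rayleigh_V([rng.uniform(-math.pi, math.pi) for _ in range(200)])
    for _ in range(1000)
)
crit_95 = null_vs[int(0.95 * len(null_vs))]   # one-sided 5% critical value
```

Under the null, V is approximately standard normal for large samples, so the simulated 5% critical value comes out near 1.645; an observed V of 5.255 lies far beyond it, consistent with the 0.0001 significance level quoted above.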

I am no Gann fan, but Gann angles escape the aspect ratio problem you mention because Gann plotted his charts on paper of a constant scale, so that one increment of time was always kept constant in physical distance relative to the units of price on the y-axis. Typically it was 1 point in price = 1 unit of time. In this way, a 45 degree line always related to a rate of ascent or descent of, say, one point per day. So there was some internal consistency. Or, as Shakespeare put it "Though this be madness, yet there is method in't."

Early technical analysis software packages like CompuTrac let you draw angled lines that were completely arbitrary, because the angle depended on the vertical range and time period plotted in the chart; they were there because the TAG group threw just about anything into the program that users asked for. But they also had methods of drawing Gann angle lines with a consistent aspect ratio.

## Bruno replies:

To answer my own post: I tried as much as I could to find randomness with circular tests, but could not. The reason is almost certainly that returns data is not circular. The way to make it circular is to introduce time: divide the circle into 12 for monthly returns.

Then returns have to be in a third dimension. Fortunately, we have got spherical statistics!

With all due respect, Gann realized his error only after he had been publishing for some time. It was a retrospective fix. But it is an inadequate fix.

When a stock splits 2:1, any normal chart no longer has a 1-point-to-1-time-unit ratio. When Microsoft paid its 10% dividend, was the new ratio .90 to 1.00, or should it have been 1.10 to 1? Gann is silent on the question.

When a stock pays dividends, should the price be adjusted? Or should the dividends just be ignored, with the attendant error in the rate of return?

What about weekly charts - is the time scale 5 units, 7, or just 1 (a week)?

What about monthly? Is it 1 unit, 21, 30, or the actual number of trading or calendar days?

With many angles, many time scales and dubious rules it should be quite easy to find many examples that come 'close' to turning points in the markets. In fact it is probably quite difficult to find any failures. This is especially true if one is allowed to define 'close' as whatever one needs to make the current data fit.

Even though the arcane mysticism of Gann is suspect, Bruno's ideas on angles may have some merit and should not be lumped into the same basket. The Cauchy distribution induced by the angle model can be problematic. During the 90's several advances were made by Zar and others in the statistics of angles and tests thereon. That area is relatively new but very workable.

# Buy and Sell Orders, from Dr. Phillip J. McDonnell

There are two types of buy / sell orders. Market orders are orders to be executed immediately at any available price. Limit orders are orders to be executed only if the specified price or better is reached. For all practical purposes the market maker bid-ask quote can be viewed as a limit order. The combination of market buy/sell and limit buy/sell gives us a total of 4 possible orders. To that we can add the cancellation of limit orders for a total of 6 possible types of trading orders. The case of cancellation of market orders is effectively eliminated because presumably the execution occurs so fast that there is no time for a cancellation.

So the interaction of these six orders is what determines a market. For a long time little was known about the distribution and interaction of these order types; the market micro-structure was a black box to most. However, some early researchers, notably Vic and M.F.M. Osborne, studied the structure of markets. Vic went so far as to examine time and quote data when it was not widely available, and to get permission to analyze specialists' books on the exchange in the 1960's. The findings are outlined in The Education of a Speculator. A couple of key ones are:

1. Limit orders tend to be larger than market orders.

2. Given a market ticks up or down, the next different tick will tend to be in the opposite direction, with odds ranging from 4:1 to 7:1.
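The second finding is easy to tabulate from a price series. The helper below is a hypothetical sketch of mine, not Vic's method: it classifies each non-zero price change as an uptick or downtick and counts how often the next different tick reverses versus continues.

```python
def reversal_odds(prices):
    """Among consecutive non-zero ticks, count (reversals, continuations)."""
    # +1 for an uptick, -1 for a downtick; zero-change ticks are skipped
    ticks = [1 if b > a else -1 for a, b in zip(prices, prices[1:]) if b != a]
    reversals = sum(1 for t, u in zip(ticks, ticks[1:]) if t != u)
    continuations = len(ticks) - 1 - reversals
    return reversals, continuations

# Toy series that reverses on every tick:
r, c = reversal_odds([10.00, 10.01, 10.00, 10.01, 10.00])
```

On real tick data, odds of 4:1 to 7:1 would correspond to `reversals / continuations` between 4 and 7.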

Today markets are a bit more transparent in that the order book is often available to participants willing to pay for it. However this opens up a new dimension of deception not previously available. Many orders are large bluff orders which are immediately canceled. Other gamers will repeatedly place and cancel an order at one-second intervals. This creates an aura of flashing quotes on a real-time quote screen. Anyone who has ever seen the mesmerizing effect of a slot machine with its myriad blinking lights will quickly get the idea. The blinking lights have replaced the old-time hypnotizing effect of the mindless ticker tape forever droning on.

Another trick arises when there is a fairly small order on the bid or ask side and the gamester wants to attract attention to that side of the market, hoping for someone to take the limit order out. They will use the same one-second place-and-cancel routine to draw the eye to just the bid or the ask side of the quote. It reminds one of the mythical sirens who sought to lure ancient mariners, here luring the traveler over to the bid or ask side simply by attracting the eye.

The net result of the greater transparency has been to increase the deceptive aspects and the gaming dimension of order analysis. In addition, effective analysis of order flow is now a job for a computer: humans simply cannot keep up with the rapid-fire order placement and cancellation. Even though the analysis is more difficult now than when Vic did his seminal studies of order flow, at least we have the data.

# A Tutorial on Measuring Risk in Stock and Options Portfolios, from Dr. Phillip J. McDonnell

Measuring the risk of a portfolio which includes stocks and options can seem an insurmountable problem. Options move with respect to their underlying stock as a function of the option delta; an at-the-money option with a delta of .50 will move 50% as much as the underlying stock on a point basis. CAPM theory says that stocks move with respect to the market as a function of their beta. Empirical evidence says that alphas are non-persistent, so only the beta need be considered. Thus a stock with a beta of 1.50 will move 150% as much as the underlying market index.

So if we wish to construct a portfolio-scale metric which measures the combined market response of a portfolio of stocks and long and short options, we simply multiply each position's quantity by its delta and beta, suitably adjusted.

Suppose we had an IWM Russell 2K ETF call option with a delta of .50 and IWM is at \$72 per share. We wish to convert the position to a common metric of an equivalent dollar amount of SP index.

Convert the option to a dollar-equivalent IWM position by taking 50% of \$72 times 100 shares. This gives us \$3,600. Let's say the beta of IWM with respect to the SP is 1.50. Multiplying \$3,600 by 150% gives us \$5,400 worth of pseudo SP index.

For a stock, take the stock value of \$7,200 times the beta of 150% to get a dollar equivalent of \$10,800.

Remember that a short position puts a minus sign in front of the above numbers. Also note that a put has a minus sign built in as well; for a short put it is -1 * -1 = +1. After the equivalent risk of each position is calculated, add them up with their signs to find the total for the portfolio.
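The arithmetic above can be collected into one small function. This is a sketch of the text's method, using the text's own figures (IWM at \$72, call delta .50, beta 1.50); the function name and signature are my own invention.

```python
def sp_equivalent(price, shares, beta, delta=1.0, position=1):
    """Dollar-equivalent SP exposure of one position.

    position: +1 long, -1 short. delta is signed: calls positive,
    puts negative, stock 1.0.
    """
    return position * delta * price * shares * beta

call      = sp_equivalent(72, 100, 1.50, delta=0.50)                # long IWM call
stock     = sp_equivalent(72, 100, 1.50)                            # long 100 shares
short_put = sp_equivalent(72, 100, 1.50, delta=-0.50, position=-1)  # -1 * -1 = +1

portfolio = call + stock + short_put   # signed total, per the text
```

The call works out to \$5,400 and the stock to \$10,800, matching the worked example, and the short put's two minus signs cancel to give positive exposure.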

A final note of caution: this type of analysis is an excellent way to measure linear risk for small movements in the underlying index. However, for non-linear assets such as options, one may wish to consider other measures which include gamma and even the third and higher derivatives of the option model.

# Optimal Searching, from Dr. Phil McDonnell

If you have ever lost your keys or your wallet you will understand this problem. What is the best way to find things? Every student of Computer Science is required to learn a technique called binary search. Its primary requirement is that the list of items to be searched be sorted in order from smallest to largest, or A to Z. Given that, the search examines the item in the middle of the list and can rule out all of the items before or all of the items after the examined one: half are eliminated in one look. For the half which remain, we again look at the middle element and further reduce the remaining possibilities by half. Using this technique one can search a list of up to 1,024 items by examining only 10 elements.
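The halving procedure described above can be sketched in a few lines, with a counter to confirm the roughly-ten-probes claim for a 1,024-item list:

```python
def binary_search(items, target):
    """Return (index, probes): index of target in sorted items, or None."""
    lo, hi, probes = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if items[mid] == target:
            return mid, probes
        elif items[mid] < target:
            lo = mid + 1   # target, if present, lies in the upper half
        else:
            hi = mid - 1   # target, if present, lies in the lower half
    return None, probes

idx, probes = binary_search(list(range(1024)), 700)   # around 10 probes
```

Each probe halves the remaining candidates, so the probe count grows only with the base-2 logarithm of the list length.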

Sometimes we do not have the luxury of a perfectly sorted list of items. Occasionally the list may have increasing values up to a certain point and then declining values thereafter. Such an arrangement is known as an unimodal distribution — it has only one peak somewhere in the middle. For example a list of the probability values of the normal distribution would have one peak in the middle and a decline thereafter. In optimization problems such a pattern arises quite naturally, with the values to be optimized rising up to a certain point, after which they will fall. That point is the optimum (maximum).

To search a unimodal list the search of choice is called a Fibonacci Search which relies on the spacing between the Fibonacci numbers to calculate its next step size. As such it is more adaptable than the binary search. Under certain circumstances it can be shown that the Fibonacci search is an optimal search algorithm for such problems.
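A Fibonacci search proper places its probes at Fibonacci-number offsets so that one function evaluation can be reused at each step. The sketch below is the simpler equal-thirds (ternary) variant of the same unimodal interval-reduction idea, which conveys the mechanism without the Fibonacci bookkeeping; it assumes the function is strictly unimodal on the integer range.

```python
def unimodal_argmax(f, lo, hi):
    """Peak index of a strictly unimodal f on the integers lo..hi.

    Ternary variant of Fibonacci-style interval reduction: compare two
    interior probes and discard the side that cannot contain the peak.
    """
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            lo = m1 + 1   # the peak cannot be at or left of m1
        else:
            hi = m2 - 1   # the peak cannot be at or right of m2
    # at most three candidates remain; check them directly
    return max(range(lo, hi + 1), key=f)
```

Each pass keeps roughly two-thirds of the interval; the true Fibonacci scheme tightens this by reusing one of the two probes, which is what makes it optimal in the number of function evaluations.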

The Chair has frequently noted that the financial ecosystem requires much upkeep; the massive numbers of people and the huge capital investment required to keep the markets functioning all cost money. The revenue to fund these operations comes from three main sources: commissions, market spreads, and professional advice. The key driver of all three is volume. The relationship between volume and commissions or market-making profits is obvious: order flow is everything to a commission broker or market maker. However, those who sell advice also thrive on volume, which is a proxy for interest in the market. When the public is interested in the market, more money flows in, and a certain portion of that money needs advice. The entire ecosystem of the financial industry thrives on volume; when volume is maximized, the health of the financial system is maximized.

In this context it may be that the objective of the markets is not to maximize the price discovery process but rather to maximize the volume of trading; after all, the health of the markets is integrally bound to the health of the financial system itself. If volume optimization is the real goal of the market, is it not likely that the market uses an efficient search technique to discover the optimum? In this context the Fibonacci search is the best-known algorithm for such a search in a unimodal volume environment.

One hastens to note that this is quite different from the mystical application of Fibonacci numbers which some traders try to apply in the price domain.
