The Dow Theory, big cap vs. little cap, SPY vs. Russell: two-factor theories are well tested on a variety of divergences. I think they work somewhat with yield curves as well. I'm wondering about currencies and countries. Would a global/US or a small/big two-factor model be predictive at all?

Bill Rafter writes: 

Two factor models work best when the two variables/inputs exhibit at least some negative correlation (obviously with changes, rather than levels). Equities v. Debt is a good example.

Also, we have noticed that in a competitive 2-horse race the overtaker is usually the first to move. That is, the buy signal in A is given before the sell signal in B. We have surmised this is because the smarter players start to acquire A while the complacent participants are reluctant to dump B until late in the game. Impossible to prove, but it makes some sense. This coincides with the experience that assets move up slower than they decline. As Matt Ridley puts it (Evolution of Everything), "Good things are gradual; bad things are sudden."



This reminds me of the Sherlock Holmes short story "Silver Blaze", in which the key to the mystery was the dog that did not bark.

Why would a practitioner have success with one stock (AAPL) and failure with another (FB)? How are they different (or is there something else at work), and what are the implications for price forecasting? For example, our tactical algorithms have most recently "nailed it" (AAPL) and "gotten nailed" (FB). Technical analyses sensed something in AAPL, but were 180 degrees off in FB. Why one and not the other? Could Apple's earnings (or at least an inkling of them) have been in the market, whereas Facebook's were a total surprise? The market reactions suggest both were a surprise, and yet there were clues with one and not the other.

Here's a link to what we saw or didn't see.

There are many factors which can be used to explain price activity. Among them are price momentum and sentiment, both of which can be modeled by a practitioner or his computer. Somehow someone gets the inkling, real or imagined, that the wind is about to change direction, and either acts accordingly or just declines to follow the well-trod path. Then change happens. It is inexorable, almost evolutionary.

Freely traded markets are very efficient, but not perfectly efficient. That's why "technical analysis" or "counting" works, at least some of the time. Information leaks out and it shows up as a marginal change in the price. Could some companies better enforce a no-leaks policy than others? Maybe. But information can get out in other ways. For example, Apple has stores that are usually crowded.

Suppose all of a sudden they aren't crowded; that's a tell that can be modeled. The people who watch the stores will know before the earnings are released. Okay, then how do you do that for Facebook?

Facebook's revenues and earnings (i.e. fundamentals) are hard to model from the outside. We don't know of any tells. And they may have a rigorous no-leak policy. Which other companies have those same characteristics?

If you look in your program, both companies have similar profiles with regard to share statistics. That is, they have similar relative percents held by institutions and insiders. Their shorts as a percentage of float are similar. However their old school analysis characteristics are different; no one buys FB for the dividends.

Great quote from Robert Shiller: "We should not expect market efficiency to be so egregiously wrong that immediate profits should be continually available." That is both true and comforting when we are licking our wounds. If you have an edge, it's a small one, so diversify or watch the size of your bets.

But no matter how good you are at modelling momentum and sentiment, random things can screw up the forecast. Suppose that all of your algorithms identify a stock that is headed upwards. Then the company's corporate jet falls out of the sky with the executive team on board. That stock is going down, damn the forecast.

To us this is both a practical issue (our bank account) and a philosophical one (our minds). We would appreciate any and all ideas.

BTW, if you want to play with the algorithms yourself, send me an email and I will send you a link.



A few years ago there was a discussion on the site about an esteemed Dailyspecer's paper:
"Modeling the Active versus Passive Debate".

That article generated a considerable amount of hate mail from investment "professionals" who felt the piece threatened their buy-and-hold livelihood. I consoled myself with some rather unkind thoughts.

Roger Arnold writes:

This reminds me of the discussion we had here 15 years or so ago when Triumph of the Optimists was published.

When I discussed the subject of the outsized returns of equities versus other asset classes with the principal author, Elroy Dimson, he said that in his opinion the 20th century returns were unique and not likely to be repeated over the next century. I won't go into his reasoning here, as we discussed it then, and I'm not sure if it's been discussed during my absence from the list.

The gist of the conversation though was that everything that provided the positive drift to publicly traded equities has been exhausted.

The positive drift is what made passive management a plausible money management scenario.



The numbers on Payroll Taxes are quite bullish. However if the Jobs Report shows similar, the stock market response could be negative, anticipating hawkish Fed moves.

The big difference in the data is that the BLS Jobs Report indicates jobs without any discrimination as to actual earnings. That is, a $10 per hour job counts as much as a $1000 per hour job. Payroll taxes intrinsically reflect the quality of the job.

Victor Niederhoffer writes: 

And yet Erica Groshen is still Commissioner of Labor Statistics and she's a very good friend of the Chair and they frequently speak together at testimonials and I believe coauthored an article on inequality together. However, unlike Erica, I have not been able to find evidence that the Chair sent her kids to Camp Kinder the way Erica did.

Bill Rafter writes: 

Today's comments by the Fed Chair give us an interesting observational platform.

If the Jobs Report on Friday is bearish on the economy, then it would appear that the Fed Chair was informed and stepped in before the release to keep the party going. (Whether such response is good is debatable.) Note that the survey period for this month ended on Saturday March 12th, so there has been plenty of time to inform someone who has a need to know.

However if the Payroll Taxes are correct and the jobs numbers are bullish on the economy, then the Fed Chair must be either poorly informed or illogical. Neither is comforting. In such a case one might question the need for such a Fed.



In two weeks the March Jobs Report will be out (Friday April 1st at 8:30am). The data to be reflected will be that collected thru this past week (March 12th). The Payroll Tax Receipts (distributed by the U.S. Dept. of the Treasury) thru March 16th already presage a Jobs Report considerably stronger than the prior one.



In the last four weeks U.S. equities have risen nicely. Some were lucky or good enough to forecast what happened (check their records). And there are some who are apprehensive about where the market is now. I cannot guess everyone's motive, but I believe more than a few of the hesitant are so because they fear a further bursting of the Chinese Bubble. However I present to you a brief phantasmagorical tour showing that the Chinese Bubble has already deflated.

In terms of three usable commodities (copper, wheat and cotton) the Shanghai Stock Exchange has mean-reverted to its price in mid-2014. If you are betting on a further Chinese decline, be cautious.
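The mean-reversion-in-commodity-terms claim can be checked mechanically: deflate the index by a basket of the three commodities. Here is a minimal sketch with made-up numbers (not actual quotes), using a geometric mean for the basket; the author's actual weighting scheme is not stated.

```python
import numpy as np

# toy monthly closes (hypothetical numbers, not real quotes)
sse    = np.array([2000., 3500., 5000., 3000.])   # Shanghai Composite
copper = np.array([3.0,   2.9,   2.8,   2.1])     # USD/lb
wheat  = np.array([6.0,   5.5,   5.2,   4.6])     # USD/bu
cotton = np.array([0.80,  0.70,  0.65,  0.58])    # USD/lb

# geometric mean of the three commodity prices as the deflator
basket = (copper * wheat * cotton) ** (1 / 3)

# the index expressed in "usable commodity" terms, indexed to month one
real_sse = sse / basket
print(real_sse / real_sse[0])
```

If the commodity-deflated series has returned to an earlier level, the nominal decline overstates how much of the bubble is left to burst.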



"Many scientific “truths” are, in fact, false"

In 2005, John Ioannidis, a professor of medicine at Stanford University, published a paper, "Why Most Published Research Findings Are False," mathematically showing that a huge number of published papers must be incorrect. He also looked at a number of well-regarded medical research findings and found that, of 34 that had been retested, 41% had been contradicted or found to be significantly exaggerated.

Since then, researchers in several scientific areas have consistently struggled to reproduce major results of prominent studies. By some estimates, at least 51%—and as much as 89%—of published papers are based on studies and experiments showing results that cannot be reproduced.

Bill Rafter writes: 

In academia the currency is published articles. It should therefore not be a surprise that many published articles are useless or, worse, flat-out wrong to the point of being fraudulent. Consider that in the United States the typical number of science-based papers published in a peer-reviewed journal by a doctoral candidate is ONE. In certain other countries that number can easily exceed a dozen. Consequently the avid reader of scientific papers learns to discriminate in his reading habits against certain universities and certain countries of origin.

Would you do business with a bank that had a reputation for handing out counterfeit currency? And the fact that counterfeit banknotes exist casts suspicion over all transactions.



A very reliable model of mine is the sign “CLOSED” on a store’s door.  It invariably means the store is closed.  But I was just given an example that a slight change in circumstances can render it totally off the mark. 

There’s this corner candy store near me that sells graham crackers smothered in dark chocolate.  I allow myself one a day at the end of lunch and thoroughly enjoy the event. 

So I drive up to the store at 1 PM on Monday and the CLOSED sign is hanging on the front door.  It’s one of those simple ones that says WE’RE OPEN on the obverse.  Elsewhere the hours are posted as 12 – 8 Monday thru Saturday.  But I move on.  Same thing happens on Tuesday. 

Today (Wednesday) finds the CLOSED sign still in place. Despite what my model tells me, I try the door, find it unlocked, and ask loudly if they are open. A guy substituting for the owner, Carol, welcomes me and handles my weekly purchase. I learn that the owner is recuperating from surgery and that her stand-in had no idea the sign on the door had been chasing away customers for the last three days. Another O-Ring example, in which a small item has disastrous consequences.

Again we find that no model is perfect.



Forgive me for posting two items, but I believe them to be related.  In the first instance we have our oldest algorithm (from 1988), nicknamed “Thermos”. This plots a moving correlation between stock and bond levels. As of Friday (2/26) it has gone bullish for stocks.


Secondly, a major Teutonic bank just announced a buy recommendation in gold. Coincidentally we notice that our measure of professional sentiment just went bearish on gold.


A week ago we had a similar signal to sell bonds. We have long noticed that whenever bonds and gold are in agreement, equities make a move in the opposite direction. Either way, long or short.  



Gut feelings matter, but not the way you think. An individual’s gut feeling is anecdotal. Chances are that even he cannot statistically study his sympathies. However many of us model the gut feelings of investors at large, and those can be statistically studied. Here are a few examples:
Commitments of Traders of futures. Many researchers ply a theory and then try to find data to support it. Their theory typically revolves around following the large (reporting) traders and mimicking them. The trouble is that not even the big guys are right all the time. A better approach is to examine the data without a preconceived theory. In doing so you will find that the small (non-reporting) traders are more consistently wrong than the big guys are right. That is, winners rotate, but losers are consistent. Further analysis reveals that the little guys tend to be even more wrong when they are short. And the best combination is when the little guys are short and the big specs are long. Following the hedgers should be avoided, as the hedgers do speculate, but on the basis rather than on the actual price. If you don't know what that means, don't play in that venue.
Options data. This usually takes the form of the put/call volume ratio. Excessive levels tend to occur at market turning points. And by the way, the smart money bets against the excessive level. One problem to be mindful of is that most researchers look at CBOE data, which typically constitutes only a third of all options data. If you want it all, get the Options Clearing Corp data, which is free just like the CBOE data, and more reliable.
While you are looking at options data, go a step further and look at the open interest levels. I assure you that if you like put/call volume data, you will value the open interest data more. The latter also tends to give less ephemeral signals.
Is there any way to combine the two?  You betcha!  In any given period the number of New Positions (NP) equals the volume plus the change in open interest.  Further, the total open interest divided by the backward cumulative NPs identifies a number of trading days which can be described as either the age or average holding time of those positions.  On a very broad scale that data gives a view significantly different from putcall volume, and one that is quite reliable. 
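That combination can be put into code directly. A minimal numpy sketch of the bookkeeping with toy data (the exact construction of the author's "age" metric is not published; the walk-back below, which ignores net-liquidation days, is one plausible reading):

```python
import numpy as np

def new_positions(volume, open_interest):
    """New Positions per period: NP = volume + change in open interest.
    (When open interest falls, liquidations offset some of the volume.)"""
    d_oi = np.diff(open_interest, prepend=open_interest[0])
    return volume + d_oi

def average_holding_days(volume, open_interest):
    """'Age' of open positions: walk backward from today, summing NP,
    until the sum covers today's open interest.  The number of days
    consumed is the average holding time of those positions."""
    np_series = new_positions(volume, open_interest)
    target = open_interest[-1]
    total, days = 0.0, 0
    for x in np_series[::-1]:
        total += max(x, 0)          # ignore net-liquidation days (assumption)
        days += 1
        if total >= target:
            break
    return days

# toy data: five days of volume with growing open interest
vol = np.array([100., 120., 90., 150., 130.])
oi  = np.array([500., 560., 540., 620., 660.])
print(average_holding_days(vol, oi))
```

Run separately for calls and puts, the two "age" series give the broad-scale view the author describes.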
Polls?  There used to be a newsletter that purported to measure contrary opinion for futures. What the publishers (Mr. James Sibbet and Earl Hadady) did was rank the bullishness of various newsletters and take a percentage. The theory was that if every publication was bullish, the market was overbought. The trouble was that (paraphrasing Keynes) opinions could stay bullish for longer than you had margin money for picking the top. However, if a market was up in the high-90s percent bullish for several weeks, the first downturn in opinion to even the mid-80s presaged a price selloff. It wasn't the same people each time, but when the collection of gut feelings changed its momentum, the price tended to go along.
While on the topic of polls, VIX and its offshoots are surveys that are very reliable. 
Price alone. What do you do about a market without telltale derivatives or surveys of newsletters? If you run a regression fit of the price data and extend it, you have a forecast. The deviation of the actual price from the forecast provides a measure of the combined opinions of professionals regarding that price. Small deviations go hand in hand with low volatility, which is bullish for prices of assets that go into portfolios. Large deviations are scary and manifest themselves in price discounts.
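The regression-deviation idea above can be sketched in a few lines. The lookback length and the one-step extension here are my assumptions for illustration, not the author's parameters:

```python
import numpy as np

def regression_deviation(prices, lookback=50):
    """Fit a linear trend to the last `lookback` prices (excluding today),
    extend it one step to today, and report today's deviation from that
    forecast as a fraction of the forecast."""
    y = np.asarray(prices[-lookback:], dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t[:-1], y[:-1], 1)  # fit excludes today
    forecast = intercept + slope * t[-1]              # extend to today
    return (y[-1] - forecast) / forecast

# toy series: a steady uptrend, then the same series with a 10% shock
trend = 100 + 0.5 * np.arange(60)
calm = regression_deviation(trend)
shocked = regression_deviation(np.append(trend, trend[-1] * 0.9))
print(round(calm, 4), round(shocked, 4))
```

A small deviation (the calm case) reads as professional consensus and low volatility; the large negative deviation in the shocked case is the scary kind that shows up as a price discount.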
So all in all, Virginia, gut feelings matter. 



The options viewpoint.

Point 1: Virtually any macro investment strategy can be replicated with options. (Previously stated by one brighter than I.)

Point 2: The use of options can enable the strategist to hide his moves.

Point 3: Options transactions tend to be the milieu of the professional.

Possible conclusion: The broad analysis of options transactions can reveal some interesting truths about the current investment environment.

We have studied the broad pattern of equity options transactions this century and have found that whichever side is creating more new options positions tends to be correct. That is, the winning side is the one where new positions consistently exceed liquidations. This is equivalent to a shorter age, or holding time (open interest divided by new positions). "Whomever holds longer is wronger," to coin a cheeky phrase. Specifically, if the turnover rate is higher for calls than for puts, it is generally safe to be long.
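One plausible reading of that turnover comparison, sketched with toy daily data (the published metric's exact construction and smoothing are not given, so treat this as an illustration of the idea, not the author's model):

```python
import numpy as np

def turnover_rate(volume, open_interest):
    """Turnover = new positions / open interest.  Higher turnover means
    a shorter average holding time, per 'whomever holds longer is wronger'."""
    d_oi = np.diff(open_interest, prepend=open_interest[0])
    new_pos = np.maximum(volume + d_oi, 0.0)
    return new_pos / open_interest

# toy daily data: calls churning faster than puts
call_vol, call_oi = np.array([80., 90., 100.]), np.array([400., 420., 430.])
put_vol,  put_oi  = np.array([60., 55., 50.]),  np.array([500., 505., 512.])

call_t = turnover_rate(call_vol, call_oi).mean()
put_t  = turnover_rate(put_vol, put_oi).mean()
signal = "long" if call_t > put_t else "flat/short"
print(signal)
```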

Being long equities when the bullish patterns existed (since 2000) yielded a compound annual rate of return of 11.5 percent. Being short equities during the bear patterns yielded 3.5 percent (CAROR), such that the combined compound annual ROR was 15 percent. Not bad. The trouble for statisticians is that there aren't many switches (fewer than 40), making statistical reliability problematic. But the minimized "signal flutter" is comforting to longer-term investors.

Although this metric called the 2009 turnaround on the money, it should be used to check the climate rather than the weather. Where are we now? Very close to going long. I would be reluctant to give a heads-up in advance of the actual signal, except the chart I show is a smoothed version. The unsmoothed version is already positive.

Here are two charts (2009 and current).



Every now and then it is advisable to check out what the Fed is doing. There have been upticks recently in the aggregates (since New Year's, and concurrent with the start of the drop), although in my opinion the upticks neither alarm nor impress.

monetary base




 I think the group will find many useful lessons for both life and trading in these machiavellian maxims and I'm sure that the list will find plenty of fodder for debate contained herein.

Bill Rafter writes:

Those are not Niccolò's thoughts, but the author's wish as to how Machiavelli would think. The two are not the same. Also, many believe they know Machiavelli because they have read The Prince, a very short work hastily put together in three months with an expected readership of only one person, for the express purpose of getting a job. Because Machiavelli has become the Progressives' poster child of evil, some anti-Progressives have taken to championing him. But unfortunately they do so poorly read and for the wrong reasons.

The Discourses on the First Ten Books of Livy are Machiavelli's best work, written over three years (concurrently with The Prince) for a universal audience. The Founding Fathers of the United States all read "The Discourses" as a prelude to creating our government. It would be well worth your time.

Gary Rogan writes: 

The Prince was written for, well, a Prince. One problem with applying both the original Machiavellianisms from The Prince and these new improved maxims is that they don't seem to concern themselves with basic competency in one's line of business outside of manipulating people. For a Prince it fits: his job is essentially to manipulate his subjects, enemies, and any threat or potential resource provider into benefiting the Prince. He doesn't personally build bridges or grow food, etc. On the other hand, imagine a plumber who is also the world's greatest student of Machiavelli but is a really bad plumber. It's doubtful he can overcome his major deficiency by simply manipulating his customers.



What kind of moving average of the last x days is the best predictor of current and future happiness, and how does this relate to markets?

Anatoly Veltman writes: 

The widespread misuse of the MA concept is what gives it a bad name. 90% of testers and users look at crossovers, and the remaining 10% look at a break of the MA from above or below. All wrong.

The only proven way to apply MAs from a trend-following standpoint is to look at nothing but SLOPE (in trading days). Is the 14-day MA sloping upward? If so, then is the 30-day sloping upward? If so, then is the 50-day sloping upward? If so: then shorting is forbidden! The mirror test may save you from disastrous bottom-picking.
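Anatoly's slope filter is easy to express in code. A minimal sketch, assuming simple (unweighted) moving averages and a one-day slope test, neither of which he specifies:

```python
import numpy as np

def sma(prices, n):
    """Trailing simple moving average; output aligned to the window ends."""
    kernel = np.ones(n) / n
    return np.convolve(prices, kernel, mode="valid")

def shorting_forbidden(prices):
    """The filter: if the 14-, 30- and 50-day MAs are ALL sloping upward
    (here: last value above the previous one), shorting is forbidden.
    The mirrored test would forbid bottom-picking in a downtrend."""
    return all(sma(prices, n)[-1] > sma(prices, n)[-2] for n in (14, 30, 50))

up = 100 + 0.3 * np.arange(80)    # steady uptrend: all three slopes positive
print(shorting_forbidden(up))
```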

Bill Rafter writes: 

I beg to differ. There is no way the average of the last x days can be the "best predictor": it is by definition at best a coincident indicator, and more likely a lagging one. BTW, the same can be said of the SLOPE of the last x days.

However, you can construct a leading indicator by comparison (difference or ratio) of the coincident to lagging indicators. Such a newly created leading indicator tends to give a lot of false signals, due to random market action. To guard against that you need very smooth coincident and lagging inputs. Making them smooth also makes them more lagged, but that will not hurt you, as you are not going to look at them outside of a difference or ratio, which will be quite forward-looking.
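A hedged sketch of that coincident/lagging ratio, using two EMAs of assumed lengths (20 and 60 days; the author's actual smoothings are proprietary and not specified):

```python
import numpy as np

def ema(prices, n):
    """Exponential moving average with alpha = 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = np.empty(len(prices), dtype=float)
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out

def leading_indicator(prices, fast=20, slow=60):
    """Ratio of a smooth coincident EMA to a smoother (more lagged) EMA.
    Both inputs lag price, but their ratio turns ahead of either:
    above 1 in an uptrend, below 1 in a downtrend."""
    return ema(prices, fast) / ema(prices, slow)

prices = np.concatenate([100 + np.arange(100),     # rally
                         200 - np.arange(50)])     # then decline
li = leading_indicator(prices)
print(li[99] > 1, li[-1] < 1)
```

Both EMAs are badly lagged on their own; the point of the construction is that the lag mostly cancels in the ratio.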

The real problem is that investors want to identify a static x. In doing so they are insisting that the market be modeled by x periods. Well, the market doesn't always feel like cooperating. At times the market may be properly modeled by x periods, and at other times by x+N, in which N can assume a wide range of positive and negative values. The solution is to first identify the exact period over which the market should be modeled for the coincident valuation. And then go on from there. Rinse, repeat.

Russ Sears writes: 

This would be a good question to ask the trading expert psychologist Dr. Brett.

It seems that the same brain imagery he uses is being applied in the study of the science of happiness.

While I am no expert, I have read Rick Hanson, PhD's book "Hardwiring Happiness". It has been a while since I enjoyed this book, but my summary of it is: focus on the life and the good in the present; place things in the context of how they have brought you to this moment; then enjoying the moment is enjoying life.

Presence seems to be the buzzword in studies of contentment and the psychology of success. Being aware of all your inputs and your feelings, recognizing them as part of life, then celebrating living. Presence gives you the fulfillment in your life needed to be loyal and disciplined about what is working well in your life. Thanksgiving is a day built on this idea. But presence also gives you the courage to turn things around, admit things are not as you want, and gives you hope for the future. Happiness is more about living your life and being in control than it is about circumstances. Some of my happiest times have been after running hard for over 2 hours, exhausted after 26.2 miles, cold and totally and dangerously spent, but knowing I gave it my all.

So I would suggest that neither MA, trend following, momentum, acceleration, death spirals, reversion to the mean, nor value investing should ever be the "key to Rebecca"; rather, judge them in the context of everything else. Some days "the trend is your friend"; other days "the sun will come out tomorrow".

Brett Steenbarger writes: 

It's a really interesting area of recent research. It turns out that happiness is only one component of overall well-being. What brings us positive feelings is not necessarily what leads to the greatest life satisfaction, fulfillment, and meaning. I suspect the market strategies that maximize short-term positive emotion have negative expected return, as in the case of those who jump aboard trends to reduce the fear of missing a market move.

Ralph Vince writes: 

Too many times in life I've found myself in darkened parking lots with a small gang of characters who intended me harm, and saw how the pieces would play out far enough in advance to get out of it, or at least to realize there was only one, very unpalatable way out of it.

Shields up.

Too many times in life, I've had an angel whisper in my ear with only a few hours or seconds to spare to keep from being robbed blind by people I made the mistake of trusting.

Too many times in life I've paced in some anonymous hotel room, wondering "How the hell am I going to do this once the day comes?"

Too many margin calls have had to be met.

Far more times than I would care to, I've found myself confronted with the proposition of having to throw boxcars to survive, and I find myself, yet again, with that very proposition in a life and death context.

Only someone who really loves the rush of the markets could enjoy wanting a given market to move in a specific direction. I've come to the conclusion that it's far better for me to set up to profit from whatever direction things move in on a given day. The positions that don't move in a manner so as to profit from this day will tomorrow, or the next day, or the day after that… I need to just show up on time with my shoes on, collect on what comes in today, and sow the seeds today for taking profits on something at some future date. It's not difficult, and a lot more satisfying.

There's enough episodes in life we need boxcars to show up, and yeah, "Baby needs a new pair o'shoes."

Victor Niederhoffer writes: 

I like all these untested ideas about moving averages, but my query was of a more general nature. What kind of moving average, perhaps with an exponential average as its top onion skin, is the best predictor of human happiness? I.e., if you were happy yesterday and unhappy the day before, are you happier or sadder? I mean vis a vis the pursuit of happiness, not markets, although the two are related, I think.

Alexander Good writes: 

My answer would be that a medium-term moving average works best: about 6 months. We're naturally geared to notice acceleration, not speed. After accelerating, happiness is virtually certain to decelerate, which we would have a heightened awareness of. Thus a 5-day moving average would have too much embedded acceleration and deceleration to yield a good outcome.

I would also say 6 months is a good number because there's a fear of 'topping out'. I.e. if you're at the peak happiness of the past 5 years you might get afraid of a larger mean reverting move. 6 months is short term enough not to be victim to noticeable accel/decel, but not too long to be subject to such existential thoughts that lead to unhappiness. 2 quarters is also a good timeframe for evaluation of back to back 3 month periods which seems like a relevant timeframe to most people professionally.

My meta question would be: does measuring one's happiness with a moving average make one more or less happy? 

Theo Brossard writes: 

I would posit that happiness exhibits behavior similar to market volatility: short-term clustering (which makes an exponential average a good choice; if you are happy today, chances are you will be happy tomorrow) and longer-term mean reversion (there must be some thresholds, defined by values and time; you can't be very happy or unhappy for prolonged periods of time).

Jim Sogi writes: 

A good way to study this is to rate and record your happiness each day. Also record your activities: exercise, diet, work, family, vacation, TV, meditation, etc. Over time you can correlate your happiness with the things you do. You could also correlate the day-to-day swings, as the Chair queries, in a univariate time series.
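A minimal sketch of such a diary study with simulated data (the activity effects below are invented for illustration; a real diary would supply the ratings):

```python
import numpy as np

# toy diary: 200 days of activity hours and a 1-10 happiness rating
rng = np.random.default_rng(0)
exercise = rng.uniform(0, 2, 200)                 # hours of exercise
tv       = rng.uniform(0, 4, 200)                 # hours of TV
# assumed ground truth: exercise helps, TV hurts, plus daily noise
happiness = 5 + 1.5 * exercise - 0.5 * tv + rng.normal(0, 0.5, 200)

# correlate each recorded activity with the happiness rating
for name, series in [("exercise", exercise), ("tv", tv)]:
    r = np.corrcoef(series, happiness)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```

The same table of daily ratings also supports the Chair's univariate question: lag the happiness series against itself and see how yesterday's reading predicts today's.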



"The stock market leads the economy, not the other way round"

Are we sure of this old bromide?

anonymous writes: 

Yes, the data support the conclusion. Even more so because we know the results of the stock market immediately, whereas we get the GDP number only quarterly, after a delay of months, and it is then revised three times.

Andrew Goodwin writes:

A statistical method for testing this theory with precise equations is given here for those who would care to update the work:

"The Stock Market as a Leading Indicator: An Application of Granger Causality"

To summarize the conclusion reached using this "Granger causality" method:

Our results indicated a "causal" relationship between the stock market and the economy. We found that while stock prices Granger-caused economic activity, no reverse causality was observed. Furthermore, we found that statistically significant lag lengths between fluctuations in the stock market and changes in the real economy are relatively short. The longest significant lag length observed from the results was three quarters.
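For those who would care to update the work: a Granger test reduces to comparing a restricted regression (y on its own lags) with an unrestricted one that adds lags of x. A numpy-only sketch with simulated data, in which a toy "market" leads a toy "economy" by one period (lag length and series are assumptions for illustration):

```python
import numpy as np

def _lagmat(s, lags):
    """Columns are lag-1 ... lag-`lags` of s, aligned to s[lags:]."""
    return np.column_stack([s[lags - k: len(s) - k] for k in range(1, lags + 1)])

def _rss(Y, X):
    """Residual sum of squares of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(Y)), X])
    beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    resid = Y - X1 @ beta
    return resid @ resid, X1.shape[1]

def granger_f(y, x, lags=2):
    """Does adding lags of x improve a regression of y on its own lags?
    Returns the F statistic of that restriction (bigger = more evidence
    that x 'Granger-causes' y).  Minimal numpy-only illustration."""
    Y = y[lags:]
    rss_r, k_r = _rss(Y, _lagmat(y, lags))
    rss_u, k_u = _rss(Y, np.hstack([_lagmat(y, lags), _lagmat(x, lags)]))
    df_num, df_den = k_u - k_r, len(Y) - k_u
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

rng = np.random.default_rng(1)
market = rng.normal(size=300)
economy = 0.8 * np.roll(market, 1) + 0.2 * rng.normal(size=300)
economy[0] = 0.0                      # discard the roll's wraparound term

f_fwd = granger_f(economy, market)    # market -> economy: large F
f_rev = granger_f(market, economy)    # economy -> market: small F
print(f_fwd > f_rev)
```

With real data one would compare the F statistic against its distribution for a p-value; statsmodels' `grangercausalitytests` does the same comparison with full test output.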

Stefan Jovanovich writes: 

"Is the causality relationship more consistent with the wealth effect or with the forward-looking nature of the stock market? The results from this project are consistent with both the wealth effect and the forward-looking nature of the stock market, but do not prove either. Another possibility for future research is to further evaluate where expectations about the future economy are coming from. Our results reveal that expectations for future economic activity are not simply formed by looking at the past trend in the economy as the adaptive expectations model would suggest. Expectations are being formed in other ways, but how?"

The argument for the "wealth effect": rich people's spending is the Keynesian pump that gets its money flows from the drift towards higher stock prices. The argument for the forward-looking nature of the stock market: the same one that applies to all asset and credit pricing, even those for "true" bills. The argument for "adaptive expectations" models: straight lines are easier to draw.

Stock prices go down because enough rich people think they will go down. God only knows what makes them decide to think that, even though they have all the lessons of the past to tell them otherwise.

As Eddy and her Mom and others remind me, my sarcasm can be a bit heavy-handed, obscure and unfunny.

Let me try again, now that Big Al (who has saved me from gold standard oops moments and other follies) has come to my rescue.

The Chair's drift is a fact of enterprise itself; people get richer because they figure out how to do things better, faster and cheaper, and the price of that know-how rises steadily because it is the means of producing more wealth.  (Marx was not wrong to focus on the means of production; he just left out distribution and exchange as the other necessary parts of the deal.)

The people the Chair left behind at Harvard, Berkeley and elsewhere share their own kind of Marxist illusion; they think that people can manipulate the way we all keep track of wealth - the unit of account, the interest rate on government debt - and have the manipulations produce further drift which will, in turn, somehow produce greater wealth.

This all reminds me of what a WW II veteran once told me about sharing a bivouac with the Russians while Truman, Churchill and Stalin carved up the world at Potsdam.  The Americans, with their wonderful energy, had set up tents and installed GI showers and faucets after running lines to the nearest pond with clean water.  After seeing the GI walk over to a faucet and turn it on to fill a pail of water to feed the radiator in his Deuce and a Half, a Russian soldier yanked off the faucet, walked over to the Russian side and defiantly banged it into a post.  He was enraged when he turned the tap and nothing came out.

Fat thumb correction:  stock prices go up and down because enough rich people take one side of the trade or the other that they change the price of wealth expectations for that particular company. There is no way of knowing what their particular "reasons" are; markets are part of Heisenberg's universe.

Bill Rafter writes: 

Allow me to come into this party late and probably tick everybody off. What drives markets most of the time (i.e. 90+ pct.) are two things: momentum and sentiment. If you have a handle on those you can make money. Probably the same two things drive the economy, but you cannot make money trading the economy, as the data coming out of the economy is more lagged than the data coming out of the markets. Hone your skills where they can count.



The Monthly Treasury Statement for July has just been published. Of particular concern is the Hospital Insurance (Medicare tax) payments for self-employed enterprises. They continue to languish.

Historically there are no direct causal relationships between this data and equity prices. That is, no one is going to see this data and draw any connection to equities. Most people have no idea that the data exists, and following it is problematic for most (especially financial journalists). The safest thing one can say is that the data does not support any rumors of a renaissance in ultra-small (self-employed) businesses. But you knew that, didn't you.



By the way, I believe it might be a subject of speculation whether Mr. Simons and his colleagues have found anomalies that they can still exploit, as they might be much too big, and there is much too much competition from other humble anomaly seekers. Yes, as Mr. Harry Browne would say, as described by the true believer below, their pantheon of geniuses soars on a much higher level of cognition than mine or that of any of my colleagues or hundreds of followers, but then again superior intelligence isn't everything. And aside from the profitability of market making, as first enumerated by M.F.M. Osborne, it might be difficult to capture anomalies on a systematic basis that the competitors in St. Louis and other small venues might have missed, no matter their profundity.

Anatoly Veltman writes: 

Does this also answer the query as to WHY would Virtu decide to go public?

A true believer writes: 

If there is anything whatsoever to the legion of gambling analogies to markets, market ecology and human endeavor then most of the chips will end up in very few hands.

The Medallion Fund represents the very apogee of human brilliance so applied to financial markets.

What is more likely, that there is something rotten in Denmark? Or that the combined work of pure genius including:

James Simons

Elwyn Berlekamp

Robert Frey

Henry Laufer

Sean Pattison

James Ax

The whole 'European Contingent' - I will not list those names here.

Plus a host of mere 'worker ants' cleaning data, programming testing machines and keeping the lights on.

Might just have come up with the single best group of high capacity strategies ever known.

We should all celebrate this achievement. It represents everything this list is about, surely?

Trying to pick holes in something like this is the equivalent of the Barron's columnist being bearish for 30 years on U.S. stocks.

My belief and optimism is based on facts, not some idol worship groupie phenomenon.

anonymous writes:

Is one allowed to agree with both the True Believer and the Chair? What Simons and the others did was pure genius–they used mathematics to identify the consistent anomalies that occur when people buy and sell securities. Those of us who lack their pure brains and mathematical chops marvel at what they have accomplished and have done our best to create a glacially slow mimicry using employment data and their correlation to the business cycle. (They are playing Scarlatti the way Michelangeli did; I am playing chopsticks hitting one key a month.)

But, as Vic notes, the question is whether or not there remain any arbitrage opportunities left now that those anomalies have been examined in such detail for decades by the far greater number of smart people who have come after the folks at Medallion.

Bill Rafter adds: 

Like others, I agree with both the Chair and Shane. The question then is "how much juice is left in the fruit?" As Stefan says, he gets one a month.

I would posit that it is a question of time frame. Certainly the HFT opportunities are gone for us simple folk, and maybe much of the day trading. But there are still anomalies if we are willing to accept less certainty and leave our bets on the table a little longer. After all, realize the prop shops do not want their worker bees to have an overnight position. Which means those of us willing to have such a position will have an automatic edge. As an example, compare the Open to Close returns to the Close to Open returns of certain derivatives. There's an edge, less than it used to be, but still there, and the edge favors the overnight holders.
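To make the comparison concrete, here is a minimal sketch (with invented toy prices, not data on any particular derivative) of splitting daily returns into the open-to-close and close-to-open components Bill describes:

```python
import numpy as np

def intraday_vs_overnight(open_px, close_px):
    """Split daily returns into intraday (open -> close) and
    overnight (prior close -> next open) components."""
    o = np.asarray(open_px, dtype=float)
    c = np.asarray(close_px, dtype=float)
    intraday = c / o - 1.0            # session return
    overnight = o[1:] / c[:-1] - 1.0  # gap return; one fewer observation
    return intraday, overnight

# toy series in which all of the movement happens overnight
opens = [100.0, 101.0, 102.01]
closes = [100.0, 101.0, 102.01]
intra, ovn = intraday_vs_overnight(opens, closes)
```

Comparing `intra.mean()` against `ovn.mean()` over a long real history is the test Bill proposes; the edge, if any, shows up as a persistent difference between the two means.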

Also, we simple folk cannot expect to outperform by trading only SPY (or perhaps its overleveraged sisters), the most competitive and liquid of assets. The greatest returns have always been in the least liquid of assets. 

Shane James replies: 

I see no disagreement with the Chair on this thread. As with the Chair, myself, Medallion, DE Shaw, Citadel and all such people interested in trading from all walks of life - we shall continue to look at new angles, different ways of splicing the available information amongst much else. Medallion too will do this. The outcome? Only the shadow knows.

On this next point, the Chair, myself and anyone with half a clue will be in violent agreement - it is always best to be the bookie. The RenTech entity, at the last count when the info was still public, collected an 8% management fee and 45% performance fee (I may be off by just a little here).

To use a collection of letters used by my children to describe this: OMG.

It's good to be the king. 

Jim Sogi writes: 

Much of what they have done is computer science not just math. It also has to do with understanding and moving or changing and understanding and exploiting regulations at the exchanges. In a competitive environment, there will always be an edge available somewhere. They change and move, but there is always opportunity in change, the change in others, the rate of change, the unforeseen effects of changes. I think there is opportunity for the slow and small as well. Computers are stuck with their algos. They leave tracks, patterns, singly and as a group. The markets are complex, and no person or computer knows exactly how it works, though they may find opportunities in complexity. There are always effects of effects of effects, unknown to the actor. Waves spread out from every action.



I once asked of the Chair, is it really worth it to trade markets not based in the United States? We decided that it was an 'interesting' question.

Taking this further it is of much interest to calculate the relative stability of markets. 'Stability' can be measured in many ways and I leave it to the reader (if there are any) to think about this point further.

For example:

1. Are US T-notes more stable than their international peers?

2. Is the S&P 500 more stable than its international peers?

3. Does relative stability explain why the regularities extant in U.S. markets are often massively more persistent than those for similar markets 'overseas'?

There are some interesting things to look at if one believes that the U.S. markets are at the beginning of the chain that moves other markets.

Clearly the more 'stable' market and the market at the beginning of the chain changes from time to time but my supposition is that it takes some great measure of 'statistical crisis'– for lack of a better term– to upset the U.S. market's hegemony even temporarily.

Bill Rafter writes: 

Presumably stability is the opposite of volatility, but there are a lot of ways to count volatility. And of course there is the question of "over which period?" I'm only guessing of course, but I'll bet that John B would define stability as staying within N standard deviations of a moving mean. And that also begs the question as to the period considered. Should the period be static or floating?
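To make that guess concrete, here is one naive way to score "stability" as the fraction of days a price stays within N standard deviations of a trailing moving mean. This is my own sketch of the idea, not John B's actual definition; the 20-day window and 2-sigma band are arbitrary choices, which is exactly the "over which period?" problem:

```python
import numpy as np

def stability_score(prices, window=20, n_sigma=2.0):
    """Fraction of observations staying within n_sigma standard
    deviations of the trailing moving mean -- one crude stability
    measure (window and n_sigma are arbitrary, static choices)."""
    p = np.asarray(prices, dtype=float)
    hits, total = 0, 0
    for i in range(window, len(p)):
        trailing = p[i - window:i]
        mu, sd = trailing.mean(), trailing.std()
        if sd == 0:        # flat window: band undefined, skip
            continue
        total += 1
        if abs(p[i] - mu) <= n_sigma * sd:
            hits += 1
    return hits / total if total else float("nan")
```

A steadily trending series scores 1.0; a series with a sudden jump scores below 1.0, which matches the intuition that "bad things are sudden."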

Ideally markets that are more stable would attract more portfolio holdings. That is, there would be a stability premium, or alternatively a cost of volatility. If there were two assets priced at $10 and you knew (don't ask how) they would be priced a $20 at a given point in the future, which do you buy for the portfolio? Obviously the more stable of the two since you may have the need to liquidate before the end of the period. In theory the more volatile one would be discounted vis-à-vis the more stable one. With stocks the end certainty is less defined than with bonds.

The original question implied that the investor/trader was looking to be long country markets that were more stable.

Let's suppose that you believe the country ETFs represent their respective markets. Then you could rank those ETFs by inverted volatility. We have done that after first ranking them by other means. We then would have say 10 ETFs that we would like to own, and make a final selection of a few according to inverted volatility. Alternatively it also makes good sense to buy the entire 10, but with different percentages of your equity.
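A minimal sketch of that ranking-and-weighting step follows. The tickers and return figures are invented, and the real pre-ranking "by other means" is not shown:

```python
import numpy as np

def inverse_vol_rank_and_weights(returns_by_symbol):
    """Rank a candidate list by inverted volatility (most stable first)
    and derive weights proportional to 1/volatility, the alternative
    of owning all candidates in different percentages."""
    vols = {s: float(np.std(np.asarray(r, dtype=float)))
            for s, r in returns_by_symbol.items()}
    ranked = sorted(vols, key=vols.get)          # ascending volatility
    inv = {s: 1.0 / v for s, v in vols.items() if v > 0}
    total = sum(inv.values())
    weights = {s: w / total for s, w in inv.items()}
    return ranked, weights

# hypothetical daily returns for three country ETFs
candidates = {
    "AAA": [0.001, -0.001, 0.001, -0.001],
    "BBB": [0.010, -0.010, 0.010, -0.010],
    "CCC": [0.020, -0.020, 0.020, -0.020],
}
ranked, weights = inverse_vol_rank_and_weights(candidates)
```

The least volatile candidate tops the ranking and receives the largest weight; SPY would simply be included as one more symbol, serving as the tracer bullet.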

Does that work? Yes, it is more profitable than holding SPY, but not exciting, such that we don't charge for it. We always include SPY in such rankings, as a tracer bullet. The really interesting thing is that SPY never rises to the top of the daily rankings.

We also have the problem of "over which period". One consideration would be to rank all the country ETFs according to the same period, as though China and the U.S. should be compared by the same time standard. That would seem correct if the account owner had a specific time need. Another consideration would be to let each country ETF dictate the period for comparison. But then you might have the input time for Australia being ranked over two years, with SPY only ranked over two months. That would seem correct if the investor was more of a speculator.



I plan to research a few trading strategies based on Commitments of Traders data. Any beliefs (positive or negative) about these concepts? Has anyone tried to systematize it?

Bill Rafter writes:

Many have researched the Commitments of Traders Reports. If you really want to pursue this I suggest you go into B-school libraries and review titles of unpublished theses for tips. There is little of value to be found in the "popular" literature.

When researching be mindful that you relate the positions both to the market tradedate-wise to test for significance, as well as relating them to the market releasedate-wise for your profitability. One guy who sells CoT data gets this distinction horribly wrong. Collect your own data.
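A bare-bones sketch of the date alignment being warned about: the Tuesday-report/Friday-release lag reflects the typical CFTC schedule, and the positions below are invented:

```python
from datetime import date, timedelta

# CoT positions are measured as of the report date (a Tuesday) but only
# become public on the release date (typically the following Friday).
report_dates = [date(2024, 1, 2), date(2024, 1, 9)]   # Tuesdays
large_spec_net = [1200, -300]                          # invented positions

RELEASE_LAG = timedelta(days=3)   # Tuesday report -> Friday release

# Significance testing: align positions with the report date.
significance_series = list(zip(report_dates, large_spec_net))

# Tradable backtesting: positions are usable only from the release date.
tradable_series = [(d + RELEASE_LAG, p)
                   for d, p in zip(report_dates, large_spec_net)]
```

Testing profitability against the report-date series is the look-ahead error described above: you would be trading on positions three days before anyone could have known them.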

Most researchers tend to focus on identifying the winners by group, and following them. I would posit that the winners vary by group and are less consistent than you would like. Instead, I suggest that you identify losers by group. You will find much greater consistency with regard to losers.

Anecdote: I used to study the CoT for non-obvious trading opportunities. Once I found a situation where the Large Specs had gone from short to long over one reporting period, while the non-reporters (i.e. small traders) had gone from long to short at the same time. [N.B. little guys tend to do poorly on the short side.] This was in the Oats market, which I generally ignored. The Large Hedgers had not changed significantly. Also, from the reporting date to the release date there had been no market movement. I then called everyone I knew with grain knowledge but learned nothing. (It's important to look for orthogonal information.) Sadly I did not know Jeff at the time. What the hell, I bought a lot of Oats and put on even more Oat spreads (long the near). Within the next month Oats and their spreads moved significantly, giving me a great year, new car, etc. And I never learned the reason for the market's move.



Consider, say, 5 related macro markets, one of which is the dominant market in terms of influence upon the other four.

Further assume that your own individual Rosetta Stone tells you to buy the 4 less dominant assets first but the same methodology doesn't get long the main market until later in the microsecond, second, minute, hour, day, week, month, year. (my, we are inclusive of all on this site, aren't we!)

Anyway, the issue to consider is this:

Is it more efficient to buy all 5 assets only when the 'influential' asset signals? The qualitative argument being that if the influential asset keeps declining then one should wait on the other four.

After an enumeration here, and considering the relatively short holding periods concerned, it makes more sense to just do all the trades as they occur, 'influential' market be damned.

In terms of percentage attribution of profit or loss amounts there appears to be no persistent profit from waiting. An interesting question might be, is it a good idea to add to the other four when the main market signals….

In the context of relatively short term trading, there appears to be a plethora of cross market vicissitudes– more than enough to compensate for not having the support of the 'main' market.

Bill Rafter comments: 

If "the Four" always lead "the Main", then the Main as a signal is irrelevant for the Four. The Main then should always be bought ahead of its signal (which is a foregone conclusion). This is aside of any portfolio/diversification/size considerations. If you waited for the Main you would seem to be missing some profit on the Four. As you stated, there seems to be no profit in waiting. You should therefore treat the Main as an independent signal on its own.

Be cautious that you have not stacked the deck against the Main. A silly example (but one practiced by many) is to have one signal determined by looking back over say 20 periods, and another looking back over 40 periods (or 5-minute bars vs. 30-minute bars). In this example you will have stacked the deck in speed against the 40-period/30-minute lookback. The novice then claims he needs to wait "for confirmation". All he has done is to nullify the earlier signal. If the earlier one is always/mostly right, his process is inefficient.

Two other considerations:

The use of signals in some markets to trade other markets. The common example here is to use the inverse of bonds to generate an equities signal. Be aware that signals of "opposite" markets rarely occur simultaneously. Some traders would benefit from knowing which comes first, the exit or the new entry. Think about it: it should be obvious.

Our experience is that some signal always leads, but the leader changes. And of course there are false positives. One solution is to have them vote, but in doing so you will always be after the leader. Considering that the greatest improvements in track records come from the reduction of losses rather than outright gains, it seems prudent to trade a little of the upside for less downside. But that is for each to decide, hopefully after testing.



Is this really true in general?

"The most important thing you need to know about commodities" :

If you have traded stocks for a while, you probably have a sense of when a move has gone far enough to be due for reversal, and you're probably used to seeing longer term positions more or less alternately green and red on the day over any reasonable stretch of time. Be careful, because these (correct) instincts will work against you in commodities, which can trend and trend and trend and end in blowoff moves that go far beyond what anyone expected. Simply put, if you come to commodities from a stock trading background, temper your urge to fade moves…

There was a time in market history when S&P 500 traders (experienced, professional traders) flocked to the soybean pits to daytrade, thinking they could apply their ability from one market to another. That incident ended badly for the S&P traders (but very well for the locals in beans!).

Bill Rafter comments: 

Futures are mean-reverting in the shorter run, and that also applies to equity indices. Much less so with individual equities. That being said, that statement does not apply to squeezes in either. Futures moves tend to be linear, whereas stocks and their indices tend to be parabolic. There are logical reasons for these, but not enough room here to write them.



 There is some nice WSJ commentary about Patrick O'Brian today.

"A Centenary Salute to Patrick O’Brian":

Aubrey is an apostle of duty, an advocate of order, and yet he knows that leading his men depends less on his power to punish them than on his power to inspire. Maturin has a far greater appreciation of freedom, rebelliousness, even anarchy, and yet possesses a fierce sense of right and wrong. Together they embody the values of freedom and democracy that allowed Britain to lead the world.

First section, back with the editorials.



 "GCHQ Launches Cryptography App for Budding Codebreakers"

I have not yet seen the Cumberbatch flick Imitation Game and was wondering if it gave any credit to the Poles, who had cracked the first generation of the Enigma. Prior to 1938 there was a disgruntled German turncoat who provided intel to the French (who shared it with the Brits). Both the French and Brits were stymied, and passed what they considered useless intel to the Poles, who then cracked Enigma. For years the Poles managed to read everything put out by the Germans, and even created a mechanical device to do the work. Then the Germans increased the number of rotors from three to five, and the plug-connections from six to 20, requiring huge additional work. [See Technical Details of the Enigma Machine]. Two weeks before Poland was invaded the Poles gave the Allies what they had on Enigma, shocking them. Without that head start the Bletchley Park effort would have failed.

The market parallel to this is that someone else's research castoffs may be useful to you. Just because someone else has failed to find significance does not mean you cannot gain utility. Our own most useful tool was a castoff from someone else who failed to make it work.



The normal pattern for INDEX options open interest is for the OI of puts to exceed that of calls. It happens more than 90 percent of the time. It's a bit easier to see if you smooth the data, recognizing that it has a 21-day periodicity. But from approximately January 2013 to September 2014 call index OI exceeded put index OI (or was close enough to be indecisive). Since late September the pattern has reverted to historical.
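A sketch of that smoothing step, using synthetic open-interest numbers with an artificial 21-day cycle (real OI data would come from the exchange):

```python
import numpy as np

def smooth_oi(series, period=21):
    """Moving average over exactly one 21-day expiration cycle, which
    cancels the monthly periodicity in option open interest."""
    s = np.asarray(series, dtype=float)
    return np.convolve(s, np.ones(period) / period, mode="valid")

t = np.arange(120)
cycle = 50.0 * np.sin(2.0 * np.pi * t / 21.0)  # 21-day periodicity
put_oi = 1000.0 + cycle    # index puts: higher base level (the norm)
call_oi = 800.0 + cycle    # index calls: lower base level

put_s, call_s = smooth_oi(put_oi), smooth_oi(call_oi)
```

Because the averaging window spans exactly one cycle, the periodic component cancels and the underlying put-versus-call level difference stands out, which is the point of matching the smoothing period to the periodicity.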

N.B. the OI pattern for individual equities is that calls outnumber puts, all the time.

A return to normalcy?



A disturbing chart: "This is Probably the Second Worst Time in History to Own Stocks"

Bill Rafter writes: 

The trouble with the chart is that the regression fit was done cumulatively, resulting in older data being subject to look-ahead bias. Thus only the current values are useful, and one wonders exactly how useful. As Steve has commented, the way to foil that is to use a moving regression fit in which the values are static over time, always taking the last point in the fit. Thus all data, past and current are relevant and can then be used in statistical studies.
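A minimal version of that "moving regression" idea, keeping only the last point of each trailing fit so that no historical value ever incorporates future data (the 10-point window here is arbitrary; Steve's graph used 30 years):

```python
import numpy as np

def moving_regression_last_point(y, window):
    """For each date, fit a straight line to the trailing `window`
    observations and keep only the fitted value at the final point,
    eliminating look-ahead bias in the historical series."""
    y = np.asarray(y, dtype=float)
    x = np.arange(window, dtype=float)
    out = []
    for i in range(window, len(y) + 1):
        slope, intercept = np.polyfit(x, y[i - window:i], 1)
        out.append(slope * (window - 1.0) + intercept)
    return np.array(out)

# on an exactly linear series the endpoint fit reproduces the series
y = 2.0 * np.arange(50) + 1.0
fitted = moving_regression_last_point(y, window=10)
```

Contrast this with fitting one regression over the full history: there, the 1998 value of the trend line "knows" about 2014, which is exactly the bias being criticized.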

The question that then comes up is which lookback period do you use. Wherever possible all lookback periods should be adaptive, the question then being to what input. In shorter term price data the market will tell you the relevant lookback period. I have never tried determining lookbacks for longer term data because (a) I don't expect to live long enough to take advantage of it, and (b) too many things can happen in the short run to screw up a good plan. Most people don't marry someone in their 20s based on the supposition that (s)he will look good in their 70s.

I also question the use of any equity or debt data prior to 1972. If you don't know why, ask Stefan. That's one of the great things about the list; there are sources for just about everything.

Several moving functions you should consider:

Moving linear (i.e., regression) fits and their slopes.

Moving parabolic fits and their slopes. Since most economic and price data are parabolic, this is the better of the two. There is also something to be gained in the difference between a parabolic fit and a linear fit. Fitting parabolas is quite tricky, and it took us a while to code it. If you try to do so and want a check on your efforts, try fitting a parabola to a straight line. If the result is ludicrous, try a different method.

Moving correlations are particularly interesting between markets that might be alternatives to one another. Moving correlations between stocks and bonds (levels to levels) are something we have used for years and continue to do so. I thank Gibbons for his comment that Colby & Myers recommended them, as I had not been aware of that. (I'm not a fan of C&M.)
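Two of those moving functions, sketched below. Nothing here is Rafter's proprietary implementation; the straight-line check is simply the sanity test he suggests for a parabolic fitter:

```python
import numpy as np

def fit_parabola(y):
    """Least-squares parabolic fit: returns (a, b, c) of a*x^2 + b*x + c."""
    y = np.asarray(y, dtype=float)
    return np.polyfit(np.arange(len(y), dtype=float), y, 2)

def moving_correlation(x, y, window):
    """Trailing correlation between two level series (e.g. stocks
    and bonds), one value per date."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.array([np.corrcoef(x[i - window:i], y[i - window:i])[0, 1]
                     for i in range(window, len(x) + 1)])

# Sanity check from the text: a parabola fitted to a straight line
# should degenerate gracefully -- quadratic term ~ 0, linear term = slope.
a, b, c = fit_parabola(3.0 * np.arange(30) + 7.0)
```

If your own parabola fitter returns something ludicrous on a straight line, that is the signal to try a different method.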

Gyve Bones responds: 

Colby and Myers didn't recommend the linear regression study per se… the empirical analysis simply showed that study to perform best with a fixed lookback parameter over NYSE index returns data over a long period of time compared to other trend following signal generators. This book was an early attempt to quantify different approaches to see how they performed, trying as best as can be done to compare apples to apples. In the mid-to-late 80s, it was the best thing that had been done like that since Dunn & Hargitt's study using punch card futures data in the late 1960s (which found that the Donchian Four Week system was best, the system which launched a thousand CTAs, including the Dennis Turtles and their spawn.) Another similar study was done in the 90s by Jack Schwager and another fellow whose name escapes me at the moment, which was well done.

Larry Williams adds: 

A question: when was the regression line fit? Today? 20 years ago? 50 years ago? The slope will change based on your starting and end points. How overbought or oversold is a function of this. A more careful analysis would either apply this same "method" every year with a set of rules (i.e., sell above x% overbought) or would do the same thing on a rolling window basis. It's an interesting chart nonetheless and gives one pause, but I would suggest it lacks a certain amount of rigor. 

Gibbons Burke writes: 

It seems to me that this is a flawed chart to look at historically to make rules from because the trend line drawn into the past contains information about the future. The line is drawn using the linear regression of the entire data set so, for example, the line segment covering 1998-1999 "knows" about what happened in 2014. Very deceptive and misleading to make a rule based on the relationship of the data to the trend line.

Victor Niederhoffer comments: 

The disturbing chart is a case study of why charting is so misleading, because of the regression bias and also because the variance of a sum is the sum of the variances. 

Steve Ellison says:

Here is the way to solve the problem of the regression line incorporating future data. Attached is a graph of a "moving regression", as Dr. Rafter calls it. For each date, the red point is the last point of a 30-year regression of the S&P 500 as of that date (the graph is from 2010).



Hefty relative changes in the Monetary Base and hefty relative changes (i.e. "corrections") in the S&P seem to be related. Sometimes the former leads, and sometimes it lags. Unfortunately (for the statistical researcher, as opposed to the Optimists) there are not that many examples. The question: is the current relative decline in Monbase related to the admittedly small SPX correction we have already experienced, or is there more to come? Is there anyone here skilled at looking around corners?



Bill McBride published this interesting piece on wage growth in the US.

On the one hand, one might argue that this is a surefire harbinger of inflation. On the other, some wage growth might carry with it some opportunity for increased spending (save? in this country??). Some top line growth would, I'm sure, be appreciated by one and all.

And that assumes that there really is wage growth going on. At best, the jury's still out on that one.

Bill Rafter writes: 

Wage growth has not been underestimated. Payroll tax receipts suggest otherwise. The latter do show some signs of coming back from the grave, but absolutely nothing to get excited about.

Regarding inflation, there are two forms of money growth that have to be monitored: that originated by the Fed known as the Monetary Aggregates, and that originated by the banking system known as fractional reserve lending. The aggregates are the Monetary Base, M2 and MZM. The lending data are commercial and industrial loans. The planned growth of the aggregates is designed to limit deflation. Inflation will not proceed apace until you get a growth in loans. So if you are worried about inflation, at this time all you have to watch is the loan data.

Aggregates and loan data are available on the FRED site. Payroll taxes are on the Treasury site.



 The Riddle of the Labyrinth by Margalit Fox is a great book describing the decipherment of Linear B, a Bronze Age pre-Homeric script found originally on tablets in the Palace of Minos on Crete. If that is of interest to you, this book will reward you. For me it was a quick and exciting read. If you are a Sherlock Holmes fan, chances are you will enjoy it.

The decipherment of Egyptian hieroglyphics was solvable once the Rosetta Stone, which contained a translation into Greek, was found. Linear B, however, looking like stick figures or the runic alphabet, had no comparable Cliff Notes.

But I also found the book an excellent guide for anyone interested in doing research on market behavior. The parallels between the two were uncanny. To decipher Linear B required pattern analysis, counting and frequency analysis before there were computers to make those tasks easier. We have computers to aid our decipherment of the markets, but the process of creating a framework to do the research is the same. A lot of setup and then lots and lots of actual work.



 "Nobel winner Fama: Active management 'never' good":

Eugene Fama, the University of Chicago investing researcher who won the Nobel Prize in economics last year, once again warned investors against the lure of active management.

"The question is when is active management good? The answer is never," Fama said to laughs Thursday at the Morningstar ETF Conference in Chicago .

"If active managers win, it has to be at the expense of other active managers. And when you add them all up, the returns of active managers have to be literally zero, before costs. Then after costs, it's a big negative sign," Fama added.

He's known as the father of the efficient-markets theory, which says that asset prices reflect all available information; investment managers can never truly get an edge.

Fama dismissed the idea that it was possible to pick the best managers.

"The good ones might be good or they might be lucky. The bad ones might be bad or they might be unlucky. We can't really tell the difference," he said. "I don't know if it would ever make sense, even if the fees were zero, I don't think you'd be better off because you'd be investing in an undiversified way."

Asked about Warren Buffett's long-term record of picking good companies, Fama said the Berkshire Hathaway (BRK-A) chief actually agreed with his index-based thesis. Buffett said recently he actually has directed much of his fortune to be placed in passive index funds after he dies.

"He's, like, my hero," Fama said. "What he says is, 'I can pick a company every couple years, but if you have to form a portfolio, you're better off going passive.'"

"All the behavioral people say the same thing," Fama added. "In the end, they realize that the game of doing something active is fraught with problems."

Fama was also asked about hedging against big crashes, like what happened to the markets in 2008. Attempting to protect against them, he said, was the unwinnable game of market-timing.

"If you sold when the market crashed, you made a big mistake, and if you saw it coming you're a genius," Fama said.

Gary Rogan writes:

Everything that The Sage deems right and proper will happen after he dies, the charities, index investing, who knows what else. I guess it's no longer politically correct to say "Après nous, le déluge".

The statement "If active managers win, it has to be at the expense of other active managers. And when you add them all up, the returns of active managers have to be literally zero, before costs" is probably mostly correct, but given that some active managers are also activist managers it's not completely correct. Also, imagine that every single person in the world was an index investor: that would be an absurd situation in which nothing in particular but the inflow of new money would determine the price of all stocks. And still, if the index is just the average of all managers, aren't some managers better than indexing? At the very least Fama could say that no person is capable of either being or choosing a better-than-average active manager, but he isn't actually saying this.

Bill Rafter writes: 

That's a poor logical argument by the good professor. While Dr. Fama may be right that before costs the average return of all active managers must be zero, clearly it is possible (if not likely) that there will be serial winners and losers. Speaking only of the latter, several years ago we were asked to propose solutions to a shop that had managed to underperform the S&P for every one of the prior 15 years. They did not like our proposals and also rejected proposals from other research providers, continuing with their own methods. They are now 0-18 versus the S&P. Since it is possible for some to get this investment "thing" totally wrong, it is perfectly logical to assume that some others have better than average performance with consistency.

anonymous writes: 

In the case of Buffett you might ask: cui bono? His non Berkshire index assets could fill an Omaha thimble. Is it not the same press release as Betfair put out about their fixed odds versus exchange book on the Scots referendum?



Would anyone advise on how to determine backtesting periods?

I presume one should choose the most recent period because it may better correlate with the present situation. But is that really true? If it is, then how far back should one go, and how far into the future can it correlate? My experience seems to say that a short backtest period leads to a very short-lived prediction, or even a very poor one. On the other hand, a longer period often leads to poor performance in the present.

Shane James replies: 

At the Spec Party I had the privilege to spend a reasonable period of time one to one with the remarkable Sam Eisenstadt.

His work is likely one of the best examples of creative thought in the history of financial markets. He explained to me that there wasn't much backtesting to what he/they did. He came up with some principles that made sense to him and started applying them in real time.

Now, in our so called modern world, things may have moved on (Sam graciously stated as much to the room when he was giving his views on the modern markets). HOWEVER, maybe not so much…..

Try this:

1. If your trading idea has an average holding period of a few days (preferably less) then start from today and run it in real time for the next 90 days or so. By definition, the prices upon which you are testing your ideas did not exist when you had the idea so you have already eliminated most bias if you do this.

2. If you are happy with the structure of the returns (win, lose or draw) then consider if the results were biased by any factor during your live test phase and if related to long only stock index trading then make the requisite adjustments for drift.

3. Perhaps now consider a backtest.

The point being that I think it makes sense to test on data that did not exist BEFORE you perform the backtest.

Some like to 'exclude' certain data and 'pretend' it didn't exist so they can assume that the excluded data is 'out of sample'. For instance they may take 10 years of data and use the odd-numbered years as test data and the even-numbered years as 'out of sample'. This might be a reasonable way to make yourself feel more comfortable, but there is an intangible and very difficult to explain benefit to performing the kind of 'spontaneous' testing set out above, on data that did not exist at the genesis of your idea, before one starts seeing how well a set of heuristics performed in 1971!
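For reference, the odd/even-year holdout scheme Shane describes (and is skeptical of) looks like this in code; the dates and values are invented:

```python
from datetime import date

def odd_even_year_split(dated_rows):
    """Split (date, value) observations into odd-year test data and
    even-year 'out of sample' data -- the pretend-holdout scheme
    described in the text, for comparison with genuinely live testing."""
    test, holdout = [], []
    for d, v in dated_rows:
        (test if d.year % 2 == 1 else holdout).append((d, v))
    return test, holdout

rows = [(date(2019, 6, 1), 0.4), (date(2020, 6, 1), -0.2),
        (date(2021, 6, 1), 0.1)]
test_set, holdout_set = odd_even_year_split(rows)
```

The contrast with Shane's suggestion is that here the "out of sample" data already existed when the idea was conceived, whereas the next 90 days of live prices, by definition, did not.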

Leo Jia responds: 

Hi Shane!

Thanks very much for the valuable advice.

Wow, Mr Eisenstadt! I would really love to thank him for my early success stories with referencing the Value Line. But I guess it wouldn't matter to him as he might have heard from too many!

Talking about my early experience (back in the 90's), I actually had been using your suggestion all along. There was never backtesting for me — I got an idea and went to buy the stock the next day. It actually worked well overall.

Should I go back to doing it the "novice" way? Now that you mention it, that becomes a question worth thinking about. Perhaps this is one of those valuable lessons where, having struggled enough with complex ways, one discovers that the neglected simple way is far superior. In Chinese culture, Tai Chi can be considered one of those "simple ways".

Now, a couple questions about your suggestion.

1. By putting a new idea directly live, what problem is one trying to solve? Is it the concern that a poor backtesting result may make one throw out a potentially good strategy? And is this concern rooted in the belief that past data already differ from the present situation?

2. In what ways can an idea that seems to come from nowhere be better than the many ideas one gets by studying historical data? I know inspirations are invaluable, but one rarely gets inspirations that are not the result of study. So beyond the mistrust of correlations between past data and the present situation, are there any other reasons?

Thanks again for your thoughts.

Bill Rafter writes:

I am sorry to jump into this discussion late, but I think there are a few points that can still be brought up. Looking for beta over a constant period of time (say 6 months) is somewhat meaningless and useless. It's a bit like describing a man with one foot in a fire and the other in ice as being at a tolerable temperature. You have fat tails with market volatility, and a static window might be good for a journalist, but it is of limited value for a trader.

At a given time there is a time period over which the study of a market’s behavior will be significant.  And let’s say that at this time it really is 6 months, or 126 trading days.  Assuming no real changes, tomorrow that time window will be 127 trading days, and so on until you get a market change.

When the sea does change, bad things can happen in a hurry, and the beta value for the preceding 6+ months will be of little use. Within the last week this happened with biotech: it had been happily chugging along with good but not extraordinary outperformance of the indices. Then it got clobbered with huge relative volatility to the downside. Had you been adapting your monitoring of volatility you would have been prepared, whereas had you stuck with your 6-month window you would have been clobbered along with the group.

My advice to you is to learn how to deal with the market adaptively.  I assure you that if you have a monitoring mechanism which you like, if you make it adaptive you will improve results dramatically. And it doesn’t matter which signal type (momentum, volatility, sentiment) or time frame (intra-day to weekly) you favor.
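Rafter doesn't spell out his adaptive mechanism, so here is only a minimal sketch of one way a lookback window could be made adaptive: shrink it when recent volatility spikes relative to the longer run. The function name, the 126/21-day windows, and the scaling rule are all my own assumptions, not his method:

```python
import statistics

def adaptive_lookback(returns, base=126, short=21, floor=10):
    """Scale the lookback window by the ratio of long-run to recent
    volatility: when recent volatility spikes, the window shrinks so
    stale observations stop dominating the estimate.
    Illustrative parameter choices only; not Rafter's actual method."""
    long_vol = statistics.pstdev(returns[-base:])
    recent_vol = statistics.pstdev(returns[-short:])
    if recent_vol == 0:
        return base
    window = int(base * long_vol / recent_vol)
    return max(floor, min(base, window))

# Example: a quiet series keeps the full window; a volatility spike shortens it
quiet = [0.001, -0.001] * 70                                    # 140 small daily returns
spiky = quiet[:-21] + [0.03, -0.04, 0.05, -0.03] * 5 + [0.02]   # turbulent last 21 days
print(adaptive_lookback(quiet))   # 126 (full window in calm conditions)
print(adaptive_lookback(spiky))   # much shorter window after the spike
```

Any monitoring statistic (momentum, beta, sentiment) could then be computed over the returned window instead of a fixed one.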



 A hundred years ago Milutin Milankovich, a Serbian scientist/engineer, didn't have much to do as he was a POW held by the Austrians. So he calculated the pre-historical temperatures of the Earth, based entirely on planetary distances to the sun. Several other scientists persuaded him to go back quite far in time and eventually he calculated the temperatures back a million years. Of course at that time there was no way to prove his work, until in the 1970s data from Antarctic ice cores became available. It turns out his calculations were very accurate, as were similar calculations for Mars and Venus.

If someone a century ago could calculate Earth's temperature a million years ago, the global warming claims of one camp seem to lack significant credibility.

Stefan Jovanovich writes: 

Milankovic's theory is this: "variations in eccentricity, axial tilt, and precession of the Earth's orbit determine climatic patterns on Earth"

The theory of the warmist researchers is that "the addition of combustion gases - most importantly, CO2 - from man-made uses of energy to the earth's atmosphere determine climatic patterns on Earth".

The reason for the falsifications of data by warmist researchers (I assume here that no one denies that these have occurred) is that the theory of man-made global warming requires a dramatic increase both in temperature and in CO2 levels during the period when people have been burning stuff. If that cannot be found, then the theory has to contend with the very data that Al Gore found so persuasive (the Vostok ice core samples) and explain why CO2 level increases seem to be a result rather than a cause of the rise in the earth's surface temperature. That non-modeled data (i.e. the ice cores were actually dug out of the earth, not created in a computer model) is inconvenient and true. The Vostok data shows that changes in temperature always precede the changes in atmospheric CO2 by about 500-1500 years.

The usual rebuttal to this evidence and the fact that its data is entirely consistent with the Milankovic theory is something like this: "yes, it's true there is a delayed correlation; but that ignores the more important fact. Once the rise in CO2 levels start, they take over as the most important climate force."

But here, too, the actual non-modeled data presents a problem; the declines in earth surface temperatures that begin the "ice ages" occur precisely when CO2 levels are at their highest. If the Hansen theory's forces are so strong and can overwhelm the mere changes in the Earth's orbit, then how can the 'weak' signal start an Ice Age when the strong Hansen signal says the opposite should be occurring?

The answer to that, of course, is the usual ad hominems that are the ever available rhetoric of the progressive mind: (1) you don't understand, (2) you haven't read our secret data and (3) you are too stupid to understand these things.

I think we have another definitional problem here, HA. "Complete(ly) unbiased description(s) of meteorology-climatology science practices" do not get written by people who write: "as a historical science, the study of climate change will always involve revisiting old data, correcting, modeling, and revising our picture of the climatic past. This does not mean we don't know anything. (We do.) And it also does not mean that climate data or climate models might turn out to be wildly wrong. (They won't.)"



 Yesterday while driving I heard a report of strong auto sales of both domestic vehicles (particularly trucks) and BMW and Audi. These would show up in the Daily Treasury Report as revenues in categories such as customs duties and excise taxes. Today I went looking for them, and sure enough the recent data is positive.

I tend to think of those categories as a good upstream surrogate for discretionary purchases. There are excise taxes on auto sales, gasoline sales and even on tanning salon sales.

In the linked chart, SPX is shown as an historical reference. In my opinion there is not a definitive causal relationship. Historically this had been distorted by the "Cash for Clunkers" program, for example.

But maybe there is a retail recovery.



Does anyone know if there is a Predictive Value to a stock's short interest ratio?

Bill Rafter writes:

Short Interest (SI) is a good area to research. We do a lot of work with it in our shop, and use it in our trading. However, the question you posted was specifically about the SI Ratio, something we consider unworthy of attention with a very few exceptions. If that ratio is all you are going to focus on, we suggest watching a good movie instead.

Many people simply look at the SI Ratio because it is available, say on the Yahoo, Google or Nasdaq websites. The problem is that the ratio is more dependent upon changes in volume than upon changes in SI. Volume is also an area worth your attention, but not in that ratio. We maintain that there are better SI ratios to look at than that one. But to construct them you are going to have to spend some time getting the data, which means not only SI and volume, but also outstanding shares, insider ownership and institutional ownership. Then you will find the profitable relationships, but anticipate considerable work.
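As a hypothetical illustration of the kind of ratio Rafter hints at (one built from SI, outstanding shares, insider and institutional ownership rather than volume), here is a sketch of short interest as a fraction of the free float. The construction and every parameter are assumptions for illustration, not his proprietary measure:

```python
def si_float_ratio(short_interest, shares_outstanding, insider_pct, institutional_pct):
    """Short interest as a fraction of the free float, a hypothetical
    alternative to the days-to-cover SI Ratio. The float here is taken
    as outstanding shares minus insider and institutional holdings."""
    float_shares = shares_outstanding * (1.0 - insider_pct - institutional_pct)
    if float_shares <= 0:
        raise ValueError("no free float")
    return short_interest / float_shares

# 5M shares short, 100M outstanding, 10% insider and 40% institutional ownership
print(round(si_float_ratio(5e6, 100e6, 0.10, 0.40), 2))  # → 0.1
```

Unlike the days-to-cover ratio, a spike in trading volume leaves this measure unchanged, which addresses Rafter's objection directly.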

We have only found the volume contributor to the SI Ratio useful when in a price explosion the volume exceeds the number of shorts. That circumstance suggests that the price explosion (of a high-SI stock) is a result of short covering, which has now been exhausted. Obviously don't buy that stock!

Phil Erlanger is the recognized expert on SI data. His approach was to find stocks that one liked (say on the basis of momentum or whatever) and then look for SI patterns that would enable a greater run-up. We took the opposite approach, looking first for good short interest patterns, and going from there. What we found was that Erlanger's approach is the better of the two if one is taking a cursory look at SI. That's because fully half of the stocks with high SI deserve it: they are headed south. Of the remainder, about half mill around going nowhere. That leaves about a quarter of high-SI stocks overall that benefit positively, a few of which really take off.

Despite the above warnings, we would not purchase a stock without at least making ourselves aware of the SI.



For historical reasons I manually downloaded the Daily Treasury Statement files and dumped them in a folder. Once there we go through our data mining process and extract what we want automatically. Our process could be made completely automatic, but it has not been a big enough inconvenience for us to code it. For virtually all other data our downloading and extraction is completely automatic.

Several weeks ago I noticed a change in the Treasury's website that irregularly makes me click once or twice more each time I download (which is only once daily). It has puzzled me why Treasury would take something that worked perfectly and change it such that it no longer worked perfectly. It has just occurred to me that the new little two-step process would certainly screw up an automated download and extraction procedure. Also of late the data is less and less favorable to a government that may wish to claim everything is rosy.

Am I being paranoid in thinking that there might be a connection?



One wonders if the stooges, the puppets from the centrals will be hauled out to make reassuring comments about the health of the economy and the resonance of the qe's. After all, small people in emerging markets might be hurt and the idea that has the world in its grip will come into play. Trading it from that cynical world view has not been entirely unprofitable the last two days. But it was entirely unprofitable on Monday. However, it often takes a day for the puppets to receive their marching orders.

Rocky Humbert writes: 

I note a Bloomberg news story from this morning that the INVERSE VIX ETF (XIV) had a record inflow of money last week — the largest amount since the ETF started trading in 2010. This tells me that the market has become conditioned to extrapolating the behavior of the past five years.

I believe that among the biggest challenges in investing and running one's models is figuring out when the game has changed (or "ever changing cycles").

I am not making a prediction about when the game will change. But the risk is rising substantially. Conditions precedent for the game changing are (1) "Everyone" is conditioned for the same behavior; (2) High leverage in the system; (3) Rich valuations and/or optimistic assumptions; (4) Subtle changes in monetary conditions and/or other related expectations; (5) A long period of time since things looked really scary. (FWIW, NYSE December margin levels are at records.)

Think back a few years: what were you thinking then? How many people laughed at "Green Shoots"? Why do people believe the bankers now when they didn't back then? What is different? I'll predict that we don't have another financial calamity. But to quote the wisdom of Roseanne Roseannadanna, "If it's not one thing, it's another."

Bill Rafter writes:

For the next shoe to drop you may want to look at my post of last week.

Gary Rogan writes: 

When I said we'll see 5% down, I was using every one of those reasons other than number 4, which I don't understand beyond slightly lower QE. The margin leverage chart is the scariest thing in the world if you are looking for scary things.



 What are the major 3 body markets that orbit around each other in our solar market system and how do their epicyclic orbits relate to each other (in the future)?

Bill Rafter writes: 

I think the most important word in the Chair's sentence is "epicyclic", specifically because it is non-linear. Stocks specifically exhibit non-linear behavior, and seemingly always have. Bonds used to behave very linearly, but now behave similarly to stocks, although contrarily so. We have yet to find the defining characteristics of currency markets, but keep trying, hoping to find useful information relating to other markets. Gold is also a tough one, making one think it is a rigged game. REITS behave like a hybrid equity-debt vehicle. We tend to think of REITS as a free market version of the variable annuity (but without the huge vig).

Shane James writes: 

Arguably, and addressing prediction, the big 3 change regularly.

Simple stuff like listing the biggest moves in X time periods is a useful, elementary starting point for cross-market prediction.

Anton Johnson writes: 

Sadly, our system is unstable with the sub-stellar central mass consisting of the collective Central Banks. Orbiting, and sometimes consumed by, the central mass are the various financial instruments periodically switching in relative predominance as they accrete/disperse assets due to the actions of the brown dwarf.



December 30, 2013

 My apologies in advance for a seemingly strange piece of research.

Recently a Speclister posted a link to a site which implied considerable success in trading various markets on the basis of solar and lunar events. We have all seen these for decades. There are lots of charts that seemingly draw a connection between full and new moons, sunspots, geomagnetic radiation and of course the financial markets. I myself found nothing in the way of serious data that would make me want to trade on that basis, but the site exuded so much confidence that it was hard to dismiss out of hand.

The site, like many in the genre, spends a lot of space arguing WHY. You know: humans are mostly water, and Earth's tides are controlled by solar and lunar gravitation, so why not humans? Personally I don't care what the reason is, as long as a reason exists and the data is non-random. In this case I am going to assume that a reason exists but is not discernible. So the answer was for me to take a look at the data with our research tools.

My period of study was from January 1, 2005 through December 27, 2013. That could always be enlarged if some worthwhile results were forthcoming. As a benchmark equity asset I used SPY, as it included dividend yield and was a real and tradable market.

Over the period SPY achieved a 7.4 percent compound annual rate of return (CAROR) while experiencing a 60.83 percent maximum drawdown (DD). Thus the return to risk ratio (R/R) was 0.12. Full statistics and a chart are here.
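For readers wanting to reproduce the statistics, here is a guess at how CAROR, maximum drawdown, and the return-to-risk ratio might be computed; the exact conventions used in the post (compounding basis, drawdown on closes) are assumptions:

```python
def caror_and_drawdown(prices, periods_per_year=252):
    """Compound annual rate of return, maximum peak-to-trough drawdown,
    and their ratio. Conventions are assumed, not taken from the post."""
    years = (len(prices) - 1) / periods_per_year
    caror = (prices[-1] / prices[0]) ** (1.0 / years) - 1.0
    peak, max_dd = prices[0], 0.0
    for p in prices:
        peak = max(peak, p)                      # running high-water mark
        max_dd = max(max_dd, (peak - p) / peak)  # worst decline from the peak
    return caror, max_dd, caror / max_dd if max_dd else float("inf")

# Toy series: doubles over 2 "years" with a 25% interim drawdown
prices = [100, 120, 90, 150, 200]
caror, dd, rr = caror_and_drawdown(prices, periods_per_year=2)
print(round(caror, 3), round(dd, 3), round(rr, 2))  # → 0.414 0.25 1.66
```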

The site made some strong claims about the value of the full and new moon dates, so my first look was there. (To look at solar influences I would need a significant number of cycles, and they are approximately 11 years each.) First I bracketed the half-month on either side of the full moon, and likewise the new moon. With regard to the full moon, you would buy SPY at the first quarter and hold for the half-month through the full moon, selling at the third quarter. When you were out of the market you were in cash, earning nothing. Thus the following constitute programs in which you are invested for only half the possible time:

Full Moon Bracketing:           2.1% CAROR,    36% DD,     0.06 R/R
New Moon Bracketing:        5.19% CAROR,    47.98% DD,     0.11 R/R

This agreed with the site in that longs would favor the new moon. But if the full and new events corresponded to troughs and peaks, we had to look at equity growth between the events. This also constituted investing for only half the possible time.

New to Full (waxing):        9.82% CAROR,    46.08% DD,     0.21 R/R
Full to New (waning):        -2.2% CAROR,    41.17% DD,     -0.05 R/R

These results would suggest that equity prices tend to trough at the full moon and peak at the new moon, exactly as conveyed by the website.
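A minimal sketch of the waxing/waning segmentation described above, assuming one already has a list of new and full moon dates (the dates and prices below are illustrative; the real study presumably used actual ephemeris dates and SPY closes):

```python
from datetime import date

def segment_growth(prices, events):
    """Compound the growth of a price series (a dict date -> close) over
    alternating event-to-event segments. `events` is a chronological list
    of (date, phase) tuples, phase in {"new", "full"}. Returns the total
    growth factor of holding only new->full (waxing) and only full->new
    (waning) segments."""
    waxing = waning = 1.0
    for (d0, phase), (d1, _) in zip(events, events[1:]):
        growth = prices[d1] / prices[d0]
        if phase == "new":
            waxing *= growth   # held from new moon to full moon
        else:
            waning *= growth   # held from full moon to new moon
    return waxing, waning

# Invented moon dates and closes for illustration only
prices = {date(2013, 1, 11): 100.0, date(2013, 1, 27): 104.0,
          date(2013, 2, 10): 102.0, date(2013, 2, 25): 107.0}
events = [(date(2013, 1, 11), "new"), (date(2013, 1, 27), "full"),
          (date(2013, 2, 10), "new"), (date(2013, 2, 25), "full")]
waxing, waning = segment_growth(prices, events)
print(round(waxing, 4), round(waning, 4))
```

Annualizing the growth factors and computing drawdowns within each segment would then yield statistics in the same form as the tables above.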

Links to stats: 






Steve Ellison writes:

To what does the t score of 3.46 refer, and how significant is it given multiple comparisons (you tested 4 subsets of data, and one looked pretty good)?



 My first experience with "serious" fraud was in grammar school. I had advance knowledge and just sat and watched the whole thing come off.

I was either in Fifth or Sixth Grade. My next door neighbor Paul was two years older, and Harry further up the block was in high school. Harry had one of those dream jobs: he worked as an usher at the local theatre for the Saturday kid matinees. It was a dream job because he got to see all the movies for free, and got paid to boot.

This theatre occasionally had giveaways to boost the audience. Well this one time they announced they were giving away a free bicycle (a real stunner) to someone in attendance. All you had to do was be in the theatre with a paid ticket. Of course they announced it for weeks and come the appointed Saturday, the place was packed. Kids were even sitting in the aisles as there were no serious fire regulations. There must have been 400 kids there, every one of which dreamed he was going to win that bike.

I sat next to Paul, who told me in advance he was going to win. After the first show (the Saturday kid event was always a double feature), the manager got up on stage with Harry the usher holding the giant bowl with all the tickets. Harry draws the winning ticket and gives it to the manager, who reads out the number. Paul jumps up shouting "I won, I won". The next day Harry was riding around the neighborhood on his new bike. I was too young to inquire about the quid pro quo between Paul and Harry, or even perhaps between the manager and Harry. And of course I was in awe.

In many ways it was beauty in its execution. Not unlike the time the former First Lady of Arkansas used the futures markets to bag a payoff. But that's another story. Here's what made me think of the bicycle giveaway long ago:

Today I saw a news item that if no one wins the current $600+ million lottery, and perhaps the next one as well, then the jackpot could be $1 billion. With this being the Christmas season, there could not be a better time to ensure no one wins, running the jackpot up to all-time highs. All those people hoping and praying to hit the big one. All the promoters have to do, for several weeks running, is look into their computers and draw numbers no one purchased.

Now I'm not suggesting that they give the winning ticket to one of their buddies, like Harry and Paul arranged with the bicycle. But this could all be done with the goal of redistribution of wealth from those who purchase lotto tickets to the tax coffers of the states, who of course get most of the winnings. The individual winner himself does not matter, he's just window dressing.

Just thinking out loud.



 We have gone almost a year with the two percent additional payroll tax reinstated. The results are worse than expected.

What would have been expected is an increase in employment, but not enough to offset the effective tax increase. The reason you would expect an employment increase is because Americans are a resilient lot and get bored with sitting around. Sooner or later they find a way to get back to work. That is not what we have: The growth in payroll taxes is now negative, indicating a net loss in payrolls. The data is effectively "cap-weighted" so it might mean a loss in the number of jobs or switching to lower pay, as when a nuclear engineer becomes a sanitation engineer.

Philosophically, tax rate increases for individuals generate increases in tax revenue for governments. This is exactly what is expected by government, but the problem is that government does not know where to stop. They expect further rate increases to result in commensurate increases in revenue. But government neglects that individuals have a say in this: the latter can vote with their feet by leaving the workforce. America is now on the wrong side of the Laffer Curve.

Additional amounts taxed (N.B. the PPACA has been ruled by the Supremes as a tax) will have a continued negative effect.

A fellow Spec-Lister suggested I look for structural/secular changes in the employment data. My initial thought was that humans are skilled at obtaining freebies, and the disability payments coming from Social Security seemed a perfect target. Consider, faced with a lay-off, why not see a doctor, claim clinical depression and get yourself on disability? The long-term advantage of doing so may mean that you never have to work again, which would not be the case with unemployment benefits. But is my conspiratorial claim borne out by the data?
The short answer is "No". However there is more, should you feel inclined.

Firstly, which data does one use? The Social Security Administration issues a report showing the number of disability claimants and the average claim; multiply the two and you get the total value of disability benefits paid. Alternatively, you can go to the Treasury website and see their ledger of what was actually paid. Although the two sources (Soc. Sec. and Treasury) mimic one another, they are decidedly not identical. Of specific concern is that they differ by an odd order of magnitude, one which is not even relatively constant. So one might well ask which source to trust.

Chart of Disability Benefits Paid

Chart of the 12-month rates of change of benefits paid

My experience suggests that the Social Security data looks as though it has been manipulated or "cleaned up". The Treasury data looks as though it contains a degree of static, which is more realistic. My guess would be that the Treasury data is "raw", while the Social Security data is "adjusted". In general my personal preference is for raw data if I cannot reverse engineer the adjustments. Both data sources indicate a relative decline in the yearly rate of change, decidedly counter to my pre-supposed conspiracy claim.

If you look a little deeper into the Treasury data you find a profound cyclic influence:

Cyclic disability benefits

This was a surprise. I did not assume the claimant had much control over the process, but the data indicates that summer is a key time to receive benefits. Oh, the joy of it all. [Skeptics should note that the cyclicality is not related to the number of days in the various months.] The cyclicality also suggests that disabled persons do return to the workplace. (I would have lost that bet.)

What is the current trend?

trend slope in disability benefits paid

For whatever reason, the drift of disability benefits is not increasing. One might optimistically believe that because conditions are not worsening, they must get better. Such logic could cost an investor a lot of his wealth.

Rocky Humbert replies: 

There was a Washington Post story yesterday that adds some color to this discussion. It notes a fact: 1.3 Million workers will have their "emergency" unemployment benefits end on December 28, unless Congress renews this aid program. This is a big number. And I was unaware of this fact. And as I consider myself somewhat informed about stuff, I'd guess relatively few market participants are aware of this fact either.

The writer then looks at the probability that a lot of these folks will file for disability claims. The author cites a study (which I have not read) which suggests that they won't. I have no opinion except that people respond to incentives. And some number of these 1.3 Million will surely find their way back into the reported labor force. This will likely distort the tax revenue, payroll, and other data to some degree in the first months of 2014.

I am raising this point not because I have any view about the currently big number of people receiving disability or what it means. (That's HR Rogan's job.) Rather, I am raising this, because the employment and tax numbers will, I believe, look really odd in January and February. (HR=hand wringer)

The story can be found here:  "Where Will Workers Go After Their Jobless Benefits Expire? Probably Not on Disability"

Jeff Rollert adds: 

Just to add another vector to the discussion, I would also argue that, since 2000 (the benchmark year in the article), the entry into the global labor pool of hundreds of millions of smart, motivated Chinese workers (not to mention Vietnamese, etc) has had a significant impact.

From the MIT Technology Review: "How Technology Is Destroying Jobs":

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson's contention really is. ­Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who's worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee's claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the "great decoupling." And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.



Some preliminary thoughts on the running median 2, 3, 4, 1, 7, 8, 9, 3.

A moving median of the first 5 is 3, of the next 5 is 4, of the next 5 is 7, and of the next 5 is 7. It's a good indicator of trend. First recommended to me 53 years ago by Fred Mosteller, chairman of Harvard's first statistics department.

It is more stable than the moving average, as outliers are removed from the sample. It is easy to compute quickly for small window lengths like 5, or even 100, by repeated sorts. For longer windows, you can form two groups: those below the median and those above. As a new number comes in, you place it in one of the two groups according to whether it is higher or lower, and take away the oldest number. Then adjust to make the two groups equal again. It is not used as much as the moving average, so it shouldn't be hurt by front running, or by spikes when crossovers occur. It has a defined distribution even when the underlying distribution has inordinate extreme values, as frequently occurs with Cauchy or similar distributions with infinite variance.

It's probably a good thing to use when using nearest neighbors as predictors, i.e. using the median and running median to compute your predictors. It deserves testing in real life markets for real life applications.
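The "repeated sorts" scheme above can be sketched as follows; for the example series with a window of 5, the running medians come out 3, 4, 7 and 7:

```python
from bisect import insort, bisect_left

def running_median(series, window):
    """Sliding-window median maintained with a sorted list, in the spirit
    of the repeated-sorts / two-group method described above: insert the
    new value in order, delete the oldest, read the middle element(s)."""
    sorted_win, out = [], []
    for i, x in enumerate(series):
        insort(sorted_win, x)                      # place new value in order
        if i >= window:                            # drop the value leaving the window
            sorted_win.pop(bisect_left(sorted_win, series[i - window]))
        if i >= window - 1:
            mid = window // 2
            if window % 2:                         # odd window: middle element
                out.append(sorted_win[mid])
            else:                                  # even window: mean of the two middle
                out.append((sorted_win[mid - 1] + sorted_win[mid]) / 2)
    return out

print(running_median([2, 3, 4, 1, 7, 8, 9, 3], 5))  # → [3, 4, 7, 7]
```

Each step costs O(window) for the insert and delete, which is ample for windows of 5 to a few hundred; heap-based structures only pay off at much larger sizes.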

Ralph Vince writes:

It is the indicator of "expectation," as evidenced by human behavior itself, and not the probability-weighted mean.

Bill Rafter adds: 

Moving medians have some distinct advantages.

They represent real values that occur. For example, taking the average of 1, 2 and 5 gives you 2.67, which never occurred, whereas the median 2 did occur. Continuing with the same series, should subsequent values in the series be less than 5, the value 5 will never occur as a moving median. Hence, the moving median eliminates outliers.

One of my appliances has three thermometers to measure temperature. The value displayed is the median (and hence a series of moving medians). Should one of the thermometers be broken, or distorted by being in a particularly hot or cold spot, the median will still give me the best estimate. This elimination of outliers is very useful.

Should you have data whose importance relies upon only crediting occurring values and need to eliminate outliers, then you should test moving medians. We ourselves had experimented with them regarding price series and written extensively about them, but do not use them in our current work. Our reason is that we consider the outliers in a price series to be particularly important.

Kim Zussman adds:

The following is a plot of the ratio of the SP500 10-week moving average to its 10-week moving median for the recent 5 years (SP500 weekly close data).
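A sketch of how such a ratio series could be computed from weekly closes (the data below is made up; Zussman presumably used actual SP500 weekly closes):

```python
import statistics

def ma_over_median(closes, window=10):
    """Ratio of the rolling mean to the rolling median over `window`
    periods. When the ratio departs from 1, outliers are pulling the
    mean away from the median."""
    out = []
    for i in range(window, len(closes) + 1):
        win = closes[i - window:i]
        out.append(statistics.mean(win) / statistics.median(win))
    return out

# A single upside outlier (130) pushes the mean above the median
closes = [100, 101, 102, 101, 103, 102, 104, 103, 105, 130, 104, 105]
print([round(r, 3) for r in ma_over_median(closes)])
```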



 For those of you interested in jobs data, this chart might be of interest.

The red line is very important, showing a 2 percent increase. Ceteris paribus, 2013 payroll tax receipts should average 2 percent above 2012. However as time has progressed, the government has received less and less of this increase, and current receipts growth is running negative to the prior year despite the increase in rate. This is the Laffer Curve at work.

As of January 2014 the YOY growth will use as its base the tax-increased 2013 data, which should be interesting.



I have a model which at its root is theoretically (but not operationally) similar to the Fed Model, and its job is to tell me where to allocate assets among equities, debt, gold and/or REITS. I also include a few other items as 'tracer bullets'. At this time the allocation model would have most of its money in equities, and importantly no money in REITS. However when I look at my list of 30 stocks to buy, 23 of them are REITS and 2 are utilities. So if I have to rotate out of something, my only choice is cash.

Could this suggest something ominous?



 It's funny that the jobs report is not compiled yet. The Labor Dept. must have the data they use, as that report consists of happenings through 9/12. We use Dept. of Treasury as our source and we have that information through 9/27. The Treasury data is generated electronically and we might get the 9/30 report later today unless they intervene.

Bottom Line: The YOY growth in payroll tax receipts (seasonally adjusted), which is our substitute for employment, is at the lowest level of the year, whether you mean calendar year or adjusted fiscal year. But of course, you might never see that report.

Let's say you were in charge of the Administration of a country in a similar circumstance. If you knew the jobs data was fantastic, would you release it? A good economic report might be taken to mean that the country was not as fragile as previously thought, and could therefore withstand a shutdown for a while. On the other hand, if the jobs data were bad, it might mean the country was very fragile, and that the Administration should compromise quickly, effectively forcing your hand. And of course in the latter scenario you should be embarrassed by the fact that nothing you had done economically for 5 years had been successful. Your best option might be to wait until you needed a trump card, and then pull it out of the hat. Plus (if you wanted) you would have additional time to massage the data.



Attached is a weekly chart of CSI300 index (representing 300 large stocks on Shanghai and Shenzhen exchange) from January 2007 to now.

Would anyone call an upcoming bull market from this?

Perhaps the chart is not too obvious yet. Fundamentally, it is true that many foresee a slowdown in GDP growth in the coming years. But what is important now is that people can anticipate some structurally healthy growth. And this is very different from the past 5 years when the growth seemed high but the market mainly saw it as unhealthy and stayed essentially hopeless. The new government seems to deliver a lot more confidence to the market with a new direction for the economy.

Any thoughts?

Bill Rafter writes: 

One suggestion I have is that you ask yourself two questions:

1. Consider the participants in that market: what time frame do they typically observe in terms of long-term perspective (i.e., their lookback period)?

2. How frequently do they watch the market?

The reason to care what others do is because they are your competition. The money you make, you get from them. Thus, know them!

Point #1 may also be related to taxation. Is there a period of time in China such that if a position is held that long it qualifies for a tax break? In the U.S. that means it qualifies as a "long term capital gain" with a significantly reduced amount going to the confiscatory government.

If there is no such period, then it's nice to see history going back to 2007, but it is irrelevant to what is happening now. However it is good to have history as you can easily see with a visual how a market behaves with the signal process you use. You should statistically test, of course, but a quick look is valuable. (Tukey said so, and he is a god in this area.)

Thus your window of observation for decision making (as opposed to history) should not go back more than perhaps 50 percent beyond the period identified in point #1. In our case (in the U.S. with equities), we do not look back farther than a year and a half, and frequently as little as four days.

Point #2 is the shorter end. If everyone watches the market every day, then by limiting your snapshots to weekly, you are discarding valuable information. Ask yourself, "Why would you ever want to eliminate valuable data?" You would not do that with a neural net, so why do it with real intelligence? Some would posit that weekly information (data or charts) eliminates some noise. However we would argue (and have demonstrated) that it is impossible to separate signal from noise. Specifically I would suggest that if someone gave me what they considered noise, I could find some signal within. It may not be the best example of signal, but it's in there.

Leo Jia adds: 

Thank you very much, Bill, for the precious advice.

There are a couple reasons for me to have attached the weekly chart starting from 2007.

1. I look for a possible multi-year bull market, and for that purpose the trend looks clearer to me on the weekly chart.

2. One key reason for the past few years' laggard market, aside from those fundamental reasons I outlined, is the bull-run and crash in 2007-2008. The bull-run was solely due to the government's reform initiative in the stock market, which sought to make all shares (government shares and floating shares) equal. The crash then was mainly due to market suspicion that the resulting floatable government shares would subsequently flood the market. Now, 5 years on, the flooding of the government shares, if it happened at all, is likely to have settled down.

To answer your two questions:

1. There is no tax incentive in China encouraging people to hold longer. Holding periods are generally much shorter. They can be as short as a few months for funds, and as short as a few days for individuals.

2. Most participants watch the market every day.

Perhaps one thing different about China's market is that large market movements are all initiated by government policies. Market enthusiasm is summoned only when a government direction can be imagined as positive.

I am not a government analyst, but traditionally, each government in its 10 years tended to create at least one big upward move in the market. Looking at this government, its initial months already showed signs of its focus on finance (along with new direction on economy). The recent launch of bond futures is one such key move.




 Voyager 1, launched back in 1977, has become the first man-made object to pass into the unknown vastness of interstellar space. News Report.

I have a serious challenge for you. Name a single man-made device that has worked continuously for 40+ years without any human physical intervention. The winner will receive Rocky's usual prize: A unique gift of dubious monetary value.

Chris Cooper has a go at it: 

There must be any number of vintage self-winding watches that still work. If it must be wound, does that still match the spirit of your inquiry? Of course, there are many watches and clocks which must be wound by hand that are still operating. You can find some self-winding watches for sale on eBay.

Kim Zussman replies:

I am man-made and have worked continuously for well over 40 years (though currently half time for the government).

Bill Rafter adds:

Without doing any looking, there are lots of low-tech human creations that have survived the test of time. Many dams have performed their functions for decades and even centuries. I'm not speaking of hydroelectric dams, but simple river control devices. The Marib dam in Yemen is still there (after two millennia) and would be working if there was enough rainfall. Many artificial harbors also have exceptional longevity. Some Roman harbor constructions are still operational; the Romans having been expert in concrete manufacture. And don't forget Roman roads.

In more recent times, I am certain there is some electrical cable that is still functioning from half a century ago, if only to ground lightning rods.



There is an issue about the employment numbers that may not be getting proper attention - Section 530 and its interaction with state unemployment benefits. Section 530 of the Revenue Act of 1978 was the Carter Administration's gift to the farm belt. Under Section 530 an individual will not be classified as an employee if the alleged employer has a reasonable basis for treating that person as an independent contractor. "Reasonable basis" can be proved by:

(1) "Judicial precedent, published rulings, or technical advice with respect to the taxpayer, or a letter ruling to the taxpayer; (2) "A past IRS audit of the taxpayer in which there was no assessment attributable to the treatment (for employment tax purposes) of the individuals holding positions substantially similar to the position held by this individual"; or (3) "Long-standing recognized practice of a significant segment of the industry in which the individual was engaged."

The IRS has a "whistle-blower" form that individuals can file to challenge their classification - the SS-8. But - and here is the kicker - on the form itself the IRS warns the taxpayer that "A Form SS-8 should not be filed for supplemental wage issues." What this means, in real terms, is that people who get "fired" from their independent contractor jobs cannot use the IRS to bully state unemployment agencies into paying them benefits.

Since the states all have incentives to cut down on the cash drain from unemployment benefits, even the deep blue ones like California do not make much effort to reclassify contractors as employees once the issue gets to unemployment benefits. The result is that "the workforce" has more and more people in it who are not now and never will be classified as "employees". "Employment" itself becomes less and less of an indicator of actual incomes because the payroll numbers cannot reflect the contractors' fortunes (both good and bad).

Bill Rafter writes: 

For the "percent unemployed" number, reclassification as to who is or is not an employee may have an impact.  However this is the beauty of simply looking at the payroll tax data, as all persons (traditional employees and individual contractors) are required to pay.

Victor Niederhoffer writes: 

But with all the seasonal adjustments and other things that enter the employment numbers, how can payroll numbers not using the census seasonal adjustments be meaningfully compared?

Bill Rafter elaborates:

It is the seasonal adjustments by the officials that we distrust. We think the adjustments are a fudge factor to be used by an administration eager to paint a picture. I don't know who is responsible (BLS or Census), but their adjustments historically have made little sense. BTW, the Fed also could use someone better at seasonal adjustment, although their number jockeys are better than whoever plays with the payroll data.

A problem is (a) do you want the truth, or (b) do you want to make money? If you are decent at it, doing your own work will get you the truth. However if the world follows the official releases as gospel, you could be right and broke. I have been in that predicament a few times.



Is an asset up or down? How do you decide?

For a somewhat offbeat reason we need an unwavering determination as to whether or not a particular asset was up or down. We do not care how much. Obviously if something is up or down by say 2 percent, there is no argument. The problem is if the data is not definitive. After all, you occasionally have days when the Dow or S&P are one way and the Nasdaq is the other. So which one is right?

The standard is, of course, the close. That would be right in many ways. Most volume occurs at or near the close, and margin calls are determined by the close. But many [technicians] use midrange, or an average of the High, Low and Close. Institutions have been known to care about the volume-weighted average price or VWAP. A priori we thought VWAP would be best for our purposes. But we were wrong.

Ours was a very limited study. We only cared about 4 assets (all ETFs): SPY, IEF, GLD, IYR. And our definition of right vs. wrong is the amount of flip-flopping during a trend. That is, how often is it wrong? We realize this is all very subjective, but we are not writing a thesis here - we just want the quick and dirty facts. The period we considered: 2005 through the present.

It turns out that VWAP is not best. It gives a lot of false signals. This was good news for us as we will not have to acquire VWAP data.
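The flip-flop comparison described above can be made concrete. Below is a minimal sketch of one way to score each price definition: take the daily sign of change and count how often it contradicts the prevailing trend. The bars are invented, not the actual ETF data from the study, and the scoring rule is just one plausible reading of "amount of flip-flopping".

```python
# Toy flip-flop test: for each price definition (close, midrange, HLC
# average, VWAP) count daily signs of change that contradict an assumed
# uptrend. Bars below are fabricated for illustration.

def signs(series):
    return [1 if b > a else -1 if b < a else 0
            for a, b in zip(series, series[1:])]

def flip_flops(series, trend=+1):
    """Count days whose sign of change contradicts the assumed trend."""
    return sum(1 for s in signs(series) if s == -trend)

# (high, low, close, vwap) for a made-up rising market
bars = [(101, 99, 100, 100.2), (102, 100, 101, 100.6),
        (103, 100, 102, 101.0), (104, 101, 101, 102.8),
        (105, 102, 104, 102.9), (106, 103, 105, 104.5)]

close    = [b[2] for b in bars]
midrange = [(b[0] + b[1]) / 2 for b in bars]
hlc      = [(b[0] + b[1] + b[2]) / 3 for b in bars]
vwap     = [b[3] for b in bars]

for name, series in [("close", close), ("midrange", midrange),
                     ("hlc", hlc), ("vwap", vwap)]:
    print(name, flip_flops(series))
```

Running the same scoring over a real trend, per asset and per definition, is the "quick and dirty" comparison the text describes.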

That's all we really cared about. However, the fact that institutions take care in getting VWAP price executions, and the fact that VWAP (at least in this limited study) gives false information, suggests that someone (a flexion, perhaps) has something at stake in effecting the false information.



This is a visual representation of non-payroll tax receipts by Uncle Sam. Now I fully know that corporations and individuals are incentivized to find accountants who will keep these numbers as low as possible, but that tendency does not change over time.



 This article shows the results of an experiment on E. coli bacteria detailing the survival or death of the bacteria in response to the way it handles introduced exogenous stimuli. The upshot is that small changes in exogenous conditions can lead to substantial differences in outcomes. Surely a rich field for market-related phenomena, looking at how small changes in one input (say rates) may lead to large movement in other markets (say currencies) when the dependent variable is already under some stress.

Pitt T. Maner III writes: 

This is a really interesting field.

It looks like bacteria have been "hedging their bets" for quite some time. And they have a type of "memory" that influences their response to current environmental conditions. On a larger scale it is interesting to note what happens to the ecology of a system when a "keystone species" is removed. The field of "synthetic ecology/biology" looks to have important findings for a wide range of fields and the bacterial algorithms already developed are being used for engineering problems.

1. "Bet-hedging in stochastically switching environments":

"We investigate the evolution of bet-hedging in a population that experiences a stochastically switching environment by means of adaptive dynamics. The aim is to extend known results to the situation at hand, and to deepen the understanding of the range of validity of these results. We find three different types of evolutionarily stable strategies (ESSs) depending on the frequency at which the environment changes: for a rapid change, a monomorphic phenotype adapted to the mean environment; for an intermediate range, a bimorphic bet-hedging phenotype; for slowly changing environments, a monomorphic phenotype adapted to the current environment. While the last result is only obtained by means of heuristic arguments and simulations, the first two results are based on the analysis of Lyapunov exponents for stochastically switching systems."

2. "Memory in Microbes: Quantifying History-Dependent Behavior in a Bacterium":

"Your average bacterium is unlikely to recite π to 15 places or compose a symphony. Yet evidence is mounting that these 'simple' cells contain complex control circuitry capable of generating multi-stable behaviors and other complex dynamics that have been conceptually linked to memory in other systems. And though few would call this phenomenon memory in the 'human' sense, it has long been known that bacterial cells that have experienced different environmental histories may respond differently to current conditions [1]–[3]. Though some of these history-dependent behavioral differences may be physically necessary consequences of the prior history, and thus some might argue insignificant, other behavioral differences may be controllable and therefore selectable and even fitness enhancing manifestations of memory."

3. The work of Professor Robert T. Paine and the concept of the "keystone species" where an organism has a big effect relative to its abundance:

"It was a ritual that began in 1963, on an 8-metre stretch of shore in Makah Bay, Washington. The bay's rocky intertidal zone normally hosts a thriving community of mussels, barnacles, limpets, anemones and algae. But it changed completely after Paine banished the starfish. The barnacles that the sea star (Pisaster ochraceus) usually ate advanced through the predator-free zone, and were later replaced by mussels. These invaders crowded out the algae and limpets, which fled for less competitive pastures. Within a year, the total number of species had halved: a diverse tidal wonderland became a black monoculture of mussels1."

anonymous adds: 

 OK, what about Slime Molds (particularly, Dictyostelium discoideum). It has the absolutely stunning biological characteristic that it spends much of its life as thousands of individual cells and other times as a single entity.

When times are good for Dictyostelium discoideum its 'cells' wander off and enjoy themselves. However, in less hospitable environments the 'swarm' of cells coalesces and forms a single entity.

Apparently the cells emit acrasin (cyclic AMP, or cAMP), which carries information useful to other cells.

When things are starting to look tough the cells pump out increasing amounts of cyclic AMP and begin to cluster… Other cells follow these trails, and the mass grows toward a completed whole.

Now, I wonder about the stock market. During the regular upward movements most of the components are doing their own thing, following their oscillations generally higher…. When 'it' hits the fan, the correlations between the stocks increase rapidly to 1.0 and they form a single bearish, growling entity.

Now, without pushing the analogy too far, I wonder if stocks 'transmit' statistical information (AMP, to follow the analogy) to each other (in this context they would not so much transmit as 'exhibit' some form of common statistical behaviour) that forces the behaviour of component stocks into a more correlated state.

Testing possibilities are legion.
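One of those legion tests can be sketched directly: track the average pairwise correlation of component returns and see whether it jumps toward 1.0 in stressed regimes. The sketch below uses simulated returns (a calm regime of purely idiosyncratic moves versus a regime dominated by a shared factor), so no empirical conclusion is implied; it only shows the measurement.

```python
# Sketch: average pairwise correlation as a "common behaviour" gauge.
# Returns are simulated; the panic regime adds a shared factor that
# drives the cross-correlations toward 1.0.

import numpy as np

def avg_pairwise_corr(returns):
    """returns: (n_days, n_stocks) array; mean of off-diagonal correlations."""
    c = np.corrcoef(returns, rowvar=False)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
calm = rng.normal(0, 0.01, size=(60, 10))               # idiosyncratic only
common = rng.normal(0, 0.02, size=(60, 1))              # shared factor
panic = 0.2 * rng.normal(0, 0.01, size=(60, 10)) + common

print(round(avg_pairwise_corr(calm), 2))
print(round(avg_pairwise_corr(panic), 2))
```

Computed over a rolling window on real component returns, a sharp rise in this single number is the kind of clustering "warning sign" discussed later in the thread.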

Gary Rogan writes: 

My general objections to giving some purpose to the market have to do with incentives, or more precisely lack thereof to do anything in particular.

I read a whole chapter of a book on a slime mold presented as an altruism study. The reason it was presented like that is that when the individual slime mold cells cooperate, only the lucky few that join the growing "mushroom" at the right time get to propagate, because they get to form spores only at a particular stage of development of the hastily arranged colony. Nevertheless, when presented with a choice of dying for sure or maybe propagating (and the cells only cooperate when they are close to death) they choose to cooperate and propagate. There is also some amount of deception involved when the cells jockey for position, but not a lot, since any particular placement is hard to achieve.

What is the equivalent reason for stocks to cooperate?

Bill Rafter writes: 

Should what you say about stocks transmitting statistical information occur, it would mean a relative decline of idiosyncratic volatility. That is something we have studied, and found that when the going gets tough, the idiosyncratic vol grows faster than the market's vol. There are some other measures of "group think" that are good indicators of both the broad markets and individual assets.
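For readers unfamiliar with the term, idiosyncratic volatility is usually measured as the volatility left over after removing the market's influence. A minimal sketch of that measurement, on simulated data (the beta of 1.2 and noise level are invented):

```python
# Sketch: idiosyncratic volatility = standard deviation of the residuals
# after regressing a stock's returns on the market's. Simulated data.

import numpy as np

def idio_vol(stock, market):
    """Residual volatility of stock returns after removing market beta."""
    beta = np.cov(stock, market, ddof=0)[0, 1] / np.var(market)
    alpha = stock.mean() - beta * market.mean()
    resid = stock - (alpha + beta * market)
    return resid.std()

rng = np.random.default_rng(1)
market = rng.normal(0, 0.01, 250)
stock = 1.2 * market + rng.normal(0, 0.015, 250)  # beta 1.2 plus noise

print(round(idio_vol(stock, market), 4))  # close to the 0.015 injected
```

Tracking this quantity against the market's own volatility is one way to reproduce the comparison described in the paragraph above.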

I would posit that stocks do not transmit info, but their owners do. Consider the case of futures in which one market takes such a hit as to require significant margin calls. Human nature being what it is, the public sells its winners to finance its losers, and non-related markets dive along with the primary.



 I heard there is a new open source Python library, 'PySEC', that allows easy access to all of the SEC's filings.

This is interesting primarily because we are in our 11th week of programming to do essentially what this guy says he has done. Our goal is to glean all of the SEC submissions without human intervention. Many of the commercial data suppliers use the "thousand scribes" method in which they hire a thousand people in a developing nation to manually record and categorize data. And those commercial suppliers charge huge fees for that suspect data.

Does the Python programmer really have something? Have our 11 weeks (to date) been a fruitless exercise?

Prior to 2010 the SEC required that quarterly and annual report submissions be posted on the web. However, there are all manner of idiosyncratic ways in which that information can be posted. Most of the submissions can be mined by a computer, but the fact that we are still programming after 11 weeks suggests it isn't simple.

The vast majority of files are text files. However, that does not make mining easy, as labeling of the data is not consistent. Many data items within a given 10-Q may be labeled "total assets", perhaps one for each subsidiary. Total liabilities are frequently called something else, or not labeled at all. Then in 2010 it was required that the files be submitted in HTML. Then that requirement was changed to XML, but HTML appears to have survived. Within submissions we occasionally see an extraneous dingbat dropped into a label, which screws up the mining operation. There is only one submission that has completely stymied us - where the company presented their financial results as an attached GIF file.
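A toy illustration of the labeling problem: scan a filing's text for candidate labels and the numbers beside them. The label list and regex here are purely illustrative; real 10-Qs are far messier, which is exactly the point of the paragraph above.

```python
# Toy label miner: find (label, value) pairs in filing text. The labels,
# regex, and sample text are invented to show why inconsistent labeling
# (duplicates per subsidiary, renamed items) makes mining hard.

import re

LABELS = ["total assets", "total liabilities", "liabilities, total"]

def extract(text):
    """Return (label, value) pairs for every labeled figure found."""
    found = []
    for label in LABELS:
        pattern = re.escape(label) + r"\D{0,20}?([\d,]+)"
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            found.append((label, int(m.group(1).replace(",", ""))))
    return found

filing = """Total Assets 1,234,000 for the subsidiary.
Total Assets 9,876,000 consolidated.
Liabilities, Total .... 5,555,000"""

print(extract(filing))
```

Note the two "total assets" hits: without further context a naive miner cannot tell the subsidiary figure from the consolidated one, which is one flavor of the inconsistency described above.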

We are highly suspicious of data that is difficult to mine. Maybe extraneous dingbats have been put there deliberately to foil such a search, or maybe the person responsible is merely trying to impress a boss. But it is enough for us to log the difficulties and research subsequent performance of those problematic submitters. That we will provide to the list, but we will most likely have to abstain from providing a list of the miscreants. We would be happy to hear from any lawyers on the list about that one.

The Python program appears to have made some progress in mining the XML submissions from 2010 but it is a tedious one-by-one search. And now that many of the submissions are back in HTML, the miner has much more work to do for the same effort. So we certainly aren't going to give up our work and pay homage to the Python program.



 The quote below is from Round Ireland with a Fridge by Tony Hawks. I needed something relatively fun and mindless to read and it was recommended by a friend. The book is a lot of fun and I never expected to find anything deeper.

I liked the idea of doing all you could to reduce the chances of you, as an old person, saying 'if only'.

… 'If onlys' are inevitable, an inescapable part of life. If only that plane hadn't crashed, if only that volcano hadn't erupted, if only I hadn't stepped in that dogshit. The trick is to be masters of our own destiny in so far as we have control, and take the rest on the chin with a wry smile. But we must go for it. Only a fool would squander the rich opportunities which life affords us."

Shane James writes:

When one reads about successful individuals in business fields this sort of thing always comes up.

Narrowing the field massively to include only financial-market types, one sees, to a man, that they all took massively outsized risks in their early days that just happened to pay off. All the heads of the current brand-name hedge funds fall into this category. (As a quick aside, portfolio managers in these same funds now lose their allocations with drawdowns of circa 3%, which is quite harsh when the 'names' themselves used to swing 50%.) The survivor bias is massive. It is not enough just to have the positive attitude of a Richard Branson, or to seize every opportunity like a Paul Tudor Jones, or to make the massive leveraged bets of a middle-aged Palindrome… One still needs to have figured out an edge.

So, I think one should take every opportunity, never let any experience pass one by in a business sense and only stop when you have developed the edge/ product or skill that no one else has. 



For as long as I can remember I have spoken negatively about the use of volume data as input for trading. I never stated that volume was worthless, only that I could not discover any value added by using volume. I would now like to say mea culpa and illuminate the good and the bad.

In the past I had always researched volume data as something that should be multiplied against either price levels or changes. The logic is that a price move when no one placed bets isn't much of a price move, and price moves with lots of money being wagered are more significant. Several of the notables on this list have published books and my recollection is that they counsel in favor of weighting price action by volume. And although I could use volume data to generate profits, no prior efforts were as profitable as using price data without volume.

That is, trading markets according to price momentum can be profitable, but my efforts at weighting that momentum by volume brought down the profitability. Similar with price volatility; weighting it by volume only dropped my expected trading profitability. Hence my bias against volume.

So why would I keep looking? Well, philosophically speaking, volume data is important simply because it isn't price. That is, it offers a respite from the possible multicollinearity of looking at price data over and over. However, the idea that volume alone could be interesting just does not seem logical. There is a lot of mythology associated with the markets. For example the oft-stated reason for the market going up, "more buyers than sellers", is impossible, as there are never more buyers than sellers. I had come to believe that the recommended use of volume was such a myth.

A little while ago a fellow Speclist poster (for whom I have considerable respect) commented about certain data (not volume) that gave me the idea to see if volume could be used without weighting it against price. That is, could it be used alone, or relatively alone? "Relatively alone" means comparing volume on up days to volume on down days, or volume on up ticks to that on down ticks, but still not being weighted by price activity other than the sign (+/-). Despite years of research, I had not done this before.

When you look at volume data it just looks like static, whether it is just total volume or distinguished by sign. You have to smooth it. For most people that means moving averages. First tip: don't waste your time with them. They are great at smoothing data when that data has a periodicity that is known and predictable, which does not apply to volume. So instead of using moving averages, go a notch or two higher: exponentials or better yet, regression trends. And once you have that, look at the slopes.
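The approach in the last two paragraphs can be sketched as follows: split volume by the sign of the day's price change, fit a least-squares trend to each series over the lookback, and compare the slopes. This is a minimal, hypothetical reading of the tips above, on fabricated data, not the author's actual system.

```python
# Sketch: smooth signed volume with a regression trend (not a moving
# average) and compare up-day vs down-day slopes. Data is fabricated.

import numpy as np

def trend_slope(y):
    """Slope of the least-squares line through y (one point per day)."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

def volume_signal(closes, volumes):
    """+1 if up-day volume is trending up faster than down-day volume."""
    chg = np.diff(closes)
    up_vol = np.where(chg > 0, volumes[1:], 0.0)   # volume on up days
    dn_vol = np.where(chg < 0, volumes[1:], 0.0)   # volume on down days
    return 1 if trend_slope(up_vol) > trend_slope(dn_vol) else -1

rng = np.random.default_rng(2)
closes = np.cumsum(rng.normal(0.1, 1, 130)) + 100  # ~6 months of days
volumes = rng.uniform(1e6, 2e6, 130)
print(volume_signal(closes, volumes))
```

The 130-day window reflects the "6 months or longer" lookback suggested in the second tip; the binary output matches the third tip's use as a buy/sell discrimination tool rather than a ranking.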

Second tip: looking at equities, your lookback periods should be relatively long, by which I mean 6 months or longer. If you have a method of dynamically determining the best lookback period (without using look-ahead bias) then use it, as its profitability will exceed most others.

 Third tip: it appears best used as a discrimination tool (buy the stock vs. sell the stock) rather than as a ranking tool. However we have not yet exhausted all the possibilities of the latter. Timing decisions are less critical than using price momentum, probably because there are many followers of price momentum and the exits and entrances get too crowded at certain times.

Results: trading constituent stocks of the Russell 3000 on the basis of volume over the last 15 years has produced profitability comparable to that of using price momentum. Similar rates of return with better (i.e. less) drawdowns. That's about 8,000 stocks when you include the dead ones to eliminate survivor bias. Results are better with items that go into portfolios. That means individual stocks rather than indices. Unlevered ETFs of equities = yes; levered ETFs = no; ETFs of currencies and volatility = no. Volatility of the asset negatively impacts success, just as with momentum. We have not tried futures (as we do not trade them).

We are now testing the incorporation of volume data into our overall decision process. Will advise how it turns out.

Lesson learned: when something is making a move, the signals tend to be writ large across the landscape. Accordingly it is no surprise that volume data works. The problem was in my preliminary bias against it and the thought (and published material) that it had to be weighted against price. Resolved: just do the research and forget the preliminary bias.




In the old days when I used to trade ag futures, about once a year I would see total unanimity of signals. Grains, meats, sugar, cocoa, etc. would all give the same signal at the same time. For example, they would all give a buy signal, such that I would think to myself, "holy mackerel, these markets are going to explode". But the total unanimity was a fake as the markets would stutter and then all drop. I was never bright enough to conjure up a reason why it happened, but it did.

Recently all of my financial market indicators gave sell signals. Stocks, Bonds, REITs, Gold. And the sell signals were everywhere. Momentum, behavioral economics, actual volatility, actual volatility of implied volatility, and some bizarre stuff you would never think of. You name it and it was bearish. Even the fundamental stuff I watch like surrogates for employment and retail sales are bearish.

We are very mechanized traders, and when I get a bearish signal for equities, I simply look to my overall rankings and see what to switch to. But everything was bearish, so there was really nowhere to relocate. However in looking at the rankings, equities were ranked higher than the competitors (e.g. bonds). So I had no choice but to stay in equities, being very selective and keeping every stock on a very short leash.
I have no idea why unanimity of indicators would negate the indication.

Any ideas? I don't see any flexion hands in this, but maybe others do.

One of the holy grails out there is to know how to forecast future co-movements between different assets. (As if forecasting just one isn't hard enough.) As it all starts to hit the fan, the correlations between all assets approach 1.0 at something much greater than an exponential rate…

My qualitative take on it is that the growth rate of the cross correlations as they inexorably accelerate towards parity approaches a certain velocity 'x' at which point, mathematically, we are as close to the asymptote as the 'system' can stand.

This is the 'going to the cliff and back again' phenomenon that The Palindrome speaks of as a result of 'reflexive' interactions of market participants' expectations with the price and the price's effect upon the market participants' expectations. Arguably this is the ideal time for stabilising 'flexionic' behaviour (as opposed to shenanigans in TY around auctions et al.)

How they might do it, and more importantly time it, is a very deep question. For 'them' to have it figured out I think they would have to have figured out the actual underlying price generating process (what really moves prices).

Now, I guess only Renaissance Technologies' Medallion Fund has gotten anywhere near identifying the answers to that series of non linear questions. The most that one can say at this stage of the game is that the occurrence of substantial downwards co movements of assets tends to cluster (which is a 'warning sign' in itself) and for short periods after this clustering risk assets often make substantial minima. 

Steve Ellison writes: 

My first guess to Mr. Rafter's question is that, like a Higgs boson, unanimity in any market is very volatile, unstable, and unsustainable. As Richard Band wrote in a book about contrarian investing (doesn't everybody profess to be contrarian?), "If everybody is bullish, who is left to buy?"

To test this proposition, my first idea was to find instances in which the Investors' Intelligence survey of advisors had a 4-to-1 preponderance of bulls over bears or vice versa. There have been no such instances in the 2 years I have subscribed. I settled for instances in which either the bullish or bearish percentage was below 20%. There is typically a sizable group of fence-sitters predicting "correction", so the sum of the bullish and bearish advisors is much lower than 100%.

There were 10 recent weekly reports in which the percentage of bearish advisors was less than 20%. I get the reports on Wednesdays, so I tabulated the change in the S&P 500 futures from the Wednesday close to the Wednesday close of the following week.

Report               One week             Net
Date       Close     later      Close     change
3/13/2013  1550.00   3/20/2013  1549.00    -1.00
3/20/2013  1549.00   3/27/2013  1556.75     7.75
3/27/2013  1556.75    4/3/2013  1548.50    -8.25
 4/3/2013  1548.50   4/10/2013  1582.75    34.25
4/24/2013  1574.00    5/1/2013  1577.25     3.25
 5/1/2013  1577.25    5/8/2013  1628.75    51.50
 5/8/2013  1628.75   5/15/2013  1654.25    25.50
5/15/2013  1654.25   5/22/2013  1655.50     1.25
5/22/2013  1655.50   5/29/2013  1647.00    -8.50
5/29/2013  1647.00    6/5/2013  1608.00   -39.00

Average                                     6.68
Standard deviation                         25.31

Considering that the average net change during my subscription has been a gain of 3 points per week, I get a t score of 0.46, which is not only insignificant but has the opposite sign from what my conjecture (that low bearishness is bearish) implied.
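The arithmetic above can be checked with a quick sketch using only the ten net changes tabulated; the baseline drift of 3 points per week is taken from the text:

```python
import math

# The ten weekly net changes from the table above
changes = [-1.00, 7.75, -8.25, 34.25, 3.25, 51.50, 25.50, 1.25, -8.50, -39.00]

n = len(changes)
mean = sum(changes) / n                                   # average net change
sd = math.sqrt(sum((x - mean) ** 2 for x in changes) / (n - 1))  # sample std dev

baseline = 3.0                       # average weekly drift over the subscription
t = (mean - baseline) / (sd / math.sqrt(n))               # one-sample t score
print(round(t, 2))  # 0.46, matching the text
```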



 I first saw the 'dead eyes' look of a poker player/loser when I was 13 or so. Still gives me restless nights and I know I cannot become that way.

My dad took me into the "stockman's bar" in Billings, Montana to impress upon me what degenerate, greedy people turn into.

Probably another sleepless night tonight, tormented by that devil.

Gary Rogan asks: 

What is the real difference between gambling and speculation (if you take drinking out of the equation)? Is it having a theory about the odds being better than even and avoiding ruin along the way?

Tim Melvin writes: 

I will leave the math side of that answer to those better qualified than I, but one real variable is the lifestyle and people with whom one associates. A speculator can choose his associates. If you have ever been a guest of the Chair you know he surrounds himself with intelligent cultured people from whom he can learn and whom he can teach. There is good music, old books, chess and fresh fruit. The same holds true for many specs I have been fortunate to know.

Contrast that to the casinos and racetracks, where your companions out of necessity are drunks, desperates, pimps, thieves, shylocks, charlatans and tourists from the suburbs. Even if you found a way to beat the game, the world of a professional gambler just is not a pleasant place.

Gibbons Burke writes: 

 Here is something I posted here before on this distinction…

Being called a gambler shouldn't bother a speculator one iota. He is not a gambler; being so called merely establishes the ignorance of the caller. A gambler is one who willingly places his capital at risk in a game where the odds are ineluctably, mathematically or mechanically, set against the player by his counter-party, known as the 'house'. The house sets the odds to its own advantage, and, if, by some wrinkle of skill or fate the gambler wins consistently, the house will summarily eject him from the game as a cheat.

The payoff for gamblers is not necessarily the win, because they inevitably lose, but the play - the rush of the occasional win, the diversion, the community of like minded others. For some, it is a desire to dispose of money in a socially acceptable way without incurring the obligations and responsibilities incurred by giving the money away to others. For some, having some "skin in the game" increases their enjoyment of the event. Sadly, for many, the variable reward on a variable schedule is a form of operant conditioning which reinforces a compulsive addiction to the game.

That said, there are many 'gamblers' who are really speculators, because they participate in games where they develop real edges based on skill, or inside knowledge, and they are not booted for winning. I would include in this number blackjack counters who get away with it, or poker games, where the pot is returned to the players in full, minus a fee to the house for its hospitality*.

Speculators risk their capital in bets with other speculators in a marketplace. The odds are not foreordained by formula or design—for the most part the speculator is in full control of his own destiny, and takes full responsibility for the inevitable losses and misfortunes which he may incur. Speculators pay a 'vig' to the market; real work always involves friction. Someone must pay the light bill. However the market, unlike the casino, does not, often, kick him out of the game for winning, though others may attempt to adapt to or adopt his winning strategies, and the game may change over time requiring the speculator to suss out new rules and regimes.

That said, there are many who are engaged in the pursuit of speculative profits who, by their own lack of skill, are really gambling; they are knowingly trading without an identifiable edge. Like gamblers, their utility function is not necessarily based on growth of their capital. They willingly lose their capital for many reasons, among them: they enjoy the diversion of trading, or the society of other traders, or perhaps they have a psychological need to get rid of lucre obtained by disreputable means.

Reduced to the bare elements: Gamblers are willing losers who occasionally win; speculators are willing winners who occasionally lose.

There is no shame in being called a gambler, either, unless one has succumbed to the play as a compulsion which becomes a destructive vice. Gambling serves a worthwhile function in society: it provides an efficient means to separate valuable capital from those who have no desire to steward it into the hands of those who do, and it often provides the player excellent entertainment and fun in exchange. It's a fair and voluntary trade.

Kim Zussman writes:

One gambles that Ralph and/or Rocky will comment.

Leo Jia adds: 

From the perspective of entering trades, I wonder if one should think in this way:

speculators are willing losers who often win; gamblers are willing winners who often lose.

David Hillman adds: 

It is rare to find a successful drug lord who is also a junkie. 

Craig Mee writes: 

One possible definition might be "a gambler chases fast fixed returns based on luck, while a speculator has time on his side to let the market decide how much his edge is worth."

Bill Rafter comments: 

Perhaps the true Speculator — one who is on the front lines day after day — knows that to win big for his backers, he HAS to gamble. His only advantage is that he can choose when to play. 

 Anton Johnson writes: 

A speculator strives to be professional, honorable, intellectual, serious, analytical, calm, selective and focused.

Whereas the gambler is corrupt, distracted, moody, impulsive, excitable, desperate and superstitious.

Jeff Watson writes: 

I know quite a few gamblers who took their losses like men, gambled in a controlled (but net losing manner), paid their gambling debts before anything else, were first rate sports, family guys, and all around good characters. They just had a monkey on their back. One cannot paint with a broad brush because I have run into some sleazy speculators who make the degenerates that frequent the Jai-Alai Frontons, Dog Tracks, OTB's, etc look like choir boys. 

anonymous writes: 

Guys — this is serious, not platitudinous, and I can say it from having suffered the tragic outcomes of compulsive gambling of another — the difference between gambling and speculating is not the game, the company kept, the location, the desperation or the amounts. The only difference is that a gambler, when asked of his criterion, when asked why he is doing this, will respond with "To make money."

That's how a compulsive gambler responds.

Proper money management, at its foundation, requires that the question of criterion be answered appropriately; once it is, a plan, a road map to achieving that criterion, can be drawn up.

Anton Johnson writes: 

It's not the market that defines whether a participant is a Gambler or a Speculator, it's his behavior.

Gibbons Burke writes: 

That's the essence of my distinction:

"gamblers are willing losers who occasionally win"

That is, gamblers risk their capital on propositions where the odds are either:

- unknown to them,
- unknowable,
- shown by actual experience to have negative expectation, or
- known with mathematical precision to be negative.

They are rewarded for doing so with a random reward size on a random schedule, a pattern of stimulus and response which behavioral scientists have established as the one that induces the subject to persist in the behavior longest without a reward, and which creates superstitious as well as compulsive behavior patterns. Because they have traded reason for emotion, they tend not to follow a reasonable and disciplined approach to sizing their bets, and often overbet, leading to ruin.

"speculators are willing winners who occasionally lose." That is, speculators risk their capital on propositions where the odds are:

- known to have positive expectation, from (in increasing order of significance) theory, empirical testing, or actual trading experience

They occasionally get unlucky and have losing streaks, but these players incorporate that risk into the determination of the expectation. Because their approach is reason-based rather than driven by emotion, they usually have disciplined programs for sizing their bets to get the maximum geometric growth of their capital given the characteristics of the return stream and their tolerance for drawdown.

If a player has positive expected value on a bet, then it is not a gamble at all. The house does not gamble. It builds positive expectation into its games. It is a willing winner, although it occasionally loses.

There are positive aspects of gambling, which I have pointed out earlier in the thread and won't belabor. To say that "all gambling is bad" is to take the narrowest view. Gamblers who are willing losers (by my definition all are) provide the opportunities for willing winners (i.e., speculators) to relieve gamblers of the burden of capital they clearly have no desire to hold onto, or are willing to trade in a fair exchange for the excitement of the play, to enable their alcoholic habit, to pass the time, to relieve their boredom, to indulge delusions of grandeur at the hoped-for big win, after which they will quit playing, or combinations of all of the above.

Duncan Coker writes: 

I found Trading & Exchanges by Larry Harris a good book on this topic; he defines all the participants in the exchanges, and both gamblers and speculators have a role to play. Here is something taken from page 6 that makes sense to me: gamblers "trade to entertain", while speculators "trade to profit from information they have about future prices."

He divides speculators into those that are well informed versus those that are not. One profits at the expense of the other. Investors "use the markets to move money from the present into the future". Borrowers do the opposite.



The Treasury Dept. puts out a Monthly Treasury Statement that breaks down a lot of interesting data. Mostly it's redundant information to anyone following the daily data, but this month (i.e. April) the payroll tax receipts from self-employed enterprises is a few sigmas to the upside. This April's self-employed number was 8.86 percent higher than that of 2012. The non-self-employed number was up 4.55 percent, which interestingly does not agree with the numbers reported in the Daily Treasury Statement. Recall that payroll taxes for everyone were increased 2 percent. Also note that the Daily Statement is done without human intervention, whereas the Monthly Statement has fingerprints all over it.



Those who choose not to read good books have no advantages over those who cannot read. (Attributed to Mark Twain.) A similar thing applies to research and data: Those who do not collect (and scrutinize) their own data, have no advantages over those who get their ideas and data from journalists or poor data suppliers. I would venture an educated guess that most of the managed investment money is handled by managers getting their information from journalists. Gentlemen, that's your competition. Go forth and prosper.

In quantitative analysis (irrespective of whether its data origins are financial statements or market prices) the guy with the best data has a definite advantage. Conversely, the best analytical mind coupled with poor quality data is at a disadvantage. Let me first deal with the problems of price data.

In equities, back-testing requires including deceased stocks to eliminate survivor bias. That means that to test the Russell 3000 over, say, 15 years, you need data on maybe 8,000 stocks. You cannot collect those by symbol, because symbols get recycled. So you try SEDOL or CUSIP numbers, but even those have problems. The holy of holies, CRSP, has problems. And you cannot simply toss out the missing stocks without introducing bias. Also note that the constituents of the R3k decrease monthly and are refreshed annually.

Obviously you have to adjust for dividends because you want to compare total returns. That introduces the dividend-adjustment problem: do you use multiplicative adjustment or subtraction? Either is problematic: multiplicative adjustment preserves percentage returns but destroys round-number price levels, while subtraction preserves point differences but can create negative prices early in the series.
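A toy sketch (hypothetical prices and dividend, not real data) shows the trade-off between the two adjustment methods:

```python
# Hypothetical three-bar price history and a single $0.50 dividend,
# back-adjusted two ways.
prices = [10.0, 20.0, 10.0]      # closes before the ex-dividend date
dividend = 0.50
last_close = prices[-1]          # close immediately before going ex

# Subtractive: shift all history down by the dividend.  Point spreads are
# preserved, but a large cumulative dividend can drive early prices negative.
sub_adjusted = [p - dividend for p in prices]

# Multiplicative: scale history by (close - dividend) / close.  Percentage
# returns are preserved, but round-number levels like 10.00 are destroyed.
factor = (last_close - dividend) / last_close
mult_adjusted = [p * factor for p in prices]

print(sub_adjusted)   # [9.5, 19.5, 9.5]
print(mult_adjusted)  # [9.5, 19.0, 9.5]
```

Note how the multiplicative version changes the middle bar's level (19.0 vs. 19.5) while keeping its percentage move intact.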

However the problems with data create tremendous opportunities to those who mine it. You know you are on to something when:

1. A major data provider has all of the dates of certain data off by one day (systematic error). You call to ask why that is, and they don't have any idea what you are talking about. "How can you possibly know a priori that their data is wrong?" So you quickly reverse yourself and apologize for being mistaken. Everyone who uses that data has the error: they are counting things that are impossible.

2. You circumvent data suppliers and go directly to the exchange (or government website) because intermediaries screw it up. Hey, you cannot expect data replication to be perfect. (idiosyncratic error)

3. You disregard seasonally adjusted data in favor of raw data, and do your own seasonal adjustment. You cannot do this for every dataset, but certainly for the important ones.

4. A free provider (e.g. government or an exchange) provides detailed instructions on how to data mine their site. But the instructions are wrong. You call and the service people don't know what you are talking about. You eventually get to speak to the geeks and somehow learn the right way to get access. They confirm that no one had those problems before. WHY? Because no one else is looking at the data. He shoots; he scores!

These examples are like lifting back the bride's burqa, thinking that she might have a beard, and being surprised that she is absolutely beautiful.


Some practical rules follow:

a. When at all possible, go directly to the source. That may mean the exchanges or the government agency itself rather than your data supplier, and it may appear unnecessary on the surface. But if you want to find the mistakes that most cannot find, you have to look in different places.

b. Look for site or download counters and check them out. Come back to them and recheck the numbers later to see the average daily hit rate. I was absolutely delighted to learn that I was one of only four downloaders of certain data.

c. Further check that data (with the counter) to see if it is available on Bloomberg or another major source.

d. Look for alternatives to the data you seek. The alternatives might not be the exact data, but they may be good surrogates. Real numbers for something close to what you want are better than bullsh*t numbers from a poorly conducted survey.

e. I cannot overemphasize the importance of checking the data, and checking that your data mining routine has collected it properly. Errors (either systematic or idiosyncratic) regularly occur. As renowned data cruncher John Tukey said, "There is no substitute for looking at the data." (Exploratory Data Analysis)

Typical problems you have to avoid:

- Look-ahead bias and survivor bias

- Lack of statistical significance: engineers typically require 30-50 observations, but market traders (such as technical analysts) frequently treat a single event as significant. Don't do that!

- Testing on a sample of data that may not include the pattern. The solution to that of course is to always use the population, rather than a sample.

- Frequently you may test something on an index as a precursor to testing on thousands of individual stocks (or worse, options). But indices do not necessarily behave like individual stocks. ETFs might be a solution, but they are in themselves just smaller indices.

Those who challenge the validity of data mining (and also market timing) tend to cite as their proof that the first order daily changes in stock prices are random. We can concede that point, but there are lots more relationships to be studied than daily changes.

Data mining can be successful for any number of reasons but the juiciest fruit is to be found in the following ways:

- Analysis of data that is unknown or unseen by most people or, better yet, subject to systematic error (~finding buried treasure)

- Better analysis of existing data. (~having a better brain) Note that some of this will be serendipitous. Exploration by definition will lead you to discovering things you did not expect.

- Incredible persistence (hard work).

Should you seek to do fundamental analysis you will find different and more exasperating problems. We know many of them first hand.

The first problem is that the data is not easily accessed. It tends to cost quite a lot of money, and much of it has systematic errors. We have not found a commercial data supplier that did not have systematic errors.

The cost can be prohibitive. The major high-end quote provider places limits on the amount of data one can retrieve in a given period. We also know that provider uses a lot of humans in the process and has a lot of errors. Looking further afield, the fundamental data vendors we found are three in number. One replied quickly with a quote of $30,000 for the back data and a 1-year subscription going forward. Another came back a month later and wanted $72,000 for the same, and the third never came back to us. When I informed the higher priced service that they were above their competitors, they asked if their "pricing committee" could know what we were quoted by their competitors. That does not tend to make one comfortable, as what kind of business does not know what their competitors charge? Particularly if they make the point of having a pricing committee.

There is an alternative to buying fundamental data – getting it yourself. In theory this should be straightforward: the S.E.C. has all of the relevant files online. But if you are looking to get data on say the R3k for 15 years, you will have to collect it from approximately a half-million 10K and 10Q files.

Uniformity is generally not the rule, and you need some uniformity when doing computer mining. For example, sometimes a 10Q will be labeled "Ten Q" which has to be planned for. Unless you have access to a lot of people from the sub-continent, you want to do this automatically, which will also enable you to avoid things like transposition errors committed by humans. But some things are easier for humans than for computers. For example, most data constituting a company's total assets are listed as "Total Assets". Sometimes that is misspelled, and sometimes the number appears with a double underline, and other times without. Usually the next line starts with "Liabilities", but not always. It's laughable, but not fun.
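A minimal sketch of the label-matching problem, assuming one is scraping plain text; the regex and function name are illustrative only, and real filings are far messier than this:

```python
import re

# Tolerate case differences and stray punctuation around the label, but a
# genuine misspelling ("Total Asets") still defeats the pattern.
LABEL = re.compile(r"total\s+assets\s*[:.]?\s*\$?\s*([\d,]+)", re.IGNORECASE)

def total_assets(filing_text):
    """Return the first 'Total Assets' figure found, or None."""
    m = LABEL.search(filing_text)
    return int(m.group(1).replace(",", "")) if m else None

print(total_assets("TOTAL ASSETS: $1,234,567"))  # 1234567
print(total_assets("Total Asets 1,234,567"))     # None - still needs a human
```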

We cut our teeth on a subset of the universe, REITs. The 172 that we found interesting had approximately 8,000 10K and 10Q files. After a lot of work we managed to get data cleanly from all but about 50. We consider that a major success, but even that low failure rate means we will have to go through about 3,000 files manually for the entire universe of a half-million files.

The good news is that having unrestricted access to such data provides a lot of opportunities. We are making a leap of faith that the data and our analysis will improve our existing results. Of course there isn't a guarantee, but that's the way to bet.

Phil McDonnell writes: 

Thanks to Bill for his excellent survey of data collection techniques and especially the pitfalls. There is little to add except one thing: retroactive changes to data. To handle that case one needs to time-stamp the data as of the time received. This caution applies to fundamental data as well as price data, which can be 'adjusted' a day or two later.

The worst example of this was Enron. The Enron data that showed the fraud was only released several years after the bankruptcy.



The payroll tax numbers look bullish on the economy, but any conclusions have to be weighed against the quality of the data. The upcoming Jobs report will include data for the approximate monthly period ending January 12, 2013. That is, it includes data crossing over the year end. Normally that would not be a problem, but this most recent year-end includes tax law changes.

There was a significant amount of bonuses being paid out in December 2012 rather than in the first Quarter of 2013. That shows up in the data. Then of course the tax receipts would drop in 2013 to reflect the lack of bonuses paid in 2013. That also shows up in the data. But then you would see the receipts level off at a number higher than January 2012 because the Feds are taking about 2 percent more. That also shows up in the data. (see attached chart) So the tax receipts accurately reflect policies, the fact of which counters arguments suggesting that people and businesses do not pay attention to taxes.

If taxes reflect policies in this short-run, then their ultimate effects will also reflect those policies. That is, sooner or later the increased taxes will undoubtedly have bearish effects on the economy. Like, we didn't know that?

Is there a better metric of what is going on, considering that the changing rules have the payroll tax receipts jumping up and down? Yes there is: the medicare taxes, officially known as "Hospital Insurance". I have previously commented in this space about medicare taxes, so forgive me for not repeating it. This report will be released at 2 PM EST on the 8th business day of February. The bad news is that it is only monthly data, but the good news is that it will reflect receipts through January 31, and as such should give us a clean look.



 I'm sitting in a Panama City youth hostel next to a sailing board with departure times for Colombia. There are twenty sailings in as many days, but most are filled by an assortment of backpack travelers from ten countries speaking five languages. It's musical chairs around the board for a $400 berth on a boat that takes four days to Cartagena, Colombia, with a stopover at the San Blas Islands. Every fifteen minutes a seat fills or opens, and the standbys frown or cheer.

I arrived a week ago to help an American ex-pat on a Philosopher's stone land quest, a list of some 600 properties of which he has bought and sold five in the past year at perhaps ten times his dollar cost and two months research and surveying each to get the titles. I accompanied him a few days ago to seven hectares on the Caribbean Coast that sold for $250,000 on the spot to a Mexican developer, and viewed others on Lake Gatun within and Tobacco Island at the inlet of the Canal.

The primary reason for arriving in Panama, however, was to hike the Darien Gap, a 90-mile jungle-locked strip that is the only break in the Pan-American Highway from the Arctic Circle to Tierra del Fuego. This is my third attempt to hike and canoe the Gap, and I flew in on the promise of a Darien village chief to guide me through to Colombia. But yesterday he backed out, saying he couldn't be paid enough to risk his family's lives given the recent increased activity of the Colombian FARC rebels throughout the region.

So, I set about alternatives, and earlier today sat eating a hamburger in Manuel Noriega's Paymaster House, now a converted restaurant in Santa Clara, Panama. It was one of his power centers, with troops wearing boa constrictors around their necks to intimidate the locals while guarding the nearby airport. In December 1989 the sky filled with U.S. paratroopers who landed at the airport and asked the locals for an English speaker, who pointed a Major and a company of 30 Marines to where I had bunked the previous night at an American ex-pat's home. He led them to the Paymaster House, which was captured by the U.S.

I returned to Panama City and even as I write a space opened on the December 20th sailing of the Mars De Gato sloop and I grabbed a seat.

Tomorrow there's a 30th anniversary racquetball clinic at the Fort Clayton Gym, where in the 1980s I led the racquetball invasion of Central and South America with clinics throughout Latin America; that trip was also my first failed attempt at the Darien Gap.

After the clinic and the sailing I'll alight in Colombia and work south to climb 14,000' Ecuadorian mountains, and then ply rivers down to Peru, where I was hired hours ago via the Internet as a butterfly hunter. I'll capture only five exotic species, which an amigo collector sells on eBay for $500 to $1,200 each. He has provided me with a net, glassine envelopes and mothballs, and will pay $50 for each rare specimen I net, to finance my passage home in a few months.



My transparent, stretchable Fibonacci overlay seems to be successfully identifying price levels around which the main indexes cluster. This in itself does not predict the future, it identifies where the holes are on the bagatelle table, but not which one the ball will settle in. Moreover it may reflect a self-fulfilling prophecy rather than a rule. Nonetheless, the information can be useful in constructing multiple-leg options positions.

But my overlay does not predict timing. The pundits all mention Fibonacci for timing as well, but that does not seem to hold. Has anyone tried other methods? Interested in any pointers.

Looking forward to stepping back in the water, but want to maximise the acuity of my toolset first.

I found this nice online chart for streaming SP500, also gives longer term charts.

Jim Sogi writes:

Not sure what you're doing, but I've been pondering time, time frames, and the relationships of time. Some systems using returns have time exits, and a study of time seems important. Not sure exactly how, but the idea is to maximize return based on time while minimizing loss. The relationships change by cycle; time itself, speed, and rate-of-change volatility all seem to have cycles in time. Perhaps survivorship times give some info.

Bill Rafter writes:

The question is whether one wants to value time or eschew it. Both can be done, so it's up to the practitioner.

Valuing time is easy, as most economics is time-series processing, and almost all market data comes dated. Shunning time is trickier: do you want to avoid just some time, or all of it?

Point & Figure analysis is what most subscribe to if they want to eliminate some time, and they do that by defining "box sizes", the minimum move they consider significant. The theory is to define the noise level and throw that noise away. It sounds great: one would willingly be a tad late on a move if the signal had a higher degree of accuracy. Our extensive research says that P&F is certainly a tad late, but that there is also a decrease in accuracy. Here's another caution: most of the literature on P&F is written by those lacking native intellectual capacity (IMO) who have no concept of research. To them P&F is a religion akin to animism.

A more successful approach than P&F is not to create box sizes, but to drop all "inside days". I say more successful in that you eliminate insignificant data, and do not lose accuracy. However we have not been able to increase accuracy over normal data analysis. But we are still working with it, and may find something. You still get a time grid, but with lots of the days missing.
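A sketch of the inside-day filter on made-up (high, low) bars; comparing each bar to the last bar kept, rather than to the immediately preceding day, is one possible variant:

```python
# Drop bars whose range sits entirely inside the range of the last kept bar.
def drop_inside_days(bars):
    kept = [bars[0]]
    for high, low in bars[1:]:
        prev_high, prev_low = kept[-1]
        if high <= prev_high and low >= prev_low:
            continue                      # inside bar: discard
        kept.append((high, low))
    return kept

bars = [(10, 8), (9.5, 8.5), (11, 9), (10.5, 9.2), (12, 7)]
print(drop_inside_days(bars))  # [(10, 8), (11, 9), (12, 7)]
```

In practice each surviving bar keeps its date, so, as the text says, you still get a time grid with lots of the days missing.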

The most effective way to eliminate all time is to use Lissajous patterns. That link will give you an animated example with two sine waves. There is a lot to be said about this, but I don't think many have the appetite for it.
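In the absence of the original animation, a minimal sketch of a Lissajous figure: plotting one sine against another removes time from the picture entirely (only the point list is produced here, not the plot):

```python
import math

# x(t) = sin(a*t), y(t) = sin(b*t + phase), sampled over one full cycle.
# Plotting y against x leaves no time axis at all.
def lissajous(a, b, phase, n=1000):
    return [(math.sin(a * 2 * math.pi * k / n),
             math.sin(b * 2 * math.pi * k / n + phase))
            for k in range(n)]

points = lissajous(3, 2, math.pi / 2)   # a 3:2 frequency ratio
```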



 The payroll tax receipts are virtually identical to that of a month earlier. If there is any tilt to the data, it is slightly higher (i.e. bullish equities). If the media expects it to be down (maybe because of Sandy), then the released data will catch them a little off guard.

From the end of the reporting period Nov. 16th thru Nov. 28th, the data has been higher, but that should be reflected in the next month's report. Please note that I remove the seasonal effects to avoid enthusiasm over the temporary Christmas employment.



A post purporting to show that buy-and-hold investing does not work has appeared on our list. It is reprehensible propaganda and total mumbo. They take no account of the distribution of returns to investing over long periods that has been enumerated by the Dimson group and by Fisher and Lorie. It is sad to see this on our site. The argument against buy and hold seems to be that the professors found that short-term investing didn't work, so they erroneously concluded that long-term investing must be the alternative. Shiller is mentioned and cited with approval.

Alston Mabry writes: 

To explore this issue numerically, I took the monthly data for SPY (1993-present) and compared some simple fixed systems. In each system the investor is getting $1000 per month to invest. If during that month, the SPY falls a set % below the highest price set during a specific lookback period (the 3, 6, 12, 18, 24 or 36 months previous to the current month), then the investor buys SPY with all his current cash (fractional shares allowed). If the SPY does not hit the target buy point this month, then the $1000 is added to cash. Once the investor buys SPY shares, he holds them until the present.

For example, let's say the drop % is 10%, and the lookback period is 12 months. In May of year X, we look at the high for SPY from May, year X-1, thru April, year X, and find that it is 70. We're looking for a 10% drop, so our target price would be 63. If we hit it, then spend all available cash to buy SPY @ 63. Otherwise we add $1000 to cash.

Each combination of % drop and lookback period is a separate fixed system.

Over the time period studied, if the investor just socks away the cash and never buys a share (and earns no interest), he winds up with $239,000. On the other hand, if he never keeps cash but instead buys as much SPY each month as he can for $1000, then he winds up with over $446,000, which amount I use as the buy-and-hold benchmark.

If the investor uses the fixed system described, he winds up with some other amount. The table of results shows how each combination of % drop and lookback period compared to the benchmark $446,000, expressed as a decimal; e.g., 0.78 would mean that particular combination produced (0.78 * 446,000) dollars.

Results in this table

The best system was { 57% drop, 18+ month lookback }, or just to wait from 1993 until March 2009 to buy in. Of course, it's hard to know that 57% ex ante. The next best system was { 7% drop, 3 month lookback } coming in at 0.99.

This study is just food for thought. It leaves out options for investing cash while not in the market. And it sticks with fixed %'s without exploring using standard deviation of realized volatility as a measure. So, there are other ways to play with it.
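A compact sketch of one such fixed system on made-up monthly data; the use of monthly lows to decide whether the target price was "hit" is my assumption, and the numbers are illustrative only:

```python
# Each month: add $1000 of cash; if this month's low reaches a target price
# `drop` percent below the max close of the prior `lookback` months, spend
# all accumulated cash at that target.  Shares, once bought, are held.
def run_system(closes, lows, drop, lookback, monthly=1000.0):
    cash, shares = 0.0, 0.0
    for i in range(len(closes)):
        cash += monthly
        if i >= lookback:
            target = max(closes[i - lookback:i]) * (1.0 - drop)
            if lows[i] <= target:
                shares += cash / target
                cash = 0.0
    return cash + shares * closes[-1]    # final value at the last close

closes = [100, 105, 110, 95, 100, 108]
lows   = [ 98, 102, 106, 90,  97, 104]
final = run_system(closes, lows, drop=0.10, lookback=3)  # ~6454.55
```

Sweeping `drop` and `lookback` over a grid against real data reproduces the family of systems in the table above.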

Charles Pennington comments: 

Thank you — that is a remarkable "nail-in-the-coffin" result.

Nothing beat buy-and-hold except the versions with the freakish 57% threshold, and those won by a tiny margin. They must have been dominated by a few rare events (57% declines) and therefore must carry a lot of statistical uncertainty.

That's very surprising and very convincing.

(Now some wise-guy is going to ask what happens if you wait until the market is UP x% over the past N months rather than down!)

Kim Zussman writes: 

Here are the mean monthly returns of SPY (93-present) for all months, months after last month was down, and months after last month was up (compared to mean of zero):

 One-Sample T: ALL mo, aft DN mo, aft UP mo

Test of mu = 0 vs not = 0

Variable      N    Mean   StDev  SE Mean  95% CI              T
ALL mo      237  0.0073  0.0437   0.0028  ( 0.0017, 0.0129)  2.58
aft DN mo    90  0.0050  0.0515   0.0054  (-0.0057, 0.0158)  0.92
aft UP mo   146  0.0083  0.0380   0.0031  ( 0.0021, 0.0145)  2.65

 The means of all months and months after up months were significantly different from zero; months after down months were not.

Comparing months after down vs months after up, the difference is N.S.:

Two-sample T for aft DN mo vs aft UP mo

Variable      N    Mean   StDev  SE Mean
aft DN mo    90  0.0050  0.0515   0.0054   T = -0.53
aft UP mo   146  0.0084  0.0381   0.0032
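For anyone wanting to replicate the arithmetic behind these tables, the statistics are a one-sample t (mean over its standard error) and a two-sample t with unequal variances (Welch). A minimal pure-Python sketch of my own, not Kim's actual Minitab run:

```python
import math

def one_sample_t(x):
    """t statistic for the mean vs. zero, as in the first table."""
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))   # sample st. dev.
    return m / (s / math.sqrt(n))                           # mean / SE mean

def welch_t(a, b):
    """Two-sample t with unequal variances, as in the DN-vs-UP comparison."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

Feeding in the monthly SPY returns partitioned by the prior month's sign should reproduce the T columns above.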

Bill Rafter writes: 

A few years ago I published a short piece illustrating research on Buy & Hold. It contrasted a perfect-knowledge B&H with a variation that used less-than-perfect knowledge and more frequent turnover. Here's the method, which can easily be replicated:

Pick a period (say a year) and give yourself perfect look-ahead bias, akin to having the newspaper one year in the future. Identify those stocks (say 100) that perform best over that period, and simulate buying them. Over that year you cannot do better. That's your benchmark.

Then over that same period do the following: Buy those same 100 stocks, but sell them half-way thru the period. Replace them at the 6-month mark with the 100 stocks perfectly forecast over the next 12 months. Again sell them after holding them for just half the period. Thus the return from the stocks that you have owned and rotated are the result of less-than-perfect knowledge. Compare that return to the benchmark.

Do this every day to eliminate start-date bias, and then average all returns. The less-than-perfect knowledge results far exceeded the perfect-knowledge B&H. Actually they blew them away in every time frame. It's really obvious when you do this with monthly and quarterly periods as you have so many of them.
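A toy version of the experiment, with made-up return series. The function names, the equal-weight assumption, and the tiny universe are mine for illustration; Bill's actual implementation surely differs, but the structure of the comparison is the same:

```python
def compound(rets):
    """Growth factor of a return series."""
    g = 1.0
    for r in rets:
        g *= 1.0 + r
    return g

def topk(returns, start, length, k):
    """The k stocks with the best (perfectly foreseen) return over
    [start, start + length) -- the 'newspaper from the future'."""
    scores = {s: compound(r[start:start + length]) for s, r in returns.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def basket(returns, picks, start, length):
    """Equal-weight growth of the chosen stocks over the holding window."""
    return sum(compound(returns[s][start:start + length]) for s in picks) / len(picks)

def cassandra(returns, period, k):
    """Perfect-knowledge buy & hold vs. half-period rotation into the
    NEXT full period's winners, held only half the period each time."""
    half = period // 2
    bench = basket(returns, topk(returns, 0, period, k), 0, period)
    r1 = basket(returns, topk(returns, 0, period, k), 0, half)
    r2 = basket(returns, topk(returns, half, period, k), half, half)
    return bench, r1 * r2

# Toy data: B's strength arrives late, so the mid-period rotation catches it.
rets = {
    "A": [0.1, 0.1, 0.1, 0.1, 0.0, 0.0],
    "B": [0.0, 0.0, 0.2, 0.2, 0.2, 0.2],
    "C": [0.0] * 6,
}
bench, rotated = cassandra(rets, period=4, k=1)
```

In this toy case the rotation beats the perfect-knowledge benchmark, for the reason given above: the shorter holding period lets the fresher forecast in sooner.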

The funny thing about this is the barrage of hate mail that I received from dedicated B&H investment advisors, who somehow felt their future livelihoods were threatened.

If anyone wants that old article, send me a message off the list. We called it "Cassandra" after someone with perfect knowledge who was scorned.

Anton Johnson writes in: 

Here is a link to BR's excellent study "Cassandra", as it lives on in cyberspace.



 One has found that there is an electronics circuit that almost always retrospectively provides a great description of price action in markets. I wonder if there is an electronics circuit that compresses the voltage output, keeping it in a range, sort of like the finger in the dike, but where, after the compression is over on the negative side, e.g., after the negative feedback is taken away, the voltage does not immediately swing to a tremendous negative voltage. I seem to remember such a circuit with op amps.

Jon Longtin writes:

There are a variety of electronic circuits that perform such a role, depending on the application. One common application is a voltage regulator, which provides a (nearly) constant voltage, regardless of the load applied to it. The circuit monitors the actual voltage currently being provided and compares it to a pre-set reference value. The difference between the actual and desired (setpoint) values is called the error, and is used to adjust the current provided to the circuit to bring the voltage back to the setpoint value. For example, if the load increases (more electricity demand) the load voltage will drop and the voltage regulator will provide more current to bring the voltage back up. The same goes for a decrease in load.

There are some limitations and compromises in such a circuit. First, there is a finite amount of current that the power supply/voltage regulator can provide, and if the error signal requests more than this amount, the output will not be maintained. Also of importance is the time response: a circuit with a very fast time response will respond more quickly to fluctuations in the load, but can also produce so-called parasitic oscillations, in which the output oscillates after a fast change in load is made. By contrast a longer time response provides a slower response to a variation, but tends to damp oscillations. This same behavior, of course, is seen in countless financial indicators, and is part of the art in deciding, e.g., how many prior data points to include in a signal.
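The proportional-feedback idea can be caricatured in a few lines (a toy discrete-time model of my own, not a real regulator): each step the output droops under load, and the controller adds a correction proportional to the error.

```python
def regulate(setpoint, load, gain, steps=200):
    """Toy proportional regulator: v droops by `load` each step and is
    corrected by gain * (setpoint - v). It settles at setpoint - load/gain,
    the classic steady-state offset of pure proportional control."""
    v = 0.0
    for _ in range(steps):
        v += gain * (setpoint - v) - load
    return v
```

`regulate(5.0, 0.1, 0.5)` settles near 4.8 rather than 5.0. Raising the gain shrinks that offset and speeds the response, but in this toy the loop goes unstable past gain = 2, the discrete analogue of the parasitic oscillations described above.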

A somewhat more complex version of the above, and perhaps more closely aligned with the behavior of a market signal, is an audio "compressor/limiter". This is a device that constantly monitors the volume (magnitude or voltage) of an audio signal and makes adjustments as needed. A limiter is the simpler of the two and simply sets a threshold above which a loud signal will be attenuated. The attenuation is not (usually) a brick wall, however; rather, a signal that exceeds the threshold value is gently attenuated to preserve fidelity without overloading the audio or amplifier circuitry. A compressor is a more complicated animal and provides both attenuation for loud signals AND amplification for quieter ones. In essence a HI/LO range or window is established on the unit, and signals exceeding the HI limit are attenuated, while signals below the LO limit are amplified. The resulting output then (generally) falls within the HI/LO range.

This is used extensively (too much!) in commercial music. Humans naturally pay attention to louder sounds (ever notice how the volume universally jumps when commercials come on TV? They are trying to grab your attention with the louder volume). Pop music attempts to achieve the same by using aggressive compression to provide the loudest average volume for program material without exceeding the maximum values set by broadcast stations or audio equipment. The result, however, is that the music sounds "squished" and doesn't "breathe" because the dynamic range of the content has been reduced considerably. With such devices there are a variety of adjustments to determine the thresholds, the time before taking action (the attack time), and how gradually or strongly to attenuate (or amplify) signals that exceed the envelope range.
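A bare-bones static compressor, ignoring attack and release times (the names and the simple linear ratio are my own assumptions; real units work on a running envelope of the signal, not sample by sample):

```python
def compress(samples, lo, hi, ratio=0.5):
    """Attenuate magnitudes above `hi` and boost magnitudes below `lo`,
    each by `ratio`, so the output hugs the [lo, hi] window."""
    out = []
    for x in samples:
        sign, mag = (1.0 if x >= 0 else -1.0), abs(x)
        if mag > hi:
            mag = hi + (mag - hi) * ratio        # gentle limiting, not a brick wall
        elif 0.0 < mag < lo:
            mag = lo - (lo - mag) * ratio        # bring quiet passages up
        out.append(sign * mag)
    return out
```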

Here's a fairly decent article that describes this in more detail.

Incidentally both of the above are examples of a large branch of engineering called Controls Engineering. The idea, as Vic stated, is to monitor the output by using feedback and make adjustments accordingly. There are countless different algorithms and approaches, as well as very sophisticated mathematical models (people build careers on this) to best do the job. Like most complex things, there is no single approach that works best for every problem, but rather involves a balance of performance, cost, and reliability.

I highly suspect such algorithms have already found their way into many trading strategies, one way or another.

If interested, I can suggest some references for further reading (though I am not a Controls person myself).

Bill Rafter writes: 

 Think of your voltage regulator as a mean-reversion device. If a lot of this is being done, then your trading strategy must morph into simply following the mean.

In light of recent changes in the investment climate we suggest that one should tighten up controls in which one is long a given market. Perhaps that might also or alternatively mean (a la Ralph) tightening the size of the positions. The result will be taking less risk and incurring less return, but taking additional risk would seemingly not be rewarded in the current milieu.

Jim Sogi writes: 

Dr. Longtin's description of compressors and limiters was fascinating. A compressor on my guitar signal chain prolongs the sustain of a signal in addition to smoothing out the volume spikes, and has less fade as the signal weakens. With added volume, one gets a nice controlled feedback.

Sometimes in the markets one sees a sustained range with the spikes being attenuated, reminiscent of a nice guitar sustain.

On a different note, one curious thing is that people cannot discern differences in absolute volume. It's very hard to hear the difference in volume between two signals unless they are placed side by side.



 The problem with polling is the response rate. A generation or two ago people were honored that someone would solicit their opinion. Not so today, for whatever reason. Two days ago the Pew Foundation revealed the percentage of persons contacted who are willing to give their opinions. Take a guess as to what that percentage is. According to Pew, their "success rate" is nine percent, or about one out of every eleven persons they speak with. Thus they cannot get a reasonable random sample, and the Central Limit Theorem does not apply. Therefore most polls are GIGO.



Somebody is spending, but who? This chart shows YOY customs duties and excise taxes which are a fairly good surrogate for retail spending.

However, here's a question: if the federal government buys a fleet of new Chevy Volts (the ones that burn up, LOL), do they pay excise taxes?

Mr. Krisrock writes: 

The combination of Higher Energy, Higher Dollar, Higher Commodities, and the September 15 California Tax on the internet has caused a SURGE in Amazon orders to beat the increase…and of course Obama's playing with the numbers. Look at the huge drop in employment a year ago in October. Obama knows how the numbers work a year ahead, and of course, where do auto parts come from…electronic components come from Asia…for starters…and foreign car assemblies do buy foreign parts.



 What are the best markets to trade? Many futures markets trade differently. Some have a lot of depth and intraday gaps are infrequent (I consider these the best to trade). Others have ample liquidity but are prone to gapping. Others still are downright scary. E-minis and 10-years seem like very "safe" trading markets. Eurodollars as well. Crude oil has a lot of liquidity but can gap. Gold seems prone to fast and erratic moves. Grains seem like they can get a bit dicey. Less-trafficked softs seem rather risky. Commodities in general appear to have more erratic price risk than stock index futures or financial futures. FX is fairly liquid and seems OK. I am largely making observations based on personal experience, and in some cases I have none, so I am curious for thoughts from seasoned specs.

Bill Rafter writes:

Ask yourself: would I rather trade an extremely efficient market, in which information is digested immediately and most of the fluctuations not related to new information are due to randomness, or would I prefer a market that is less so? As you gain experience you will learn that one of these mutually exclusive choices is more profitable to trade than the other. One of them requires virtually no expertise to trade, and indeed expertise would not appear to be helpful, whereas the other requires considerable expertise. One is the frequent choice of novices, whereas the other tends to be avoided by novices. Then ask yourself: how do novices typically fare?

Jeff Watson writes: 

Grains are impossible right now. The 30-cent daily ranges make it too much of a gamble. Even trying to predict, or have a gut instinct about, where the carry spreads, the corn/wheat/bean spread, or the crush are going…. Oy vey. To play the grains, to borrow a surfing analogy: you better be in really good shape, you gotta see the wave (move) coming toward you, then paddle real hard, pop up and catch the wave. You better either be quick to bail or commit to the wave, make a bottom turn, then ride it until it's over. Determining when the ride (trade) is over isn't as simple as it sounds, and many dangers exist on and below the surface that can still mess you up when you bail the trade. The most important decision a grain trader can make right now is whether he wants to gamble a lot for a potentially big reward, or hunker down and reduce risk.




 We have recently learned something with regard to trading currencies; specifically, that in a strategy involving switching or rotating currencies, they should also be traded with debt and gold. That is, excluding gold and debt from the universe of currencies lowers rates of return and/or increases drawdowns, leaving results less than optimal.

Background: We are equity traders who occasionally run from equities when our various quant manipulations suggest we are about to get thumped. Traditionally in such a circumstance our go-to place has been treasuries, specifically the 10-year. But there are times (like now) when fleeing to bonds doesn't seem like a good idea. So we decided to reassess our strategy vis-à-vis alternatives. And our full-court-press of research shows that the best alternative is a strategy of moving between bonds, gold and the U.S. Dollar Index. This beats ALL strategies involving only one or two of those assets. More importantly for others is that it also beats ALL similar rotations among the Dollar Index and a collection of other currencies. (N.B. we are free to choose whatever time frame seems to be best suited.)

With regard to our strategy, it trades combinations of those assets rather than one alone. However there are times when the strategy will have us in only one asset, and many would express fear at being entirely in gold or the dollar. Few would fear being entirely in treasuries, although the period of greatest decline was indeed a time when all monies were employed in debt.

No one is going to get excited about the alternative rotation strategy; it does not have an exceptional rate of return. But it does have very good risk control, which is what we want in an alternative. None of this should be surprising, as we know how interrelated they are. Currency is all debt, except gold, which is the traditional debt alternative vis-à-vis inflation. But then one of the costs of holding gold is the foregone interest. Since they cannot be separated fundamentally, it is logical that they not be separated in a trading program. But it took testing to convince us of such.

If you are a trader who exclusively trades currencies, you should experiment with expanding your universe to include gold and debt.



 I have found a worthy complement to the O'Brian series: Dumas' The Last Cavalier

In 1997 someone went through an old French newspaper and found a serialization of Dumas' last work, which had never been published as a book because it was unfinished. It was finally published in 2005. I happened to buy a copy then, but have only just gotten around to reading it.

I have found it absolutely fantastic. For those who like historical novels, it provides great coverage of the Napoleonic era, the period after the revolution. If you are (as many on the list) a fan of the Patrick O'Brian series about Jack Aubrey, you will find this book gives you the French side of many of the events. One of Aubrey's counterparts would have been Robert Surcouf, so-called King of the [French] Corsairs. Of course Aubrey was fictional and Surcouf real. Dumas' tales of Surcouf are just as good as O'Brian's tales of Aubrey. The protagonist in Cavalier is mentored by Surcouf. Additionally there is an excellent play-by-play account of the Battle of Trafalgar and Nelson's death.

Dumas wrote so much that there are bound to be repeated scenes. An obvious one is that in which an engaged couple signs the marriage contract. That scene in Count is repeated in Cavalier, including the moment when the groom disappears immediately before signing, although the respective grooms depart for different reasons. The Tuileries Palace discussions of Louis XVIII and his staff in Count are repeated with Napoleon and his staff in Cavalier, although chronologically Cavalier precedes Count by at least a decade.

Dumas seems to want to please everyone, and refrains from taking sides, which probably accounts for his publishing success, as both Republicans and Royalists could find something to cheer about. He also provides entertainment for his female audience - lots of social gatherings.

This is a long book. My copy was 700+ pages; the action spanning six years with additional prior history.

The first 300+ pages deal with politics and troubles of a police state, somewhat on edge because of an uprising in Brittany. The parallels to the current political scene are startling. Supporters of Napoleon attributed everything good to him, while the Royalists blamed everything bad on him. One hopes we do not undergo a similar war of extermination. Finally our protagonist gets his freedom and goes on one swashbuckling episode after another, much like the Musketeers.

Dumas meant for it to cover perhaps another eight years if you consider the 14 years from the signing of the marriage contract mentioned by the soothsayer. And I truly wished it had. It's one of those books you hate to have end.

Further editing to put it into a book by Dumas would have made it even better. Still, some of the ways he conveys information are extremely well done. He spoke of Surcouf (Jack Aubrey's counterpart) as a man whose one good fairy not invited to his baptism was Patience. And he chastises young men for leaving their health in brothels and their purses in taverns. Another: "While Nature may have given him a lot of excellent qualities, it had refused him like qualities of the mind."

As the Chair has said, some of the best reading is over a hundred years old.



 As a hedge fund manager you have nine assistants employed solely to give you advice. Each of the assistants has a different perspective on the markets. They are all good advisers, as any one of them improves your trading immeasurably. For example, the market has a 2 percent annual return, but with your skills you can generate a 10 percent return. If you also add the advice of any one of your assistants you can bump that return up to between 12 and 18 percent.

Over the last 12 representative years there have been times when the nine were universally bullish. But despite their unanimity the market did not always rise. Conversely, even in the protracted down moves of 2008, their bearishness was not unanimous. Put another way, there were always one or two who wanted to go long at the worst times. Yet each and every one over time provided great advice.

You would like to find a way to combine their advice to get even better results than by using any one alone. But that's not easy. Sometimes adviser A is early on a move, and other times late. Likewise with the other assistants. One simple solution would be to have them vote, but the performance of the vote underperforms some of the individuals, although it is still better than having no adviser at all.

*Note here that we are only considering return and not the risk taken to achieve that return. Risk should always be considered, but for the sake of moving along, let us assume that taking the advice of your advisors never increases risk and that their respective upside contribution to profits is directly proportional to their downside exposure to risk. That is, much of their positive return contributions come from reducing risk, which is what we have observed generally.

Now, let's suppose that these advisers are not people, but algorithms. That's actually better because as algorithms they can be combined in ways that individuals cannot. They can be viewed logically (on/off) as in the voting experiment, or they can be ranked by their actual values. If they have scalar values they should be normalized (given the same order of magnitude or scale). For example, you cannot compare the slope of the Dow Industrials with that of the S&P 500, as the former is an order of magnitude larger. But if you put them on the same scale (e.g. divided by price), you can easily compare them.
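For instance (my own illustration of the scaling point, with hypothetical function names): a least-squares slope divided by the price level puts the Dow and the S&P on comparable footing.

```python
def slope(series):
    """Ordinary least-squares slope of a series against time 0, 1, 2, ..."""
    n = len(series)
    xbar = (n - 1) / 2.0
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def normalized_slope(series):
    """Slope per unit of price, comparable across indices of any scale."""
    return slope(series) / series[-1]
```

Two series with identical percentage trends but price levels an order of magnitude apart now give the same normalized slope, though their raw slopes differ tenfold.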

Normalization is exactly what you would do to your inputs if you were using a neural net, and you might be tempted to go the NN route. But NNs have problems; among them would be your inability to discover the actual combination of what worked best. You might say "who cares" as long as it works, but that philosophy does not have a good history. However there is a very good use for a NN, and that is as a trial. That is, if you are good at NNs (and most people fail), then you should by all means try. If the NN gives you good results, then proceed on your own to find a good combination without the NN. But if using a NN does not improve results for the experienced practitioner, then it is going to be very difficult to find a better combination.

 But how do you combine them to your best advantage? Well, there's an app for that. It's called linear algebra. It is somewhat vertigo-inducing for most traders, because most of them are comfortable with things they can chart. For your average trader that means two dimensions; options traders tend to be comfortable in three dimensions. But with our illustration we are likely progressing to higher dimensions, and they are not chartable, although the problem's solution is indeed a chart, albeit a virtual one.
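One concrete instance of that linear algebra, as a minimal sketch of my own: assume just two normalized advisers and a least-squares criterion, so the normal equations reduce to a 2x2 system solvable by hand. A real combination would use more advisers and a general solver, but the structure is the same.

```python
def combine_two(a, b, target):
    """Weights (w1, w2) minimizing the squared error of w1*a + w2*b
    against target, via the 2x2 normal equations."""
    aa = sum(x * x for x in a)
    bb = sum(x * x for x in b)
    ab = sum(x * y for x, y in zip(a, b))
    at = sum(x * t for x, t in zip(a, target))
    bt = sum(x * t for x, t in zip(b, target))
    det = aa * bb - ab * ab              # assumes the advisers aren't collinear
    return (at * bb - bt * ab) / det, (bt * aa - at * ab) / det
```

With n advisers the same normal equations become an n-by-n solve: the "virtual chart" in higher dimensions.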

Subsequent "chapters" (if the topic flies): Operations, Testing.

Jim Sogi writes: 

"But with our illustration we are likely progressing to higher dimensions, and they are not chartable, although the problem's solution is indeed a chart, albeit a virtual one."

One of my first posts ever to the SL was Flatland, and the idea that multiple dimensionality is lost in the two-dimensional charts which are typically used.

Easan Katir writes: 

Flatland, one of my all-time favorite books since I read it 40 years ago, offers insights in many arenas. Perhaps some enterprising ex-game coder would turn his attention to finance and provide charts where the point of view can be changed with a click. Will traders of the future be trading on an X-box-like device?



 On a recent trip to the Allerton Botanical Garden in Kauai, the guide noted that the mango trees have recently gone through 3 seasonal cycles in one year when only one is normal. He had no idea why. Our mango tree did the same. The explanation here was the big unseasonable rainstorm that knocked off the flowers at the beginning of the season. Now we have ripe fruit, unripe fruit and flowers on the same tree. Chair often compares trees to markets. I wonder if unseasonable shocks (EU, Fed) might have thrown off the seasonal tendencies in the markets, shortening cycles, or forcing cycles. The changing cycles are the hardest to understand.

Bill Rafter writes: 

That is not unique to mangoes.

Take grape vines for example. Generally the only fertilizer you should use on grapevines is a shovel-full of manure in the spring. My wife thinks we should have flowers in between the grapevines and occasionally will hit the vines with some spray fertilizer she is using on those flowers after midsummer. Relatively soon thereafter the vines put out some new flowers (although most people would not recognize them as such), which will bear fruit (grapes are mostly self-fertile). But the fruit won't have time to develop fully and is a waste.

Fig trees typically produce two crops each year. The early crop (called breva) has generally unworthy qualities, whereas the late crop is to die for. But if you wanted, you could fertilize the tree weekly and have figs all summer long. People with summer vacation homes tend to do that as they know they will not be around after Labor Day, when most of the big crop would come in. You could even fertilize yourself to a second crop of determinate tomatoes, which are "programmed" to bear one crop all at once.

I don't know how good that is for the respective plant because I don't do it. And the lessons for the market seem to relate to Quantitative Easing. Perennial plants (particularly fruit trees) need a dormancy period. If they don't get it, they produce poorly until they do. I believe the same is true with markets: you cannot preempt or "outlaw" the business cycle and expect the economy to respond favorably all the time.



 To what extent does the concept of speed ratings, popularized by Crist, have applicability to markets? One variant of the idea being which horse had the fastest quarter last race, or which one had the greatest move down from one quarter to the next, like from first quarter to stretch, or stretch to close. Can this concept be applied to days within weeks, or months within years? How would some of the handicappers or horses extend this, and what would Bacon say?

Bill Rafter writes:

While in grad school a buddy and I used to go to Golden Gate Fields regularly. For some reason it was always on a Thursday, and we went for the last two races to avoid the entrance fee. The lack of admission was critical because it removed the necessity to bet. (relation to low fixed costs?) The Racing Form was a critical part of the exercise, and I would bet on the horse with the highest speed rating that was showing greater than 8 to 1 odds close to post time. The bets were on the minimal side.

My results were successful if you measured my wine supply, which was quite good. The accumulation of wealth from horse racing was not something I relied upon, so any winnings were spent on the way home at the liquor store on the main drag. Racing bets were not something to aspire to, for a very good reason. Going to the payout window revealed the demographics of those who typically bet on longshots, as an 8-to-1 horse was considered. We also tended to meet those same people in the liquor store, where they were buying Thunderbird.



This is a chart illustrating the S&P shaded to reflect the yearly trend of Initial Unemployment Claims (Fed St. Louis series ICSA). While the chart does not prove anything, it does illustrate a possible relationship. Note that the data relating to the claims have been inverted, such that increases in claims indicate poorer economic conditions and, in turn, declining equity prices.

Editorial comments: I do not prefer the ICSA data because it is weekly and goes through a process of human intervention (?corruption?). I prefer daily data that gets recorded electronically without any possible manipulation. HOWEVER even the ICSA data is now showing bearish market indications. I could torture this data to present the current situation as bullish (by introducing significant lag), but have tried to show it similar to how most would be receiving the information.



How do multiple lead changes, and their duration in minutes very close, i.e. from up to down from close or open, or within + or - 2 for multiple minutes, affect the outcome of the day in markets? In the playoff game they had a 14 point lead. "There were six lead changes and five more ties in the final 7 minutes of the third. For the next 13 minutes, a span of 46 dizzying possessions, neither team led by more than two points." By the way, the quotes are from the AP story about the game. One of the few times that I've ever seen a good meaty story rather than boiler plate from the AP.

Bill Rafter adds: 

Sounds like a job for the Spearman Rank Correlation.
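For anyone wanting to try it, Spearman's rank correlation is just Pearson correlation computed on ranks. A self-contained sketch, with ties handled by average ranks:

```python
def rank(xs):
    """Average ranks (1-based), ties sharing the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                              # extend over the tied run
        avg = (i + j) / 2 + 1                   # mean rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Ranking the lead-change counts against the day's outcomes across many sessions would quantify the relationship Bill suggests.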



 When I was a kid, my father got the family out of blue-collar South Philadelphia to blue-collar Wildwood, NJ for the entire summer. The beach towns of New Jersey are either nice or tacky, and Wildwood was extremely tacky, with most of the tackiness related to its boardwalk. When you are at the seashore for the summer, collecting shells wears a little thin, so a friend Buzzy and I got a discarded window screen and would go under the boardwalk just below several pizza shops and shovel sand into the window screen. Patrons reaching into their pockets for coins would regularly drop some through the slats in the boardwalk. A few hours' work would produce about two bucks each for Buzzy and me, and that was in the days when a quarter could buy you a slice of pizza.

It was dirty work, but rewarding. And of course the dirt was easily washed off in the ocean. Invariably when one of the other kids would find out about our wealth their comments were, "You guys are sooo lucky!" Luck had nothing to do with it. There was a distribution of coins that would fall through and a lot of work by the harvesters. The same is true of the markets.

Jeff Watson writes:

Bill, your experience reminds me of that failed but magnificent musical, "Paint Your Wagon", where Clint Eastwood discovers that gold dust gets spilled on the floor and falls through the cracks. Eastwood, Lee Marvin et al. proceed to dig tunnels under the entire town and collect all of that spilled gold dust. They do extremely well for a while, until a black swan moment when everything collapses and the entire town caves in. There are many market lessons in this movie. About 2:50 of this video is where Eastwood has his eureka moment.

Vince Fulco writes: 

For those who have never been to Chair's Weston office, right next to the Captain's chair is a painting depicting a similar scenario. Not sure if it is a L'Amour story, but the gold miner/spec is on the verge of hitting a nice vein while the precariousness of the surroundings becomes increasingly apparent. The moment on the razor's edge is caught perfectly.

Just a beautiful piece.



 I recently visited a doctor, and when I got there the nurse asked me to complete a computer questionnaire that took an hour. After I filled it out, I was asked to sign a statement that said such things as "you will not be paid for filling out this questionnaire, the contents might be used by commercial factors, there are unlimited people in the survey" and a hundred other things that gave it a false aura of legitimacy.

I am wondering to what extent the false aura of legitimacy pervades our field. The classic example is the elections in a marxist or democratic regime, or the government institution that's there ostensibly to protect you from harming yourself but is really a gate for preventing competition from small and new entrants into the field. The committees in the markets to maintain order and proper pricing that are really arenas for the members to mark positions in their favor, and to force out the non-members through margin changes and rule changes, come to mind. The rules against competition in all fields, the licensing requirements, and for example the ethics tests that one must pass in certain fields. How pervasive is this, and what is the relevance to our field?

Sam Marx writes: 

I agree that the urge not to compete in a fair open market if one is able to set up a monopoly or obtain an advantage is there, and it's a part of human nature. I believe that it cannot be eliminated entirely but there are some changes that would help. I also believe that lying and cheating obtained a large impetus and some begrudging approval when the graduated income tax became constitutional. Therefore, a recommendation I would make is to do away with the graduated income tax and have a flat income tax or replace the income tax with a sales tax. I don't expect to see any of this in my lifetime however. 

Bill Rafter writes: 

Sham credentials. There exist a variety of market-oriented groups whose stated purpose is to identify the truly worthy. However, all they really do is confer the aura of legitimacy on those in need of same, while providing income for the executives at group headquarters and hoodwinking the public. The group is frequently a "non-profit", adding more prestige. The legitimacy is conferred by letting the novice fork over not-insubstantial funds, take a few tests, and eventually get the right to put letters after his or her name, provided he stays a dues-paying member of the group. The orientation of the group can be fundamental, technical, quantitative, retirement planning or risk aversion.

My personal observation is that some market-oriented groups are worthy, and those which do not offer the paid initials are the best.



 A bittersweet moment in Ty Cobb's life reportedly came in the late 1940s when he and sportswriter Grantland Rice were returning from the Masters golf tournament. Stopping at a Greenville, South Carolina liquor store, Cobb noticed that the man behind the counter was "Shoeless" Joe Jackson, who had been banned from baseball almost 30 years earlier following the Black Sox Scandal. But Jackson did not appear to recognize him, and finally Cobb asked, "Don't you know me, Joe?" "Sure I know you, Ty," replied Jackson, "but I wasn't sure you wanted to know me. A lot of them don't."

Stefan Jovanovich adds: 

Given that Jackson remained a respected figure in the community, that the liquor store was owned by Jackson and his wife, and that his name was above the door, the story could be one of Grantland Rice's maudlin inventions. For the people of his home town of Greenville, SC, Jackson was always a figure of respect.

The site has a link to the PDF of Furman Bisher's interview with Jackson — the only one he ever gave. Eliot Asinof's book (the one John Sayles relied on for Eight Men Out) is a very large pile of crap that completely ignores Bisher's interview and Jackson's own grand jury testimony. If Jackson had in fact been guilty, it is hardly likely he would have prevailed in his civil suit against Comiskey for his pay for the 1920 and 1921 seasons.

Apologies to all — this subject always gets my dander up. During the Series, Jackson had 12 hits (a Series record) and a .375 batting average — the best record for a player on either team. He had no errors and threw out a runner at the plate. The principal "proof" against him was that the Reds had hit a number of triples to left field (where Jackson played) because Jackson deliberately dogged it in running the balls down. None of the contemporary newspaper accounts mentions ANY triples being hit to left field by the Reds. Once again, the lies run round the world while the truth is still putting its boots on.

Thanks, Bill, for bringing up one of the 10 greatest ball players of all time.

