We have an algorithm that we value greatly. I have written about it in this space and have produced a white paper on it. It uses macroeconomic data and has a record over the past 25 years of generating a 13+ percent compound annual rate of return with a 17+ percent maximum drawdown. The SPY's numbers are 9% and 55%, respectively. Clearly the positive returns come from dodging the drawdowns; there is no beta. BTW, the 75-year history is also very good, the only suffering coming in the 1987 selloff.

At this time the algo is very close to going bearish. It has not signaled bearish yet, but there is a definite possibility. I would not exit long equities without that signal.

The problem is that the macroeconomic data (weekly) is reflecting the effects of two hurricanes. It is perfectly understandable that such data would mirror those unfortunate events. The circumstances clearly are different this time; when have we had two disastrous storms back to back? Because the data is macroeconomic, it is not a flexion fakeout. In fact the technical indicators all point higher. Admittedly we would like to have more information, but that's not forthcoming. An interesting and frustrating problem. At least, the signal has not yet been given.

Rocky Humbert writes: 

I believe the market's reaction tomorrow to American Airlines' post-close news this evening may be generally predictive. American guided earnings lower because of the hurricane effects and also because of fuel costs. If Mr. Market doesn't blink, then expect a slew of companies to use the hurricanes as a penalty-free way to guide earnings lower. That is, the teflon market just got a fresh coat of teflon… from the hurricanes.



What is the composition of the rainwater dumped by the storm? The eventual source is the ocean, but did it get into the rainwater by evaporation (in which case it's "fresh water"), or was it simply sucked up into the clouds? If the latter, then it must contain significant salt, and therefore be detrimental to crops.

Stefan Jovanovich answers: 

The rain is fresh water; Japan gets half its annual rainfall from typhoons. The salt water comes from storm surges, basically high tides aided by sustained onshore wind, but those are not the source of the flooding. The updrafts in typhoons are so destructive because they push the clouds higher and, when the storm comes against structures, create pressure differentials that can literally blow buildings apart from the inside. That is why, even though it is counter-intuitive, you have to have air vents that can be left open so that the pressures inside and out can equalize. The only "sucking up" of actual sea water is the wave action, but that is caused by the rotational windspeeds, not the updrafts.

As bad as Harvey may seem, Hato's effects will probably be even more damaging.



If you plot daily range versus daily volume for the S&P over a long time interval you get the following graph. I have included straight lines illustrating that 2 distributions (relationships) are apparent.
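A hedged sketch of what such a plot involves, with synthetic data standing in for actual S&P figures; the two-regime structure and every parameter here are assumptions for illustration only:

```python
import random

random.seed(42)  # deterministic synthetic data

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Two hypothetical regimes: daily range responds to volume with
# different slopes -- the "two distributions" of the text.
vol_a = [random.uniform(1, 5) for _ in range(200)]
rng_a = [0.3 * v + random.gauss(0, 0.1) for v in vol_a]
vol_b = [random.uniform(1, 5) for _ in range(200)]
rng_b = [0.9 * v + random.gauss(0, 0.1) for v in vol_b]

slope_a = ols_slope(vol_a, rng_a)
slope_b = ols_slope(vol_b, rng_b)
print(f"regime A slope ~ {slope_a:.2f}, regime B slope ~ {slope_b:.2f}")
```

Fitting a separate line to each cluster, as in the graph described, recovers two distinct range/volume relationships.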

anonymous writes: 

Bill: Excellent visualization! This double hump result is surprising. Vic's random walk explanation was elegant and intuitive.

How does one intuitively explain the two humps? The most intuitive explanation would be a regime change of some sort — one that primarily affects the measured volume.

Regime changes might be changes in market structure (e.g. HFT, commission-rate changes, plus-tick shorting rule changes, the growth of ETFs, the way exchanges calculate volume including dark pools, etc.). The commonality of these regime changes is that there is a before-and-after, so the second hump may be more or less pronounced after a given date. If one were to make this scatter plot for each year and turn the results into a moving slide show, the result might look very different, and suggest some interesting avenues for further research.



In our shop we consider ourselves "data monkeys" rather than quants, hoping that the disrespect of the moniker will limit wannabees. But if it looks like a duck and walks like a duck…

The problem of ever changing cycles/ figuring out the current regime/ the Church of What's Working Now is solved by most in a brutal fashion rather than a subtle one. Suppose you drive an old car from sea level to say 12,000 feet and it struggles. You could lift up the hood and tear the engine apart. You could also make an air-intake adjustment. Both methods work.

We data monkeys believe that the only things that count with regard to markets are sentiment and momentum. That is, it's all behavioral, and it's reasonably efficient. Sure we like to comment on fundamentals, but the fundamentals to us are only important because they influence the behavioral. When a market has been moving in a certain regime, sooner or later a market Watcher gets the inkling that a change is afoot. His action or inaction will disseminate exponentially to others, and then the regime really will change. The key to keeping up with this is to watch what the Watchers are watching.

To us this means that if you are monitoring data with human input (e.g. price) you had best be making your inputs adapt to what they are watching (i.e. usually the length of past data) and it should have an exponential component to it, rather than linear because human knowledge moves exponentially. If the in-crowd has switched to watching the last week and you are watching the last two months, a change will occur before you become aware. Non-human influenced data (e.g. most fundamentals) can be fixed and linear.
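As a toy illustration of the exponential-versus-linear point (the functions and parameters are mine, not the author's actual inputs): an exponentially weighted average shifts with new data sooner than an equal-weighted window of the same nominal length, so it tracks a crowd that has shortened its horizon.

```python
def sma(prices, n):
    """Equal-weight (linear) average of the last n prices."""
    return sum(prices[-n:]) / n

def ema(prices, n):
    """Exponential average with the conventional smoothing 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    e = prices[0]
    for p in prices[1:]:
        e = alpha * p + (1 - alpha) * e
    return e

# A series flat at 100 for 40 days that jumps to 110 for the last 5 days:
prices = [100.0] * 40 + [110.0] * 5

print(sma(prices, 20))   # equal weights dilute the recent jump
print(ema(prices, 20))   # exponential weights respond faster
```

With these inputs the 20-day SMA sits at 102.5 while the EMA has already moved closer to the new level, the behavior the paragraph above argues for.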

Rocky Humbert writes: 

Roy Niederhoffer wrote a prescient piece 3 years ago. It's worth re-reading, as I think he makes some excellent observations: "CTAs Could Face Historic Challenges From Rising Rates"

anonymous writes: 

Roy Niederhoffer's piece points out that the structure of interest rate futures markets has favored those who didn't expect rates to rise. A large portion of the investors around these markets made money because futures were biased to price in higher rates than eventually occurred. Those who bet that rates would rise lost, and vice versa. This bias has run for a long time, back to the rate peak of 1980/82, and certain types of investors earned better-than-market returns because of it.

The source of this has been the Fed: by providing excess liquidity, and by pronouncing that low rates would continue, they let carry trades transmit low Fed funds rates to other instruments. These low rates provide underpinnings for other business investment, and for increasing stock multiples as the only game in town.

What's next?



One of the truest axioms of trading is that the thing you worry about least is the thing that will bite you in the rear. As others have noted, expectations are extremely positive now and few are worried about the downside. But whose expectations?

Something we have written about previously is the length of historical data being watched closely by professional traders, particularly when juxtaposed with that being watched by those who sit in the bleachers. The best bull moves occur when the pros are looking long term and the amateurs are nervous nellies. Right now we have the opposite. With tonight's close we see the amateurs being complacent; they are looking back at what has happened since Election Day. The pros meanwhile are monitoring prices in a 4-day window, a most tenuous stance.

Stefan Jovanovich writes: 

One of my dubious theories is that the internal correlations we all see in "the market" are largely a product of the New York banks becoming the clearing house for the nation and converting that dominance into the "need" for official central banking. The data from the 19th century, which is limited enough to be within my meager mathematical capacity, strongly suggests that the business cycle was much more a matter of the fluctuations of particular businesses than of the movement of the "economy" as a whole. Weyerhaeuser's fortunes and Swift's were not on the same cycle. The movements of "Timber" and "Pork" were largely independent.

I wonder if that is becoming the case once again. Optimism may be the general news, but the prices of retail companies, particularly those in the clothing business, very much fit the opposite of Bill's description of the general mood. The general assumption is that everyone will lose their business to Amazon.

Russ Sears writes: 

"One of the truest axioms of trading is that the thing you worry about least is the thing that will bite you in the rear."

I call this the fundamental law of risk management: the risks you ignore or discount incorrectly are the risks you overload your portfolio with, thinking you have found the "key to Rebecca"/free lunch, or at least optimized a risk metric such as the Sharpe ratio. This is what happened to the modelers of RMBS, who unknowingly overloaded on model risk.

Alston Mabry writes: 

I have often thought (but been unable to effectively implement) that if you could determine what factors the market is not paying attention to, you could place some profitable bets or at least put on some good hedges.

Which leads to a non-quantifiable definition of a bubble as a big move up that continues even after a critical mass of players have become aware of the fatal risks - everybody knows they're playing musical chairs, but it's too profitable to stop.



 The December Jobs data is neither encouraging nor exciting. Admittedly there is considerable hope and some announcements of future hiring, but of course no change is yet visible. I will post a chart based on payroll taxes this week before Friday.

One particular concern for future jobs should be the minimum wage hikes. It is rational to expect that higher minimum wage rates in some locales will stifle employment increases in those locations while neighboring areas experience growth. The good side of these rules is that each changed location will in effect become an economic Petri dish, so we finally get to see unequivocal evidence on the matter. Local experiments that underperform are better than a failed national experiment.

It is also possible that reductions in regulations (among other changes) will create such a successful business climate that demand for workers will render minimum wage laws moot.



An anecdotal observation:

Recently there has been a greater-than-one-standard-deviation rise in the level of open interest in index put options. Historically this seems to coincide with declining equity prices rather than rising ones.
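One hedged way to operationalize such an observation (the window length and the series itself are invented for illustration, not the author's data) is a rolling z-score of the change in open interest:

```python
import statistics

def zscore_last(series, window=20):
    """Z-score of the latest one-period change versus the trailing
    window of changes -- flags a rise that 'stands out' historically."""
    changes = [b - a for a, b in zip(series, series[1:])]
    recent = changes[-window:]
    mu = statistics.mean(recent)
    sd = statistics.stdev(recent)
    return (changes[-1] - mu) / sd

# Synthetic put open interest: a steady drift with a jump at the end.
oi = [1_000_000 + i * 500 for i in range(30)] + [1_060_000]
print(zscore_last(oi) > 1.0)   # True: the last rise exceeds one stdev
```

A student project, as suggested, might run this against actual index put open interest and tabulate subsequent returns.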

If you know any students looking for a possible project, pass it on.



When the economy takes a turn for the worse, employment declines, right? Well, not all employment. Specifically, part-time employment tends to rise aggressively during economic downturns, somewhat concurrently with full-time employment declining. Because of that, one can play off the two types of employment and get decent broad brush investment timing decisions. The purpose here is to provide a general guideline to the "average Joe investor" (admittedly, a conundrum) to tell him when to be in and out of equities.

Quite simply, when the growth rate (annual rate-of-change) in part-time employment exceeds that of full-time employment, exit equities and only return when those numbers reverse. Doing so will enable an investor to avoid gut-wrenching declines. And of course the most valuable key to increasing wealth is to avoid getting behind.
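The rule just described can be sketched as follows; the employment series are synthetic placeholders rather than BLS data, and the 12-month lookback is an assumption:

```python
def annual_growth(series, periods_per_year=12):
    """Trailing year-over-year rate of change of a monthly series."""
    return series[-1] / series[-1 - periods_per_year] - 1.0

def in_equities(full_time, part_time):
    """True while full-time growth still exceeds part-time growth."""
    return annual_growth(full_time) >= annual_growth(part_time)

# Expansion: full-time rising ~2%/yr, part-time flat -> stay invested.
ft = [100_000 * (1.02 ** (m / 12)) for m in range(25)]
pt = [25_000] * 25
print(in_equities(ft, pt))    # True

# Downturn: full-time flat, part-time rising -> exit equities.
ft2 = [100_000] * 25
pt2 = [25_000 * (1.04 ** (m / 12)) for m in range(25)]
print(in_equities(ft2, pt2))  # False
```

As the text says, this is a broad-brush guideline, not a trading plan.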

The following chart illustrates what one's investment posture would have been. The employment difference should track the economy, something the stock market rarely agrees with; in this case, however, it seems to do a good job with the market as well. Again, this is not a trading plan, merely an illustration of what is possible.

[Charts: Full-time employed; Part-time employed]



 When a virtual tsunami hits, the markets tend to thrash around for a time until they figure it out. If you individually tend to be prescient, go with your instincts. But if your opinions first need some direction from the markets, how long do you wait? Put another way, how many time periods does it take before the detritus left by the tsunami clears? There are several guidelines.

Is the tsunami a one-off event, or is it a rolling one? For example, we all thought Brexit was one-off, but now, with court challenges, it appears to have legs. One-off events clear faster, obviously, since rolling ones carry the possibility of reversal or modification.

With more "professionally-traded" markets, detritus clears faster. Those who take positions in forex, futures and options (particularly the writers) do not have the luxury of time. Or more to the point, they do not have the margin money. If you look at market-derived information you must be conscious of intra-day movement, particularly that from the two most important times of the day.

The more amateur the market, the more time it takes before you get a clear picture. In this case of course we are talking bonds and stocks, but particularly stocks. But even though those guys tend to act slowly, the markets clear remarkably fast. How long? Typically between 4 and 8 trading days. So we are just now getting there.

Obviously the lesson for the spec is to watch the leveraged markets.

Now, a sidebar question for the "technicians": what do you do with the information (i.e. prices) that occurred during the tsunami? One of our favorite gurus, a CalTech AI expert we call BikerBoy says, "You never, ever throw away information."



 During the Apollo 11 flight, landing and return, the entire planet was absorbed. This election had that same feel. It was that important.

Of course the difference is obvious: In 1969 there were only winners, and of course America was truly great. At this time there are winners and losers. If America does in fact become great again, the only losers will be those whose political bent does not allow them to accept it.

What was really cruel was the fact that the pollsters, Hollywood, etc. (which were tools of the Ds – willingly or inadvertently) conned the Ds that the race was theirs. Most people can accept losses as part of the game. But the Ds were led to believe they could not lose, and they are in shock like the people of Mudville who never believed The Mighty Casey could strike out.



I agree that the BLS number will be bullish tomorrow [2016/11/04]. Why was there recently an article in the NYT singing the praises of the BLS's impartiality? It seems suspicious to me; I consider the BLS nothing but a bunch of cronies.

The payroll taxes are shown. Not really bullish. However the recent rise covers the exact period that will be sampled in the Jobs Report. The subsequent downturn is not in that sample.

Note: this view of the payroll taxes views the [consequences for the] employment scenario. If we targeted the impact on GDP we would get a more bullish picture.



The idea of systematic trading was not generally accepted 10-15 years ago. Markets were mostly viewed as efficient at that point, or mostly efficient, with any excess returns just compensation for risk taking, which is still efficiency in a loose definition. Today, trading systems are everywhere. Systems are now called "indexes" or "smart beta"; different strategies are now called "factors". Is the 2/20 fee structure going the way of the CD, the dodo, and the floor broker? If outperformance can be replicated by some "factors", who needs an expensive trader/manager?

Peter Pinkhaven writes: 

"Strategies that have reasonable Sharpe ratios are usually cyclical" - Asness

I believe AQR was one of the pioneers of recent systematic factor investing.

Bill Rafter writes: 

The automatic trading systems make the markets more efficient and more liquid. They are not predictive but extremely efficient in their reactivity. What the spec should do is view that as an advantage rather than a problem. It would only be a problem if he were a scalper. There is a very good solution to this for the spec (professional or near-pro), and it revolves around knowing which games to play and which to pass.

There will certainly be some professional specs who outperform, and they will be worth the fee. However that fee may continue to be 2&20 on the basis of value, or it may work lower for any number of reasons. Full disclosure: "someone we know" charges 1&10, as they want to keep clients forever rather than have the fee level be an issue in the future.



One of the off-the-radar things we watch is the length of time various subsets of options are held. The flip side of that is the turnover rate of those options. Several years ago I put out a white paper on the concept, and about a year ago there was a small WSJ piece. There is evidence; it's not anecdotal.

The general gist is that those who are more conscious of attuning their options positions (i.e. greater turnover) tend to be correct. Conversely, those who are complacent tend to pay for their complacency. Whoever is longest in a position tends to be "wrongest". As of this evening it is the holders of call equity options who are the more complacent.

One beautiful thing about this indicator is that it appears to measure portfolio shifts rather than mere trading shifts. That is, there isn't much fluttering back and forth.
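A rough, hypothetical way to proxy "length of time held" from public data (the figures and the ratio itself are my illustrative assumptions, not the author's white-paper method): divide open interest by daily volume to estimate the average holding period in days.

```python
def implied_holding_days(open_interest, daily_volume):
    """Approximate days the average contract is held: how many days of
    trading it would take to turn over the entire open interest."""
    return open_interest / daily_volume

# Invented figures for equity index calls vs. puts.
calls = implied_holding_days(open_interest=2_400_000, daily_volume=300_000)
puts = implied_holding_days(open_interest=1_800_000, daily_volume=600_000)
print(calls, puts)   # 8.0 3.0
print(calls > puts)  # True: call holders are the slower-turning, complacent side
```

In the text's terms, the cohort with the longer implied holding period is the more complacent, and historically tends to be the "wrongest".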

Disclosure: we have been out of our longs for about 2 months (on the strength of other indicators) and we don't ever short equities.



Can anyone point me to research regarding Futures settlement price vs closing price and subsequent returns in a volatile market environment? I would like to see if settlement is more important (due to margin) during periods of high volatility which I foresee over the next few weeks. I'll try some back of the envelope tests over the weekend.

Bill Rafter writes: 

We have tested many possible prices for importance with regard to generating signals (e.g. momentum, sentiment, etc.). In reality the only price you can guarantee for testing execution in retrospect is the settlement (subject to slippage), followed by the opening (greater slippage). But for signal-generating capability we tested highs, lows, midranges, etc. We also tested subsets, such as the ability of using lows to indicate up/down, vs. highs to indicate up/down. Nothing beats the settlement. Specific to your question, if the settlement differs from the last sale, take the settlement. "There's a reason why it is the settlement."

With regard to stocks we also tried VWAP. Same conclusion.

We also tested to see if the futures settlement influenced cash, or the opposite. In virtually all cases the futures dictated to cash. That conclusion suggests that cash can be manipulated by some clever futures transactions, which of course has happened. Certain markets were famous for it (eggs comes to mind). Anyone who has ever manipulated a market will tell you that you wait until the end of the day and pick your spots (i.e. low liquidity).

If however you are doing some "fuzzy" work, you might explore using something other than the settlement or close. That is, suppose you just needed a qualification as to whether a market was "up" or "down", without regard to actual changes. Consider the following: "The market was up all day, but closed slightly lower." Was it up or down and how do you code for that? This is not esoteric BS; it makes a difference.
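One hypothetical coding scheme for the ambiguous case (the three-valued output and the use of the day's average price are my assumptions, not a claim about how any particular shop codes it):

```python
def day_code(prev_close, day_avg, close):
    """Classify a day as +1 (up), -1 (down), or 0 (mixed).

    Uses both the close-to-close change and whether the day's average
    price sat above the prior close, rather than forcing a binary.
    """
    close_up = close > prev_close
    spent_up = day_avg > prev_close   # traded mostly above prior close
    if close_up and spent_up:
        return 1
    if not close_up and not spent_up:
        return -1
    return 0  # mixed: e.g. up all day but closed slightly lower

print(day_code(100.0, 102.0, 99.5))   # 0 -- the ambiguous case in question
print(day_code(100.0, 101.0, 101.5))  # 1
print(day_code(100.0, 99.0, 98.0))    # -1
```

Whether the mixed days deserve their own category, or a fractional weight, is exactly the kind of thing that should be tested rather than assumed.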

The above is the benefit of our own testing. I am not aware of any academic work in this area. It seems too mundane a topic. A cash v. futures settlement thesis might be interesting but the conclusion would be anti-flexion and we know how that would be perceived.

Larry Williams writes: 

Hold on…

In reality, data providers have something they call the closing price. That's what we get when the market closes, and it stays in our data until about an hour and a half, sometimes two hours, after the markets reopen in the afternoon, when they change the closing price to the settlement price.

You have to be very careful because there can be a wide difference between the closing price and the settlement price. Unfortunately we don't have the settlement price until after the market has reopened, by which time we have already begun trading. So most trading systems are developed using the official settlement price, because that's what is in the historic data; but for a signal tonight, after the market closes, we don't get the settlement price until after trading has begun.

Whoever said the life of a trader is an easy one did not look into closing prices.



The normal state of affairs is that 1-month expected volatility (i.e. VIX) is lower than 3-month expected volatility. In many ways this is similar to short term interest rates being lower than longer rates. The logic is that a lot more grief (random or otherwise) can happen over the long term and the market prices that in.

Let us suppose you believe that expected volatility is forward looking (the standard belief). Should you happen to find yourself in the (less common) situation where the market has priced 1-month expected volatility higher than the 3-month, the logical conclusion is that the market places a higher risk on the near term. Since higher levels of expected volatility tend to be bearish, your subsequent conclusion is that the market will get its butt handed to it fairly soon.

Hey, that means you could simply take the difference of the two expected volatilities. Sounds great, but the levels of 1-month and 3-month expected volatility are not directly comparable. To make them comparable the geek/data monkey has to normalize them over the most representative period. To complicate things further, that representative period is never static, but variable. However, all of the above are minor items that can be dealt with.
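A minimal sketch of the normalization step, assuming a z-score over a trailing window stands in for "normalize them over the most representative period" (the window length and both vol series are invented for illustration):

```python
import statistics

def zscore_latest(series, window):
    """Z-score of the latest value against its trailing window."""
    recent = series[-window:]
    mu = statistics.mean(recent)
    sd = statistics.stdev(recent)
    return (series[-1] - mu) / sd

def normalized_spread(vol_1m, vol_3m, window=60):
    """Positive when near-term vol is rich relative to its own history
    versus the 3-month series -- the inverted, 'risk soon' state."""
    return zscore_latest(vol_1m, window) - zscore_latest(vol_3m, window)

# Synthetic example: 1-month vol spiking while 3-month barely moves.
v1 = [12.0 + (i % 5) for i in range(59)] + [25.0]
v3 = [14.0 + (i % 5) for i in range(59)] + [16.0]
print(normalized_spread(v1, v3) > 1.0)   # True: near-term risk priced higher
```

In practice, as the text notes, the representative window would itself have to be adaptive rather than the fixed 60 periods assumed here.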

Now the question is: Why am I telling you this NOW? Go figure.

N.B. I am deliberately choosing not to show this in chart form.



A cap-weighted or quality-weighted look at U.S. employment



 "Why can’t we see that we’re living in a golden age?: If you look at all the data, it’s clear there’s never been a better time to be alive" by Johan Norberg

Jeff Watson writes: 

There's huge money in doom and gloom.

Ralph Vince muses: 

A person should live each day of his life with the same mindset, the very same attitude of savor and gratitude for every minor thing, as if he got out of jail that morning.

Or, as the Old Frenchman himself would say, "If you have the same address as a thousand other guys, you don't have a lot going on."

Alston Mabry writes: 

Pessimism is a strategy. People who have learned, usually from childhood, that they cannot act on their most important impulses use pessimism as a way to devalue what they deeply believe they are not allowed to want.

Bill Rafter adds: 

Just a minute…

As we all know from trading, if you want to increase your profitability over time, the most effective strategy is to limit losses. Possibly related to this is the result of several studies attesting that fear is a greater motivator than greed, by a factor of 3 to 1. Furthermore, we all look at prices and know, both instinctively and historically, that those prices will not be constant over time. They may be higher or lower, but not the same. Thus, pessimism is historically justified, profit-saving and possibly life-saving.

But to trade these markets for profit, one also has to be optimistic, often excessively so in light of bad experiences. You need both.

Jim Sogi writes: 

Jeff is right. Television causes pessimism. Don't watch TV. I haven't had TV for 47 years. It's not only the content. It does something to the brain. It's harmful. 

Stefanie Harvey writes:

Exactly. Television, especially US news television, is the poster child for confirmation bias. 

anonymous writes: 

Many good reasons for worry exist. If you're not worried, you're not paying attention. All of the worries stem from something nobody talks about in polite company: the population explosion. In 1804, the world's population was 1 billion. In 2012, it topped 7 billion. It's projected to reach 9 billion in 2042 — within my son's lifetime.

True, Paul Ehrlich got it wrong when he said we'd all starve by the end of the 1970s, but go back and read his book. Then reflect on how different life is now.

All those people are unsettling policymakers, with these results (and they are what's secretly worrying us):

Unspoken Fear #1: War. Today's empire builders are intent on grabbing resources; nuclear weapons are in too many hands.

– China: rich and populous; thanks to the free-trade break we gave them in the 1970s, they've created a war machine and are ready to go for our jugular.

– Islam: implacable and populous; we have spent trillions trying to establish a decent government, and the area keeps morphing into an empire that despises us and all we stand for; they want their old empire back, be it from Baghdad or Istanbul.

– North Korea: Our strategy is, "Let's all ignore that man in the corner, and maybe he'll quiet down."

– Russia: ruthless, and intent on restoring the empire of Rus.

Unspoken Fear #2: Dystopia.

– When people don't have honest work, nothing good can come of it. In America alone, 94 million people are out of the work force. We're not being honest about the impact of robots and artificial intelligence. It's this fear that gave Trump the nomination, not that he knows what to do with it.

Unspoken Fear #3: Central government that keeps growing.

– Confronted by the population explosion, the elites have decided that the masses must be controlled and pacified. This political philosophy shows up in the fear of liability for anything fun, in subsidies, in central banking. We see sledgehammer policy-making, from FDR to Obamacare.

– And the educated love it! Calls for authoritarianism are the norm among socialist youth, aging hipsters, authors and "educators" at all levels.

These memes and unspoken but rational fears show up in pop music, with its ugly pounding overamplified brutalist mindlessness; in contemporary academic music, with its screams and jaggedness; in art, with its sneering cynicism; in architecture, with its boxy Stalinist aesthetics.

It shows up in the piggishness of the powerful, with Hillary Clinton the prime example. The rich expect multiple homes in idyllic spots, bodyguards, private jets; the poor suffer in overbuilt, crowded, noisy, polluted cities.

I happen to be an optimist, and always see the glass as half-full. Please note I am not prescribing anything; for one thing, it's gone too far. Nor do I think that going to Mars will help.

Russ Sears writes: 

First, human super-cooperation is built on trust. For a group to evolve, a high percentage of it must be trustworthy for the compounding effect of the prisoner's dilemma to work. As the group grows too big, it becomes too easy for an individual to feign cooperation. Hence the need for creative destruction, and for power being placed in the smallest group necessary. It has always been easy to look at the big groups, see the corruption, and assume that they control the long-term future. But the truth is they are dinosaurs, and will lose out to the small but wise groups and businesses that still operate at the level of individual human trust and are quite hidden from the spotlight because of their size. Yet these, time and time again, raise the tide for all.

Second, personally, it is too easy to dwell on the jerks who fall into everyone's life and can simply ruin it. They can ruin many nights, even if as a rule I try to avoid them. A single jerk can derail my perspective, keep me up at night and easily crush my spirits if I let him. I have found that the best antidote, when I start thinking of the jerks, is to turn the tables and think instead of those who have blessed everyone's life with love, grace and patience. I think of my Dad's second wife, caring for a dementia patient at home for 13 years and weeping tears of love at his passing; the coach who helped me; the friend who is always there; and so on. I try not to let the jerks own my mind rather than the loving, lovely (my spouse), good and virtuous people in my life. This also goes for the newsmakers, the politicians, and those on the dole.



 Should one follow a purely Quant approach, as seems increasingly popular today, or should one on the contrary combine quantitative and qualitative ideas for best results in trading? 

Intuitively, mixing qualitative judgment with quantitative signals matches pension funds' desire to blame someone if something goes wrong, so it should command higher fees and more assets. Less cynically, qualitative judgment is harder to replicate. Theoretically. In reality I find that most people's qualitative judgment is just a randomly executed quant system.

For similar reasons I can imagine purely quantitative processes performing better when the manager's sole mandate is to define methodologies for turning systems on and subsequently turning them off. But it's hard to ignore the effect of AQR on fees, and industry events like Cohen plowing into Quantopian, as both worsening pricing and increasing competition in the quant space.

I'm trying to figure out what method is the best to pursue. Should I be reading the earnings transcripts, talking to management, using the software companies make and ad platforms of tech companies, doing my best to make a robust qualitative view? Or should I be improving my use of machine learning models and getting more proprietary data sets?

More simply, do the next 20 years in asset management hold a stronger bid for the qualitative, the quantitative, or the hybrid?

I would be most grateful for your wisdom.

Bill Rafter writes: 

Let's say you have a quant "system" that you have tested and it has a positive expected value that is of interest. Adding some qualitative/anecdotal tinkering on top of your tested program carries a real risk of lowering your expected value (assuming you have no ability to test your tinkering). So why tinker? Well, it's human nature to do so, and by tinkering you might find something better. Okay, then put 90 percent of the capital into the program with the tested positive expected value and experiment with the other 10 percent, or just hold that capital back for when you positively test another system.

BTW you might want to read Ralph's thoughts on how much to bet.

The tougher part is coming up with the "system". Obviously test everything, especially your assumptions. From reading your note I see that you might have some untested assumptions. For example do you think earnings are important, something which I myself do not know? I'm not saying they are unimportant, just that I don't know. For example we do a lot of macroeconomic forecasting, but we never trade based on it because we have learned that the market does what it wants to do, and not necessarily what the economic numbers suggest. And also we know that a lot of the macro releases are fudged.

One thing you should give serious consideration to is which time venue you will target. Unless you have the right infrastructure it will not be high frequency trading. So will it be days, weeks, or much longer? That will dictate the type of approach you pursue and your research. If it will be very long term, then you have to get deep into company research.

The people who care about earnings tend to look at the much longer time frame. Meaning that your capital is exposed for a long time during which lots of randomness can work their evil ways. [The factors that we are most capable of dealing with are momentum and sentiment, and consequently our time frame of interest is shorter, say 4 days to 6 months.] So identify your strengths and go with them, particularly if those strengths differ from that of the crowd. If you don't know what your strengths are, be prepared to put in a lot of time on research. Minimize your trading during that period otherwise you will not have seed capital to trade when you acquire the skills. You know that, but it bears repeating.

Be prepared for the counterintuitive. For example, when we first acquired the computer skills to do the research we did "test 1". Test 1 was "if you know the market is going to go up, which stocks do you buy?" We assumed it would be the high beta stocks, as they would go up more. But they didn't. Turns out that beta is backward-looking and going forward the high-beta moniker just means higher volatility, which is a negative. So test everything and assume nothing.
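A toy simulation (not the authors' actual test) illustrates why backward-looking beta disappoints: rank stocks by a noisy historical beta estimate, and the top decile's true forward beta regresses toward the mean while its extra volatility remains. All parameters below are made up for illustration.

```python
import random
random.seed(2)

# Illustrative universe: true betas centered on 1.0, and a backward-
# looking beta estimate contaminated by estimation noise.
n_stocks = 2000
true_beta = [random.gauss(1.0, 0.3) for _ in range(n_stocks)]
measured_beta = [b + random.gauss(0.0, 0.3) for b in true_beta]

# Pick the "high beta" decile by the historical (measured) estimate.
ranked = sorted(range(n_stocks), key=lambda i: measured_beta[i], reverse=True)
top = ranked[: n_stocks // 10]

avg_measured = sum(measured_beta[i] for i in top) / len(top)
avg_true = sum(true_beta[i] for i in top) / len(top)

# The decile was partly selected on noise, so its forward (true) beta
# is well below what the historical estimate promised.
print(f"top decile, historical beta: {avg_measured:.2f}")
print(f"top decile, forward beta:    {avg_true:.2f}")
```

The gap between the two printed numbers is the selection-on-noise effect: the "high beta" label overstates how much extra upside the group will deliver, while the extra volatility is real.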



 "Captain" Vic in Vinalhaven Maine, looking over the harbor and thinking about analogies between boats and trading…

Bill Rafter writes: 

Observing boats can be very interesting because of their diversity. They are constantly being modified to fit circumstances. The phrase "different horses for different courses" holds very true for boats. It is indeed fair to say that the sea designs all boats, as the unsuccessful designs wind up at the bottom.

The diversity of design is evident in ugly commercial vessels, but it is also true of sailing vessels. Observe the different positions of the masts. The Swiss mathematician Euler won several prizes related to naval architecture, after finishing second in the first contest, about mast positioning. If you are lucky you will get to see a ship with the masts raked (tilted) sternwards, common with clipper ships, and also a Chinese junk with the mast raked forward.

Interesting also is the trade-off between speed and stability evidenced by the ratio of length to beam (width). The tipping point between the two seems to be a ratio of 6 to 1.

There's a lot to see.



 Book Review: "Who Needs the Fed?" by John Tamny 2016

What really attracted me to this book was the title, something I am in agreement with. I had not been aware of this author before reading a positive review in Forbes and the WSJ. Among other notables is a review from Andy Kessler, whom I have previously found to be objective, and of course a markets person.

First, in favor of the book: the author makes a very good case. Indeed it is safe to say that he finds nothing of value in the Fed's existence. Although a supply-sider himself, he criticizes the supply-siders as well. He is an adamant free-market advocate who favors no reserve requirements for banks and no FDIC. The Fed was originally created to provide liquidity to solvent banks, and has morphed into providing liquidity to insolvent institutions and even forcing solvent ones to take its money. The author favors creative destruction, whereas the Fed is a major player in central planning and the redistribution of assets to the "weak". "Why keep around that which intervenes in the natural workings of the markets? Didn't we learn in the twentieth century (often through mass murder and starvation) just how dangerous it is to empower central planners?"

The flip side: the tome is 180 pages whose points could have been successfully made in 45. There is so much repetition that it occurred to me the book could be an anthology of previous articles. Why else would the author repeat the exact same text over and over? Does he assume the reader has Alzheimer's? In each of the 21 chapters he defines his meaning of "credit". He even repeats the exact quotes from Hazlitt. Some sentences are too long to follow on a single reading. He also frequently drops articles (e.g. "the"), probably because he thinks it sounds cool. It doesn't.

The book has no charts, graphs, tables or formulae. Undoubtedly someone told him that those things discourage readers. Quite the opposite: they can be used to illustrate a point. One chapter is devoted to how the price of oil responds solely to the price of the dollar with respect to gold. Being a "data monkey" I have the ability to check that out, and when I did I learned why there was no such chart. Yes, there is a relationship at times, but nothing to be relied upon.

His concept of real estate is that it solely constitutes consumption by households, not investment. Interestingly my best investment ever was when I acquired and improved a vacant lot 15 years ago for X dollars. Without any subsequent improvement that property currently produces 1.25 X each year in profits. If I were to characterize that as something other than an investment I would possibly call it a winning lottery ticket. I wish I had more of those.

My real reason for acquiring the book is that with a title like that, the author must have some idea as to what non-Fed variables might be of interest. That is, I agree that the Fed is detrimental, so if I had previously been a "Fedwatcher", what do I watch now? Fortunately I found one (just one) that might prove to be valuable.

If you need a guidebook on being skeptical of the Fed, get the book. His examples are great: Taylor Swift, Jim Harbaugh, Uber, etc.



A personal observation:

When a market has had a successful run and is ready to roll a seven there are several scenarios in which the turnaround occurs. A very interesting one is where the market in question does not initially falter and give a sell signal. Rather, what happens is that competitors or alternatives to that market start to look interesting first. It is almost as though those in control of portfolios start to move their cash into the alternatives before selling the primary market.

For those of you who play these markets by the numbers I suggest you check your signals for bonds, gold and equities. Observe if you are getting buy signals in bonds and gold, but not yet sell signals in equities.

This does not have to be a big move, just a portfolio adjustment.



Say that you have a yearly goal of 40% and you achieved it in 7 months, or that you have a monthly goal of 10% and you achieved it in 11 days. Do you stop trading at this point? Or do you continue trading, thinking that luck is on your side at the moment? Or do you adjust your goal and continue trading with the new goal?

Cheers, Leo

Victor Niederhoffer writes: 

The market will sometimes go much below your goal, and to even things out you have to make as much as you can above your goal. Furthermore, the market doesn't care whether you've achieved your goal or not; it will always go its own way, and if you can make a profit on an expected future value basis, you should go for it. Luck is random, but skill will persist. Apparently you or a colleague has it. Don't throw it out.

Andrew Goodwin writes: 

Your answer may rest in the structure of your money management operation. If it is a hedge fund structure, then heed the following points made in a post on the subject. If you get behind, you must know how you will deal with the moral hazard. Since you are ahead greatly, your incentive is to take the money unless you know with some certainty that you cannot fall below a high watermark and will likely increase your gains.

1) The management fee, over time, usually does not generate enough income to operate and the profitable traders expect bonuses even when the overall fund loses.

2) The winning traders will leave to other firms or will start their own if there is no performance fee gathered to pay them.

3) If fund performance goes negative then high watermark provisions normally go into action. This can lead the manager to swing for the fences or simply close shop.

4) The wind down of the fund can deplete the investor assets and lead to general price markdowns of holdings especially if others had similar strategies and exposure.

5) The fleeing investors will enter into a new fund with a new high watermark and start the process over again.

Here is where the game gets interesting. The author suggests creating exotic option outcome provisions that he calls "Modified High Watermark."

These include A) Reset to zero under certain circumstances. B) Amortize the losses over a period so that the manager can still earn some incentive fee. C) Create a rolling period for the high watermark so that after a time the mark level drops.

His modified high watermark solutions might keep the manager from swinging when the performance fee looks too distant and might keep genuinely unlucky managers around until their skill manifests itself in due course.

Nigel Davies writes: 

There's a case for reducing leverage as one's account size increases so as to reduce the 'risk of ruin', and for some this might be done in a very systematic way. Another question is if there's a point at which one's financial goals have been achieved, especially if one's dreams lie elsewhere. 

Bill Rafter writes: 

You did not specify if your annual goal of 40 percent is based on analysis that suggests a 40 percent return is the mean or maximum. Let me assume that the 40 percent is the maximum annual gain you have ever achieved, if only as an academic exercise. Thus the 40 percent is your quitting point based on perfect knowledge of a particular system.

How frequently have you been calculating your forecasts (and, inherently, your position choices)? As was learned from the Cassandra Scenario, more-frequent forecasting is inherently profitable, even more so than some forms of perfect knowledge. So:

(1) If 40 percent is your mean annual gain, then continue to trade at the higher level. That is, if you started at 1000 and now have 1400, continue to trade the 1400. Obviously it would also be good to shorten your forecasting period.

(2) If 40 percent is your maximum expected gain, then pocket the 400 and start over trading with 1000. Shortening the forecasting period is not a given in this case.

Phil McDonnell adds: 

Let us assume the market has a normal distribution of returns and that the probability of making a 40% return or better at random is 15%. If you decide to take all profits at the 40% level, then your probability of a 40% gain will double to 30%. This result follows directly from the Reflection Principle.

The above assumes that your returns are random and implicitly assumes that you have no ability to predict the market. To the extent that you can predict then you should make your decision on your current outlook and not on any arbitrary price point like 40%.
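The doubling claim can be checked by simulation. Here is a sketch under idealized assumptions (a driftless Gaussian random walk with arbitrary, illustrative volatility): the chance of touching a level at some point during the year is about twice the chance of finishing at or above it.

```python
import random
random.seed(3)

def touch_vs_end(barrier=0.40, steps=252, step_vol=0.02, trials=20000):
    """Compare P(path touches barrier sometime) with P(path ends >= barrier)
    for a driftless Gaussian random walk of cumulative returns."""
    touched = ended = 0
    for _ in range(trials):
        level, hit = 0.0, False
        for _ in range(steps):
            level += random.gauss(0.0, step_vol)
            if level >= barrier:
                hit = True
        touched += hit
        ended += level >= barrier
    return touched / trials, ended / trials

p_touch, p_end = touch_vs_end()
print(f"P(touch 40% sometime) = {p_touch:.3f}")
print(f"2 x P(end >= 40%)     = {2 * p_end:.3f}")
```

The two printed numbers should come out close; monitoring the barrier only at discrete steps makes the touch probability run slightly below the exact continuous-time factor of two.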

Gibbons Burke comments: 

It seems to me that one should be disposed to let the market give you as much as it wants to give, without putting artificial limits on that phenomenon, but that practical limits should be enforced on how much lucre it can remove from your wallet. Is more return ever a bad thing, assuming that the distribution of returns is not serially correlated? As our gracious host has noted, the market has no idea how much money you have made or lost, so the idea of reversion to the mean on an equity curve makes no sense. It does make sense for market prices, which make repeated excursions up and down seeking the implicit underlying value of the thing (the ever-changing "mean" to which the market is always reverting).

So, setting a goal to achieve a 40% return seems a reasonable thing to do, but I submit that this goal should be accompanied by the qualifier "or more" and be willing to let a good thing continue.

Regarding the 'limiting losses' idea: in the Market Wizards interview with Jack Schwager, Paul Tudor Jones admitted to having risk-control circuit breakers in place so that if he ever lost more than x% in a month, he would shut down trading for the remainder of that month. Limiting and rationing losses in ways such as this seems like a reasonable discipline if one is going to set limits on how the market will affect your stake.

An old floor trader's trick I learned while reporting on the futures pits: if a trader enjoys a windfall gain on a trade and reaches a pre-figured goal (or more), he takes half the position off the table as a positive reward for being right and acting on that conviction. He leaves the rest of the position on to collect any further gain the market might want to provide, but raises the stop to break-even for the remaining position (not counting the profits already taken off the table) so that a winner does not then turn into a loss. If the stop gets hit, he still has half of a windfall gain in the bank. If the market continues in a favorable move and another windfall gain is realized, the process can be repeated.
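The trick can be written down as a tiny position-management rule; a sketch with hypothetical numbers and function names:

```python
def manage_windfall(entry, price, goal, size, banked=False):
    """On reaching the pre-figured goal: bank half the position and
    move the stop on the remainder to break-even (the entry price).
    Returns (remaining_size, stop, banked)."""
    stop = None
    if not banked and price >= entry * (1.0 + goal):
        size /= 2            # half off the table as the reward for being right
        stop = entry         # break-even stop: a winner cannot become a loss
        banked = True
    return size, stop, banked

# Hypothetical trade: bought at 100, goal +10%, price runs to 115.
size, stop, banked = manage_windfall(entry=100.0, price=115.0, goal=0.10, size=200)
print(size, stop, banked)   # 100.0 100.0 True
```

If the market keeps running, the same rule can be reapplied at the next pre-figured goal, which is the repetition the paragraph describes.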

This tactic has an anti-martingale character which some more bold traders might object to.

All these thoughts are mostly elaborations on the first two fundamental rules of trading: 1) let your winners ride, 2) cut losses.

Stefan Martinek comments: 

This loss-avoiding behavior was well researched by Paul Willman and others. It is observed in traders of all levels approaching a bonus target; cutting off is generally viewed as irrational, and Willman discusses how to adjust incentives to get a trader back to risk neutrality. Which reminds me of a more general but relevant quote from W. Eckhardt: "Since most small to moderate profits tend to vanish, the market teaches you to cash them in before they get away.

Since the market spends more time in consolidations than in trends, it teaches you to buy dips and sell rallies. Since the market trades through the same prices again and again and seems, if only you wait long enough, to return to prices it has visited before, it teaches you to hold on to bad trades. The market likes to lull you into the false security of high success rate techniques, which often lose disastrously in the long run.

The general idea is that what works most of the time is nearly the opposite of what works in the long run."



 Forgive the length, but I thought this was too good not to share:

Let's take their model, their parable, their most extreme case, and walk through it for a moment. It takes Frank Ramsey's basic model, in which savings equals investment equals capital growth, and extends it to a world in which capital can flow freely around the globe to wherever it earns the most interest.

If savings can flow across countries to wherever the interest rate is highest, and if people can borrow across countries without trouble (say, by mortgaging their home to a bank that borrows money from investors in Japan), then in the long run there's only one possible outcome: the most patient country owns everything. The most patient country owns all of the capital equipment in the world, all of the shares of stock, all of the government bonds, all of the mortgages, everything. What happens in all of the other countries? [the "Impatients"] Eventually they spend essentially all of their national income repaying debt to the most patient country. They literally mortgage their future through decades of high living, decades during which they borrow cheap money that is gladly lent by more patient countries.

…After years of enjoying a grand life of consumption, the average Impatient [country] eventually ends up spending its whole income on interest payments, forever.

Well then, who are the Patient countries? Those who lend and export. Who are the Impatient countries? Those who borrow to spend in the short term. Okay, that's definitional. But is there another way to identify the Patients/Impatients? It turns out that national average IQ defines them well. And here's the shocker: the U.S. has an average IQ of 98. The U.K.'s is 100. East Asia (i.e. China, Japan, South Korea, Singapore) has average IQs of 106. If we look, say, 25 years into the future, it's likely China's average IQ will have increased. What do you think will happen to the average IQ in America?

This is from "Hive Mind" an excellent book by economist Garett Jones of George Mason University.

anonymous writes: 

Mr. Jones ignores a few minor problems. The first is default; the second is that Ramsey's equation only works in a world where Marx and monetarists are the only people who keep the tally sticks. The patient people may think they own everything, but only until they discover that their debt claims are not going to be paid, that neither principal nor interest will be forthcoming. Then there is all that investment in apartment blocks and bullet trains. They certainly cost a great deal; by labor theories of value they should be an enormous accumulation of wealth, except there are no actual tenants who can afford rents for the apartments and no travelers who want tickets for the trains. The last and worst fallacy of aggregation is the ranking of average IQs. The world turns on the machinery and thought that the very smart people produce and the grunt labor that the rest of us do. We depend on the really smart people's discoveries and enterprise and on the scut work done by the people who stack the grocery shelves and vacuum the think-tank carpets. Whether on average people score C+ or B on what is a school exam called an IQ test makes no difference, except, of course, to the people whose livelihoods depend on the rest of us paying ever-increasing tithes to the priestly class of schoolies.



When we research strategies, there is a need to measure performance. Some techniques, like volatility targeting, tend to improve the equity-based measures (e.g. Sharpe, Sortino) more, while damaging or failing to improve the trade-based measures (e.g. Profit Factor, Expectancy). Some techniques, like term structure used in asymmetric sizing, tend to improve the trade-based measures more. Is there any clear argument for or against equity-based vs. trade-based performance statistics?

Rocky Humbert writes: 

Ed Seykota was fond of saying "Everyone gets what they want out of the markets."

That's an elegant way of saying that every investor has their own utility curve.

So an answer to your question is it depends on what portfolio/trade parameters that you are trying to maximize and minimize. Each of the approaches that you describe involves some sort of a trade-off. Academics will talk about optimally efficient frontiers, but for practitioners who are in the markets for the long run, I believe it's a function of what you and your investors want to achieve and most importantly, maintaining the discipline to consistently apply the tools that you mention.

There are many paths to heaven. There is no free lunch.

Bill Rafter writes: 

We prefer equity stats. Our primary metric for longer-term research is (Compound Annual ROR)/(Max Drawdown). For example, the equities markets, depending on the period chosen, tend to have a CAROR in the single digits while having max drawdowns of ~55 percent. With work and diversification you can invert those numbers such that the ratio is greater than 1. Most of your success will come as a result of reducing losses.
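The metric is straightforward to compute from an equity curve; a minimal sketch with made-up yearly equity values:

```python
def caror(equity, years):
    """Compound annual rate of return from first to last equity value."""
    return (equity[-1] / equity[0]) ** (1.0 / years) - 1.0

def max_drawdown(equity):
    """Largest peak-to-trough decline as a fraction of the prior peak."""
    peak, worst = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, (peak - x) / peak)
    return worst

curve = [100, 112, 125, 95, 130, 150, 170]   # hypothetical yearly equity
ratio = caror(curve, years=len(curve) - 1) / max_drawdown(curve)
print(f"CAROR/MaxDD = {ratio:.2f}")
```

For a raw index with single-digit CAROR against a ~55 percent drawdown the ratio sits well below 1; the diversification work aims to push it above 1 mostly by shrinking the denominator.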

In theory one might argue that if you take care of the trade stats, the equity stats will take care of themselves. As in: fight the battles and the war will take care of itself. This is most exemplified by HFT. If that is the trading time frame of your choice, then by all means go with that. However, it is hard for the individual to compete in the HFT framework, meaning that you will probably have to lengthen your trading, gleaning greater gains but also suffering larger losses. Eventually I think you will come around to preferring the equity stats. But your choice is going to be subjective or trading-plan-specific, which agrees with Rocky's point that every investor has their own utility curve.

anonymous writes: 

The conception of Seykota's quote as a utility curve is Rocky's. Seykota might have been making a point about market psychology more akin to a Deepak Chopra quote. That's not to say that Seykota did not make money trading. My sense was that his idea about everyone getting what they want from markets applied to those who might have hidden motivations in things other than optimized financial gain according to a risk-adjusted measure.



It is interesting to consider whether certain months' employment announcements tend to be consistently bullish or bearish. A former employee writes to me that the May employment numbers have been quite bearish for stocks.

Bill Rafter writes:

The NFP report is always murky to me. It always needs "interpretation", which is why it looks different several days after its release. The big interests (from the media, at least) are the unemployment rate and the number of new jobs. Both are the result of rather opaque calculations. I prefer the growth of payroll tax receipts, which requires no interpretation. The source is the Daily Treasury Statement, effectively the bank account of the government. Attached is the data from last week; no change in appearance since. It may not agree with the early or late interpretation of the NFP report, but it speaks truth about the actual job situation.

Stef Estebiza writes: 

Employment data are smoke and mirrors; they are more a political tool used to get people to accept further cuts and taxes and to justify these policies. The new jobs are precarious and at reduced wages.

anonymous writes: 

I suspect that I read about the Chair's views on the unemployment rate in years past, but is it safe to presume that the numerator smoke/mirror terms cancel out the denominator smoke/mirror terms?

Or does the science of people-counting treat the employeds differently than the idleds at the tabulation level?

I've generally treated the unemployment rate as a good bit more reliable than the overall jobs number.



Those of us who love speculators but rarely trade wonder what the counters think of this comment from a market historian who is a complete hermit but (I think) a very smart guy:

1) DJIA has gone more than a month without setting a 20 day high or low
2) DJIA is confined to a range of less than 6%
3) DJIA is within 10% of a 2 year high
4) Shiller P/E is 18+

There are seven years in recorded history that fit these parameters:


Victor Niederhoffer writes: 

The counters would say that, depending on where you prospectively date such events, the expectation going forward is the same as the past. However, there are a number of special numbers used, like 6%, 18, 2, and the Shiller P/E, that give so many degrees of freedom that it is amazing the hermit couldn't come up with a more bearish scenario. The hermit is an ignoramus. 

Math Investors writes: 

One of the first studies of the market that one does in one's career is to examine the immediate history of major moves, particularly up moves. What happened just before it took off? We found the usual precursor to an up move tends to be rather boring. For an example just look at one of John Bollinger's "Squeeze Plays". There is certainly not a V-shaped bottom or anything definitive; just a slow sideways drift, typically with narrowing volatility. But knowing that doesn't get you to first base. The fact that the market has been boring does not mean it is going to get exciting. You must have some other input.

But what should be your other input? From years of studying this, we have our favorites*. Although a superior input is indeed better than most, the mediocre inputs aren't that bad. Because when a market is really setting up for a move, the signals tend to be writ wide across the landscape.

For example, first-year nursing students tend to get erratic results when measuring patient blood pressures. But if you had five novices take the BP and then took the average, it would be pretty close to what an experienced nurse would get. That is, combining multiple imperfect measures is more likely to provide a good estimate than relying on any single one. **
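The blood-pressure point is just error averaging. A quick simulation with invented numbers (true reading 120, each novice off by a standard deviation of 8):

```python
import random
random.seed(4)

TRUE_BP = 120.0
NOVICE_SD = 8.0    # assumed error of a single novice reading
TRIALS = 10000

single_err = avg_err = 0.0
for _ in range(TRIALS):
    readings = [random.gauss(TRUE_BP, NOVICE_SD) for _ in range(5)]
    single_err += abs(readings[0] - TRUE_BP)          # one novice alone
    avg_err += abs(sum(readings) / 5 - TRUE_BP)       # average of five

print(f"mean error, one novice:      {single_err / TRIALS:.2f}")
print(f"mean error, average of five: {avg_err / TRIALS:.2f}")
```

Averaging five independent readings cuts the error by roughly the square root of five, which is the same logic behind combining several mediocre market inputs.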

*Our favorites can be seen (and played with) by going to or

** This example is from a book I am currently reading, "Hive Mind" by Garett Jones, an associate professor at George Mason University. I heartily recommend it.



The Dow Theory and other two-factor theories (big cap/little cap, SPY/Russell) are well tested on a variety of divergences. I think they work somewhat with interest rate curves as well. I'm wondering about currencies and countries. Would a global/US or small/big two-factor model be predictive at all?

Bill Rafter writes: 

Two factor models work best when the two variables/inputs exhibit at least some negative correlation (obviously with changes, rather than levels). Equities v. Debt is a good example.

Also, we have noticed that in a competitive two-horse race the overtaker is usually the first to move. That is, the buy signal in A is given before the sell signal in B. We have surmised this is because the smarter players start to acquire A while the complacent participants are reluctant to dump B until late in the game. Impossible to prove, but it makes some sense. This coincides with the experience that assets move up more slowly than they decline. As Matt Ridley puts it (The Evolution of Everything), "Good things are gradual; bad things are sudden."



 This reminds me of the Sherlock Holmes short story "Silver Blaze", in which the key clue was the dog that did not bark.

Why would a practitioner have success with one stock (AAPL) and failure with another (FB)? How are they different (or is there something else), and what are the implications for price forecasting? For example, our tactical algorithms have most recently "nailed it" (AAPL) and "gotten nailed" (FB). Technical analyses sensed something in AAPL, but were 180 degrees off in FB. Why one and not the other? Could Apple's earnings (or at least an inkling of them) have been in the market, whereas Facebook's were a total surprise? The market reactions suggest both were a surprise, and yet there were clues with one and not the other.

Here's a link to what we saw or didn't see.

There are many factors which can be used to explain price activity. Among them are price momentum and sentiment, both of which can be modeled by a practitioner or his computer. Somehow someone gets the inkling, real or imagined, that the wind is about to change direction, and either acts accordingly or just declines to follow the well-trod path. Then change happens. It is inexorable, almost evolutionary.

Freely traded markets are very efficient, but not perfectly efficient. That's why "technical analysis" or "counting" works, at least some of the time. Information leaks out and it shows up as a marginal change in the price. Could some companies better enforce a no-leaks policy than others? Maybe. But information can get out in other ways. For example, Apple has stores that are usually crowded.

Suppose all of a sudden they aren't crowded; that's a tell that can be modeled. The people who watch the stores will know before the earnings are released. Okay, then how do you do that for Facebook?

Facebook's revenues and earnings (i.e. fundamentals) are hard to model from the outside. We don't know of any tells. And they may have a rigorous no-leak policy. Which other companies have those same characteristics?

If you look in your program, both companies have similar profiles with regard to share statistics. That is, they have similar relative percents held by institutions and insiders. Their shorts as a percentage of float are similar. However their old school analysis characteristics are different; no one buys FB for the dividends.

Great quote from Robert Shiller: "We should not expect market efficiency to be so egregiously wrong that immediate profits should be continually available." That is both true and comforting when we are licking our wounds. If you have an edge, it's a small one, so diversify or watch the size of your bets.

But no matter how good you are at modelling momentum and sentiment, random things can screw up the forecast. Suppose that all of your algorithms identify a stock that is headed upwards. Then the company's corporate jet falls out of the sky with the executive team on board. That stock is going down, damn the forecast.

To us this is both a practical issue (our bank account) and a philosophical one (our minds). We would appreciate any and all ideas.

BTW, if you want to play with the algorithms yourself, send me an email and I will send you a link.



A few years ago there was a discussion on the site about an esteemed Dailyspecer's paper:
"Modeling the Active versus Passive Debate

That article generated a considerable amount of hate mail from investment "professionals" who felt the piece threatened their buy-and-hold livelihood. I consoled myself with some rather unkind thoughts.

Roger Arnold writes:

This reminds me of the discussion we had here 15 years or so ago when Triumph of the Optimists was published.

When I discussed the subject of the outsized returns of equities versus other asset classes with the principal author, Elroy Dimson, he said that in his opinion the 20th century returns were unique and not likely to be repeated over the next century. I won't go into his reasoning here as we discussed it then, and I'm not sure if it's been discussed during my absence from the list.

The gist of the conversation though was that everything that provided the positive drift to publicly traded equities has been exhausted.

The positive drift is what made passive management a plausible money management scenario.



The numbers on Payroll Taxes are quite bullish. However if the Jobs Report shows similar, the stock market response could be negative, anticipating hawkish Fed moves.

The big difference in the data is that the BLS Jobs Report counts jobs without any discrimination as to actual earnings. That is, a $10 per hour job counts as much as a $1000 per hour job. Payroll taxes intrinsically reflect the quality of the job.
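The contrast can be shown with a toy example. Two hypothetical months add the same number of jobs, so a count-based report cannot tell them apart, while wage-weighted tax receipts can (the tax rate and wages below are purely illustrative):

```python
TAX_RATE = 0.153   # illustrative combined payroll tax rate

def jobs_added(wages):
    """The count-based view: every job counts once."""
    return len(wages)

def payroll_tax(wages):
    """The dollar-weighted view: receipts scale with earnings."""
    return sum(w * TAX_RATE for w in wages)

month_a = [10, 10, 1000]   # hourly wages of the three jobs added
month_b = [10, 10, 10]

print(jobs_added(month_a), jobs_added(month_b))    # same count
print(payroll_tax(month_a), payroll_tax(month_b))  # very different receipts
```

A jobs report sees both months as "three jobs added"; the tax receipts immediately reveal which month created higher-quality work.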

Victor Niederhoffer writes: 

And yet Erica Groshen is still Commissioner of Labor Statistics and she's a very good friend of the Chair and they frequently speak together at testimonials and I believe coauthored an article on inequality together. However, unlike Erica, I have not been able to find evidence that the Chair sent her kids to Camp Kinder the way Erica did.

Bill Rafter writes: 

Today's comments by the Fed Chair give us an interesting observational platform.

If the Jobs Report on Friday is bearish on the economy, then it would appear that the Fed Chair was informed and stepped in before the release to keep the party going. (Whether such a response is good is debatable.) Note that the survey period for this month ended on Saturday, March 12th, so there has been plenty of time to inform someone with a need to know.

However if the Payroll Taxes are correct and the jobs numbers are bullish on the economy, then the Fed Chair must be either poorly informed or illogical. Neither is comforting. In such a case one might question the need for such a Fed.



In two weeks the March Jobs Report will be out (Friday April 1st at 8:30am). The data to be reflected will be that collected thru this past week (March 12th). The Payroll Tax Receipts (distributed by the U.S. Dept. of the Treasury) thru March 16th already presage a Jobs Report considerably stronger than the prior one.



In the last four weeks U.S. equities have risen nicely. Some were lucky or good enough to forecast what happened (check their records). And there are some who are apprehensive about where the market is now. I cannot guess everyone's motive, but I believe more than a few of the hesitant are so because they fear a further bursting of the Chinese Bubble. However I present to you a brief phantasmagorical tour showing that the Chinese Bubble has already deflated.

In terms of three usable commodities (copper, wheat and cotton) the Shanghai Stock Exchange has mean-reverted to its price in mid-2014. If you are betting on a further Chinese decline, be cautious.
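Repricing an index in commodity terms is a simple division at each date; a sketch with invented index and copper values showing the kind of round trip described:

```python
def in_commodity_terms(index_levels, commodity_prices):
    """Express each index level in units of the commodity."""
    return [i / c for i, c in zip(index_levels, commodity_prices)]

# Invented values standing in for mid-2014, the bubble peak, and today.
shanghai = [2050.0, 5000.0, 2900.0]   # hypothetical index levels
copper = [3.10, 4.50, 4.40]           # hypothetical copper prices

ratio = in_commodity_terms(shanghai, copper)
print([round(r, 1) for r in ratio])   # first and last come out nearly equal
```

In this made-up series the index priced in copper has round-tripped back to its starting level even though the nominal index has not, which is the shape of the mean-reversion argument above.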



"Many scientific “truths” are, in fact, false"

In 2005, John Ioannidis, a professor of medicine at Stanford University, published a paper, "Why Most Published Research Findings Are False," mathematically showing that a huge number of published papers must be incorrect. He also looked at a number of well-regarded medical research findings, and found that of 34 that had been retested, 41% had been contradicted or found to be significantly exaggerated.

Since then, researchers in several scientific areas have consistently struggled to reproduce major results of prominent studies. By some estimates, at least 51%—and as much as 89%—of published papers are based on studies and experiments showing results that cannot be reproduced.

Bill Rafter writes: 

In academia the currency is published articles. It should therefore not be a surprise that many published articles are useless or, worse, flat-out wrong to the point of being fraudulent. Consider that in the United States the typical number of science-based papers published in a peer-reviewed journal by a doctoral candidate is ONE. In certain other countries that number could easily exceed a dozen. Consequently the avid reader of scientific papers learns to discriminate in his reading habits against certain universities and certain countries of origin.

Would you do business with a bank that had a reputation for handing out counterfeit currency? And the very fact that counterfeit banknotes exist casts suspicion over all transactions.



A very reliable model of mine is the sign “CLOSED” on a store’s door.  It invariably means the store is closed.  But I was just given an example that a slight change in circumstances can render it totally off the mark. 

There’s this corner candy store near me that sells graham crackers smothered in dark chocolate.  I allow myself one a day at the end of lunch and thoroughly enjoy the event. 

So I drive up to the store at 1 PM on Monday and the CLOSED sign is hanging on the front door.  It’s one of those simple ones that says WE’RE OPEN on the obverse.  Elsewhere the hours are posted as 12 – 8 Monday thru Saturday.  But I move on.  Same thing happens on Tuesday. 

Today (Wednesday) finds the CLOSED sign still in place.  Despite what my model tells me, I try the door and find it unlocked and ask loudly if they are open.  A guy substituting for the owner Carol welcomes me and handles my weekly purchase.  And I learn that he had no idea about the simple sign on the door that had been chasing away all customers for the last three days. The owner is recuperating from surgery and the guy never noticed the simple sign.  Another O-Ring example in which a small item has disastrous consequences. 

Again we find that no model is perfect.



Forgive me for posting two items, but I believe them to be related.  In the first instance we have our oldest algorithm (from 1988), nicknamed “Thermos”. This plots a moving correlation between stock and bond levels. As of Friday (2/26) it has gone bullish for stocks.
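The actual "Thermos" rules are proprietary, but the idea of a moving correlation between stock and bond levels can be sketched in a few lines. This is a hedged illustration only: the window length and the toy price series below are made up, not the algorithm's parameters.

```python
# Rolling Pearson correlation between stock and bond price levels.
# A sketch of the general idea only; window and data are illustrative.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def rolling_corr(stocks, bonds, window):
    """Correlation over each trailing `window` of observations."""
    return [pearson(stocks[i - window:i], bonds[i - window:i])
            for i in range(window, len(stocks) + 1)]

# Toy illustration: stocks trending up while bonds trend down.
stocks = [100, 101, 103, 102, 105, 107, 106, 109, 111, 112]
bonds  = [120, 119, 118, 118, 116, 115, 115, 113, 112, 111]
print(rolling_corr(stocks, bonds, 5))
```

A signal rule (e.g. "go bullish when the correlation crosses some threshold") would sit on top of this; the text does not disclose it.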


Secondly, a major Teutonic bank just announced a buy recommendation in gold. Coincidentally we notice that our measure of professional sentiment just went bearish on gold.


A week ago we had a similar signal to sell bonds. We have long noticed that whenever bonds and gold are in agreement, equities make a move in the opposite direction. Either way, long or short.  



Gut feelings matter, but not the way you think. An individual’s gut feeling is anecdotal. Chances are that even he cannot statistically study his sympathies. However many of us model the gut feelings of investors at large, and those can be statistically studied. Here are a few examples:
Commitments of Traders data for futures. Many researchers start from a theory and then try to find data to support it. And their theory typically revolves around following the large (reporting) traders and mimicking them. The trouble is that not even the big guys are right all the time. A better approach is to examine the data without a preconceived theory. In doing so you will find that the small (non-reporting) traders are more consistently wrong than the big guys are right. That is, winners rotate, but losers are consistent. Further analysis reveals that the little guys tend to be even more wrong when they are short. And the best combination is when the little guys are short and the big specs are long. Following the hedgers should be avoided, as the hedgers speculate too, but on the basis, not the actual price. If you don't know what that means, don't play in that venue. 
Options data. This usually takes the form of the put/call volume ratio. Excessive levels tend to occur at market turning points. And by the way, the smart money bets against the excessive level. One problem to be mindful of is that most researchers look at CBOE data, which typically constitutes only a third of all options data. If you want it all, get the Options Clearing Corp data, which is free just as the CBOE data is, and more reliable.

While you are looking at options data, go a step further and look at the open interest levels.  I assure you that if you like put/call volume data, you will value the open interest data more.  The latter also tends to give less ephemeral signals. 
Is there any way to combine the two?  You betcha!  In any given period the number of New Positions (NP) equals the volume plus the change in open interest.  Further, the total open interest divided by the backward cumulative NPs identifies a number of trading days which can be described as either the age or average holding time of those positions.  On a very broad scale that data gives a view significantly different from putcall volume, and one that is quite reliable. 
Polls?  There used to be a newsletter which purported to measure contrary opinion for futures. What the publishers (Mr. James Sibbet and Earl Hadady) did was rank the bullishness of various newsletters and take a percentage. The theory was that if every publication was bullish, the market was overbought. The trouble was (paraphrasing Keynes) opinions could stay bullish for longer than you had margin money for picking the top. However if a market was up in the high 90s percent bullish for several weeks, the first downturn in opinion to even mid-80s presaged a price selloff.  It wasn’t the same people each time, but when the collection of gut feelings changed its momentum, the price tended to go along. 
While on the topic of polls, VIX and its offshoots are surveys that are very reliable. 
Price alone. What do you do about a market without telltale derivatives or surveys of newsletters? If you run a regression fit of the price data and extend it, you have a forecast. The deviation of the actual price from the forecast provides a measure of the combined opinions of professionals regarding that price. Small deviations go hand in hand with low volatility, which is bullish for the prices of assets that go into portfolios. Large deviations are scary, and they manifest themselves in price discounts. 
So all in all, Virginia, gut feelings matter. 
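The open-interest arithmetic described above can be sketched directly. Following the text's definitions, new positions (NP) in a period equal volume plus the change in open interest, and the "age" or average holding time is the number of trading days one must cumulate NPs backward to account for current open interest. The data below are made up for illustration.

```python
# Sketch of the NP and holding-time arithmetic described in the text.
# Volume and open-interest figures are illustrative, not real market data.

def new_positions(volume, open_interest):
    """NP[t] = volume[t] + (OI[t] - OI[t-1]); first day uses volume alone."""
    np_series = [volume[0]]
    for t in range(1, len(volume)):
        np_series.append(volume[t] + (open_interest[t] - open_interest[t - 1]))
    return np_series

def holding_time(np_series, total_oi):
    """Trading days of backward-cumulated NPs needed to cover current OI."""
    cum = 0
    for days, np_t in enumerate(reversed(np_series), start=1):
        cum += np_t
        if cum >= total_oi:
            return days
    return len(np_series)

volume = [500, 600, 550, 700, 650]
oi     = [1000, 1100, 1150, 1300, 1350]
nps = new_positions(volume, oi)
age = holding_time(nps, oi[-1])   # average holding time, in trading days
```

A shorter age means positions are turning over faster, which is the condition the later discussion of options turnover builds on.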



The options viewpoint.

Point 1: Virtually any macro investment strategy can be replicated with options. (Previously stated by one brighter than I.)

Point 2: The use of options can enable the strategist to hide his moves.

Point 3: Options transactions tend to be the milieu of the professional.

Possible conclusion: The broad analysis of options transactions can reveal some interesting truths about the current investment environment.

We have studied the broad pattern of equity options transactions this century and have found that whichever side creates more options positions is correct. That is, the condition where new positions consistently exceed liquidations. This is equivalent to a shorter age or holding time (open interest divided by new positions). "Whoever holds longer is wronger," to coin a cheeky phrase. Specifically, if the turnover rate is higher for calls than for puts, it's generally safe to be long.

Being long equities when the bullish patterns existed (since 2000) yielded a compound annual rate of return of 11.5 percent. Being short equities during the bear patterns yielded 3.5 percent (CAROR), such that the combined compound annual ROR was 15 percent. Not bad. The trouble for statisticians is that there aren't many switches (fewer than 40), making statistical reliability problematic. But the minimized "signal flutter" is comforting to longer-term investors.
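As a rough arithmetic check of the combined figure: compounding an 11.5 percent annualized long leg with a 3.5 percent annualized short leg gives about the 15 percent cited. This ignores how each year is actually split between regimes, which the text does not specify.

```python
# Rough check: compounding the two annualized legs reproduces roughly
# the 15% combined CAROR cited in the text.
long_caror = 0.115    # long equities during bullish patterns
short_caror = 0.035   # short equities during bearish patterns
combined = (1 + long_caror) * (1 + short_caror) - 1
print(f"{combined:.3f}")
```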

Although this metric called the 2009 turnaround on the money, it should be used to check the climate rather than the weather. Where are we now? Very close to going long. I would be reluctant to give a heads-up in advance of the actual signal, except the chart I show is a smoothed version. The unsmoothed version is already positive.

Here are two charts (2009 and current).



Every now and then it is advisable to check out what the Fed is doing. There have been upticks recently in the aggregates (since New Year's, and concurrently since the drop started), although in my opinion the upticks neither alarm nor impress.

monetary base




 I think the group will find many useful lessons for both life and trading in these Machiavellian maxims and I'm sure that the list will find plenty of fodder for debate contained herein.

Bill Rafter writes:

Those are not Niccolò's thoughts, but the author's wish as to how Machiavelli would think. The two are not the same. Also, many believe they know Machiavelli because they have read The Prince, a very short work hastily put together in three months with an expected readership of only one person, for the express purpose of getting a job. Because Machiavelli has become the Progressives' poster child of evil, some anti-Progressives have taken to championing him. But unfortunately they do so poorly read and for the wrong reasons.

The Discourses on the First Ten Books of Livy are Machiavelli's best work, written over three years (concurrently with The Prince) for a universal audience. The Founding Fathers of the United States all read "The Discourses" as a prelude to creating our government. It would be well worth your time.

Gary Rogan writes: 

The Prince was written for, well, a Prince. One problem with applying both the original Machiavellianisms from The Prince as well as these new improved maxims is that they don't seem to concern themselves with basic competency in one's line of business outside of manipulating people. For a Prince it fits: his job is essentially to manipulate his subjects, enemies, and any threat or potential resource provider into benefiting the Prince. He doesn't personally build bridges or grow food, etc. On the other hand, imagine a plumber who is also the world's greatest student of Machiavelli but is a really bad plumber. It's doubtful he can overcome his major deficiency by simply manipulating his customers.



What kind of moving average of the last x days is the best predictor of current and future happiness, and how does this relate to markets?

Anatoly Veltman writes: 

The widespread misuse of the MA concept is what gives it a bad name. 90% of testers and users look at crossovers, and the remaining 10% look at a break of the MA from above or below. All wrong.

The only proven way to apply MAs from a trend-follower's standpoint is to look at nothing but SLOPE (in trading days). Is the 14-day MA sloping upward? If so, then is the 30-day? If so, then is the 50-day? If so: then shorting is forbidden! The mirror-image test may save you from disastrous bottom-picking.
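The slope filter above is mechanical enough to sketch. A minimal version, with the "slope" of each MA taken as today's MA versus yesterday's (one possible reading; Anatoly does not specify the slope calculation):

```python
# Minimal sketch of the 14/30/50-day MA slope filter: shorting is
# "forbidden" only when all three simple moving averages are rising.
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def shorting_forbidden(prices):
    """True when every MA in the set slopes upward (today's MA > yesterday's)."""
    if len(prices) < 51:
        raise ValueError("need at least 51 observations")
    return all(sma(prices, n) > sma(prices[:-1], n) for n in (14, 30, 50))

# Toy illustration: a steadily rising series makes all three slopes positive.
rising = [100 + 0.5 * t for t in range(60)]
print(shorting_forbidden(rising))
```

The mirror-image version (all three MAs falling forbids buying) follows by flipping the comparison.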

Bill Rafter writes: 

I beg to differ. There is no way the "average of the last x days" is the best predictor: it is by definition at best a coincident indicator and more likely a lagging one. BTW the same can be said of the SLOPE of the last x days.

However, you can construct a leading indicator by comparison (difference or ratio) of the coincident to the lagging indicator. For this newly created leading indicator there tend to be a lot of false signals, due to random market action. To guard against that you need very smooth coincident and lagging inputs. Making them smooth also makes them more lagged, but that will not hurt you, as you are not going to look at them outside of a difference or ratio, which will be quite forward-looking.

The real problem is that investors want to identify a static x. In doing so they are insisting that the market be modeled by x periods. Well, the market doesn't always feel like cooperating. At times the market may be properly modeled by x periods, and at other times by x+N, in which N can assume a wide range of positive and negative values. The solution is to first identify the exact period over which the market should be modeled for the coincident valuation. And then go on from there. Rinse, repeat.
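The construction Bill describes can be sketched as follows. This is a hedged illustration: the exponential smooths and spans below are illustrative choices, not his parameters, and the adaptive-period step he alludes to is omitted.

```python
# Sketch of a leading indicator built as the difference of a faster
# (coincident-ish) smooth and a slower (lagging) smooth of the same series.
# Spans are illustrative, not anyone's published parameters.
def ema(series, span):
    """Exponential moving average with the usual alpha = 2/(span+1)."""
    alpha = 2 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def leading_indicator(prices, fast_span=10, slow_span=30):
    """Difference of a fast and a slow smooth; positive means upward pressure."""
    fast, slow = ema(prices, fast_span), ema(prices, slow_span)
    return [f - s for f, s in zip(fast, slow)]
```

The heavier the smoothing, the more lag each input carries, but the difference of the two remains forward-looking, which is the point of the construction.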

Russ Sears writes: 

This would be a good question to ask the trading expert psychologist Dr. Brett.

It seems that the same brain imagery he uses is being used in the study of the science of happiness.

While I am no expert, I have read Rick Hanson, PhD's book "Hardwiring Happiness". It has been a while since I enjoyed this book; my summary of it is: "focus on the life/good in the present. Place things in context of how they have brought you to this moment; then enjoying the moment is enjoying life."

Presence seems to be the buzzword in studies of contentment and the psychology of success. Being aware of all your inputs and your feelings, and recognizing them as part of life, then celebrating living. Presence gives you the fulfillment in your life needed to be loyal and disciplined enough about what is working well in your life. Thanksgiving is a day built on this idea. But presence also gives you the courage to turn things around, admit things are not as you want, and gives you hope for the future. Happiness is more about living your life and being in control than it is circumstances. Some of my happiest times have been after running hard for over 2 hours: exhausted after 26.2 miles, cold and totally and dangerously spent, but knowing I gave it my all.

So I would suggest that neither MAs, trend following, momentum, acceleration, death spirals, reversion to the mean, nor value investing should ever be the "key to Rebecca"; rather, judge them in the context of everything else. Some days "the trend is your friend"; other days "the sun will come out tomorrow". 

Brett Steenbarger writes: 

It's a really interesting area of recent research. It turns out that happiness is only one component of overall well-being. What brings us positive feelings is not necessarily what leads to the greatest life satisfaction, fulfillment, and meaning. I suspect the market strategies that maximize short-term positive emotion have negative expected return, as in the case of those who jump aboard trends to reduce the fear of missing a market move.

Ralph Vince writes: 

Too many times in life I've found myself in darkened parking lots with a small gang of characters who intended me harm, and saw how the pieces would play out far enough in advance to get out of it, or at least to realize there was only one, very unpalatable way out of it.

Shields up.

Too many times in life, I've had an angel whisper in my ear with only a few hours or seconds to spare to keep from being robbed blind by people I made the mistake of trusting.

Too many times in life I've paced in some anonymous hotel room, wondering "How the hell am I going to do this once the day comes?"

Too many margin calls have had to be met.

Far more times than I would care to, I've found myself confronted with the proposition of having to throw boxcars to survive, and I find myself, yet again, with that very proposition in a life and death context.

Only someone who really loves the rush of the markets could enjoy wanting a given market to move in a specific direction. I've come to the conclusion it's far better for me to set up to profit from whatever direction things move in on a given day. Those that don't move in a manner so as to profit from this day will tomorrow, or the next day, or the day after that… I need to just show up on time with my shoes on, collect on that which comes in today, sow the seeds today for taking profits on something at some future date. It's not difficult, and a lot more satisfying.

There's enough episodes in life we need boxcars to show up, and yeah, "Baby needs a new pair o'shoes."

Victor Niederhoffer writes: 

I like all these untested ideas about moving averages but my query was of a more general nature. What kind of moving average, perhaps its top onion skin an exponential average, is the best predictor of human happiness. I.e. if you are happy yesterday and unhappy the day before, are you happier or sadder. I mean vis a vis the pursuit of happiness, not markets, although the two are related I think.

Alexander Good writes: 

My answer would be that a medium-term moving average works best: about 6 months. We're naturally geared to notice acceleration, not speed. After happiness accelerates, it's virtually certain to decelerate, of which we would have a heightened awareness. Thus a 5-day moving average would have too much embedded acceleration and deceleration to yield a good outcome.

I would also say 6 months is a good number because there's a fear of 'topping out'. I.e. if you're at the peak happiness of the past 5 years you might get afraid of a larger mean reverting move. 6 months is short term enough not to be victim to noticeable accel/decel, but not too long to be subject to such existential thoughts that lead to unhappiness. 2 quarters is also a good timeframe for evaluation of back to back 3 month periods which seems like a relevant timeframe to most people professionally.

My meta question would be: does measuring one's happiness with a moving average make one more or less happy? 

Theo Brossard writes: 

I would posit that happiness exhibits behavior similar to market volatility: short-term clustering (which makes an exponential average a good choice; if you are happy today, chances are you will be happy tomorrow) and longer-term mean reversion (there must be some thresholds defined by values and time: you can't be very happy or unhappy for prolonged periods of time).

Jim Sogi writes: 

A good way to study this is to rate and record your happiness each day. Also record your acts: exercise, diet, work, family, vacation, tv, meditation, etc. Over time you can correlate the things you do that make you happy. You could correlate day to day swings as Chair queries in a univariate time series.



"The stock market leads the economy, not the other way round"

Are we sure of this old bromide?

anonymous writes: 

Yes, the data support the conclusion. Even more so because we know the results of the stock market immediately, and we get the GDP number only each quarter, and then after a delay of months that is then revised three times.

Andrew Goodwin writes:

A statistical method for testing this theory with precise equations is given here for those who would care to update the work:

"The Stock Market as a Leading Indicator: An Application of Granger Causality"

To summarize the conclusion reached using this "Granger causality" method:

Our results indicated a "causal" relationship between the stock market and the economy. We found that while stock prices Granger-caused economic activity, no reverse causality was observed. Furthermore, we found that statistically significant lag lengths between fluctuations in the stock market and changes in the real economy are relatively short. The longest significant lag length observed from the results was three quarters.

Stefan Jovanovich writes: 

"Is the causality relationship more consistent with the wealth effect or with the forward-looking nature of the stock market? The results from this project are consistent with both the wealth effect and the forward-looking nature of the stock market, but do not prove either. Another possibility for future research is to further evaluate where expectations about the future economy are coming from. Our results reveal that expectations for future economic activity are not simply formed by looking at the past trend in the economy as the adaptive expectations model would suggest. Expectations are being formed in other ways, but how?"

The argument for the "wealth effect": rich people's spending is the Keynesian pump that gets its money flows from the drift towards higher stock prices. The argument for the forward-looking nature of the stock market: the same one that applies to all asset and credit pricing, even those for "true" bills. The argument for "adaptive expectations" models: straight lines are easier to draw.

Stock prices go down because enough rich people think they will go down. God only knows what makes them decide to think that, even though they have all the lessons of the past to tell them otherwise.

As Eddy and her Mom and others remind me, my sarcasm can be a bit heavy-handed, obscure and unfunny.

Let me try again, now that Big Al (who has saved me from gold standard oops moments and other follies) has come to my rescue.

The Chair's drift is a fact of enterprise itself; people get richer because they figure out how to do things better, faster and cheaper, and the price for that know-how rises steadily because it is the means of producing more wealth.  (Marx was not wrong to focus on the means of production; he just left out distribution and exchange as the other necessary parts of the deal.)

The people the Chair left behind at Harvard, Berkeley and elsewhere share their own kind of Marxist illusion; they think that people can manipulate the way we all keep track of wealth - the unit of account, the interest rate on government debt - and have the manipulations produce further drift which will, in turn, somehow produce greater wealth.

This all reminds me of what a WW II veteran once told me about sharing a bivouac with the Russians while Truman, Churchill and Stalin carved up the world at Potsdam.  The Americans, with their wonderful energy, had set up tents and installed GI showers and faucets after running lines to the nearest pond with clean water.  After seeing the GI walk over to a faucet and turn it on to fill a pail of water to feed the radiator in his Deuce and a Half, a Russian soldier yanked off the faucet, walked over to the Russian side and defiantly banged it into a post.  He was enraged when he turned the tap and nothing came out.

Fat thumb correction:  stock prices go up and down because enough rich people take one side of the trade or the other that they change the price of wealth expectations for that particular company. There is no way of knowing what their particular "reasons" are; markets are part of Heisenberg's universe.

Bill Rafter writes: 

Allow me to come into this party late and probably tick everybody off. What drives markets most of the time (i.e. 90+ pct.) are two things: momentum and sentiment. If you have a handle on those you can make money. Probably the same two things drive the economy, but you cannot make money trading the economy, as the data coming out of the economy is more lagged than the data coming out of the markets. Hone your skills where they can count.



The Monthly Treasury Statement for July has just been published. Of particular concern is the Hospital Insurance (Medicare tax) payments for self-employed enterprises. They continue to languish.

Historically there are no direct causal relationships between this data and equity prices. That is, no one is going to see this data and draw any connection to equities. Most people have no idea that the data exists, and following it is problematic for most (especially financial journalists). The safest thing one can say is that the data does not support any rumors of a renaissance in ultra-small (self-employed) businesses. But you knew that, didn't you.



 By the way, I believe it might be a subject of speculation whether  Mr. Simons and his colleagues have found anomalies that they can still exploit as they might be much too big, and there is much too much competition from other humble anomaly seekers.  Yes, as Mr. Harry Browne would say, as described by  the true believer below, their pantheon of geniuses soars on a much higher level of cognition than myself or any of my colleagues or hundreds of followers - but then again superior intelligence isn't everything. And aside from the profitability of market making, as first enumerated by MFM Osborne, it might be difficult to capture anomalies on a systematic basis that the competitors in St. Louis and other small venues might have missed, no matter their profundity.

Anatoly Veltman writes: 

Does this also answer the query as to WHY would Virtu decide to go public?

A true believer writes: 

If there is anything whatsoever to the legion of gambling analogies to markets, market ecology and human endeavor then most of the chips will end up in very few hands.

The Medallion Fund represents the very apogee of human brilliance so applied to financial markets.

What is more likely, that there is something rotten in Denmark? Or that the combined work of pure genius including:

James Simons

Elwyn Berlekamp

Robert Frey

Henry Laufer

Sean Pattison

James Ax

The whole 'European Contingent' - I will not list those names here.

Plus a host of mere 'worker ants' cleaning data, programming testing machines and keeping the lights on.

Might just have come up with the single best group of high capacity strategies ever known.

We should all celebrate this achievement. It represents everything this list is about, surely?

Trying to pick holes in something like this is the equivalent of the Barron's columnist being bearish for 30 years on U.S. stocks.

My belief and optimism is based on facts, not some idol worship groupie phenomenon.

anonymous writes:

Is one allowed to agree with both the True Believer and the Chair? What Simons and the others did was pure genius–they used mathematics to identify the consistent anomalies that occur when people buy and sell securities. Those of us who lack their pure brains and mathematical chops marvel at what they have accomplished and have done our best to create a glacially slow mimicry using employment data and their correlation to the business cycle. (They are playing Scarlatti the way Michelangeli did; I am playing chopsticks hitting one key a month.)

But, as Vic notes, the question is whether or not there remain any arbitrage opportunities left now that those anomalies have been examined in such detail for decades by the far greater number of smart people who have come after the folks at Medallion.

Bill Rafter adds: 

Like others, I agree with both the Chair and Shane. The question then is "how much juice is left in the fruit?" As Stefan says, he gets one a month.

I would posit that it is a question of time frame. Certainly the HFT opportunities are gone for us simple folk, and maybe much of the day trading. But there are still anomalies if we are willing to accept less certainty and leave our bets on the table a little longer. After all, realize the prop shops do not want their worker bees to have an overnight position. Which means those of us willing to have such a position will have an automatic edge. As an example, compare the Open to Close returns to the Close to Open returns of certain derivatives. There's an edge, less than it used to be, but still there, and the edge favors the overnight holders.

Also, we simple folk cannot expect to outperform by trading only SPY (or perhaps its overleveraged sisters), the most competitive and liquid of assets. The greatest returns have always been in the least liquid of assets. 
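The open-to-close versus close-to-open comparison Bill suggests is easy to run. A hedged sketch, with made-up (open, close) bars standing in for a real derivative's history:

```python
# Split each day's return into its overnight (close-to-open) and intraday
# (open-to-close) components and compound each leg separately.
# Bars below are made-up (open, close) pairs, not real market data.
def split_returns(bars):
    """bars: list of (open, close). Returns (overnight_growth, intraday_growth)."""
    overnight, intraday = 1.0, 1.0
    prev_close = None
    for o, c in bars:
        if prev_close is not None:
            overnight *= o / prev_close   # gap from yesterday's close to today's open
        intraday *= c / o                 # move from today's open to today's close
        prev_close = c
    return overnight, intraday

bars = [(100, 101), (102, 101.5), (102.5, 103), (104, 103.5)]
on, intra = split_returns(bars)
```

By construction the two legs multiply back to the full-period growth (last close over first open), so any edge found in one leg is exactly what the other leg gives up.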

Shane James replies: 

I see no disagreement with the Chair on this thread. As with the Chair, myself, Medallion, DE Shaw, Citadel and all such people interested in trading from all walks of life - we shall continue to look at new angles, different ways of splicing the available information amongst much else. Medallion too will do this. The outcome? Only the shadow knows.

On this next point, the Chair, myself, and anyone with half a clue will be in violent agreement: it is always best to be the bookie. The RenTech entity, at last count when the info was still public, collected an 8% management fee and a 45% performance fee (I may be off by just a little here).

To use a collection of letters used by my children to describe this: OMG.

It's good to be the king. 

Jim Sogi writes: 

Much of what they have done is computer science not just math. It also has to do with understanding and moving or changing and understanding and exploiting regulations at the exchanges. In a competitive environment, there will always be an edge available somewhere. They change and move, but there is always opportunity in change, the change in others, the rate of change, the unforeseen effects of changes. I think there is opportunity for the slow and small as well. Computers are stuck with their algos. They leave tracks, patterns, singly and as a group. The markets are complex, and no person or computer knows exactly how it works, though they may find opportunities in complexity. There are always effects of effects of effects, unknown to the actor. Waves spread out from every action.



I once asked of the Chair, is it really worth it to trade markets not based in the United States? We decided that it was an 'interesting' question.

Taking this further it is of much interest to calculate the relative stability of markets. 'Stability' can be measured in many ways and I leave it to the reader (if there are any) to think about this point further.

For example:

1. Are US T - notes more stable than their international peers?

2. Is the S&P 500 more stable than its international peers?

3. Does relative stability explain why the regularities extant in U.S. markets are often massively more persistent than those for similar markets 'overseas'?

There are some interesting things to look at if one believes that the U.S. markets are at the beginning of the chain that moves other markets.

Clearly the more 'stable' market and the market at the beginning of the chain changes from time to time but my supposition is that it takes some great measure of 'statistical crisis'– for lack of a better term– to upset the U.S. market's hegemony even temporarily.

Bill Rafter writes: 

Presumably stability is the opposite of volatility, but there are a lot of ways to count volatility. And of course there is the question of "over which period?" I'm only guessing of course, but I'll bet that John B would define stability as staying within N standard deviations of a moving mean. That again raises the question of the period considered. Should the period be static or floating?

Ideally markets that are more stable would attract more portfolio holdings. That is, there would be a stability premium, or alternatively a cost of volatility. If there were two assets priced at $10 and you knew (don't ask how) they would be priced a $20 at a given point in the future, which do you buy for the portfolio? Obviously the more stable of the two since you may have the need to liquidate before the end of the period. In theory the more volatile one would be discounted vis-à-vis the more stable one. With stocks the end certainty is less defined than with bonds.
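One way to operationalize the band definition Bill guesses at: score a market by the share of observations that stay within N standard deviations of a trailing moving mean. The window and N below are illustrative, not anyone's published parameters.

```python
# Stability as the fraction of points within n_std sigmas of a moving mean.
# Window and n_std are illustrative choices.
from statistics import mean, pstdev

def stability(prices, window=20, n_std=2.0):
    """Fraction of observations within n_std sigmas of the trailing mean."""
    hits = total = 0
    for i in range(window, len(prices)):
        hist = prices[i - window:i]
        m, s = mean(hist), pstdev(hist)
        if s == 0 or abs(prices[i] - m) <= n_std * s:
            hits += 1
        total += 1
    return hits / total if total else float("nan")
```

A perfectly flat series scores 1.0; a market prone to breaks from its band scores lower, and could be discounted accordingly.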

The original question implied that the investor/trader was looking to be long country markets that were more stable.

Let's suppose that you believe the country ETFs represent their respective markets. Then you could rank those ETFs by inverted volatility. We have done that after first ranking them by other means. We then would have say 10 ETFs that we would like to own, and make a final selection of a few according to inverted volatility. Alternatively it also makes good sense to buy the entire 10, but with different percentages of your equity.

Does that work? Yes, it is more profitable than holding SPY, but not exciting, such that we don't charge for it. We always include SPY in such rankings, as a tracer bullet. The really interesting thing is that SPY never rises to the top of the daily rankings.
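The final selection step described above can be sketched as inverse-volatility weighting: having ranked a candidate list, allocate equity in inverse proportion to each ETF's volatility so the most stable names get the most weight. Tickers and volatility figures below are made up for illustration.

```python
# Weight a candidate list of ETFs in inverse proportion to volatility.
# Tickers and volatilities are illustrative, not a recommendation.
def inverse_vol_weights(vols):
    """vols: {ticker: volatility}. Returns portfolio weights summing to 1."""
    inv = {t: 1.0 / v for t, v in vols.items()}
    total = sum(inv.values())
    return {t: x / total for t, x in inv.items()}

candidates = {"EWA": 0.18, "EWG": 0.22, "EWJ": 0.15, "SPY": 0.12}
weights = inverse_vol_weights(candidates)
```

Including SPY as the tracer bullet, as Bill suggests, means the strategy only earns its keep when names other than SPY rise to the top of such a ranking.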

We also have the problem of "over which period". One consideration would be to rank all the country ETFs according to the same period, as though China and the U.S. should be compared by the same time standard. That would seem correct if the account owner had a specific time need. Another consideration would be to let each country ETF dictate the period for comparison. But then you might have the input time for Australia being ranked over two years, with SPY only ranked over two months. That would seem correct if the investor was more of a speculator.



I plan to research a few trading strategies based on Commitments of Traders data. Any beliefs (positive or negative) about these concepts? Has anyone tried to systematize it?

Bill Rafter writes:

Many have researched the Commitments of Traders Reports. If you really want to pursue this I suggest you go into B-school libraries and review titles of unpublished theses for tips. There is little of value to be found in the "popular" literature.

When researching, be mindful to relate the positions to the market trade-date-wise when testing for significance, and release-date-wise when assessing profitability. One guy who sells CoT data gets this distinction horribly wrong. Collect your own data.
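The trade-date/release-date distinction can be sketched as follows. CoT positions are measured as of a Tuesday but only published the following Friday, so trading on them before the release date is look-ahead bias. The column names and figures here are hypothetical.

```python
import pandas as pd

# CoT positions are stamped on the Tuesday "trade date" but only
# published on the Friday "release date".
cot = pd.DataFrame({
    "trade_date":   pd.to_datetime(["2024-01-02", "2024-01-09"]),
    "release_date": pd.to_datetime(["2024-01-05", "2024-01-12"]),
    "large_spec_net": [1500, -2300],
})

# Significance study: align positions with prices as of trade_date.
significance = cot.set_index("trade_date")["large_spec_net"]

# Profitability study: the data is only actionable from release_date on.
tradable = cot.set_index("release_date")["large_spec_net"]
```

Mixing the two indexes up makes a backtest look better than any real account could have done.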

Most researchers tend to focus on identifying the winners by group, and following them. I would posit that the winners vary by group and are less consistent than you would like. Instead, I suggest that you identify losers by group. You will find much greater consistency with regard to losers.

Anecdote: I used to study the CoT for non-obvious trading opportunities. Once I found a situation where the Large Specs had gone from short to long over one reporting period, while the non-reporters (i.e. small traders) had gone from long to short at the same time. [N.B. little guys tend to do poorly on the short side.] This was in the Oats market, which I generally ignored. The Large Hedgers had not changed significantly. Also, from the reporting date to the release date there had been no market movement. I then called everyone I knew with grain knowledge but learned nothing. (It's important to look for orthogonal information.) Sadly I did not know Jeff at the time. What the hell, I bought a lot of Oats and put on even more Oat spreads (long the near). Within the next month Oats and their spreads moved significantly, giving me a great year, new car, etc. And I never learned the reason for the market's move.



Consider, say, 5 related macro markets, one of which is the dominant market in terms of influence upon the other four.

Further assume that your own individual Rosetta Stone tells you to buy the 4 less dominant assets first but the same methodology doesn't get long the main market until later in the microsecond, second, minute, hour, day, week, month, year. (my we are inclusive of all on this site, aren't we!)

Anyway, the issue to consider is this:

Is it more efficient to buy all 5 assets only when the 'influential' asset signals? The qualitative argument being that if the influential asset keeps declining then one should wait on the other four.

After an enumeration here, and considering the relatively short holding periods concerned, it makes more sense to just do all the trades as they occur, 'influential' market be damned.

In terms of percentage attribution of profit or loss amounts there appears to be no persistent profit from waiting. An interesting question might be, is it a good idea to add to the other four when the main market signals….

In the context of relatively short term trading, there appears to be a plethora of cross market vicissitudes– more than enough to compensate for not having the support of the 'main' market.

Bill Rafter comments: 

If "the Four" always lead "the Main", then the Main as a signal is irrelevant for the Four. The Main then should always be bought ahead of its signal (which is a foregone conclusion). This is aside from any portfolio/diversification/size considerations. If you waited for the Main you would seem to be missing some profit on the Four. As you stated, there seems to be no profit in waiting. You should therefore treat the Main as an independent signal on its own.

Be cautious that you have not stacked the deck against the Main. A silly example (but one practiced by many) is to have one signal determined by looking back over say 20 periods, and another looking back over 40 periods (or 5-minute bars vs. 30-minute bars). In this example you will have stacked the deck in speed against the 40-period/30-minute lookback. The novice then claims he needs to wait "for confirmation". All he has done is to nullify the earlier signal. If the earlier one is always/mostly right, his process is inefficient.
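The stacked-deck effect is easy to demonstrate on a toy series: the same momentum rule with a longer lookback must signal later on an identical turn. The V-shaped price path and the 20/40 lookbacks below are illustrative assumptions, not anyone's actual system.

```python
import numpy as np

# A V-shaped price path: straight decline into a bottom at t = 50,
# then a straight recovery.
t = np.arange(200)
price = np.abs(t - 50) + 50.0

def first_buy_signal(price, lookback):
    """First bar where N-period momentum (price vs. price N bars ago)
    turns positive after the bottom."""
    mom = price[lookback:] - price[:-lookback]
    idx = np.where(mom > 0)[0]
    return idx[0] + lookback  # re-align to the original time axis

fast = first_buy_signal(price, 20)   # flips at t = 61
slow = first_buy_signal(price, 40)   # flips at t = 71
# Waiting for the 40-period "confirmation" simply nullifies the
# earlier 20-period signal: the slower lookback is always later here.
```

If the fast signal is mostly right on its own, the "confirmation" step only forfeits the first part of the move.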

Two other considerations:

The use of signals in some markets to trade other markets. The common example here is to use the inverse of bonds to generate an equities signal. Be aware that signals of "opposite" markets rarely occur simultaneously. Some traders would benefit from knowing which comes first, the exit or the new entry. Think about it: it should be obvious.

Our experience is that some signal always leads, but the leader changes. And of course there are false positives. One solution is to have them vote, but in doing so you will always be after the leader. Considering that the greatest improvements in track records come from the reduction of losses rather than outright gains, it seems prudent to trade a little of the upside for less downside. But that is for each to decide, hopefully after testing.



Is this really true in general?

"The most important thing you need to know about commodities" :

If you have traded stocks for a while, you probably have a sense of when a move has gone far enough to be due for reversal, and you're probably used to seeing longer term positions more or less alternately green and red on the day over any reasonable stretch of time. Be careful, because these (correct) instincts will work against you in commodities, which can trend and trend and trend and end in blowoff moves that go far beyond what anyone expected. Simply put, if you come to commodities from a stock trading background, temper your urge to fade moves…

There was a time in market history when S&P 500 traders (experienced, professional traders) flocked to the soybean pits to daytrade, thinking they could apply their ability from one market to another. That incident ended badly for the S&P traders (but very well for the locals in beans!).

Bill Rafter comments: 

Futures are mean-reverting in the shorter run, and that also applies to equity indices. Much less so with individual equities. That being said, that statement does not apply to squeezes in either. Futures moves tend to be linear, whereas stocks and their indices tend to be parabolic. There are logical reasons for these, but not enough room here to write them.



 There is some nice WSJ commentary about Patrick O'Brian today.

"A Centenary Salute to Patrick O’Brian":

Aubrey is an apostle of duty, an advocate of order, and yet he knows that leading his men depends less on his power to punish them than on his power to inspire. Maturin has a far greater appreciation of freedom, rebelliousness, even anarchy, and yet possesses a fierce sense of right and wrong. Together they embody the values of freedom and democracy that allowed Britain to lead the world.

First section, back with the editorials.



 "GCHQ Launches Cryptography App for Budding Codebreakers"

I have not yet seen the Cumberbatch flick Imitation Game and was wondering if it gave any credit to the Poles, who had cracked the first generation of the Enigma. Prior to 1938 there was a disgruntled German turncoat who provided intel to the French (who shared it with the Brits). Both the French and Brits were stymied, and passed what they considered useless intel to the Poles, who then cracked Enigma. For years the Poles managed to read everything put out by the Germans, and had even created a mechanical device to do the work. Then the Germans increased the number of rotors from three to five, and the plug-connections from six to 20, requiring huge additional work. [See Technical Details of the Enigma Machine]. Two weeks before Poland was invaded, the Poles gave the Allies what they had on Enigma, shocking them. Without that head-start the Bletchley Park effort would have failed.

The market parallel to this is that someone else's research castoffs may be useful to you. Just because someone else has failed to find significance does not mean you cannot gain utility. Our own most useful tool was a castoff from someone else who failed to make it work.



The normal pattern for INDEX options open interest is for the OI of puts to exceed that of calls. It happens more than 90 percent of the time. It's a bit easier to see if you smooth the data, recognizing that it has a 21-day periodicity. But from approximately January 2013 to September 2014 call index OI exceeded put index OI (or was close enough to be indecisive). Since late September the pattern has reverted to historical.

N.B. the OI pattern for individual equities is that calls outnumber puts, all the time.
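The smoothing idea above can be sketched by modeling the 21-day periodicity as a monthly expiration sawtooth. All figures here are synthetic; the point is only that a 21-day average is flat with respect to a 21-day cycle.

```python
import numpy as np

# Index-option open interest has a roughly 21-trading-day periodicity
# (the monthly expiration cycle): OI builds, then resets at expiry.
rng = np.random.default_rng(1)
days = np.arange(252)
sawtooth = (days % 21) * 100           # synthetic expiration cycle
trend = 50_000 + 10 * days
oi = trend + sawtooth + rng.normal(0, 50, days.size)

# A 21-day moving average removes the sawtooth exactly, because every
# 21-day window contains one full cycle.
smoothed = np.convolve(oi, np.ones(21) / 21, mode="valid")
```

After smoothing, the put/call OI relationship is easier to read because only the trend and genuine shifts remain.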

A return to normalcy?



A disturbing chart: "This is Probably the Second Worst Time in History to Own Stocks"

Bill Rafter writes: 

The trouble with the chart is that the regression fit was done cumulatively, resulting in older data being subject to look-ahead bias. Thus only the current values are useful, and one wonders exactly how useful. As Steve has commented, the way to foil that is to use a moving regression fit in which the values are static over time, always taking the last point in the fit. Thus all data, past and current are relevant and can then be used in statistical studies.

The question that then comes up is which lookback period do you use. Wherever possible all lookback periods should be adaptive, the question then being to what input. In shorter term price data the market will tell you the relevant lookback period. I have never tried determining lookbacks for longer term data because (a) I don't expect to live long enough to take advantage of it, and (b) too many things can happen in the short run to screw up a good plan. Most people don't marry someone in their 20s based on the supposition that (s)he will look good in their 70s.

I also question the use of any equity or debt data prior to 1972. If you don't know why, ask Stefan. That's one of the great things about the list; there are sources for just about everything.

Several moving functions you should consider:

Moving linear (i.e., regression) fits and their slopes.

Moving parabolic fits and their slopes. Since most economic and price data are parabolic, this is the better of the two. There is also something to be gained in the difference between a parabolic fit and a linear fit. Fitting parabolas is quite tricky, and it took us a while to code it. If you try to do so and want a check on your efforts, try fitting a parabola to a straight line. If the result is ludicrous, try a different method.
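The suggested sanity check (fit a parabola to a straight line) and a minimal moving parabolic fit might look like this with NumPy. This is a sketch, not the author's actual code; a sound fitter should return a quadratic coefficient of (numerically) zero on a line.

```python
import numpy as np

# Sanity check: fit a parabola to a straight line.
x = np.arange(50, dtype=float)
y = 3.0 * x + 7.0  # a straight line

a, b, c = np.polyfit(x, y, 2)  # y ≈ a*x^2 + b*x + c
# a should be ~0, b ~3, c ~7. If your own fitter returns something
# ludicrous here, try a different method, as suggested above.

def moving_parabolic_slope(y, window):
    """Slope of a rolling parabolic fit, evaluated at the last point
    of each window (derivative 2*a*x + b at x = window - 1)."""
    xs = np.arange(window, dtype=float)
    out = []
    for i in range(window, len(y) + 1):
        a, b, _ = np.polyfit(xs, y[i - window:i], 2)
        out.append(2.0 * a * (window - 1) + b)
    return np.array(out)

slopes = moving_parabolic_slope(y, 10)  # all ~3 on this line
```

Evaluating at the last point of each window keeps past values free of look-ahead, the same discipline as the moving regression above.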

Moving correlations are particularly interesting between markets that might be alternatives to one another. Moving correlations between stocks and bonds (levels to levels) are something we have used for years and continue to do so. I thank Gibbons for his comment that Colby & Myers recommended them, as I had not been aware of that. (I'm not a fan of C&M.)
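A moving correlation on levels is a one-liner in pandas. The 63-day window and the synthetic series below are illustrative assumptions, not the parameters actually used.

```python
import numpy as np
import pandas as pd

# Moving (rolling) correlation between two markets, computed on
# levels, as described above for stocks vs. bonds.
rng = np.random.default_rng(2)
n = 500
stocks = pd.Series(100 + np.cumsum(rng.normal(0.05, 1.0, n)))
bonds = pd.Series(100 + np.cumsum(rng.normal(0.02, 0.5, n)))

moving_corr = stocks.rolling(63).corr(bonds)
# moving_corr[i] is the correlation over the 63 observations ending
# at i; a sign change flags a regime shift between the two markets.
```

Because each value uses only trailing data, the series can be used in backtests without look-ahead bias.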

Gyve Bones responds: 

Colby and Myers didn't recommend the linear regression study per se… the empirical analysis simply showed that study to perform best with a fixed lookback parameter over NYSE index returns data over a long period of time compared to other trend-following signal generators. This book was an early attempt to quantify different approaches to see how they performed, trying as best as can be done to compare apples to apples. In the mid-to-late 80s, it was the best thing that had been done like that since Dunn & Hargitt's study using punch-card futures data in the late 1960s (which found that the Donchian Four Week system was best, the system which launched a thousand CTAs, including the Dennis Turtles and their spawn). Another similar, well-done study was carried out in the 90s by Jack Schwager and another fellow whose name escapes me at the moment.

Larry Williams adds: 

A question: when was the regression line fit? Today? 20 years ago? 50 years ago? The slope will change based on your starting and ending points, and how overbought or oversold the market looks is a function of this. A more careful analysis would either apply this same "method" every year with a set of rules (e.g. sell above x% overbought) or would do the same thing on a rolling-window basis. It's an interesting chart nonetheless and gives one pause, but I would suggest it lacks a certain amount of rigor.

Gibbons Burke writes: 

It seems to me that this is a flawed chart to look at historically to make rules from because the trend line drawn into the past contains information about the future. The line is drawn using the linear regression of the entire data set so, for example, the line segment covering 1998-1999 "knows" about what happened in 2014. Very deceptive and misleading to make a rule based on the relationship of the data to the trend line.

Victor Niederhoffer comments: 

The disturbing chart is a case study of why charting is so misleading, because of the regression bias and also because the variance of a sum is the sum of the variances.

Steve Ellison says:

Here is the way to solve the problem of the regression line incorporating future data. Attached is a graph of a "moving regression", as Dr. Rafter calls it. For each date, the red point is the last point of a 30-year regression of the S&P 500 as of that date (the graph is from 2010).
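The moving-regression endpoint described here can be sketched as follows. The 30-point window and the data are illustrative; on a straight line the endpoint of every trailing fit recovers the line exactly, which makes a convenient self-test.

```python
import numpy as np

def moving_regression_endpoint(y, window):
    """For each date, fit a linear regression over the trailing
    `window` points and keep only the fitted value at the last point.
    Past values never see future data, so the series is usable in
    statistical studies, unlike a single full-sample regression."""
    x = np.arange(window, dtype=float)
    out = np.full(len(y), np.nan)
    for i in range(window - 1, len(y)):
        slope, intercept = np.polyfit(x, y[i - window + 1:i + 1], 1)
        out[i] = slope * (window - 1) + intercept
    return out

# Self-test: on a straight line, every endpoint equals the line itself.
y = 2.0 * np.arange(100) + 5.0
fitted = moving_regression_endpoint(y, 30)
```

Contrast this with the chart's cumulative fit, where the 1998-1999 segment "knows" about 2014.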



Hefty relative changes in the Monetary Base and hefty relative changes (i.e. "corrections") in the S&P seem to be related. Sometimes the former leads, and sometimes it lags. Unfortunately (for the statistical researcher, as opposed to the Optimists) there are not that many examples. The question: is the current relative decline in Monbase related to the admittedly small SPX correction we have already experienced, or is there more to come? Is there anyone here skilled at looking around corners?



Bill McBride published this interesting piece on wage growth in the US.

On the one hand, one might argue that this is a surefire harbinger of inflation. On the other, some wage growth might carry with it some opportunity for increased spending (save? in this country??). Some top line growth would, I'm sure, be appreciated by one and all.

And that assumes that there really is wage growth going on. At best, the jury's still out on that one.

Bill Rafter writes: 

Wage growth has not been underestimated. Payroll tax receipts suggest otherwise. The latter do show some signs of coming back from the grave, but absolutely nothing to get excited about.

Regarding inflation, there are two forms of money growth that have to be monitored: that originated by the Fed known as the Monetary Aggregates, and that originated by the banking system known as fractional reserve lending. The aggregates are the Monetary Base, M2 and MZM. The lending data are commercial and industrial loans. The planned growth of the aggregates is designed to limit deflation. Inflation will not proceed apace until you get a growth in loans. So if you are worried about inflation, at this time all you have to watch is the loan data.

Aggregates and loan data are available on the FRED site. Payroll taxes are on the Treasury site.



 The Riddle of the Labyrinth by Margalit Fox is a great book describing the decipherment of Linear B, a Bronze Age pre-Homeric script found originally on tablets in the Palace of Minos on Crete. If that is of interest to you, this book will reward you. For me it was a quick and exciting read. If you are a Sherlock Holmes fan, chances are you will enjoy it.

The decipherment of Egyptian hieroglyphics was solvable once the Rosetta Stone was found, since it contained a translation into Greek. Linear B, however, looking like stick figures or the runic alphabet, had no comparable Cliff Notes.

But I also found the book an excellent guide for anyone interested in doing research on market behavior. The parallels between the two were uncanny. To decipher Linear B required pattern analysis, counting and frequency analysis before there were computers to make those tasks easier. We have computers to aid our decipherment of the markets, but the process of creating a framework to do the research is the same. A lot of setup and then lots and lots of actual work.
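The counting and frequency analysis that the decipherers did by hand is trivial with a computer. A toy sketch (the text is obviously not Linear B, and the interpretation in the comments is only the general principle):

```python
from collections import Counter

# The decipherment workhorse: tally how often each sign occurs, and
# how often it occurs in particular positions (word-initial, final...).
text = "the quick brown fox jumps over the lazy dog the end"
words = text.split()

symbol_freq = Counter("".join(words))        # overall sign frequency
initial_freq = Counter(w[0] for w in words)  # word-initial frequency

# Positional frequency tables like these are what exposed the
# inflectional endings in Linear B long before any sign was read.
```

Market research uses the same framework: lots of setup, then systematic counting.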



 "Nobel winner Fama: Active management 'never' good":

Eugene Fama, the University of Chicago investing researcher who won the Nobel Prize in economics last year, once again warned investors against the lure of active management.

"The question is when is active management good? The answer is never," Fama said to laughs Thursday at the Morningstar ETF Conference in Chicago.

"If active managers win, it has to be at the expense of other active managers. And when you add them all up, the returns of active managers have to be literally zero, before costs. Then after costs, it's a big negative sign," Fama added.

He's known as the father of the efficient-markets theory, which says that asset prices reflect all available information; investment managers can never truly get an edge.

Fama dismissed the idea that it was possible to pick the best managers.

"The good ones might be good or they might be lucky. The bad ones might be bad or they might be unlucky. We can't really tell the difference," he said. "I don't know if it would ever make sense, even if the fees were zero, I don't think you'd be better off because you'd be investing in an undiversified way."


Asked about Warren Buffett's long-term record of picking good companies, Fama said the Berkshire Hathaway (BRK-A) chief actually agreed with his index-based thesis. Buffett said recently he actually has directed much of his fortune to be placed in passive index funds after he dies.

"He's, like, my hero," Fama said. "What he says is, 'I can pick a company every couple years, but if you have to form a portfolio, you're better off going passive.'"

"All the behavioral people say the same thing," Fama added. "In the end, they realize that the game of doing something active is fraught with problems."

Fama was also asked about hedging against big crashes, like what happened to the markets in 2008. Attempting to protect against them, he said, was the unwinnable game of market-timing.

"If you sold when the market crashed, you made a big mistake, and if you saw it coming you're a genius," Fama said.

Gary Rogan writes:

Everything that The Sage deems right and proper will happen after he dies: the charities, index investing, who knows what else. I guess it's no longer politically correct to say "Après nous, le déluge" ("after us, the flood").

The statement "If active managers win, it has to be at the expense of other active managers. And when you add them all up, the returns of active managers have to be literally zero, before costs" is probably mostly correct, but given that some active managers are also activist managers it is not completely correct. Also, imagine that every single person in the world was an index investor: that would be an absurd situation in which nothing but the inflow of new money would determine the price of all stocks. And still, if indexing merely earns the average of all managers, aren't some managers better than indexing? At the very least Fama could say that no person is capable of either being or choosing a better-than-average active manager, but he isn't actually saying that.

Bill Rafter writes: 

That's a poor logical argument by the good professor. While Dr. Fama may be right that before costs the average return of all active managers must be zero, clearly it is possible (if not likely) that there will be serial winners and losers. Speaking only of the latter, several years ago we were asked to propose solutions to a shop that had managed to underperform the S&P for every one of the prior 15 years. They did not like our proposals and also rejected proposals from other research providers, continuing with their own methods. They are now 0-18 versus the S&P. Since it is possible for some to get this investment "thing" totally wrong, it is perfectly logical to assume that some others have better than average performance with consistency.
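Some quick arithmetic on that 0-for-18 record, under the deliberately generous assumption that beating the S&P in any given year is a fair coin flip:

```python
# Probability of losing to the S&P 18 years running by pure chance,
# if each year were an independent fair coin flip.
p_all_18_losses = 0.5 ** 18   # about 3.8e-06, roughly 1 in 262,144

# Even across a large universe of managers, such a streak is far more
# plausibly persistent (negative) skill than bad luck...
n_managers = 10_000           # hypothetical universe size
p_at_least_one = 1 - (1 - p_all_18_losses) ** n_managers

# ...and by symmetry, the same arithmetic leaves room for managers who
# are consistently better than average, which is the point above.
```

The coin-flip model is of course the weakest possible assumption; any persistence at all makes the streak even less attributable to luck.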

anonymous writes: 

In the case of Buffett you might ask: cui bono? His non Berkshire index assets could fill an Omaha thimble. Is it not the same press release as Betfair put out about their fixed odds versus exchange book on the Scots referendum?



Would anyone advise on how to determine backtesting periods?

I presume one should choose the most recent period because it may better correlate with the present situation. But is that really true? If it is, how far back should one include, and how far into the future can it predict? My experience seems to be that a short backtest period can lead to a very short-lived or even very poor prediction. On the other hand, a longer period often leads to poor performance in the present situation.

Shane James replies: 

At the Spec Party I had the privilege to spend a reasonable period of time one to one with the remarkable Sam Eisenstadt.

His work is likely one of the best examples of creative thought in the history of financial markets. He explained to me that there wasn't much backtesting to what he/they did. He came up with some principles that made sense to him and started applying them in real time.

Now, in our so called modern world, things may have moved on (Sam graciously stated as much to the room when he was giving his views on the modern markets). HOWEVER, maybe not so much…..

Try this:

1. If your trading idea has an average holding period of a few days (preferably less) then start from today and run it in real time for the next 90 days or so. By definition, the prices upon which you are testing your ideas did not exist when you had the idea so you have already eliminated most bias if you do this.

2. If you are happy with the structure of the returns (win, lose or draw) then consider if the results were biased by any factor during your live test phase and if related to long only stock index trading then make the requisite adjustments for drift.

3. Perhaps now consider a backtest.

The point being that I think it makes sense to test on data that did not exist BEFORE you perform the backtest.

Some like to 'exclude' certain data and 'pretend' it didn't exist so they can treat the excluded data as 'out of sample'. For instance they may take 10 years of data and use the odd-numbered years as test data and the even-numbered years as 'out of sample'. This might be a reasonable way to make yourself feel more comfortable, but there is an intangible and hard-to-explain benefit to the kind of 'spontaneous' testing set out above, on data that did not exist at the genesis of your idea, before one starts seeing how well a set of heuristics performed in 1971!
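The odd/even-year split mentioned above can be sketched in pandas. Dates and data here are synthetic; the split itself is the whole idea.

```python
import pandas as pd

# Ten years of (synthetic) daily data on a business-day index.
dates = pd.date_range("2005-01-01", "2014-12-31", freq="B")
data = pd.Series(range(len(dates)), index=dates)

in_sample = data[data.index.year % 2 == 1]   # odd years: fit here
out_sample = data[data.index.year % 2 == 0]  # even years: validate here

# This guards against overfitting one stretch of history, but it is
# still "pretend" out-of-sample: truly unseen data only arrives after
# the idea exists, which is the point of the live-test step above.
```

A live forward test on data that postdates the idea is the only split with zero selection bias.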

Leo Jia responds: 

Hi Shane!

Thanks very much for the valuable advice.

Wow, Mr Eisenstadt! I would really love to thank him for my early successes with the Value Line. But I guess it wouldn't matter to him, as he might have heard the same from too many!

Talking about my early experience (back in the 90's), I actually had been using your suggestion all along. There was never backtesting for me — I got an idea and went to buy the stock the next day. It actually worked well overall.

Should I go back to doing it the "novice" way? That becomes a question worth considering now that you mention it. Perhaps this is one of those valuable lessons where, after enough struggle with complex methods, one discovers the neglected simple way to be far superior. In Chinese culture, Tai Chi can be considered that type of "simple way".

Now, a couple questions about your suggestion.

1. By putting a new idea directly live, what problem is one trying to solve? Is it the concern that a poor backtesting result may make one throw out a potentially good strategy? And is this concern based on the belief that past data already differ from the present situation?

2. In what ways can this idea that seemed to come from nowhere be better than the many ideas one gets by studying historical data? I know inspirations are invaluable, but one doesn't often get those inspirations that are not the results of study. So beyond the mistrust of the correlations between past data and present situation, are there any other reasons?

Thanks again for your thoughts.

Bill Rafter writes:

I am sorry to jump into this discussion late, but I think there are a few points that can still be brought up. Looking for beta over a constant period of time (say 6 months) is of little use. It's a bit like describing a man with one foot in a fire and another in ice as being at a tolerable temperature. Market volatility has fat tails, and a static window might be good for a journalist, but it is of limited value for a trader.

At a given time there is a time period over which the study of a market’s behavior will be significant.  And let’s say that at this time it really is 6 months, or 126 trading days.  Assuming no real changes, tomorrow that time window will be 127 trading days, and so on until you get a market change.

When the sea does change, bad things can happen in a hurry and beta value for the preceding 6+ months will be of little value.  Within the last week this happened with biotech:  it had been happily chugging along with good but not extraordinary outperformance of the indices.  Then it got clobbered with huge excessive relative volatility to the downside.  Had you been adapting your monitoring of volatility you would have been prepared, whereas if you stuck with your 6-month window you would have been clobbered along with the group.

My advice to you is to learn how to deal with the market adaptively.  I assure you that if you have a monitoring mechanism which you like, if you make it adaptive you will improve results dramatically. And it doesn’t matter which signal type (momentum, volatility, sentiment) or time frame (intra-day to weekly) you favor.
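One possible way to make a volatility window adaptive, offered as a sketch rather than the author's actual mechanism: fall back to a fast window whenever recent volatility blows out relative to the long window. The 126/21-day windows and the 1.5x trigger are assumptions for the example.

```python
import numpy as np
import pandas as pd

def adaptive_vol(returns: pd.Series, base: int = 126, fast: int = 21,
                 ratio_trigger: float = 1.5) -> pd.Series:
    """Use the 6-month (126-day) volatility by default, but switch to
    the 21-day reading whenever recent volatility has blown out
    relative to the long window, so a biotech-style shock registers
    immediately instead of being averaged away."""
    slow_vol = returns.rolling(base).std()
    fast_vol = returns.rolling(fast).std()
    blown_out = fast_vol > ratio_trigger * slow_vol
    return fast_vol.where(blown_out, slow_vol)

# Synthetic example: quiet for 300 days, then a volatility shock.
rng = np.random.default_rng(3)
rets = pd.Series(np.r_[rng.normal(0, 0.005, 300),
                       rng.normal(0, 0.03, 60)])
vol = adaptive_vol(rets)
# During the shock the adaptive estimate jumps to the fast reading
# well before the 126-day window catches up.
```

The same switch works for any monitoring mechanism; the point is that the lookback responds to the market rather than staying fixed.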



 A hundred years ago Milutin Milankovich, a Serbian scientist/engineer, didn't have much to do as he was a POW held by the Austrians. So he calculated the pre-historical temperatures of the Earth, based entirely on planetary distances to the sun. Several other scientists persuaded him to go back quite far in time and eventually he calculated the temperatures back a million years. Of course at that time there was no way to prove his work, until in the 1970s data from Antarctic ice cores became available. It turns out his calculations were very accurate, as were similar calculations for Mars and Venus.

If someone a century ago could calculate Earth's temperature a million years ago, the global warming claims of one camp seem to lack significant credibility.

Stefan Jovanovich writes: 

Milankovic's theory is this: "variations in eccentricity, axial tilt, and precession of the Earth's orbit determine climatic patterns on Earth"

The theory of the warmist researchers is that "the addition of combustion gases - most importantly, CO2 - from man-made uses of energy to the earth's atmosphere determine climatic patterns on Earth".

The reason for the falsifications of data by warmist researchers– I assume here that no one denies that these have occurred– is that the theory of man-made global warming requires a dramatic increase both in temperature and CO2 levels during the period when people have been burning stuff. If that cannot be found, then the theory has to contend with the very data that Al Gore found so persuasive– the Vostok ice core samples– and explain why CO2 level increases seem to be a result rather than a cause of the rise in the earth's surface temperature. That non-modeled data (i.e. the ice cores were actually dug out of the earth, not created in a computer model) is inconvenient and true. The Vostok data shows that changes in temperature always precede the changes in atmospheric CO2 by about 500-1500 years.

The usual rebuttal to this evidence and the fact that its data is entirely consistent with the Milankovic theory is something like this: "yes, it's true there is a delayed correlation; but that ignores the more important fact. Once the rise in CO2 levels start, they take over as the most important climate force."

But here, too, the actual non-modeled data presents a problem; the declines in earth surface temperatures that begin the "ice ages" occur precisely when CO2 levels are at their highest. If the Hansen theory's forces are so strong and can overwhelm the mere changes in the Earth's orbit, then how can the 'weak' signal start an Ice Age when the strong Hansen signal says the opposite should be occurring?

The answer to that, of course, is the usual ad hominems that are the ever available rhetoric of the progressive mind: (1) you don't understand, (2) you haven't read our secret data and (3) you are too stupid to understand these things.

I think we have another definitional problem here, HA. "Complete(ly) unbiased description(s) of meteorology-climatology science practices" do not get written by people who write: "as a historical science, the study of climate change will always involve revisiting old data, correcting, modeling, and revising our picture of the climatic past. This does not mean we don't know anything. (We do.) And it also does not mean that climate data or climate models might turn out to be wildly wrong. (They won't.)"



 Yesterday while driving I heard a report of strong auto sales of both domestic vehicles (particularly trucks) and BMW and Audi. These would show up in the Daily Treasury Report as revenues in categories such as customs duties and excise taxes. Today I went looking for them, and sure enough the recent data is positive.

I tend to think of those categories as a good upstream surrogate for discretionary purchases. There are excise taxes on auto sales, gasoline sales and even on tanning salon sales.

In the linked chart, SPX is shown as an historical reference. In my opinion there is not a definitive causal relationship. Historically this had been distorted by the "Cash for Clunkers" program, for example.

But maybe there is a retail recovery.



Does anyone know if there is predictive value to a stock's short interest ratio?

Bill Rafter writes:

Short Interest (SI) is a good area to research. We do a lot of work with it in our shop, and use it in our trading. However, the question you posted was specifically about the SI Ratio, something we consider unworthy of attention with a very few exceptions. If that ratio is all you are going to focus on, we suggest watching a good movie instead.

Many people simply look at the SI Ratio because it is available, say on Yahoo, Google or the Nasdaq websites. The problem is that ratio is more dependent upon changes in volume than changes in SI. Volume is also an area worth your attention, but not in that ratio. We maintain that there are better SI ratios to look at rather than that one. But to do that you are going to have to spend some time getting the data, which means not only SI and volume, but outstanding shares, insider ownership and institutional ownership. Then you will find the profitable relationships, but anticipate considerable work.
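For concreteness, here is the popular ratio next to one float-based alternative. All numbers are invented, and this particular alternative is only an illustration of the kind of ratio the post has in mind, not the shop's proprietary measure.

```python
# The popular "SI ratio" is days-to-cover: short interest divided by
# average daily volume. Its swings are mostly volume swings. Ratios
# built on slower-moving denominators (float, insider and
# institutional holdings) are steadier. All figures below are made up.
stock = {
    "short_interest": 12_000_000,
    "avg_daily_volume": 4_000_000,
    "shares_outstanding": 100_000_000,
    "insider_shares": 20_000_000,
}

days_to_cover = stock["short_interest"] / stock["avg_daily_volume"]
float_shares = stock["shares_outstanding"] - stock["insider_shares"]
si_pct_float = stock["short_interest"] / float_shares
# days_to_cover = 3.0 days; si_pct_float = 15% of the float. The
# latter moves only when short positions (or the float) actually change.
```

Building several such ratios from the raw inputs is the "considerable work" the post warns about.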

We have only found the volume contributor to the SI Ratio useful when in a price explosion the volume exceeds the number of shorts. That circumstance suggests that the price explosion (of a high-SI stock) is a result of short covering, which has now been exhausted. Obviously don't buy that stock!

Phil Erlanger is the regarded expert with SI data. His approach was to find stocks that one liked (say on the basis of momentum or whatever) and then look for SI patterns that would enable a greater run-up. We took the opposite approach, looking to first find good short interest patterns, and go from there. What we found was that Erlanger's approach is the better of the two if one is taking a cursory look at SI. That's because fully half of the stocks with high SI deserve it – they are headed south. Of the remaining percentage, about half of those mill around going nowhere. That leaves about a quarter of high-SI stocks overall that benefit positively, a few of which really take off.

Despite the above warnings, we would not purchase a stock without at least making ourselves aware of the SI.



For historical reasons I manually downloaded the Daily Treasury Statement files and dumped them in a folder. Once there we go through our data mining process and extract what we want automatically. Our process could be made completely automatic, but it has not been a big enough inconvenience for us to code it. For virtually all other data our downloading and extraction is completely automatic.

Several weeks ago I noticed a change in the Treasury's website that irregularly makes me click once or twice more each time I download (which is only once daily). It has puzzled me why Treasury would take something that worked perfectly and change it such that it no longer worked perfectly. It has just occurred to me that the new little two-step process would certainly screw up an automated download and extraction procedure. Also of late the data is less and less favorable to a government that may wish to claim everything is rosy.

Am I being paranoid in thinking that there might be a connection?



One wonders if the stooges, the puppets from the centrals will be hauled out to make reassuring comments about the health of the economy and the resonance of the qe's. After all, small people in emerging markets might be hurt and the idea that has the world in its grip will come into play. Trading it from that cynical world view has not been entirely unprofitable the last two days. But it was entirely unprofitable on Monday. However, it often takes a day for the puppets to receive their marching orders.

Rocky Humbert writes: 

I note a Bloomberg news story from this morning that the INVERSE VIX ETF (XIV) had a record inflow of money last week — the largest amount since the ETF started trading in 2010. This tells me that the market has become conditioned to extrapolating the behavior of the past five years.

I believe that among the biggest challenges in investing and running one's models is figuring out when the game has changed (or "ever changing cycles").

I am not making a prediction about when the game will change. But the risk is rising substantially. Conditions precedent for the game changing are (1) "Everyone" is conditioned for the same behavior; (2) High leverage in the system; (3) Rich valuations and/or optimistic assumptions; (4) Subtle changes in monetary conditions and/or other related expectations; (5) A long period of time since things looked really scary. (FWIW, NYSE December margin levels are at records.)

Think back a few years — what were you thinking then? How many people laughed at "Green Shoots"? Why do people believe the bankers now, when they didn't back then? What is different? I'll predict that we don't have another financial calamity. But to quote the wisdom of Roseanne Roseannadanna, "If it's not one thing, it's another."

Bill Rafter writes:

For the next shoe to drop you may want to look at my post of last week.

Gary Rogan writes: 

When I said we'll see 5% down I was using every one of those reasons other than 4 that I don't understand other than slightly lower QE. The margin leverage chart is the scariest thing in the world if you are looking for scary things.



 What are the major 3 body markets that orbit around each other in our solar market system and how do their epicyclic orbits relate to each other (in the future)?

Bill Rafter writes: 

I think the most important word in the Chair's sentence is "epicyclic", specifically because it is non-linear. Stocks exhibit non-linear behavior, and seemingly always have. Bonds used to behave very linearly, but now behave similarly to stocks, although contrarily so. We have yet to find the defining characteristics of currency markets, but keep trying, hoping to find useful information relating to other markets. Gold is also a tough one, making one think it is a rigged game. REITS behave like a hybrid equity-debt vehicle. We tend to think of REITS as a free market version of the variable annuity (but without the huge vig).

Shane James writes: 

Arguably, and addressing prediction, the big 3 change regularly.

Simple stuff like listing the biggest moves over X time periods is a useful, elementary starting point for cross market prediction.

Anton Johnson writes: 

Sadly, our system is unstable with the sub-stellar central mass consisting of the collective Central Banks. Orbiting, and sometimes consumed by, the central mass are the various financial instruments periodically switching in relative predominance as they accrete/disperse assets due to the actions of the brown dwarf.



December 30, 2013

 My apologies in advance for a seemingly strange piece of research.

Recently a Speclister posted a link to a site which inferred considerable success in trading various markets on the basis of solar and lunar events. We have all seen these for decades. There are lots of charts that seemingly draw the connection between full and new moons, sunspots, geomagnetic radiation and of course the financial markets. I myself found nothing in the way of serious data that would make me want to trade on that basis, but the site exuded so much confidence that it was hard to dismiss out of hand.

The site like many in the genre spends a lot of space arguing WHY. You know, humans are mostly water and Earth's tides are controlled by solar and lunar gravitation, so why not humans. Personally I don't care what the reason is, as long as a reason exists and the data is non-random. In this case I am going to assume that a reason exists, but is not discernible. So the answer was for me to take a look at the data with our research tools.

My period of study was from January 1, 2005 through December 27, 2013. That could always be enlarged if some worthwhile results were forthcoming. As a benchmark equity asset I used SPY, as it included dividend yield and was a real and tradable market.

Over the period SPY achieved a 7.4 percent compound annual rate of return (CAROR) while experiencing a 60.83 percent maximum drawdown (DD). Thus the return to risk ratio (R/R) was 0.12. Full statistics and a chart are here.

The site made some strong claims about the value of the full and new moon dates, so my first look was there. To look at solar influences I would need a significant number of cycles and they are approximately 11 years each. First I bracketed the half-month on either side of the full moon, and the same with regards to the new moon. With regards to the full moon, you would buy SPY at the first quarter and hold for the half-month through the full moon, selling at the third quarter. When you were out of the market you were in cash, earning nothing. Thus the following constitute programs in which you are only invested for half the possible time:

Full Moon Bracketing:    2.10% CAROR,    36.00% DD,    0.06 R/R
New Moon Bracketing:     5.19% CAROR,    47.98% DD,    0.11 R/R

This agreed with the site in that longs would favor the new moon. But if the full and new events corresponded to troughs and peaks, we had to look at equity growth between the events. This also constituted investing for only half the possible time.

New to Full (waxing):        9.82% CAROR,    46.08% DD,     0.21 R/R
Full to New (waning):        -2.2% CAROR,    41.17% DD,     -0.05 R/R

These results would suggest that equity prices tend to trough at the full moon and peak at the new moon, exactly as conveyed by the website.
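The mechanics of these bracketing tests can be sketched as follows. The statistics follow the definitions above (CAROR, maximum drawdown, and their ratio), but the function and its inputs are illustrative assumptions, not the actual study code.

```python
# Hedged sketch of the bracketing tests: hold SPY only on designated in-market
# days, sit in cash (earning nothing) otherwise, then report CAROR, maximum
# drawdown, and the return/risk ratio. Not the actual study code.

def strategy_stats(daily_returns, in_market, trading_days_per_year=252):
    """daily_returns: simple daily returns of SPY, oldest first.
    in_market: booleans of the same length (True = invested that day).
    Returns (CAROR, max drawdown, return/risk ratio)."""
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for r, held in zip(daily_returns, in_market):
        if held:
            equity *= 1.0 + r              # invested: compound the day's return
        peak = max(peak, equity)           # running equity high
        max_dd = max(max_dd, 1.0 - equity / peak)
    years = len(daily_returns) / trading_days_per_year
    caror = equity ** (1.0 / years) - 1.0
    rr = caror / max_dd if max_dd > 0 else float("inf")
    return caror, max_dd, rr
```

Feeding it an in-market mask for the half-month around each new moon versus each full moon would reproduce the style of comparison in the tables above.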

Links to stats: 






Steve Ellison writes:

To what does the t score of 3.46 refer, and how significant is it given multiple comparisons (you tested 4 subsets of data, and one looked pretty good)?



 My first experience with "serious" fraud was in grammar school. I had advance knowledge and just sat and watched the whole thing come off.

I was either in Fifth or Sixth Grade. My next door neighbor Paul was two years older, and Harry further up the block was in high school. Harry had one of those dream jobs: he worked as an usher at the local theatre for the Saturday kid matinees. It was a dream job because he got to see all the movies for free, and got paid to boot.

This theatre occasionally had giveaways to boost the audience. Well this one time they announced they were giving away a free bicycle (a real stunner) to someone in attendance. All you had to do was be in the theatre with a paid ticket. Of course they announced it for weeks and come the appointed Saturday, the place was packed. Kids were even sitting in the aisles as there were no serious fire regulations. There must have been 400 kids there, every one of which dreamed he was going to win that bike.

I sat next to Paul who told me in advance he was going to win. After the first show (the Saturday kid event was always a double-feature), the manager got up on stage with Harry the usher holding the giant bowl with all the tickets. Harry draws the winning ticket and gives it to the manager, who read out the number. Paul jumps up shouting "I won, I won". The next day Harry was riding around the neighborhood with his new bike. I was too young to inquire about the quid pro quo between Paul and Harry, or even perhaps between the manager and Harry. And of course I was in awe.

In many ways it was beauty in its execution. Not unlike the time the former First Lady of Arkansas used the futures markets to bag a payoff. But that's another story. Here's what made me think of the bicycle giveaway long ago:

Today I saw a news item that if no one wins the current $600+ million lottery and perhaps the next upcoming one, then the jackpot could be $1 billion. With this being the Christmas season, there could not be a better time to let no one win and run the jackpot up to all-time highs. All those people hoping and praying to hit the big one. All the promoters would have to do is look into their computers, find number combinations that went unpurchased, and draw one of those for several weeks running.

Now I'm not suggesting that they give the winning ticket to one of their buddies, like Harry and Paul arranged with the bicycle. But this could all be done with the goal of redistribution of wealth from those who purchase lotto tickets to the tax coffers of the states, who of course get most of the winnings. The individual winner himself does not matter, he's just window dressing.

Just thinking out loud.



 We have gone almost a year with the two percent additional payroll tax reinstated. The results are worse than expected.

What would have been expected is an increase in employment, but not enough to offset the effective tax increase. The reason you would expect an employment increase is because Americans are a resilient lot and get bored with sitting around. Sooner or later they find a way to get back to work. That is not what we have: The growth in payroll taxes is now negative, indicating a net loss in payrolls. The data is effectively "cap-weighted" so it might mean a loss in the number of jobs or switching to lower pay, as when a nuclear engineer becomes a sanitation engineer.

Philosophically, tax rate increases for individuals generate increases in tax revenue for governments. This is exactly what is expected by government, but the problem is that government does not know where to stop. They expect further rate increases to result in commensurate increases in revenue. But government neglects that individuals have a say in this: the latter can vote with their feet by leaving the workforce. America is now on the wrong side of the Laffer Curve.

Additional amounts taxed (N.B. the PPACA has been ruled by the Supremes as a tax) will have a continued negative effect.

A fellow Spec-Lister suggested I look for structural/secular changes in the employment data. My initial thought was that humans are skilled at obtaining freebies, and the disability payments coming from Social Security seemed a perfect target. Consider, faced with a lay-off, why not see a doctor, claim clinical depression and get yourself on disability? The long-term advantage of doing so may mean that you never have to work again, which would not be the case with unemployment benefits. But is my conspiratorial claim borne out by the data?
The short answer is "No". However there is more, should you feel inclined.

Firstly, which data does one use? The Social Security Administration issues a report showing claimants for disability and the average claim. Multiply the two and you get the total value of disability benefits paid. Alternatively, you can go to the Treasury website and see their ledger of what actually was paid. Although the two sources (Soc.Sec. and Treasury) mimic one another, they are decidedly not identical. Of specific concern is that they differ by an odd order of magnitude, one which is not constant over time. So one must ask which source to trust.

Chart of Disability Benefits Paid

Chart of the 12-month rates of change of benefits paid

My experience suggests that the Social Security data looks as though it has been manipulated or "cleaned up". The Treasury data looks as though it contains a degree of static, which is more realistic. My guess would be that the Treasury data is "raw", while the Social Security data is "adjusted". In general my personal preference is for raw data if I cannot reverse engineer the adjustments. Both data sources indicate a relative decline in the yearly rate of change, decidedly counter to my pre-supposed conspiracy claim.

If you look a little deeper into the Treasury data you find a profound cyclic influence:

Cyclic disability benefits

This was a surprise. I did not assume the claimant had much control over the process, but the data indicates that summer is a key time to receive benefits. Oh, the joy of it all. [Skeptics should note that the cyclicality is not related to the number of days in the various months.] The cyclicality also suggests that disabled persons do return to the workplace. (I would have lost that bet.)

What is the current trend?

trend slope in disability benefits paid

For whatever reason, the drift of disability benefits is not increasing. One might optimistically believe that because conditions are not worsening, they must get better. Such logic could cost an investor a lot of his wealth.

Rocky Humbert replies: 

There was a Washington Post story yesterday that adds some color to this discussion. It notes a fact: 1.3 Million workers will have their "emergency" unemployment benefits end on December 28, unless Congress renews this aid program. This is a big number. And I was unaware of this fact. And as I consider myself somewhat informed about stuff, I'd guess relatively few market participants are aware of this fact either.

The writer then looks at the probability that a lot of these folks will file for disability claims. The author cites a study (which I have not read) which suggests that they won't. I have no opinion except that people respond to incentives. And some number of these 1.3 Million will surely find their way back into the reported labor force. This will likely distort the tax revenue, payroll, and other data to some degree in the first months of 2014.

I am raising this point not because I have any view about the currently big number of people receiving disability or what it means. (That's HR Rogan's job.) Rather, I am raising this, because the employment and tax numbers will, I believe, look really odd in January and February. (HR=hand wringer)

The story can be found here:  "Where Will Workers Go After Their Jobless Benefits Expire? Probably Not on Disability"

Jeff Rollert adds: 

Just to add another vector to the discussion, I would also argue that, since 2000 (the benchmark year in the article), the entry into the global labor pool of hundreds of millions of smart, motivated Chinese workers (not to mention Vietnamese, etc) has had a significant impact.

From the MIT Technology Review: "How Technology Is Destroying Jobs":

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson's contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who's worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee's claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the "great decoupling." And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.



Some preliminary thoughts on the running median 2, 3, 4, 1, 7, 8, 9, 3.

A moving median of the first 5 is 3, of the next 5 is 4, of the next 5 is 7, of the next 5 is 7– it's a good indicator of trend. First recommended to me 53 years ago by Fred Mosteller, Chairman of Harvard's first statistics dept.

It is more stable than the moving average as outliers are removed from the sample. It is easy to compute fast with computers for small running numbers like 5 or 100 by repeated sorts. For higher numbers, you can form two groups, those below the median and those above. As a new number comes up you place it in one of the two groups, higher or lower, and take away the oldest number. Then adjust to make the two groups equal again. It is not used as much as the moving average, so it shouldn't be hurt by front running or by spikes when crossovers occur. It has a defined distribution even when the underlying distribution has inordinate extreme values, as frequently occurs with Cauchy or similar distributions with infinite variance.
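The procedure above can be sketched briefly. This version keeps the window sorted with bisect rather than maintaining two explicit balanced groups; the results are identical, and for small windows like 5 or 100 the linear insertion cost is negligible.

```python
# Minimal running-median sketch: keep the current window sorted, insert each
# new value, evict the value leaving the window, and read off the middle.
from bisect import insort, bisect_left

def running_median(series, window):
    sorted_win, medians = [], []
    for i, x in enumerate(series):
        insort(sorted_win, x)                        # place the new value
        if i >= window:                              # evict the oldest value
            sorted_win.pop(bisect_left(sorted_win, series[i - window]))
        if i >= window - 1:
            mid = window // 2
            if window % 2:
                medians.append(sorted_win[mid])      # odd window: middle value
            else:                                    # even: mean of the two middle
                medians.append((sorted_win[mid - 1] + sorted_win[mid]) / 2)
    return medians

# For the series 2, 3, 4, 1, 7, 8, 9, 3 with a window of 5 the moving
# medians are [3, 4, 7, 7].
```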

It's probably a good thing to use when using nearest neighbors as predictors, i.e using the median and running median to compute your predictors. It deserves testing in real life markets for real life applications.

Ralph Vince writes:

It is the indicator of "expectation," as evidenced by human behavior itself, and not the probability-weighted mean.

Bill Rafter adds: 

Moving medians have some distinct advantages.

They represent real values that occur. For example, taking the average of 1, 2 and 5 gives you 2.67, which never occurred, whereas the median 2 did occur. Continuing with the same series, should subsequent values in the series be less than 5, the value of 5 will not occur as a moving median. Hence, the moving median eliminates outliers.

One of my appliances has three thermometers to measure temperature. The value displayed is the median (and hence a series of moving medians). Should one of the thermometers be broken, or distorted by being in a particularly hot or cold spot, the median will still give me the best estimate. This elimination of outliers is very useful.

Should you have data whose importance relies upon only crediting occurring values and need to eliminate outliers, then you should test moving medians. We ourselves had experimented with them regarding price series and written extensively about them, but do not use them in our current work. Our reason is that we consider the outliers in a price series to be particularly important.

Kim Zussman adds:

The following is a plot of the ratio of the SP500's 10-week moving average to its 10-week moving median for the recent 5 years (SP500 weekly close data).



 For those of you interested in jobs data, this chart might be of interest.

The red line is very important, showing a 2 percent increase. Ceteris paribus, 2013 payroll tax receipts should average 2 percent above 2012. However as time has progressed, the government has received less and less of this increase, and current receipts growth is running negative to the prior year despite the increase in rate. This is the Laffer Curve at work.
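The base-effect arithmetic involved is simple enough to sketch; the receipt figures here are invented solely to show the mechanics.

```python
# Hedged illustration with invented numbers: year-over-year growth of monthly
# payroll-tax receipts. Once the tax-increased 2013 level becomes the base,
# even unchanged receipts show zero YOY growth.

def yoy_growth(receipts):
    """receipts: monthly values, oldest first; YOY growth from month 13 on."""
    return [receipts[i] / receipts[i - 12] - 1.0 for i in range(12, len(receipts))]

base_2012 = [100.0] * 12
raised_2013 = [102.0] * 12     # 2% more collected after the rate increase
flat_2014 = [102.0] * 12       # receipts merely hold the 2013 level

growth = yoy_growth(base_2012 + raised_2013 + flat_2014)
# 2013 vs 2012 shows +2%; 2014 vs 2013 shows 0% despite the higher rate.
```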

As of January 2014 the YOY growth will use as its base the tax-increased 2013 data, which should be interesting.



I have a model which at its root is theoretically (but not operationally) similar to the Fed Model, and its job is to tell me where to allocate assets among equities, debt, gold and/or REITS. I also include a few other items as 'tracer bullets'. At this time the allocation model would have most of its money in equities, and importantly no money in REITS. However when I look at my list of 30 stocks to buy, 23 of them are REITS and 2 are utilities. So if I have to rotate out of something, my only choice is cash.

Could this suggest something ominous?



 It's funny that the jobs report is not compiled yet. The Labor Dept. must have the data they use, as that report consists of happenings through 9/12. We use Dept. of Treasury as our source and we have that information through 9/27. The Treasury data is generated electronically and we might get the 9/30 report later today unless they intervene.

Bottom Line: The YOY growth in payroll tax receipts (seasonally adjusted), which is our substitute for employment, is at the lowest level of the year, whether you mean calendar year or adjusted fiscal year. But of course, you might never see that report.

Let's say you were in charge of the Administration of a country in a similar circumstance. If you knew the jobs data was fantastic, would you release it? A good economic report might be taken to mean that the country was not as fragile as previously thought, and could therefore withstand a shutdown for a while. On the other hand, if the jobs data were bad, it might mean the country was very fragile, and that the Administration should compromise quickly, effectively forcing your hand. And of course in the latter scenario you should be embarrassed by the fact that nothing you had done economically for 5 years had been successful. Your best option might be to wait until you needed a trump card, and then pull it out of the hat. Plus (if you wanted) you would have additional time to massage the data.



Attached is a weekly chart of CSI300 index (representing 300 large stocks on Shanghai and Shenzhen exchange) from January 2007 to now.

Would anyone call an upcoming bull market from this?

Perhaps the chart is not too obvious yet. Fundamentally, it is true that many foresee a slowdown in GDP growth in the coming years. But what is important now is that people can anticipate some structurally healthy growth. And this is very different from the past 5 years when the growth seemed high but the market mainly saw it as unhealthy and stayed essentially hopeless. The new government seems to deliver a lot more confidence to the market with a new direction for the economy.

Any thoughts?

Bill Rafter writes: 

One suggestion I have is that you ask yourself two questions:

1. Consider the participants in that market; what time frame do they typically observe in terms of long term perspective (i.e. lookback period), and

2. How frequently do they watch the market?

The reason to care what others do is because they are your competition. The money you make, you get from them. Thus, know them!

Point #1 may also be related to taxation. Is there a period of time in China such that if a position is held that long it qualifies for a tax break? In the U.S. that means it qualifies as a "long term capital gain" with a significantly reduced amount going to the confiscatory government.

If there is no such period, then it's nice to see history going back to 2007, but it is irrelevant to what is happening now. However it is good to have history as you can easily see with a visual how a market behaves with the signal process you use. You should statistically test, of course, but a quick look is valuable. (Tukey said so, and he is a god in this area.)

Thus your window of observation for decision making (as opposed to history) should not go back more than perhaps 50 percent beyond the period identified in point #1. In our case (in the U.S. with equities), we do not look back farther than a year and a half. Frequently as little as four days.

Point #2 is the shorter end. If everyone watches the market every day, then by limiting your snapshots to weekly, you are discarding valuable information. Ask yourself, "Why would you ever want to eliminate valuable data?" You would not do that with a neural net, so why do it with real intelligence? Some would posit that weekly information (data or charts) eliminates some noise. However we would argue (and have demonstrated) that it is impossible to separate signal from noise. Specifically I would suggest that if someone gave me what they considered noise, I could find some signal within. It may not be the best example of signal, but it's in there.

Leo Jia adds: 

Thank you very much, Bill, for the precious advice.

There are a couple reasons for me to have attached the weekly chart starting from 2007.

1. I look for a possible multi-year bull market, and for that to me the trend looks clearer on the weekly chart.

2. One key reason for the past few years' laggard market, aside from those fundamental reasons I outlined, is the bull-run and crash of 2007-2008. The bull-run was solely due to the government's stock market reform initiative, which tried to make all shares (government shares and floating shares) equal. The crash then was mainly due to market suspicion that the resulting floatable government shares would subsequently flood the market. Now, 5 years on, the flooding of the government shares, if it happened at all, is likely to have settled down.

To answer your two questions:

1. There is no tax incentive in China encouraging people to hold longer. Holding periods are generally much shorter. They can be as short as a few months for funds, and as short as a few days for individuals.

2. Most participants watch the market every day.

Perhaps one thing different in China's market is that large market movements are all initiated by government policies. Market enthusiasm is summoned only when a government direction can be imagined as positive.

I am not a government analyst, but traditionally, each government in its 10 years tended to create at least one big upward move in the market. Looking at this government, its initial months already showed signs of its focus on finance (along with new direction on economy). The recent launch of bond futures is one such key move.




 Voyager 1, launched back in 1977, has become the first man-made object to pass into the unknown vastness of interstellar space. News Report.

I have a serious challenge for you. Name a single man-made device that has worked continuously for 40+ years without any human physical intervention. The winner will receive Rocky's usual prize: A unique gift of dubious monetary value.

Chris Cooper has a go at it: 

There must be any number of vintage self-winding watches that still work. If it must be wound, does that still match the spirit of your inquiry? Of course, there are many watches and clocks which must be wound by hand that are still operating. You can find some self-winding watches for sale on eBay.

Kim Zussman replies:

I am man-made and have worked continuously for well over 40 years (though currently half time for the government).

Bill Rafter adds:

Without doing any looking, there are lots of low-tech human creations that have survived the test of time. Many dams have performed their functions for decades and even centuries. I'm not speaking of hydroelectric dams, but simple river control devices. The Marib dam in Yemen is still there (after two millennia) and would be working if there was enough rainfall. Many artificial harbors also have exceptional longevity. Some Roman harbor constructions are still operational; the Romans having been expert in concrete manufacture. And don't forget Roman roads.

In more recent times, I am certain there is some electrical cable that is still functioning from half a century ago, if only to ground lightning rods.



There is an issue about the employment numbers that may not be getting proper attention - Section 530 and its interaction with state unemployment benefits. Section 530 of the Revenue Act of 1978 was the Carter Administration's gift to the farm belt. Under Section 530 an individual will not be classified as an employee if the alleged employer has a reasonable basis for treating that person as an independent contractor. "Reasonable basis" can be proved by:

(1) "Judicial precedent, published rulings, or technical advice with respect to the taxpayer, or a letter ruling to the taxpayer; (2) "A past IRS audit of the taxpayer in which there was no assessment attributable to the treatment (for employment tax purposes) of the individuals holding positions substantially similar to the position held by this individual"; or (3) "Long-standing recognized practice of a significant segment of the industry in which the individual was engaged."

The IRS has a "whistle-blower" form that individuals can file to challenge their classification - the SS-8. But - and here is the kicker - on the form itself the IRS warns the taxpayer that "A Form SS-8 should not be filed for supplemental wage issues." What this means, in real terms, is that people who get "fired" from their independent contractor jobs cannot use the IRS to bully state unemployment agencies into paying them benefits.

Since the states all have incentives to cut down on the cash drain from unemployment benefits, even the deep blue ones like California do not make much effort to reclassify contractors as employees once the issue gets to unemployment benefits. The result is that "the workforce" has more and more people in it who are not now and never will be classified as "employees". "Employment" itself becomes less and less of an indicator of actual incomes because the payroll numbers cannot reflect the contractors' fortunes (both good and bad).

Bill Rafter writes: 

For the "percent unemployed" number, reclassification as to who is or is not an employee may have an impact. However, this is the beauty of simply looking at the payroll tax data, as all persons (traditional employees and independent contractors) are required to pay.

Victor Niederhoffer writes: 

But with all the seasonal adjustments and other things that enter the employment numbers, how can payroll numbers not using the Census seasonal adjustments be meaningfully compared? 

Bill Rafter elaborates:

It is the seasonal adjustments by the officials that we distrust. We think the adjustments are a fudge factor to be used by an administration eager to paint a picture. I don't know who is responsible (BLS or Census), but their adjustments historically have made little sense. BTW, the Fed also could use someone better at seasonal adjustment, although their number jockeys are better than whoever plays with the payroll data.

A problem is (a) do you want the truth, or (b) do you want to make money? If you are decent at it, doing your own work will get you the truth. However if the world follows the official releases as gospel, you could be right and broke. I have been in that predicament a few times.



Is an asset up or down? How do you decide?

For a somewhat offbeat reason we need an unwavering determination as to whether or not a particular asset was up or down. We do not care by how much. Obviously if something is up or down by, say, 2 percent, there is no argument. The problem arises when the data are not definitive. After all, you occasionally have days when the Dow or S&P go one way and the Nasdaq goes the other. So which one is right?

The standard is, of course, the close. That would be right in many ways. Most volume occurs at or near the close, and margin calls are determined by the close. But many [technicians] use midrange, or an average of the High, Low and Close. Institutions have been known to care about the volume-weighted average price or VWAP. A priori we thought VWAP would be best for our purposes. But we were wrong.

Ours was a very limited study. We only cared about 4 assets (all ETFs): SPY, IEF, GLD, IYR. And our definition of right vs. wrong is the amount of flip-flopping during a trend. That is, how often is it wrong? We realize this is all very subjective, but we are not writing a thesis here - we just want the quick and dirty facts. The period we considered: 2005 through the present.
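The study above can be sketched in code. The flip-flop definition below (counting reversals in day-over-day direction) and the crude VWAP stand-in are illustrative assumptions, not the author's actual methodology or data:

```python
# Sketch: count up/down "flip-flops" for several daily price measures.
# A measure that flips direction less often during a trend gives fewer
# false signals. All data here are synthetic.
import numpy as np

def flip_flops(series):
    """Count sign reversals in the day-over-day direction of a series."""
    direction = np.sign(np.diff(series))
    direction = direction[direction != 0]          # ignore unchanged days
    return int(np.sum(direction[1:] != direction[:-1]))

def price_measures(high, low, close, vwap):
    """The four candidate 'price of the day' definitions discussed above."""
    return {
        "close":    close,
        "midrange": (high + low) / 2.0,
        "hlc3":     (high + low + close) / 3.0,
        "vwap":     vwap,
    }

# Toy data: a noisy uptrend; a good measure should reverse direction rarely.
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0.3, 1.0, 250))
high  = close + rng.uniform(0.2, 1.0, 250)
low   = close - rng.uniform(0.2, 1.0, 250)
vwap  = (high + low + 2 * close) / 4.0             # crude VWAP stand-in

for name, series in price_measures(high, low, close, vwap).items():
    print(name, flip_flops(series))
```

With real data, the same loop over SPY, IEF, GLD, and IYR would reproduce the kind of comparison described, with actual VWAP substituted for the stand-in.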

It turns out that VWAP is not best. It gives a lot of false signals. This was good news for us as we will not have to acquire VWAP data.

That's all we really cared about. However, the fact that institutions take care in getting VWAP price executions, and the fact that VWAP (at least in our limited study) gives false information, together suggest that someone (a flexion, perhaps) has a stake in effecting that false information.



This is a visual representation of non-payroll tax receipts by Uncle Sam. I am fully aware that corporations and individuals are incentivized to find accountants who will keep these numbers as low as possible, but that tendency does not change over time.



 This article shows the results of an experiment on E. coli bacteria detailing the survival or death of the bacteria in response to how they handle introduced exogenous stimuli. The upshot is that small changes in exogenous conditions can lead to large differences in outcomes. Surely this is a rich field for market-related phenomena: how small changes in one input (say, rates) may lead to large movements in other markets (say, currencies) when the dependent variable is already under some stress.

Pitt T. Maner III writes: 

This is a really interesting field.

It looks like bacteria have been "hedging their bets" for quite some time. And they have a type of "memory" that influences their response to current environmental conditions. On a larger scale it is interesting to note what happens to the ecology of a system when a "keystone species" is removed. The field of "synthetic ecology/biology" looks to have important findings for a wide range of fields and the bacterial algorithms already developed are being used for engineering problems.

1. "Bet-hedging in stochastically switching environments":

"We investigate the evolution of bet-hedging in a population that experiences a stochastically switching environment by means of adaptive dynamics. The aim is to extend known results to the situation at hand, and to deepen the understanding of the range of validity of these results. We find three different types of evolutionarily stable strategies (ESSs) depending on the frequency at which the environment changes: for a rapid change, a monomorphic phenotype adapted to the mean environment; for an intermediate range, a bimorphic bet-hedging phenotype; for slowly changing environments, a monomorphic phenotype adapted to the current environment. While the last result is only obtained by means of heuristic arguments and simulations, the first two results are based on the analysis of Lyapunov exponents for stochastically switching systems."

2. "Memory in Microbes: Quantifying History-Dependent Behavior in a Bacterium":

"Your average bacterium is unlikely to recite π to 15 places or compose a symphony. Yet evidence is mounting that these 'simple' cells contain complex control circuitry capable of generating multi-stable behaviors and other complex dynamics that have been conceptually linked to memory in other systems. And though few would call this phenomenon memory in the 'human' sense, it has long been known that bacterial cells that have experienced different environmental histories may respond differently to current conditions [1]–[3]. Though some of these history-dependent behavioral differences may be physically necessary consequences of the prior history, and thus some might argue insignificant, other behavioral differences may be controllable and therefore selectable and even fitness enhancing manifestations of memory."

3. The work of Professor Robert T. Paine and the concept of the "keystone species" where an organism has a big effect relative to its abundance:

"It was a ritual that began in 1963, on an 8-metre stretch of shore in Makah Bay, Washington. The bay's rocky intertidal zone normally hosts a thriving community of mussels, barnacles, limpets, anemones and algae. But it changed completely after Paine banished the starfish. The barnacles that the sea star (Pisaster ochraceus) usually ate advanced through the predator-free zone, and were later replaced by mussels. These invaders crowded out the algae and limpets, which fled for less competitive pastures. Within a year, the total number of species had halved: a diverse tidal wonderland became a black monoculture of mussels."

anonymous adds: 

 OK, what about slime molds (particularly Dictyostelium discoideum)? It has the absolutely stunning biological characteristic that it spends much of its life as thousands of individual cells and at other times as a single entity.

When times are good for Dictyostelium discoideum, its 'cells' wander off and enjoy themselves. However, in less hospitable environments the 'swarm' of cells coalesces and forms a single entity.

Apparently the cells emit acrasin (cyclic AMP, or cAMP), which carries information useful to other cells.

When things start to look tough, the cells pump out increasing amounts of AMP and begin to cluster. Other cells follow these trails, and the mass grows toward a completed whole.

Now, I wonder about the stock market. During the regular upward movements most of the components are doing their own thing, following their oscillations generally higher…. When 'it' hits the fan, the correlations between the stocks increase rapidly to 1.0 and they form a single bearish, growling entity.

Now, without pushing the analogy too far, I wonder if stocks 'transmit' statistical information (AMP, to follow the analogy) to each other (in this context they would not so much transmit as 'exhibit' some form of common statistical behaviour) that forces the behaviour of component stocks into a more correlated state.

Testing possibilities are legion.
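One of those tests can be sketched directly: measure the average pairwise correlation of stock returns in a rolling window and see whether it spikes during a shared decline. The window length and the synthetic "calm vs. panic" regimes below are arbitrary assumptions for illustration:

```python
# Sketch: rolling mean pairwise correlation as a "coalescing" detector.
import numpy as np

def mean_pairwise_corr(returns, window=60):
    """Rolling mean of off-diagonal correlations; returns has shape (days, stocks)."""
    days, n = returns.shape
    out = np.full(days, np.nan)
    off_diag = ~np.eye(n, dtype=bool)
    for t in range(window, days + 1):
        c = np.corrcoef(returns[t - window:t].T)
        out[t - 1] = c[off_diag].mean()
    return out

# Toy regimes: independent stocks early, a shared "bear" factor later.
rng = np.random.default_rng(1)
calm   = rng.normal(0, 1, (120, 5))
factor = rng.normal(-0.2, 1, (120, 1))            # common downward shock
panic  = 0.2 * rng.normal(0, 1, (120, 5)) + factor
corr = mean_pairwise_corr(np.vstack([calm, panic]))
print(round(corr[119], 2), round(corr[-1], 2))    # correlation jumps in the panic regime
```

Run on actual index constituents, a sustained jump in this series toward 1.0 during selloffs would be the "single bearish entity" of the analogy.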

Gary Rogan writes: 

My general objections to giving some purpose to the market have to do with incentives, or more precisely lack thereof to do anything in particular.

I read a whole chapter of a book on a slime mold presented as an altruism study. The reason it was presented that way is that when the individual slime mold cells cooperate, only the lucky few that join the growing "mushroom" at the right time get to propagate, because they get to form spores only at a particular stage of development of the hastily arranged colony. Nevertheless, when presented with a choice of dying for sure or maybe propagating (and the cells only cooperate when they are close to death), they choose to cooperate and propagate. There is also some amount of deception involved when the cells jockey for position, but not a lot, since any particular placement is hard to achieve.

What is the equivalent reason for stocks to cooperate?

Bill Rafter writes: 

Should what you say about stocks transmitting statistical information occur, it would mean a relative decline of idiosyncratic volatility. That is something we have studied, and we found that when the going gets tough, the idiosyncratic vol grows faster than the market's vol. There are some other measures of "group think" that are good indicators of both the broad markets and individual assets.

I would posit that stocks do not transmit info, but their owners do. Consider the case of futures in which one market takes such a hit as to require significant margin calls. Human nature being what it is, the public sells its winners to finance its losers, and non-related markets dive along with the primary.
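A minimal sketch of the "idiosyncratic vol" measure mentioned above, assuming the standard market-model definition (volatility of the residuals after regressing a stock's returns on the market's); the data and parameters are illustrative, not the study's:

```python
# Sketch: idiosyncratic volatility = std. dev. of residuals from an
# OLS regression of stock returns on market returns. Synthetic data.
import numpy as np

def idiosyncratic_vol(stock, market):
    """Volatility of the stock's returns unexplained by the market."""
    beta, alpha = np.polyfit(market, stock, 1)     # slope, intercept
    residuals = stock - (alpha + beta * market)
    return residuals.std(ddof=1)

rng = np.random.default_rng(2)
market = rng.normal(0, 0.01, 500)
stock  = 1.2 * market + rng.normal(0, 0.005, 500)  # beta 1.2 plus idiosyncratic noise
print(idiosyncratic_vol(stock, market))            # recovers roughly the 0.005 noise scale
```

Tracking this quantity against the market's own volatility over rolling windows is one way to quantify the "group think" effect described.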
