The traditional explanation of a high put-call ratio was panic by the uninformed public. With the S&P 500 at a two-year high today, that scenario seems quite unlikely. Using another method to evaluate the little guys, the non-reportable positions in last week's COT report were net long 3.4% of the S&P 500 open interest (weighted average of bigs and e-minis). The non-reportable net long position has ranged from 0.9% to 5.2% in the past year.
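A minimal sketch of how such a figure might be computed from COT data, assuming (as is conventional) that one e-mini carries one-fifth the notional of the big contract; the contract counts below are hypothetical, chosen only to land inside the quoted range:

```python
def nonreportable_net_long_pct(big_long, big_short, big_oi,
                               mini_long, mini_short, mini_oi):
    # Convert e-mini contracts to big-contract units (assumed 1/5 notional each).
    net = (big_long - big_short) + (mini_long - mini_short) / 5.0
    oi = big_oi + mini_oi / 5.0
    return 100.0 * net / oi

# Hypothetical contract counts, for illustration only.
pct = nonreportable_net_long_pct(40_000, 25_000, 600_000,
                                 850_000, 650_000, 3_000_000)
print(round(pct, 2))   # a net long position a few percent of combined OI
```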
Alston Mabry comments:
IMHO, the relationship between the ratio of leveraged-long instruments to leveraged-short instruments on the one hand, and the market on the other, went through a regime change this summer after the Flash Crash. To paraphrase Inigo Montoya in The Princess Bride: "I don't think that means what I thought it meant."
William Weaver writes:
To pose another question to the list: What types of events can cause a regime change in a financial instrument?
To stick with equities, I use consumer spending data to define high vol and low vol regimes as I have found these fundamentals precede market action (not survey data).
Of my three business partners, two have created models that best my own using only price, volume, and open interest data.
Composition of OI? I.e., the Commitments of Traders report? Larry, any thoughts on how a strategy might work with only one demographic, rather than using the demographic as a signal itself? Maybe ICI data on fund holdings?
There was a company a few years back that suggested the lookback period consist of all the data in which a detrended version of price stayed within a standardized range, so that the last spike marks the end of the prior regime. The flash crash could be part of this.
Regulatory/mechanical changes: fractional to decimal, up-tick rule, naked short rules, no short rules, required reserves, etc.
So this gives us a few starting points: fundamentals, price/volume/OI related observations, demographic of a market, regulatory/mechanical, sentiment data and major dislocations in price (I believe this should be separated from the price category).
David Aronson replies:
With regard to your question: we have recently added a type of modeling that searches simultaneously for an indicator to define distinct regimes (2 or 3) and for linear models that are optimal in each regime (each a distinct range of the regime indicator). I have not done much with it yet, but we were motivated to add this tool because of the phenomenon you describe in your post.
September 21, 2010
You Will Meet a Tall Dark Stranger
Written and Directed by Woody Allen Reviewed by Marion DS Dreyfus
Cast: Antonio Banderas, Josh Brolin, Anthony Hopkins, Freida Pinto, Naomi Watts, Gemma Jones
One of the many delights of a Woody Allen annual release is that though the casts are different, we know these people. And we 'know' their predicaments.
Dysfunctional men and women in fizzling marriages; desperate bi-polars; older men yearning to stave off impotence or irrelevance via nubile honeys; women unfulfilled with their careers or the lack thereof. Career people in chrysalis or limbo.
The metrosexual mélange, famed population of the toney Upper East Side and the favored haunts of the Hamptons. In this film, as in several of his recent outings, Woody Allen situates his attractive band of locals and ex-pats in an arcadian London that rivals his most beautiful Manhattan cinematographic offerings.
Booksellers and art dealers proliferate in the daily scuffles of the couples being scrutinized. People have to make some sort of living, and books and art are industries, but they are 'clean,' nothing to soil the hand or frighten the hansom cabs. And these are the 'jobs' that are accepted and certified by the type of people populating Allen films and indeed, the Woodsman's real life.
Standing in for the now 74-year-old Woody as "Alfie," yet again, is the snowy-topped plutocrat played by Anthony Hopkins. Feeling his prowess fleeing, though he is very comfortably well-off, he abandons his long-time wife, Helena, played by Gemma Jones. The darkly troubled striving failed doctor cum efforting novelist, Roy, stormily played by Josh Brolin, is married moodily to Naomi Watts, Alfie's daughter, whose desire for children is thwarted by her husband until his latest novel or project is accepted.
Until the novel's acceptance, their rent and basics are subsidized by Naomi's somewhat dotty yet credulous mother, supplemented by an art gallery assistant's job for Sally, who works for the suave, Armani-suited Antonio Banderas.
Across the road, unhappy Josh peers from his window at a haunting guitarist, Dia (Slumdog Millionaire's gorgeous Freida Pinto), who represents something he won't quite verbalize. While he waits for the publisher's decision on his manuscript, he begins seeing the red-swathed beauty for walks and lunch, though she is affianced, slated to marry in the immediate future.
Alfie "dates" a long-faced, colt-like call-girl so quirkily tall and slim-hipped that for a good slab of the film one thought she could well be a he. But no. Desperately lonely without the sweet wife he jettisoned, he impulsively asks his call-girl shrewdie to marry him.
Helena, also hard hit by loneliness, takes comfort in tippling and a fortune teller several times a week. Though the seer, Cristal (Pauline Collins, so touching in Masterpiece Theatre's Upstairs, Downstairs) is bogus, Helena believes in her predictions and insights, and her occult delusionism makes her the most serene character in the film. Afterlives, prelife, contacting the dead, stars in conjunction…my, my.
The voiceover narration beloved of Allen in many of his iconic films indicates the points that characters do not or cannot voice. Cockney Charmaine is so used to her Vegas johns that she doesn't even know why a man would have to wait for sex-enhancement little blue pills to take effect. Not in her vocabulary zone. Though she is clearly not his age-cohort, dizzy Charmaine (newcomer Lucy Punch) is not so clueless that she doesn't make hay while she can. Furs, jewelry, apartments, clothing. (Must have been a gig to find this woman, Lucy Punch: freakishly tall, horsey features and skinnily voluptuous, with the longest legs since Tommy Tune. On second thought, maybe she is Tommy Tune…?) Alfie soon regrets his culture-free marriage, and his wallet wails.
Shed of her mopey husband Roy– deep in serious flirtation with neighbor Dia– Sally realizes she wants her boss, gallery owner Greg Clemente. Too late: Greg is already having an affair with the painter represented by the gallery thanks to Sally's own efforts.
Adultery is a given for these troubled urbanites.
Analyzing the title, one can make the case that it is more metaphor than actuality. We all, of course, eventually meet that "tall dark stranger," a morbid coefficient of all Allen films, even his prior laugh-out-loud funniest, now long gone.
The music and cinematography, always deeply pleasurable in Allen films, match the beauty of the sets, shiny London in the spring (even torrential rainy scenes are lit beautifully, and don't destroy the mood). The cast is superb, spot on, as always. Though the reviewer audience we saw it with was tamped down and rarely laughed, trademark Allenesque laughter hails not so much from comic lines or particular set-ups as from the viewer's ready understanding of the comic plights of these messed-up people and their life-trajectories, which so many of us empathize with, if we are not actually living them at the moment. As well, of course, as from rueful character rejoinders.
TALL DARK STRANGER is couched in this vision of bleak pay for play. Woody, with reference to the passage of time and the futility of life ("full of sound and fury, signifying nothing," Macbeth): "After all the ambitions and aspirations, the plagiarism and the adultery, what once was so meaningful won't mean a thing. Many years from now the sun burns out and the earth is gone, and many years after that the entire universe is gone. Even if you could find a pill that makes you live forever, that forever is still a finite number, because nothing is forever."
Talk about fatalism.
All the Woody tropes are here aplenty, in a fondly recalled yet disquieting way. Familial chaos, generational unease, mortal discomforts. One of the memes threading these scenes of striving, plagiarism, delusion and pain is a strong moral dimension. Those who do good (a rarity in an Allen film) are mildly rewarded, though not without effort. Those unable to resist the monumentally daft or unethical, however, are not accorded gentle recompense in the Woody canon, which is as morally connected as you can get: Actions are consequential.
TALL DARK is an edgily entertaining, provocative and eye-filling 100 minutes. This will become vintage–already prize-winning– Woody.
David Aronson writes:
To explain the distinction between falsifiable and non-falsifiable predictions to my students, I would contrast two statements. The non-falsifiable one was a fortune teller's "You will meet a tall dark stranger." The falsifiable one was "You will see a man with one red shoe walking east on 42nd Street whistling 'Satin Doll' before 6 PM next Wednesday." Did my PowerPoints somehow fall into the hands of Woody Allen, and should I ask for a royalty?
There is a vast literature supporting the use of mechanistic decision rules in repetitive situations, [instead of] over-relying on human expertise. Setting aside accuracy for a moment (which is key), humans are quite inconsistent in the way they use information. Show an expert the same fact set on repeated occasions and the conclusions correlate at only about 0.50. In other words, the facts account for only about 25% of the variation in the expert's final conclusion. This suggests that the way information is weighted from instance to instance is inconsistent, or that the expert is considering information outside of the fact set. When it comes to accuracy, the decision algos do better overall.
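The arithmetic behind that 25% figure: squaring the 0.50 test-retest correlation gives the share of variance in the expert's conclusions attributable to the facts themselves:

```python
# If an expert's conclusions on identical fact sets correlate at r = 0.50,
# the facts account for r**2 of the variance in those conclusions.
r = 0.50
variance_explained = r ** 2
print(f"{variance_explained:.0%}")   # prints 25%
```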
Rich Bubb adds:
Here is an interesting example. Soon all of you will be replaced by machines [LOL]:
[An automated investing] system was developed by Robert P. Schumaker of Iona College in New Rochelle and Hsinchun Chen of the University of Arizona, and was first described in a paper published early this year.
It's called the Arizona Financial Text system, or AZFinText, and it works by ingesting large quantities of financial news stories (in initial tests, from Yahoo Finance) along with minute-by-minute stock price data, and then using the former to figure out how to predict the latter. "Then it buys, or shorts, every stock it believes will move more than 1% of its current price in the next 20 minutes, and it never holds a stock for longer."
Source: MIT Technology Review Blog
Nigel Davies writes:
That's a pretty sad comment on fund management standards. Humans have put up one hell of a fight against computers on the chess board using raw (unaided) brain power, and can still beat the best machines when armed with an ordinary PC and a longer time limit. And that's to say nothing of the drubbing that Go players have given computers, even going so far as to give them odds.
-oil spill greater than first estimated
-Euro zone bailout underestimated
-USD rise underestimated
-oil's fall underestimated
-recession length underestimated
-volcano impact underestimated
-gold's persistence understated
-housing slump underestimated
Reality seems to be generally understated and always underestimated.
Easan Katir comments:
With the theory that to solve a problem one first needs to define it accurately, a small point for accurate terms:
In the Gulf, there is not an oil "spill". A spill is what happens when a VLCC or Panamax double hull ruptures and leaks, or when Ms. Napolitano jiggles her teacup after reading her poll numbers. A spill is measurable and contained.
In the Gulf is a giant oil gusher from a super-high pressure reservoir, which has been spewing heavy crude oil and methane 24/7 for over a month with no end in sight, with the potential to become the worst eco-disaster in the history of civilization.
David Aronson comments:
It's rational and in keeping with Bayes Theorem that estimates be updated slowly in response to new information. The related cognitive errors of anchoring and conservatism bias can account for the initial low estimates cited. Then as new information comes in they should be nudged in the direction of the deviation between the prior belief and the new evidence.
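The normal-prior Bayesian update being described can be sketched as follows: the posterior estimate moves from the prior toward the new evidence in proportion to their relative precisions (all numbers below are hypothetical):

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    # Normal prior + normal measurement: the posterior mean is a
    # precision-weighted average of the prior mean and the observation.
    w = prior_var / (prior_var + obs_var)           # weight on the new evidence
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    return post_mean, post_var

# A deliberately low initial estimate, nudged toward a much higher measurement.
m, v = bayes_update(prior_mean=5_000, prior_var=1_000_000,
                    obs=25_000, obs_var=4_000_000)
print(round(m))   # moved 20% of the way from 5,000 toward 25,000
```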
Rocky Humbert writes:
Pravda reports that the Soviets used nuclear explosions five times (from 1966 to 1972) to stop underwater well blow-outs. Here's the Pravda story.
One of the reasons it's critical to assess the true flow rate is that it's a first step toward calculating the comparative environmental damage from a nuclear explosion vis-à-vis a continuing leak for another two months. It's interesting that this is not being discussed in the mainstream media.
Stefan Jovanovich writes:
In 2005 petroleum engineering researchers from Texas A&M University suggested that drilling in the "dangerous and unknown" ultra-deep environment required new blowout control measures: "While drilling as a whole may be advancing to keep up with these environments, some parts lag behind. An area that has seen this stagnation and resulting call for change has been blowout control."
A redundant system might have avoided this, because the Cameron blowout preventer is partially working: the incoming pressure from below the BOP has been measured at between 8,000 and 9,000 psi, while the outflow pressure into the Gulf is 2,650 psi. Two BOPs in series might have done the trick.
I wrote a paper with John Wolberg in 2009 on sentiment indicators. We looked at the VIX and a transformed version of it, which we call "pure VIX". We use a regression model to filter out the effect of recent S&P 500 price dynamics (primarily velocity and volatility) on the VIX. The filtered version provided the best signals for the S&P 500, significant at the 5% level. Anyone interested, email me directly and I will send you a PDF file.
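The paper's exact specification isn't given here, but the filtering step described can be sketched as an ordinary least-squares regression whose residual is the "pure VIX". The data below are synthetic, and the true regressors are whatever velocity and volatility measures the paper actually used:

```python
import numpy as np

# Synthetic stand-ins; the paper's actual velocity/volatility measures are unknown here.
rng = np.random.default_rng(0)
n = 500
velocity = rng.normal(size=n)        # recent S&P 500 price velocity (simulated)
volatility = rng.normal(size=n)      # recent S&P 500 volatility (simulated)
vix = 20 - 3 * velocity + 4 * volatility + rng.normal(scale=1.0, size=n)

# Regress VIX on the price-dynamics variables; the residual is "pure VIX",
# i.e. the part of the VIX not explained by recent price action.
X = np.column_stack([np.ones(n), velocity, volatility])
beta, *_ = np.linalg.lstsq(X, vix, rcond=None)
pure_vix = vix - X @ beta
```

By construction the residual is uncorrelated with the regressors, so any signal it carries is information in the VIX beyond what recent price action already implies.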
Dear readers of this site,
I have written a paper about using statistical classifiers to distinguish bear market rallies from initial rallies in a new bull market. If anyone is interested email me and I will send you a PDF file.
The Golden Cross has received a great deal of publicity. It is composed of two moving averages, the 50-day and the 200-day. When the 50-day crosses above the 200-day, that is a buy signal; when the 50-day crosses below the 200-day, that says sell. Notably, the 50-day crossed below the 200-day in last week's turmoil.
As with everything this must be tested.
First I looked at the 200-day average alone for the daily Dow since 1928. When the Dow closes below its 200-day MA, the return the next day is -.00665%. When above the 200-day MA, the Dow returns +.03968%. These compare to +.023% for all days. The p-values for these two signals were each 7%, so they are not quite significant, but you are not crazy if you still want to believe in the 200-day MA.
The question remains: what kind of improvement do we get when we add the 50-day crossover to the mix? On the sell side, the 200-day MA gave us an expected next-day return of -.00665%. But the 50-day golden cross sell signal gives a return of +.010%. It loses money!
The same thing happens on the buy side. The 200-day MA returns +.03968% versus the golden cross return of +.030%. In both cases, adding the complexity of the 50-day average reduces return relative to just using the 200-day MA alone.
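The mechanics of the comparison can be sketched as below, run on a synthetic random-walk price series rather than actual Dow data (the point is the test structure, not the numbers): next-day mean returns are computed for all days above the 200-day MA, and separately for all days when the 50-day MA is above the 200-day.

```python
import numpy as np

# Synthetic random-walk "Dow" with a small positive drift; not actual data.
rng = np.random.default_rng(1)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, size=5000)))

def sma(x, n):
    """Trailing n-day simple moving average; the first n-1 values are NaN."""
    out = np.full_like(x, np.nan)
    c = np.cumsum(x)
    out[n - 1:] = (c[n - 1:] - np.concatenate(([0.0], c[:-n]))) / n
    return out

ma50, ma200 = sma(prices, 50), sma(prices, 200)

next_ret = np.full_like(prices, np.nan)
next_ret[:-1] = prices[1:] / prices[:-1] - 1.0   # next day's return

valid = ~np.isnan(ma200) & ~np.isnan(next_ret)
above_200 = valid & (prices > ma200)     # simple 200-dma filter
golden = valid & (ma50 > ma200)          # 50/200 "golden cross" regime

mean_above = next_ret[above_200].mean()
mean_golden = next_ret[golden].mean()
print(f"above 200dma: {mean_above:.5%}  golden cross: {mean_golden:.5%}")
```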
David Aronson replies:
This is interesting, Phil… I just want to be sure I understand what you tested. In the simple case using just the 200-day MA, did you look at the next-day return for ALL days whose close was < MA200 vs. ALL days whose close was > MA200? Or did you look at only those days where the close moved across the MA200? I suspect the former, as the crossing days would be very small in number.
With regard to the 50 and 200, I gather you looked at next-day returns for ALL days when MA50 < MA200 vs. ALL days when MA50 > MA200. Yes?
As to the folks who say the golden cross is as valuable as gold (rather than the other thing): the relevant question is whether the returns from playing the crossings are of value versus, say, buying and holding, or versus random signals of similar frequency. Anybody know the data on that one?
The confirmation bias is the tendency to search for or accept as valid information that is consistent with a prior belief and to exclude or reject information that challenges the prior belief.
As powerful a resource as the world wide web is for doing research, it seems that unless one is very careful this bias can affect the search terms one uses, strongly promoting the confirmation bias. Since the net has articles and posts that support virtually every point of view, it is inevitable that a point of view will strengthen over time as more and more articles are found that confirm the belief that biased the search.
Nick White responds:
I think your query is an excellent one. There will be other readers of the site of vastly greater eminence and skill than mine in the field you have brought to our attention. However, I'll take a dilettante stab at responding by highlighting a couple of good pieces of literature on the topics of biases and the scientific method, and look forward to the honor of being corrected and informed by my betters. I'll also note that I'm not a professional scientist, merely of the armchair type, and a bad one at that. Secondly, I will cite quite a few names and sources in my response; these readings have been for personal pleasure rather than as part of a course where I've been goaded into critiquing them. In other words, my interpretations of their work might be a load of old tosh. That said… let's get on with it!
To me, your query goes to the heart of the philosophical foundations of the scientific method, and is therefore difficult to answer either succinctly or with conviction. Happily, greater minds than our own have wrestled extensively with your topic and, I believe, have some useful answers for you. My aim is to present a thumbnail sketch of the thumbnail sketch of their work. I recognize that boring one's audience is the worst of solecisms so, at the risk of vast oversimplification, I will state my conclusion up front, lest anyone wish to proceed no further (likely a good idea): I believe it is fairly safe to say your answer lies with the tenets of "skeptical" empiricism. That is, one applies as best one can the criterion of "falsifiability" in one's work and research. The trick, as ever, is actual application.
Before we go on, though, I should like to define terms. You say that confirmation bias "is the tendency to search for or accept as valid information that is consistent with a prior belief and to exclude or reject information that challenges the prior belief". I'd like to demarcate that a little for our discussion. Richard Thaler summarizes much when he writes that "[belief perseverance means] people are reluctant to search for evidence that contradicts their beliefs. Second, even if they find such evidence they treat it with excessive skepticism. Some studies have found an even stronger effect, known as confirmation bias, whereby people misinterpret evidence that goes against their hypothesis as actually being in favor." In contradistinction, the behavioral literature seems to classify your definition as motivated reasoning, that is, "thinking biased to produce preferred conclusions and support strongly held opinions". Yet these biases themselves may simply be symptomatic of more problematic and pernicious dysfunction in our mental machinery. What could the biases be that "sum to" motivated reasoning?
A non-exhaustive inventory might contain at least five other biases in particular. First up is survivorship bias: only the winners and survivors get to tell their story and present their data (irrespective of how one frames the enquiry to them). Next, we must account for Kahneman, Tversky and Slovic's availability and anchoring biases. Availability is "the ease with which relevant instances come to mind". Anchoring is our propensity to estimate solutions with disproportionate reliance on, and influence from, the initial conditions. Fourth, we have the work pioneered by Kelley in the area of attribution: "man… infers causes for the effects he observes. The causes he attributes determine his view of his social world, and this view may determine his behaviour". Fifth, we have K/T's "errors of prediction" which, inter alia, states three principles. First, people rely too much on their "prior" intuitions when making assessments, even in the face of new, objective information. Second, people do not vary their predictions in line with the validity of the information on which their predictions are based. And finally, people place more confidence in predictions based on highly correlated predictor variables than rational analysis affords them. The preceding doesn't even begin to touch on the personality dimensions of bias (why our egos are constantly on the hunt to be "proved right") or the evolutionary ones (biases as survival mechanisms!).
So, given this diagnostic, what conclusions can we draw so far? It seems we can say that we're wired to confirm our hypotheses because it's convenient, it's fast and it's pleasant. As a result, we make judgment and inference errors that could be avoided if we had more robust methods to compensate for them. But why is confirmation "bad"? What's wrong with the proposition of reinforcing your beliefs and proving your hypotheses with more evidence of the same?

This dilemma is not a new one. English polymath Francis Bacon highlighted the problem in the 17th century, and it has been a source of debate for much of the period since (incidentally, he was all for confirmation, though I put this down to his being a lawyer and the legal climate of his times). Today, the problem lies within the domains of philosophy: principally epistemology (what do we know, and how do we know that we know it?) and the problem of induction; in other words, the realm of *falsifiability*. Sir Karl Popper (your principal go-to guy on philosophical questions of method) argued forcefully that a hypothesis is not empirical, let alone scientific, unless it is falsifiable. What does this mean? For example, an unbroken string of sightings of white swans does not confirm the hypothesis that "all swans are white". But a single sighting of a black swan shows the hypothesis to be false. Conversely, claiming "there may be aliens in space" is not falsifiable. This is important because of the elusive nature of truth and certainty (at least this side of Heaven). One can never prove anything in this life with absolute certainty. All we have are probabilities. Once that notion is at the heart of one's scientific investigations, the door to statistical introspection swings wide open. From that point, at best, we can say with certain degrees of confidence that something "isn't" something else.
Ultimately, I can do no better than quote Sir Karl:
*Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (episteme): it can never claim to have attained truth, or even a substitute for it, such as probability…

*We do not know: we can only guess. And our guesses are guided by the unscientific, the metaphysical faith in laws, in regularities which we can uncover– discover.

*But these marvelously imaginative and bold conjectures of ours are carefully and soberly controlled by systematic tests. Once put forward, none of our anticipations are dogmatically upheld. Our method of research is not to defend them, in order to prove how right we were. On the contrary, we try to overthrow them. Using all the weapons of our logical, mathematical and technical armory, we try to prove that our anticipations were false– in order to put forward in their stead, new unjustified and unjustifiable anticipations, new 'rash and premature prejudices' as Bacon derisively called them.

*The advance of science is not due to the fact that more and more perceptual experiences accumulate in the course of time. Nor is it due to the fact that we are making better use of our senses… bold ideas, unjustified anticipations and speculative thought are our only means for interpreting nature… and we must hazard them to win our prize. Those among us who are unwilling to expose their ideas to the hazards of refutation do not take part in the scientific game.
Thus, I would contend alongside Popper (oh, how I deign!) that empiricism untempered by proper falsification might be many orders of magnitude worse than no empiricism at all. The safeguard you seek in your query is to practice rigour, as widely as possible, in all your habits and research, most importantly the principle of falsification. Around these parts, the Chair et al. have been known to throw out more than the occasional "um, have you tested that?" to those making a particular claim of one sort or another. It's not for show.

So, those are my thoughts on your query. We may avoid confirmation bias and the multiplication of its effects in the following (non-exhaustive) ways: know which biases may impact your research; run Popper's Logic of Scientific Discovery over your processes, especially the criterion of falsifiability. Will bad research persist? Sure. It has for centuries. Will it become ubiquitous? Not if science and scientists are doing their jobs properly.

What practical steps can we take to effect these principles in our daily work and lives? Bischoff has provided some helpful guidelines: know your cognitive frailties. Actively seek contrary evidence… force yourself to do it, or have a mentor well versed in the creative destruction of your bad hypotheses. Put confidence estimates around the quality of the information and data you have obtained. Educate yourself constantly (but don't rely on it too much; heavily discount your own smarts).
From Taleb: know in which domains you can safely apply induction (largely stable, natural phenomena) and which ones may get you into hot water (complex / contingent outcomes relying on inferences drawn from limited observations, because of the massive distortion and impact of rare events in the distribution of outcomes). Relentlessly build in redundancy to all you do and hypothesise. Be humble, and know that you know nothing, even on your best day. Visit graveyards and consider the untold stories of those who "didn't" make it; only survivors get to tell their stories with conviction and credibility. Consider alternative histories, and adopt the probabilistic mind-set. Apply the lessons you have learned from your textbooks to your whole life, not just the narrow, specialized context in which you learned them.

I hope this sparks some good discussion! I'd be very interested to hear critiques, comments, additions etc. I'm attempting to stand on the shoulders of giants, so any misinterpretations, misquotes, misattributions or any other mis's are entirely my own.
Source / reference list:
- Kahneman, Slovic, Tversky (eds): Judgment under Uncertainty: Heuristics and Biases, Cambridge
- Kahneman, Tversky (eds): Choices, Values and Frames, Cambridge
- Slovic et al: The Perception of Risk, Earthscan
- Taleb: The Black Swan, Penguin
- Popper: The Logic of Scientific Discovery, Routledge
- Thaler: Advances in Behavioral Finance (Vol II), Princeton University Press
- Peterson: Inside the Investor's Brain, Wiley
- Forbes: Behavioural Finance
- Gauch: Scientific Method in Practice, Cambridge
- Chamley: Rational Herds: Economic Models of Social Learning, Cambridge
Pitt T. Maner III writes:
A quick overview of a few of the issues you have discussed is presented by James Montier in his latest book, The Little Book of Behavioral Investing: How Not to Be Your Own Worst Enemy. Problem examples in the book illustrate many behavioral traits that one can become susceptible to. Montier shows that one must be ever vigilant and self-aware regarding narrow thinking, over-optimism, faulty statistical reasoning, over-conservatism, majority group-think and assorted biases. One quote from the book reads, "Question authority, but don't accept the answer".
Montier's own soft spots, however, may be an over-attachment to value investing and Graham techniques. More knowledgeable critics can decide.
Here is a two-part interview with Montier related to the book.
A snippet from the second part of the interview:
Miguel: Give us some insights – how can we become critical thinkers?
James Montier: Critical thinking is really all about being a contrarian in thought: learning to be skeptical, to question what you hear, and to evaluate it on merit rather than emotional appeal. In essence, taking a contrarian viewpoint requires us to learn three skills.
The first is highlighted by the legendary hedge fund manager Michael Steinhardt, who urged investors to have the courage to be different. He said, “The hardest thing over the years has been having the courage to go against the dominant wisdom of the time, to have a view that is at variance with the present consensus and bet that view.”
The second element is to be a critical thinker. As Joel Greenblatt has opined, “You can’t be a good value investor without being an independent thinker—you’re seeing valuations that the market is not appreciating. But it’s critical that you understand why the market isn’t seeing the value.”
Finally, you must have the perseverance and grit to stick to your principles. As Ben Graham noted, “If you believe that the value approach is inherently sound then devote yourself to that principle. Stick to it, and don’t be led astray by Wall Street’s fashions, illusions and its constant chase after the fast dollar. Let me emphasize that it does not take genius to be a successful value analyst, what it needs is, first, reasonably good intelligence; second, sound principles of operation; and third, and most important, firmness of character.”
Chris Cooper writes:
I am wondering if anyone out there is familiar with a trading opportunity called by some the Goldman Roll. As it has been explained to me, there are a large number of long-only commodity funds. As a given contract that they hold long, say oil, comes due to expire, they need to sell it and roll into a long position in a further-out contract. This creates a very definite trend in the spread that can be exploited: sell the near contract short and buy the next one out. As the roll transactions are executed, the far-minus-near spread rises in a very predictable and smooth way. It is claimed that this phenomenon has not been widely recognized and thus remains exploitable. Any comments on this claim would be appreciated.
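The claimed pattern can be illustrated with a toy far-minus-near spread over a hypothetical five-day roll window (all prices invented): a trader short the near contract and long the far one at the start of the window earns the spread's drift.

```python
# Hypothetical near- and next-month closes over a five-day roll window.
near = [74.10, 74.00, 73.85, 73.70, 73.60]
far  = [74.60, 74.65, 74.70, 74.78, 74.85]

spread = [f - n for n, f in zip(near, far)]       # far minus near
roll_pnl = spread[-1] - spread[0]                 # short near / long far at the start
print([round(s, 2) for s in spread], round(roll_pnl, 2))
```

Whether such drift exists, persists, and survives transaction costs in real data is exactly the empirical question being posed.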
Dr. Aronson is author of Evidence-Based Technical Analysis, Wiley, 2006
Nick White comments:
Goldman Roll? More like market roll!
This has been around as long as futures have existed and is nothing sinister. However, as some here were actually around when modern exchange-traded futures began, I shall defer to them.
You can maybe get some clue as to roll direction by looking at open interest, depending on the contract, but it's not always a good guide. Worth bearing in mind that people hold offsetting positions, and much also depends on commercials vs. specs, etc. Also, if it were that easy to make money, it wouldn't exist…
Michael Cohn writes:
There is index money invested in commodity indices and a plethora of ETFs, for example USO or UNG. These commodity ETFs hold futures, and the contracts need to be rolled in a somewhat predictable way, although there is now more flexibility as to the day. This long exposure always has to sell the near contracts and buy the far contracts. It is fairly easy to see the amounts involved…
David Aronson replies:
Yes, I am on the lookout for all of these creatures. But kidding aside for the moment, are you saying that the claim that such an opportunity exists is on par with sightings of Big Foot? i.e., it's nonsense?
Russ Herrold writes:
It is a safe statement that there are and always will be 'unknowable unknowns' out there in the woods, and that 'absence of evidence is not evidence of absence' (but rather, sometimes, just a statement that we cannot prove a hypothesis with our current tests and tools).
If I had a Bigfoot in my basement that laid gold bars, I would never reveal that secret, and would take great pains to keep it a 'trade secret'.
If I had engineered a winning strategy, I would certainly consider sowing disinformation and negative results, to lead people seeking to reverse engineer my results down blind alleys.
I think, as careful investigators, all we can say is: we do not know of a public proof that such an opportunity exists.
By coincidence, I am wearing a tee shirt today of a unicorn feasting on roast leprechaun; as she takes knife and fork to her meal, the magic rainbows are let out.
Ken Drees adds:
The idea of taking advantage of a robotic function (a mindless ETF doing its monthly maintenance) makes sense; but once the ripoff is noticed, wouldn't a hunter now wait for the fox?
Tom Printon writes:
I used to fill the GS roll in the coffee pit. Locals typically positioned themselves one to two days ahead of GS. When profitable, it was usually good for a few ticks, but one had to have size on for it to be worthwhile. An off-the-floor trader's vig would be difficult to overcome.
Paolo Pezzutti adds:
This reminds me of "The Night of the Long Knives," also called Operation Hummingbird. It is interesting how the market was "prepared" for this event, which occurs after an impressive up leg. We will see if the event is able to trigger more volatility. It will say a lot about this market.
Using Fed data, I calculated corporate bond spreads: BAA yield minus 10Y Treasuries, weekly from 1990. (Data = "Market yield on U.S. Treasury securities at 10-year constant maturity, quoted on investment basis" and "Moody's yield on seasoned corporate bonds - all industries, BAA".)
The graph shows a fairly close correlation with VIX, with the eyeball suggesting closer correlation since 9/11/01. Verified by correlation pre and post 9/11:
pre 9/11: 0.038
post 9/11: 0.218
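The pre/post split above is just a Pearson correlation computed over two sub-periods. A minimal sketch of the method, using short synthetic stand-ins for the weekly BAA-minus-10Y spread and VIX series (the real series are much longer, and the split point is the 9/11 week):

```python
# Pearson correlation of the credit spread with VIX, computed separately
# over an earlier and a later window. The eight weekly observations below
# are invented for illustration; the method, not the numbers, is the point.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

spread = [1.8, 1.9, 2.1, 2.0, 2.4, 2.6, 2.9, 3.1]   # BAA minus 10Y, pct points
vix    = [18.0, 22.0, 17.0, 21.0, 24.0, 27.0, 30.0, 33.0]

pre  = pearson(spread[:4], vix[:4])   # earlier window
post = pearson(spread[4:], vix[4:])  # later window
print(round(pre, 3), round(post, 3))
```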
David Aronson comments:
John Wolberg and I have done some work to derive a normalized version of VIX in order to produce a more accurate timing signal. However, we only used various measures derived from price data as normalizing variables (price velocity, acceleration, and volatility). We were able to obtain some improvement, and it appears that including the default spread might improve things even more. Anyone interested in a copy of the paper can email me: aronson[at]mindspring[dot]com.
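One common normalization along these lines is to judge VIX relative to trailing realized volatility rather than in absolute terms. This is a sketch of that general idea, not the paper's actual specification: the window length, return series, and VIX level below are all illustrative assumptions.

```python
import math

# Scale VIX by trailing realized volatility so that "high" and "low"
# readings are judged relative to what prices have actually been doing.
# All numbers here are invented for illustration.

def realized_vol(returns, annualize=252):
    """Annualized sample standard deviation of daily returns, in
    VIX-style percentage points."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return math.sqrt(var * annualize) * 100

daily_returns = [0.004, -0.006, 0.002, -0.003, 0.005,
                 -0.002, 0.003, -0.004, 0.001, 0.002]
vix_level = 22.0  # hypothetical closing VIX

rv = realized_vol(daily_returns)
normalized = vix_level / rv  # >1: implied vol rich vs realized; <1: cheap
print(round(rv, 2), round(normalized, 2))
```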