As in the German fairy tale Hansel and Gretel, the financial markets often leave clues you can follow to safety and profits.

Sometimes, though, the clues you intended to use, or thought would be there, are not. The question with DB is not whether there will be a Lehman moment, because there surely will not be. Rather, the consideration should be the vicious cycle of bad politics leading to bad economics.

The key questions that should be asked include the following: First, to what extent does this weaken Chancellor Merkel's position ahead of next year's German elections? Second, what does this mean for ECB chief Draghi and easy money, which is already wildly unpopular in Germany? Third, what does this mean for Italian banks?

Surely, if Germany bails out DB, which is a given, how can the Germans ask Italians to play by different rules? Fourth, what does this mean for European banking union? Fifth, how might this influence German growth which has been the locomotive for Europe? Sixth, if Europe can't grow with a weaker Euro, low energy prices, and record low rates what does it take?



One has found that there is an electronics circuit that almost always retrospectively provides a great description of price action in markets. I wonder if there is an electronics circuit that compresses the voltage output, keeping it in a range, sort of like the finger in the dike, but then, after the compression is over on the negative side, i.e., after the negative feedback is taken away, the voltage doesn't immediately lead to tremendous negative voltage. I seem to remember such a circuit with op amps.

Jon Longtin writes:

There are a variety of electronic circuits that perform such a role, depending on the application. One common application is a voltage regulator, which provides a (nearly) constant voltage, regardless of the load applied to it. The circuit monitors the actual voltage currently being provided and compares it to a pre-set reference value. The difference between the actual and desired (setpoint) values is called the error, and is used to adjust the current provided to the circuit to bring the voltage back to the setpoint value. For example, if the load increases (more electricity demand), the load voltage will drop and the voltage regulator will provide more current to bring the voltage back up. The same goes for a decrease in load.

There are some limitations and compromises in such a circuit. First, there is a finite amount of current that the power supply/voltage regulator can provide, and if the error signal requests more than this amount, the output will not be maintained. Also of importance is the time response: a circuit with a very fast time response will respond more quickly to fluctuations in the load, but can also produce so-called parasitic oscillations, in which the output oscillates after a fast change in load is made. By contrast, a longer time response provides a slower response to a variation, but tends to damp oscillations. This same behavior, of course, is seen in countless financial indicators, and is part of the art in deciding, e.g., how many prior data points to include in a signal.
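
The loop described above can be sketched as a toy discrete-time proportional controller (purely illustrative: the gain values, setpoint, and single-step disturbance below are my own choices, not a model of any real circuit). A high gain corrects quickly but overshoots and rings; a low gain is sluggish but smooth:

```python
def regulate(setpoint, disturbance, gain, steps=50):
    """Toy voltage regulator: each step, measure the error between
    the setpoint and the actual output, and correct by gain * error."""
    v = setpoint
    history = []
    for t in range(steps):
        v -= disturbance(t)      # load change pulls the voltage down
        error = setpoint - v     # the "error signal"
        v += gain * error        # corrective action
        history.append(v)
    return history

# A sudden load increase at t = 10 knocks the voltage down by 1 V:
load_step = lambda t: 1.0 if t == 10 else 0.0

slow = regulate(12.0, load_step, gain=0.3)  # damped, no overshoot
fast = regulate(12.0, load_step, gain=1.8)  # fast, but rings before settling
```

Both loops settle back to 12 V, but the high-gain loop swings past the setpoint on its way there, the discrete-time analog of the parasitic oscillation mentioned above.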

A somewhat more complex version of the above, and perhaps more closely aligned with the behavior of a market signal, is an audio "compressor/limiter". This is a device that constantly monitors the volume (magnitude or voltage) of an audio signal and makes adjustments as needed. A limiter is the simpler of the two: it sets a threshold above which a loud signal will be attenuated. The attenuation is not (usually) a brick wall, however; rather, a signal that exceeds the threshold value is gently attenuated to preserve fidelity without overloading the audio or amplifier circuitry.

A compressor is a more complicated animal and provides both attenuation for loud signals AND amplification for quieter ones. In essence a HI/LO range, or window, is established on the unit; signals exceeding the HI limit are attenuated, while signals below the LO limit are amplified. The resulting output then (generally) falls within the HI/LO range.

This is used extensively (too much!) in commercial music. Humans naturally pay attention to louder sounds (ever notice how the volume universally jumps when commercials come on TV? They are trying to grab your attention with the louder volume). Pop music attempts to achieve the same by using aggressive compression to provide the loudest average volume for program material without exceeding the maximum values set by broadcast stations or audio equipment. The result, however, is that the music sounds "squished" and doesn't "breathe", because the dynamic range of the content has been reduced considerably. With such devices there are a variety of adjustments to determine the thresholds, the time before taking action (the attack time), and how gradually or strongly to attenuate (or amplify) signals that exceed the envelope range.
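
A bare-bones sketch of both devices, working sample by sample on magnitudes (a deliberate simplification: the threshold, ratio, and window values are arbitrary, and real units act on a smoothed envelope with attack and release times rather than on raw samples):

```python
def limit(samples, threshold, ratio=4.0):
    """Soft limiter: magnitude above the threshold is attenuated by
    `ratio` rather than clipped outright (no brick wall)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def compress(samples, lo, hi, ratio=4.0):
    """Toy compressor: attenuate above `hi` and amplify below `lo`,
    squeezing the signal toward the [lo, hi] window."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > hi:
            mag = hi + (mag - hi) / ratio
        elif 0 < mag < lo:
            mag = lo - (lo - mag) / ratio
        out.append(mag if s >= 0 else -mag)
    return out
```

For example, `limit([0.5, 2.0, -3.0], threshold=1.0)` gently pulls the loud samples in to 1.25 and -1.5 instead of clipping them flat at 1.0, preserving some of the original dynamics.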

Here's a fairly decent article that describes this in more detail.

Incidentally, both of the above are examples of a large branch of engineering called Controls Engineering. The idea, as Vic stated, is to monitor the output by using feedback and make adjustments accordingly. There are countless different algorithms and approaches, as well as very sophisticated mathematical models (people build careers on this), to best do the job. Like most complex things, there is no single approach that works best for every problem; rather, the choice involves a balance of performance, cost, and reliability.

I highly suspect such algorithms have already found their way into many trading strategies, one way or another.

If interested, I can suggest some references for further reading (though I am not a Controls person myself).

Bill Rafter writes: 

 Think of your voltage regulator as a mean-reversion device. If a lot of this is being done, then your trading strategy must morph into simply following the mean.

In light of recent changes in the investment climate we suggest tightening up controls on markets in which one is long. Perhaps that might also, or alternatively, mean (a la Ralph) tightening the size of the positions. The result will be taking less risk and earning less return, but taking additional risk would seemingly not be rewarded in the current milieu.

Jim Sogi writes: 

Dr. Longtin's description of compressors and limiters was fascinating. A compressor on my guitar signal chain prolongs the sustain of a signal in addition to smoothing out the volume spikes, and has less fade as the signal weakens. With added volume, one gets a nice controlled feedback.

Sometimes in the markets one sees a sustained range with the spikes being attenuated reminiscent of a nice guitar sustain.

On a different note, one curious thing is that people cannot discern differences in absolute volume. It's very hard to hear the difference in volume between two signals unless they are played side by side.



Even the concept of a 'programming language' is becoming somewhat blurred. In years past a language, and its associated compiler, would translate a program written by a human directly into machine code for a specific machine. Working on a different machine (Unix vs. DOS vs. Mac) required a completely different compiler.

Today in many cases what happens is that the language compiles to an intermediate 'bytecode' that is then subsequently executed by a second program running on the actual machine, sometimes referred to as a virtual machine. The advantage here is that the bytecode can be the same regardless of the machine it is running on, with only the virtual machine having to be adapted to a particular hardware platform. This has proved very successful, with, as noted, Java and web browsers being the archetypal example. One minor disadvantage is that the resulting programs run somewhat slower than with the native compiler mentioned above; however, in this day and age the glut of computer power and resources makes the speed difference, in most cases, incidental.
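
The bytecode idea can be made concrete with a toy stack machine (a minimal invention for illustration only; real virtual machines such as the JVM or CLR are enormously more sophisticated). The same "bytecode" runs identically on any platform that hosts the interpreter loop:

```python
# Toy stack-based virtual machine: the "bytecode" is a list of
# (opcode, operand) pairs, portable to any host running this loop.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, compiled once, runnable wherever the VM runs:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
# run(program) evaluates to 20
```

Only `run` itself, the analog of the virtual machine, would need to be rewritten for a new platform; `program` never changes.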

Microsoft took this concept one step further by making available a huge library of pre-written functions for a very large variety of tasks (in addition to the virtual machine concept, which they call the Common Language Runtime, or CLR). The programmer then simply pulls in the functions he or she needs from the library and can put together a program much more quickly. This is the approach that Microsoft has taken with its .NET Framework. The programming language then exists more to string together the various functions. To wit: there are four languages from Microsoft that can interface with the same library: C#, C++, Visual Basic, and J#, and each produces the same code for the CLR.

It is also not at all uncommon for a modern application to mix several different languages; this has been made all the easier by the separation of tasks described above. A webpage may be written with HTML and XML, then use AJAX and SQL to dynamically build content and pull from a central server, with a little Java and C# thrown in for specialty functions.

It's an ongoing evolution, that's for sure.


Jon Longtin, Ph.D.
Associate Professor and Undergraduate Program Director
Department of Mechanical Engineering

State University of New York at Stony Brook



Does the orbit of the Moon trigger earthquakes? If so, then March 16 through the 22nd could be interesting. The Moon makes its closest approach March 19, during the new Moon.

Here is something from Nolle's web site: his March forecast.

On a more interesting note, my research showed that the stock market performs better from the new Moon to the full Moon than during the waning half of the cycle.

Jon Longtin comments: 

I wouldn't lose sleep over it.

The stress that the moon produces on the earth by constantly darting from one side to the other every day is orders of magnitude greater than the small variation in its distance to earth.

Put another way, high tide may be a few thousandths of an inch higher when the moon is closest to the earth, on top of a several-foot swing in sea level that day.

(But such events do make lovely fodder for the doomsayers…)

Peter Grieve writes:

The mixture of explicitly stated science with implied superstition seems to be becoming an art form.

Jupiter and Saturn have a combined mass of less than 0.002 solar masses, and tidal effects vary as the inverse cube of distance, which means that Jupiter's tidal effect is reduced by a further factor of 1/64, since at its nearest it is 4 times farther from us than the Sun is.
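
The inverse-cube arithmetic is easy to check with rough figures (Jupiter's mass is about 1/1047 of the Sun's; the distances below are approximate):

```python
# Tidal acceleration scales as M / d^3. Rough figures, with the Sun
# as the unit of mass and the Earth-Sun distance as the unit of length:
M_sun = 1.0
M_jupiter = 0.000955   # about 1/1047 solar mass
d_sun = 1.0            # 1 AU
d_jupiter = 4.0        # Jupiter at its nearest, ~4 AU from Earth

tide_sun = M_sun / d_sun**3
tide_jupiter = M_jupiter / d_jupiter**3

ratio = tide_sun / tide_jupiter
```

The mass ratio of ~1/1047, combined with the 1/64 distance factor, puts Jupiter's tidal influence on Earth at very roughly 1/67,000th of the Sun's.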

Putin will undoubtedly be pleased with dire predictions for the West.

Kim Zussman writes:

This kind of prediction is old news: see The Jupiter Effect .

As recalled, at the time astronomers estimated the net tidal pull of the 1982 planetary alignment on the sun (which, in turn, was to affect solar radiation and subsequently interact with earth's magnetic field) at about 1 mm. The sun is about 864,000 miles in diameter.

Eventually with enough of these they'll get one right.

Pitt T. Maner III writes: 

Here is a good video on "pseudo-predictions" for this weekend from down under. Multiple, vague predictions debunked by scatter graphs.

I would guess, however, that there will be a resurgence of interest in the writings of catastrophists, Velikovsky being considered one of the last of the old-time breed…

Phil McDonnell comments:

Speaking of Velikovsky, a version of his theory is now the most favored theory for the formation of the Moon. The exception is that he thought the Moon was formed during historical times and used Biblical references to date it; for example, he claimed the parting of the Red Sea was a giant lunar tidal effect. Current thinking instead dates the Mars-sized Earth impact at about 300 million years after the formation of the solar system.

More on Moon formation theory:

Giant impact hypothesis

I also ran a test looking at all the earthquakes > 7.0M in 2010. I found that the number that were 'predicted' by Nolle's super Moon windows was 19%. But the number of days covered by the windows was only 10% of the year. On its face it seems like modest support, but the sample size of correct hits was only 4, so the jury is still out.
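
How modest the support is can be quantified with a binomial tail probability (my reading of the figures above: 4 hits out of roughly 21 quakes over magnitude 7.0, with the windows covering 10% of the year):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If large quakes ignored the windows entirely, how often would
# chance alone put 4 or more of ~21 quakes inside them?
p_value = binom_tail(21, 4, 0.10)
```

The answer comes out around 0.15: roughly one year in seven would show a hit rate this good by pure chance, so the 19% vs. 10% gap is suggestive at best, consistent with the jury still being out.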



The amazing moves this week are consistent with my 50-year-old studies of what happens following cardinal panics like airline crashes and presidential assassinations: a terrible move down, and then by the end of the week, right where it was before. It happened to the S&P and the grains and oil and the dollar/yen. What else? How to generalize?

Jon Longtin responds: 

"…how to generalize?"

One thought would be to do a simple curve fit on an instrument of your choice after each event in history. Since the events themselves are unique and relatively short in duration (earthquake, assassination, terrorist bombing, etc.) and also very well defined in time, the trigger point (or time t=0) is well known almost immediately after the event happens.

In general a cusp-like response is observed: a very rapid decline, followed by a well-defined apex, and then a rapid ascent (although probably not as sharp as the descent) to some threshold pre-event point (say 80%).

The underlying argument would be that people's mass reaction to any catastrophic event is similar (panic, confusion, and uncertainty followed by the gradual realization that the world is not ending, and things work back to normal). Since the underlying behavior is the same, it's not unreasonable to expect that the financial instrument's response should similarly be the same across different events. One could then try to form a single curve by appropriate scaling (so-called self-similar behavior). Then, when a new situation presents itself (hate to sound so detached when speaking of disasters), one could chart the instrument's history against the curve, and as soon as enough points were collected, match/scale it to the master curve and make an estimate as to the turn-around point and recovery and go from there.

One could further classify events into separate categories, e.g., natural disasters, political events, financial events, etc., and prepare appropriate curves for each, since the nature of the event will be similarly well defined and knowable very soon after it happens.
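
A minimal sketch of the matching step, assuming a hypothetical symmetric-cusp master curve (the shape, and the least-squares fit of a single depth parameter, are my own simplifications; a real master curve would be estimated from the historical events themselves):

```python
def master_curve(t):
    """Hypothetical unit cusp: linear drop to an apex of -1 at t = 0,
    then linear recovery; flat outside |t| > 1."""
    return -max(0.0, 1.0 - abs(t))

def fit_depth(times, prices, baseline):
    """Least-squares fit of a single depth multiplier scaling the
    master curve to the observed post-event price moves."""
    shape = [master_curve(t) for t in times]
    moves = [p - baseline for p in prices]
    num = sum(s * m for s, m in zip(shape, moves))
    den = sum(s * s for s in shape)
    return num / den

# If the observed moves trace the cusp at twice its unit depth,
# the fit recovers that scale:
times = [-0.5, -0.25, 0.0, 0.25, 0.5]
prices = [100.0 + 2.0 * master_curve(t) for t in times]
depth = fit_depth(times, prices, baseline=100.0)   # -> 2.0
```

Once enough post-event points are in hand, the fitted depth (and, in a fuller version, a fitted width) would give the estimate of where the apex and recovery should sit.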

The engineering analog is somewhat along the following lines: a standard technique to test a system is to apply an impulse and see how the system responds. Examples include tapping an automobile frame with a hammer and measuring how the structure responds in time, or using a gunshot to measure the acoustics in a large hall. In these measurements, the initial driver (the hammer hit or gunshot) happens so quickly that it has come and gone before the system has had a chance to even begin to respond. As a consequence the resulting measurement is only the response of the system and is not contaminated by the driver itself.

In contrast, drivers that are longer in time have a more complicated interaction with the structure, because the structure will start to respond to the first part of the driver while the system is still being driven. The analog would be grabbing onto the car frame with your hands and shaking it repeatedly for a few minutes to get it to vibrate: the car frame will begin to respond as soon as you start shaking it, but as you continue to shake it, that further alters the response, which in turn affects the driving, and so on. This is much more complicated to analyze and predict.

In financial terms, disasters are often very short in duration (seconds and minutes), and consequently they behave like an impulse response, with the system being society. In contrast, an event such as the wave of unrest in the Middle East is a much longer time-frame event (weeks and months): the event and the response become highly coupled, making their analysis more complicated.
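
The two cases can be illustrated with a toy damped oscillator standing in for the "structure" (the frequency, damping, and step sizes are arbitrary illustrative values, not a model of any real system):

```python
def respond(force, steps=200, dt=0.1, freq=0.5, zeta=0.2):
    """Damped harmonic oscillator driven by force(t), integrated
    with a simple semi-implicit Euler scheme."""
    x, v = 0.0, 0.0
    path = []
    for t in range(steps):
        a = force(t) - 2.0 * zeta * freq * v - freq**2 * x
        v += a * dt
        x += v * dt
        path.append(x)
    return path

# Hammer tap: the driver is gone before the structure even moves.
impulse = respond(lambda t: 10.0 if t == 0 else 0.0)

# Prolonged shaking: driver and response stay coupled throughout.
sustained = respond(lambda t: 1.0 if t < 100 else 0.0)
```

The impulse case rings at the structure's own frequency and decays, a clean signature of the system alone; the sustained case mixes driver and response the whole time, which is the harder situation to disentangle.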

Anatoly Veltman writes:

Not sure if it's a separate topic, but there are sometimes dangers when you generalize. For example, the level of EUR currency (and its perceived trend) is significantly higher today than at this sample's outset. To what degree did this influence most commodities' comeback?

Another layer that could be added to this sample's analysis: what to make of the relatively lagging instruments? Sugar, Platinum and Palladium haven't made up their losses…

The President of the Old Speculator's Club, John Tierney, responds: 

How does one make generalizations from the recent events outlined by the Chair? I have no doubt that his 50-year study shows similar market reactions. However, I'm reluctant to adopt any new theories or adjust my current investment outlook due to these studies. The defining feature of the current environment, one that has existed at least since the Fed initiated QE1, is the involvement of government agencies.

I and others have suggested this surreptitious presence in the past. We have been (rightly) put in our place because we failed to fulfill the Chair's mandate: "stats on the table" (something, by the way, which Rocky was very, very good at).

In the current situation and that which has existed for several years now, we KNOW that our government (and others) have been manipulating the "invisible hand." We may not be aware of the extent of the presence, or where it is being applied, but that it exists is an established (and self-confessed) fact.

With that in mind, I'm left to "guess" whether the current scenario is an accurate re-enactment of past events, or whether it has been manipulated to seem so. For years we on the List have been leery of the efficacy of any government interference in the markets. With that in mind, it's difficult to make a legitimate extrapolation from past events: the new, big player makes any surmise questionable.

My reluctance to revise my pre-existing view of the market's course is only enhanced by the numerous television experts who are outlining a "bounce-back" scenario based on past bounce-backs. It may well occur but will it endure or will it vanish with the exit of the interference? I'm currently betting (and that IS the right word) against it.

William Weaver shares: 

Check out this interesting abstract:

Behavioral economic studies reveal that negative sentiment driven by bad mood and anxiety affects investment decisions and may hence affect asset pricing. In this study we examine the effect of aviation disasters on stock prices. We find evidence of a significant negative event effect with a market average loss of more than $60 billion per aviation disaster, whereas the estimated actual loss is no more than $1 billion. In two days a price reversal occurs. We find the effect to be greater in small and riskier stocks and in firms belonging to less stable industries. This event effect is also accompanied by an increase in the perceived risk: implied volatility increases after aviation disasters without an increase in actual volatility.

Found via the Empirical Finance blog



 Most people still don't know what engineering is all about, or the benefits it provides.

If anything, Hollywood pushes a negative stereotype of the tech-types: emotionless, narrow-minded, and caring only about their project. There are very successful shows about doctors, lawyers, cops, sports figures, politicians, blue collar workers, and criminals, but I've never seen a mainstream show or series in which a technical guy (scientist/engineer) was the hero. About the best I can think of is MacGyver, and his genius was really resourcefulness, rather than intellectual acumen. It was thus nice to see your comments.

Sadly, I suspect that the finger-pointing and blame for this disaster will far overshadow the enormous benefit that was realized by all of the good engineering and preparedness that was done, along the lines of what you pointed out. Although things are far from being over, it appears that this is no Chernobyl. It's easy to say in retrospect that 'you should have planned for this', but it's nearly impossible to capture every last combination of atrocities that nature can render on a moment's notice on our feeble structures, while also making them affordable, buildable, and profitable in the first place.

So, thank you for recognizing this anecdotal, but vital, contribution to the ongoing disaster in Japan, and appreciating how much worse it could have been.

(and now see writer step off of his soap box…).



Here is a very interesting article I found on Information Theory:


The noisier the channel, the more extra information must be added to make error correction possible. And the more extra information is included, the slower the transmission will be. ­[Claude] Shannon showed how to calculate the smallest number of extra bits that could guarantee minimal error–and, thus, the highest rate at which error-free data transmission is possible. But he couldn't say what a practical coding scheme might look like.

Researchers spent 45 years searching for one. Finally, in 1993, a pair of French engineers announced a set of codes–"turbo codes"–that achieved data rates close to Shannon's theoretical limit. The initial reaction was incredulity, but subsequent investigation validated the researchers' claims. It also turned up an even more startling fact: codes every bit as good as turbo codes, which even relied on the same type of mathematical trick, had been introduced more than 30 years earlier, in the MIT doctoral dissertation of Robert Gallager, SM '57, ScD '60. After decades of neglect, Gallager's codes have finally found practical application. They are used in the transmission of satellite TV and wireless data, and chips dedicated to decoding them can be found in commercial cell phones.


The codes that Gallager presented in his 1960 doctoral thesis (http://www.rle.mit.edu/rgallager/documents/ldpc.pdf) were an attempt to preserve some of the randomness of Shannon's hypothetical system without sacrificing decoding efficiency. Like many earlier codes, Gallager's used so-called parity bits, which indicate whether some other group of bits have even or odd sums. But earlier codes generated the parity bits in a systematic fashion: the first parity bit might indicate whether the sum of message bits one through three was even; the next parity bit might do the same for message bits two through four, the third for bits three through five, and so on. In Gallager's codes, by contrast, the correlation between parity bits and message bits was random: the first parity bit might describe, say, the sum of message bits 4, 27, and 83; the next might do the same for message bits 19, 42, and 65.

Gallager was able to demonstrate mathematically that for long messages, his "pseudo-random" codes were capacity-approaching. "Except that we knew other things that were capacity-approaching also," he says. "It was never a question of which codes were good. It was always a question of what kinds of decoding algorithms you could devise."

That was where Gallager made his breakthrough. His codes used iterative decoding, meaning that the decoder would pass through the data several times, making increasingly refined guesses about the identity of each bit. If, for example, the parity bits described triplets of bits, then reliable information about any two bits might convey information about a third. Gallager's iterative-decoding algorithm is the one most commonly used today, not only to decode his own codes but, frequently, to decode turbo codes as well. It has also found application in the type of statistical reasoning used in many artificial-intelligence systems.

"Iterative techniques involve making a first guess of what a received bit might be and giving it a weight according to how reliable it is," says [David] Forney. "Then maybe you get more information about it because it's involved in parity checks with other bits, and so that gives you an improved estimate of its reliability." Ultimately, Forney says, the guesses should converge toward a consistent interpretation of all the bits in the message.
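
The flavor of Gallager's scheme can be shown in miniature (a toy: hand-picked "random"-style parity connections over five bits and a single hard-decision repair step; real LDPC decoding iterates with soft probabilities over thousands of bits):

```python
def make_parity(message, connections):
    """Each parity bit is the even/odd sum (XOR) of a chosen
    subset of message-bit positions."""
    return [sum(message[i] for i in group) % 2 for group in connections]

def repair_one(received, parity, connections):
    """If exactly one message bit is unreliable (marked None), any
    parity check that includes it pins down its value."""
    fixed = list(received)
    bad = received.index(None)
    for group, p in zip(connections, parity):
        if bad in group:
            known = sum(fixed[i] for i in group if i != bad) % 2
            fixed[bad] = (p - known) % 2
            return fixed
    return fixed

# Scattered connections in the spirit of Gallager, rather than the
# older sliding-window style (bits 1-3, bits 2-4, ...):
connections = [(0, 2, 3), (1, 3, 4), (0, 1, 4)]
msg = [1, 0, 1, 1, 0]
parity = make_parity(msg, connections)   # [1, 1, 1]
garbled = [1, 0, None, 1, 0]             # bit 2 lost in transmission
restored = repair_one(garbled, parity, connections)   # recovers msg
```

This is the "reliable information about two bits conveys information about a third" step from the article, applied exactly once; the iterative decoders repeat it, propagating confidence through the web of checks.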

Jim Sogi writes:

I think that is why many market price moves come in threes, as an error-correction device. Like the recent triple bottom.

Jon Longtin writes:

Very interesting articles on the history of encoding schemes.

One interesting thing to note is that if you take even a simple information stream and encode it with any of the numerous algorithms available, the encoded version of the information is typically unintelligible to us as humans in any way, shape, or form.

‘hotdog’ for example, might encode to ‘b$7FQ1!0PrUfR%gPeTr:$d’

These encoding algorithms work by rearranging the bits of the original word, looking for patterns, and applying mathematical operators on the bit stream.
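
(The 'hotdog' output above is invented for illustration.) A real example from Python's standard library makes the same point. SHA-256 is a one-way hash rather than a reversible codec, so it is only an analogy for the scrambling, but the unintelligibility is the same in spirit: the output bears no visible relation to the input, and changing a single letter alters everything.

```python
import hashlib

digest = hashlib.sha256(b"hotdog").hexdigest()
nearby = hashlib.sha256(b"hotdot").hexdigest()

print(len(digest))        # 64 hex characters, unintelligible
print(digest == nearby)   # False: one changed letter, totally new output
```
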

Many of the financial indicators and short-term predictive tools that abound today are based on some combination of the prior price history, but often in a relatively simplistic way. For example, although the weightings may change for various averages, their time sequence, i.e., the order in which the prices are recorded and analyzed, is the same: a sequence of several prices over some period is analyzed in the same order in which it was created.

Perhaps, however, there is some form of intrinsic encoding going on in the final price history of an instrument. For example, it could be reasoned that news and information do not propagate at a uniform rate, or that different decision makers will wait different amounts of time before reacting to a price change or news. The result might be that the final price history that actually results, and that everyone sees and acts upon, is encoded somehow based on simpler, more predictable events, but the encoding obfuscates those trends in the final price history.

Maybe it is no coincidence that Jim Simons of Renaissance Technologies did code breaking for the NSA early on in his career.

Unfortunately, good hashing codes, particularly those designed to obfuscate, such as security and encryption algorithms, are notoriously difficult to reverse-engineer and crack. (The encrypted password file on Unix machines was, for many years, freely visible to all users on the machine, because cracking it would simply have taken too long on machines of the time.) On the other hand, cracking algorithms often require little knowledge of the original encoding scheme, instead simply taking a brute-force approach. Thus if there were such an underlying encoding happening with financial instruments (and the encoding might be unique for each instrument), then perhaps there is some sliver of hope that it might be unearthed, given time, a powerful machine, and some clever sleuths.

For all I know this has been explored ad nauseum both academically and practically, but it does get one wondering …



 Hi Victor,

It was a pleasure coming to Junto a couple of weeks back and chatting with you. I've been very impressed with how easy it is to speak to people and how willing they are to share their thoughts and insights. It's a great resource. Finally, I wanted to say thanks for adding me to the SPEC list; it seems to be a wonderfully rich, living dialog, and I am learning a lot, even if many of the conversations are still beyond my knowledge level at this point.

I've been thinking a bit about our conversation after dinner at Junto a couple of weeks back about how you might foster your son Aubrey's interest in things mechanical. You have done the obvious things of getting him all of the construction and science toy sets and the like. My dad was quick to notice my interest in mechanical stuff, and, to a large degree, really helped to get me to take the career path that I did. Thus here is a potpourri of other thoughts that I wanted to pass along to you.

Before I make my own recommendations, I thought I would pass along some thoughts from my father, whom, by chance, we visited last week in Ohio; I asked him for his thoughts on your question. He said that with me he always tried to encourage exploring without being too quick to interject, e.g., in taking something apart with the very real chance that it would be broken for good. He felt it important to let the exploration process happen naturally, with a minimum of intervention, the idea being that the child makes, and learns from, his own mistakes, trials, and tribulations. In essence, then, he took a Libertarian view.

In terms of my own thoughts, one thing that I bet Aubrey would really enjoy is taking some things apart to see how they work. Great candidates for this kind of thing include, in no particular order: kitchen scales; motorized toys, especially those with gears and such; CD, cassette, or VCR players; old mechanical clocks; old inkjet printers or computers; and the like. He'll need some simple tools to do this with, and, of course, he would have to be supervised for many of these activities, but I would guess he would love the process of exploring and understanding as he takes things apart. I always did. I wouldn't be a bit surprised if he was able to get many of them back together again in working condition.

An interesting twist on the above would be to give him a simple object that doesn't work and see if he can fix it, preferably by taking it apart. I realize he's at a pretty young age, but it might be worth a try. You could even "rig" things so that the repair was fairly obvious, then gradually make it more challenging.

Another thought would be to take him to a museum of science and industry. My parents took me to the Museum of Science and Industry in Chicago when I was 11 or 12, and it changed my life.

I realize Aubrey is a little young, but this still could be a very impressive and entertaining experience for him. To that end, there is a children's museum of science and technology in Troy, NY. Here is a reference to the New York Hall of Science in Queens. I've never been there, but it might be worth a look. There may be several others in New York City, and would be worth looking into if you haven't done so already.

Consider also an aviation or automobile museum. There are some locally, I believe. If you are ever in Dayton, OH, there is a spectacular military aircraft museum at Wright-Patterson Air Force Base. Chris Tucker may know of others (as well as science museums, for that matter; he has been to several around town with his kids).

Another simple thing to do would be to take him to a local hardware store like Home Depot or Lowes or even a craft store. There's a lot of fun mechanical stuff in those places, and they also have the raw materials to make all manner of things. If nothing else, it's easy to do and you could gauge his interest in various things to see what he likes most.

You mentioned that Aubrey likes structures. My all-time favorite structures are (in this order):

1. The Hoover Dam

2. Arch of St. Louis (Gateway Arch)

3. Eiffel Tower

4. Washington Monument

5. Sears Tower

A trip to any of these would probably be an absolute delight for him. Oftentimes these places will have museums on the structure, and you can get kits or books or videos at the local gift shops there that could further his learning and interest.

Other things to look for include films and documentaries on the above structures. For example, I saw a very interesting show about a suspension bridge built somewhere in Europe (Norway, maybe?), assembled section by section, each built off of the previous ones. The film chronicled the construction over time and also highlighted some of the technical and behind-the-scenes issues that the engineers faced. It was a great show.

Some other things that I have always been fascinated with, even as a kid, that he might also find interesting:

- Power plants: there is just about everything in one of these, and everything is super-sized.
- Mechanical equipment rooms in buildings of all kinds, with pumps, ducts, pipes, valves, gauges, and control systems. To this day I still love this kind of thing.
- Water towers, especially the kind where the tank is suspended off the ground on legs. I can't tell you why I liked these so much, but I always did, and there was one close to our house in Ohio when I was a kid.
- Factories: again, there is just about everything here: robots, assembly lines, machining operations of all kinds, conveyor belts, hydraulics, pneumatics, electric motors, sensors, etc. Factories are usually very densely packed, so you can see a lot in a small space.
- Dams: perhaps it's their sheer size, or the enormous amount of water that they hold, but there is something incredibly captivating about a dam. It's no coincidence that Hoover Dam is my single most favorite structure.
- Cars and engines: underneath the hood of a modern car is a marvel of engineering. Just to see the belts turning and fans spinning might be very enjoyable. Obviously use caution.
- Bridges: I've always liked them, although they never captivated me as much as some of the other things above. Still, they are impressive structures, and there are a bunch around the greater New York area to have a look at. It might be worth looking into possible tours of any of the bridges.
Yet another great resource is any number of TV shows. There is one called "How It's Made," on the Science Channel, that covers everything from soft drinks to fiberglass to fire hydrants. I've seen it several times and have enjoyed every show.

Then there is the web. A site that I have sent my own students to on several occasions is called "How Stuff Works." They usually have nice graphics to describe all manner of devices and mechanisms. It might be a bit advanced for him now, but he could certainly look at the pictures and animations.

Finally, you mentioned in passing a tutor of some kind. One thought might be to hire an engineering or physics student to spend some time with Aubrey, say once every week or two, or to come for a week during the summer. Many students post flyers offering their services for tutoring and the like around campus, and they are always interested in making a few bucks. Columbia and CUNY have good engineering programs, as does Polytechnic. It would take a bit of effort to get the right person, but if and when you did, it could be a great experience.

All thoughts welcome. If I think of other things that would be useful, I'll gladly pass them along if interested.

Jon Longtin, Ph.D.

Associate Professor and Undergraduate Program Director

Department of Mechanical Engineering

Rocky Humbert writes:

Does anyone know whether there is a successor to "Things of Science"?

My parents subscribed my brother and me to this in the 1960s. Each month a little blue box would arrive in the mail with genuine hands-on scientific experiments suitable for children. It was a much simpler time (before the internet, etc.), but the program whetted our appetites and contributed to our both pursuing engineering/science in college, graduate school, and beyond.

Jonathan Bower adds:

This is one of my favorite "toys" for learning.



 Most of us view the probability of an event as being between zero and one. But this is a simplification. Negative probabilities exist in physics, and they "probably" exist in the markets.

Additionally, probabilities greater than 1 exist too. Probabilities which are less than zero or greater than one are called "extended probabilities."

This is the first paper that I've seen which builds a mathematical model of interest rate options using negative probabilities. Previous papers have dealt with "risk-neutral" and "pseudo-probabilities." The authors also promise an upcoming paper that describes financial models for events with negative probability. The paper makes good reading for those who enjoy dividing by zero and taking the square root of a negative number.

For the less geeky: besides negative interest rates, can anyone think of some real-world examples of negative probabilities? Or probabilities greater than one? Is "Hell freezing over" an extended probability?

Rocky Humbert, quantitative analyst, speculator and master chef, blogs as OneHonestMan.

Bruno Ombreux comments:

If it is less than 0 or more than 1, then by definition it is not a probability. It is not even a measure. They could call them anything, for instance "tiger-striped Ferraris," but they should not call them "negative probabilities".

The reason for requiring semantic discipline is that this tongue-in-cheek article is targeted at finance people. People in the finance industry are generally clueless and take this kind of joke at face value.

Right now, I am studying Bayesian statistics, where they make ample use of calculation hacks and gimmicks. For instance, they use Dirac masses as probability densities (height is infinite, width is zero, and area is 1). But they know exactly what they are doing, and nobody is fooled by the vocabulary.
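The Dirac-mass picture can be seen numerically: approximate the mass by ever narrower normal densities and watch the peak height explode while the area stays pinned at 1. A minimal sketch (the grid width and point count are illustrative choices of mine):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution with mean mu and std dev sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def area_under(sigma, n=20000):
    """Crude Riemann sum of the density over +/- 10 sigma."""
    dx = 20.0 * sigma / n
    return sum(normal_pdf(-10.0 * sigma + k * dx, 0.0, sigma)
               for k in range(n)) * dx

# As sigma shrinks, the peak height 1/(sigma*sqrt(2*pi)) blows up,
# while the total area stays at 1 -- the Dirac picture.
for sigma in (1.0, 0.1, 0.001):
    print(sigma, normal_pdf(0.0, 0.0, sigma), round(area_under(sigma), 4))
```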

It's different when the audience is the unwashed masses at banks or hedge funds. For them, a dog should be called a dog, a cat a cat…

Rocky Humbert responds:

While I often have my tongue planted in cheek (as well as foot planted in mouth), that is actually not the case here.

Mr. Bruno's first sentence is entirely correct with respect to "classical" probability theory. However, he might consider the possibility that Extended Probability Theory is analogous to Einstein's Relativity Theory extending Classical Newtonian Physics. (i.e. there are practical applications of atomic physics in our mundane lives — one doesn't need to travel at the speed of light to see this.)

I'm not a mathematician (and I don't play one on TV either) but I'm told that negative probabilities have a long history for people thinking about the foundations of quantum theory. Feynman wrote about them and the concepts led to his initial work on quantum computing. (Basic quantum computers now exist.)

Additionally, in markets, I believe that "Dutch Books" may give rise to extended probabilities.

I can't quarrel that some folks in the finance industry are clueless. (C'est moi, Monsieur!) but I don't find Brownian Motion-based options pricing models entirely satisfying either… hence I try to keep an open mind.

Jon Longtin writes:

I would caution against mixing mathematics with physics. Math is an (actually the only) absolute science, whose existence is defined completely in terms of stated rules and relationships. It is, at the end of the day, a very large body of definitions.

Probability, in the mathematical sense, is the chance that a particular outcome will happen, with the assumption that the outcome can at least happen. If an event can never happen, its probability is zero, and if it always happens, it is one. *Mathematically*, to speak of events outside this context is meaningless.

Physics is, well, physics. The world is the way it is, and it's our job to describe it to the best of our ability. A tool that does this remarkably well is mathematics. Often, though, as we learn more, the physical models have to be revised, expanded, and reinterpreted given new information and insights. When we look at our new and improved models through the lens of the old model, though, strange things happen. This is true with relativity, with quantum mechanics, and with the discovery that the earth went around the sun, to name a few.

There are, for example, new materials characterized as having a negative index of refraction, n (a measure of how strongly materials bend light, and the reason a pencil looks bent in a glass of water); classically, vacuum has n = 1, air is about 1.0001 or so, and water is 1.33, with values less than 1 physically impossible. There are, however, new materials being developed that do not exist in nature, but rather are engineered structures that give the illusion of having a negative index. The point is that no physics is being violated here; only that the model needs to be revised, and fitting the new material into the old model will result in surprising and sometimes counterintuitive understanding.

It is sometimes tempting to introduce an extension to the old model, such as a negative refractive index or a negative probability, but the more rigorous approach is to redefine the physical model from the ground up to capture the new phenomenon in a rigorous way.

Jon Longtin, Ph.D, is Associate Professor and Undergraduate Program Director, Department of Mechanical Engineering, State University of New York at Stony Brook

Bruno Ombreux replies:

In the case of the negative probability article, they are quick to dismiss the obvious and parsimonious solution that everyone has been using, and replace it with some harebrained theory.

Negative interest rates are nothing new and not a problem. We have had plenty of tradeables that have always been able to go negative, with active OTC option markets in them, e.g., crack spreads…

The simple solution is to use a normal distribution instead of a log-normal one, and if you are still not happy, to use the empirical distribution.
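The "normal instead of log-normal" suggestion is essentially the Bachelier model, whose call formula is perfectly well defined for a negative underlying. A minimal sketch (undiscounted, with illustrative numbers of my choosing):

```python
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bachelier_call(forward, strike, sigma, t):
    """Undiscounted call price under the Bachelier (normal) model.
    sigma is an absolute (not percentage) volatility, so the forward --
    a rate or a crack spread, say -- is free to be negative."""
    s = sigma * math.sqrt(t)
    d = (forward - strike) / s
    return (forward - strike) * norm_cdf(d) + s * norm_pdf(d)

# A spread currently at -0.5 still gets a sensible, positive option price:
print(bachelier_call(forward=-0.5, strike=0.0, sigma=1.0, t=1.0))
```

A log-normal (Black-Scholes-style) model would choke on the negative forward; here it just shifts the distribution.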

Tom Marks comments:

There is a fine volume recently out by the wonderfully polymathic Clifford Pickover called The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics.

Reading each of these 250 entries is like doing a set of 25 push-ups for one's mind. And not just the left side, as the methodology behind fractal artwork indicates.

On the subject of square roots of negative numbers, of which Rocky wrote, Dr. Pickover touches on the formerly ridiculed notion of imaginary numbers, the contributions of Bombelli, et al., and writes:

"An imaginary number is one whose square has a negative value. The great mathematician Gottfried Leibniz called imaginary numbers 'a wonderful flight of God's Spirit; they are almost an amphibian between being and not being.' Because the square of any real number is positive, for centuries many mathematicians declared it impossible for a negative number to have a square root. Although various mathematicians had inklings of imaginary numbers, the history of imaginary numbers started to blossom in sixteenth-century Europe. The Italian engineer Rafael Bombelli, well known during his time for draining swamps, is today famous for his Algebra, published in 1572, that introduced a notation for √-1, which would be a valid solution for the equation x² + 1 = 0. He wrote, 'It was a wild thought in the judgment of many.' Numerous mathematicians were hesitant to 'believe' in imaginary numbers, including Descartes, who actually introduced the term imaginary as a kind of insult."

"Leonhard Euler in the eighteenth century introduced the symbol i for √-1 (for the first letter of the Latin word imaginarius), and we still use Euler's symbol today. Key advances in modern physics would not have been possible without the use of imaginary numbers, which have aided physicists in a vast range of computations, including efficient calculations involving alternating currents, relativity theory, signal processing, fluid dynamics, and quantum mechanics. Imaginary numbers even play a role in the production of gorgeous fractal artworks that show a wealth of detail with increasing magnifications. From string theory to quantum theory, the deeper one studies physics, the closer one moves to pure mathematics. Some might even say that mathematics 'runs' reality in the same way that Microsoft's operating system runs a computer. Schrödinger's wave equation, which describes basic reality and events in terms of wave functions and probabilities, may be thought of as the evanescent substrate on which we all exist, and it relies on imaginary numbers."
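The √-1 that Bombelli notated and Euler named is built into most modern languages; a quick illustration in Python, whose standard cmath module handles complex arithmetic:

```python
import cmath

# The square root of -1; Python writes the imaginary unit as 1j
i = cmath.sqrt(-1)
print(i)          # 1j
print(i ** 2)     # (-1+0j): squaring it recovers -1

# Bombelli's equation x^2 + 1 = 0 is solved by i
x = 1j
print(x * x + 1)  # 0j
```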

Here is Dr. Pickover's website (well worth a look).

Rocky Humbert replies:

With your reference to Dr. Pickover, you tied together many loose ends:

Mr. Maner alluded to the fact that there is a probability greater than one that "…vampires will invade the literary world and be a profitable genre." He referenced the epic drama "Abraham Lincoln: Vampire Hunter." Lo and behold, it was Dr. Pickover who invented vampire numbers. I would wager that this is the first time that negative probability, markets, Abraham Lincoln, and vampires were all discussed in the same thread on Dailyspec. (The probability of this happening is comparable to the odds that the S&P will rise five more days in a row. I therefore conclude that this must be an omen, and I just bought ONE March S&P 116 call as an homage to negative probability and vampires.)

Sushil Kedia writes:

The first and simplest example of negative probability at work in the markets comes to my mind from the Chair's oft-repeated emphasis on deception in the markets.

Let me use an example:

A street conman's game, very much prevalent in India near the smaller train stations and ports, has the hustler hold a heavy muslin bag with two hands and offer a wide peep inside, where you see a nicely mixed hoard of coloured and natural peanuts. The odds offered are 3:1 for you to multiply your money if you lift out a coloured peanut. You rush in, playing a game apparently loaded in your favour. When you shove your hand in, the mouth of the bag is held much closer around your wrist than when you were enticed into the game.

You pull out the peanut and it is not coloured.

The trick deployed is that there is another bag within the bag containing only uncoloured peanuts.

In the hustler's awareness, the outcome is certain, thanks to his skill at deception. In the player's awareness, the probability of the outcome is somewhere close to 50:50. In the awareness of an analyst like me, who has many times burnt the hand that tried picking the coloured peanut, it is 50/(50+50+100), or 0.25, allowing for the chance that the hustler fails to close off the mixed bag and force your hand into the hidden bag of plain peanuts.

The difference in probabilities known to the newbies and the analysts is 0.5 - 0.25 = 0.25, the awareness advantage. The difference between the analysts and the hustler is 1 - 0.25 = 0.75, the hustling advantage. The difference between the hustler and the newbies is 0.5 - 1.0 = -0.5, the ignorant's negative probability. Awareness advantage minus the ignorant's negative probability = 0.25 - (-0.5) = 0.75 = the hustling advantage; in other words, the negative probability is the awareness advantage minus the hustling advantage.
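The gap between perceived and actual odds can be made concrete by pricing the game under each participant's probability. A sketch (the payoff convention, win 3x the stake or lose it, is my reading of the 3:1 offer above):

```python
def expected_value(p_win, payout=3.0, stake=1.0):
    """Expected profit per game: a win pays `payout` times the stake,
    a loss forfeits the stake."""
    return p_win * payout * stake - (1.0 - p_win) * stake

# Probabilities of drawing a coloured peanut, one per participant:
p_newbie  = 0.50  # believes the bag is fairly mixed
p_analyst = 0.25  # the burnt-hand estimate, 50/(50+50+100)
p_hustler = 0.00  # the inner bag holds only plain peanuts

print(expected_value(p_newbie))   # 1.0  -- the game looks favourable
print(expected_value(p_analyst))  # 0.0  -- break-even at best
print(expected_value(p_hustler))  # -1.0 -- the certain loss
```

The newbie prices the game as a gift, the analyst as a wash, and the hustler knows it is a pure transfer; the spread between those valuations is exactly the deception being sold.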

Replace the peanuts - plain and the mixed ones with earnings guidance and announcements and you realize the negative probability the masses face vis-a-vis the insiders assuming all else being equal.

Replace the peanuts - plain and the mixed ones with counted stats the pros are playing with and the bags of code (not only computational but simply deal flow informational) the glittering big firms can have.

So on and so forth.

With this perspective, gaps in perception, information, imagination, awareness, model specification, and the ability to loot, peddle, and hustle all fit well within a concept of negative probability. The Oracle of Delphi, as explained in The Education of a Speculator, played precisely on the negative probability that the masses believe is non-existent and are happy to live with. Despair, disdain, and the pursuit of short-cuts and roads to quick riches have all been built with bricks of negative probability.

T.K Marks writes:

On the subject of meals, I see today a poignant portrait of the food chain. These photos from Colorado are spectacular.

"…The starling seems to be completely unaware it is on the lunch menu as the bald eagle makes it attack at high speed…" On some level we've all been starlings at one point or another.


Resources & Links