# Goedel’s Proof, from Russell Sears

It seems to me that Goedel's "Incompleteness Theorem" proves the limits of reason and science: in any system complex enough to do arithmetic, either:

1. there exist true properties that are not provable,
2. or the system is inconsistent.

If you understand his motives and what he thought his incompleteness theorem proves, then: the Platonists were right. Not the Sophist ("Man is the measure of all things"), not the rigid scientist/empiricist, nor the Mystic.

It is said that some of Goedel's descent into madness stemmed from his not understanding how others could misunderstand the implications of his proof, which were so clear to him.

Few even understand what the proof proves, let alone its implications, and fewer still understand the proof itself.

According to his good friend Einstein, Goedel was every bit as much a genius as Einstein himself.

## Jim Sogi responds:

From my read of Luck, Logic, and White Lies by Jörg Bewersdorff, rather than marking the limits of reason, Goedel marked the end of a period in the history of math and the beginning of what I might describe as the probabilistic age. Many of the advances in physics, and our own market science, rest on probabilistic mathematics, and this is the new frontier, in the same way that the mathematical fiction of a limit allowed Newton to initiate modern science. To me it is a peculiar type of math, with variables representing shifting penumbras, but it is what gives an advantage over those relying on linear or fixed systems.

## Jason Schroeder exclaims:

Don't drag Goedel into this!

Your Bayesian Vagabond cautions against over-exuberance concerning what Mr. G proves about limits.

Moving to probabilities does not remove the problem. Probabilities are deductions taken under uncertainty. Otherwise probabilities, including the famous 0 and 1, are mental fixations aiding the proving/deducing process. Incompleteness holds that that abstract process cannot prove everything. Some things require a different tactic or strategy.

More symbols (limits and penumbras and strings of numerals) do not create more possibilities to defeat incompleteness. We all gotta work for our dinner intellectually. Take the risks and change the rules.

Mr. G proved that Hilbert championed a dead end. The scientific air of the time carried the phrase "final solution," voiced by Hilbert and his groupies; the German politicians were just being savvy in bringing the notion to the people. It exposed the axiomatizing, encoding, formalistic pretension by which the averagely clever think they can automate mapping out a solution before taking to the field.

"It should anyway be observed that Gödel's theorem is not the anti-scientist panacea… science is primarily seeking questions" not proving correctness before trying (that is called self-righteousness in another tradition).

Remember that Popper's beloved falsifiability ignores Goedel's work, because it is not falsifiable! Goedel refutes both Hilbert and Popper.

More from Girard, a mathematical logician:

It is out of the question to enter into the technical arcana of Gödel's theorem, for several reasons:

(1) This result, indeed very easy, can be perceived, like the late paintings of Claude Monet, but from a certain distance. A close look only reveals fastidious details that one perhaps does not want to know.

(2) There is no need, either, since this theorem is a scientific cul-de-sac: in fact it exposes a way without exit. Since it is without exit, there is nothing to seek there, and it is of no use to be an expert in Gödel's theorem.

…never forget Turing's contribution to computer science, a contribution which mainly rests on a second reading of Gödel's theorem; the fixed point of programs is nothing more than the celebrated algorithmic undecidability of the halting problem: no program is able to decide whether an arbitrary program will eventually stop, and there is no way to get around this prohibition. This is a simplified version of the incompleteness theorem … loses very little …
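The undecidability Girard mentions is easy to sketch. The following Python toy (my own illustration, not from Girard) is Turing's diagonal argument in miniature: given any claimed halting decider, we can build a program that consults the decider about itself and does the opposite, so the decider must be wrong about it.

```python
def make_counterexample(halts):
    """Given any claimed halting decider `halts`, build a program it misjudges.

    g asks the decider about itself and then does the opposite.
    """
    def g():
        if halts(g):
            while True:      # decider said "halts" -- so loop forever
                pass
        # decider said "does not halt" -- so halt immediately
    return g

# A deliberately naive claimed decider: it says nothing ever halts.
def pessimist(program):
    return False

g = make_counterexample(pessimist)
g()                      # g returns at once, so the decider was wrong about g
print(pessimist(g))      # False -- yet g just halted
```

Had `pessimist` instead answered True for `g`, `g` would loop forever, and the decider would be wrong in the other direction; no definition of `halts` escapes both branches.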

## Russ Sears concludes:

I have not read Luck, Logic, and White Lies, so I am left to judge by your brief description.

Much has been written about Goedel's proof and its implications by those who don't really understand it, or who, if they do, give only the part of the story they want you to hear. This was part of Goedel's frustration.

To quote an expert on Goedel, Rebecca Goldstein, "…the second incompleteness theorem doesn't say that the consistency of a formal system of arithmetic is unprovable by any means whatsoever. It simply says a formal system that contains arithmetic can't prove the consistency of itself. After all, the natural numbers constitute a model of the formal system of arithmetic and if a system has a model then it is consistent…In other words, when the formal system of arithmetic is endowed with the usual meaning, involving the natural numbers and their properties, the axioms and all that follow from them are true and therefore consistent. This sort of argument for consistency, however, goes outside the formal system, making an appeal to the existence of the natural numbers as a model"

The goal was "to expunge all reference to intuitions-was most particularly directed toward our intuitions of infinity: not surprisingly, finite creatures that we are, it is these intuitions that have proved themselves, from the very beginning, to be the most problematic." … "This can only be done by going outside the formal system and making an appeal to intuitions that can't themselves be formalized."

In other words, my own this time: Goedel's ideas no doubt did help usher in what you call "the probabilistic age". He did so by making scientists question even the subtlest assumptions in their methods and models. But I would suggest that Heisenberg's uncertainty principle had much more of an effect in causing a "probabilistic age" than Goedel did, if for no other reason than that it came first.

The implication for reason is that a pure Spock is not possible…that intuition must be a part of the process, and hence the value of standing like a tree, or running 70 miles a week in the cold of December. Or, as Einstein and Goedel both did, going for long walks, often together, which can increase your scientific output by giving you a chance to put things in perspective.

While this clearly has implications for a speculator, I would suggest that the bigger implication is that, finite creatures that we are, we should always remain humble and be open to the idea that even as "counters" we are heading down a wrong path. We do not always have an edge, despite what the numbers say. Not that "counting", reason, or science is wrong… rather that we are using it wrong, that we missed something in our model. The limit to science and reason is us.

# Canada/US Goods Arbitrage, from Gordon Haave

The Canadian dollar is at parity with the \$US… Lots of products, such as books, have the price listed in \$US and \$Canadian based on old exchange rates. Buy products in the US, rent a van, take them to Canada, return them. Rinse and repeat. Consumer goods arbitrage increases market efficiency by forcing producers to stop being silly about their pricing.

eBay is a Canadian's best friend. Of course, the customs dudes are profiting from that arbitrage themselves. Getting goods across the border is taking a rather long time. Getting packages from the UK, on the other hand, is faster than in country.

## Gabe Carbone remarks:

There was a topic paper on this by an economist at one of the Canadian banks in the past couple of months. He listed the top goods with arbitrage opportunities.

## Sam Humbert extends:

A cross-pond arb I stumbled on: I bought a copy (UK version) of "The Seven-Day Weekend: A Better Way to Work in the 21st Century" by Ricardo Semler for a couple of dollars at a yard sale, and have been skimming it. I was curious to see how its AMZN reviews looked, so I checked AMZN/US, and was surprised to find the book is rare and valuable:

6 used & new available from \$66.46

Then I noticed that at AMZN/UK, it's a penny plus shipping:

58 used & new available from £0.01

A project for Prof. Haave: buy all 58 UK copies and eBay them in the US.
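A rough back-of-the-envelope for that project. The only figures from the post are the 58 copies at £0.01 and the lowest US listing of \$66.46; the exchange rate, shipping, and fee numbers below are illustrative assumptions, not data.

```python
# Hypothetical arb P&L. Only `copies`, `uk_price_gbp`, and `us_price_usd`
# come from the post; everything else is an assumed round number.
copies = 58
uk_price_gbp = 0.01        # from the AMZN/UK listing
us_price_usd = 66.46       # lowest AMZN/US listing
gbp_to_usd = 2.00          # assumed exchange rate
uk_shipping_gbp = 2.75     # assumed per-copy UK shipping
us_fees_usd = 5.00         # assumed per-copy eBay/shipping fees

cost_usd = copies * (uk_price_gbp + uk_shipping_gbp) * gbp_to_usd
revenue_usd = copies * (us_price_usd - us_fees_usd)
profit_usd = revenue_usd - cost_usd
print(round(profit_usd, 2))
```

Under these made-up costs the gross profit is a few thousand dollars; the real constraint would be whether the US price survives 58 new sellers.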

# Volatility is Not Variance, from Jason Schroeder

August 16, 2007

Where would I find someone or a school to agree with the statement "volatility is not variance?"

My problem is with variance, not with volatility. Indeed, I might be forced to accept that the computation of volatility needs to be the computation we call variance when given some data.

So given that these topics elide, and volatility is something we all seemingly experience, does anyone know of a school, perspective, or person that denies volatility and variance are one and the same, and proceeds to define volatility in some other manner?
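For concreteness, here is the conventional identification the question pushes back on: in practice "volatility" is usually computed as the square root of the sample variance of returns, scaled to an annual horizon. This sketch (my own, with made-up return data and the common 252-trading-day convention) shows that the two numbers are the same object up to a square root and a scaling.

```python
import math

def sample_variance(xs):
    """Unbiased sample variance (divides by n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def annualized_volatility(daily_returns, periods_per_year=252):
    """The usual definition: sqrt of variance, scaled to a yearly horizon."""
    return math.sqrt(sample_variance(daily_returns) * periods_per_year)

returns = [0.01, -0.02, 0.015, -0.005, 0.0]   # illustrative daily returns
print(sample_variance(returns))
print(annualized_volatility(returns))
```

Any school that "denies volatility and variance are one and the same" would have to replace the `sqrt(variance)` step with something else, e.g. a range-based or experienced-path measure.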

# Spirulina, from Jason Schroeder

I cannot fail to remind all of the benefits of Spirulina, especially the well-grown varieties — not all algae ponds are equal. It is a superfood that is simply more useful to the body than the same volume of other food, and the brain gets the message by being less hungry. I find that Atkins in addition to Spirulina beats the cravings, while simply using the proteins and fats to turn the dials on the metabolism.

## Charles Pennington writes:

The last thing that I did in my academic life, or rather the last thing that a very, very talented student in my lab, Luisa Ciobanu, did while I watched, was to obtain a magnetic resonance image (MRI) of a single cell of the algae Spirogyra, which is very similar to Spirulina. It's a plant cell shaped like a cylinder, with a diameter of about 40 micrometers. The "spiro" in the name is there because the cell has green chloroplasts that align in a spiral pathway along the inner cell wall. Dr. Ciobanu was able to resolve these chloroplasts (each just a few micrometers in diameter) piercing in and out of the image planes. The MRI images are on page 75 of our review article, and Dr. Peter Sengbusch has also made some pretty pictures using a regular optical microscope. It never occurred to me that it would be a good idea to eat these things!

# A Sermon on Programming Languages, from Jason Schroeder

Programming languages are cool. Love them. Like them. The most important languages are Prolog, ML, and Haskell; the best ideas are represented there. The problem is that people imagine that somehow a language makes programming easier. It does not. But the hope remains.

The major abstractions are function calls, garbage collection, exceptions, and objects. Once one leaves the safety of the fire, there are language features like monitors, first-order types, first-order functions, functors/module systems, dynamic scoping, unification (Prolog only), and annotations, and then one gets into the really rare toy features. Way back when, garbage collection was considered a toy feature. (The chip provides the number, operation, and memory abstractions.)

The language exists for the writer, not the computer. So many language features are written with the expectation that somehow the author's task is made easier. To do this, the features have to be "different" (orthogonal) rather than just syntactic sugar. Reducing keystrokes is nice in the beginning, but the real power is doing stuff that simple keystroke reduction cannot do! Like creating functions at runtime. Or creating interconnected packages of classes and objects at runtime by inserting meta-parameters of types and data. (Macros on steroids.) Features like that free one from writing all the support code to make it happen in one's own way. One can look at the source code and go "oh, this is what is happening," as the language demarcates the parts that truly vary from the parts that are simply different!

A handy library can do all this work. But then the library had better be a standard library, so that it is not just some other pile-o-junk with suggestive function names. So the difference between a well-developed library and a language feature is minimal. In fact, the only language feature that cannot be implemented as a function is short-circuiting AND and OR… C++ has too many language features (arguments ensue). And C has so few it is amazing that that is all one really needs. But without their standard libraries for string manipulation, I/O, POSIX compliance, etc., these languages would be nothing but curios.

I really want to spend my time writing in ML, the funkiest. It has great libraries, but not enough to keep me from having to write library wrappers. Haskell provides the highest meta-abstractions I know of, but it is not widely used, so its practicality is lessened (but oh, one can show how smart one is by orchestrating amazing meta-programming abstractions to make the compiler write the program for you). Prolog is Prolog: if the problem is expressible in Prolog there is no reason not to use it; the debate rests on whether anything useful is expressible in Prolog…

In conclusion, the best advice is to find tight library functionality and stay close to C/C++/Java. C# is more marketing than stable. Behemoth libraries get spooky. If you are trying to write web services, then Ruby on Rails is way cool. But if you are hankering for a real server, then C/C++/Java are mandatory. Java servers are freakish mounds of code to the uninitiated. A good old C server was demonstrable in 200 lines, but the expectation of servers has increased infinitely since the Internet Bubble!
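Two of the claims above are easy to make concrete (here in Python, as a stand-in for ML's higher-order functions): functions really can be created at runtime via closures, and short-circuiting AND cannot be an ordinary function, because a function's arguments are evaluated before the call.

```python
# Claim 1: creating functions at runtime -- a closure factory.
def make_adder(n):
    def add(x):
        return x + n
    return add

add5 = make_adder(5)      # a brand-new function, built at runtime
print(add5(10))           # 15

# Claim 2: short-circuit AND cannot be a plain function.
calls = []

def right_side():
    calls.append("evaluated")
    return True

def and_fn(a, b):         # a function version of AND
    return a and b

and_fn(False, right_side())   # right_side() runs even though a is False
print(calls)                  # ['evaluated']

calls.clear()
result = False and right_side()   # the built-in operator short-circuits
print(calls)                      # []
```

This is exactly why languages with lazy evaluation (Haskell) or macros can express AND as an ordinary definition, while eager languages must bake it in.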

A good, practical intro to Haskell has been rolling out on Mark Chu-Carroll's blog. The tone of it isn't "for dummies," but it's clear and direct. Even I can make some sense of it. Haskell is arguably the most important of the newer (i.e., non-Lisp) "functional languages". It's kinda-sorta like Python visually, and has some similarities to R, which has itself been accused of being a functional language.

# Bayes: Complete Ignorance and Stated Truth, by Jason Schroeder

Ignorance is preferable to error and he is less remote from the truth who believes nothing than he who believes what is wrong. — Thomas Jefferson (Notes On Virginia)

During my ongoing Bayesian spelunking, I have run across the idea of “complete ignorance” and “stated truth”.

Reading probability theory provides background to things we already know.

1. The differences between people who both claim to know nothing about a situation must be tested anyhow.
2. The similarities in answers between people do not differentiate the volume of perspective.

Complete Ignorance:

One might think that such a thing is an important part of Probability Theory, a lot like zero-ness.

But it seems nailing down what you do not know results in discovering things one does know. Discovering if a problem is location or scale or rotationally invariant provides an idea of what one does not know: that is, the problem itself encodes what you do not know.

And complete ignorance may be relative only to the problem, not the observer (being dumb does not count in this exercise).

A full blown modeling of “complete ignorance” would create a starting point for building knowledge, but that is off in another realm of thinking.

Stated Truth:

As an experiment, one can play with meta-distributions of probabilities: Ap

P(A | Ap, X) = p

Ap means: regardless of anything else you might have been told, the probability of A IS p.

The net effect of this "rule" is that for new information to update the meta-distribution Ap, the likelihood P(New | Ap) must have some slope, to either narrow the distribution or give some focus to the uniform.

The integrals are fun to play with. The regular case, arguing with someone who has an Ap distribution in his head, we all know: his receptiveness to new ideas is mathematically predetermined; if the new information creates unwanted redefinition, there is no need to reprocess the meta-distribution. Dogma about Ap is different from knowing why Ap is relevant.
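A discretized sketch of that last point (my own illustration, not from the post): represent belief about p on a grid, update it by Bayes on one new observation of A, and compare an open (uniform) prior with a dogmatic delta-function Ap. The uniform prior shifts; the delta function is mathematically predetermined to stay put.

```python
def bayes_update(density, grid, observed_true):
    """Update a discretized meta-distribution over p on one observation of A.

    The likelihood of the observation given p is p (A occurred)
    or 1 - p (A did not occur).
    """
    likelihood = [p if observed_true else 1 - p for p in grid]
    posterior = [d * l for d, l in zip(density, likelihood)]
    total = sum(posterior)
    return [w / total for w in posterior]

grid = [i / 10 for i in range(11)]          # p = 0.0, 0.1, ..., 1.0

# An open mind: uniform over p. Observing A shifts mass toward high p.
uniform = [1 / 11] * 11
updated = bayes_update(uniform, grid, observed_true=True)
print(sum(p * w for p, w in zip(grid, updated)))   # mean moves above 0.5

# A dogmatic mind: all mass on p = 0.5 (a delta-function Ap).
dogma = [1.0 if abs(p - 0.5) < 1e-9 else 0.0 for p in grid]
after = bayes_update(dogma, grid, observed_true=True)
print(after == dogma)   # no evidence can move a delta-function prior
```

The zeros in the dogmatic prior annihilate every likelihood: receptiveness to new ideas is fixed by the shape of Ap before any data arrives.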