
# Probability is as Useful to Physics as Flat-Earth Theory, from Dylan Distasio

February 17, 2016

I thought this was an interesting opinion piece from David Deutsch, who has some creative ideas in theoretical physics:

"Probability is as useful to physics as flat-Earth theory"

## Gibbons Burke writes:

String theory, or more particularly, M-theory, which represents a current SWAG (Scientific Wild-Assed Guess) at the grand-unifying-theory-of-everything, requires some eleven dimensions to make it all work out.

Our mortal, finite, deterministic mental capacities can wrap our space-time-evolved brains around four or five dimensions; with instruments, perhaps a few more.

Perhaps randomness is how we get a handle on behavior which defies rational explanation in our four-dimensional flatland of what seems to be the 'natural' material world; if there are eleven or more dimensions, then perhaps what seems random for us has rules beyond our ken which govern the dynamics of the other invisible, shall we say, 'super-natural', dimensions.

## Ralph Vince writes:

I think people are missing the point of the article Dylan puts here. The author of this simple piece is discussing things that are right in my ambit, what I call "Fallacies in the Limit." The fundamental notion of expectation (the probability-weighted mean outcome), foundational to so much in game theory, is sheer fallacy (what one "expects" is the median of the sorted, cumulative outcomes at the horizon, which is therefore a function of the horizon).

To see this, consider a not-so-fair coin that pays 1:-1 but falls in your favor with a probability of .51. The classical expectation is .02 per play, and after N plays, .02N is what you would expect to make or lose, for player and house alike, by the math of this fallacious approach (and I say fallacious because it does not comport with real life). That is, if I play it one million times, sequentially, I expect to make 20,000; and if a million guys each play it once against a house (2% in the house's favor), the house expects to make 20,000.

And I refer to the former as horizontal ergodicity (I go play it N times), the latter as vertical ergodicity (N guys come play it one time each). But in real life these are NOT equivalent, given the necessarily finite nature of all games, all participants, all opportunities.
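
The distinction can be sketched with a quick simulation (my illustration, not from the post; the seed and trial count are arbitrary):

```python
import random

random.seed(42)   # arbitrary seed, for reproducibility

P_WIN = 0.51      # coin falls in the player's favor with probability .51
N = 1_000_000     # number of plays / number of players

def play():
    # One 1:-1 toss: +1 with probability .51, else -1.
    return 1 if random.random() < P_WIN else -1

# Horizontal ergodicity: one player plays N sequential rounds.
horizontal = sum(play() for _ in range(N))

# Vertical ergodicity: N players each play a single round against the house.
vertical = sum(play() for _ in range(N))

print(f"one player, {N} plays: net {horizontal:+}")
print(f"{N} players, one play each: net {vertical:+}")
```

Both totals land near the classical .02 x N = 20,000 in aggregate; Ralph's argument is that this aggregate equivalence says little about what any single finite-horizon participant should expect.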

To see this, let us return to our coin toss game, but inject a third possible outcome: the coin lands on its side with a probability of one in one million, and an outcome which costs us one million. Now the classically thinking person would never play such a game, the mathematical expectation (in classical terms) being:

.51 x 1 + .489999 x -1 + .000001 x -1,000,000 = -.979999 per play.

A very negative game indeed. Yet the player whose horizon is 1 play expects to make 1 unit on that one play (if I rank all three possible outcomes at one play and take the median, it is a gain of one unit). Similarly, if I rank all 9 possible outcomes after 2 plays, the player, by my calculations, should expect a net gain of .020001 after 2 plays of this three-possible-outcome coin toss, versus the classical expectation of a net loss of -1.959998 (a wager I would have gladly challenged Messrs. Pascal and Huygens with). To see this, consider the 9 possible outcomes of two plays of this game:

| toss 1 | toss 2 | outcome |
|---|---|---|
| 0.51 | 0.51 | 1.02 |
| 0.51 | -0.489999 | 0.020001 |
| 0.51 | -1000000 | -999999.49 |
| -0.489999 | 0.51 | 0.020001 |
| -0.489999 | -0.489999 | -0.979998 |
| -0.489999 | -1000000 | -1000000.489999 |
| -1000000 | 0.51 | -999999.49 |
| -1000000 | -0.489999 | -1000000.489999 |
| -1000000 | -1000000 | -2000000 |

The outcomes are additive. Consider the corresponding probabilities for each branch:

| toss 1 | toss 2 | product |
|---|---|---|
| 0.51 | 0.51 | 0.260100000000 |
| 0.51 | 0.489999 | 0.249899490000 |
| 0.51 | 0.000001 | 0.000000510000 |
| 0.489999 | 0.51 | 0.249899490000 |
| 0.489999 | 0.489999 | 0.240099020001 |
| 0.489999 | 0.000001 | 0.000000489999 |
| 0.000001 | 0.51 | 0.000000510000 |
| 0.000001 | 0.489999 | 0.000000489999 |
| 0.000001 | 0.000001 | 0.000000000001 |

The probabilities along each branch are multiplicative. Combining the 9 outcomes with their probabilities and sorting them, we have:

| outcome | probability | cumulative prob |
|---|---|---|
| 1.02 | 0.260100000000 | 1.000000000000 |
| 0.020001 | 0.249899490000 | 0.739900000000 |
| 0.020001 | 0.249899490000 | 0.490000510000 |
| -0.979998 | 0.240099020001 | 0.240101020000 |
| -999999.49 | 0.000000510000 | 0.000001999999 |
| -999999.49 | 0.000000510000 | 0.000001489999 |
| -1000000.489999 | 0.000000489999 | 0.000000979999 |
| -1000000.489999 | 0.000000489999 | 0.000000490000 |
| -2000000 | 0.000000000001 | 0.000000000001 |

And so we see the median, the cumulative probability of .5 (where half of the event space is above and half below, i.e. what we "expect"), as .020001 after two plays of this three-possible-outcome coin toss. This is the amount for which half of the sample space is better, half worse. This is what the individual, experiencing horizontal ergodicity to a (necessarily) finite horizon (2 plays in this example), expects to experience, the expectation of "the house" notwithstanding.
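
The tables above can be reproduced in a few lines (a sketch of mine, using Ralph's per-branch figures as given):

```python
from itertools import product

# Ralph's per-play branch values and probabilities for the three-sided coin.
outcomes = [0.51, -0.489999, -1_000_000]
probs    = [0.51,  0.489999,  0.000001]

# All 9 two-play paths: outcomes add, probabilities multiply.
paths = [(a + b, pa * pb)
         for (a, pa), (b, pb) in product(zip(outcomes, probs), repeat=2)]

# Sort best to worst and accumulate probability from the top,
# stopping once half the mass is covered: that value is the median.
paths.sort(key=lambda t: t[0], reverse=True)
total = sum(p for _, p in paths)   # sanity check: should sum to 1

cum, median = 0.0, None
for value, p in paths:
    cum += p
    if cum >= 0.5:
        median = value
        break

print(f"total probability:       {total:.12f}")
print(f"median two-play outcome: {median:.6f}")
```

The run reports a median two-play outcome of .020001, matching the corrected figure Ralph posts in the comments below the article.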

And this is an example of "Fallacies of the Limit" regarding expectations, and capital market calculations are rife with these fallacies: mean-variance, Markowitz-style portfolio allocations and Value at Risk (VaR) calculations are both single-event calculations (erroneously) extrapolated out over many, or infinite, plays or periods, and the same holds for expected growth-optimal strategies which do not take the finite requirement of real life into account.

Consider, say, the earlier-mentioned two-outcome coin toss that pays 1:-1 with p = .51. Typical expected-growth allocations would call for an expected growth-optimal wager of 2p - 1, or 2 x .51 - 1 = .02, i.e. to risk 2% of our capital on such an opportunity so as to be expected growth optimal. But this is never the correct amount; it is only correct in the limit as the number of plays N -> infinity. In fact, at a horizon of one play our expected growth-optimal allocation in this instance is to risk 100%.
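
A small numerical sketch of that contrast (mine, not Ralph's; the grid of candidate fractions f, capped at 0.99 so that log(1 - f) stays defined, is an assumption):

```python
import math
from itertools import product

P = 0.51  # win probability of the 1:-1 coin

def expected_log_growth(f):
    # Classical asymptotic criterion: E[log(1 + f*X)], maximized at f = 2p - 1.
    return P * math.log(1 + f) + (1 - P) * math.log(1 - f)

def median_terminal_wealth(f, horizon):
    # Horizon-specific criterion: the median terminal wealth multiple,
    # found by sorting all 2^horizon paths and accumulating probability
    # from the best path down until half the mass is covered.
    paths = []
    for flips in product((1, -1), repeat=horizon):
        wealth, prob = 1.0, 1.0
        for x in flips:
            wealth *= 1 + f * x
            prob *= P if x == 1 else 1 - P
        paths.append((wealth, prob))
    paths.sort(reverse=True)
    cum = 0.0
    for wealth, prob in paths:
        cum += prob
        if cum >= 0.5:
            return wealth

grid = [i / 100 for i in range(100)]  # f = 0.00 .. 0.99
kelly_f = max(grid, key=expected_log_growth)                         # -> 0.02
one_play_f = max(grid, key=lambda f: median_terminal_wealth(f, 1))   # -> 0.99

print(f"asymptotic growth-optimal f: {kelly_f:.2f}")
print(f"one-play median-optimal f:   {one_play_f:.2f}")
```

The asymptotic criterion picks f = .02 as stated; the one-play median criterion pushes f to the top of the grid, consistent with the claim that the growth-optimal allocation at a horizon of one play is to risk everything.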

Finally, consider our three-outcome coin toss where the coin can land on its side. The Kelly Criterion for determining the fraction of our capital to allocate for expected-growth maximization (which, per Kelly, is to risk the amount that maximizes the probability-weighted outcome) would be to risk 0%, since the probability-weighted outcome of this opportunity is negative.

However, if we correctly use the outcomes and probabilities that occur along the path to the horizon, as illustrated in our example of a horizon of two plays of this three-outcome opportunity, expected growth is maximized at risking 100% of our capital, not 0%.

## Russ Sears writes:

Ok, after a closer look, the point the author is making is that scientists assume probabilities are true based on statistics. But statistics are not pure math, like probability, because they are not infinite; therefore they cannot detect the infinitely small or infinitely large.

But the author assumes that quantum scientists must commit this fallacy and do not understand it. Hence he proposes that thought experiments, or philosophical assumptions of deterministic underpinnings of physics, must hold and should carefully supersede statistical modeling, thereby denying the conscious mind any role in creating a physical world outside itself.

So basically the author accuses others of not understanding the relationship between probability and statistics. He tries to use pure thought to get pure physics, devoid of the necessity of consciousness to exist. Perhaps he does not confuse the terms himself; it would be better written, however, if he used the terminology a 1st-year probability and statistics student learns.

## Jim Sogi adds:

I believe that the number and size of trades at a price, or the lack of density at that price, lead to certain gravitational effects. The other, somewhat unknown, factor is the standing orders at those levels; but the orders and trade density are related.

# Comments

5 Comments so far



The second half of this article has an incorrect calculation which percolates through. I re-posted the corrected one, but it evidently didn’t make it to the web. Here is the correct posting:

I think people are missing the point of the article Dylan puts here. The author of this simple piece is discussing things that are right in my ambit, what I call “Fallacies in the Limit.” The fundamental notion of expectation (the probability-weighted mean outcome), foundational to so much in game theory, is sheer fallacy (what one “expects” is the median of the sorted, cumulative outcomes at the horizon, which is therefore a function of the horizon).

To see this, consider a not-so-fair coin that pays 1:-1 but falls in your favor with a probability of .51. The classical expectation is .02 per play, and after N plays, .02N is what you would expect to make or lose, for player and house alike, by the math of this fallacious approach (and I say fallacious because it does not comport with real life). That is, if I play it one million times, sequentially, I expect to make 20,000; and if a million guys each play it once against a house (2% in the house's favor), the house expects to make 20,000.

And I refer to the former as horizontal ergodicity (I go play it N times), the latter as vertical ergodicity (N guys come play it one time each). But in real life these are NOT equivalent, given the necessarily finite nature of all games, all participants, all opportunities.

To see this, let us return to our coin toss game, but inject a third possible outcome: the coin lands on its side with a probability of one in one million, and an outcome which costs us one million. Now the classically thinking person would never play such a game, the mathematical expectation (in classical terms) being:

.51 x 1 + .489999 x -1 + .000001 x -1,000,000 = -.979999 per play.

A very negative game indeed. Yet the player whose horizon is 1 play expects to make 1 unit on that one play (if I rank all three possible outcomes at one play and take the median, it is a gain of one unit). Similarly, if I rank all 9 possible outcomes after 2 plays, the player, by my calculations, should expect a net gain of .020001 after 2 plays of this three-possible-outcome coin toss, versus the classical expectation of a net loss of -1.959998 (a wager I would have gladly challenged Messrs. Pascal and Huygens with). To see this, consider the 9 possible outcomes of two plays of this game:

| toss 1 | toss 2 | outcome |
|---|---|---|
| 0.51 | 0.51 | 1.02 |
| 0.51 | -0.489999 | 0.020001 |
| 0.51 | -1000000 | -999999.49 |
| -0.489999 | 0.51 | 0.020001 |
| -0.489999 | -0.489999 | -0.979998 |
| -0.489999 | -1000000 | -1000000.489999 |
| -1000000 | 0.51 | -999999.49 |
| -1000000 | -0.489999 | -1000000.489999 |
| -1000000 | -1000000 | -2000000 |

The outcomes are additive. Consider the corresponding probabilities for each branch:

| toss 1 | toss 2 | product |
|---|---|---|
| 0.51 | 0.51 | 0.260100000000 |
| 0.51 | 0.489999 | 0.249899490000 |
| 0.51 | 0.000001 | 0.000000510000 |
| 0.489999 | 0.51 | 0.249899490000 |
| 0.489999 | 0.489999 | 0.240099020001 |
| 0.489999 | 0.000001 | 0.000000489999 |
| 0.000001 | 0.51 | 0.000000510000 |
| 0.000001 | 0.489999 | 0.000000489999 |
| 0.000001 | 0.000001 | 0.000000000001 |

The probabilities along each branch are multiplicative. Combining the 9 outcomes with their probabilities and sorting them, we have:

| outcome | probability | cumulative prob |
|---|---|---|
| 1.02 | 0.260100000000 | 1.000000000000 |
| 0.020001 | 0.249899490000 | 0.739900000000 |
| 0.020001 | 0.249899490000 | 0.490000510000 |
| -0.979998 | 0.240099020001 | 0.240101020000 |
| -999999.49 | 0.000000510000 | 0.000001999999 |
| -999999.49 | 0.000000510000 | 0.000001489999 |
| -1000000.489999 | 0.000000489999 | 0.000000979999 |
| -1000000.489999 | 0.000000489999 | 0.000000490000 |
| -2000000 | 0.000000000001 | 0.000000000001 |

And so we see the median, the cumulative probability of .5 (where half of the event space is above and half below, i.e. what we "expect"), as .020001 after two plays of this three-possible-outcome coin toss. This is the amount for which half of the sample space is better, half worse. This is what the individual, experiencing horizontal ergodicity to a (necessarily) finite horizon (2 plays in this example), expects to experience, the expectation of "the house" notwithstanding.

And this is an example of "Fallacies of the Limit" regarding expectations, and capital market calculations are rife with these fallacies: mean-variance, Markowitz-style portfolio allocations and Value at Risk (VaR) calculations are both single-event calculations (erroneously) extrapolated out over many, or infinite, plays or periods, and the same holds for expected growth-optimal strategies which do not take the finite requirement of real life into account.

Consider, say, the earlier-mentioned two-outcome coin toss that pays 1:-1 with p = .51. Typical expected-growth allocations would call for an expected growth-optimal wager of 2p - 1, or 2 x .51 - 1 = .02, i.e. to risk 2% of our capital on such an opportunity so as to be expected growth optimal. But this is never the correct amount; it is only correct in the limit as the number of plays N -> infinity. In fact, at a horizon of one play our expected growth-optimal allocation in this instance is to risk 100%.

Finally, consider our three-outcome coin toss where the coin can land on its side. The Kelly Criterion for determining the fraction of our capital to allocate for expected-growth maximization (which, per Kelly, is to risk the amount that maximizes the probability-weighted outcome) would be to risk 0%, since the probability-weighted outcome of this opportunity is negative.

However, if we correctly use the outcomes and probabilities that occur along the path to the horizon, as illustrated in our example of a horizon of two plays of this three-outcome opportunity (which is .51, -.489999), expected growth is maximized at risking 100% of our capital, not 0%.

I should point out two things on expectation here which may not be obvious. First, in the limit, as the number of trials N approaches infinity, the classical expectation and my horizon-specific expectation converge (i.e., the classical expectation is the asymptotic case).
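
That convergence is easy to verify exactly for the simple two-outcome coin (my sketch, not Ralph's code), using the binomial distribution of the number of wins in N plays:

```python
from math import comb

P = 0.51  # win probability of the 1:-1 coin

def median_sum(n):
    # Sum of n plays is 2k - n when k tosses win, with binomial probability.
    dist = sorted(
        ((2 * k - n, comb(n, k) * P**k * (1 - P)**(n - k)) for k in range(n + 1)),
        reverse=True)
    # Accumulate probability from the best outcome down to the median.
    cum = 0.0
    for value, prob in dist:
        cum += prob
        if cum >= 0.5:
            return value

for n in (1, 10, 100, 1000):
    print(f"N={n:>5}: median {median_sum(n):>3}, classical expectation {0.02 * n:g}")
```

At N = 1 the median (+1) and the classical expectation (.02) disagree sharply; by N = 100 and N = 1000 the medians (2 and 20) sit on the classical .02N, illustrating the asymptotic convergence described above.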

Secondly, it is the horizon-specific expectation by which living organisms on earth innately operate, as evidenced by their actions.

Ralph, in your single-toss/3-outcomes example, sorting the payoffs of (-1e6, -1, 1) and taking the median gives a value of -1 unit, not +1; there does not seem to be any measure which will return the max payoff of 1 unit as the median.

I still can't follow your sample tables; the only outcomes of each toss are the integer payoffs (-1000000, -1, 1); an outcome of 0.51 cannot happen in your explanation of the game. Using integer outcomes I get a median of zero in the 2-toss example, which is indeed greater than the expected value but doesn't match your figures; am I missing something?

Is probability as useful to physics as flat-earth theory? To suggest that this is so is to confuse the notion of “measure” with the notion of “theory.” Probability is an example of a measure. Probability theory is an example of a theory.

Probability theory is associated with a logic: the "probabilistic logic." Can physics dispense with the probabilistic logic? Not unless it is willing to abandon quantum mechanics, thermodynamics, and information theory, among other well-validated theories.

Rather than anything else, this is an issue of epistemology, and like almost everyone, the author has a wrong understanding of it. Since Rand's epistemology (in this case most notably the theory of concepts) is relatively new, as far as I know no one has explicitly developed a proper theory of probability.

The author seems to be the kind of logical positivist/empiricist (like David/Milton Friedman) who would say that 2 + 2 does not really equal 4, but approximately 4. Using his Toohey-like line of "reasoning," all knowledge could be attacked.

About probability:

Due to the law of causality, everything happens as it should; the result of a flipped coin is determined by the physical forces causing the movement. The notion of probability arises from a lack of knowledge of the causal factors. For the same reason that we have free will and determinism is wrong, we have to deal with imperfect knowledge. To increase the success of our actions, we use the concepts of probability in cases where we do not understand the causal relations.

About the concepts:

First consider mathematics. When a prehistoric man who had freshly started to employ reason in his actions was collecting berries, he created in his mind certain concepts to better understand reality. His first concepts of quantity might've been "few", "a lot", "one", and "two".

He might’ve noticed that certain amounts of berries is possible to divide into two equal groups, while some amounts will leave a leftover. He might’ve noticed that putting “few” of “fews” created “a lot”. Through time, he would notice more precise relationships, such as 2+2=4, and 4+4=16.

Through experience he created in his mind the concepts of numbers. Afterwards, he could apply the concepts (and the relations among them) to anything that satisfied the defining attributes of the concepts. He could take this knowledge learned from counting berries and do calculations with any other physical items. He created arithmetic.

Quantity does not exist per se; it must be a quantity of something. Numbers do not exist on their own; they are just a subjective concept to guide an actor in reality.

As basic as it may sound, this is not a common explanation. Rather than this narrative in the line of Aristotle and Rand, most people (including the author of the article) make the Platonic mistake of treating numbers as existing independently of reality.

The fundamental mistake of the original article is that its author makes the same mistake (the one I displayed with arithmetic) in the field of probability theory. A probability is nothing more than a concept to guide a rational being in action.

The prehistoric man might have noticed patterns in the various amounts of berries on a plant. Due to the lack of knowledge (which, as mentioned, is the reason for introducing probability) he cannot know the exact number of berries on the next plant he will visit. However, through experience he might've noticed that on average a plant has 10 berries, and in 90% of cases it has +/-2. To plan his future actions, he can perform certain calculations using the concepts of probability; he created statistics.

(Particularly in his lecture on this topic on YouTube) there are clearly many other problems (e.g., misunderstanding of game theory or quantum physics), but it all boils down to having a wrong epistemology. It can always be traced back to A =/= A, and having wrong premises always leads to absurd conclusions, like this one.