# Probability is as Useful to Physics as Flat-Earth Theory, from Dylan Distasio

February 17, 2016 |

I thought this was an interesting opinion piece from David Deutsch, who has some creative ideas in physics theory:

## Gibbons Burke writes:

String theory, or more particularly, M-theory, which represents a current SWAG (Scientific Wild-Assed Guess) at the grand-unifying-theory-of-everything, requires some eleven dimensions to make it all work out.

Our mortal, finite, deterministic mental capacities can wrap our space-time-evolved brains around four or five, with instruments perhaps a few more.

Perhaps randomness is how we get a handle on behavior which defies rational explanation in our four-dimensional flatland of what seems to be the 'natural' material world; if there are eleven or more dimensions, then perhaps what seems random for us has rules beyond our ken which govern the dynamics of the other invisible, shall we say, 'super-natural', dimensions.

## Ralph Vince writes:

I think people are missing the point of the article Dylan puts here. The author of this simple piece is discussing things that are right in my ambit, what I call "Fallacies in the Limit." The fundamental notion of expectation (the probability-weighted mean outcome), foundational to so much in game theory, is sheer fallacy (what one "expects" is the median of the sorted, cumulative outcomes at the horizon, which is therefore a function of the horizon).

To see this, consider a not-so-fair coin that pays 1:-1 but falls in your favor with a probability of .51. The classical expectation is .02 per play, and after N plays, .02N is what you would expect to make or lose for player and house, per the math of this fallacious approach - and I say fallacious as it does not comport to real life. That is, if I play it one million times, sequentially, I expect to make 20,000, and if a million guys play it against a house, simultaneously (2% in the house's favor), the house expects to make 20,000.

And I refer to the former as horizontal ergodicity (I go play it N times), the latter as vertical ergodicity (N guys come play it one time each). But in real-life, these are NOT equivalent, given the necessarily finite nature of all games, all participants, all opportunities.
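Vince's horizon-dependent "expectation" is easy to check numerically. The sketch below is my own illustration, not code from the post: it enumerates the 1:-1, p = .51 coin exactly via the binomial distribution and compares the classical expectation .02N with the median cumulative outcome at several horizons.

```python
# Classical expectation vs. median cumulative outcome for a 1:-1 coin
# that wins with probability p = 0.51 (an illustration of the argument above).
from math import comb

def median_outcome(n, p=0.51):
    """Median net outcome after n plays: wins pay +1, losses pay -1."""
    cum = 0.0
    for w in range(n + 1):                       # w = number of wins
        cum += comb(n, w) * p**w * (1 - p)**(n - w)
        if cum >= 0.5:                           # first w where cum. prob crosses 1/2
            return 2 * w - n                     # net outcome = wins - losses

for n in (1, 3, 10, 100, 1000):
    print(n, round(0.02 * n, 2), median_outcome(n))
```

At a horizon of one play the median outcome is +1, matching the claim that the one-play player "expects" a win, while at long horizons the median converges toward the classical .02N.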

To see this, let us return to our coin toss game, but inject a third possible outcome — the coin lands on its side with a probability of one-in-one-million and an outcome which costs us one million. Now the classically thinking person would never play such a game, the mathematical expectation (in classical terms) being:

.51 × 1 + .489999 × (-1) + .000001 × (-1,000,000) = -.979999 per play.

A very negative game indeed. Yet, for the player whose horizon is 1 play, he expects to make 1 unit on that one play (if I rank all three possible outcomes at one play and take the median, it is a gain of one unit). Similarly, if I rank all 9 possible outcomes after 2 plays, the player, by my calculations, should expect to make a net gain of .0592146863 after 2 plays of this three-possible-outcome coin toss, versus the classical expectation net loss of -2.939997 (a wager I would have gladly challenged Messrs. Pascal and Huygens with). To see this, consider the 9 possible outcomes of two plays of this game:

| toss 1 | toss 2 | outcome |
| --- | --- | --- |
| 0.51 | 0.51 | 1.02 |
| 0.51 | -0.489999 | 0.020001 |
| 0.51 | -1000000 | -999999.49 |
| -0.489999 | 0.51 | 0.020001 |
| -0.489999 | -0.489999 | -0.979998 |
| -0.489999 | -1000000 | -1000000.489999 |
| -1000000 | 0.51 | -999999.49 |
| -1000000 | -0.489999 | -1000000.489999 |
| -1000000 | -1000000 | -2000000 |

The outcomes are additive. Consider the corresponding probabilities for each branch:

| toss 1 | toss 2 | product |
| --- | --- | --- |
| 0.51 | 0.51 | 0.260100000000 |
| 0.51 | 0.489999 | 0.249899490000 |
| 0.51 | 0.000001 | 0.000000510000 |
| 0.489999 | 0.51 | 0.249899490000 |
| 0.489999 | 0.489999 | 0.240099020001 |
| 0.489999 | 0.000001 | 0.000000489999 |
| 0.000001 | 0.51 | 0.000000510000 |
| 0.000001 | 0.489999 | 0.000000489999 |
| 0.000001 | 0.000001 | 0.000000000001 |

The probability of each path is the product of its branch probabilities. Combining the 9 outcomes and their probabilities, and sorting them, we have:

| outcome | probability | cumulative prob |
| --- | --- | --- |
| 1.02 | 0.260100000000 | 1.000000000000 |
| 0.999999 | 0.249899490000 | 0.739900000000 |
| 0.020001 | 0.249899490000 | 0.490000510000 |
| -0.979998 | 0.240099020001 | 0.240101020000 |
| -999999.49 | 0.000000510000 | 0.000001999999 |
| -999999.49 | 0.000000510000 | 0.000001489999 |
| -1000000.489999 | 0.000000489999 | 0.000000979999 |
| -1000000.489999 | 0.000000489999 | 0.000000490000 |
| -2000000 | 0.000000000001 | 0.000000000001 |

And so we see the median, the cumulative probability of .5 (where half of the event space is above, half below — what we "expect"), as (linearly interpolated between the outcomes of .999999 and .020001) .0592146863 after two plays of this three-possible-outcome coin toss. This is the amount wherein half of the sample space is better, half is worse. This is what the individual, experiencing horizontal ergodicity to a (necessarily) finite horizon (2 plays in this example), expects to experience, the expectation of "the house" notwithstanding.

And this is an example of "Fallacies of the Limit" regarding expectations, but capital market calculations are rife with these fallacies. Mean-Variance, Markowitz-style portfolio allocations and Value at Risk (VaR) calculations are both single-event calculations (erroneously) extrapolated out to many, or infinite, plays or periods, and similarly, expected growth-optimal strategies do not take the finite requirement of real life into account.

Consider, say, the earlier-mentioned two-outcome coin toss that pays 1:-1 with p = .51. Typical expected growth allocations would call for an expected growth-optimal wager of 2p - 1, or 2 × .51 - 1 = .02, that is, to risk 2% of our capital on such an opportunity so as to be expected growth optimal. But this is never the correct amount — it is only correct in the limit as the number of plays N -> infinity. In fact, at a horizon of one play our expected growth-optimal allocation in this instance is to risk 100%.

Finally, consider our three-outcome coin toss where the coin can land on its side. The Kelly Criterion for determining the fraction of our capital to allocate in expected growth-optimal maximization (which, according to Kelly, means risking the amount which maximizes the probability-weighted outcome) would be to risk 0% (since the probability-weighted outcome is negative in this opportunity).

However, we correctly use the outcomes and probabilities that occur along the path to the outcome illustrated in our example of a horizon of two plays of this three-outcome opportunity.

## Russ Sears writes:

OK, after a closer look: the point the author is making is that scientists assume probabilities are truth, based on statistics. But statistics are not pure math, like probability, because they are not infinite. Therefore they cannot detect the infinitely small or infinitely large.

But the author assumes that quantum scientists must commit this fallacy and do not understand it. Hence he proposes that thought experiments or philosophical assumptions of deterministic underpinnings of physics must hold, and should carefully supersede statistical modeling, hence denying the conscious mind any role in creating a physical world outside itself.

So basically the author accuses others of not understanding the superiority of probability over statistics. So he tries to use pure thought to get pure physics, devoid of the necessity of consciousness to exist. Perhaps he does not confuse the terms himself. It would be better written, however, if he used the terminology a first-year probability and statistics student learns.

I believe that the number and size of trades at a price, or the lack of density at that price, lead to certain gravitational effects. The other somewhat unknown factor is the standing orders at those levels, but the orders and trade density are related.


1. Ralph Vince on February 17, 2016 10:47 pm

The second half of this article has an incorrect calculation which percolates through. I re-posted the corrected one, but it evidently didn’t make it to the web. Here is the correct posting:

I think people are missing the point of the article Dylan puts here. The author of this simple piece is discussing things that are right in my ambit, what I call “Fallacies in the Limit.” The fundamental notion of expectation (the probability-weighted mean outcome), foundational to so much in game theory, is sheer fallacy (what one “expects” is the median of the sorted, cumulative outcomes at the horizon, which is therefore a function of the horizon).

To see this, consider a not-so-fair coin that pays 1:-1 but falls in your favor with a probability of .51. The classical expectation is .02 per play, and after N plays, .02N is what you would expect to make or lose for player and house, per the math of this fallacious approach - and I say fallacious as it does not comport to real life. That is, if I play it one million times, sequentially, I expect to make 20,000, and if a million guys play it against a house, simultaneously (2% in the house’s favor), the house expects to make 20,000.

And I refer to the former as horizontal ergodicity (I go play it N times), the latter as vertical ergodicity (N guys come play it one time each). But in real-life, these are NOT equivalent, given the necessarily finite nature of all games, all participants, all opportunities.

To see this, let us return to our coin toss game, but inject a third possible outcome — the coin lands on its side with a probability of one-in-one-million and an outcome which costs us one million. Now the classically thinking person would never play such a game, the mathematical expectation (in classical terms) being:

.51 × 1 + .489999 × (-1) + .000001 × (-1,000,000) = -.979999 per play.

A very negative game indeed. Yet, for the player whose horizon is 1 play, he expects to make 1 unit on that one play (if I rank all three possible outcomes at one play and take the median, it is a gain of one unit). Similarly, if I rank all 9 possible outcomes after 2 plays, the player, by my calculations, should expect to make a net gain of .020001 after 2 plays of this three-possible-outcome coin toss, versus the classical expectation net loss of -2.939997 (a wager I would have gladly challenged Messrs. Pascal and Huygens with). To see this, consider the 9 possible outcomes of two plays of this game:

| toss 1 | toss 2 | outcome |
| --- | --- | --- |
| 0.51 | 0.51 | 1.02 |
| 0.51 | -0.489999 | 0.020001 |
| 0.51 | -1000000 | -999999.49 |
| -0.489999 | 0.51 | 0.020001 |
| -0.489999 | -0.489999 | -0.979998 |
| -0.489999 | -1000000 | -1000000.489999 |
| -1000000 | 0.51 | -999999.49 |
| -1000000 | -0.489999 | -1000000.489999 |
| -1000000 | -1000000 | -2000000 |

The outcomes are additive. Consider the corresponding probabilities for each branch:

| toss 1 | toss 2 | product |
| --- | --- | --- |
| 0.51 | 0.51 | 0.260100000000 |
| 0.51 | 0.489999 | 0.249899490000 |
| 0.51 | 0.000001 | 0.000000510000 |
| 0.489999 | 0.51 | 0.249899490000 |
| 0.489999 | 0.489999 | 0.240099020001 |
| 0.489999 | 0.000001 | 0.000000489999 |
| 0.000001 | 0.51 | 0.000000510000 |
| 0.000001 | 0.489999 | 0.000000489999 |
| 0.000001 | 0.000001 | 0.000000000001 |

The probability of each path is the product of its branch probabilities. Combining the 9 outcomes and their probabilities, and sorting them, we have:

| outcome | probability | cumulative prob |
| --- | --- | --- |
| 1.02 | 0.260100000000 | 1.000000000000 |
| 0.020001 | 0.249899490000 | 0.739900000000 |
| 0.020001 | 0.249899490000 | 0.490000510000 |
| -0.979998 | 0.240099020001 | 0.240101020000 |
| -999999.49 | 0.000000510000 | 0.000001999999 |
| -999999.49 | 0.000000510000 | 0.000001489999 |
| -1000000.489999 | 0.000000489999 | 0.000000979999 |
| -1000000.489999 | 0.000000489999 | 0.000000490000 |
| -2000000 | 0.000000000001 | 0.000000000001 |

And so we see the median, the cumulative probability of .5 (where half of the event space is above, half below — what we “expect”), as .020001 after two plays of this three-possible-outcome coin toss. This is the amount wherein half of the sample space is better, half is worse. This is what the individual, experiencing horizontal ergodicity to a (necessarily) finite horizon (2 plays in this example), expects to experience, the expectation of “the house” notwithstanding.
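The corrected table can be reproduced mechanically. The following sketch is my own check, not Vince's code: it takes the per-branch values exactly as given above, adds outcomes and multiplies probabilities along each of the 9 paths, sorts best-first, and reads off the outcome at which the cumulative probability crosses .5.

```python
# Reproduce the corrected two-play table: outcomes add along a path,
# probabilities multiply; the median sits where cumulative prob crosses 0.5.
from itertools import product

# Per-branch (outcome, probability) pairs exactly as in the table above.
branches = [(0.51, 0.51), (-0.489999, 0.489999), (-1000000.0, 0.000001)]

def median_after(n_plays):
    paths = []
    for combo in product(branches, repeat=n_plays):
        outcome = sum(o for o, _ in combo)       # outcomes are additive
        prob = 1.0
        for _, p in combo:
            prob *= p                            # probabilities are multiplicative
        paths.append((outcome, prob))
    paths.sort(reverse=True)                     # best outcome first
    cum = 0.0
    for outcome, prob in paths:
        cum += prob
        if cum >= 0.5:
            return outcome

print(median_after(2))                           # approximately 0.020001
```

The cumulative probability passes .5 inside the 0.020001 row, so no interpolation is needed at this horizon.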

And this is an example of “Fallacies of the Limit” regarding expectations, but capital market calculations are rife with these fallacies. Mean-Variance, Markowitz-style portfolio allocations and Value at Risk (VaR) calculations are both single-event calculations (erroneously) extrapolated out to many, or infinite, plays or periods, and similarly, expected growth-optimal strategies do not take the finite requirement of real life into account.

Consider, say, the earlier-mentioned two-outcome coin toss that pays 1:-1 with p = .51. Typical expected growth allocations would call for an expected growth-optimal wager of 2p - 1, or 2 × .51 - 1 = .02, that is, to risk 2% of our capital on such an opportunity so as to be expected growth optimal. But this is never the correct amount — it is only correct in the limit as the number of plays N -> infinity. In fact, at a horizon of one play our expected growth-optimal allocation in this instance is to risk 100%.
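One way to make the one-play claim concrete (my construction, which may differ from Vince's exact formulation) is to pick the fraction f of capital that maximizes the median terminal wealth after n plays of the 1:-1, p = .51 coin. At n = 1 the median result is a win, so maximizing (1 + f) pushes f to 100%; as n grows, the optimizer falls toward the asymptotic 2p - 1 = .02.

```python
# Fraction of capital maximizing MEDIAN terminal wealth after n plays of a
# 1:-1 coin with win probability p = 0.51 (a sketch, not Kelly's derivation).
from math import comb

def median_wins(n, p=0.51):
    """Smallest w with P(wins <= w) >= 1/2 under Binomial(n, p)."""
    cum = 0.0
    for w in range(n + 1):
        cum += comb(n, w) * p**w * (1 - p)**(n - w)
        if cum >= 0.5:
            return w

def best_fraction(n, p=0.51, steps=1000):
    w = median_wins(n, p)
    best_f, best_val = 0.0, 0.0
    for i in range(steps + 1):                   # grid search over f in [0, 1]
        f = i / steps
        val = (1 + f)**w * (1 - f)**(n - w)      # wealth at the median win count
        if val > best_val:
            best_f, best_val = f, val
    return best_f

print(best_fraction(1))                          # -> 1.0 (risk everything)
print(best_fraction(1000))                       # -> 0.02 (the asymptotic answer)
```

The grid search is deliberately naive; the point is only that the optimizer depends on the horizon, not on the limiting formula alone.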

Finally, consider our three-outcome coin toss where the coin can land on its side. The Kelly Criterion for determining the fraction of our capital to allocate in expected growth-optimal maximization (which, according to Kelly, means risking the amount which maximizes the probability-weighted outcome) would be to risk 0% (since the probability-weighted outcome is negative in this opportunity).

However, we correctly use the outcomes and probabilities that occur along the path to the outcome illustrated in our example of a horizon of two plays of this three-outcome opportunity (which is .51, -.489999), which is (expected-growth) maximized at risking 100% of our capital, not 0%.

I should point out two things on expectation here which may not be obvious. First, in the limit, as the number of trials N approaches infinity, the classical expectation and my horizon-specific expectation converge (i.e., the classical expectation is asymptotic).

Secondly, it is the horizon-specific expectation which living organisms on earth innately operate by, as evidenced by their actions.

2. MJ Kelly on February 18, 2016 12:25 am

Ralph, in your single-toss/3-outcomes example, sorting the payoffs of (-1e6, -1, 1) and taking the median gives a value of -1 unit, not +1. There does not seem to be any measure which will return the max payoff of 1 unit as the median.

3. MJ Kelly on February 20, 2016 7:43 am

I still can't follow your sample tables; the only outcomes of each toss are integer payoffs (-1000000, -1, 1); an outcome of 0.51 cannot happen in your explanation of the game… Using integer outcomes I get a median of zero in the 2-toss example, which is indeed greater than the expected value but doesn't match your figures. Am I missing something?
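MJ Kelly's figure checks out under his reading of the game. This sketch is my own illustration: using the raw integer payoffs (1, -1, -1000000) with the stated probabilities, the same sort-and-accumulate procedure gives a two-toss median of 0.

```python
# Two-toss median with INTEGER payoffs per toss, as MJ Kelly reads the game.
from itertools import product

branches = [(1, 0.51), (-1, 0.489999), (-1000000, 0.000001)]

paths = []
for (o1, p1), (o2, p2) in product(branches, repeat=2):
    paths.append((o1 + o2, p1 * p2))             # outcomes add, probs multiply

paths.sort(reverse=True)                         # best outcome first
cum = 0.0
for outcome, prob in paths:
    cum += prob
    if cum >= 0.5:                               # cumulative prob crosses 1/2
        break

print(outcome)                                   # -> 0
```

The discrepancy with Vince's .020001 comes entirely from whether each toss contributes its raw payoff or its probability-weighted payoff.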

4. Terry Oldberg on February 28, 2016 1:13 am

Is probability as useful to physics as flat-earth theory? To suggest that this is so is to confuse the notion of “measure” with the notion of “theory.” Probability is an example of a measure. Probability theory is an example of a theory.

Probability theory is associated with a logic: the “probabilistic logic.” Can physics dispense with the probabilistic logic? Not unless it is willing to abandon quantum mechanics, thermodynamics and information theory among other well validated theories.

5. Simon on March 5, 2016 12:15 am

Rather than anything else, this is an issue of epistemology — and like almost everyone, the author has a wrong understanding of it. Since Rand's epistemology (in this case most notably the theory of concepts) is relatively new, as far as I know no one has explicitly developed a proper theory of probability.

The author seems to be the kind of logical positivist/empiricist (like David/Milton Friedman) who would say that 2 + 2 does not really equal 4, but approximately 4. Using his Toohey-like line of “reasoning”, all knowledge could be attacked.

Due to the law of causality, everything happens as it should; the result of a flipped coin is determined by the physical forces causing the movement. The notion of probability arises because of lack of knowledge of the causal factors. For the same reason that we do have free will and determinism is wrong, we have to deal with imperfect knowledge. To increase the success of our actions, we use the concepts of probability in cases where we do not understand the causal relations.

First consider mathematics. When a prehistoric man who had freshly started to employ reason in his actions was collecting berries, he created in his mind certain concepts to better understand reality. His first concepts about quantities might’ve been “few”, “a lot”, “one”, and “two”.

He might’ve noticed that certain amounts of berries are possible to divide into two equal groups, while some amounts will leave a leftover. He might’ve noticed that putting a “few” of “fews” together created “a lot”. Through time, he would notice more precise relationships, such as 2 + 2 = 4 and 4 × 4 = 16.

Through experience he created in his mind the concepts of numbers. Afterwards, he could apply the concepts (and relations among them) to forms that satisfy the defining attributes of the concepts. He could take this knowledge learned from counting berries and do calculations with any other physical items. He created arithmetic.

Quantity does not exist per se; it must be quantity of something. Numbers do not exist, they are just a subjective concept to guide an actor in reality.

As basic as it may sound, this is not a common explanation. Rather than this narrative in the line of Aristotle-Rand, most people (including the author of the article) make the Platonic mistake of treating numbers as existing independently of reality.

The fundamental mistake of the original article is that the author makes the same mistake (the one I displayed with arithmetic) in the field of probability theory. The only thing that a probability is, is a concept to guide a rational being in action.

The prehistoric man might have noticed patterns in the various amounts of berries on a plant. Due to lack of knowledge (which, as mentioned, is the cause of introducing probability) he cannot know the exact number of berries on the next plant he will go to. However, through experience he might’ve noticed that, on average, a plant has 10 berries, and in 90% of cases it has +/-2. To plan his actions in the future, he can perform certain calculations using the concepts of probability; he created statistics.

There are clearly many other problems (particularly in his lecture on this topic on YouTube), e.g. misunderstanding of game theory or quantum physics, but it all boils down to having a wrong epistemology. It can always be traced back to A =/= A, and having wrong premises always leads to absurd conclusions, like this one.