Pure random sequences are difficult to generate artificially with computers or algorithms. Some random number generators use natural phenomena, such as a flame, to produce a random sequence. Looking at ocean waves or sand dunes, it is fair to think of the patterns as random in the sense that one cannot predict where a particular peak or valley will be at a particular time and place. Sailors know this all too well. However, there is an underlying process, with its own set of internal rules, that generates the so-called random pattern. For waves it is the dynamic between the wind, the water surface, its surface tension and viscosity, the length of the fetch, and the speed of the wind. Even with such a seemingly random pattern it is possible to predict certain aspects of wave generation, such as size and direction, given information on wind speed, direction, and duration.

Applying the idea to markets: if one could identify the underlying functions, would it not be possible to have some predictive ability about wave size, or in markets about volatility, as to size, time, and place, just as is possible in wave prediction? The Navy and NOAA have spent considerable sums on creating wave models, used to time attacks and landings and to assess effects on shipping, oil rigs, and other industrial needs. Weather prediction is one of the main frontiers of computer science and modeling, due to the large number of people affected and the risks to life and property. Surfers happen to benefit, being able to predict wave timing, arrival, and size quite accurately.

What are the winds that drive the markets? Fed stimulus, currency moves, economic forces, upward drift, regulation, bank policies.

Jeff Watson writes:

It is much more difficult than one would think, to generate truly random numbers.

Gary Rogan writes:

While there is no mathematical proof, as far as we know the digits of pi, while of course fully deterministic, also form a truly normal distribution.

Orson Terrill writes:

How would that be? It seems to me that the 10 digits (0 to 9) would merely have an irregular distribution at any stopping point, but would approach equal numbers of observations as the number of digits observed approaches infinity. Therefore, the digits would form a uniform distribution, no?…


   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  0.000   2.000   4.000   4.443   7.000   9.000

691 decimal points.
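The same tally can be sketched in Python without any external source of digits. The snippet below is a sketch, not from the thread: it computes roughly 1,000 decimal digits with Machin's formula in plain integer arithmetic (a standard technique), then summarizes their distribution much as the R `summary()` output above does.

```python
# Sketch: compute ~1000 decimal digits of pi and tally them.
from collections import Counter

def arctan_inv(x, one):
    """Integer arctan(1/x), scaled by `one` (truncated Taylor series)."""
    total, x2, power, n, sign = 0, x * x, one // x, 1, 1
    while power:
        total += sign * (power // n)
        power //= x2
        n += 2
        sign = -sign
    return total

ndigits = 1000
one = 10 ** (ndigits + 10)               # 10 guard digits for truncation error
# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi_scaled = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
digits = str(pi_scaled)[1:1 + ndigits]   # drop the leading "3", keep fraction

counts = Counter(digits)
print(sorted(counts.items()))            # each digit lands near 100, not exactly
mean = sum(int(d) for d in digits) / ndigits
print(round(mean, 3))                    # drifts toward 4.5 as n grows
```

For the first 691 digits this reproduces a mean close to Orson's 4.443; a uniform distribution over 0–9 would converge to 4.5.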

Mr. Isomorphism writes: 

Getting arbitrarily long π is pretty easy with the Berkeley Calculator.

$ echo "scale=2222; a(1)*4" | bc -l > pi.2222

(    a(1) == arctan(1) == quarter-circle     )

then in R:
pi.2222 <- scan('pi.2222', 'character')
slice.pi <- strsplit(pi.2222, "")
table(unlist(slice.pi))   # tally each character, including the leading "3" and the "."

  .   0   1   2   3   4   5   6   7   8   9
  1 199 229 230 204 219 230 223 217 227 245

One is then limited only by patience….

It's unclear what a 'normal distribution' of digits would mean, since the normal is defined on (−∞,+∞) and most of its mass lies within [−3,+3] … it's not defined on {0,1,2,3,4,5,6,7,8,9}…. I think that π is actually a normal number, which means the digits are distributed uniformly.

Another nice artefact of using bc -l is that with obase=2, obase=16, etc. one can play with the question a bit more, as 10 digits is not sacrosanct. The binary expansion of pi should have a 50/50 distribution of 0's and 1's if pi's digits are evenly distributed in every base, and higher bases (imagine base 12837687622234) would count what appear as "longer patterns" in base 10. I believe it's this way of thinking that leads people to say, e.g., that the works of Shakespeare are encoded in pi.
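The binary tally can also be sketched in Python rather than bc. The block below is a sketch using the same standard integer-arithmetic Machin formula (not from the thread): it produces about 2,000 bits of pi and counts the 0's and 1's, which should come out roughly balanced if pi is normal in base 2.

```python
# Sketch: compute ~2000 binary digits of pi and count 0's vs 1's.
def arctan_inv(x, one):
    """Integer arctan(1/x), scaled by `one` (truncated Taylor series)."""
    total, x2, power, n, sign = 0, x * x, one // x, 1, 1
    while power:
        total += sign * (power // n)
        power //= x2
        n += 2
        sign = -sign
    return total

bits = 2000
one = 1 << (bits + 32)            # 32 guard bits for truncation error
# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi_scaled = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
frac_bits = bin(pi_scaled)[2:]    # starts with "11" (the integer part, 3)
frac_bits = frac_bits[2:2 + bits] # drop the integer part, keep `bits` digits

zeros, ones = frac_bits.count("0"), frac_bits.count("1")
print(zeros, ones)                # roughly balanced, as normality would predict
```

The first fractional bits come out 001001000011…, matching pi's well-known hexadecimal expansion 3.243F6A….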
