# Article on Information Theory, shared by Steve Ellison

September 4, 2010

Here is a very interesting article I found on Information Theory:

Excerpts:

The noisier the channel, the more extra information must be added to make error correction possible. And the more extra information is included, the slower the transmission will be. ­[Claude] Shannon showed how to calculate the smallest number of extra bits that could guarantee minimal error–and, thus, the highest rate at which error-free data transmission is possible. But he couldn't say what a practical coding scheme might look like.
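Shannon's limit can be made concrete for the simplest noisy channel. A minimal sketch (a standard textbook formula, not taken from the article itself): the capacity of a binary symmetric channel that flips each bit with probability p is 1 minus the binary entropy of p, and it shrinks as the channel gets noisier.

```python
import math

def bsc_capacity(p):
    """Capacity, in bits per channel use, of a binary symmetric channel
    that flips each transmitted bit with probability p."""
    if p in (0.0, 1.0):
        return 1.0  # a deterministic channel carries one full bit per use
    # binary entropy of p
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1 - h

# The noisier the channel, the lower the maximum error-free data rate:
print(bsc_capacity(0.01))  # a fairly clean channel, capacity near 1
print(bsc_capacity(0.11))  # a noisy channel, capacity near 0.5
```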

Researchers spent 45 years searching for one. Finally, in 1993, a pair of French engineers announced a set of codes–"turbo codes"–that achieved data rates close to Shannon's theoretical limit. The initial reaction was incredulity, but subsequent investigation validated the researchers' claims. It also turned up an even more startling fact: codes every bit as good as turbo codes, which even relied on the same type of mathematical trick, had been introduced more than 30 years earlier, in the MIT doctoral dissertation of Robert Gallager, SM '57, ScD '60. After decades of neglect, Gallager's codes have finally found practical application. They are used in the transmission of satellite TV and wireless data, and chips dedicated to decoding them can be found in commercial cell phones.

---

The codes that Gallager presented in his 1960 doctoral thesis (http://www.rle.mit.edu/rgallager/documents/ldpc.pdf) were an attempt to preserve some of the randomness of Shannon's hypothetical system without sacrificing decoding efficiency. Like many earlier codes, Gallager's used so-called parity bits, which indicate whether some other group of bits have even or odd sums. But earlier codes generated the parity bits in a systematic fashion: the first parity bit might indicate whether the sum of message bits one through three was even; the next parity bit might do the same for message bits two through four, the third for bits three through five, and so on. In Gallager's codes, by contrast, the correlation between parity bits and message bits was random: the first parity bit might describe, say, the sum of message bits 4, 27, and 83; the next might do the same for message bits 19, 42, and 65.
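The contrast between systematic and random parity checks can be sketched in a few lines. This is a toy illustration of my own, not Gallager's actual construction: each parity bit records whether a small, randomly chosen subset of message bits has an even or odd sum.

```python
import random

def make_random_checks(msg_len, n_checks, bits_per_check, seed=0):
    """Choose, for each parity bit, a random subset of message-bit positions
    (the random correlations described above; seeded for reproducibility)."""
    rng = random.Random(seed)
    return [rng.sample(range(msg_len), bits_per_check) for _ in range(n_checks)]

def parity_bits(message, checks):
    """Each parity bit is the sum of its selected message bits, mod 2."""
    return [sum(message[i] for i in check) % 2 for check in checks]

message = [1, 0, 1, 1, 0, 0, 1, 0]
checks = make_random_checks(len(message), n_checks=4, bits_per_check=3)
print(checks)                        # which message bits each parity bit covers
print(parity_bits(message, checks))  # the parity bits themselves
```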

Gallager was able to demonstrate mathematically that for long messages, his "pseudo-random" codes were capacity-approaching. "Except that we knew other things that were capacity-approaching also," he says. "It was never a question of which codes were good. It was always a question of what kinds of decoding algorithms you could devise."

That was where Gallager made his breakthrough. His codes used iterative decoding, meaning that the decoder would pass through the data several times, making increasingly refined guesses about the identity of each bit. If, for example, the parity bits described triplets of bits, then reliable information about any two bits might convey information about a third. Gallager's iterative-decoding algorithm is the one most commonly used today, not only to decode his own codes but, frequently, to decode turbo codes as well. It has also found application in the type of statistical reasoning used in many artificial-intelligence systems.

"Iterative techniques involve making a first guess of what a received bit might be and giving it a weight according to how reliable it is," says [David] Forney. "Then maybe you get more information about it because it's involved in parity checks with other bits, and so that gives you an improved estimate of its reliability." Ultimately, Forney says, the guesses should converge toward a consistent interpretation of all the bits in the message.
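A single step of the belief-update idea Forney describes can be written down directly. This is only a one-step sketch, not the full iterative algorithm: if a parity check says three bits have an even sum, then probabilistic beliefs about two of them imply a belief about the third.

```python
def xor_prob(p1, p2):
    """Probability that b1 XOR b2 = 1, given independent bits
    with P(b1 = 1) = p1 and P(b2 = 1) = p2."""
    return p1 * (1 - p2) + (1 - p1) * p2

# If a parity check says b1 + b2 + b3 is even, then b3 = b1 XOR b2,
# so reliable guesses about b1 and b2 convey information about b3:
p1, p2 = 0.9, 0.8        # fairly confident both bits are 1
p3 = xor_prob(p1, p2)    # implied P(b3 = 1)
print(p3)                # 0.9*0.2 + 0.1*0.8 = 0.26, so b3 is probably 0
```

Note that a completely uncertain neighbor (probability 0.5) yields a completely uncertain estimate, which is why the iterations must start from the received bits' own reliabilities.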

## Jim Sogi writes:

I think that is why many market price moves come in threes, as an error-correction device, like the recent triple bottom.

## Jon Longtin writes:

Very interesting article on the history of encoding schemes.

One interesting thing to note is that if you take even a simple information stream and encode it with any of the numerous algorithms available, the encoded version of the information is typically unintelligible to us as humans in any way, shape, or form.

'hotdog', for example, might encode to 'b$7FQ1!0PrUfR%gPeTr:$d'

These encoding algorithms work by rearranging the bits of the original word, looking for patterns, and applying mathematical operators on the bit stream.
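A hedged illustration of this point (the comment above does not name a specific algorithm, so SHA-256 is my own choice of example): running even a familiar word through a standard one-way hash produces output with no visible resemblance to the input.

```python
import base64
import hashlib

# Hash a familiar word and render the raw bytes as printable text.
digest = hashlib.sha256(b"hotdog").digest()
encoded = base64.b64encode(digest).decode()

print(encoded)  # nothing about this string suggests "hotdog"
```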

Many of the financial indicators and short-term predictive tools that abound today are based on some combination of the prior price history, but often in a relatively simplistic way. For example, although the weightings may change for various averages, the time sequence is preserved: a sequence of several prices over some period is analyzed in the same order in which it was created.

Perhaps, however, there is some form of intrinsic encoding going on in the final price history of an instrument. For example, it could be reasoned that news and information do not propagate at a uniform rate, or that different decision makers will wait different amounts of time before reacting to a price change or news. The result might be that the final price history that actually emerges, and that everyone sees and acts upon, is encoded somehow from simpler, more predictable events, but the encoding obfuscates those trends in the final price history.

Maybe it is no coincidence that Jim Simons of Renaissance Technologies did code breaking for the NSA early on in his career.

Unfortunately, good hashing codes, particularly those designed to obfuscate, such as security and encryption algorithms, are notoriously difficult to reverse-engineer. (The encrypted password file on Unix machines was, for many years, freely visible to all users on the machine, because cracking it would simply have taken too long for machines of the time.) On the other hand, cracking algorithms often require little knowledge of the original encoding scheme, instead simply taking a brute-force approach. Thus if there were such an underlying encoding at work in financial instruments (and the encoding might be unique to each instrument), then perhaps there is some sliver of hope that it might be unearthed, given time, a powerful machine, and some clever sleuths.
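The brute-force idea can be sketched in a few lines. This is purely illustrative (a toy search over short lowercase strings, with SHA-256 standing in for the unknown scheme): given only a hash value and no knowledge of how the plaintext was produced, simply try candidates until one matches.

```python
import hashlib
import string
from itertools import product

def brute_force(target_hex, alphabet=string.ascii_lowercase, max_len=4):
    """Exhaustively hash candidate strings, shortest first, until one
    matches the target digest; return the match, or None if not found."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hex:
                return guess
    return None

# Knowing nothing but the digest, the search still recovers the plaintext:
target = hashlib.sha256(b"dog").hexdigest()
print(brute_force(target))  # prints: dog
```

The catch, of course, is the exponential cost: each extra character multiplies the search space by the alphabet size, which is exactly why the Unix password file could sit in plain view.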

For all I know this has been explored ad nauseam both academically and practically, but it does get one wondering …
