Bayesian Methods, by James Sogi

October 11, 2006

The debate over the relative merits of classical predictive statistical analysis and Bayesian analysis, as applied to markets when you have a prior probability computed for a given time frame, is whether it is better to exit at the optimal time determined at the entry of the trade, or to update your probabilities and trade on the arrival of new information (new ticks, changes in price, news, or announcements) while the trade is pending. Classical statistical theory asserts that you have to trade the original probabilities; altering course creates the danger of Bacon's "switches" and diminishes the favorable edge. The prior distribution has ups and downs in its returns, but the summed probabilities will be positive over the long run, and trying to short-run this distribution reduces the overall return. The Bayesian theory argues that adjusting the position during the trade as the new information arrives can increase the probabilities and returns and avoid the "switches." In practice the former seems to be beating the latter, but this may be due to lurking variables in execution. This could be tested easily enough on historical data; the problem in testing is which parameters to use for the posterior criteria.

Thomas Leonard and John Hsu's book Bayesian Methods, from the Cambridge Series in Statistical and Probabilistic Mathematics, has understandable definitions of the concepts for the practitioner and the theoretician. The Bayesian paradigm investigates the inductive modeling process, where inductive thought and data analysis are needed to develop and check plausible models. Indeed, one of the main reasons for the spec list is inductive thought to develop models and to test them. Mathematics and deductive reasoning are then used to test those models. Too much concentration on deduction can reduce insight, and too much concentration on induction can reduce focus. An iterative inductive/deductive modeling process has been suggested. Bayes' Theorem states generally that posterior information equals prior information plus sampling information.
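As a toy illustration of that last statement (the Beta prior, the Bernoulli tick model, and all of the numbers below are assumptions made for this sketch, not anything from the original discussion), here is how a prior on the probability of an up move might be combined with new tick data to give a posterior:

# Illustrative Bayesian update: posterior = prior + sampling information
# Prior belief about p = P(up move), expressed as a Beta(a, b) distribution
a <- 6; b <- 4                        # assumed prior "wins" and "losses"

# New sampling information arriving while the trade is pending
new_ticks <- c(1, 1, 0, 1, 1, 0, 1)   # 1 = up tick, 0 = down tick (made-up data)

# Conjugate Beta-Bernoulli update
a_post <- a + sum(new_ticks)
b_post <- b + sum(1 - new_ticks)

cat("Prior mean of p:    ", a / (a + b), "\n")
cat("Posterior mean of p:", a_post / (a_post + b_post), "\n")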

The Expected Utility Hypothesis (EUH), a microeconomic procedure, helps make rational decisions about money. It might be used as a model to quantify decision making and risk, as an addition or alternative to Dr. McDonnell's risk formula, by considering the choices of the trader or client relative to the statistics to determine whether the amount at risk and the decision framework being used are rational or will lead to losses or lower returns for the given probabilities. This work parallels the work of Tversky and Kahneman, but is quantified in Bayesian terms. The basic idea is that people place a premium on certainty, which leads to irrational decisions about risk and to more losses than there ought to be. This is a good quantification of the gambler versus speculator distinction just discussed: the gambler's probability expectation is negative, while the speculator's probability expectation is positive. Using examples such as the St. Petersburg Paradox, Allais' Paradox, and the risk aversion paradox, the EUH can be used to make better decisions. Some seek the premium for certainty and fall into the trap known as the "Dutch Book," resulting in certain losses over a series of iterations. Formally, does your choice satisfy a utility function U such that for any two prospects P1 and P2, P1 <= P2 if and only if:

E(U(X) | P1) <= E(U(X) | P2)

The EUH measures whether you choose a positive but more random expectation over a more certain but lesser return. The St. Petersburg paradox is a good example. A fair coin is tossed repeatedly until a head is obtained for the first time. If the first head is obtained after n tosses, you receive a reward of 2^n dollars. What certain reward would you accept as equivalent to the random reward? The paradox occurs because the expected winnings are infinite, but most people would accept 6 or 8 dollars. The EUH can quantify an individual's utility function. This might be a good way to allocate money among funds or risk profiles: use an elicitation process to create a utility curve for allocating either clients or moneys among funds or accounts with varying risk profiles, expectations, or leverage.
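A rough numerical sketch of the paradox, assuming a log utility of terminal wealth and a starting wealth of $100 (both of which are assumptions chosen only for illustration):

# St. Petersburg paradox: unbounded expected payoff, modest certainty equivalent
# under log utility of terminal wealth (w = $100 is an assumed starting wealth)
w <- 100
n <- 1:60                       # truncate the infinite sum; the tail is negligible
p <- 0.5^n                      # P(first head on toss n)
payoff <- 2^n                   # reward of 2^n dollars

sum(p * payoff)                 # 60 here, and it grows without bound as n increases
eu <- sum(p * log(w + payoff))  # expected log utility of terminal wealth
ce <- exp(eu) - w               # the certain amount with the same utility
ce

Under these assumptions the certainty equivalent comes out to roughly eight dollars, in line with the 6 or 8 dollars most people would accept.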

Back to the original point. Let's say you are in a trade for which the optimum expectation is tomorrow. What if the market goes up big today, in a big rush, all of a sudden? Do you wait it out because the system says so, or do you take the gift? The odds on the expectation have changed because of today's rise. What if the twin towers are bombed? Do you bail or ride the original trade? The answers seem simple, but in the shadings they are not so clear. The criterion for judging the posterior probability seems to be the crux of the issue, but it should follow a rational method.

Dr. Phil McDonnell responds:

The origins of Expected Utility Theory go back to The Theory of Games and Economic Behavior by Morgenstern and Von Neumann (1944). They asserted that the expected utility is given by:

E(u(x)) = sum( p(i) * u(x(i)) ), summed over all outcomes i

where p(i) is the probability of outcome x(i) occurring, and u(x) is presumed to be an unknown but monotonically increasing utility function which may be unique to each individual. Note that the expectation is a sum over all outcomes.
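In code the expectation is just a probability-weighted sum over the outcomes; here is a minimal sketch, where the identity utility in the example is only a placeholder for the unknown u:

# Expected utility of a prospect: probability-weighted sum of u(x) over all outcomes
expected_utility <- function(p, x, u) {
  stopifnot(isTRUE(all.equal(sum(p), 1)))  # all outcomes included
  sum(p * u(x))
}

# With the identity utility this reduces to the plain expected value
expected_utility(p = c(0.33, 0.66, 0.01), x = c(2500, 2400, 0), u = identity)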

In Kahneman and Tversky's (KT) paper on Prospect Theory, a prospect is essentially a set of outcomes as above in which the sum of the probabilities is 1. The latter constraint simply means that all outcomes are included. On the first page KT say, "To simplify notation, we omit null outcomes." Null outcomes come from two sources. One is a probability of zero, which is innocuous because the zeros would not be included in the sum of the probabilities adding to 1 in any case. The other is an outcome whose value is zero, and that is where the treatment of the utility function at zero matters. KT make an unsupported assumption about the nature of the utility function in the following two cryptic remarks from p. 266:

", with u(0) = 0," (p. 266)
"set u(0) = 0"

In both cases they are making an assumption about the utility function which is neither supported nor even explained. In addition, the paper is using what may be the wrong zero point.

Daniel Bernoulli made a very insightful analysis of the St. Petersburg Paradox mentioned by Jim Sogi. His key insight was that the utility of money is logarithmic, with the natural log ln() being the convenient choice. The compounded value of a dollar is given by (1+r)^t, where r is the rate and t is time. This is simply a series of multiplications of (1+r) by itself t times. We know that multiplication can be replaced by sums of logarithms, after which we take the antilog to restore the final answer. So if our goal in a sequence of investments, or even prospects (bets), is to maximize our long-term compounded net worth, we should look to the ln() function as our rational utility of a given outcome with respect to our current wealth level w. In particular, for a risk-indifferent investor, a given outcome x would be worth:

u(x) = ln( (1+r) * w )

This thinking is the basis for the optimal money management formulas and for what could be called a Rational Theory of Utility. Note that the formula depends only on wealth. It is nonlinear and concave.
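A small numerical check of the multiplication-versus-logarithms point (the return sequence below is made up purely for illustration): compounding a series of returns gives the same terminal wealth as summing the logs and then taking the antilog.

# Compounding via a product of (1+r) terms vs. a sum of logarithms
w0 <- 100                                # assumed starting wealth
r  <- c(0.05, -0.02, 0.03, 0.10, -0.04)  # made-up sequence of period returns

w_product <- w0 * prod(1 + r)            # straightforward compounding
w_logsum  <- w0 * exp(sum(log(1 + r)))   # sum the logs, then take the antilog

c(w_product, w_logsum)                   # identical up to floating point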

Thus it is reasonable to ask what was the average net worth of the individuals in the Israeli, Swedish, Allais and KT studies. For the most part they were students with some faculty. The average net worth of a student was probably about $100. A little beer and pizza night out was considered living high for most.

Using questions with numbers reduced in size from Allais, KT asked subjects to choose between:

A: 2500 with p = .33
   2400 with p = .66
      0 with p = .01
B: 2400 with certainty

For N = 72, 18% chose A and 82% chose B.

The expected log utility for these choices under rational utility is E(u(A)) = 3.1996 and E(u(B)) = 3.2189.

Clearly the 3.2189 value chosen by 82% of the subjects was the better choice.

Looking at Problems 3 and 4, Choices A, B, C, and D, we find that the rational utility function agreed with the test subjects every time, without exception. KT disagreed with both the rational utility function proposed herein and their test subjects. Based on this metric it would appear that the subjects are quite rational in their utility choices.

One can find the KT paper here.

Here is some R code to calculate the expected log utility for a rational investor for each of the referenced KT problems:
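What follows is a minimal sketch rather than a definitive listing: it assumes the rational utility is measured as u(x) = ln(1 + x/w) with starting wealth w = $100 (equivalent to ln((1+r)w) up to the constant ln(w)), which reproduces the Problem 1 figures quoted above; Problems 3 and 4 can be run by substituting their payoffs and probabilities.

# Expected log utility of a prospect for a rational investor with wealth w
expected_log_utility <- function(x, p, w = 100) {
  stopifnot(isTRUE(all.equal(sum(p), 1)))  # a prospect's probabilities sum to 1
  sum(p * log(1 + x / w))                  # u(x) = ln(1 + x/w)
}

# KT Problem 1 (amounts reduced in size from Allais)
A <- expected_log_utility(x = c(2500, 2400, 0), p = c(0.33, 0.66, 0.01))
B <- expected_log_utility(x = 2400, p = 1)

round(c(A = A, B = B), 4)  # A = 3.1996, B = 3.2189; B, the majority choice, is higher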

Jeremy Smith responds:

Note that there are at least two ways the distribution can change. It might change unpredictably, e.g., due to the arrival of new information, or it might change predictably, for example because of the approaching expiration of options. Even if Bayesian methods and Markov models aren't good at the first kind of change, they ought to be useful for the second kind.

