Apr 23

One is astonished at how far the subject of position sizing has come since Robert Bacon, who in 1940 suggested putting 2% of your money on each bet, then a buck on the races, so you could lose 50 in a row before going under.

How about an approach where position size is a variable that you put into your statistical return and reward space to start with, then examine the distribution of returns at various position sizes and determine how your utility fits with the distribution?

For example, today a 20-day high of 230, indeed a 1½-year high. What is the distribution of the six such? Max 4.8, min -5, moves to relevant endpoint: 2, 5, 2, 2, 1, 5, -2, 3. No trade from Bacon. Wait for overlay. Pittsburgh Phil in the background.
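
A minimal sketch of that exercise in Python, using the eight moves quoted above (the candidate position sizes are arbitrary illustrations):

    import numpy as np

    # The eight moves to the relevant endpoint quoted above.
    moves = np.array([2.0, 5.0, 2.0, 2.0, 1.0, 5.0, -2.0, 3.0])

    for size in (0.5, 1.0, 2.0):  # hypothetical position sizes
        pnl = size * moves
        print(f"size {size}: mean {pnl.mean():+.2f}, std {pnl.std(ddof=1):.2f}, "
              f"min {pnl.min():+.1f}, max {pnl.max():+.1f}")

The distribution simply scales with size; choosing among the scaled distributions is where utility enters.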

Phil McDonnell writes:

The hard truth, to me, is that it is all position sizing. –Ralph Vince

I agree with this only up to a point. In order to have a winning strategy one must have an edge in a statistical sense. You cannot win with a losing system. One needs both a winning system with an edge and a solid money management system. Neither one alone is sufficient.

After one finds a winning system, one must also have a money management system that does not expose you to ruinous losses. If you graph the expected amount of money you make at various position sizes for any winning system, you will find that it looks like a mountain. The peak of the mountain occurs at precisely the position size Ralph calls optimal f. But if you also look at a chart of risk (stdev), you find it is a monotonically rising function of position size. Thus as you continue past the optimal f point you are giving back return but still increasing risk. It is the worst of both worlds. If you go far enough past it you can actually wind up losing money even with an overall winning system. That is why I prefer to call the optimal f point the point of maximal investment return.
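
A minimal sketch of that mountain in Python, assuming a hypothetical set of per-trade results (in Vince's formulation, f sizes the bet against the worst historical loss):

    import numpy as np

    # Hypothetical per-trade results for a winning system (illustrative only).
    trades = np.array([2.0, 5.0, 2.0, 2.0, 1.0, 5.0, -2.0, 3.0])
    worst = abs(trades.min())
    fs = np.linspace(0.01, 0.99, 99)

    # Per-trade geometric growth at fraction f, and the dispersion of the
    # per-trade return at that size.
    growth = [np.prod(1 + f * trades / worst) ** (1 / len(trades)) - 1 for f in fs]
    risk = [float(np.std(f * trades / worst, ddof=1)) for f in fs]

    f_growth = fs[int(np.argmax(growth))]
    print(f"optimal f ~ {f_growth:.2f}")  # the peak of the mountain
    print(f"risk rises monotonically: {risk[0]:.3f} -> {risk[-1]:.3f}")
    # past the peak, growth falls away while risk keeps climbing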

With respect to Vic's comment about utility, there is much merit to this approach. None of us truly knows our utility function, and if you believe Kahneman and Tversky it is probably irrational anyway. So the next best thing is to construct a rational function mathematically from some logical first principles. The three most obvious choices are the Sharpe ratio, log, and my favorite, the log log Sharpe ratio. Except for the simple log function, one invariably finds that using these utility functions one chooses a point on the mountain graph somewhat to the left of the optimal f peak. So in that sense optimal f is really only 'optimal' for the case of maximizing compounded portfolio return, but is sub-optimal, and dangerously past the optimal point, for maximizing any utility which explicitly takes risk into account.
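
Continuing the sketch above with a mean-variance utility as one concrete risk-aware criterion (my stand-in, not Dr. McDonnell's exact functions), the chosen f lands well left of the growth peak:

    # U(f) = E[r_f] - lam * Var(r_f); lam is an arbitrary risk-aversion level.
    lam = 2.0
    utility = [(f * trades / worst).mean() - lam * (f * trades / worst).var(ddof=1)
               for f in fs]
    f_util = fs[int(np.argmax(utility))]
    print(f"risk-aware f ~ {f_util:.2f} vs growth-optimal ~ {f_growth:.2f}")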

Dr. McDonnell is the author of Optimal Portfolio Modeling, Wiley, 2008

Ralph Vince adds:

I agree with most of what you say here. Like the old Frenchman used to say, "Most people don't know what makes them tick; they only know that they tick."

Most people do not really know what they are in the markets for — and I think there are very many different and good reasons for being in the markets aside from mere growth maximization. But most don't know what they are here for.

I think until someone can answer that they're probably better off not being in this arena. But I no longer think one needs a winning strategy, and I beg to differ with the notion that you must have a positive expectation (and this, too, further indicates that timing and selection are subordinate to sizing). Ultimately, you are in this for a finite number of holding periods or trades (call this T), and given that you have control over your quantity, you seek to come out at T (or before, should you achieve it sooner) having met the objective of your criteria.

Again, and I will use this for illustration of the idea — if I could do a full martingale on my capital, and I had unlimited capital, and my goal was to accumulate, say, X…

I could then do a full martingale on a losing system, and when X was achieved (or necessarily at time T) leave the game.
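
A minimal simulation of the thought experiment, with hypothetical parameters: a 48% win rate with even payoffs (a losing game on its own), doubling after each loss, leaving when X = 10 units is reached:

    import random

    def martingale_run(p_win=0.48, target=10.0, max_trades=10_000):
        wealth, bet = 0.0, 1.0
        for _ in range(max_trades):
            if random.random() < p_win:
                wealth += bet
                bet = 1.0              # reset after a win
            else:
                wealth -= bet
                bet *= 2.0             # full martingale: double after a loss
            if wealth >= target:
                return True            # objective X reached; leave the game
        return False

    hits = sum(martingale_run() for _ in range(1_000))
    print(f"reached X in {hits / 10:.1f}% of runs")
    # near 100%, but the interim bets and drawdowns are unbounded,
    # hence the 'unlimited capital' proviso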

I know for years I too bought into the idea that you have to have a winning system. But I am seeing guys who have specified their criteria well, and are getting astounding results, and are trading approaches that are, at best, feeble. 

Ralph Vince is the author of The Leverage Space Trading Model, Wiley, 2009

Phil McDonnell replies:

I would love to see an example of a system that had a negative expectation but could somehow be turned into a positive expectation through money management. The martingale example is a system that exchanges a high probability of winning a small amount for the small probability of losing a large amount.

Examples of such 'systems' with skewed distributions would include:

  1. Selling out of the money options.
  2. Setting a profit target of $1 with a stop of a $10 loss (see the sketch below).
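
A quick simulation of the second 'system' makes the skew concrete: on a driftless $1-tick random walk (my illustrative setup), the target is hit roughly 91% of the time, yet the expectation is zero before costs:

    import random

    def one_trade(target=1, stop=10):
        pnl = 0
        while -stop < pnl < target:
            pnl += random.choice((-1, 1))  # fair $1 ticks: no edge by design
        return pnl

    results = [one_trade() for _ in range(100_000)]
    win_rate = sum(r > 0 for r in results) / len(results)
    print(f"win rate {win_rate:.1%}, mean P&L {sum(results) / len(results):+.3f}")
    # ~90.9% winners, mean ~0: skewed payoffs, not a positive expectation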

Until I see one I shall remain skeptical that one can reliably expect to profit from a losing system simply through money management.

Rocky Humbert:

As a philosophical matter, I question whether a system can truly have a negative expectation, because if you take a system that supposedly has a negative expectation and simply do the exact opposite, you should have a system with a positive expectation. I am skeptical that any market participant believes that his approach has a negative expectation.

If you have ever tried to play checkers to lose (instead of win), you'll see just how difficult this can be.

(Note that I am excluding transaction costs from this discussion. But there should be no a priori (efficient market) reason to believe that always buying out-of-the-money options should have a better result than always selling out-of-the-money options, unless there is a systematic mispricing.)

Steve Ellison comments:

Any casino game is a system with a negative expectation for the player (except blackjack for a card counter). In craps, one can bet against the shooter, but the expectation is still negative. The only way to take the other side is to be the house.
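
For reference, the pass-line expectation can be computed exactly from the dice combinatorics; this worked check (standard rules) gives the familiar 1.41% house edge:

    from fractions import Fraction as F

    # Ways to roll each total with two dice.
    ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}
    p = lambda n: F(ways[n], 36)

    win = p(7) + p(11)                    # natural on the come-out roll
    for point in (4, 5, 6, 8, 9, 10):     # else must make the point before a 7
        win += p(point) * F(ways[point], ways[point] + ways[7])

    print(win, float(2 * win - 1))        # 244/495, edge about -1.41%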

George Parkanyi writes:

In my REAP system (Relational Equity Allocation Program), I made the position size a function of the relative separation between two securities over time (the % move of X minus the % move of Y determined the % of position X to sell to fund the purchase of position Y). You can re-allocate based on a specific net separation (e.g., 30%) or re-allocate at specific time periods come what may. This has a positive expectation over long periods because there is a dollar-cost-averaging dynamic involved, a more aggressive version, since fewer shares are sold of the relatively higher security, more shares are bought of the relatively lower security, and the wider the separation the larger the re-allocation size. The compounding over time depends on the volatility (and therefore the degree of divergence and funds transfer) between the matched securities.
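
A minimal sketch of that re-allocation rule as I read it (the function name, prices, and the 30% trigger are illustrative, not REAP itself):

    def reallocate(shares_x, shares_y, px0, px1, py0, py1, threshold=0.30):
        # Relative separation: % move of X minus % move of Y.
        sep = (px1 / px0 - 1) - (py1 / py0 - 1)
        if sep >= threshold:
            dollars = sep * shares_x * px1   # sell sep% of X...
            shares_x -= dollars / px1
            shares_y += dollars / py1        # ...to fund the purchase of Y
        return shares_x, shares_y

    # X up 40%, Y flat: sell 40% of X and buy the relatively cheaper Y.
    print(reallocate(100, 100, px0=50, px1=70, py0=50, py1=50))  # (60.0, 156.0)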

Trade sizing can also be used for money management in trend following. The simple principle of scaling into a position as it rises keeps your risk relatively low on initial entry, and there is a cushion of profit to fund the risk of subsequent higher-up scaling purchases. Here again, you can optimize by how high you go before adding, and what tranche sizes you add at each level. The trade-off is that you limit your profit potential by scaling, but your stop-outs are cheaper, and waiting to add provides confirmation of the trend.

I am currently using a 40-30-30 scaling sequence, with a specific setup pattern rather than a fixed % rise (e.g., 0%, 10%, 20%) as the trigger. You could use a relatively wide stop on the first tranche, or really keep your costs down and use a very tight stop that allows several inexpensive stop-outs before you "latch". The latter is the better way to go, I think, if you are trading breakouts and strength patterns. A good breakout doesn't look back, so your tight stop doesn't come into play. If you have to stop out 3 or 4 times before you catch it, you can still keep your misfire costs low.
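
The arithmetic of such a sequence, with fixed % rises standing in for the setup pattern (all numbers illustrative):

    # 40-30-30 tranches added at entry, +10%, and +20% (example triggers).
    entry = 100.0
    tranches = (0.40, 0.30, 0.30)
    adds = (0.00, 0.10, 0.20)

    avg_cost = sum(w * entry * (1 + a) for w, a in zip(tranches, adds))
    print(avg_cost)                      # 109.0: ~10% cushion with price at 120

    # A 3% stop on the first tranche caps each misfire at 1.2% of full size.
    print(tranches[0] * 0.03)            # 0.012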

Rocky Humbert:

Fair point. I was referring to the financial markets (and not casinos), but you can indeed buy and hold the shares of publicly traded casino companies, and then you are taking the other (positive-expectation) side.

Turning a system upside down highlights an additional phenomenon: path dependency can determine whether one is a trading "genius" or "moron."

Nigel Davies comments:

Fascinating discussion.

If I might offer my two cents, I've found that in my field there are an immense number of practical difficulties in bringing a nice piece of theory to the board. From my amateur perspective I see many issues with regard to the subject under discussion:

a) A maximum loss size implies that you have a clearly defined exit point, i.e., you are trading with stops of some kind, and these can nonetheless leave you with a winning system.

b) Assuming you have your exit point, how realistic is it that it will be achieved (slippage, etc.)?

c) Does assigning all trades the same position size represent maximum efficiency? I suggest that some are much better than others and should therefore be weighted more heavily.

I'm sure that readers of this site can think of many more issues such as these. This in turn makes me wonder about the utility of trying to apply very precise mathematics to practical and very messy issues. Surely it should come with a good-sized dollop of common sense and flexibility, in which good lab experiments are regarded as mission statements rather than straitjacket rules.

Of course doing this is an art in itself, one which extends into all sorts of psychological nuances. If anyone is unable to do this they should be looking to work on themselves rather than 'the system'. Discovering the reasons why people can't operate effectively under pressure is very valuable, both in the markets and in life.

Craig Mee responds:

Fair call, Nigel, though the one thing I would have to agree with on the surface but disagree with in practice is: "I suggest that some are much better than others and should therefore be weighted more heavily."

It has been my humble experience that the trades I thought were crackers ended up as duds, and those I thought were merely tradable, just above the criteria, turned out to be 4-10 baggers. Setting the same cash risk at the start was imperative across the board.

Nigel Davies writes:

Well, yes, that can be a tricky one. But if one's assessment of bullitude/bearitude is unreliable vis-à-vis degree, what makes you so sure that they're not completely the wrong way round!? It could be that 'sure thing' trades lower one's vigilance, in which case we're back to the human factor. Anyhow, now that I'm more awake I can think of some other flies in the ointment in this position sizing debate. What about:

a) The good part-time trader with a day job who wants to build up capital. Perhaps he should push the boat out more at first so as to get a big enough account to go professional.

b) The improving trader; shouldn't he trade small size whilst learning?

c) The successful professional trader who wants to protect capital. Shouldn't he gradually reduce size rather than have his entire wealth and livelihood on the line?

Don't get me wrong, I think that an understanding of position size risks is essential. But there's a lot more to this than just numbers.


Comments

  1. douglas roberts dimick on May 2, 2010 5:06 am

    Making Bacon

    How may utility determine distribution of position sizing based on integrated positions relative to triangulation of movement for unitizing position mass?

    In effect, the Quantitative Relativity of an exchange sequence (by trades or volume) is quantified by intervals of movement not time. One’s cited approach by Bacon is similar in this respect, focusing on the series of events.

    Phil’s mountain analogy is consistent with my research (see daspec article, Linear Mapping of Topological Vector Spaces: Plotting Order-Size Domains Relative to QR Correlated Convergence of Issue Direction Indicators), whereby a strategy is executed based on cone-shaped plotting of domains, so representing order size execution of issuance (or money management systematics). See (cited article’s) Figure 1 for how linear mapping of topological vector spaces – representing non/directional energy of electronic exchange sequencing of securities market transactions – indicates variations in equilibrium of center of mass for TVS domains.

    The circular base of order size cones generates upon convergence of velocity bound sums; this energy dynamic of the exchange process explains why Ralph’s optimal f does not account for Phil’s monotonical functioning of risk (standard deviation) as position size decreases. Note my prior article’s Figure 1: for purposes of exiting a position, the f point (as peak of the cone) must be inverted, whereby the base of the cone narrows; as a result, the risk of loss (or triangulated degree) correlates (or becomes greater) to the Quantitative Relativity (or special relativity) of the decreasing size of the position.

    Accordingly, as Phil indicates, to avoid the “worst of both worlds,” one’s strategy must first be correlated to account (or hedge) for this inclusion of mass aligned with directional vectors of closed, subspace sums bound to a linear mapping of transaction exchange (or energy patterns). See [article’s] Figure 1 distinguishing of TVS domains, to wit:

    “Geometrically, an order-relative domain forms upon a circular base. The diameter of that base of the corresponding cone parallels an assimilated plotting (or linear progression) of issue transactional velocity. This line ends at the point of divergence among indicators constituting that directional function of the issuance transaction (or series of order executions).

    Diameter of circular base of domain represents averaging of order size. Linear sequencing of orders therewith generate plotting of closed, convex sets less extreme points (e.g., apex connecting legs of cone).

    This rules-based construct of geometric portioning offers a method to reconstruct electronic exchange market footprinting. The purpose (or value) of such plotting is being able to quantify patterns of market relativity otherwise eliminated or nontransparent within increasingly fragmented markets. That inability to correlate is also caused by low latency, high frequency systematics as well as during (artificial) pricing gyrations of market aggregation and periods of high levels of volatility.”

    Ralph’s optimal f is a center of mass that may represent the average position of all the particles of mass that constitute a particular object (or vested position as determined by one’s strategy). Therefore, such positioning (being the center of mass) is a function relative to positions and masses of the particles as composed within a given system (e.g., gravity environment or electronic exchange securities market systematics).

    Due to averaging (or relativity) of positions (as masses) indicated within boundaries of objects (or patterning of entry/exit trades or adjusted volume sizing), the center of mass of that object does not necessarily coincide with its intuitive, geometric center. Therefore, the center of mass may be inside the boundaries of an object or outside the boundaries of the object (or those legs of the triangulation forming the cone, being Phil’s mountain) that determines Ralph’s optimal f.

    A line with a point of the center of mass of an object (or shape) ending inside the defined base of support (or shaped boundaries) presents a stable equilibrium with the object balanced. However, a center of mass outside of the base of support is unstable and not balanced. Phil’s monotonical correlation actually becomes the parabolic curve(s) – like being a path of light – that may form as a correlation of positions within a position (or masses averaged into states of equilibrium relative to any corresponding center of mass, being Ralph’s optimal f).

    So it appears that Ralph and Phil are both right. An optimal f may be realized via strategy; however, it is not static (or fixed), because Phil’s standard deviation as a correlation of risk is determined as the circular base of his cone-shaped “mountain” assimilates changes in position size and pricing. In effect, his monotonical order is represented by a line that becomes a parabolic curve during time-change in positioning. This operation of Quantitative Relativity of a given electronic exchange system explains why there is “No trade from Bacon.”

    Phil’s suggested rational functions (e.g., Sharpe ratio, log, and log log Sharpe ratio) functionally correlate so long as the utility functions of his “mountain graph” parallel (or assimilate) indicated quantification of a given position relative to market movement(s) of which the strategy positioning is derived. Yet, as Ralph notes, this process is finite as constructed by one’s strategic formulation(s) of T and X.

    Ralph cites those who “have specified their criteria well, and are getting astounding results, and are trading approaches that are, at best, feeble.” This observation perhaps distinguishes how position domains are independent of market indicator domains because Quantitative Relativity operates independently among exchange and strategy systematic (or general and specific relativity).

    Accordingly, one may query then why a system (such as George's) pairs stocks instead of demarcating the "relative separation" by replacing the second security (Y) with a "% move" of X at preceding points in time of convergence and divergence?
    Does not pairing (as with batching) create a rather complex series of issues as to independence of quantification relative to both pairings and each security?

    Phil’s martingale example is interesting, as the device is used in polo and other equestrian activities, usually either for training or performance. From a perspective of stochastic analysis, though, the issue may be whether one’s strategy is more gambling or probability based. A martingale focusing on re-allocation could be said to be probability oriented, as the tool is applied for money management purposes. Otherwise, if a temporal application of t and s is being calculated, then is not such a system returning us to the difficulties presented in Phil’s original observations about an optimal f not correlating to risk (standard deviation)?

    The working draft of my second, current book project considers this dilemma with FSM construction, whereby strategy functions and execution protocols are distinguished via rules-based declination of state transitioning.

    dr
