Aug 6
As a hedge fund manager, you have nine assistants employed solely to give you advice. Each of the assistants has a different perspective on the markets. They are all good advisers, as any one of them improves your trading considerably. For example, suppose the market has a 2 percent annual return, but with your skills you can generate a 10 percent return. If you also add the advice of any one of your assistants, you can bump that return up to between 12 and 18 percent.

Over the last 12 representative years there have been times when the nine were universally bullish. But despite their unanimity, the market did not always rise. Conversely, even in the protracted down moves of 2008, their bearishness was not unanimous. Put another way, there were always one or two who wanted to go long at the worst times. Yet each and every one of them provided great advice over time.

You would like to find a way to combine their advice to get even better results than you would by using any one alone. But that's not easy. Sometimes adviser A is early on a move, and other times late. Likewise with the other assistants. One simple solution would be to have them vote, but the performance of the vote underperforms some of the individuals, although it is still better than not having any adviser.
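
A minimal sketch of the voting idea, with made-up signals standing in for the nine advisers' daily calls (everything here is hypothetical):

```python
import numpy as np

# Hypothetical calls from nine advisers over 250 days: +1 = bullish, -1 = bearish.
rng = np.random.default_rng(0)
signals = rng.choice([-1, 1], size=(250, 9))

# Simple majority vote: each day's position is the sign of the vote total.
# With nine voters the total is always odd, so there are no ties.
position = np.sign(signals.sum(axis=1))  # +1 = long, -1 = short
print(position[:10])
```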

*Note here that we are only considering return and not the risk taken to achieve that return. Risk should always be considered, but for the sake of moving along, let us assume that taking the advice of your advisers never increases risk and that their respective upside contributions to profits are directly proportional to their downside exposures to risk. That is, much of their positive return contribution comes from reducing risk, which is what we have observed generally.

Now, let's suppose that these advisers are not people but algorithms. That's actually better, because as algorithms they can be combined in ways that individuals cannot. They can be viewed logically (on/off), as in the voting experiment, or they can be ranked by their actual values. If they have scalar values, they should be normalized (put on the same order of magnitude or scale). For example, you cannot compare the slope of the Dow Industrials with that of the S&P 500, as the former is an order of magnitude larger. But if you put them on the same scale (e.g., divided by price), you can easily compare them.
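
A minimal sketch of that normalization, with made-up price series; the 20-bar window and the sample closes are assumptions for illustration only:

```python
import numpy as np

def normalized_slope(prices, window=20):
    """Least-squares slope of the last `window` closes, divided by the last
    price so that series of different magnitudes become comparable."""
    y = np.asarray(prices[-window:], dtype=float)
    slope = np.polyfit(np.arange(window), y, 1)[0]  # raw slope, price units per bar
    return slope / y[-1]                            # dimensionless fraction per bar

# Hypothetical closes: the Dow-scale series has a raw slope roughly 10x larger,
# but the normalized slopes are directly comparable.
dow = 13000 + 15.0 * np.arange(60)
spx = 1400 + 1.6 * np.arange(60)
print(normalized_slope(dow), normalized_slope(spx))
```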

Normalization is exactly what you would do to your inputs if you were using a neural net, and you might be tempted to go the NN route. But NNs have problems; among them is your inability to discover the actual combination of what worked best. You might say "who cares" as long as it works, but that philosophy does not have a good history. However, there is a very good use for an NN, and that is as a trial. That is, if you are good at NNs (and most people fail), then you should by all means try. If the NN gives you good results, then proceed on your own to find a good combination without the NN. But if using an NN does not improve results for the experienced practitioner, then it is going to be very difficult to find a better combination.
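
That trial might look something like the following sketch, with scikit-learn's MLPRegressor standing in for the net and fabricated adviser signals and returns; the layer size and train/test split are arbitrary assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(250, 9))  # normalized adviser signals, one column per adviser
y = X @ rng.normal(size=9) + 0.1 * rng.normal(size=250)  # hypothetical next-day returns

nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
nn.fit(X[:200], y[:200])

# If the net's out-of-sample fit beats your baseline, a good combination of the
# advisers likely exists; then go find it on your own, without the NN.
print("out-of-sample R^2:", nn.score(X[200:], y[200:]))
```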

But how do you combine them to your best advantage? Well, there's an app for that. It's called linear algebra. It is somewhat vertigo-inducing for most traders, because most of them are comfortable only with things they can chart. For your average trader that means two dimensions; options traders tend to be comfortable in three dimensions. But with our illustration we are likely progressing to higher dimensions, and they are not chartable, although the problem's solution is indeed a chart, albeit a virtual one.
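
In practice the linear-algebra "app" can be as small as a least-squares solve. A sketch, assuming a matrix of normalized adviser signals and a vector of subsequent returns, both fabricated here for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(250, 9))  # 250 days x 9 normalized adviser signals
r = A @ rng.normal(size=9) + 0.1 * rng.normal(size=250)  # subsequent returns

# Solve A @ w ~= r in the least-squares sense: w weights each adviser's advice.
w, *_ = np.linalg.lstsq(A, r, rcond=None)
print("adviser weights:", np.round(w, 3))

# The combined advice is a projection of each day's 9-dimensional signal vector
# onto a single number; the "virtual chart" lives in those nine dimensions.
combined = A @ w
```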

Subsequent "chapters" (if the topic flies): Operations, Testing.

Jim Sogi writes: 

"But with our illustration we are likely progressing to higher dimensions, and they are not chartable, > although the problem's solution is indeed a chart, albeit a virtual one."

One of my first posts ever to the SL was Flatland, and the idea that multiple dimensionality is lost in the two-dimensional charts which are typically used.

Easan Katir writes: 

Flatland, one of my all-time favorite books since I read it 40 years ago, offers insights in many arenas. Perhaps some enterprising ex-game coder will turn his attention to finance and provide charts where the point of view can be changed with a click. Will traders of the future be trading on an Xbox-like device?


Comments


  1. drdimick on August 8, 2012 12:37 am

    Is Scalar a Rules-Based Limitation of an NN Tautology?

    Bill’s example of scalar-based comparisons for neural net (NN) substitution with algorithms to replace human input (or advisors) presents an interesting example of how rules-based assumptions ultimately govern the outcomes of a given fund/bank risk management strategy.

    Bill’s example of normalizing scalar (Dow and S&P 500) values represents a rules-based assumption, whereby some formulation of mass, length, and/or speed is being correlated absent directional considerations. Consequently, any resulting algorithm has a finite application in determining a vector quality absent alternative metric and topological (input/output) data.

    Is such an example not then a linear formulation?

    Bill then proceeds to note that one of the glaring “problems” of the “NN route” is an inability to “discover the actual combination of what worked best.” No doubt…

    Where within such algorithmic strategies are the nonlinear dynamics so operating in those varying (non)directional states of price action during any given electronic market exchange?

    The irony of Bill’s “trial… to find a good combination” retort, offered in answer to what he attributes to a philosophical limitation (or complication), is that we circle right back to the human factor. And here is where Bill’s logic becomes interesting.

    His solution to the tautology? Linear algebra.

    Based on my study of day trading strategies relative to algorithmic processing, I remain uncertain how Bill is aligning two dimensions with the “average trader” (whoever that may be) and “higher dimensions” a la options traders and algorithmic processing (as in his scalar comparison). Regardless, does not the question then become deciding what rules-based considerations are being factored into the design of a program trading architecture?

    Just as one comes to realize that algo-quant probability-based equations work only when their rules-based assumptions are operative during a given price action cycle, it appears that Bill’s algebraic scheme fails upon a similar realization…

    Markets are relative (not mathematical) constructs of human activity.

    Sure, trial-and-error testing makes sense, provided that correlation with the dominant algorithmic processing of a given market exchange can be established and maintained. However, the philosophical antecedent operating within the systematics of (random or cyclical) price action is that such linear correlations (algebraic, geometric, arithmetic, or from any related study even beyond pure mathematics) have yet to be correlated (at least nominally) with the resulting nonlinear market correlations (or valuations).

    The most striking testimony evincing this “truth” of all market truths is recorded during an investors club talk with perhaps the market’s greatest liar of all time… Bernard Madoff…

    http://www.youtube.com/watch?v=auSfaavHDXQ

    A caveat: Bill’s citing of a scalar comparison lasers our discussion to perhaps the single most important question concerning market topology, one analogous to what Einstein resolved to arrive at his General Theory of Relativity…

    How do we define and quantify the geodesics (or curvature of space-time) found in the metacircularity of electronic market exchange systematics?

    No doubt, Bill’s scalar comparison presents an important consideration for finding the answer.

    dr
