
# A Fechnerian Day, by Victor Niederhoffer and Tom Downing

June 13, 2007

The S&P index was at exactly 1500 yesterday at 10:00 a.m., with the futures at exactly 1515 at the same time. The total ground covered in the S&P futures yesterday, using two-point up and down swings as a measure, had to be one of the highest in history: from a 1525 close the previous day to a 1517 open, to 1522 at 9:40, 1513 at 10:30, 1521 at 11:40, 1516 at 1:00 p.m., to 1526 at 2:00 p.m. (at which time I tried to book a tennis court), to 1506 at the close. That is 65 points of movement to make a mistake in — truly disruptive and encouraging Fechnerian decision making.

Today (Wednesday), the cast bond has moved up from a low of .90 to a high of .9126 (it was at .9116 as of 10:10 a.m., time of writing), after moving from .9216 to .9016 yesterday, mostly between 2:00 and 5:30 p.m.

The omniscient market of Israel has moved from its low of 1078 to 1094 this morning, up slightly on the day, and the VIX has moved from its 14.7 Monday close to 16.7 by yesterday's close, near its high of 17.1, but dropped back to 15.7 today. Many markets seem to be trading about halfway between their recent highs and lows.

The move on retail sales in the S&P and the bonds, down three quarters of a percent and then up one percent from there, was completely disruptive. It reminded me of the play by play of a sumo match, which also takes place in 25 seconds and is the only thing in the world that goes through so many gyrations in that short amount of time.

We calculated Kendall Tau for S&P prices for 5, 10, 15, and 20 day non-overlapping intervals since 1996.

We then compared these empirical estimates to estimates calculated from synthetic price series created using bootstrapped daily changes.

In general, the observed Kendall Taus were higher than you'd expect from the simulated distributions, significant at about a ten percent level.

Method (using the 5-day as an example):

1) Generate 1,000 bootstrapped price series.

2) Split each series into non-overlapping 5-day intervals.

3) Calculate Kendall Tau for each of the 1,000 series, giving a distribution of Taus: the average Tau was 0.0482 and the standard deviation of Tau was 0.0243.

4) Calculate the Tau from the actual observed price series: 0.0778.

5) Find where 0.0778 falls within the simulated distribution: in this case, the empirical Tau (0.0778) was greater than 88.39% of the Taus from the simulated distribution.
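The bootstrap procedure can be sketched in a few lines. This is a minimal Python illustration (mine, not the authors' code), and it assumes Tau is computed between consecutive non-overlapping interval returns — one plausible reading of the method described above:

```python
import random

def kendall_tau(x, y):
    # plain O(n^2) Kendall tau between two equal-length series:
    # (concordant pairs - discordant pairs) / total pairs
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[j] - x[i]) * (y[j] - y[i])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

def interval_returns(prices, k):
    # changes over non-overlapping k-day intervals
    ends = prices[::k]
    return [b - a for a, b in zip(ends, ends[1:])]

def bootstrap_taus(daily_changes, k=5, n_boot=1000, rng=None):
    # resample daily changes with replacement, rebuild a price path,
    # then compute tau between consecutive k-day interval returns
    rng = rng or random.Random(0)
    taus = []
    for _ in range(n_boot):
        sample = rng.choices(daily_changes, k=len(daily_changes))
        prices = [0.0]
        for d in sample:
            prices.append(prices[-1] + d)
        r = interval_returns(prices, k)
        taus.append(kendall_tau(r[:-1], r[1:]))
    return taus
```

The empirical Tau is then compared against the resulting distribution of simulated Taus, as in step 5.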

Results: Simulated Distribution of Kendall Tau (based on 1,000 simulated price series for each interval):

| Interval | 5-day | 10-day | 15-day | 20-day |
|---|---|---|---|---|
| Simulated mean | 0.0482 | 0.0455 | 0.0446 | 0.0460 |
| Simulated std. dev. | 0.0243 | 0.0317 | 0.0374 | 0.0418 |
| Empirical mean | 0.0778 | 0.0900 | 0.0898 | 0.1120 |
| %ile of empirical mean in simulated dist. | 0.8839 | 0.9189 | 0.8911 | 0.8541 |


# EUR/USD, by Craig Mee

January 18, 2007

A quick observation …

Since the start of 1995 through 2006, the opening week of the year in EUR/USD has been the extreme (high or low) for the year in nine of 12 years … Will '07 follow suit?

## Tom Downing comments:

This looks pretty nonrandom to me notwithstanding the arcsine effect.

Define S as the number of years (out of 12) in which the min or max falls within the first week … In 10,000 simulated 12-year periods, here is the distribution of S when price changes follow a standard normal distribution (mean 0, standard deviation 1):

| S | N | Prob | Odds |
|---|---|---|---|
| 0 | 988 | 0.0988 | 10.12 |
| 1 | 2504 | 0.2504 | 3.99 |
| 2 | 2984 | 0.2984 | 3.35 |
| 3 | 2145 | 0.2145 | 4.66 |
| 4 | 951 | 0.0951 | 10.52 |
| 5 | 324 | 0.0324 | 30.86 |
| 6 | 86 | 0.0086 | 116.28 |
| 7 | 16 | 0.0016 | 625.00 |
| 8 | 1 | 0.0001 | 10000.00 |
| 9 | 1 | 0.0001 | 10000.00 |
| 10 | 0 | 0.0000 | NA |
| 11 | 0 | 0.0000 | NA |
| 12 | 0 | 0.0000 | NA |

In only 1 of the 10000 simulations did at least 9 years of the 12 have a min or max within the first week.

If you assume some sort of drift (for example, since 2002 the euro/$ daily mean has been 3.3 pips with a standard deviation of 68 pips), the probability of having at least one first-week min or max increases, but the probability still drops off rapidly after S = 7:

| S | N | Prob | Odds |
|---|---|---|---|
| 0 | 579 | 0.0579 | 17.27 |
| 1 | 1814 | 0.1814 | 5.51 |
| 2 | 2789 | 0.2789 | 3.59 |
| 3 | 2460 | 0.2460 | 4.07 |
| 4 | 1473 | 0.1473 | 6.79 |
| 5 | 628 | 0.0628 | 15.92 |
| 6 | 210 | 0.0210 | 47.62 |
| 7 | 44 | 0.0044 | 227.27 |
| 8 | 3 | 0.0003 | 3333.33 |
| 9 | 0 | 0.0000 | NA |
| 10 | 0 | 0.0000 | NA |
| 11 | 0 | 0.0000 | NA |
| 12 | 0 | 0.0000 | NA |

Another approach would be to estimate the probability of observing a first-week min or max in any given year (conditional on a price change distribution), and then calculate the probability of having at least 9 successes out of 12 trials under a binomial distribution.
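That binomial shortcut takes only a few lines. This Python fragment (mine, not Tom's) assumes a per-year probability of about 0.177 for a first-week extreme, the figure Sam Humbert's simulation below arrives at:

```python
from math import comb

p = 0.177  # assumed per-year probability of a first-week min or max
n = 12     # years observed

def binom_pmf(k, n, p):
    # probability of exactly k successes in n independent trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

# probability of 9 or more first-week extremes in 12 years
p_at_least_9 = sum(binom_pmf(k, n, p) for k in range(9, n + 1))
print(p_at_least_9)
```

The result is a few hundredths of a basis point, consistent with the 1-in-10,000 simulation count above.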

## Vincent Andres adds:

EUUS_W.DAT : column = OPEN 02/01/1995-25/12/2006

| Year | WEEK_1 | WK_MIN | WK_MAX | DIFF |
|---|---|---|---|---|
| 1995 | 1.2040 | 1.2040 | 1.3422 | 0.0000 |
| 1996 | 1.2740 | 1.2250 | 1.2837 | 0.0097 |
| 1997 | 1.2400 | 1.0556 | 1.2406 | 0.0006 |
| 1998 | 1.1091 | 1.0762 | 1.2085 | 0.0329 |
| 1999 | 1.1756 | 1.0098 | 1.1830 | 0.0074 |
| 2000 | 1.0133 | 0.8352 | 1.0256 | 0.0123 |
| 2001 | 0.8956 | 0.8437 | 0.9472 | 0.0516 |
| 2002 | 0.9016 | 0.8613 | 1.0100 | 0.0403 |
| 2003 | 1.0225 | 1.0225 | 1.2184 | 0.0000 |
| 2004 | 1.2352 | 1.1790 | 1.3444 | 0.0562 |
| 2005 | 1.3313 | 1.1709 | 1.3576 | 0.0263 |
| 2006 | 1.1854 | 1.1834 | 1.3353 | 0.0020 |


## Sam Humbert adds:

I took a quick look at this as a finger-exercise. Below is R code with some user-tweakable parameters, currently set to roughly mimic Tom's work (though I took a clean-room approach; didn't use Tom's code as a base). The idea, as suggested by Tom, is to find the "probability of observing a first week min or max in any given year," which is "Prop" in this R script, and turns out to be .177 (I'm sure Dr. Phil or others could find a closed-form solution) and plug this into the binomial, thus chopping out an order of magnitude of computing. The results I get are almost exactly Tom's, so either his work is correct (as usual) or he/I made the same mistakes.

```r
Days <- 252    # Biz days in a year
Year <- 12     # Number of years
Week <- 5      # Biz days in a week
Sims <- 10000  # Number of sims

Data <- apply(matrix(rnorm(Days*Sims), Days), 2, cumsum)
Prop <- sum(pmin(apply(Data,2,which.min), apply(Data,2,which.max)) <= Week)/Sims
Prob <- round(diff(pbinom(Year:0, Year, Prop, F)), 4); Prob <- c(Prob, 1-sum(Prob))
Odds <- round(1/Prob, 2)
data.frame(S=Year:0, Prob, Odds)
```


| S | Prob | Odds |
|---|---|---|
| 12 | 0.0000 | Inf |
| 11 | 0.0000 | Inf |
| 10 | 0.0000 | Inf |
| 9 | 0.0000 | Inf |
| 8 | 0.0002 | 5000.00 |
| 7 | 0.0016 | 625.00 |
| 6 | 0.0088 | 113.64 |
| 5 | 0.0352 | 28.41 |
| 4 | 0.1023 | 9.78 |
| 3 | 0.2113 | 4.73 |
| 2 | 0.2948 | 3.39 |
| 1 | 0.2492 | 4.01 |
| 0 | 0.0966 | 10.35 |


# 2007 Fed Model Forecast, by Victor Niederhoffer and Tom Downing

January 7, 2007

The Fed Model postulates that if the forward earnings yield of the S&P Index is higher than the 10-year treasury yield, stocks are "undervalued," and vice versa. As of January 4, the S&P was at 1418.34 and expected forward S&P 500 earnings for the next 12 months were 90.38, making the forward earnings yield 6.37 percent (90.38/1418.34). The yield on the 10-year T-note was 4.6 percent.

Historically, subsequent market returns have been correlated with the differential between the S&P forward earnings yield (estimated 12 months earnings divided by the S&P 500 level) and the 10-year treasury yield. On the nine occasions when this differential has been greater than 1 percent, the S&P 500 rose in the subsequent 12 months all nine times, for an average gain of 14.7 percent. (This differential currently stands at 1.77 percent.)

We have found that the best way to specify the Fed model relationship for forecasting purposes is with a linear regression in the form:

S&P Return[t+1] = a + b * ( Forward Earnings Yield[t+1] - 10 Year Yield[t] )

Estimating this regression using yearly data since 1980, we obtained the following equation:

S&P Return[t+1] = 0.0834 + 4.8839 * ( Forward Earnings Yield[t+1] - 10 Year Yield[t] )

(t-stats: 2.72 and 2.05; p-values: 1.17% and 5.07%)

The R-Squared of 0.14 is quite high for a predictive regression in the financial markets and indicates that about 14 percent of the variation in subsequent returns was explained by the independent variable over the time period studied.

To determine current Fed Model forecast:

Current S&P (as of 01/04/07) stands at 1418.34

Forward Earnings = 12 months consensus forward earnings for the S&P 500 = 90.38

Forward Earnings Yield = Forward Earnings / S&P = 90.38/1418.34 = 6.37 percent

10 Year Yield = The Current Yield on 10-Year government note is 4.6 percent

The Differential (Earnings Yield - 10 Year) = 1.77 percent

Substituting these numbers into the regression formula :

0.0834 + 4.8839 * (0.0637 – 0.046) = 0.170

Therefore, the Fed Model yields a forecast of about 17 percent for the next 12 months. See full details.
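As a check, here is a minimal Python version of that arithmetic (mine, using the regression coefficients quoted above):

```python
# Inputs quoted in the post (as of 01/04/07)
sp500 = 1418.34            # S&P 500 level
forward_earnings = 90.38   # consensus 12-month forward earnings
ten_year_yield = 0.046     # 10-year T-note yield

# Regression coefficients estimated in the post (yearly data since 1980)
a, b = 0.0834, 4.8839

earnings_yield = forward_earnings / sp500        # ~6.37%
differential = earnings_yield - ten_year_yield   # ~1.77%
forecast = a + b * differential
print(round(forecast, 3))                        # ~0.17, a ~17% 12-month forecast
```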


# Growth vs. Value, from Victor Niederhoffer and Tom Downing

December 12, 2006

I have written extensively about my belief that growth beats value, because my experience with many hundreds of companies shows that you get paid for finding areas where capital has a high rate of return, and for doing innovative things. You don't get paid for using capital with low rates of return and imitative enterprises. I like to give the anecdotal example of Joe McNay, who took part of Yale's endowment from a few million to over $125 million in 20 years, as unobtrusive evidence supporting my view. Also, since 1960, Value Line has tried to find groups of low P/E and low P/B stocks that would beat their composite, but found that a dollar invested in their composite or their Group 1, rebalanced each month, grew at least 10 times faster than a dollar invested in value over the period.

There are some problems with this, in that the Value Line composite is an equally weighted geometric average (the portfolio return at time t is the nth root of the product of the relative returns P[t]/P[t-1] of the n stocks), whereas a real portfolio would be arithmetically averaged. For the same data, the arithmetic average is always at least as large as the geometric average.

Anyhow, I based my empirical conclusions on the Value Line findings, and pay no attention to the results of Fama-French and their followers, which are fatally flawed by data problems, retrospection, non-operational results, and data ending in 1999. Almost seven years have passed since 1999 and it is now time to update the prospective Value Line results.

% Returns

| Year | Value Line Comp | Low Market Cap | Low Price/Earnings | Low Price/Book | Low Price/Sales |
|---|---|---|---|---|---|
| 2000 | -9 | -24 | -47 | -33 | -26 |
| 2001 | -5 | 32 | -19 | 10 | 22 |
| 2002 | -29 | -32 | -50 | -39 | -29 |
| 2003 | 37 | 62 | 54 | 71 | 59 |
| 2004 | 12 | 6 | 4 | 14 | 16 |
| 2005 | 2 | -9 | -7 | -7 | -2 |
| 2006/Sep | 3 | 30 | 14 | 26 | 3 |
| Total | -1 | 40 | -63 | 4 | 38 |

We have in this data the unfortunate feature of a theory meeting a fact. The results show clearly that during this period low P/S was best and low P/E was worst. Low Market Cap was best of all by a thin margin, but because these stocks would have suffered from transaction and liquidity costs, they are only slightly more meaningful than the seriously flawed studies of my former colleagues alluded to above.

Professor Pennington offers:

This is correct about the method used by Value Line to calculate the daily returns of their composite index. However, that's a very bizarre quantity to calculate, and it has no relation to a real portfolio that anyone could hold. Here are the problems:

- If any one stock goes to zero on day t, the entire portfolio would show a 100% decline! Surely that must have happened at some point over the last ~40 years of Value Line's existence, covering more than 1000 stocks. What did they do?
- They write that "This market benchmark assumes equally weighted positions in every stock covered in The Value Line Investment Survey. That is, it is presupposed that an equal dollar amount is invested in each and every stock," but that's plainly not the case. Let's do a simple example involving just two stocks. One goes up 1% and the other goes down 90%. Value Line's formula would tell me to take the square root of 1.01*0.1, giving 0.32, corresponding to a 68% decline. A portfolio with an equal dollar amount invested in each stock would have declined about 44%.
- Not only are the weightings not equal, but they also cannot be known in advance. You don't know the weightings that will be used until you know what the returns are.
- As Tom pointed out, and as illustrated in example 2 above, the calculated daily percentage changes using this method will always be less than the return of an equal weighted portfolio.
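A minimal Python sketch of Professor Pennington's two-stock example:

```python
# Two stocks over one day: one returns +1%, the other -90%
gross_returns = [1.01, 0.10]

# Value Line-style equally weighted geometric average
geometric = (gross_returns[0] * gross_returns[1]) ** (1 / len(gross_returns))

# Equal-dollar (arithmetic) portfolio return
arithmetic = sum(gross_returns) / len(gross_returns)

print(round(1 - geometric, 3))   # 0.682 -> a ~68% "decline"
print(round(1 - arithmetic, 3))  # 0.445 -> a ~44.5% decline
```

By the AM-GM inequality the geometric figure can never exceed the arithmetic one, which is the systematic downward bias Tom and Professor Pennington describe.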

Here is text from the Value Line site.

Larry Williams replies:

I was always perplexed by Vic and Laurel's comments on growth vs. value, as my studies suggest that in the large blue chip stocks, DJIA, value outperforms growth hands down. The studies are backed up with actual performance from 1999 forward that beat the S&P and Dow.

Finally I reconciled it, in that Vic and Laurel are not looking at blue chippies, which have been my focus. There, growth is more difficult to come by, I suspect, so value leads the way. And I'm still learning about all of this.

Dr. Kim Zussman adds:

Out-of-sample testing should be powerful. However there are still fog issues with:

- Growth and value may go in and out of favor, and 2000-2006 (a period following huge growth outperformance) may partly reflect this. Perhaps G > V over long periods, but sub-periods swing both ways.
- V.L.'s particular take on growth may be getting arbed out, with their publication of stock lists, F.V.L. closed end fund, and countless traders/funds using their data. i.e., the anomaly died.

Russell Sears mentions:

If the companies are reacting to incentive, it would make sense that they buy market share at the expense of profits, when growth is being rewarded in the markets, and do the opposite when value is being rewarded. Which comes first, and hence is predictive?

Steve Ellison adds:

Most companies are growth companies first and value companies later, after their industries mature and products become commoditized, or as a result of company-specific difficulties. Buying market share at the expense of profit actually heralds the end of growth, as it indicates the company is having difficulty differentiating its products from competitors' products.

From management's perspective, simply being in the value stock category is a slap that conveys an urgent need to improve profitability. One incentive is the possibility that management might be ousted in a takeover if the share price is low enough to attract a buyer. The generally high profit margins of growth companies provide incentives for competitors to enter the market.

Russ replies:

While this explains it on an individual company basis, I don't think it explains it on a total basis, as the graph Gordon sent suggests. What are the signs that this is happening at a macro level?


# Fed Model: The Last Four Months of the Year, by Tom Downing

September 7, 2006

In the table below, I have classified August to December returns for the past 27 years into 3 groups. A positive differential (Forward Earnings yield - 10 yr yield) has boded well for stocks. The current differential is about 2 percent, so the expected return is greater than 5 percent. Note that the unconditional mean is 3.89 percent for the last 4 months of the year, so the results are not as statistically impressive on that basis.

Also note that since 1979, when the differential has been greater than 0, the S&P has never dropped more than 4 percent (ignoring draw-downs) from August 31st to December 31st.

| GROUP | AVG | STD | N | T | %POS | MAX | MIN |
|---|---|---|---|---|---|---|---|
| DIFF < 0 | 0.07% | 12.26% | 9 | 0.02 | 67% | 14% | -25% |
| 0 < DIFF < .01 | 5.38% | 9.96% | 9 | 1.62 | 78% | 28% | -4% |
| DIFF > .01 | 6.22% | 7.08% | 9 | 2.64 | 78% | 18% | -4% |
| ALL | 3.89% | 10.00% | 27 | 2.02 | 74% | 28% | -25% |

