Phil: What's a good working definition of standard error?

Suppose you had a regression model:

SPY = .30 * FXY - .0015

where SPY is the percent change in SPY and FXY is the percent change yesterday in FXY (the yen ETF). Then you backtest your model on the last 100 days of data. Each day you get a prediction, and you already know the actual. The difference between the actual and the predicted is called the residual, or error. Take the standard deviation of all those errors — that is your standard error.
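The backtest procedure above can be sketched in a few lines of Python. The data here is simulated (hypothetical, not from the article), and note the model works in decimal returns: 0.30 × 0.012 − 0.0015 = 0.0021, i.e. +0.21%, matching the example below.

```python
import random
import statistics

# Hypothetical backtest data: 100 days of decimal returns (simulated, not real market data)
random.seed(0)
fxy = [random.gauss(0, 0.008) for _ in range(100)]                # FXY % change yesterday
spy = [0.30 * x - 0.0015 + random.gauss(0, 0.014) for x in fxy]   # SPY actuals, with noise

predicted = [0.30 * x - 0.0015 for x in fxy]            # model prediction each day
residuals = [a - p for a, p in zip(spy, predicted)]     # actual minus predicted
standard_error = statistics.stdev(residuals)            # std dev of the errors

print(f"standard error = {standard_error:.4f}")         # should land near 0.014
```

Since the simulated noise has a standard deviation of 0.014, the computed standard error comes out near 1.4%, the figure quoted for the fitted model below.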

The interpretation of standard error is straightforward as well. If we make all the usual assumptions (normally distributed errors and so on), then the standard error is a plus-or-minus confidence band around our prediction. Today FXY is up 1.20%, so our prediction from the model would be for SPY to be up tomorrow by .21% (.30 × 1.20% − .15%). This is an actual fitted model, with a standard error of 1.4%.

So we can think of the prediction as the center, or mean, of our distribution of possible returns tomorrow. The standard error is the variability around that mean. Assuming normally distributed errors, we would expect about two thirds of our observations to fall within one standard error of the prediction. One sixth would be greater than the prediction plus one standard error, and one sixth would be less than the prediction minus one standard error. In the same manner, about 95% should lie within two standard errors of the prediction.






3 Comments so far

  1. Dan Costin on March 26, 2008 12:27 pm

    I always have trouble with this part: “Assuming normally distributed errors.” Anyone in favor of banning that phrase, and all trading and risk management systems that are based on it?

  2. George Parkanyi on March 26, 2008 4:10 pm

    Standard errors describe how I normally trade. Standard deviation is when I make whole new errors I'd never thought of before …

  3. D'Urville Martin on March 26, 2008 7:41 pm

    There is also a good explanation of this topic at Wikipedia:

