Mar 26
Phil: What's a good working definition of standard error?

Suppose you had a regression model:

SPY = .30 * FXY - .0015

where SPY is today's percent change in SPY and FXY is yesterday's percent change in FXY (the yen ETF). Then you backtest your model on the last 100 days of data. Each day you get a prediction, and you already know the actual. The difference between the predicted and the actual is called the residual, or error. Take the standard deviation of all those errors; that is your standard error.
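A minimal sketch of that calculation, assuming numpy, with random placeholders standing in for the real 100-day backtest data:

import numpy as np

# hypothetical inputs: yesterday's FXY % changes over the 100-day backtest,
# and the actual SPY % changes that followed (placeholders for the real series)
fxy = np.random.normal(0.0, 0.008, 100)
spy_actual = 0.30 * fxy - 0.0015 + np.random.normal(0.0, 0.014, 100)

predicted = 0.30 * fxy - 0.0015          # the model's prediction for each day
residuals = spy_actual - predicted       # actual minus predicted
standard_error = residuals.std(ddof=1)   # standard deviation of the errors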

The interpretation of standard error is straightforward as well. If we make the usual assumptions (normally distributed errors and so on), then the standard error gives a plus-or-minus confidence band around our prediction. Today FXY is up 1.20%, so the model's prediction would be for SPY to be up tomorrow by .21%. This is an actual fitted model, with a standard error of 1.4%.
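Spelled out, that prediction is just the fitted model applied to today's FXY move:

.30 * 1.20% - 0.15% = 0.36% - 0.15% = +0.21%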

So we can think of the prediction as the center, or mean, of our distribution of tomorrow's SPY returns. The standard error is the variability around that mean. Assuming normally distributed errors, we would expect about two thirds of our observations to fall within one standard error of the prediction. One sixth would be greater than the prediction plus one standard error, and one sixth would be less than the prediction minus one standard error. In the same manner, about 95% should lie within two standard errors of the prediction.
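Putting numbers on that with the fitted figures above (+0.21% prediction, 1.4% standard error):

prediction plus or minus one standard error: 0.21% +/- 1.4%, or roughly -1.19% to +1.61% (about two thirds of outcomes)
prediction plus or minus two standard errors: 0.21% +/- 2.8%, or roughly -2.59% to +3.01% (about 95% of outcomes)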

