In this very nice piece, Rob drops this bomb of mathematical knowledge:
It is not necessary to actually fit $n$ separate models when computing the CV statistic for linear models.
Here is a broader excerpt and the method itself (after the jump).
While cross-validation can be computationally expensive in general, it is very easy and fast to compute LOOCV for linear models. A linear model can be written as

$$\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{e},$$

and the fitted values can be calculated using

$$\hat{\mathbf{Y}} = \mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y} = \mathbf{H}\mathbf{Y},$$

where $\mathbf{H} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$ is known as the “hat-matrix” because it is used to compute $\hat{\mathbf{Y}}$ (“Y-hat”).
If the diagonal values of $\mathbf{H}$ are denoted by $h_1, \dots, h_n$, then the cross-validation statistic can be computed using

$$\text{CV} = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{e_i}{1-h_i}\right]^2,$$

where $e_i$ is the residual obtained from fitting the model to all $n$ observations. See Christensen’s book Plane Answers to Complex Questions for a proof. Thus, it is not necessary to actually fit $n$ separate models when computing the CV statistic for linear models. This remarkable result allows cross-validation to be used while only fitting the model once to all $n$ observations.
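The shortcut is easy to check numerically. Here is a minimal NumPy sketch (the data and variable names are made up for illustration, not from Rob's post) that computes the CV statistic once from the hat-matrix diagonals and compares it against the brute-force leave-one-out loop:

```python
import numpy as np

# Illustrative data: a simple linear model with an intercept and one predictor.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # design matrix
y = X @ np.array([2.0, 3.0]) + rng.normal(size=n)      # response with noise

# Shortcut: fit the model once to all n observations.
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta                                       # residuals from the full fit
# Diagonal of the hat matrix H = X (X'X)^{-1} X', without forming H in full.
h = np.einsum("ij,ji->i", X, np.linalg.solve(X.T @ X, X.T))
cv_shortcut = np.mean((e / (1 - h)) ** 2)

# Brute force: refit n times, each time leaving one observation out.
errs = []
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    errs.append(y[i] - X[i] @ b)
cv_brute = np.mean(np.square(errs))

print(np.isclose(cv_shortcut, cv_brute))
```

The `einsum` line is just a memory-friendly way of taking the diagonal of $\mathbf{H}$; with only one fit of the model, the shortcut matches the $n$-refit loop to numerical precision.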