Standardizing regression inputs by dividing by two standard deviations

Interpretation of regression coefficients is sensitive to the scale of the inputs. One method often used to place input variables on a common scale is to divide each variable by its standard deviation. Here we propose dividing each variable by two times its standard deviation, so that the generic comparison is with inputs equal to the mean +/- 1 standard deviation. The resulting coefficients are then directly comparable for untransformed binary predictors. We have implemented the procedure as a function in R. We illustrate the method with a simple public-opinion analysis that is typical of regressions in social science.
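The transformation itself is simple enough to sketch in a few lines. The paper's implementation is an R function; here is a minimal Python equivalent of the core idea (the function name `standardize_2sd` is mine, not from the paper, and `np.std` defaults to the population SD where the R convention is the sample SD, a minor difference for moderate n):

```python
import numpy as np

def standardize_2sd(x):
    """Center a numeric predictor and divide by twice its standard deviation.

    After rescaling, a one-unit change corresponds to moving from one
    standard deviation below the mean to one above, so the coefficient is
    roughly comparable to that of an untransformed 0/1 binary predictor.
    """
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (2 * x.std())
```

The reason binary inputs can be left alone: a roughly balanced 0/1 variable has a standard deviation near 0.5, so dividing it by two standard deviations would map it back to about a unit range anyway.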

Here’s the paper, and here’s the R function.

Standardizing is often thought of as a stupid sort of low-rent statistical technique, beneath the attention of “real” statisticians and econometricians, but I actually like it, and I think this 2 sd thing is pretty cool.

2 thoughts on “Standardizing regression inputs by dividing by two standard deviations”

  1. Marc,

    I think the signs are right. What happens is, when you have an interaction with predictors that are not centered (as in the first regression), the coefficients for the main effects are hard to interpret directly.
