Interpreting the estimated varying coefficients in a multilevel model

Shravan writes,

In lmer, if we have a varying intercept specified, e.g. (1|item), sigma^2 is what is estimated, not the intercepts themselves (at least that’s what the Pinheiro and Bates book says, Bates himself calls the values of the varying intercepts conditional modes).

In Gelman and Hill, p. 253 and elsewhere you refer to the varying intercepts as estimates:

“… the multilevel estimate for a given [item] j can be approximated as a weighted average of the mean of the observations in the [item] … and the mean over all [items].”
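That weighted average is the standard partial-pooling formula: each item's intercept estimate is a precision-weighted compromise between that item's own mean and the grand mean. A minimal numerical sketch (my own illustrative numbers and function name, not code from Gelman and Hill):

```python
import numpy as np

def multilevel_estimate(y_j, ybar_all, sigma_y, sigma_alpha):
    """Approximate multilevel estimate of one item's intercept:
    a precision-weighted average of the item mean and the grand mean.

    y_j         : observations for item j
    ybar_all    : mean over all items
    sigma_y     : within-item (residual) sd
    sigma_alpha : between-item sd of the varying intercepts
    """
    n_j = len(y_j)
    w_item = n_j / sigma_y**2    # precision of the item's own mean
    w_pool = 1 / sigma_alpha**2  # precision contributed by the pooled mean
    return (w_item * np.mean(y_j) + w_pool * ybar_all) / (w_item + w_pool)

# An item with few observations is pulled strongly toward the grand mean;
# an item with many observations mostly keeps its own mean.
rng = np.random.default_rng(0)
small = rng.normal(10, 1, size=2)   # sparse item
large = rng.normal(10, 1, size=50)  # well-observed item
print(multilevel_estimate(small, 0.0, sigma_y=1.0, sigma_alpha=1.0))
print(multilevel_estimate(large, 0.0, sigma_y=1.0, sigma_alpha=1.0))
```

The sparse item's estimate sits well below its raw mean (pulled toward the grand mean of 0), while the well-observed item's estimate barely moves.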

Is it correct to think of the varying intercepts as estimates? Irrespective of whether the answer is yes or no, is there any interpretation at all that can be assigned to these numbers (which lmer does output if you ask it to)? They are just nuisance terms, right?

My reply: Nuisance is just a state of mind. The goal is to get the posterior distribution of these coefficients; the estimates from lmer() are posterior modes conditional on the estimated variance parameters, which can be thought of as approximate posterior medians (but not exactly). See the 8-schools example in Chapter 5 of Bayesian Data Analysis for more discussion; there we show how the estimates and standard errors for the individual coefficients change as the group-level variance changes.
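To see how the conditional estimates depend on the group-level variance, here is a sketch using the same precision-weighted-average approximation (illustrative numbers, not the actual 8-schools data; the function is my own):

```python
def shrunk_estimate(ybar_j, n_j, ybar_all, sigma_y, sigma_alpha):
    """Approximate estimate for one group's coefficient, conditional on
    the variance parameters: a precision-weighted average of the group
    mean and the grand mean."""
    w_item = n_j / sigma_y**2    # precision of the group's own mean
    w_pool = 1 / sigma_alpha**2  # precision from the group-level distribution
    return (w_item * ybar_j + w_pool * ybar_all) / (w_item + w_pool)

# One group with raw mean 5, grand mean 0, n=10, residual sd 1.
# Sweep the group-level sd from near 0 (complete pooling) to large
# (essentially no pooling).
for sigma_alpha in [0.01, 0.1, 1.0, 10.0]:
    est = shrunk_estimate(5.0, 10, 0.0, 1.0, sigma_alpha)
    print(f"sigma_alpha={sigma_alpha:5.2f}  estimate={est:.3f}")
```

As sigma_alpha shrinks toward zero the estimate collapses to the grand mean; as it grows, the estimate approaches the group's raw mean. This is the sense in which the "estimates" for individual coefficients move with the estimated group-level variance.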