Brandon Sherman writes:

I was just having a discussion with someone about multilevel models, and the following topic came up. Imagine we’re building a multilevel model to predict SAT scores using many students. First we fit a model on students only, then students in classrooms, then students in classrooms within districts, then all of that within cities, then counties, countries, etc. Maybe we even add in census tract info. The idea here is that we keep arbitrarily adding levels to the hierarchy.

In standard linear regression, adding variables, regardless of their informativeness, never decreases R^2 (and in practice almost always increases it). In the case of multilevel modeling, does adding levels to the hierarchy always lead to a stronger relationship with the response, even if it’s a tiny one that’s only applicable to the data the model is built on?
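To make the classical fact concrete, here is a minimal sketch (not from the original post) showing that in-sample R^2 cannot go down when you add a predictor to an OLS fit, even a pure-noise one. The data and the `r_squared` helper are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)  # true model depends only on x

def r_squared(X, y):
    # OLS with intercept; R^2 = 1 - SS_res / SS_tot
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - resid @ resid / ss_tot

r2_base = r_squared(x[:, None], y)
noise = rng.normal(size=n)  # uninformative predictor
r2_plus = r_squared(np.column_stack([x, noise]), y)
print(r2_base, r2_plus)  # r2_plus can never be smaller than r2_base
```

The fit with the extra column can always reproduce the smaller fit (by setting its coefficient to zero), so the residual sum of squares cannot increase — which is exactly why in-sample R^2 is a poor guide to whether a new variable is informative.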

My reply: Not always. See here.

**P.S.** Since we’re on the topic, I should point you to this recent paper with Ben, Jonah, and Aki on Bayesian R-squared.
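For readers who want the gist of that paper: the Bayesian R^2 proposed there is computed per posterior draw as the variance of the modeled predictive means divided by that variance plus the modeled residual variance. A minimal sketch for a Gaussian regression, with a hypothetical helper name `bayes_r2` and fabricated draws for illustration:

```python
import numpy as np

def bayes_r2(mu_draws, sigma_draws):
    """Bayesian R^2, one value per posterior draw.

    mu_draws:    (S, N) linear predictor, S draws x N data points
    sigma_draws: (S,)   residual sd per draw
    """
    var_fit = mu_draws.var(axis=1)       # variance of predicted means across data
    var_res = sigma_draws ** 2           # modeled residual variance
    return var_fit / (var_fit + var_res)

# toy posterior draws, purely illustrative
rng = np.random.default_rng(1)
S, N = 200, 50
mu = rng.normal(size=(S, N))
sigma = np.abs(rng.normal(1.0, 0.1, size=S))
r2_draws = bayes_r2(mu, sigma)
print(r2_draws.mean())
```

Because both terms in the denominator are nonnegative, each draw's R^2 lies in [0, 1] by construction, and you get a full posterior distribution of R^2 rather than a single point estimate.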

To what extent does the Bayesian approach linked above work for GLMMs? I don’t see any reference to Schielzeth and Nakagawa’s work. Also, I’d imagine this would be straightforward to partition into marginal versus conditional R^2 values.
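For context on that comment: the Nakagawa–Schielzeth decomposition splits total variance into fixed-effect, random-effect, and residual components, with marginal R^2 crediting only the fixed effects and conditional R^2 crediting both. A minimal arithmetic sketch (the function name and numbers are invented for illustration):

```python
def nakagawa_r2(var_fixed, var_random, var_resid):
    """Marginal and conditional R^2 from variance components.

    marginal:    fixed effects only
    conditional: fixed + random effects
    """
    total = var_fixed + var_random + var_resid
    marginal = var_fixed / total
    conditional = (var_fixed + var_random) / total
    return marginal, conditional

# illustrative variance components
marg, cond = nakagawa_r2(var_fixed=2.0, var_random=1.0, var_resid=1.0)
print(marg, cond)  # conditional is always >= marginal
```

Since the conditional numerator adds a nonnegative random-effect variance to the marginal one, conditional R^2 can never be smaller than marginal R^2, which is what makes the partition interpretable.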