More on fitting multilevel models

Eric Schwartz writes:

Thanks for the blog post. I have three follow up questions:

Q1. I would also prefer to look at confidence or posterior intervals, so in the multilevel model how would you rank-order the best ways to construct those intervals?
– run fully Bayes posterior inference in WinBUGS or R directly (but if the data set is too big that is too slow)
– run lmer() and use mcmcsamp() (but it is not working, though I appreciate Bates's free work!)
– run lmer() and form an interval of +/- 2 standard errors around the estimate (just as you and Jennifer Hill show on page 261 of your ARM book; see the sketch after this list)
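
A minimal sketch of that third option, assuming the arm package's se.fixef() and se.ranef() helpers; the data and model here (y, x, group, mydata) are hypothetical:

    # hypothetical varying-intercept model; y, x, group, and mydata are made up
    library(lme4)
    library(arm)   # provides se.fixef() and se.ranef()
    fit <- lmer(y ~ x + (1 | group), data = mydata)

    # approximate 95% intervals: estimate +/- 2 standard errors
    beta <- fixef(fit)
    beta.se <- se.fixef(fit)
    cbind(lower = beta - 2 * beta.se, upper = beta + 2 * beta.se)

    # same idea for the group-level intercepts
    a <- ranef(fit)$group[, 1]
    a.se <- se.ranef(fit)$group[, 1]
    cbind(lower = a - 2 * a.se, upper = a + 2 * a.se)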

Q2. Under what conditions is using +/- 2 standard errors of estimates a reasonable approximation of what the posterior interval would be?

Q3. Can you clarify the “almost always” in your blog post? Is it reasonable to choose an lmer() model with “family=poisson” over one with “family=quasipoisson” when there are already, say, two non-nested batches of random effects taking care of the overdispersion in the observed counts? I interpret those batches as capturing unobserved heterogeneity of count propensities across the individual groups in those batches, so they sufficiently reflect overdispersion; beyond that, an additional idiosyncratic error term for each observation seems unnecessary to the individual-level story.
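
For concreteness, a hedged sketch of the kind of model Q3 describes; the variable names (count, person, item, mydata) are invented, and the commented-out lines show the extra observation-level intercept that the question argues is unnecessary:

    library(lme4)
    # two non-nested batches of varying intercepts absorbing overdispersion
    fit <- glmer(count ~ 1 + (1 | person) + (1 | item),
                 family = poisson, data = mydata)

    # the alternative: also give each observation its own intercept
    # mydata$obs <- seq_len(nrow(mydata))
    # fit2 <- glmer(count ~ 1 + (1 | person) + (1 | item) + (1 | obs),
    #               family = poisson, data = mydata)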

My reply:

1. Full Bayes is best. Once it’s working again, mcsamp() is full Bayes (I think).

2. Easiest way to evaluate this is via a fake-data simulation. See chapter 8 of ARM for some simple examples of this sort of thing.
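
A minimal fake-data check in the spirit of ARM chapter 8 (all the "true" parameter values below are arbitrary choices for illustration): simulate from known parameters, refit, and see how often the +/- 2 standard-error intervals cover the truth.

    library(lme4)
    library(arm)
    set.seed(123)
    J <- 50; n <- 10                     # 50 groups, 10 observations each
    b <- 2; sigma.a <- 1; sigma.y <- 1   # assumed "true" values
    covered <- replicate(100, {
      group <- rep(1:J, each = n)
      a <- rnorm(J, 0, sigma.a)
      x <- rnorm(J * n)
      y <- a[group] + b * x + rnorm(J * n, 0, sigma.y)
      fit <- lmer(y ~ x + (1 | group))
      est <- fixef(fit)["x"]
      se <- se.fixef(fit)["x"]
      b > est - 2 * se && b < est + 2 * se
    })
    mean(covered)   # should be near 0.95 if the approximation is reasonable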

3. I’d say to always do quasipoisson etc. But I don’t actually know what glmer does here. Just today I fit a model using binomial and then quasibinomial and I was stunned to find the standard errors were smaller when I used quasibinomial. I still don’t know what was going on here. Again, fake-data simulation would be the way to check this and see what to trust.

4 Comments

  1. Diego G. says:

    What can I use instead of mcsamp() to fit and then plot a varying-intercept logit model? What is the equivalent of mcsamp()?
    Thank you.

  2. K? O'Rourke says:

    Quasi _usually_ means just making a simple (scalar) adjustment of the log-likelihood based on local features at the maximum; best avoided unless you are sure the log-likelihood is not multi-modal (which happens with exponential models with unknown scale parameters) AND the adjustment does not make things worse.

    Smaller standard errors are _usually_ due to observed under-dispersion, which is a strong signal of model failure (overfitting, non-independent patient responses, …), so the adjustment is _usually_ making a bad situation much worse.
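
    To make that concrete (a sketch, continuing the hypothetical glm() fits above): the quasi adjustment multiplies the model-based standard errors by sqrt(phi), where phi is the Pearson dispersion estimate, so phi < 1 shrinks them.

        # Pearson dispersion estimate from the binomial fit above
        phi <- sum(residuals(fit1, type = "pearson")^2) / df.residual(fit1)
        phi          # < 1 indicates under-dispersion
        sqrt(phi)    # the factor by which the standard errors get scaled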

    Related/equivalent to robust standard errors, sandwich variance estimates, etc.: desperate attempts to avoid doing full modeling, which I was warned as a graduate student was always very dangerous.

    I wonder if the multi-modality would make the fake-data checking very challenging?

    K?

  3. freddy says:

    desperate attempts to avoid doing full modeling… always very dangerous

    C'mon, these attitudes are just silly. Use of e.g. robust standard errors is just a method, not a mentality. If the user has checked – perhaps using the fake data simulation suggested above – that the method should work well for the problem at hand, it's wrong (and unhelpful) to call them 'desperate' or 'dangerous'.

  4. K? O'Rourke says:

    Freddy: maybe a bit strong, but it's a blog post, and it's not much different from the warning David Firth gave me when I was a graduate student, which I did not take seriously enough, at my own peril.

    For an example of what can go wrong, search for "batman" in this pdf:
    http://www.samsi.info/200809/meta/tutorials/thesi