Multilevel models estimated using lme/nlme or Bugs

Neal writes,

One can clearly do HLM either via Bates's lme in R or via MCMC (WinBugs or perhaps dedicated C code). Now in some sense there is no competition: if the models are the same, and (in the context of previous discussion) one simply wants to explore the likelihood, then in principle both could be used.

So I am curious if you have one or more stats students who would be interested in the following:

a. On some reference set of data, do lme and MCMC give the same answer?

b. Given the appropriate perturbations of the error process, do both give similar (and correct) standard errors?

c. Presumably lme gives the same answers regardless of analyst. MCMC has a bit more art (though perhaps not in this well-known case). How great is the interanalyst “reliability” for MCMC here?

Based on all this, can one make any recommendations for MCMC vs. lme? (Of course it may well be easier to do a better job of modeling specific features with MCMC, which would be a win for it, and we won't even discuss what one would do in lme if one thought that priors should be made as irrelevant as possible.)

Any interest in finding some stats student to look at this issue?

My own guess is that many poli-sci folks did not use Bates's software because not that many folks use R (and perhaps many who do would happily use WinBugs; presumably the conditional probability of liking WinBugs given Stata is lower).

Anyway, this might well increase the market for HLM. And again, I have no idea if there are interesting issues that caused the ML mixed-model stuff to be inferior to doing Bayes/WinBugs.

My short answer is that Bates and Pinheiro's lme and nlme programs are useful as fast approximations, especially for large datasets where Bugs is slow. The lme and nlme programs have three drawbacks compared to Bugs:

(1) As far as I can tell, lme/nlme only work for nested models (e.g., students within schools), not for non-nested models (e.g., time-series cross-sectional data characterized by states and years); see the sketch after this list.

(2) When variance components are near zero, the point-estimation approach of lme/nlme can’t really work well (for example, see the 8-schools problem in Chapter 5 of Bayesian Data Analysis).

(3) Bugs allows endless generalizations, for example, if there are many variance parameters, they can themselves be modeled hierarchically (as in Section 6 of this paper).
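
To make drawback (1) concrete, here is a minimal sketch in R; the data frame and variable names (students, score, school) are hypothetical:

    library(nlme)
    # Nested grouping (students within schools): lme's grouping formula
    # handles this directly.
    fit.nested <- lme(score ~ 1, random = ~ 1 | school, data = students)
    # Deeper nesting is also one line, e.g. random = ~ 1 | school/classroom,
    # but crossed (non-nested) factors such as state and year in TSCS data
    # have no comparable grouping-formula specification in lme.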

3 thoughts on “Multilevel models estimated using lme/nlme or Bugs”

  1. Just to leap to Bates' defence, lme can handle non-nested models, although it's a bit tricky (you have to create the covariance matrix using some tools in nlme). But in the lme4 package there is the lmer function, which fits GLMMs, and has a more flexible random effects specification.
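
    A minimal sketch of the crossed-effects specification Bob describes, assuming hypothetical data with state and year factors:

        library(lme4)
        # lmer accepts several (1 | factor) terms, so crossed random effects
        # for state and year need no hand-built covariance matrix.
        fit <- lmer(y ~ x + (1 | state) + (1 | year), data = tscs)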

    Bob

  2. Bob,

    Wow–that's cool. Lme is fast with large datasets. Maybe it's worth writing a hack for lmer so that it does Gibbs sampling? Even just 20 iterations or so would be reasonable, I think, in allowing it to average over uncertainty in the variance parameters. Although then I'd like to allow the variance parameters themselves to be modeled, and then there's the t distribution . . . that way lies WinBugs (OpenBugs?).

But I recently did have a fairly big problem (16000 units in 9000 groups) where Bugs was too slow and I just used lme. So maybe there's room for a simple Bayesian version of it. I'll have to see if I understand how lmer works. It took me a long time to understand the lme specification; it's conceptually quite different from how I think about these models.
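
    To fix ideas, here is a toy version of that Gibbs idea for a one-way model y = mu + alpha[group] + error, with noninformative priors. It's a standalone sketch, not a patch to lmer:

        gibbs.oneway <- function(y, g, n.iter = 20) {
          # g indexes groups 1..J; flat prior on mu, noninformative
          # scaled-inverse-chi^2 draws for the two variances
          J <- max(g); N <- length(y)
          mu <- mean(y); alpha <- rep(0, J)
          sigma.y <- sd(y); sigma.a <- sd(y)
          sims <- matrix(NA, n.iter, 2,
                         dimnames = list(NULL, c("sigma.y", "sigma.a")))
          for (t in 1:n.iter) {
            for (j in 1:J) {                # group effects given variances
              yj <- y[g == j] - mu
              prec <- length(yj)/sigma.y^2 + 1/sigma.a^2
              alpha[j] <- rnorm(1, sum(yj)/sigma.y^2/prec, sqrt(1/prec))
            }
            mu <- rnorm(1, mean(y - alpha[g]), sigma.y/sqrt(N))
            sigma.y <- sqrt(sum((y - mu - alpha[g])^2)/rchisq(1, N))
            sigma.a <- sqrt(sum(alpha^2)/rchisq(1, J))
            sims[t, ] <- c(sigma.y, sigma.a)
          }
          sims   # even ~20 draws give a rough average over variance uncertainty
        }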

  3. Hi Andrew,

FYI, if you want to look at the guts of the lmer() function, look in the Matrix package, not lme4. It allows the user to specify a variety of maximization and/or numerical integration algorithms. So, for a given model, it might not be too hard to write a small Gibbs sampling plugin.

So far the best how-to on lmer() besides the help documents is in the latest R News: http://cran.us.r-project.org/doc/Rnews/Rnews_2005… .

Also, as Doug Bates just posted to the R list in response to Eduardo Leoni (who, perhaps, is the student doing the comparisons mentioned above):

    " I'm not sure exactly what Bayesian analysis you are doing but you may want to look at the function mcmcsamp in versions 0.98-1and later of the Matrix package. It can take a fitted lmer object and create an MCMC sample from the posterior distribution of the parameters assuming a locally uniform prior on the fixed-effects parameters and the non-informative prior described by Box and Tiao for the variance-covariance matrices."

    Best,

    Jake
