Pablo Verde sends in this letter that he and Daniel Curcio just published in the Journal of Antimicrobial Chemotherapy. The authors they criticize had published a meta-analysis with a boundary estimate which, Verde said, gave nonsense results. Here’s Curcio and Verde’s key paragraph:

The authors [of the study they are criticizing] performed a test of heterogeneity between studies. Given that the test result was not significant at 5%, they decided to pool all the RRs by using a fixed-effect meta-analysis model. Unfortunately, this is a common practice in meta-analysis, which usually leads to very misleading results. First of all, the pooled RR as well as its standard error are sensitive to the estimation of the between-studies standard deviation (SD). SD is difficult to estimate with a small number of studies. On the other hand, it is very well known that the significance test of heterogeneity lacks statistical power to detect values of SD greater than zero. In addition, the statistically non-significant results of this test cannot be interpreted as evidence of the homogeneity of the results among all RCTs included.
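The pitfall Curcio and Verde describe is easy to see in a small numerical sketch (pure Python; the log-relative-risks below are made up for illustration, not taken from the paper in question). With only a few studies, the standard DerSimonian–Laird moment estimate of the between-studies variance hits the zero boundary, the heterogeneity test is nowhere near significant, and the "random-effects" analysis silently collapses into the fixed-effect one:

```python
# Hypothetical log-relative-risks and within-study variances for 4 RCTs
# (made-up numbers, chosen only to illustrate the boundary problem).
y = [0.10, 0.12, 0.08, 0.11]
v = [0.05, 0.05, 0.05, 0.05]

def pool(y, v):
    k = len(y)
    w = [1.0 / vi for vi in v]                        # fixed-effect weights
    mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))
    # DerSimonian-Laird moment estimate of the between-studies variance,
    # truncated at the zero boundary
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    mu_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return mu_fe, mu_re, q, tau2

mu_fe, mu_re, q, tau2 = pool(y, v)
print(f"Q = {q:.4f} on {len(y) - 1} df, tau^2 = {tau2}")
print(f"fixed-effect pooled log-RR:   {mu_fe:.4f}")
print(f"random-effects pooled log-RR: {mu_re:.4f}")  # identical when tau^2 = 0
```

Here Q is well below its degrees of freedom, so the moment estimate of tau² is truncated to exactly zero and the two pooled estimates coincide: a non-significant test has quietly turned into an assumption of perfect homogeneity.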

How can you generally avoid boundary estimates of multilevel variance parameters? Using our cute little trick, implemented in blmer/bglmer in the blme package in R.
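To show what a weak prior buys you, here is a stdlib-Python analogue of the idea, not the blme code itself (blme's actual default prior on the covariance scale differs in detail). We maximize a profile likelihood for the between-studies SD tau by grid search, once without a prior (for these made-up data the MLE sits at the tau = 0 boundary) and once with a gamma prior with shape 2 on tau, whose log density goes to minus infinity at zero and so keeps the estimate off the boundary:

```python
import math

# Made-up log-relative-risks and within-study variances (illustrative only)
y = [0.10, 0.12, 0.08, 0.11]
v = [0.05, 0.05, 0.05, 0.05]

def profile_loglik(tau):
    """Normal-normal meta-analysis log-likelihood at tau,
    with the overall mean profiled out."""
    var = [vi + tau ** 2 for vi in v]
    w = [1.0 / s for s in var]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(-0.5 * math.log(2 * math.pi * s) - 0.5 * (yi - mu) ** 2 / s
               for yi, s in zip(y, var))

def log_gamma_prior(tau, shape=2.0, rate=1.0):
    """Log density of a gamma(shape, rate) prior on tau, up to a constant.
    With shape > 1 it vanishes at tau = 0, repelling the boundary."""
    return -math.inf if tau <= 0 else (shape - 1) * math.log(tau) - rate * tau

grid = [i / 1000 for i in range(1001)]               # tau in [0, 1]
tau_ml  = max(grid, key=profile_loglik)
tau_map = max(grid, key=lambda t: profile_loglik(t) + log_gamma_prior(t))

print(f"ML estimate of tau:  {tau_ml}")   # stuck at the boundary: 0.0
print(f"MAP estimate of tau: {tau_map}")  # pulled off the boundary
```

The posterior-mode estimate is strictly positive, so downstream standard errors no longer pretend the between-studies variation is exactly zero; that is the stabilizing effect the priors in blmer/bglmer are meant to provide.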

Some more information on the blme package would be very useful, such as the conditions under which it is preferable to lme4, a speed comparison, etc. It would also be good to know its relation to the current released version of lme4 and the development version lme4a…. I am sure more people are interested in that…

Greg:

I’ll post something on this. But the short answer is that blmer/bglmer is just lmer/glmer with priors. Our functions are a modification of lmer/glmer. The Bayesian version should run a little bit faster in most cases because the priors help to stabilize the estimates.