Using Bayesian meta-analysis to adjust for bias in experiments and observational studies

Commenter RogerH pointed me to this article by Welton, Ades, Carlin, Altman, and Sterne on models for potentially biased evidence in meta-analysis using empirically based priors. The “Carlin” in the author list is my longtime collaborator John, so I really shouldn’t have had to hear about this through a blog comment. Anyway, they write:

We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large their size, and that little is gained by incorporating such studies, unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions.

I really really like this idea. As Welton et al. discuss, their method represents two key conceptual advances:

1. In addition to downweighting questionable or possibly-biased studies, they also shift them to adjust in the direction of correcting for the bias.

2. Instead of merely deciding which studies to trust based on prior knowledge, literature review, and external considerations, they also use the data, through a meta-analysis, to estimate the amount of adjustment to do.
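To make these two advances concrete, here is a toy sketch of the downweight-and-shift idea in a simple normal fixed-effect setting. This is not the authors' model; all the numbers, the assumed mean bias, and its variance are made up for illustration:

```python
import numpy as np

# Hypothetical study estimates (say, log odds ratios) and standard errors.
y    = np.array([-0.5, -0.8, -1.2, -0.3])   # study estimates
se   = np.array([0.20, 0.25, 0.30, 0.15])   # standard errors
high = np.array([False, False, True, True]) # flagged high risk of bias?

# Assumed mean bias and its variance, as if estimated from a collection
# of previous meta-analyses (a meta-epidemiological prior).
beta_hat, beta_var = -0.3, 0.05

# Advance 1: shift -- subtract the estimated mean bias from the
# high-risk studies, correcting in the direction of the bias.
y_adj = y - high * beta_hat

# Downweight: inflate the variance of high-risk studies by the
# uncertainty in the bias, so they carry less weight in the pool.
var_adj = se**2 + high * beta_var

# Bias-adjusted pooled estimate via inverse-variance weighting.
w = 1.0 / var_adj
theta = np.sum(w * y_adj) / np.sum(w)
```

Advance 2 (estimating `beta_hat` and `beta_var` from the data rather than fixing them) would replace the hard-coded values above with a hierarchical model over many meta-analyses.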

And, as a bonus, the article has excellent graphs. (It also has three ugly tables, with gratuitous precision such as “-0.781 (-1.002, -0.562),” but the graph-to-table ratio is much better than usual in this sort of statistical research paper, so I can’t really complain.)

This work has some similarities to the corrections for nonsampling errors that we do in survey research. As such, I have one idea here. Would it be possible to take the partially-pooled estimates from any given analysis and re-express them as equivalent weights in a weighted average? (This is an idea I’ve discussed with John and is also featured in my “Survey weighting is a mess” paper.) I’m not saying there’s anything so wonderful about weighted estimates, but it could help in understanding these methods to have a bridge to the past, as it were, and see how they compare in this way to other approaches.
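As a sketch of what those equivalent weights might look like in the simplest case: under a normal-normal model with known between-study standard deviation, the partially pooled estimate of the overall effect is already a weighted average of the raw study estimates, with weights proportional to the precisions. The numbers and the assumed value of tau below are invented for illustration:

```python
import numpy as np

# Hypothetical study estimates and standard errors.
y   = np.array([-0.5, -0.8, -1.2, -0.3])
se  = np.array([0.20, 0.25, 0.30, 0.15])
tau = 0.2  # assumed between-study standard deviation

# Under the normal-normal model, the estimate of the overall effect mu
# is a precision-weighted average with weights 1 / (se_i^2 + tau^2).
prec = 1.0 / (se**2 + tau**2)
w = prec / prec.sum()   # the "equivalent weights": nonnegative, sum to 1
mu_hat = np.sum(w * y)
```

The interesting (and harder) version of the question is running this backwards for a richer model, such as the bias-adjusted one: given the posterior mean, solve for the implied weight each study received, including the reduced weight on high-risk studies.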

Payment demanded for the meal

There’s no free lunch, of course. What assumptions did Welton et al. put in to make this work? They write:

We base the parameters of our bias model on empirical evidence from collections of previously published meta-analyses, because single meta-analyses typically provide only limited information on the extent of bias . . . This, of course, entails the strong assumption that the mean bias in a new meta-analysis is exchangeable with the mean biases in the meta-analyses included in previous empirical (meta-epidemiological) studies. For example, the meta-analyses that were included in the study of Schulz et al. (1995) are mostly from maternity and child care studies, and we must doubt whether the mean bias in studies on drugs for schizophrenia (the Clozapine example meta-analysis) is exchangeable with the mean biases in this collection of meta-analyses.

Assumptions are good. I expect their assumptions are better than the default alternatives, and it’s good to have the model laid out there for possible criticism and improvement.

P.S. The article focuses on medical examples but I think the methods would also be appropriate for experiments and observational studies in social science. A new way of thinking about the identification issues that we’re talking about all the time.

2 Comments

  1. Keith O'Rourke says:

    Nothing is new, of course, but it is nice to see things catch on.

    In a 2001 book chapter on meta-analysis, I credited Don Rubin with a Bayesian approach that demonstrated "the gain in precision from including studies at high risk of bias is likely to be low" in his 1978 Annals of Statistics paper, page 46.

    So it's only been around for 30+ years, but there has been some nice work by Sander Greenland in the Journal of the Royal Statistical Society in 2005 and RL Wolpert and KL Mengersen in Statistical Science 2004 (apparently quite separately).

    I think Don Rubin once said that most statisticians don't read …

    > re-express them as equivalent
    > weights in a weighted average?

    Do you mean come up with "estimates" and "weights" that approximate the full Bayesian analysis?

    Keith

  2. Keith O'Rourke says:

    > I think Don Rubin once said that most statisticians don't read …

    I guess this time that includes me – I should have read more than the abstract.

    Though shifting in addition to downweighting for bias, and using meta-analysis to get priors for effects (L'Abbe, Detsky, O'Rourke, 1987), are not new either.

    Keith