I recently saw this article that Stephen Senn wrote a couple of years ago, criticizing Bayesian sensitivity analyses that relied on vague prior distributions. I’m moving more and more toward the idea that Bayesian analysis should include actual prior information, so I generally agree with his points. As I used to say when teaching Bayesian data analysis, a Bayesian model is modular, and different pieces can be swapped in and out as needed. So you might start with an extremely weak prior distribution, but if it makes a difference it’s time to bite the bullet and include more information.

My only disagreement with Senn’s paper is in its recommendation to try the so-called fixed-effects analysis. Beyond the difficulties with terminology (the expressions “fixed” and “random” effects are defined in different ways by different people in the literature; see here for a rant on the topic that made its way into some of my articles and books), there is the problem that, when a model gets complicated, some of the estimates that are called “fixed effects” get very noisy, especially in sparse-data settings such as arise in logistic regression.
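To make the sparse-data point concrete, here is a minimal simulation sketch (all numbers hypothetical, not from any real analysis): with a separate “fixed effect” intercept per group and only a handful of binary observations per group, the per-group maximum-likelihood estimates are just logits of raw proportions, and any group with all 0s or all 1s yields an infinite estimate (complete separation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: many groups, very few Bernoulli observations per
# group, true per-group log-odds all equal to 0 (probability 0.5).
n_groups, n_per_group = 50, 4
y = rng.binomial(1, 0.5, size=(n_groups, n_per_group))

# "Fixed effects" here means a separate intercept per group, estimated
# by maximum likelihood; for an intercept-only logistic model that is
# just the logit of the group's raw success proportion.
p_hat = y.mean(axis=1)

# Groups with all 0s or all 1s have infinite MLEs (complete separation),
# and the rest take only a few coarse values -- very noisy estimates.
n_separated = int(np.sum((p_hat == 0) | (p_hat == 1)))
```

With only four observations per group, roughly one group in eight is expected to be completely separated, which is one way the "fixed effects" label can hide very unstable estimates.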

A very thoughtful read for anyone interested in dealing with multiple sources of evidence or meta-analysis.

The Yates and Cochran (1938) paper given in the bibliography is also a very thoughtful read.

Meta-analysis models tend not to get complicated, because there are few replications (the study is the unit of analysis, so there aren't many units, and worries about study quality and p-value censoring distract from believing "better" models).

In this context the fixed-effect model is usually a good first step in the scaffolding before moving to a random-effects model (or so I always found).
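That fixed-effect-first scaffolding can be sketched in a few lines. This is a standard inverse-variance fixed-effect pool followed by the DerSimonian-Laird heterogeneity estimate as the step toward random effects; the study estimates and standard errors below are purely illustrative, not from any real meta-analysis.

```python
import numpy as np

# Hypothetical per-study effect estimates (e.g. log odds ratios) and
# their standard errors -- illustrative numbers only.
theta = np.array([0.30, 0.15, 0.45, 0.10, 0.25])
se = np.array([0.12, 0.20, 0.25, 0.15, 0.18])

# Step 1, fixed-effect (common-effect) model: inverse-variance weights.
w = 1.0 / se**2
theta_fe = np.sum(w * theta) / np.sum(w)
se_fe = np.sqrt(1.0 / np.sum(w))

# Step 2, check heterogeneity: Cochran's Q and the DerSimonian-Laird
# estimate of the between-study variance tau^2.
k = len(theta)
Q = np.sum(w * (theta - theta_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Step 3, random-effects model: add tau^2 to each study's variance.
w_re = 1.0 / (se**2 + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)
```

The fixed-effect fit and the Q statistic come almost for free, and they tell you whether the move to random effects (nonzero tau^2) is doing any real work.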

It was his recommendation to use the "h-likelihood" of Lee and Nelder that I was wary of.

K?

p.s. Bayesians might be better served by taking his work more seriously than some I know do.