Why Bayes? again

Richard Zur (see earlier entry here) asks,

Is there a benefit to Bayesian analysis if you’re just recasting a reliable MLE fit, with reliably estimated asymptotic standard errors, in the Bayesian paradigm? I think the interpretation of results is more natural, but my friend the frequentist says the priors impose subjectivity while his method is objective. I also think the Bayesian method works better for extreme cases, where asymptotics might break down, but he just tells me we shouldn’t be working on those extreme results anyway. So… am I really buying anything by doing this work?

My response: sometimes it’s no big deal but it can make a difference. You don’t always have a lot of data under each experimental condition. For example, in my work with serial dilution assays, we have four or eight samples per unknown compound. n=4 or even n=8 can be far from asymptotic, especially when some measurements are below detection limits. The model is nonlinear and I see no advantage to avoiding Bayesian methods. And I certainly wouldn’t want someone telling me “we shouldn’t be working on those extreme results anyway.” To do zillions of replications just to allow a mathematical approximation to work better–that’s asking a lot of my experimental colleagues. Besides, N is never large.
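To see how the small-n, below-detection-limit situation can differ from the asymptotic picture, here is a minimal sketch. This is not the serial dilution model itself: the normal measurement model, the assumed-known measurement sd, the detection limit, and the four data values are all made up for illustration. It compares the MLE with its asymptotic standard error to a flat-prior posterior computed on a grid.

import numpy as np
from scipy import stats, optimize

sigma = 0.5                    # measurement sd, treated as known (made-up value)
limit = 1.0                    # detection limit (made-up value)
y_obs = np.array([1.3, 1.1])   # the two measurements above the limit
n_cens = 2                     # two more measurements recorded only as "< limit"

def neg_log_lik(mu):
    # quantified points contribute the normal density; censored points
    # contribute P(Y < limit | mu), i.e. the normal cdf at the detection limit
    ll = stats.norm.logpdf(y_obs, loc=mu, scale=sigma).sum()
    ll += n_cens * stats.norm.logcdf(limit, loc=mu, scale=sigma)
    return -ll

# MLE with an asymptotic standard error from the observed information
res = optimize.minimize_scalar(neg_log_lik, bounds=(-5, 5), method="bounded")
mu_hat = res.x
h = 1e-3
info = (neg_log_lik(mu_hat + h) - 2 * neg_log_lik(mu_hat)
        + neg_log_lik(mu_hat - h)) / h ** 2     # numerical second derivative
se_hat = 1 / np.sqrt(info)
print(f"MLE {mu_hat:.3f}, asymptotic 95% interval "
      f"[{mu_hat - 1.96 * se_hat:.3f}, {mu_hat + 1.96 * se_hat:.3f}]")

# Bayesian posterior on a grid, flat prior on mu (posterior proportional to likelihood)
grid = np.linspace(-2.0, 4.0, 4001)
d = grid[1] - grid[0]
log_post = np.array([-neg_log_lik(m) for m in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * d                           # normalize to integrate to 1
cdf = np.cumsum(post) * d
lo = grid[np.searchsorted(cdf, 0.025)]
hi = grid[np.searchsorted(cdf, 0.975)]
print(f"posterior mean {np.sum(grid * post) * d:.3f}, "
      f"central 95% interval [{lo:.3f}, {hi:.3f}]")

With only two quantified measurements and two below the limit, the likelihood is not close to quadratic, so the posterior is skewed and its central interval need not match the symmetric MLE plus-or-minus 1.96 standard errors. With lots of clean data and a flat prior, the two summaries essentially coincide, which is the "no big deal" case.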

2 thoughts on “Why Bayes? again”

  1. There's a distressing bias against subjectivity among people's frequentist friends. The standard Bayesian answer is that there is *always* subjectivity in your model; it's just a matter of whether you put it out in the open by selecting a prior, or whether you keep it hidden by putting your thumb on the scales when choosing your functional form. (And since the choice of likelihood function is subjective and common to both Bayesian and ML approaches, the difference is less than one might think.)

  2. The comment regarding small N seems important to me. Most useful experiments are very expensive to replicate, but statistical dogma is that N must be large in order for any analysis to work.

    Some work has been done to generate designs based on cost constraints for microarrays. Another fairly famous example is pooling of blood samples for an expensive test.
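For the pooling example, presumably Dorfman-style two-stage group testing, a quick back-of-the-envelope sketch shows why pooling cuts the expected number of expensive tests. The 2% prevalence and the perfect-test assumption are made up for illustration.

# Dorfman two-stage pooling: one test per pool of k samples, plus k individual
# retests whenever the pool comes back positive. Assumes a perfect test and
# a hypothetical 2% prevalence.
p = 0.02
for k in range(2, 13):
    expected_tests_per_sample = 1 / k + (1 - (1 - p) ** k)
    print(f"pool size {k:2d}: {expected_tests_per_sample:.3f} tests per sample")

Under these assumptions the expected cost bottoms out around pool sizes of roughly eight, at well under a third of a test per sample.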
