How Bayesian am I?

I was reminded of the varieties of Bayesians after reading this article by Robin Hanson:

[I]n our standard framework systems out there have many possible states and our minds can have many possible belief states, and interactions between minds and systems allow their states to become correlated. This correlation lets minds have beliefs about systems that correlate with the states of those systems. The exact degree of belief appropriate depends on our beliefs about the correlation, and can be expressed with exact but complex mathematical expressions.

This sounds reasonable, but it doesn’t actually fit how I do statistics as a Bayesian statistician–that is, someone who applies Bayesian methods to statistical analysis. I do not actually find the Bayesian approach to be useful in characterizing my belief states. To be more precise, I use Bayesian inference to summarize my uncertainty _within_ a model, but not to express my uncertainty _between_ models. Rather, I will hypothesize a model, use it to make predictions (forecasts and hindcasts), and then evaluate the model based on the fit of the predictions to data and prior information.

We discuss this issue a bit more in chapter 6 of Bayesian Data Analysis, in the discussion of model checking, discrete model averaging, and continuous model averaging.

To take a historical example, I don’t find it useful, from a statistical perspective, to say that in 1850, say, our posterior probability that Newton’s laws were true was 99%, then in 1900 it was 50%, then by 1920, it was 0.01% or whatever. I’d rather say that Newton’s laws were a good fit to the available data and prior information back in 1850, but then as more data and a clearer understanding became available, people focused on areas of lack of fit in order to improve the model.

In the areas where I work, models are never correct or even possibly correct, but it can be useful to use Bayesian inference as a tool to assess uncertainty within a fitted model. Also, and perhaps just as importantly, Bayesian inference is useful in creating probabilistic forecasts and hindcasts that can be compared to real data in order to assess aspects of model misfit. I know that other people have found Bayesian methods useful more directly for model selection (i.e., discrete model averaging, computing the posterior probability that each particular candidate model is true) but this has just about never worked for me; see the aforementioned chapter 6 or this article from Sociological Methodology for more discussion.
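
To make that workflow concrete, here is a minimal sketch in Python. The normal-mean model, the data, the prior, and the test statistic are all invented for illustration (they are not from the post or from Bayesian Data Analysis); the point is only the two steps: summarize uncertainty within the assumed model, then simulate replicated data and compare them to the observed data to look for misfit.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)   # hypothetical observed data
sigma = 1.0                                   # data sd, assumed known
mu0, tau0 = 0.0, 10.0                         # weak normal prior on the mean

# Conjugate posterior for the mean under the assumed normal model
tau_n = 1.0 / np.sqrt(1.0 / tau0**2 + len(y) / sigma**2)
mu_n = tau_n**2 * (mu0 / tau0**2 + y.sum() / sigma**2)

# (1) Within-model uncertainty: posterior draws for mu
mu_draws = rng.normal(mu_n, tau_n, size=4000)

# (2) Model checking: simulate replicated datasets and compare a test
# statistic (here, the largest absolute deviation from the sample mean)
y_rep = rng.normal(mu_draws[:, None], sigma, size=(4000, len(y)))
t_obs = np.abs(y - y.mean()).max()
t_rep = np.abs(y_rep - y_rep.mean(axis=1, keepdims=True)).max(axis=1)
print("posterior predictive p-value:", (t_rep >= t_obs).mean())
```

An extreme posterior predictive p-value (near 0 or 1) is the kind of signal that, in this view, motivates expanding or revising the model, rather than computing posterior probabilities over a fixed set of candidate models.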

Where does the disagreement come from?

I have two stories on this. First, different methods are useful in different domains. Robin and his colleagues are particularly interested in logical coherence, a general philosophy of knowledge, and applications such as prediction markets, whereas I am more focused on studying particular phenomena (typically in social science and public health), analyzing particular datasets, and developing robust statistical methods (robust in the sense of working well in a wide variety of problems). Specific concerns (e.g., the dependence of Bayes factors on untestable aspects of the prior distribution, discussed in our chapter 6 as a reason for not liking discrete model averaging) might seem like minor technicalities to Robin but seem like major methodological problems to me.

My second story, which I think is also true, is that different statistical methods can be used successfully in applications–there are many roads to Rome–and so it is natural for any of us (myself included) to believe that our own methods are particularly good for applications. For example, Adrian Raftery does excellent applied work using discrete model averaging, whereas I don’t feel comfortable with that approach. Brad Efron has used bootstrapping to help astronomers solve their statistical problems. Etc etc. I don’t think that Adrian’s methods are particularly appropriate to sociology, or Brad’s to astronomy–these are just powerful methods that can work in a variety of fields. Given that we each have successes, it’s unsurprising that we can each believe strongly in the superiority of our own approaches. And I certainly don’t feel that the approaches in Bayesian Data Analysis are the end of the story. In particular, nonparametric methods such as those of David Dunson, Ed George, and others seem to have a lot of advantages.

But I don’t see that I’d gain much out of slapping a subjective probability on to that last statement.

P.S. I tried to locate this within I. J. Good’s famous characterization of varieties of Bayesians but, as with other such categorizations, the points of debate have moved so much over the years that it was hard to align the article to modern concerns. I should really write an updated version of the Good article.

10 thoughts on “How Bayesian am I?”

  1. Within a narrow focus of (for example) deciding whether including a certain interaction in a model is helpful, or whether a prior based on, say, federal government data is better than a prior based on county data, do you reject the Bayes factor as a useful technique?

    I can imagine rejecting the idea that a statement such as "the probability of this model is 7.5%" is meaningful, while still accepting the idea that the ratio of probabilities of two different models can provide a goodness-of-fit-like measure.

    I'm waiting for Amazon to deliver your books, so I can't simply look up your opinion yet :-)

  2. Andrew,

    I'm slightly confused about your position. It seems to me from reading this and other things that you've written that your primary justification for using Bayesian inference is for practical reasons. This makes sense to me.

    But what I'm confused about is what your position on belief state characterization is. It seems possible to me that you don't explicitly use Bayesian methods to summarize uncertainty between models, but that you still use a psychological process that is Bayesian. Something must be going on in your brain to influence your decision on which model to select, and this could be a type of Bayesian inference even if you were a diehard Frequentist.

    Is it your contention that the unconscious psychological processes that govern belief states are not usefully described as Bayesian, or just that as an academic, you don't find current Bayesian methods to be useful for explicitly deciding among belief states?

  3. Daniel,

    In your example, I prefer continuous model averaging. Rather than have a probability that the added coefficient is nonzero, I'd include it, ideally with some sort of informative prior distribution. I haven't used many informative prior distributions in my books, but I'm moving more in that direction. But I haven't found these Bayes factors to be useful–rather than comparing two models, I'd rather build a third model that includes the two as special cases: that is, continuous model expansion rather than discrete model averaging. (A small illustrative sketch of this idea appears at the end of this comment.)

    Greg,

    I have no problem acknowledging uncertainty in my beliefs. I just don't find it helpful to characterize my beliefs as "model A is true" or "model B is true" or "model C is true," given that I know already that all three are false. I'm much happier working with probabilities within a model, and then expanding the model as needed to account for its major imperfections.

    I realize, however, that this is not as sharp a distinction as I might be implying. In particular, given that I do not accept model A, model B, or model C, I also clearly can't unconditionally accept inferences _within_ these models. Ultimately, then, it must come down to some sort of robustness statement about the particular statistical models that I use. It's a good question to which I don't have a great answer right now.

    But one thing I can say is that I'm opposed to the narrow Bayesian approach in which all models represent a subjective belief and thus cannot be checked. Strange as it sounds, I think this attitude (that model checking is not only unnecessary but also essentially illegal) has been very common among Bayesian statisticians, although perhaps less so than 15 years ago.
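
    A minimal sketch of that "third model" idea, added here for illustration only (toy Python; the data, the prior scale on the interaction, and the known noise sd are all made up, and the conjugate-normal shortcut stands in for a full Bayesian fit):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2, x1 * x2])  # last column: interaction
    beta_true = np.array([1.0, 0.5, -0.3, 0.1])          # small interaction effect
    sigma = 1.0                                           # noise sd, assumed known
    y = X @ beta_true + rng.normal(scale=sigma, size=n)

    # Informative prior that shrinks the interaction toward zero,
    # with vaguer priors on the other coefficients
    prior_sd = np.array([10.0, 10.0, 10.0, 0.2])
    prior_prec = np.diag(1.0 / prior_sd**2)

    # Conjugate posterior for the coefficients (normal prior, known sigma)
    post_cov = np.linalg.inv(X.T @ X / sigma**2 + prior_prec)
    post_mean = post_cov @ (X.T @ y / sigma**2)

    print("posterior mean (sd) for the interaction:",
          round(post_mean[3], 3), round(float(np.sqrt(post_cov[3, 3])), 3))
    ```

    The discrete question "interaction in or out?" is replaced by a posterior for the interaction coefficient that the prior pulls toward zero; making prior_sd[3] smaller or larger moves continuously between the two models a Bayes factor would compare.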

  4. Yes, it is silly to talk about whether Newton's laws are true, as an absolute, but it does make sense to talk about specific predictions. People had expectations about celestial motion, and the precession of Mercury surprised them. Afterwards, their expectations about falling apples were hardly changed, but their expectations about celestial motion were a little more diffuse, and their expectations about more exotic things should be very diffuse. Even if I don't know how to make calculations with GR, it narrows what realms I should think of as exotic; it increases my confidence in some applications of Newton's laws!

  5. Andrew, in contexts where "models are never correct or even possibly correct" I think it does make sense to not estimate their probability, but to instead see them as standing for a space of possibly correct models at some distance away. But for claims that you do see as possibly true, I hope you then do think it makes sense to estimate their probability.

  6. If the model is never true, then what good is an uncertainty estimated within a particular model? At the end of the day, when you report the uncertainty of some estimate, will you not always underrepresent your actual uncertainty?

    Any suggestions for some reading on how to account for the necessary uncertainty created by model selection?

  7. Aleks,

    I am a bit curious about your response "some models are better than others." Such an answer seems either to avoid the issue raised in my question or perhaps to indicate that you believe I have missed the point and am looking at things the wrong way. Care to elaborate?

    Andrew,

    Do you have anything to add?

  8. Andrew is suggesting not to do model selection but to select a good model, because he's interested in giving insight into the data. Multiple models are just plain confusing, even if they might yield higher predictive accuracy.

    @grad student: As for your own point: Bayesian models are just one way of modeling the more general philosophical issue of uncertainty. It's a language (good or bad, useful or useless), not a statement (true or false).

    As for over- or underestimating uncertainty: to a Bayesian, this all depends on your prior. If your prior is good, you will get it more right (over a large number of experiments) than when your prior is bad. Model selection corresponds to ignoring all prior uncertainty about the model.

  9. I think that what bothers me is the notion of quantifying uncertainty under a model which is almost certainly false.

    I suppose my question is foundational, in that I am wondering whether it is wise to calculate a variance estimate given that your model or likelihood is true, or a posterior variance under your specified likelihood and prior.

    Do we really want to be calculating uncertainty conditional on a model, or is there another statement that we would rather be able to make?
