When I do applied statistics, I follow Bayesian workflow: construct a model, ride it hard, assess its implications, add more information, and so on. I have lots of doubt in my models, but when I’m fitting any particular model, I condition on it. The idea is that we take our models seriously, as that’s the best way to learn from them.

When I talk about statistical *methods*, though, I’m much more tentative or pluralistic: I use Bayesian inference but I’m wary of its pitfalls (for example, here, here, and here) and I’m always looking over my shoulder.

I was thinking about this because I recently heard a talk by a Bayesian fundamentalist—one of those people (in this case, a physicist) who was selling the entire Bayesian approach, all the way down to the use of Bayes factors for comparing models. OK, I don’t like Bayes factors, but the larger point is that I was a little bit put off by what seemed to be evangelism, the proffered idea that Bayes is dominant.
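For readers who haven’t seen a Bayes factor in the wild, here is a minimal sketch of the kind of model comparison at issue (my own toy example, not anything from the talk): the textbook comparison of a point null against a uniform prior for binomial data, where both marginal likelihoods have closed forms.

```python
from math import comb

def bayes_factor_point_vs_uniform(k: int, n: int) -> float:
    """BF_01 for H0: theta = 0.5 vs H1: theta ~ Uniform(0, 1),
    given k successes in n Bernoulli trials.

    Marginal likelihood under H0: C(n, k) * 0.5**n.
    Marginal likelihood under H1 (a Beta(1, 1) prior on theta)
    integrates analytically to 1 / (n + 1).
    """
    m0 = comb(n, k) * 0.5 ** n
    m1 = 1.0 / (n + 1)
    return m0 / m1

# 6 heads in 10 flips: the data barely discriminate between the models;
# BF_01 is about 2.26, weak evidence for the point null.
print(round(bayes_factor_point_vs_uniform(6, 10), 2))
```

The usual objections (including the ones aired on this blog) are not about this arithmetic, which is trivial, but about the Bayes factor’s extreme sensitivity to the prior under H1, which the closed-form `1 / (n + 1)` term makes easy to see.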

But then, a while afterward, I reflected that this presenter has an attitude about *statistical methods* that I have about *statistical models*. His attitude is to take the method—Bayes, all the way thru Bayes factors—as given, and push it as far as possible. Which is what I do with models. The only difference is that my thinking is at the scale of months—learning from fitted models—and he’s thinking at the scale of decades—his entire career. I guess both perspectives are legitimate.

Is it not as simple as this: you need to ride your method hard in the same way you need to ride your model hard? (And you need to ride your model/method hard each time you condition on it.)

One way purity can matter is by highlighting the limits of pure application. The hope is that this reveals why this or that works or doesn’t. You appear to mostly be talking about something different: from his perspective, as a physicist, he’s creating models for behavioral results reduced to a level where they can be matched against repeated physical results ‘globally’. You are generally creating models to analyze complex behavioral results whose meanings are more specifically contextual.

I’m not entirely sure what the idiomatic phrase “ride it hard” means exactly, but if it means something like “try to break it” to make sure it works under all important conditions, then I’m with you.

I think the parallels you draw don’t work: there’s no parallel between how evidence should be used in evaluating a model and the kinds of evidence allowable in evaluating a model. Your preferred workflow is just an analytic strategy that you would change or renounce if it consistently led you to draw demonstrably false conclusions. His regard for Bayes is the paradigm within which he does, or defines, science, which he (presumably) wouldn’t change or renounce based on mere empirical results, so long as Bayesian principles continue to be logically consistent (in his view).

Put another way, both of you are doing science within frameworks that say science consists of making and testing claims that are falsifiable, but the set of permissible tests for falsification is much narrower in his framework. That’s an entirely different level of disagreement than how far you should push a model. Both of you would abandon the model at some threshold of evidence, which might be greater or lesser, but your framework would allow you to do so based on evidence from hybrid methods. His framework would only consider that same evidence in evaluating the model if it could be recreated using pure Bayes (again, presumably).

Dear statisticians, I have a question: is it minimax to pick the Bayesian decision instead of the minimax one? 😂

Andrew:

I presume that there are results that would lead you to question your model. By contrast, the evangelical is wedded to his method, and no results would lead him to question it. There’s a big difference between argumentative (or implicational) assumptions and assumptions relied upon to reach a claim or conclusion.

On the other hand, is there any empirical result that would cause you to question pi > sqrt(2) ? Some things aren’t empirical questions.

> wedded to his method

Vampirically wedded to his method – which cannot be shaken by any conceivable experience – ever.

My comment grew into a blog post of its own. Found here: https://critical-inference.com/attitudes-toward-models-vs-attitudes-toward-methods/