Responding to a comment from Thomas Lumley (who asked why MRP estimates often seem to appear without any standard errors), I wrote:
In political science, MRP always seems accompanied by uncertainty estimates. However, when lots of things are being displayed at once, it’s not always easy to show uncertainty, and in many cases I simply let variation stand in for uncertainty. Thus I’ll display colorful maps of U.S. states with the understanding that the variation between states and demographic groups gives some sense of uncertainty as well. This isn’t quite right, of course, and with dynamic graphics it would make sense to have some default uncertainty visualizations as well.
But one thing I have emphasized, ever since my first MRP paper with Tom Little in 1997, is that this work unifies the design-based and model-based approaches to survey inference, in that we use modeling and poststratification to adjust for variables that are relevant to design and nonresponse. We discuss this a bit in BDA (chapter 8 of the most recent edition) as well. So there’s certainly no reason not to display uncertainty (beyond the challenges of visualization).
I’ve recently been told that things are different in epidemiology, that there’s a fairly long tradition in that field of researchers fitting Bayesian models to survey data and not being concerned about design at all! Perhaps that relates to the history of the field. Survey data, and survey adjustment, have been central to political science for close to a century, and we’ve been concerned all this time with non-representativeness. In contrast, epidemiologists are often aiming for causality and are more concerned about matching treatment group to control group than about matching sample to population. There’s no good reason for this: even in an experimental context we should ultimately care about the population (and, statistically, this will make a difference if there are important treatment interactions). But it makes sense that the two fields have different histories, to the point that a Bayesian researcher in epidemiology might find it a revelation that Bayesian methods (via MRP) can adjust for survey bias, while this is commonplace to a political scientist, as it’s been done in that field for nearly 20 years.
I wonder if another part of the story is that Bugs really caught on in epi (which makes sense given who was developing it), and Bugs was set up in a traditionally-Bayesian way of data + model -> inference about parameters, without the additional step required in MRP of mapping back to the population.
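To make that extra step concrete: after fitting the model, MRP takes the cell-level estimates and reweights them by known population cell counts, so the population estimate is sum_j N_j * theta_j / sum_j N_j. Here is a minimal sketch; the cell names, estimates, and counts are all made up for illustration, and in practice theta_j would come from a fitted multilevel regression and N_j from the census.

```python
# Sketch of the poststratification step that MRP adds after model fitting.
# All numbers below are hypothetical, for illustration only.

# Model-based estimate theta_j of the outcome (e.g., Pr(support))
# in each demographic cell j, as produced by a multilevel regression.
theta = {"young_f": 0.62, "young_m": 0.55, "old_f": 0.48, "old_m": 0.41}

# Known population counts N_j for the same cells, e.g., from the census.
N = {"young_f": 120, "young_m": 110, "old_f": 90, "old_m": 80}

def poststratify(theta, N):
    """Population estimate: sum_j N_j * theta_j / sum_j N_j."""
    total = sum(N.values())
    return sum(N[j] * theta[j] for j in theta) / total

estimate = poststratify(theta, N)
```

The same weighted average over a subset of cells gives state- or group-level estimates, which is how those colorful maps get made.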
Also, causal inference researchers have tended to be pretty cavalier about the sampling aspect of their data. Rubin, for example, talked a lot about random or nonrandom assignment of the treatment but not much about representativeness of the sample, and I think that attitude was typical for statisticians for many years—at least, when they weren’t working in survey research. In my own work in poli sci, I was always acutely aware that survey adjustment mattered (for example, see figure 1a here), and I didn’t want to be one of those Bayesians who parachute in from the outside and ignore the collective wisdom of the field. In retrospect, this caution has served me well, because recently when some sample-survey dinosaurs went around attacking model-based data-collection and adjustment, I was able to decisively shoot them down by pointing out that we’re all ultimately playing the same game.
I don’t go with the traditional “Valencia” attitude that the Bayesian approach is a competitor to classical statistics; rather, I see Bayes as an enhancement (which I’m pretty sure is your view too). And it’s an important selling point that we don’t discard the collective wisdom of a scientific field; instead, we effectively include that wisdom in our models.