I read a couple of psychology papers recently and was impressed by their thoroughness. Each of these papers (one by Pelham, Mirenberg, and Jones, and one by Roberts) had 10 separate studies covering different aspects of their claims. The standard in applied statistics seems much lower: what’s expected is that we do one good data analysis, along with explorations of what might have happened had the data been analyzed differently, assessments of the importance of assumptions, and so on.
Standards are lower in applied statistics
The difference, I think, is that in a statistics paper–even an applied statistics paper–the goal is to study or demonstrate a method rather than to make a convincing scientific case about the application under consideration. I mean, the scientific claims being presented should be plausible, but the standards of evidence seem quite a bit lower than in psychology.
What about other fields? Biology and medicine, oddly enough, seem more like statistics than psychology in their “convincingness” standards. In these fields, it seems common for findings to be reversed on later consideration, and typically a research paper will present the result of just one study. (In medicine, it is common to have review articles that summarize 40 or more studies in an area, and it seems accepted that individual studies are not supposed to be convincing in themselves.)
Political science, economics, and sociology seem somewhere in between. Research papers in these fields will sometimes, but not always, include multiple studies, but there is also often a requirement for a theoretical argument of some sort. It’s not enough to show that something happened; you also have to explain how it fits into (or refutes) some theoretical model.
The paradox of importance
Getting back to statistical research, one thing I’ve noticed is that the most elaborate research can be done on relatively unimportant problems. If there is a hurry to solve some important problem, then we’ll use the quickest methods at hand–we don’t have time to waste on developing fancy methods. But if it’s something that nobody cares about . . . well, then we can put in the effort to really do a good job! In the long run, the new methods we develop can become the quick methods for the important problems of the future, but meanwhile we often see cutting-edge applied statistical research on problems that are of little urgency.