Kevin Lewis pointed me to this quote from a forthcoming article:
Daniele Fanelli and colleagues examined more than 3,000 meta-analyses covering 22 scientific disciplines for multiple commonly discussed bias patterns. Studies reporting large effect sizes were associated with large standard errors and large numbers of citations to the study, and were more likely to be published in peer-reviewed journals than studies reporting small effect sizes. The strength of these associations varied widely across research fields, and, on average, the social sciences showed more evidence of bias than the biological and physical sciences. Large effect sizes were not associated with the first or last author’s publication rate, citation rate, average journal impact score, or gender, but were associated with having few study authors, early-career authors, and authors who had at least one paper retracted. According to the authors, the results suggest that small, highly cited studies published in peer-reviewed journals run an enhanced risk of reporting overestimated effect sizes.
This all seems reasonable to me. What's interesting is not so much that they found these patterns but that the patterns were considered notable at all.
It’s like regression to the mean. The math makes it inevitable, but it’s counterintuitive, so nobody believes it until they see direct evidence in some particular subject area. As a statistician I find this all a bit frustrating, but if this is what it takes to convince people, then, sure, let’s go for it. It certainly doesn’t hurt to pile on the empirical evidence.
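For readers who want to see the math play out, here is a minimal simulation sketch of the selection effect described in the quote. It assumes a simple setup (two-group comparison, a fixed true effect, publication conditional on statistical significance at the 5% level; all the specific numbers are illustrative, not from the Fanelli et al. paper): studies that pass the significance filter overestimate the true effect, and the overestimation is worst for small studies, i.e., those with large standard errors.

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true effect size (illustrative)
Z_CRIT = 1.96       # two-sided 5% significance threshold

def mean_published_estimate(n_per_group, n_studies=20000):
    """Simulate many studies; return the mean estimate among the
    'published' ones, i.e., those reaching statistical significance."""
    # Standard error of a difference in means, assuming sd = 1 per group.
    se = math.sqrt(2.0 / n_per_group)
    published = []
    for _ in range(n_studies):
        estimate = random.gauss(TRUE_EFFECT, se)
        if abs(estimate / se) > Z_CRIT:   # the significance filter
            published.append(estimate)
    return sum(published) / len(published)

small = mean_published_estimate(n_per_group=25)    # underpowered study
large = mean_published_estimate(n_per_group=1000)  # well-powered study

print(f"true effect: {TRUE_EFFECT}")
print(f"mean published estimate, n=25 per group:   {small:.2f}")
print(f"mean published estimate, n=1000 per group: {large:.2f}")
```

With these settings the well-powered studies recover roughly the true effect, while the underpowered ones report estimates several times larger, which is exactly the "small, highly cited studies run an enhanced risk of reporting overestimated effect sizes" pattern, derived from nothing but the selection rule.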