[silly cartoon found by googling *cat burglar*]
Torbjørn Skardhamar, Mikko Aaltonen, and I wrote this article to appear in the Journal of Quantitative Criminology:
Simple calculations seem to show that larger studies should have higher statistical power, but empirical meta-analyses of published work in criminology have found zero or weak correlations between sample size and estimated statistical power. This is “Weisburd’s paradox” and has been attributed by Weisburd, Petrosino, and Mason (1993) to a difficulty in maintaining quality control as studies get larger, and attributed by Nelson, Wooditch, and Dario (2014) to a negative correlation between sample sizes and the underlying sizes of the effects being measured. We argue against the necessity of both these explanations, instead suggesting that the apparent Weisburd paradox might be explainable as an artifact of systematic overestimation inherent in post-hoc power calculations, a bias that is large with small N. Speaking more generally, we recommend abandoning the use of statistical power as a measure of the strength of a study, because implicit in the definition of power is the bad idea of statistical significance as a research goal.
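The overestimation claim in the abstract is easy to see in a quick simulation. The sketch below (my own illustration, not code from the paper) assumes a simple two-group comparison with unit standard deviation and a true standardized effect of 0.2: it generates noisy effect estimates, keeps the statistically significant ones, and computes "post-hoc power" by plugging the estimated effect back into the power formula, the way retrospective power calculations typically do.

```python
import math
import random
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def power(effect, se, alpha=0.05):
    """Two-sided power of a z-test, given an effect size and standard error."""
    z_crit = Z.inv_cdf(1 - alpha / 2)
    shift = effect / se
    return (1 - Z.cdf(z_crit - shift)) + Z.cdf(-z_crit - shift)

def post_hoc_power_bias(true_effect=0.2, n=25, n_sims=50_000, alpha=0.05, seed=0):
    """Simulate noisy effect estimates from a two-group design with n per group,
    keep the significant ones, and average their plug-in post-hoc power."""
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n)                  # s.e. of a difference in means
    z_crit = Z.inv_cdf(1 - alpha / 2)
    true_power = power(true_effect, se, alpha)
    post_hoc = []
    for _ in range(n_sims):
        est = rng.gauss(true_effect, se)     # noisy estimate of the true effect
        if abs(est) / se > z_crit:           # "statistically significant" study
            post_hoc.append(power(abs(est), se, alpha))
    return true_power, sum(post_hoc) / len(post_hoc)

for n in (25, 100, 400):
    tp, php = post_hoc_power_bias(n=n)
    print(f"n={n:4d}: true power {tp:.2f}, "
          f"mean post-hoc power among significant studies {php:.2f}")
```

With small n the significance filter selects grossly inflated effect estimates, so mean post-hoc power lands far above the true power (and, mechanically, above 0.5, since a significant z-statistic implies plug-in power greater than one half); with large n the gap shrinks. That pattern alone can flatten the correlation between sample size and estimated power across published studies.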
I’d never heard of Weisburd’s paradox before writing this article. What happened was that the journal editors contacted me suggesting the topic. I read some of the literature and wrote my article, but some other journal editors didn’t think it was clear enough, so we found a couple of criminologists to coauthor the paper and add some context, eventually producing the final version linked here. I hope it’s helpful to researchers in that field and more generally. I expect that similar patterns hold with published data in other social science fields and in medical research too.