Rapid post-publication review

A colleague points me to a published paper and writes:

Do you believe this finding? If your biology isn’t strong enough to pass judgement — mine certainly isn’t — can you ask somebody who knows?

My reply: 4 groups with a total n=71? No way. The topic is too sad for me to mock on the blog. But a priori I don’t think it’s worth looking into. Lots of researcher degrees of freedom in the analysis, enough so that I think it would be easy to attain statistical significance by chance alone. Sure, figure 4 looks impressive, but, as usual, consider all the comparisons they could’ve made. The usual disclaimer applies that I’m no subject-matter expert; I’m not even familiar with the subject matter. I’m just doubtful about the claimed statistical evidence.
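To give a sense of how easy "significance by chance alone" is under these conditions, here is a minimal simulation sketch. It is purely illustrative and not taken from the paper in question: the group split, the outcome, and the choice of uncorrected pairwise t-tests are my assumptions. Four noise-only groups totaling n=71, with every pairwise comparison on the table, clear p < 0.05 somewhere far more often than the nominal 5%.

```python
# Illustrative sketch of the multiple-comparisons concern: four groups
# totaling n = 71, outcomes that are pure noise, and every pairwise
# t-test checked. The group sizes (18, 18, 18, 17) and the number of
# simulated studies are assumptions, not details from any real paper.
from itertools import combinations

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group_sizes = [18, 18, 18, 17]  # assumed split; total n = 71
n_sims = 5_000
alpha = 0.05

false_positive_studies = 0
for _ in range(n_sims):
    # One "study": every group drawn from the same null distribution,
    # so any significant difference is noise.
    groups = [rng.normal(loc=0.0, scale=1.0, size=size) for size in group_sizes]
    pvals = [ttest_ind(a, b).pvalue for a, b in combinations(groups, 2)]
    if min(pvals) < alpha:
        false_positive_studies += 1

print(f"Null studies with at least one 'significant' pairwise comparison: "
      f"{false_positive_studies / n_sims:.0%}")
# With 6 uncorrected pairwise tests, well over 5% of all-noise studies show
# a p < 0.05 somewhere; add a few outcome measures or subgroups and the
# rate climbs further.
```

And that is with only one outcome and one analysis path; the actual garden of forking paths (covariates, subgroups, outcome choices) is much larger.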

19 thoughts on “Rapid post-publication review”

      • But how can people evaluate or understand your post without looking at the article? Why post it on the blog? The context here matters. A lot of studies are done with lower N! Have you seen the studies that justify high-intensity interval training over other forms of exercise? Some of the Ns are pathetically low and the samples are not indicative of the population (i.e., all people in their 20s).

      • Jon:

        The point is that there are zillions of such bad papers published every month. I don’t care about this article, I’m trying to make a general point, which I’ve already illustrated in dozens of posts on this blog. Just search “fat arms” or “ovulation” or “sex ratio” or “himmicanes” or “Bem” or whatever. I’m not at all claiming that a study with N=71 is necessarily bad. I’m just saying that this particular study is just one more example of a type we keep seeing over and over. I’ve done enough posts criticizing such studies in detail. This time I wanted to make a more general point. And, as I said, the topic is too sad for me to mock on the blog.

  1. A quick search of Google Scholar for [“4 groups” “n=71” “figure 4”] restricted to publications in 2015 (and hence qualifying for “rapid post-publication review”) and filtered for the general focus of this blog and the hint that it’s in biology, leads me to guess it’s either “The Effectiveness of Group-based Self-management Programmes to Improve Physical and Psychological Outcomes in Patients with Cancer: a Systematic Review and …” or “Pathway-based gene signatures predicting clinical outcome of lung adenocarcinoma”. If I had to make one guess, it’s the second, as it’s from one of the tabloids. But that’s only if the article literally said “n=71” — if they said it another way, that search wouldn’t find it.

    On the other hand, “Evasive behaviours of female dusky dolphins (Lagenorhynchus obscurus) during exploitative scramble competition” looks like more fun. Why doesn’t ESPN show dolphin sports?

  2. To the people asking Andrew for a link: I think the whole point of his post is that, having applied this type of critique to the particulars of so many studies, we now have a general template for how to first approach any study in the abstract.

    • Perhaps, but in many fields sample size is inherently limited. It is also trivial to assert researcher degrees of freedom in the absence of “registered” methods or whatever. Sometimes the details matter, sometimes not. If it really is true in the abstract, then, as I mentioned earlier, replication of studies seems like the solution; anything else focused on the statistical approach per se is just tilting at windmills.
