Are the tabloids better than we give them credit for?

Joshua Vogelstein writes:

I noticed you disparage a number of journals quite frequently on your blog.
I wonder what metric you are using implicitly to make such evaluations?
Is it the number of articles that they publish that end up being bogus?
Or the fraction of articles that they publish that end up being bogus?
Or the fraction of articles that get through their review process that end up being bogus?
Or the number of articles that they publish that end up being bogus AND that enough people read and care about to identify the problems in those articles?

My guess (without actually having any data) is that Nature, Science, and PNAS are the best journals when scored on the metric of the fraction of bogus articles that pass through their review process. In other words, I bet all the other journals publish a larger fraction of the false claims that are sent to them than Nature, Science, or PNAS do.

The only data I know of on this is described here. According to the article, 62% of social-science articles published in Science and Nature from 2010 to 2015 replicated. An earlier paper from the same group found that 61% of papers from specialty journals published between 2011 and 2014 replicated.

I’d suspect that the fraction of social-science articles that pass the review criteria at Science and Nature is much smaller than at the specialty journals, implying that the fraction of articles getting through peer review at Science and Nature that replicate is much higher than at the specialty journals.

My reply: I’ve looked at no statistics on this at all. It’s my impression that social science articles in the tabloids (Science, Nature, PNAS) are, on average, worse than those in top subject-matter journals (American Political Science Review, American Sociological Review, American Journal of Sociology, etc.). But I don’t know.

9 thoughts on “Are the tabloids better than we give them credit for?”

  1. … and I must get a half-dozen requests a week to publish in journals I have never heard of. And a few of them also ask me to join their editorial boards.

  2. Joshua Vogelstein wrote: “According to the article, 62% of social-science articles published in Science and Nature from 2010 to 2015 replicated. An earlier paper from the same group found that 61% of papers from specialty journals published between 2011 and 2014 replicated.”

    Replication in a journal does not guarantee quality.

  3. From the article’s abstract: “We find a significant effect in the same direction as the original study for 13 (62%) studies.”
    I thought that way of assessing replication had been widely discredited, but apparently not by August 2018.

    Next, “the effect size of the replications is on average about 50% of the original effect size” starts to bring the important aspects into focus: on average, the original studies exaggerated the effect by a factor of 2.

    The paywall :-(

  4. In economics, Andrew’s conjecture is true, at least as evidenced by tenure requirements. Those use something like a lexicographic ranking of journals, with the econ top 5 at the top, followed by the top field journals (e.g., the Journal of Labor Economics and the Journal of Public Economics). Science, Nature, and PNAS sometimes don’t count at all, sometimes count at the lower top-field level, or simply confuse people. But they are clearly not comparable to the top 5 in economics, and they are nowhere near them in terms of impact within the profession.

  5. Vogelstein’s question, as well as Andrew’s response, points to the fact that we simply don’t have any reliable metric for measuring this. Chances are that if we had one, people would start to game it, and soon it’d be ruined. These “replication numbers” are obviously fishy, but are they worse than anything else, such as arguing from rejection rates, tenure requirements, and the like?

    • For some of the better-known economics pieces in Science / Nature, you don’t even have to replicate: you can just read the appendix / SI of the paper carefully to find where the rabbit goes into the hat (i.e., which confounder produces the alleged result). See, e.g., the work in progress by Bartling, Fehr, and Ozdemir. To be fair, that also happens in economics journals, but it seems to occur less often.

  6. I think that their social-science articles are definitely worse than those in the top subject-matter journals (and even the second-tier journals) in terms of the chance of having a fundamental flaw that invalidates their claims. There are a lot of articles in ASR, AJS, AJPS, etc. that are unconvincing, but few that are just wrong in the way the “himmicane” study was. Part of that may be due to conventions about length. A standard request from reviewers in sociology and political science is to do something extra: try another analysis, show more descriptive information, or say more about the magnitude of the effects. As a result of doing those things, problems are more likely to be discovered. One reason reviewers are ready to request extras is that there’s little pressure to keep articles short. I’ve never reviewed for journals like Science or PNAS, but my guess is that their reviewers take the approach of “do you see a problem?” rather than “what could the author do to strengthen the paper?”

  7. I disagree with Joshua’s framing of this issue. If some low-grade journal publishes 80% crap studies, is it then ok for top journals to publish 40% crap studies? The top journals deserve the scrutiny because of their popularity and influence. Perhaps Andrew’s implicit metric is the number of potential exposures of bad science.
