Tip o’ the iceberg to ya

Paul Alper writes:

The Washington Post ran this article by Fred Barbash with an interesting quotation:

“Every day, on average, a scientific paper is retracted because of misconduct,” Ivan Oransky and Adam Marcus, who run Retraction Watch, wrote in a New York Times op-ed in May.

But can that possibly be true, just from misconduct alone, not counting honest mistakes? Science researchers would then seem to belong in the same league as used car dealers.

My reply: I don’t give two poops about Dr. Anil Potti, but sure, with over a million scientific papers published a year, I’d think we should be seeing tens of thousands of retractions a year. Just as a point of reference, I’ve published a few hundred research papers and retracted two of them (or run corrections that I consider the equivalent of retraction). I think I’m a pretty good scientist, so I’m guessing that other people’s retraction rates should be higher than my approximately half of one percent.

If tens of thousands of papers should be retracted each year, that comes to something like 100 a day. So if only 1 paper a day is retracted, my guess is that something like 99% of the papers that should be retracted, aren’t.
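The arithmetic above can be written out as a quick sanity check. The 36,500-per-year figure below is an illustrative stand-in for the post’s “tens of thousands,” chosen so the numbers divide cleanly; it is not a measured quantity.

```python
# Back-of-envelope check of the retraction arithmetic in the post.
# All figures are the post's own rough estimates, not measured data.
should_retract_per_year = 36_500   # "tens of thousands" a year (illustrative)
days_per_year = 365

should_retract_per_day = should_retract_per_year / days_per_year  # ~100/day
actual_per_day = 1                 # Oransky and Marcus: one retraction a day

never_retracted = 1 - actual_per_day / should_retract_per_day     # ~0.99
print(f"should be retracted per day: {should_retract_per_day:.0f}")
print(f"share never retracted: {never_retracted:.0%}")
```

Under these assumptions, roughly 99% of the papers that should be retracted never are, which is the post’s conclusion.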

13 thoughts on “Tip o’ the iceberg to ya”

  1. This is why relative frequencies are more informative than absolute frequencies for rare events.

    1 paper per day retracted due to misconduct sounds like a rampant epidemic, until you compare it to everything else that is happening in the world.

    “For every 100 accidental poisoning deaths in the US, a scientific paper somewhere in the world is retracted due to misconduct.”

    Somehow that doesn’t sound nearly as rampant.

  2. I don’t understand the comment you’re commenting on. The Washington Post article itself clearly points out that “Researchers publish some 2 million articles a year.” Compared to this, 1 retracted article per day is an extremely small number (about 0.02%) — how would one infer from that that “Science researchers would then seem to belong in the same league as used car dealers”? (I agree with Andrew’s comment that the number that *should* be retracted is undoubtedly higher.) Perhaps the point is the opposite — that we unnecessarily slander used car dealers! I do feel sorry for these people who have to work under the stereotype that they’re dishonest.

  3. As has been pointed out, the more prestigious the journal, the higher the retraction rate; a journal that is not read is unlikely to suffer retractions. Therefore, the retraction rate cited by Oransky and Marcus likely vastly understates the situation.

  4. What makes this issue particularly difficult is that it is not as easy as one would think to pinpoint what constitutes fraud in science; sure, there are some clear-cut cases, but not everything is cut and dried. My favorite example is Millikan’s oil drop experiment. As is preached in introductory physics, he is credited with measuring the charge of the electron by balancing the gravitational force on an oil droplet against the electric force and solving for the charge, which should come out as a small integer multiple of the electron’s charge. It turns out that his lab notebooks record some 175 measurements, of which he reported only 58. Was he a fraud? This is examined in a nice read here: http://calteches.library.caltech.edu/4014/1/Millikan.pdf.

  5. I assume your retractions weren’t due to misconduct, but I think your point still stands: with a million publications per year, it’s not crazy to see 365 instances of misconduct.

  6. It’s not clear to me that the model where scientific publications represent honestly-produced, “accurate” work is actually more efficient than one that allows errors or even outright fraud to persist.

    • Choosing the most impactful topics, and having the skill to discern which have that potential and which do not, is essential. This skill is grounded in critical thinking. Without a critical mass of participants with a high level of critical thinking skill, in which foundational assumptions and their logical implications are recognized and extrapolated, you are correct that very accurate measures of meaningless constructs will be far more likely, and fraudulent behavior within such a system likely has little effect. The problem with passive acceptance of this type of system is that it will inevitably self-destruct.

      If you contrast this with a system in which topics are not only important but grounded in the real world, what emerges is the reality that fraudulent input does in fact have a substantial negative impact.

      • This reminds me of what should be called the “Wisdom Conjecture” of scientific advancement:

        The great scientists of old such as Newton, Euler, Gauss, and so on, were about as clever as the typical physicist or mathematician today (or even back in their own day), who are in turn about as clever as the top people in any field. Their success was due to having the wisdom to identify the most fruitful lines of research.
