Endless citations to already-retracted articles

Ken Cor and Gaurav Sood write:

Many claims in a scientific article rest on research done by others. But when the claims are based on flawed research, scientific articles potentially spread misinformation. To shed light on how often scientists base their claims on problematic research, we exploit data on cases where problems with research are broadly publicized. Using data from over 3,000 retracted articles and over 74,000 citations to these articles, we find that at least 31.2% of the citations to retracted articles happen a year after they have been retracted. And that 91.4% of the post-retraction citations are approving—note no concern with the cited article. We augment the analysis with data from an article published in Nature Neuroscience highlighting a serious statistical error in articles published in prominent journals. Data suggest that problematic research was approvingly cited more frequently after the problem was publicized [emphasis added]. Our results have implications for the design of scholarship discovery systems and scientific practice more generally.

I think that by “31.2%” and “91.4%” they mean 30% and 90% . . . but, setting aside this brief lapse in taste or numeracy, their message is important.

P.S. In case you’re wondering why I’d round those numbers: I just don’t think those last digits are conveying any real information. To put it another way, in any sort of replication, I’d expect to see numbers that differ by at least a few percentage points. Reporting as 30% and 90% seems to me to capture what they found without adding meaningless precision.

28 thoughts on “Endless citations to already-retracted articles”

  1. I am wondering what might be done to improve the situation. For example, could the Science Citation Index be altered so that articles that have been retracted are labeled as such? This does seem like the ethical thing to do.

    • “I am wondering what might be done to improve the situation”

      Don’t they have things like plagiarism software/programs for papers?

      If so, could those be altered to highlight, in the reference section and elsewhere, citations to papers that have been retracted, and could we make sure journals use the software/programs when reviewing submitted papers?

      • “If so, could those be altered to highlight, in the reference section and elsewhere, citations to papers that have been retracted, and could we make sure journals use the software/programs when reviewing submitted papers?”

        Hmm, could this be an idea for someone/something (e.g., “Retraction Watch”)?

        Can’t you just have a database of retracted papers (one that is updated all the time), combine that with some sort of plagiarism-style software, and run it on a site where anyone can upload a (to-be-submitted) paper to have it “scanned” for citations to retracted papers?

      • Ah, thank you for the information!

        I would prefer a “simple” and “separate” site/software/program, not connected to a different thing like Zotero (whatever that exactly does).

        • It’s a reference manager. Like EndNote, but free (and, in my opinion, better). It is useful here because it will alert authors when a citation they add to their manuscript points to a retracted article.

        • Ah thank you for the information. I am not sure what a “reference manager” is, even after quickly trying to look it up.

          As you may have guessed already, i am not a fan of all these computer things that are supposedly making academic life easier and/or better. To me, they often miss the mark of what exactly matters, may make things more complicated than needed, and perhaps make things worse. More importantly, they usually take way too much of my time and energy to download, install, and then learn how to use.

          When i wrote all my manuscripts, i just looked up, and typed in, the references i needed. I think i like it that way. I am a simple man.

    • What would change the situation would be a move away from the use of supposedly objective metrics to evaluate the performance of researchers.

      However, such a move would require both a massive cultural change and the availability of some viable alternative for the administrators who must make, or desire to make, such evaluations. The problem faced by such administrators (and even by other researchers in the same department) is that they are often not competent to judge the quality of publications, and that they lack the time and resources to do so even when they are.

      Currently there are many perverse incentives to cite articles. For example, citing a highly cited article makes one’s article more likely to show up in forward citation searches, which are more likely to be performed on highly cited articles. One can imagine others.

    • > I am wondering what might be done to improve the situation

      There are two ways.

      The first is technical and can be handled by software: whenever a paper is submitted, just check whether its references are still valid.

      The second is more structural: ensure that scientists communicate with each other and know what is happening in their own field and in the fields they are referring to!
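
      The technical half is easy enough to sketch. Below is a minimal Python illustration of the idea discussed in this thread (a maintained list of retracted DOIs checked against a manuscript’s reference list); the file names, the CSV column name, and the DOI regex are placeholders for the sake of the example, not a description of any existing tool:

          import csv
          import re

          # Load a locally maintained list of retracted DOIs.
          # (Hypothetical file and column name; a real export from a
          # retraction database would need to be mapped to this format.)
          def load_retracted_dois(path="retracted_dois.csv"):
              with open(path, newline="", encoding="utf-8") as f:
                  return {row["doi"].strip().lower() for row in csv.DictReader(f)}

          # Pull DOI-like strings out of a manuscript's text.
          DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+', re.IGNORECASE)

          def check_references(manuscript_text, retracted):
              cited = {m.group(0).rstrip(".,;").lower()
                       for m in DOI_PATTERN.finditer(manuscript_text)}
              return sorted(cited & retracted)

          if __name__ == "__main__":
              retracted = load_retracted_dois()
              with open("manuscript.txt", encoding="utf-8") as f:
                  flagged = check_references(f.read(), retracted)
              for doi in flagged:
                  print("Cited DOI appears in the retraction list:", doi)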

  2. A little conscientiousness would also go a long way. As standard practice, whenever I’m about to cite something, I always go to Google Scholar and look at recent citations and related articles. Even working in theory, I’ve found many times when a big result has been found to have a flawed assumption, a poorly carried out example, a mistake in a proof, etc. But if I were to just settle for “Famous and Notable (1987) say X”, I’d probably have no fight from reviewers. Granted, sometimes there are so many citations that it might be hard to track down a replication or a response paper, but I don’t think many people even think this is part of writing an honest lit review.

      • Maybe you shouldn’t overestimate their ability to round the numbers in a way that you like, especially considering that apparently 73,564 citations to the retracted articles became “over 74,000” in the abstract…

        Would 30% and 90% mean anything contained in the intervals [25%, 35%] and [85%, 95%]? Maybe they could just say more than half and less than half…

      • I agree with the sentiment but I think researchers should be true to methodology and report whatever results their method spits out. If the precision of results isn’t appropriate, then perhaps it’s the method that should be changed rather than the results.

        • Specifically, most stats software works with 64-bit double-precision IEEE floating-point numbers, so there are approximately 15 significant digits of precision in every calculation. Does anyone want 31.2124905515227%?

        • Andree, I totally agree: Computers do exactly what we tell them.

          But it seems like when we build the model, we can build output that’s consistent with the input. The easy way to go seems to be: do the thinking up front, create the rules, and then run with whatever the box spits out.
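
        To make the precision point above concrete, here is a tiny Python illustration (the ratio is arbitrary, not the paper’s actual counts): a 64-bit double carries roughly 15–17 significant decimal digits, and it is the formatting step, not the computation, where one decides how many of those digits are worth reporting.

            # A double-precision ratio prints with far more digits than the data support.
            share = 1 / 3                    # arbitrary example ratio
            print(share)                     # 0.3333333333333333  (full double precision)
            print(f"{share:.1%}")            # 33.3%  (one decimal place)
            print(f"{round(share, 1):.0%}")  # 30%    (proportion rounded to one decimal, i.e., the nearest ten percentage points)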

  3. My impression is that most journals flag their retracted articles quite clearly (it says “RETRACTED” or some such on the article’s page).

    Hypothesis: Many researchers don’t even read the abstracts of many of the articles they cite, but copy citations from other (necessarily older) papers. Hence, they never notice that the articles in question have been retracted.

    • Quote from above: “Hypothesis: Many researchers don’t even read the abstracts of many of the articles they cite, but copy citations from other (necessarily older) papers. Hence, they never notice that the articles in question have been retracted.”

      Yep, i agree with that thought.

      This is also why something else could be useful besides journals flagging papers with “RETRACTED”.

  4. This puts the lie to the claim that it’s okay to over-interpret your evidence or exaggerate your conclusions because, if your findings aren’t replicated, at least you will have used the scientific process to move the field forward.

    • Quote from above: “This puts the lie to the claim that it’s okay to over-interpret your evidence or exaggerate your conclusions because, if your findings aren’t replicated, at least you will have used the scientific process to move the field forward.”

      Yes!

      I will use your comment to (again) mention a principle i came across and that i think deserves much more attention in (social) science: “Primum non nocere” (“First, to do no harm”). From the Wikipedia page:

      “Primum non nocere (Classical Latin: [ˈpriːmʊ̃n noːn nɔˈkeːrɛ]) is a Latin phrase that means “first, to do no harm.” The phrase is sometimes recorded as primum nil nocere.[1]

      Non-maleficence, which is derived from the maxim, is one of the principal precepts of bioethics that all medical students are taught in school and is a fundamental principle throughout the world. Another way to state it is that, “given an existing problem, it may be better not to do something, or even to do nothing, than to risk causing more harm than good.” It reminds physicians to consider the possible harm that any intervention might do.”
