Endless citations to already-retracted articles

Ken Cor and Gaurav Sood write:

Many claims in a scientific article rest on research done by others. But when the claims are based on flawed research, scientific articles potentially spread misinformation. To shed light on how often scientists base their claims on problematic research, we exploit data on cases where problems with research are broadly publicized. Using data from over 3,000 retracted articles and over 74,000 citations to these articles, we find that at least 31.2% of the citations to retracted articles happen a year after they have been retracted. And that 91.4% of the post-retraction citations are approving—note no concern with the cited article. We augment the analysis with data from an article published in Nature Neuroscience highlighting a serious statistical error in articles published in prominent journals. Data suggest that problematic research was approvingly cited more frequently after the problem was publicized [emphasis added]. Our results have implications for the design of scholarship discovery systems and scientific practice more generally.

I think that by “31.2%” and “91.4%” they mean 30% and 90% . . . but, setting aside this brief lapse in taste or numeracy, their message is important.

P.S. In case you’re wondering why I’d round those numbers: I just don’t think those last digits are conveying any real information. To put it another way, in any sort of replication, I’d expect to see numbers that differ by at least a few percentage points. Reporting as 30% and 90% seems to me to capture what they found without adding meaningless precision.

28 Comments

  1. Martha (Smith) says:

    I am wondering what might be done to improve the situation. For example, could the Science Citation Index be altered so that articles that have been retracted are labeled as such? This does seem like the ethical thing to do.

    • Martha (Smith) says:

      I am thinking especially of the web version — changing print versions is a bigger task.

    • Anonymous says:

      “I am wondering what might be done to improve the situation”

      Don’t they have things like plagiarism software/programs for papers?

      If so, could those be altered to include “highlighting” the mentioning of papers in things like the references section that have been retracted, and make sure journals use the software/programs when reviewing submitted papers?

      • Anonymous says:

        “If so, could those be altered to include “highlighting” the mentioning of papers in things like the references section that have been retracted, and make sure journals use the software/programs when reviewing submitted papers?”

        Hmm, could this be an idea for someone/something (e.g. “Retraction watch”?)?

        Can’t you just have a database of retracted papers (that is being updated all the time), and combine that with some sort of plagiarism software and have it be active on a specific site where everyone can upload a specific (to be submitted) paper to have it be “scanned” to find possible retracted papers?

      • Antonio Vargas says:

        If authors had to upload their bibtex files when submitting a manuscript, those might be easier to use directly.
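        A minimal sketch of the idea in these two comments, assuming a BibTeX submission and a locally maintained list of retracted DOIs (for instance, a snapshot exported from the Retraction Watch database — the file format, DOIs, and function names here are all illustrative):

        ```python
        import re

        # Hypothetical local snapshot of retracted DOIs; in practice this would
        # be refreshed regularly from a curated source such as the Retraction
        # Watch database.
        RETRACTED_DOIS = {
            "10.1234/example.retracted.2019",
        }

        # Matches doi = {...} or doi = "..." fields in a BibTeX entry.
        DOI_PATTERN = re.compile(r'doi\s*=\s*[{"]([^}"]+)[}"]', re.IGNORECASE)

        def flag_retracted(bibtex_text):
            """Return the DOIs cited in a BibTeX string that appear on the retracted list."""
            found = DOI_PATTERN.findall(bibtex_text)
            return [doi.strip() for doi in found if doi.strip() in RETRACTED_DOIS]

        bib = """
        @article{good2020,
          title = {A fine paper},
          doi = {10.5555/solid.result}
        }
        @article{bad2019,
          title = {A retracted paper},
          doi = {10.1234/example.retracted.2019}
        }
        """
        print(flag_retracted(bib))  # -> ['10.1234/example.retracted.2019']
        ```

        A journal's submission system could run a check like this automatically and ask the author to confirm or remove any flagged reference before the manuscript goes to reviewers.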

    • Vince says:

      Zotero (which I don’t use, but is widely used in the social sciences, I think) has something like this in development: Retracted Item Notifications

      • Anonymous says:

        Ah, thank you for the information!

        I would prefer a “simple” and “separate” site/software/program, not connected to a different thing like Zotero (whatever that exactly does)

        • Adede says:

          It’s a reference manager. Like EndNote, but free (and better in my opinion). This is useful because it will alert authors as they add a citation to a retracted article to their manuscript.

          • Anonymous says:

            Ah, thank you for the information. I am not sure what a “reference manager” is, even after quickly trying to look it up.

            As you may have guessed already, I am not a fan of all these computer things that are supposedly making academic life easier and/or better. To me, they often miss the mark of what exactly matters, may make things more complicated than needed, and perhaps make things worse. More importantly, they usually take me way too much time and energy to download, and install, and then learn how to use.

            When I wrote all my manuscripts, I just looked up, and typed in, the references I needed. I think I like it that way. I am a simple man.

    • Dan F. says:

      What would change the situation would be a move away from the use of supposedly objective metrics to evaluate the performance of researchers.

      However, such a move would require both a massive cultural change and the availability of some viable alternative for the administrators who must make or desire to make such evaluations. The problem faced by such administrators (and even other researchers in the same department) is incompetence to judge the quality of publications, and insufficient time and resources to do so even when competent to do so.

      Currently there are many perverse incentives to cite articles. For example, citing a highly cited article makes one’s article more likely to show up in forward citation searches, which are more likely to be performed on highly cited articles. One can imagine others.

    • > I am wondering what might be done to improve the situation

      There are two ways.

      First one is technical and can be handled by software. Just check, whenever submitting a paper, if references are still valid.

      Second one is more structural. Just ensure that scientists communicate with each other and know what is happening in their field and the fields they are referring to!

  2. A little conscientiousness would also go a long way. As standard practice, whenever I’m about to cite something, I always go to Google Scholar and look at recent citations and related articles. Even working in theory, I’ve found many times when a big result has been found to have a flawed assumption, a poorly carried out example, a mistake in a proof, etc. But if I were to just settle for “Famous and Notable (1987) say X”, I’d probably have no fight from reviewers. Granted, sometimes there are so many citations that it might be hard to track down a replication or a response paper, but I don’t think many people even think this is part of writing an honest lit review.

  3. Adede says:

    Zotero now flags retracted papers to help avoid this problem.

  4. Paul says:

    The extra digits can be useful in detecting computational mistakes or possibly fraud, and in reconstructing raw data when not supplied.
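    Paul’s point can be illustrated with a small sketch (the numbers below are illustrative, not reconstructed from the actual paper): a percentage reported to one decimal place constrains which integer counts are possible for a given denominator, so the extra digit can sometimes pin down the raw data exactly.

    ```python
    def consistent_counts(reported_pct, n, decimals=1):
        """Integer counts k out of n whose rounded percentage matches the reported value."""
        return [k for k in range(n + 1)
                if round(100 * k / n, decimals) == reported_pct]

    # With a modest denominator, "31.2%" pins down the count exactly...
    print(consistent_counts(31.2, 250))              # -> [78]
    # ...while a rounded "30%" is consistent with a range of counts.
    print(consistent_counts(30.0, 250, decimals=0))  # -> [74, 75, 76]
    ```

    This is the same logic behind consistency checks such as the GRIM test: a reported statistic that no integer count can produce is a red flag for a computational mistake, or worse.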

  5. LemmusLemmus says:

    My impression is that most journals flag their retracted articles quite clearly (it says “RETRACTED” or some such on the article’s page).

    Hypothesis: Many researchers don’t even read the abstracts of many of the articles they cite, but copy citations from other (necessarily older) papers. Hence, they never notice that the articles in question have been retracted.

    • Anonymous says:

      Quote from above: “Hypothesis: Many researchers don’t even read the abstracts of many of the articles they cite, but copy citations from other (necessarily older) papers. Hence, they never notice that the articles in question have been retracted.”

      Yep, I agree with that thought.

      This is also why something else could be useful besides journals flagging papers with “RETRACTED”.

  6. Michael Nelson says:

    This puts the lie to the claim that it’s okay to over-interpret your evidence or exaggerate your conclusions because, if your findings aren’t replicated, at least you will have used the scientific process to move the field forward.

    • Anonymous says:

      Quote from above: “This puts the lie to the claim that it’s okay to over-interpret your evidence or exaggerate your conclusions because, if your findings aren’t replicated, at least you will have used the scientific process to move the field forward.”

      Yes!

      I will use your comment to (again) mention a principle I came across and I think deserves much more attention in (social) science: “Primum non nocere” (“First, to do no harm”). From the Wikipedia page:

      “Primum non nocere (Classical Latin: [ˈpriːmʊ̃n noːn nɔˈkeːrɛ]) is a Latin phrase that means “first, to do no harm.” The phrase is sometimes recorded as primum nil nocere.[1]

      Non-maleficence, which is derived from the maxim, is one of the principal precepts of bioethics that all medical students are taught in school and is a fundamental principle throughout the world. Another way to state it is that, “given an existing problem, it may be better not to do something, or even to do nothing, than to risk causing more harm than good.” It reminds physicians to consider the possible harm that any intervention might do.”

  7. Fr. says:

    Kinda related — ‘zombie’ papers that everyone (author included) agreed should be retracted, but still live on: https://steamtraen.blogspot.com/2019/05/an-update-on-our-examination-of.html (Heathers and Brown about papers by psychologist Nicolas Guéguen)
