“Any research object with a strong and obvious series of inconsistencies may be deemed too inaccurate to trust, irrespective of their source. In other words, the description of inconsistency makes no presumption about the source of that inconsistency.”

Nick Brown and James Heathers write:

We have seen two documents from the Scientific Integrity Officer at the University of Rennes-2 . . . The first of these dates from June 2018 and is entitled (our translation from French), “Preliminary Investigation Report Regarding the Allegations of Fraud against Nicolas Guéguen”. . . .

We would also like to stress that the title of this document is extremely inexact. We have not made, and do not make, any specific allegations of fraud, nor are any implied. The initial document that we released is entitled “A commentary on some articles by Dr. Nicolas Guéguen” and details a long series of inconsistencies in research methods, procedures, and data. The words “fraud” and “misconduct” do not appear in this document, nor in any of our communications with the people who helped with the investigation. We restrict ourselves to pointing out that results are “implausible” (p. 2) or that scenarios are “unlikely [to] be enacted in practice” (p. 31).

They continue:

The origin of inconsistencies (be it typographical errors, inappropriate statistical methods, analytical mistakes, inappropriate data handling, misconduct, or something else) is also irrelevant to the outcome of any assessment of research. Any research object with a strong and obvious series of inconsistencies may be deemed too inaccurate to trust, irrespective of their source. In other words, the description of inconsistency makes no presumption about the source of that inconsistency.

Well put. It’s all about Clarke’s Law.

4 thoughts on ““Any research object with a strong and obvious series of inconsistencies may be deemed too inaccurate to trust, irrespective of their source. In other words, the description of inconsistency makes no presumption about the source of that inconsistency.””

    • Steve:

      Yes, and this is also related to what we call the time-reversal heuristic: Instead of starting with the shaky claim and treating it as the status quo which requires some immense effort to set aside, start with the refutation or failed replication and then go from there. Thus, instead of (implicitly) acting as if we should believe each of the claims of Wansink, Kanazawa, etc., until demonstrably proved otherwise, we can relax and, by default, not trust what they have to say.

      Davies in your link discusses the Iraq war. This is different in that there was a perceived asymmetry of costs involved. The claim was not just that Saddam Hussein had weapons of mass destruction, but that (a) the cost of not starting a war, if there really were WMDs, was potentially very high, and (b) the cost of starting a war, if there really were not WMDs, was low. In fact, there were no WMDs and the cost of the war was high—but my point here is that the judgment of “Does Saddam have WMDs?” was conflated with “What is the cost of doing nothing if he has WMDs, compared to the cost of starting a war if he does not?” It’s similar to debates over climate change: disputes about costs of remediation and adaptation leak into disputes about the climate facts.
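[A minimal sketch of the expected-cost framing described in the comment above. The function and the numbers are hypothetical, invented purely for illustration; they are not from the post or the comment. The point it shows is that the factual judgment (how likely is it that WMDs exist?) and the cost assumptions (what does each action cost in each state of the world?) are separate inputs to the decision, and inflating one assumed cost can drive the decision even when the assessed probability is low, which is how the cost question can leak into the factual one.]

```python
def expected_cost(p_wmd, cost_act, cost_wait):
    """Expected cost of acting vs. waiting, given a belief p_wmd that WMDs exist.

    cost_act, cost_wait: dicts mapping True/False (WMDs exist or not)
    to the assumed cost of acting / waiting in that state of the world.
    """
    act = p_wmd * cost_act[True] + (1 - p_wmd) * cost_act[False]
    wait = p_wmd * cost_wait[True] + (1 - p_wmd) * cost_wait[False]
    return act, wait

# Purely illustrative numbers: with only a 20% belief in WMDs, acting still
# "wins" once the cost of waiting-when-they-exist is assumed to be huge.
act, wait = expected_cost(
    p_wmd=0.2,
    cost_act={True: 10, False: 10},    # war assumed costly either way
    cost_wait={True: 100, False: 0},   # waiting assumed catastrophic if WMDs exist
)
print(f"expected cost of acting: {act}, of waiting: {wait}")
```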

  1. Yes, to a properly functioning scientific community, fraud is treated the same as any other kind of error. All that matters is:

    1) Can you tell people what they must do to repeat your observations?
    2) Can you make accurate predictions* about future observations?

    * These must be sufficiently precise, or at least surprising enough, that they would be unlikely to come out right if your explanation were incorrect
