I agree with this comment

The anonymous commenter puts it well:

The problem is simple: the researchers are disproving always-false null hypotheses and taking this disproof as near proof that their theory is correct.
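To make the quoted point concrete, here is a minimal simulation sketch (the tiny effect size, the sample size, and the two-sample t-test are arbitrary choices for illustration): because the nil null of an exactly zero effect is essentially never true, a large enough sample will reject it even when the true effect is substantively negligible, and that rejection lends almost no support to whatever theory predicted “some” difference.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)

  true_effect = 0.02           # tiny, substantively meaningless difference in means
  n = 100_000                  # large sample per group
  group_a = rng.normal(0.0, 1.0, n)
  group_b = rng.normal(true_effect, 1.0, n)

  t_stat, p_value = stats.ttest_ind(group_a, group_b)
  print(f"p = {p_value:.2g}")  # typically far below 0.05
  # The nil null ("the difference is exactly zero") is rejected almost surely,
  # yet this tells us nothing about whether any particular theory is correct.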

9 thoughts on “I agree with this comment”

  1. Fernando,

    I do not see how anything can be deeper than replacing careful description, prediction, falsification, and independent replication with dynamite plots, p-values, affirming the consequent, and peer review. From my own experience I am confident in saying that confusion caused by NHST is at the root of this problem.

    • Incentives? Impact factors? Publish or die? “Interesting” and “new” valued above quality, reliability, or actually answering a research question, plus a silly and unbecoming obsession with being quoted in the NYT, etc…

      Given the incentives, something silly is bound to happen. The question is which is cause and which is effect.

      • Thank goodness for predatory fake journals: for a small fee you get to claim a publication that will never generate a testy letter to the editor and will likely disappear soon after your university has appraised it as real.

      • Maybe so, but at this level of depth we’re practically saying that human nature and the incentive structures it creates are fundamentally at odds with uncovering the truth.

        I don’t disagree with that – there has always been a conflict between power and truth. The illusion of progress mostly comes from the fact that conditions line up once in a while (e.g. discovering the truth is beneficial to someone in power, as during a space or arms race), so that something useful gets discovered.

        However, I’m afraid that even if we eliminated the current incentive structure, another one that also conflicts with discovery would appear.

      • Fernando,

        I do not believe it is possible for NHST (when using the nil null hypothesis) to provide any useful scientific information. Because the best it can do is provide zero information, and no experiment is perfect, in practice it can only have negative utility by making people believe something they should not (i.e., confusing them). I have taken the argument a bit farther than he appears to, but see John Myles White’s arguments here:

        http://www.johnmyleswhite.com/notebook/2012/05/18/criticism-4-of-nhst-no-mechanism-for-producing-substantive-cumulative-knowledge/

        • As noted by JMW, all the useful information is found in the intermediate step of parameter estimation. I would add that the calculation of a p-value also need not be useless, at least in the cases where it can act as shorthand for a likelihood, as described in Michael Lew’s paper below (though its role is then relegated to a minor one, and the previous “century of confusion” will probably prevent it from ever being widely useful in this way). It is specifically the accept/reject “hypothesis” step, which places the p-value at center stage, that is faulty. A small numerical sketch of the “shorthand for a likelihood” idea follows the citation below.

          To P or not to P: on the evidential nature of P-values and their place in scientific inference
          Michael J. Lew
          (Submitted on 1 Nov 2013)
          http://arxiv.org/abs/1311.0081
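          A minimal sketch of that “shorthand” idea, assuming a one-sided z-test with known sigma, a fixed sample size, and a made-up reported p-value (none of these numbers come from Lew’s paper): given the design, the p-value alone recovers the observed mean and hence the entire likelihood function for mu.

          import numpy as np
          from scipy import stats

          sigma, n, mu0 = 1.0, 25, 0.0   # assumed known sd, sample size, and null mean
          p = 0.03                       # hypothetical reported one-sided p-value

          se = sigma / np.sqrt(n)
          z_obs = stats.norm.ppf(1 - p)  # invert the p-value to recover the test statistic
          xbar = mu0 + z_obs * se        # ... and hence the observed sample mean

          def likelihood(mu):
              # likelihood of mu given the recovered sample mean (up to a constant)
              return stats.norm.pdf(xbar, loc=mu, scale=se)

          for mu in np.linspace(-0.5, 1.0, 7):
              print(f"mu = {mu:5.2f}   L(mu) = {likelihood(mu):.3f}")

          # Given the design, the p-value recovers the same likelihood curve the raw
          # data would give, which is what makes it potentially informative beyond an
          # accept/reject decision.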

  2. Pingback: The Notorious N.H.S.T. presents: Mo P-values Mo Problems « Statistical Modeling, Causal Inference, and Social Science
