Here Comes the Stats: The Impact of Incidental Contextual Factors on Research Acquisition

Chetan Chawla writes:

I think this article from my field (in one of our top journals) is right up your alley:

It “proves” the hypothesis that the weather on a startup’s demo day influences investments. I haven’t dug into the methodology yet, but it reminded me of your Himmicane blog post; curious what you think of this one?

This actually reminds me of what Gary Taubes was told early in his science-writing career, when he was debunking cold fusion: “If you think this science is bad, you should look at nutrition.” What passes for science in business/management is just as astounding.

The above-linked article, “Here Comes the Sun: The Impact of Incidental Contextual Factors on Entrepreneurial Resource Acquisition,” follows some of Rolf Zwaan’s 18 rules for success, with the main difference being that it makes use of observational data rather than lab experiments. I have the usual concerns about piranhas and forking paths.

At this point some of you might be annoyed that I express skepticism about these published claims without offering any specific criticisms. But that’s kind of the point! You should be skeptical of a claim of large effects and complicated interactions based on a bunch of p-values. The appropriate attitude is not to start with Bem-like credulity and wait until all the specific problems in the study have been found; rather, start with skepticism and then go from there.

14 thoughts on “Here Comes the Stats: The Impact of Incidental Contextual Factors on Research Acquisition”

  1. I guess skepticism is the right initial attitude if you believe (on the basis of prior expertise, or something like this) that the methods and/or the scientific discipline are likely to produce unreliable results.

    But then there are also people who aren’t experts. For these non-experts, I wonder if the right attitude would be something like: “OK, I don’t know what to think (yet!).” If such acceptance of ignorance is coupled with a willingness to learn about the methods and/or the scientific discipline, the non-expert could then take the appropriate steps toward either skepticism or credulity.

    • I do not know of any kind of skepticism that is not ‘I do not know’.

      [recalling the intro Garnet Chan gave to one of his talks at the Simons Foundation, addressed to ‘experts in other fields’]

    • The motto of the Royal Society was nullius in verba, or “take no one’s word for it.” It arose as a response to the thousand years of stagnation caused by overreliance on the argument-from-authority heuristic.

      For generations, the brightest and most highly educated people spent their lives on problems most would consider irrelevant today, like whether the Son was begotten by the Father or the Father begat the Son.

  2. With this kind of thing, I often look for the smallest p-values for the central hypotheses. If they are in [.01, .1], then, yeah, it’s probably just noise (unless there is something special about what’s being studied). Not much need to look harder. If tiny, then I try to figure out whether they account for all the important dependencies (e.g., clustered assignment of regressors) in the data; often this isn’t the case. Here it seems like we’re in the first case (though, as this involves weather, we should perhaps be concerned about the second too if we cared to look harder).
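
    A toy simulation of that second concern (a made-up setup of my own, not from the paper): if the “weather” regressor is shared by every pitch on the same demo day but the test treats pitches as independent, the nominal p-values come out overconfident even when the true effect is zero.

```python
# Hypothetical setup, not from the paper: "weather" varies by demo day,
# outcomes share a day-level shock, and the true weather effect is zero.
# A naive OLS t-test that ignores the clustering rejects far more often
# than its nominal 5%.
import numpy as np

rng = np.random.default_rng(0)

def naive_t_stat(x, y):
    """t-statistic for the slope in simple OLS, assuming iid errors."""
    n = len(y)
    xc, yc = x - x.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)
    resid = yc - b * xc
    sigma2 = (resid @ resid) / (n - 2)
    return b / np.sqrt(sigma2 / (xc @ xc))

n_days, pitches_per_day, n_sims = 40, 10, 2000
rejections = 0
for _ in range(n_sims):
    weather = rng.normal(size=n_days)     # regressor assigned at the day level
    day_shock = rng.normal(size=n_days)   # shared day-level noise
    x = np.repeat(weather, pitches_per_day)
    y = np.repeat(day_shock, pitches_per_day) + rng.normal(size=n_days * pitches_per_day)
    if abs(naive_t_stat(x, y)) > 1.96:    # nominal two-sided 5% test
        rejections += 1

print(f"false-positive rate: {rejections / n_sims:.2f} (nominal 0.05)")
```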

    • That’s a good point. As Ioannidis says, moving the p-value threshold much lower would help. I’m not optimistic enough to hope for an end to NHST, so lower thresholds would be welcome.

      • Only lowering the p-value threshold without increasing the sample size is futile. You may luck out with a 0.01 just as with a 0.05, since you only have one shot at it.
        The former may lead you to overconfidence, though. Not to mention you also risk missing the effect entirely (a type II error).
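
        A rough power simulation to illustrate the tradeoff (toy numbers, nothing to do with this paper): at a fixed, modest sample size, tightening the threshold from 0.05 to 0.01 mostly costs power against a small true effect; the gain comes from also collecting more data.

```python
# Toy numbers: two-sample t-tests with a small true effect (0.3 sd).
# At fixed n, dropping alpha from .05 to .01 mainly lowers power;
# the real improvement comes from a larger sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n, effect=0.3, alpha=0.05, sims=4000):
    """Fraction of simulated two-sample t-tests rejecting at `alpha`."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(effect, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims

for n in (50, 200):
    print(f"n={n}: power at .05 = {power(n, alpha=0.05):.2f}, "
          f"at .01 = {power(n, alpha=0.01):.2f}")
```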

      • Chetan, Navigator:

        No no no! All the fussing around with p-values and significance levels and sample sizes is pointless in the absence of good theory and measurement. At best, stricter rules will dissuade future researchers from doing hopelessly noisy studies, but existing bad research will not be helped by these contextless statistical steps.

  3. David, of course in an ideal world people would move away from the obsession with p-values. But I’m not sure many fields are there yet; the incentives are to publish “significant” findings from noisy data.

  4. “The appropriate attitude is … start with skepticism and then go from there.”

    Excellent. On this, at least, we agree.

    Once there’s some sensible reason to tentatively accept the results, one can move away from reactionary skepticism and see if it’s possible to build a case for acceptance.
