Don’t talk about hypotheses as being “either confirmed, partially confirmed, or rejected”

Kevin Lewis points us to this article by Paige Shaffer et al., “Gambling Research and Funding Biases,” which reports, “Gambling industry funded studies were no more likely than studies not funded by the gambling industry to report either confirmed, partially confirmed, or rejected hypotheses.”

The paradox is that this particular study was itself funded by the gambling industry! So we’re in a kind of infinite regress here.

Anyway, for the purpose of this post I’m not interested in gambling or conflicts of interest. Rather, my interest was triggered by the phrase, “confirmed, partially confirmed, or rejected hypotheses.”

In social science, I think we should move away from the idea that we have hypotheses that are confirmed or not. My concern here is similar to my issues with null hypothesis significance testing:

1. Hypotheses in social science are vague enough that it’s just about never clear what it would mean for a hypothesis to be “confirmed” or “rejected.” There are no true zeroes, and even when it comes to estimation, the size and direction of comparisons can vary across people and scenarios.

2. Conflation of rejection in a null hypothesis significance test with rejection of a substantive hypothesis.

3. Rejection of null hypothesis A taken as support for, or confirmation of, favored alternative hypothesis B.

4. Dichotomization—or, one might say, premature dichotomization—throwing away information at all stages of a study, from design and data collection through coding and data analysis.

I think there are a few more issues here that I’m forgetting. The key point is that the problems of null hypothesis significance testing arise not just with original studies but also with replications, meta-analyses, literature reviews, and the like.
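Point 4 above can be illustrated with a minimal sketch. The numbers below are made up for illustration: two studies with nearly identical estimates land on opposite sides of the p = 0.05 line, so a confirmed/rejected summary reports them as contradictory even though the evidence they carry is almost the same.

```python
import math

def two_sided_p(estimate, se):
    """Two-sided p-value for a normal z-test of a zero null."""
    z = estimate / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical studies: (estimate, standard error)
studies = {"A": (0.20, 0.10),   # z = 2.0, p ~ 0.0455
           "B": (0.19, 0.10)}   # z = 1.9, p ~ 0.0574

for name, (est, se) in studies.items():
    p = two_sided_p(est, se)
    verdict = "confirmed" if p < 0.05 else "rejected"
    print(f"study {name}: estimate {est:.2f}, p = {p:.3f} -> {verdict}")
```

Nearly identical data, opposite dichotomized verdicts: the information thrown away by the confirmed/rejected summary is exactly what a replication or meta-analysis would need.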

11 Comments

  1. Eliot J says:

    A similar sentiment was expressed by Jorma Rissanen in his book Optimal Estimation of Parameters. He wrote (paraphrasing): don’t regard models as true or false; think of them as good, bad, or in-between.

  2. “Confirmed” is based on an infinite regress: confirmed on the basis of assumptions, which were themselves confirmed on the basis of hyper-assumptions …

    Perhaps more plainly, the only thing we ever learn about reality is that it is not compatible with what we expected. On the other hand, when it seems compatible, we just have not had the opportunity to view it in enough detail to see the ways in which it was other than expected.

  3. somebody says:

    I think there’s also something to be said against treating social science theory like physical theory. There’s no time-translation invariance here: things that are true today aren’t necessarily true tomorrow, so the implication of settled science carried by the word “confirmed” seems inappropriate.

    I’m sure there are lots of important and true relationships between anti-LGBT attitudes and other political views that were “true” and measurable two decades ago but don’t hold now. If we had tested a hypothesis on that and “confirmed” it with a high degree of certainty then, I’d still want to retest it now.

    • Kyle C says:

      John Maynard Keynes warned eloquently about exactly this issue in economics.

    • jim says:

      “I’d still want to retest it now.”

      And it’s possible that neither would be relevant even in its own time, since people often tell surveyors what they think the surveyors want to hear. There aren’t a lot of good reasons to reveal socially inappropriate views to surveyors.

      And it’s also possible that “attitudes” as perceived or represented in a particular study would have little or no relationship to behavior. The fact that an individual doesn’t like people from Group X doesn’t necessarily imply that they’ll tolerate those people being abused. OTOH – as we’ve seen many times throughout history – a person who has no problem with people from Group X might be more than happy to abuse them relentlessly if that’s what’s necessary to maintain cordial relationships with various other people.

  4. dl says:

    The work cited in Rainey, “Testing for a Negligible Effect” may be relevant to this conversation.

  5. The point, I think, is that “confirmed” implies something unhelpful for decision making: you have some confidence in the claim, and the study provides evidence one way or the other. So, to put it differently, the suggestion here is to be a bit more Bayesian in our interpretation of NHST: even if classical statistical analysis is appropriate for conducting and evaluating individual studies, when you’re deciding what to research next, or what decisions to make, Bayesian thinking is a more helpful paradigm.

  6. Justin Smith says:

    I never understood the ‘there are no true zeroes’ argument. There may not be true zeroes, but there may be an interval [a, b] around 0 that can be made as small as you want. Also, we can often test for any constant c just by subtracting off or shifting by c, so isn’t saying ‘there are no true zeroes’ the same as saying ‘there are no true constants’?

    For 2, the issue of practical significance can be addressed by equivalence testing and specifying upfront the smallest effect size of interest (SESOI).

    Justin
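The equivalence-testing idea mentioned above can be sketched with a minimal TOST (two one-sided tests) against normal theory; the estimate, standard error, and SESOI of 0.1 below are invented for illustration.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def tost_p(estimate, se, sesoi):
    """Largest of the two one-sided p-values for H0: |effect| >= sesoi.

    A small value supports equivalence, i.e. the effect lying
    inside (-sesoi, +sesoi).
    """
    p_upper = norm_cdf((estimate - sesoi) / se)      # test H0: effect >= +sesoi
    p_lower = 1 - norm_cdf((estimate + sesoi) / se)  # test H0: effect <= -sesoi
    return max(p_upper, p_lower)

# Effect estimated near zero with a small standard error:
p = tost_p(estimate=0.01, se=0.03, sesoi=0.10)
print(f"TOST p = {p:.4f}")  # → TOST p = 0.0013, i.e. within the bounds
```

Note that with a SESOI smaller than the noise level (say 0.01 here), the same data no longer support equivalence, which is why the SESOI has to be specified up front rather than after seeing the estimate.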

    • Martha (Smith) says:

      I’d say, “There might be true zeroes, but we can’t tell whether there are or not, so for practical purposes there are no true zeroes”. (Same goes for “true constants”.)

    • somebody says:

      > Also we can often test for any constant c just by subtracting off or shifting by c, so isn’t saying ‘there are no true zeroes’ the same as saying ‘there are no true constants’.

      I’d say there are no true constants. Or, more precisely, under the continuous probability models usually assumed for real-valued parameters, any single point in the uncountably infinite sample space has measure zero. So, if you ask me whether your theta is equal to any constant value, I will tell you no with as much confidence as I have in anything.

      That’s kind of semantic, but I think that’s sort of the point of the objection. The question of whether or not some hypothetical theta is exactly equal to some constant under some probabilistic, mathematical model that’s wrong anyways is a semantic question with, ironically, no significance.
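A numerical sketch of the “no true zeroes” point, with invented numbers: if the true effect is tiny but nonzero, a point-null z-test of theta = 0 rejects once n is large enough, so “rejection” mostly measures sample size rather than anything substantive.

```python
import math

true_effect, sd = 0.001, 1.0  # tiny but nonzero true effect (assumed)

for n in (10_000, 1_000_000, 100_000_000):
    z = true_effect / (sd / math.sqrt(n))  # z-statistic at the true effect
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    print(f"n = {n:>11,}: z = {z:6.2f}, p = {p:.3g}")
```

At n = 10,000 the point null looks comfortably “confirmed”; at n = 100,000,000 the same world yields z = 10 and an essentially zero p-value. Nothing about the substantive hypothesis changed, only the sample size.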
