You’ll never guess what I say when I have nothing to say

A reporter writes:

I’m a reporter working on a story . . . and I was wondering if you could help me out by taking a quick look at the stats in the paper it’s based on.

The paper is about paedophiles being more likely to have minor facial abnormalities, suggesting that paedophilia is a neurodevelopmental disorder that starts in the womb. We’re a bit concerned that the stats look weak, though – small sample size, no comparison to healthy controls, large SD, etc.

If you have time, could you take a quick look and let me know if the statistics seem to be strong enough to back up their conclusions? The paper is here:
http://link.springer.com/article/10.1007/s10508-015-0564-7/fulltext.html

I replied: Yes, I agree, I don’t find this convincing, and it’s also hard to know what to do with this. It doesn’t seem newsworthy to me. That said, I’m not an expert on this topic.

9 thoughts on “You’ll never guess what I say when I have nothing to say”

  1. Oh, you could do stuff with it: if the effect were real you could, for example, try screening during prenatal tests, or during risk assessments for sensitive jobs, etc.

    But that’s all moot if this is just one of those typical low-powered / non-preregistered / fishing / crappy-methodology studies that mistake some noise for an effect. And I think it is.

    I’m saying it isn’t so hard to know what to do with it, if only it were true.

    • Assuming this held up and was not due to lack of blinding (how many ‘minor physical anomalies’ like undetached ear lobes could you find if you were looking?) or some sort of selection bias, it’s still hard to imagine any predictive test of pedophilia, besides ‘has previously been convicted of sex crimes’, that would not run into the base-rate problem and be useless, because it’s not a common crime to be convicted of – the rough calculation below makes the point. (And as sex crimes get expanded, even that will probably be increasingly useless – being convicted of ‘sexting’ as a teen is probably inversely correlated with pedophilia!)
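
      A back-of-the-envelope sketch of that base-rate arithmetic, in a few lines of Python, with invented numbers purely for illustration (the prevalence and the test’s accuracy below are assumptions, not anything estimated in the paper):

        # Base-rate sketch with assumed, illustrative numbers.
        prevalence  = 0.01   # assumed fraction of the screened population who are true cases
        sensitivity = 0.90   # assumed: the test flags 90% of true cases
        specificity = 0.90   # assumed: the test clears 90% of non-cases

        true_pos  = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        ppv = true_pos / (true_pos + false_pos)

        print(f"P(case | flagged) = {ppv:.1%}")   # about 8%: most flagged people are false positives

      Even granting a screening test far more accurate than anything minor physical anomalies could plausibly support, the great majority of the people it flagged would be false positives.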

    • Note the delightfully classical prejudice against the non-right-handed as well, even in the abstract.

      “Evidence is steadily accumulating to support a neurodevelopmental basis for pedophilia.”

      What are the usual statistics for the rate and steadiness of the accumulation of evidence? And greater steadiness in the accumulation of evidence predicts what, in what model?

      • I’ve been telling people for a while, but they don’t want to believe it is that bad: allow testing a null hypothesis, rather than the researcher’s hypothesis, to count as evidence, and you can “accumulate evidence” for anything you want (the toy simulation below shows how easily pure noise does it). It is totally useless.

        The only thing slowing down the coming epic crash is that people trained in science still hold positions of importance. Once that critical mass has retired, the field of research goes to shit and is replaced by political picking and choosing.
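
        As a toy illustration (entirely simulated data, nothing from the paper): run a pile of small studies where the true effect is exactly zero, test the usual null at p < 0.05, and write up only the hits – the literature will still look like evidence is steadily accumulating.

          import random
          random.seed(1)

          # 200 small two-group studies of a true effect of exactly zero.
          def one_study(n=30):
              a = [random.gauss(0, 1) for _ in range(n)]
              b = [random.gauss(0, 1) for _ in range(n)]
              diff = sum(a) / n - sum(b) / n
              se = (2 / n) ** 0.5           # known unit variance, so a simple z-test
              return abs(diff / se) > 1.96  # True = "statistically significant"

          hits = sum(one_study() for _ in range(200))
          print(f"{hits} of 200 pure-noise studies came out 'significant'")  # around 5% of them

        With selective publication, those ten or so hits are the ones that reach print and get cited, and the evidence duly appears to “steadily accumulate” for an effect that isn’t there.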

    • This appears to be an example of the tenuous inferential chain so commonly adopted in psychology (see the sketch after the list):

      1. Weak relationship found between ‘a’ & ‘b’ with small sample studies.
      2. Weak relationship found between ‘b’ & ‘c’ with small sample studies.
      3. Weak relationship found between ‘c’ & ‘d’ with small sample studies.
      4. Combine the studies above into a meta-analysis that is itself still a small-sample study, and conclude ‘5’:

      5. Inference: ‘a’ causes ‘b’, therefore ‘a’ causes ‘c’, therefore ‘a’ causes ‘d’.

      6. Policy Recommendation: All ‘c’ should be considered ‘d’ and anything you might do to or for ‘d’ should be done to or for ‘c’.
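
      To put a rough number on how little such a chain is worth, here is a toy simulation (the 0.3 per link is an invented effect size, not anything estimated from these literatures): if each link is a correlation of about 0.3, the implied a–d relationship is roughly 0.3³ ≈ 0.03 – essentially nothing.

        import random, statistics  # statistics.correlation needs Python 3.10+
        random.seed(1)

        # Toy chain a -> b -> c -> d, each link a modest correlation of ~0.3.
        r, n = 0.3, 100_000
        noise = (1 - r**2) ** 0.5
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [r * x + noise * random.gauss(0, 1) for x in a]
        c = [r * x + noise * random.gauss(0, 1) for x in b]
        d = [r * x + noise * random.gauss(0, 1) for x in c]

        print(round(statistics.correlation(a, b), 3))  # ~0.30, the per-link correlation
        print(round(statistics.correlation(a, d), 3))  # ~0.03, what the full chain actually buys you

      And that is before worrying that each 0.3 is itself an overestimate from noisy small-sample studies, or that correlation at every step is being read as causation.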

    • Don’t be too quick – has the reporter published their story yet? If they asked you and then ignored what you said, will you feel better or worse? Or might it help if they adopt a power pose?

  2. The story could be that papers get published (and taken seriously?) when they lack the evidence to back them. The question could be asked, and possibly even answered: “Is publication with unconvincing evidence more prevalent on highly emotive topics?” Like the one in the referenced paper – about as emotive as it is possible to get – as opposed to a paper on something considered extremely dry.

    To what extent are such papers used as justification for policy? How should policy makers decide whether research is worthwhile – peer review in journals isn’t looking so great in this case. The journalist, despite having basic stats literacy, has had to get a separate review from a (famous?) statistics professor. Should we be comfortable about the need for that kind of authority?
    Could the process the journalist followed – getting a separate review of the statistics alone – be formalized? E.g., paying for and publishing such reviews of the stats, with the review attributed. How could that be set up to maximize the signal-to-noise ratio?

    Is the closed-access journal system working at all, given that reading the paper will cost you $40 when it has no merit? Should such for-profit, closed-access journals be employing qualified statisticians to do reviews like this prior to publication? How could that be enforced by the market? By regulation?

    There are a few different ways a journalist could go with this…
