Applying the “If there’s no report you can read, there’s no study” principle in real time

So, I was on the website of the New York Times and came across this story by Donna de la Cruz:

Opioids May Interfere With Parenting Instincts, Study Finds . . .

Researchers at the Perelman School of Medicine at the University of Pennsylvania scanned the brains of 47 men and women before and after they underwent treatment for opioid dependence. While in the scanner, the study subjects looked at various images of babies, and the researchers measured the brain’s response. . . . Sometimes the babies’ features were exaggerated to make them even more adorable; in others, the chubby cheeks and big eyes were reduced, making the faces less appealing. . . .

Compared with the brains of healthy people, the brains of people with opioid dependence didn’t produce strong responses to the cute baby pictures. But once the opioid-dependent people received a drug called naltrexone, which blocks the effects of opioids, their brains produced a more normal response. . . .

Interesting, and full credit to the Times for inserting the qualifier “may” into the headline.

Anyway, the article continues:

The study, among the first to look at the effects of opioid dependence and how its treatment affects social cognition, was presented last month at the European College of Neuropsychopharmacology Congress in Vienna.

OK, where’s the research paper? Where are the data? The Times article provides a link which leads to a short abstract but no paper. Here’s the key paragraph from the abstract:

Forty-seven opioid dependent patients and 25 controls underwent two functional magnetic resonance imaging sessions, approximately ten days apart, while viewing infant portraits with high and low baby schema content and rating them for cuteness. Immediately after the first session, patients received an injection of extended-release naltrexone, while controls received no treatment. The repeated-measures ANOVA revealed a marginal significant main effect of Group [F(1,42) = 3.90, p = 0.06] with greater ΔRating (i.e. RatingHigh – RatingLow) in the patient group, but no main effects of Session [F(1,42) = 0.00, p = 0.99] or Gender [F(1,42) = 2.00, p = 0.17]. Among patients, self-reported craving was significantly reduced [F(1,24) = 45.76, p < 0.001] after the injection (i.e. On-XRNTX), but there was no gender difference [F(1,24) = 3.15, p = 0.09]. Whole brain analysis of variance showed Gender by Group by Session interaction in the ventral striatum. Brain responses increased in female patients and decreased in male patients across sessions, while the pattern was reversed in the controls. We found that the behavioral effect of baby schema, indexed by “cuteness” ratings, was present across all participant categories, without significant differences between patients and controls, genders or sessions. The pattern of the brain response to baby schema includes insulae, inferior frontal gyri, MPFC and the parietal cortex, in addition to the ventral striatum, caudate and fusiform gyrus reported by Glocker et al. (2009) [4] in healthy nulliparous women.

I can’t quite follow what they did, but of course the phrase “marginal significant” sets off an alarm, not because I’m a “p less than .05” purist but because it makes me think of forking paths. There’s also the issue of assigned versus rated cuteness, interactions with sex and maybe other patient-level characteristics, the difference between “significant” and “not significant,” and all the options in the outcome measure (what the news article referred to as “strong responses”), not to mention the zillion degrees of freedom available in the MRI data.
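To make this concrete, here is a minimal sketch in Python (mine, not the authors’ analysis). The first part simply back-converts the F statistics quoted in the abstract into p-values; the second is a toy forking-paths simulation, with made-up group sizes and outcome codings, showing how a handful of reasonable-looking analysis choices applied to pure noise can easily produce at least one p < 0.05 somewhere:

```python
import numpy as np
from scipy import stats

# Check 1: back out the p-values implied by the F statistics in the abstract.
# For an F(df1, df2) statistic, p = stats.f.sf(F, df1, df2).
for label, F, df1, df2 in [
    ("Group main effect",    3.90, 1, 42),
    ("Session main effect",  0.00, 1, 42),
    ("Gender main effect",   2.00, 1, 42),
    ("Craving reduction",   45.76, 1, 24),
    ("Craving gender diff",  3.15, 1, 24),
]:
    print(f"{label}: F({df1},{df2}) = {F:.2f}, p = {stats.f.sf(F, df1, df2):.3f}")
# F(1,42) = 3.90 indeed gives p ~ 0.055, the "marginal significant" result.

# Check 2: a toy forking-paths simulation under the null.
# The data are pure noise, but the analyst gets to choose among two outcome
# codings and three sample slices (everyone, men only, women only).
rng = np.random.default_rng(0)
n_sims, n = 5000, 24                          # group size is illustrative
hits = 0
for _ in range(n_sims):
    y1 = rng.normal(size=(2, n))              # rows: patients, controls
    y2 = y1 + 0.5 * rng.normal(size=(2, n))   # correlated alternative coding
    sex = rng.integers(0, 2, size=n)          # same sex split in both groups
    ps = []
    for y in (y1, y2):
        ps.append(stats.ttest_ind(y[0], y[1]).pvalue)       # full sample
        for s in (0, 1):                                    # within-sex subgroups
            if (sex == s).sum() > 2:
                ps.append(stats.ttest_ind(y[0, sex == s], y[1, sex == s]).pvalue)
    hits += min(ps) < 0.05
print(f"P(at least one p < 0.05 | no effect at all) ~ {hits / n_sims:.2f}")
```

Again, this is not a reconstruction of the study’s analysis, just an illustration of why the combination of a marginal p-value, sex interactions, and a high-dimensional MRI outcome makes the abstract hard to evaluate on its own.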

This is not to say that the conclusions of the study are wrong, just that I have no idea. Just as I have no idea about the gay-gene study, which was one of our original inspirations for the principle, “If there’s no report you can read, there’s no study.”

I googled the title of the paper, *Sustained opioid antagonism increases striatal sensitivity to baby schema in opioid dependent women*, but all I could find was that abstract and an unsigned press release from 19 Sept on a website called MedicalXpress.

Again, I have no idea if this study’s claims are correct, if they have good evidence for their claims, or if the study is useful in some way. (These are three different questions!) In any case, the topic is important and I have no problem with the Times writing about research in this area. But . . . if there’s no report you can read, there’s no study. It’s not about whether it’s peer-reviewed. In this case, there’s nothing to review. An abstract and a press release just don’t cut it.

I know nothing about this research area, and the people who did this project may be doing wonderful work. I’m sure that at some point they’ll write a paper that people can read, and at that point there will be something to report on.

5 thoughts on “Applying the “If there’s no report you can read, there’s no study” principle in real time”

  1. > paper that people can read, and at that point, there’s something to report on.
    I don’t think so – perhaps a year after the paper is published and the wider scientific community has had a chance to discuss, digest and assimilate it with x other papers (some unpublished).

    Until then (at least outside the relevant scientific community) there simply is no interpretable information to report on.

    (There will always be exceptions but those need to be flagged by a public health agency.)

    • Oh, what a different world it would be if this rule were applied! I agree that this is how science journalism should be. “Literature is news that STAYS news,” said Ezra Pound. Science “journalism” should always be old news.

  2. “If there’s no report you can read, there’s no study.” Yes. In prior comments, I have pushed the idea that formal standards analogous to the Actuarial Standards of Practice are needed so that the profession governs itself, rather than insisting that journalists, members of a different profession, apply this standard. Journalists seem to have enough trouble crafting appropriate standards for themselves.
    For example, from Actuarial Standard of Practice No. 41 – “The actuary should complete an actuarial report if the actuary intends the actuarial findings to be relied upon by any intended user. The actuary should consider the needs of the intended user in communicating the actuarial findings in the actuarial report.” The ASOP does not limit creativity, but does require certain disclosures regarding intended purpose, limitations on use of report, conflict of interest, responsibility, reliance on other sources, etc. If a written report is needed to communicate results of research, then isn’t a comprehensive written set of documents, with buy-in from the profession, needed to communicate professional standards?
    http://www.actuarialstandardsboard.org/wp-content/uploads/2014/02/asop041_120.pdf
