About that quasi-retracted study on the Mediterranean diet . . .

Some people asked me what I thought about this story. A reporter wrote to me about it last week, asking if it looked like fraud. Here’s my reply:

Based on the description, there does not seem to be any implication of fraud. The editor’s report mentioned “protocol deviations, including the enrollment of participants who were not randomized.” I suppose this could be fraud, or it could be simple incompetence, or it could be something in between: for example, maybe the researchers realized there were problems with the randomization but didn’t bother to mention it in the paper because they thought it was no big deal, all studies are imperfect, etc. I have no idea. In any case, it’s good that the NEJM looked into this.

Looking at the new article, it seems that only a small subset of patients were not randomized as designed: out of a total of 7447 patients, 425 shared a household with a previously enrolled participant, and 467 were at the site where patients were not randomized. There may be some overlap between those two groups, but even if not, that’s only 892 people, or 12% of the people in the study. So the easiest thing is to just analyze the 88% of the people who were in the trial as designed. Sure, that’s throwing away some data, but that would be the cleanest way to go.
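
For what it’s worth, here’s the back-of-the-envelope version of that calculation (treating the two groups as non-overlapping, which makes 892 an upper bound):

```python
# Crude check of the 12% figure, assuming zero overlap between the two groups.
total = 7447      # total enrolled participants
household = 425   # shared a household with a previously enrolled participant
site = 467        # enrolled at the site that did not randomize as designed

questionable = household + site  # 892 if the groups don't overlap at all
print(f"{questionable} / {total} = {questionable / total:.1%}")  # 892 / 7447 = 12.0%
```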

I’m surprised this story is getting so much play, as it appears that the conclusions weren’t really changed by the removal of the questionable 12% of the data. (According to the above-linked report, the data problems were “affecting 10 percent of respondents,” and according to this news article, it’s 14%. So now we have three numbers: 10% from one report, 14% from another, and 12% from my crude calculation. A minor mystery, I guess.)

It says here that the researchers “spent a year working on the re-analysis.” If it were me, I think I would’ve taken the easy way out and just analyzed the 88% (or 86%, or 90%) of the data that were clean, but I can’t fault them for trying to squeeze out as much information as they could. I’m similarly confused by the quotes from skeptical statisticians: is the skepticism on specific statistical grounds, or is it just that, if 12% of the study was botched, there’s concern about other, undiscovered biases lurking in the data? As usual when a study involves real-life decisions, I end up with more questions than answers.

P.S. Just read Hilda Bastian’s discussion. Lots of interesting details.

10 thoughts on “About that quasi-retracted study on the Mediterranean diet . . .”

  1. It seems to me that the key question becomes one of trusting or not trusting the results of what was, in essence, a Per Protocol re-analysis of the data, something that is commonly done, even in regulatory submissions, in parallel with the Intention to Treat analysis.

    In this case, if I read the coverage correctly, the Per Protocol re-analysis generally supports the findings of what was presumed to be the original Intention to Treat analysis, albeit perhaps not as strongly (a toy sketch of the two analysis sets follows this comment). The loss of somewhere between 10% and 14% of the subjects in the Per Protocol analysis is not unreasonable, in my experience, particularly given the large sample size here and the nature of the study design.

    What I don’t see in the coverage, in the mainstream media at least (and I have not read either the original or the corrected paper), is a sense of adherence to the protocol-defined diets in the three arms: how long were the subjects compliant, and what impact might any compliance issues and/or loss-to-follow-up concerns have had? That would be important to know, especially if there was any differential/imbalance in those characteristics between the three arms. Subjects were followed for a median of 5 years, according to some of the coverage, but what variability in follow-up was present, and is that relevant to their conclusions?
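
    To make the Intention to Treat / Per Protocol distinction concrete, here is a toy sketch with simulated data (the arm labels, event rate, and deviation flag are all made up, and the actual re-analysis was far more involved than a simple subset):

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 7447
    trial = pd.DataFrame({
        "arm": rng.choice(["med_oil", "med_nuts", "control"], size=n),
        "event": rng.random(n) < 0.04,      # placeholder primary-endpoint rate
        "deviation": rng.random(n) < 0.12,  # ~12% with enrollment problems
    })

    itt = trial                      # Intention to Treat: everyone, as randomized
    pp = trial[~trial["deviation"]]  # Per Protocol: drop the questionable ~12%

    print(itt.groupby("arm")["event"].mean())  # ITT event rates by arm
    print(pp.groupby("arm")["event"].mean())   # Per Protocol event rates by arm
    ```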

  2. Here is one reason why researchers and journalists I follow on Twitter are skeptical: there is a continually repeated phenomenon in nutrition research. Any study that appears to support, or even not to demonize, a pleasant level of saturated fat consumption must be immediately questioned, minimized, or simply greeted with silence by nutrition writers at places like the New York Times, who will seek out the usual anti-fat doctors to quote as needed. Conversely, any study tending to show even moderate benefits from low-fat diets is presumed to be correct, even when reanalysis reveals the usual problems with randomization, compliance, and choice of endpoint measurement (which is rarely all-cause mortality).

      • The suspicion I read is that this study is of the kind described in my final sentence. And you have just engaged in the usual rhetoric: high fat versus low fat wasn’t tested here (I didn’t say it was), but this study goes in the mental box of “did not support high SFA intake” anyway.

  3. Hypothesis: The media kinda/sorta gets the data-thug (apologies to Carlisle) angle and thinks it’s sexy/clicky. They don’t yet understand the scandal: randomization in multi-center trials depends on the local doctors only rarely being idiots, i.e. p(idiot | doctor) < 0.05, which seldom obtains, as in this case, where p(idiot | doctor) ~ 0.125.

    • Actually, Hilda’s linked discussion reminded me of David Andrews teaching us to audit the randomization procedures, usually carried out in the pharmacy, early in any trial.

      I was once collaborating with a group at the Mayo Clinic in the mid-1990s ( https://www.ncbi.nlm.nih.gov/pubmed/7615189 ), and on my first visit I scheduled an audit with their pharmacy. I was met by a very nervous assistant director of the pharmacy, who apologized profusely that the director could not be there personally. The reason for the big fuss was that no one had ever done this before at Mayo.

      It’s really easy for someone to get the actual randomization details wrong and mess things up. I did hear afterwards that Mayo started doing this routinely.

  4. What surprised me the most was that an entire study centre was randomized to one arm and somehow no one noticed. Wouldn’t the data contain a dummy variable for centre? Something like that not being caught signals that sanity checks weren’t done, or weren’t done well, which is a bit of a red flag IMO (a cheap check of this kind is sketched below).
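
    A check like that is cheap to run. Here is a minimal sketch, with hypothetical column names, that would flag any centre whose patients all ended up in one arm:

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 7447
    df = pd.DataFrame({
        "centre": rng.integers(0, 11, size=n),
        "arm": rng.choice(["med_oil", "med_nuts", "control"], size=n),
    })
    df.loc[df["centre"] == 3, "arm"] = "med_nuts"  # simulate the failure mode

    counts = pd.crosstab(df["centre"], df["arm"])
    suspicious = counts[(counts == 0).any(axis=1)]  # centres missing an arm
    print(suspicious)  # centre 3 shows up immediately
    ```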

  5. From a 2016 comment on Andrew’s blog regarding the original article:

    “This study had 18 authors, some of whom served on the board of the Research Foundation on Wine and Nutrition, received support from the California Walnut Commission, the International Nut and Dried Food Council, Nestle, PepsiCo, the Beer and Health Foundation and Danone.”

    Another substantive critique of the Mediterranean diet comes by way of an NYT article in which Dr. Esselstyn, a noted vegan, remarked:

    “those in the Mediterranean diet study still had heart attacks and strokes. So, he said, all the study showed was that ‘the Mediterranean diet and the horrible control diet were able to create disease in people who otherwise did not have it.’”

    And, no discussion of diets would be complete without this obvious historical truth:

    “Let us not forget the famous phrase popular among our forebears: when it comes to diets, there are really only two: food and no food.”
    And of course, “The best diet is the diet you can stick to.”

  6. The reason why the biostatisticians were still skeptical was not just because of potential lurking biases, but because the PREDIMED study is infamous for having a very poor control group. There are a million papers criticizing that aspect of the study. The study was also stopped early and used composite endpoints (those two traits don’t really say much on their own, but they often raise eyebrows in clinical research).

  7. This came up at my lab meeting today.

    From Hilda Bastian’s blog:

    “When more than one person in a house was recruited, instead of randomizing each, they were all assigned to the same diet.”

    Maybe I’m missing something, but is this not what you would want to do, if several-per-household was part of your recruitment strategy? I’m trying to imagine compliance in households where people are randomized to different diets, and it’s pretty tough to imagine that working, especially since in many households effectively only one person does all the cooking. I’m not saying you should pretend these are truly independent data points, of course not. But it seems much easier/better to me to adjust statistically for the household clustering (something like the sketch below) or run sensitivity analyses than to potentially lose both subjects to dropout and/or noncompliance after the tenth night in a row of fighting over the stove.
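
    For what it’s worth, here is a minimal sketch of one such adjustment, a GEE with exchangeable correlation within household, on simulated data (the household sizes, arm labels, and event rate are all made up):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 7447
    df = pd.DataFrame({
        "household": rng.integers(0, 6500, size=n),  # some households enroll 2+
        "arm": rng.choice(["control", "med_nuts", "med_oil"], size=n),
        "event": (rng.random(n) < 0.04).astype(int),
    })
    # Everyone in a household gets the same diet, as in the actual recruitment.
    df["arm"] = df.groupby("household")["arm"].transform("first")

    # GEE: point estimates like ordinary logistic regression, but standard
    # errors that respect the within-household correlation.
    fit = smf.gee("event ~ C(arm)", groups="household", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(fit.summary())
    ```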
