Do low-fat diets have “significant” benefits?

There have been several news stories about a recently completed study of the effects of a low-fat diet on heart disease, colon cancer, and breast cancer. One of those stories is here. Some excerpts:

The large primary prevention trial conducted at 40 U.S. clinical centers from 1993 to 2005 enlisted 48,835 postmenopausal women ages 50 to 79 years.

Women in all three trials were randomly assigned to an intervention group (40%, n=19,541) with intensive behavior modification sessions and the goal of reducing fat intake to 20% of energy and increasing vegetable and fruit intake to at least five servings a day and grains to six servings.

Participants in a comparison group (60%, n= 29,294) were not asked to make changes…

So far, so good: we know the sample sizes, we know something about the interventions. What about the results? The article goes on to say:

Over an 8.1-year average follow-up, a low-fat diet did not lead to a significant reduction in invasive breast cancer risk, the researchers reported.

The number of women (annualized incidence rate) who developed breast cancer was 655 (0.42%) in the intervention group and 1,072 (0.45%) in the comparison group (hazard ratio 0.91; 95% confidence interval 0.83-1.01).

For starters, 655 cases out of 19,541 women in the intervention group is 3.3%, not 0.42%, so I’m not sure quite what is going on with the numbers. Presumably something got lost in translation from the scientific publication to the news article.

But the thing that really strikes me is that the 95% confidence interval for breast cancer prevention just barely includes 1.00, i.e., no effect. So the news stories all say some variation of “there is no significant benefit” from a low-fat diet. I suspect, though of course we will never know for sure, that if there had been just a handful fewer cases in the intervention group, or a few more in the comparison group, so that the 95% confidence interval topped out at 0.99 rather than 1.01, the news articles would have trumpeted “Significant reduction in breast cancer from a low-fat diet.” This is an issue Andrew touched on a few weeks ago on this blog.
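Just to see how knife-edge this is, here is a rough Python sketch. It approximates the published hazard ratio with an unadjusted rate ratio built from the reported case counts (the study itself fit a survival model that can’t be reproduced from the news article), and then asks how many fewer intervention cases would have pulled the upper confidence bound below 1.00:

```python
# Crude stand-in for the study's Cox model: an unadjusted incidence
# rate ratio with a log-scale normal-approximation confidence interval.
import math

n_int, n_comp = 19541, 29294      # group sizes from the article
cases_comp = 1072                 # breast cancer cases, comparison group

def rate_ratio_ci(cases_int, z=1.96):
    """Unadjusted rate ratio and 95% CI for intervention vs. comparison."""
    rr = (cases_int / n_int) / (cases_comp / n_comp)
    se = math.sqrt(1 / cases_int + 1 / cases_comp)   # SE of the log ratio
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = rate_ratio_ci(655)
print(f"RR {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")      # ~0.92, 0.83-1.01

# How many fewer intervention cases would pull the upper bound below 1.00?
for cases in range(655, 630, -1):
    _, _, hi = rate_ratio_ci(cases)
    if hi < 1.0:
        print(f"{655 - cases} fewer cases would flip it (upper bound {hi:.3f})")
        break
```

The reconstructed interval comes out at roughly 0.83 to 1.01, matching the published one, and in this crude approximation about half a dozen fewer cases in the intervention group would flip the verdict from “no significant benefit” to “significant benefit.”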

Like a lot of people who perform statistical analyses, I’ve always been unhappy with “statistical significance” (the term, and the concept) for two reasons. (1) In common speech, “significant” means “important,” but that is not true in stat-speak: a parameter value can be “statistically significant” even though it is of no practical importance, and it can be of great practical importance even if it is not “statistically significant” (the latter happens when sample sizes or other sources of uncertainty are large enough that even important effects can be missed). (2) When did God say that 95% is the threshold for “statistical significance”? What’s so magic about 1/20, as opposed to 1/18 or 1/21 or anything else for that matter?
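Point (1) is easy to demonstrate with made-up numbers: with a big enough sample, a trivially small difference in proportions becomes “statistically significant,” while a practically important difference in a small sample does not. A toy sketch (the effect sizes and sample sizes here are invented for illustration):

```python
# Toy illustration: "statistically significant" depends on sample size
# as much as on effect size. All numbers below are invented.
import math

def two_prop_z(p1, p2, n1, n2):
    """z statistic for a difference in two proportions (pooled SE)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Trivial effect, enormous sample: comfortably "significant"
print(two_prop_z(0.500, 0.505, 10**6, 10**6))   # z ~ -7.1, |z| > 1.96

# Large, practically important effect, small sample: "not significant"
print(two_prop_z(0.50, 0.60, 40, 40))           # z ~ -0.9, |z| < 1.96
```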

In the current study, there is mild evidence for a small effect on breast cancer: the best guess (from this study alone) is that a low-fat diet, of the type they tested, reduces breast cancer incidence by something between 5% and 15%, although larger or smaller benefits (or even a small penalty) cannot be ruled out. I wish reporters would put it that way, rather than declaring that the result did or did not meet some arbitrary (though admittedly customary) “significance” standard.

4 thoughts on “Do low-fat diets have ‘significant’ benefits?”

  1. This kind of thing is so discouraging…none of the news reports I saw reported these findings accurately, and I was on the lookout for something like this.

    I suspect that the numbers for the treatment and control groups were the initial sample sizes, while the numbers who developed cancer were the final numbers after sample attrition. 1,072 out of 29,294 is 3.66%, not 4.5% (or 0.45%). (If this is right, it looks like there may be "statistically significant" evidence that diet counselling makes you more likely to disappear from the sample! Maybe the papers should have trumpeted that finding.)

    Interestingly, even using standard 95% confidence levels, the difference IS "statistically significant" if we do a one-sided test, which one could argue would be appropriate here.
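    A rough way to check this from the published numbers alone: back out the standard error of the log hazard ratio from the reported 95% CI (assuming a symmetric normal interval on the log scale), then convert to one- and two-sided p-values. A sketch:

    ```python
    # Recover the z statistic from the published HR and 95% CI.
    # Assumes a symmetric normal interval on the log scale.
    import math
    from statistics import NormalDist

    hr, lo, hi = 0.91, 0.83, 1.01
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # CI width = 2*1.96*SE
    z = math.log(hr) / se
    p_two_sided = 2 * NormalDist().cdf(z)
    p_one_sided = NormalDist().cdf(z)
    print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}, "
          f"one-sided p = {p_one_sided:.3f}")
    # z ~ -1.88: two-sided p ~ 0.06 (not "significant"),
    # one-sided p ~ 0.03 (is)
    ```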

  2. Wouldn't it be so much nicer if research were about doing experiments, gathering data, and *storing* that data for others to run their own analyses? Incentives such as registration of all clinical trials are a first step toward sharing the data from experiments, and not just the conclusions derived from them. Registration also helps prevent re-running an experiment several times in the hope that chance yields a "significant" result.

    Otherwise, I don't have a problem with a P-value – it's a perfectly useful summary. I have a problem with people claiming absolute insignificance at 0.0101 and absolute significance at 0.0099. But that has more to do with journalism than with statistics.

  3. D'oh! Andrew is right, I didn't read carefully: the percentages given are annualized incidence rates, whereas the absolute numbers are totals over the 8.1 years of follow-up.
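    A quick check confirms that the numbers line up once you multiply the annual rates by the follow-up time:

    ```python
    # Reconcile the annualized rates with the raw totals from the article.
    n_int, n_comp = 19541, 29294
    years = 8.1                    # average follow-up

    print(0.0042 * years)          # ~0.034: 0.42%/yr gives ~3.4% cumulative
    print(655 / n_int)             # 0.0335: matches the 3.3% computed above
    print(1072 / n_comp / years)   # ~0.0045: recovers the 0.45% annual rate
    ```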

    But this doesn't change my main point, which is that using "statistical significance" to determine that something is or isn't of practical importance can be misleading. Not a new point, but one that lots of reporters still don't get.
