What if a big study is done and nobody reports it?

Paul Alper writes:

Your blog often contains criticisms of articles which get too much publicity. Here is an instance of the obverse (inverse? reverse?) where a worthy publication dealing with a serious medical condition is virtually ignored. From Michael Joyce at the ever-reliable and informative Healthnewsreview.org:

Prostate cancer screening: massive study gets minimal coverage. Why?

The largest-ever randomized trial of using the prostate-specific antigen (PSA) test in asymptomatic men over the age of 50 has found — after about 10 years of follow-up — no significant difference in prostate cancer deaths among men who were screened with a single (“one-off”) PSA test, and those who weren’t screened. . . .

Two things caught our attention here.

First, that this “largest-ever” trial did not get large coverage in the mainstream press. In fact, none of the nearly two dozen US news outlets that we check every weekday wrote about it.

Second, it reminded us that even when coverage of screening is actually large, it often falls short in two very important ways. . . .

Here’s what the researchers reported:

– The cohort included more than 400,000 men without prostate symptoms, ages 50-69, enrolled at nearly 600 doctors’ offices across England

– 189,386 men had a one-off PSA test vs 219,439 men who had no screening

– After ~ 10 years: 4.3% of the screened group was diagnosed with prostate cancer vs 3.6% of the unscreened (control) group (authors attribute most of this difference to low-grade, non-aggressive cancers)

– Despite finding more cancer in the screened group, the authors found both groups had the same percentage of men dying from prostate cancer, and that percentage was very low: 0.29%

As of our publishing this article — two days after the British study was published — coverage of this “largest-ever” trial remains scant. Is it because it’s a European study? Unlikely — the results were featured prominently in one of the most prestigious US medical journals and promoted with an embargoed news release. Or, because it represents a so-called ‘negative’ or non-dramatic finding? (That is, no difference in prostate cancer deaths was found between the two groups.) Who knows.

But it stands in stark contrast to the mega-coverage we’ve documented for many years on other prostate cancer screening studies that are typically much less rigorous — and which often trumpet an imbalanced, pro-screening message about prostate cancer. . . .

It’s an interesting issue of selection bias in what gets reported, what is considered to be news.
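
As a quick check on the quoted numbers, here is a minimal sketch; the diagnosis counts are reconstructed from the rounded 4.3% and 3.6% figures and the reported group sizes, so the output is approximate:

```python
from math import sqrt, erf

# Group sizes as reported in the excerpt above
n_screen, n_control = 189_386, 219_439

# Diagnosis counts reconstructed from the rounded 4.3% / 3.6% figures
x_screen = round(0.043 * n_screen)    # roughly 8,144 diagnosed
x_control = round(0.036 * n_control)  # roughly 7,900 diagnosed

# Two-proportion z-test for the difference in diagnosis rates
p_pool = (x_screen + x_control) / (n_screen + n_control)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_screen + 1 / n_control))
z = (x_screen / n_screen - x_control / n_control) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.1f}, two-sided p = {p_value:.2g}")

# The incidence gap is unambiguous (z is about 11.5), even though the
# death rate was the same 0.29% in both groups; this is consistent
# with most of the extra diagnoses being low-grade cancers.
```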

35 thoughts on “What if a big study is done and nobody reports it?”

  1. I have more than a casual interest in this study – and in the issue of what gets reported. I need much more time to digest the actual study, but a few observations to kick things off:
    1. The study is quite complex and the bottom line is hard to interpret – a single PSA test in your early 50s is not demonstrated to lead to improved outcomes. As the study points out, this does not really say much about using PSA tests (more intensively) diagnostically, nor is it clear that the time frame (10 years) is adequate to reach many conclusions.
    2. The study totally relies on p values and hypothesis tests. The fact is that fewer men died from prostate cancer in the intervention group than in the control group, but the difference was not statistically significant. Maybe it’s a good thing that the media didn’t run with the story.
    3. The study has a number of complex features that will require further thought. Intention to treat is a big issue here – the participants were randomized, but a large number did not follow the treatment – either did not get the PSA test or did not get the biopsy that was recommended after the test. Why? This certainly should be further analyzed.
    4. Could it be that the study is just too complex for the media to report? Maybe only simple bottom line messages get picked up by the media – and, as we know, most of the time those simple bottom line messages are misleading (at best).
    5. As usual, I am left with the same reaction – please provide the data. This is data I would like to analyze myself.

  2. Prostate cancer screening is complicated. Prostate cancer is usually indolent. Nearly all men will develop it by age 80, yet only a rather small minority of men with prostate cancer die of it, and a large proportion of them would never even have any symptoms from it were it left undiagnosed. (This is known as overdiagnosis.) Consequently, for many men who are screen-diagnosed, the recommendation is to just do active surveillance for progression over time and not proceed with the potentially nasty treatments available.

    In my view, having reviewed this issue a few times, the most likely situation is that prostate cancer screening is able to produce a very small mortality reduction, but only at a latency of something like 15 years. So this study can be criticized as not having a long enough follow-up period to detect the effect. On the other hand, it is quite reasonable to say that if all it does is produce a tiny effect after 15 years it’s not worth all the rigmarole of doing it (and everything that cascades from a positive test), especially in light of the high incidence of overdiagnosis.

    It’s also very difficult to study well. Very long follow-up is needed, and it is very difficult to ensure that study participants adhere to their random assignment over that long haul. (The NIH-sponsored study of prostate cancer some years back foundered on this issue.) Also, diagnostic follow-up of a positive screen is much less accurate than that of, say, mammography. A mammogram identifies a place to put a biopsy needle. A positive prostate cancer screen does not, so one is left, in most cases, with blind sampling of the entire prostate in the hope of hitting whatever happens to be lurking within.

    Why so little coverage of the negative results? Perhaps I am too cynical, but I say follow the money. The diagnostic evaluations of men who screen positive, the monitoring (or worse, treatment) of the overdiagnosed, and the treatment of actually dangerous prostate cancer have become a huge industry, supporting urologists, oncologists, hospitals, free-standing radiation facilities, and the pharmaceutical industry in style. With so much of journalism working in penury and dependent on pharma advertising to stay afloat, it doesn’t surprise me at all that negative findings about a widespread intervention get little media play. Look, this isn’t the only thing like this. It’s been established for a very long time that much of the knee and back surgery we do offers no benefit over more conservative treatment, but those findings never drew much media publicity, and the surgeries are still widely done.

    • As my urologist told me (shortly before my first of two biopsies), searching for prostate cancer with a biopsy is like searching for strawberry seeds with 12 random punctures of a strawberry’s surface. Sorta made a negative result less than completely interesting…

      But my favorite story here is that of a very famous economist who died well into his nineties. He called me two months before he died, saying “I was diagnosed with prostate cancer when I was 68. They said, ‘Don’t worry. Leave it. Something else will kill you.’ It turns out they were wrong!”

    • “Perhaps I am too cynical, but I say follow the money.”

      Simpler: The Academy is a truth-mining operation; it looks bad to investors when its big digs yield nothing. They aren’t going to be pushing this story.

    • Clyde, thank you for this summary. Next up in the cross-hairs: colonoscopy. There are much less lucrative and less invasive tests (e.g., fecal tests, sigmoidoscopy) that appear to be about equally effective against mortality, and don’t require nearly as much cajoling to get people to actually undergo.

      • Is there indeed solid evidence that the alternative tests are about equally effective? (Or perhaps the issue is whether they are equally effective in the general population, but more effective in special populations — e.g., people with a first degree relative who had colon cancer)

        • Biennial fecal tests are the norm in Europe and Australia. The follow-up test there is typically sigmoidoscopy, not colonoscopy. I’m no expert, but papers in NEJM and elsewhere suggest no material difference in the ability to catch colorectal cancers. Which is pretty remarkable — after decades of proselytizing for colonoscopy, how many Americans do you think even know these safer, less invasive, less costly tests exist? Colonoscopy is understandably a hard sell since it involves strict fasting, as well as sedation that causes short-term amnesia, and is about as likely to perforate the patient’s colon (which happened to a family member of mine) as to find something that results in prolonging the patient’s life.

  3. 1) There’s a reason why “Just Do Something, Anything!” is better marketing than “Just Do Nothing.”

    2) As in the case of breast cancer it would be helpful if we could get the different types of prostate cancer sorted out before we do any more studies of the “does this intervention change the mean apple-orange frequency?” variety.

  4. Similarly large multicenter studies have already been performed in US patients [1] and European patients [2] and have been published over the last 10 years. Findings regarding prostate cancer-related death have been mixed, with a benefit-harm balance ranging from mild benefit to negative (due to biopsy complications, etc.). The US Preventive Services Task Force had already drafted a recommendation [3] to personalize the offering of PSA screening at the time of your correspondent’s article, and had recommended against broad use of PSA screening in asymptomatic men as far back as 2012 [4].

    In short, there was nothing newsworthy to report.

    [1] https://www.uptodate.com/contents/screening-for-prostate-cancer/abstract/15
    [2] https://www.uptodate.com/contents/screening-for-prostate-cancer/abstract/12
    [3] https://www.aafp.org/news/health-of-the-public/20170411uspstfprostate.html
    [4] https://www.uspreventiveservicestaskforce.org/Home/GetFile/1/924/prostatefinalsum/pdf

    • Indeed, I opted to pay for the MRI myself since insurers do not cover the MRI fusion biopsy yet. The evidence is still coming in – and the nature of prostate cancer is that it may take a long, long time before there is enough evidence to change the practice. But I’m convinced that getting an MRI before even considering a biopsy makes sense. Whether it makes commercial or societal sense is a different matter, but at an individual level I highly recommend it. Somehow, sticking needles into 12 places and hoping you manage to hit a cancer seems much less sensible than having images to guide you.

      • I can assure you that you probably made a good decision. I had a prostate biopsy several years ago—triggered by a PSA test no less.

        It was unpleasant and, as of today, there is something like a 1.5% chance of requiring hospitalization due to post-biopsy infection.

        The biopsy route does have the advantage that, should you encounter a woman complaining that medical procedures are humiliating etc, you can trump her ace by describing prostate biopsy.

        Bob
        P.S. Biopsy turned out negative—a fact that made me quite happy.

        • My dear late Dad suffered through three biopsies over seven years triggered by PSA tests. In each case those literal “stabs in the dark”, the sort beloved by epidemiologists, were both (a) quite unpleasant and (b) wholly unenlightening. One day about 7 months after the third “negative” biopsy he had a severe nose bleed that wouldn’t stop. A bone scan revealed something and a biopsy demonstrated it to be prostate cancer – in his marrow.

          Since then I’ve worked on multiple lawsuits in which e.g. a mesothelioma case turned out upon examination by immunohistochemical staining to be instead a prostate cancer that had set up shop far from home. A recent Twitter meme was “What radicalized you?” For me this was it. And so, again, mounting my hobby horse to sally forth into the fray, I cry: “It would be very nice if we could agree upon what snipes look like before we’re sent on the next snipe hunt, Sir.”

  5. Most people, apparently, are just incapable of spotting the flaws in the assertion, “If we CAN screen for something, we MUST screen for it.” As I age, doctors and dentists use this “logic” with me all the time. (“If we don’t X-ray your teeth, we can’t see interdental decay.” Well, yes, I know what an X-ray is, thank you.)

  6. “I think that journalists ought to prioritize coverage based on the quality of the study, not the perceived newsworthiness of the conclusion.”

    If they do this and the newspaper/etc loses money, then what does he suggest?

    Also, I don’t think the reporting is necessarily being biased for “newsworthiness” in this case. It sounds like standard “fake news” procedure, where you report info that fits your narrative but not stuff that doesn’t fit. Obviously a lot of money gets spent on all this screening, and some of that goes to guiding what gets covered by the media in one way or another.

  7. 647 out of 219,439 men in the control group died of prostate cancer (0.295%).

    549 out of 189,386 men in the screening group died of prostate cancer (0.290%). This is 9 fewer than the number that would’ve died if they had the same rate as the control group.

    Using classical statistics and this site, it looks like the 95% confidence interval runs from 68 fewer deaths to 57 extra deaths due to prostate cancer for the 189K people who got screenings (that is, -57 to +68 deaths averted).

    If we set the statistical value of life at $5M, then that’s a point estimate of $248 in value (due to lives saved) generated per screening, with a 95% confidence interval from $1495 lost to $1804 generated (-$1495 to +$1804).

    So classical statistics would suggest that we can basically rule out the possibility that these screenings are a big win. And given the cost of the screening, plus the costs of additional steps that are taken if the screening comes out positive, maybe this is enough to rule out the possibility that the screening is worth it.

    If there aren’t any confounds, other major benefits besides the change in this death rate, etc., then this study provides a pretty strong case for not doing the screenings.
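
    Here’s a minimal sketch of that arithmetic in Python (normal approximation throughout, so the interval endpoints land near, but not exactly at, the calculator’s numbers above):

    ```python
    from math import sqrt

    # Deaths and group sizes from the trial
    d_control, n_control = 647, 219_439
    d_screen, n_screen = 549, 189_386

    p_control = d_control / n_control  # about 0.295%
    p_screen = d_screen / n_screen     # about 0.290%

    # Deaths averted among the screened, relative to the control rate
    diff = p_control - p_screen
    averted = diff * n_screen  # about 9 deaths

    # Normal-approximation 95% CI for the rate difference, scaled to
    # the size of the screened group
    se = sqrt(p_screen * (1 - p_screen) / n_screen
              + p_control * (1 - p_control) / n_control)
    lo, hi = (diff - 1.96 * se) * n_screen, (diff + 1.96 * se) * n_screen

    VSL = 5_000_000  # assumed statistical value of a life, $5M
    print(f"deaths averted: {averted:.0f} (95% CI {lo:.0f} to {hi:.0f})")
    print(f"value per screening: ${averted * VSL / n_screen:.0f} "
          f"(${lo * VSL / n_screen:.0f} to ${hi * VSL / n_screen:.0f})")
    # Output: about 9 averted (CI roughly -53 to +72); about $248 per
    # screening (roughly -$1410 to +$1906), the same ballpark as above.
    ```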

    • Since these people are over 50, you can’t even use the $5M statistical value of a life: they’ve already experienced more than half of their expected lifetime, so you’d really want to use something like $5M / 90 QALYs and count only the remaining quality-adjusted years. That’d cut your estimate by about a factor of 2.

    • More precisely, you are making the point that this particular type of screening (a one-time test in your early 50s) does not seem like an efficient thing to do. Don’t generalize that to PSA screening in general – there is a lot more work that needs to be done there (although there is a growing contingent questioning the wisdom of PSA testing). Also, before we settle on your numerical estimates, be aware that a huge percentage (I believe it was over 50%) of the men randomized into the intervention group did not have the test done. This is a recurring controversy in clinical trials – intent to treat is used to group the participants. I want to know what happened to the men who actually had the PSA testing done compared with those who did not. This is not what the statistics in the paper are reporting.

      • “I want to know what happened to the men who actually had the PSA testing done compared with those that did not.”

        I think you might know this, but just to be pedantic, this is not what you want to know as it’s not even an estimate of a causal effect. Those who receive PSA screening might be quite different from those who don’t since actually receiving screening is not randomized. You’d really want an instrumental variable estimate of, depending on assumptions, the average treatment effect on the entire population, the effect of treatment on the treated, or the effect of receiving treatment on those who would comply with their treatment assignment regardless of what it was.
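
        For concreteness, here’s a minimal sketch of the simplest such estimator, the Wald/IV ratio (the ITT effect divided by the difference in screening take-up between arms); the take-up rates below are hypothetical placeholders, not the trial’s reported figures:

        ```python
        # Wald / instrumental-variable sketch: effect of actually being
        # screened, using random assignment as the instrument.
        # NOTE: the take-up rates below are hypothetical placeholders,
        # not the trial's reported figures.

        itt = 549 / 189_386 - 647 / 219_439  # ITT difference in death rates

        takeup_invited = 0.40  # hypothetical: 40% of invited men got the test
        takeup_control = 0.05  # hypothetical: some controls got tested anyway

        # Local average treatment effect among "compliers", under the usual
        # IV assumptions (randomized instrument, no defiers, exclusion)
        late = itt / (takeup_invited - takeup_control)
        print(f"ITT: {itt:.6f}  LATE: {late:.6f}")
        ```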

    • A 2017 paper, based on data from 2005-2012, states that 1.55% of prostate biopsy patients are admitted to a hospital within 30 days for treatment of infectious complications. The average insurance claim for these admissions was $14,498.96. (Andrew, please don’t gig me for spurious precision—that number is in the paper.) 0.0155*$14498.96 = $225.

      So, at least $225 should be subtracted from your estimate of $248 in value.

      The overall 30-day admission rate is higher, 3%.

      Here’s a link to the paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5413992/

      Bob

  8. I suspect no one paid much attention to this study because it doesn’t add much to our knowledge. It doesn’t add much because no one thinks that a single one-time screening for prostate cancer is a sensible strategy.

    I would think that this website would be interested in statisticians who have looked at prostate cancer screening by combining structural modeling with the research evidence. Probably the most prominent examples are the various research studies associated with Ruth Etzioni at the Hutchinson Cancer Center: https://research.fhcrc.org/etzioni/en/publications.html

    See, for example, this paper, which combines microsimulation modeling with evidence from various studies: https://www.ncbi.nlm.nih.gov/pubmed/27010943. They basically find that PSA screening CAN make sense if it is used conservatively and combined with relatively conservative treatment strategies.

  9. I’ve always found it fascinating that studies like these (large randomized trials), or systematic reviews/meta-analyses of such trials, rarely get as much coverage as studies that are just outright noisy, such as those coming out of fields like nutritional epidemiology, or studies that brag about using prediction, etc.

  10. Because media coverage tends to focus on shocking or counterintuitive statements (often associated with bad science), there might be a negative correlation between actual quality and coverage.

  11. The question isn’t about the best way to protect a population from prostate cancer deaths; it is about publicizing a study. I blame the authors for this. They should have included more outcomes. In addition to looking at the impact of the intervention on prostate cancer morbidity and mortality, they should have looked at shark attack rates. A study entitled “PSA screening and SHARK ATTACKS” would have garnered a lot of attention. The study abstract and the actual paper would show their data regarding the prostate stuff, of course.
    My local newspaper last Sunday had an eight page supplement in advance of the annual Race for the Cure. Anyone interested in media coverage of any topic would do well to study the strategies of the Komen Foundation.
