The chorus of ablationists

Dan Walter writes:

I am writing an article about the study of a medical procedure that was recently published in the Journal of the American Medical Association. The study concerns a procedure called catheter ablation for atrial fibrillation. It was bought and paid for by the device manufacturer. You can see my [Walter’s] take on it here.

Although I once refereed an article for the journal Chest, I don’t know atrial fibrillation from… well, I don’t even know what is different enough from “atrial fibrillation” to use as a comparison here. So I’ll just quote from Walter:

Usually you’d have to be a subscriber, or be willing to cough up $15 to read an article in JAMA. But the full text of Comparison of Antiarrhythmic Drug Therapy and Radiofrequency Catheter Ablation in Patients With Paroxysmal Atrial Fibrillation: A Randomized Controlled Trial can be had on their site for free. It comes with a video presentation and a link to a patient information page. It’s all courtesy of Biosense Webster, one of the Johnson & Johnson family of companies, the one that manufactures the NaviStar ThermoCool Irrigated Tip Catheter©, which is the device vindicated by their study.

The mainstream media basically reprinted Johnson & Johnson’s press release, using headlines like this: Heart Procedure Beats Drugs for Irregular Heartbeat. . . .

One senses the statistical sleight of hand in the aggressive number crunching, and I am told that a Bayesian method allows for cherry picking. In case you didn’t know, Bayesian is a system for describing epistemological uncertainty using the mathematical language of probability. Whatever that means, it seems certain to have enabled interested parties to make the swirling numbers come to rest in a particular pattern.

As Bill James wrote, you can lie with statistics, just as you can lie with English or French or any other language. Similarly, you can cheat with Bayesian statistics, just as you can cheat with classical methods. Don Berry (the statistician on the cited paper) has an excellent reputation and I doubt he’d cheat.

Walter also gets to some specific criticisms of the study:

Most people who suffer from atrial fibrillation were excluded from the study right off the bat. That’s because people who have taken the drug amiodarone for their afib were not allowed, and two thirds of afib sufferers have been on amiodarone at one time or another. . . . Also, those who were treated in the study were relatively young, otherwise healthy people who suffered from paroxysmal bouts of afib, and had their procedures done at hospitals with lots of experience. So the eye-catching headlines don’t really apply to most people with the problem, who tend to be older people with more persistent atrial fibrillation.

Finally, Hugh Calkins, who nonetheless joined in the chorus of ablationists touting the study, pointed out the biggest flaw of all: The way the study was set up, there was no way that drug therapy could beat catheter ablation. So the fix was in. Ablation was being compared to drug therapy in people who had already failed at drugs, and if you fail one drug, you’re likely to fail a second. . . .

Interesting comments, but they would seem to apply more to the public relations effort than to the JAMA article, which lists these limitations prominently right at the beginning (see the link above).

P.S. Here’s the financial disclosure statement that concludes the published report:

Dr Wilber reported receiving grants from Biosense Webster, Boston Scientific, Medtronic, and St Jude Medical; consulting fees from Biosense Webster, Medtronic, and Sanofi-Aventis; honoraria from Biosense Webster, Boston Scientific, Medtronic, and St Jude Medical; and royalties from Blackwell/Futura. Dr Pappone reported receiving grants and consulting fees from St Jude Medical and Johnson & Johnson, and honorarium from Biosense Webster. Dr Neuzil reported receiving grants from Biosense Webster, Cardiofocus, CryoCath Technologies, Hansen Medical, NIH BARI 2D, and St Jude Medical; consulting fees from Stereotaxis; and honorarium from Biosense Webster. Dr De Paola reported receiving a grant from Bristol-Myers Squibb. Dr Marchlinski reported receiving grants and honoraria from Biosense Webster, Boston Scientific, and St Jude Medical; consulting fees from Biosense Webster, Boston Scientific, GE Healthcare, Medtronic, and St Jude Medical; and speakers’ bureau fees from Biosense Webster. Dr Natale reported receiving grants from Biosense Webster and St Jude Medical, and speakers’ bureau fees from Biosense Webster, Boston Scientific, Medtronic, and St Jude Medical. Dr Macle reported receiving consulting fees and honorarium from Biosense Webster. Dr Daoud reported receiving consulting fees from BARD and Biosense Webster, and honorarium from Biosense Webster. Dr Calkins reported receiving consulting fees from Ablation Frontiers, Atricure, BARD, Biosense Webster, Boston Scientific, CryoCor, CyberHeart, Medtronic, ProRhythm, Sanofi-Aventis, and TASER International; a grant and honorarium from Biosense Webster; speakers’ bureau fees from Atricure, BARD, Biosense Webster, Boston Scientific, Medtronic, and Reliant; and fellowship fees from BARD, Boston Scientific, and Medtronic. Dr Hall reported receiving consulting fees from Biosense Webster.
Dr Reddy reported receiving grants from Atritech, Boston Scientific, Biosense Webster, Cardiofocus, CryoCath Technologies, Endosense Hansen Medical, St Jude Medical, and Stereotaxis; consulting fees from Biosense Webster and St Jude Medical; and honoraria from Boston Scientific, Biosense Webster, Medtronic, and St Jude Medical. Dr Augello reported receiving honoraria from BARD, Biosense Webster, and St Jude Medical. Dr Reynolds reported receiving consulting fees from Biosense Webster, Cardiome Pharma Corp, and Sanofi-Aventis. Mr Vinekar and Ms Liu are employees of Biosense Webster. Drs S. Berry and D. Berry reported receiving consulting fees from Biosense Webster, Veridex LLC, Boston Scientific, Endologix, R.R. Bard, W.L. Gore, Medtronic, Bristol-Myers Squibb, Pfizer, and Teva Pharmaceuticals.

Wow! Perhaps it would be more convenient for them to present this sort of information in matrix form.

9 thoughts on “The chorus of ablationists”

  1. Paper seems way short of being reproducible, even if you had the data. For example,

    Study success required a probability of superiority of catheter ablation over ADT in the primary end point at the final analysis of at least 98% to control the type I error rate at no more than 2.5%, which was verified by simulations.

    Hello? What simulations? How was all the multiple imputation done? And where are the priors?
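
    For what it’s worth, here is a toy sketch (in Python, with made-up numbers, flat priors, and a single final look rather than the trial’s adaptive design) of the kind of simulation such a claim would rest on: simulate many trials under the null, apply a “declare success if the posterior probability of superiority is at least 98%” rule, and count how often it fires by chance.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_prob_superior(succ_a, n_a, succ_d, n_d, draws=4000):
        """P(p_ablation > p_drug | data) under independent flat Beta(1,1) priors,
        estimated by Monte Carlo sampling from the two Beta posteriors."""
        p_abl = rng.beta(1 + succ_a, 1 + n_a - succ_a, draws)
        p_drug = rng.beta(1 + succ_d, 1 + n_d - succ_d, draws)
        return np.mean(p_abl > p_drug)

    def simulated_type_I_error(p_null=0.3, n_per_arm=100, n_trials=2000,
                               threshold=0.98):
        """Under the null (both arms share one success rate), how often does
        the 98%-posterior-probability success rule wrongly declare superiority?"""
        false_wins = 0
        for _ in range(n_trials):
            s_abl = rng.binomial(n_per_arm, p_null)
            s_drug = rng.binomial(n_per_arm, p_null)
            if posterior_prob_superior(s_abl, n_per_arm,
                                       s_drug, n_per_arm) >= threshold:
                false_wins += 1
        return false_wins / n_trials

    print(simulated_type_I_error())
    ```

    With flat priors and a single look, a 98% posterior threshold roughly corresponds to a one-sided test at the 2% level, so the estimate should land at or below the 2.5% cap quoted in the paper; the point of the complaint above is that the paper shows none of this.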

  2. The huge financial disclosure statement makes me suspicious, yes. But it isn't clear what you make of it. The criticism that the study isn't about all patients with atrial fibrillation but just those who'd failed with a drug seems to me like a very good point. You let the JAMA article off the hook; I don't: its title says nothing about how the subjects had been selected. If I wrote an article called "Teenage Girls Great at Dunking Basketball" that turned out to be about teenage girls taller than 6'5", I'm pretty sure you'd think that was a misleading title. (Speaking of titles, I love the title of your post.)

  3. I certainly don't mean to call Mr. Berry's integrity into question. I'm sure the Bayesian method was applied with full scientific and academic rigor.

    As it says on the FDA's site, however, "The Bayesian approach is not a substitute for sound science." The method can be employed "When good prior information on clinical use of a device exists…" My reading of industry-sponsored clinical studies is that they are often unsound.

  4. Yikes, far from a visible success for Bayesian methods in clinical trials, especially given the talent involved.

    Rather than increasing credibility with the target audience (clinical researchers and clinicians), it appears to have inadvertently _discredited_ a rather strong trial result.

    From the sponsor's point of view, far more resources were likely put into statistical design and analysis than in any other RCT I am aware of. I am guessing that the sponsor also had to pay for the independent, mainly frequentist statistical analysis at the journal's insistence?

    No advantage of a Bayesian approach is apparent in the paper, nor is there more than a vague sense of what the Bayesian analysis actually entailed (i.e., no mention at all of what priors were used) – perhaps all deleted at the journal's insistence.

    It may not have been a good opportunity to show real added value, in that treatment was for symptom relief (given my limited clinical understanding), and a simple larger trial (200 or 250) would likely have been ethical and of benefit (given better info about treatment effects and side effects), with the analysis for practical purposes sealed by a log-rank statistic.

    Anyways, glad this was posted and I look forward to reading Don Berry's papers given in the references.

    Keith

  5. Seth – re: let[ting] the JAMA article off the hook.

    The trial's exclusion criteria were in the abstract, and JAMA's readership can be expected to know about the pervasiveness and importance of exclusion criteria in RCTs.

    On the other hand, clinical journals regularly prevent the inclusion of the design and analysis details necessary for the appropriate interpretation of many RCTs. There are guidelines to help, but a journal is just not the appropriate medium for fully interpretable, or better still reproducible, results. That may well be the reason why the use of Bayesian methods decreased rather than increased the credibility of this RCT – in some communities.

    As for the "The huge financial disclosure statement" that is pretty standard now and due to some of us arguing that it was better to keep more people actively involved in clinical research – disclosing all possibly apparent conflicts of interest – rather than excluding their participation (which probably would be almost everyone currently involved.)

    Someone had to do and check all that Fortran coding – and I hope they did get paid money for it!

    Keith

  6. It's a frequent misconception that priors are used to include prior information in clinical trials. In all of the Bayesian adaptive trials that I've been involved with (well over 20, including over a dozen designed) the parameters are chosen so that the priors are very uninformative; any information contained in the prior is quickly overridden by actual data.

    The point (as currently practiced and increasingly accepted by FDA, esp. CDRH) of the Bayesian framework is to make incorporation of information as the trial progresses simple, not to incorporate information outside of the trial.
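
    A minimal illustration of that point, with invented numbers: two quite different weak Beta priors for a response rate end up with nearly the same posterior mean once even a modest amount of trial data arrives.

    ```python
    # Two deliberately different weak priors for a response rate
    priors = {"uniform Beta(1,1)": (1, 1), "skeptical Beta(2,8)": (2, 8)}
    successes, n = 60, 100  # hypothetical trial data

    for name, (a, b) in priors.items():
        # Conjugate Beta-Binomial update: posterior is Beta(a+s, b+n-s)
        post_mean = (a + successes) / (a + b + n)
        print(f"{name}: posterior mean = {post_mean:.3f}")
    ```

    The uniform prior gives a posterior mean of about 0.60 and the skeptical prior about 0.56 – already close after 100 observations, and the gap keeps shrinking as data accumulate.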

    The trial in question does make use of a hierarchical model, which adaptively uses information outside of the specific trial; there will of course be some shrinkage, even if all of the trials are heterogeneous. I can't speak authoritatively for this specific trial, but it is best practice to design the trial to control type I error.
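
    As a rough sketch of the shrinkage point (not the trial's actual hierarchical model; the estimates and variances here are invented), an empirical-Bayes normal model pulls each source's estimate partway toward the overall mean, with the amount of pooling governed by the ratio of between-source to total variance:

    ```python
    import numpy as np

    # Invented effect estimates from four sources, each with known sampling variance
    y = np.array([0.10, 0.35, 0.60, 0.25])
    se2 = 0.02  # within-source (sampling) variance, assumed known and equal

    # Crude method-of-moments estimate of between-source variance
    tau2 = max(np.var(y, ddof=1) - se2, 1e-6)
    shrink = tau2 / (tau2 + se2)  # in (0, 1): weight each estimate keeps
    shrunk = y.mean() + shrink * (y - y.mean())
    print(shrunk)
    ```

    Even when the sources look heterogeneous, `shrink` stays below 1, so every estimate moves toward the grand mean – which is exactly why some shrinkage is unavoidable, as the comment says.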

    Disclosure: I have worked for Berry Consultants since 2007. However, I do not speak for them, nor was I involved with the trial in question.

  7. Sure, but what were the priors? How was the Type I error rate evaluated? This is not in the paper. Relevant code is not made available. Effectively, the authors say "we checked it but we're not telling you how – just trust us". Then one looks at the disclosures…

    The (much-advertised) Bayesian character of the analysis looks unconvincing. The standard analyses – which seem to have been incorporated specifically at the request of the statistical auditor – do a better job.

  8. David – this seems to confirm your comment here

    The quick one-sentence takeaway being:

    "Additionally, we have chosen non-informative priors and selected loss functions to produce trials with appealing frequentist error rates, rather than choosing priors that reflect realistic prior information and loss functions that reflect true costs. Our Bayesian designs allow interpretation of the final results along either Bayesian or frequentist lines"

    with the same Donald A. Berry on this paper:

    Abstract of paper
    Bayesian decision-theoretic group sequential clinical trial design based on a quadratic loss function: a frequentist evaluation
    Roger J Lewis
    Ari M Lipsky
    Donald A Berry

    The decision to terminate a controlled clinical trial at the time of an interim analysis is perhaps best made by weighing the value of the likely additional information to be gained if further subjects are enrolled against the various costs of that further enrollment. The most commonly used statistical plans for interim analysis (eg, O’Brien–Fleming), however, are based on a frequentist approach that makes no such comparison. A two-armed Bayesian decision-theoretic clinical trial design is developed for a disease with two possible outcomes, incorporating a quadratic decision loss function and using backward induction to quantify the cost of future enrollment. Monte Carlo simulation is used to compare frequentist error rates and mean required sample sizes for these Bayesian designs with the two-tailed frequentist group-sequential designs of O’Brien–Fleming and Pocock. When the terminal decision loss function is chosen to yield typical frequentist error rates, the mean sample sizes required by the Bayesian designs are smaller than those of the corresponding O’Brien–Fleming frequentist designs, largely due to the more frequent interim analyses typically used with the Bayesian designs and the ability of the Bayesian designs to terminate early and conclude equivalence. Adding stochastic curtailment to the frequentist designs and using the same number of interim analyses results in largely equivalent trials. An example of a Bayesian design for the data safety monitoring of a clinical trial is given. Our design assumes independence of the probabilities of success in the two trial arms. Additionally, we have chosen non-informative priors and selected loss functions to produce trials with appealing frequentist error rates, rather than choosing priors that reflect realistic prior information and loss functions that reflect true costs. Our Bayesian designs allow interpretation of the final results along either Bayesian or frequentist lines. For the Bayesian, they minimize the total cost and allow the direct calculation of the probability density function for the difference in efficacy. For the frequentist, they have well-characterized type I and II error rates and in some cases lead to a reduction in the mean sample size.

    Clinical Trials, Vol. 4, No. 1, 5-14 (2007)
    DOI: 10.1177/1740774506075764
