Clinical trials are broken. Here’s why.

Someone emailed me with some thoughts on systemic exertion intolerance disease, in particular, controversies regarding the Pace trial which evaluated psychological interventions for this condition or, should I say, set of conditions.

I responded as follows:

At one point I had the thought of doing a big investigative project on this, formally interviewing a bunch of people on all sides of the issue, etc., but then I didn’t really have the energy to do so. When Rehmeyer’s book came out, I had the idea of reviewing it, and using that review as a springboard to talk about the larger issues, as I think this goes beyond Pace to the more general question about how to develop and evaluate therapies for poorly-understood medical conditions. The standard paradigm of statistically based science goes like this:

1. Come up with a cool idea.

2. Test it in a clinical trial.

The trouble is that part 1 is not well integrated with data—there’s not really a quantitative approach to developing potential treatment ideas.

And part 2 has the problem that classical statistical methods don’t work when studying small effects; see here.
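To give a sense of what I mean, here is a toy simulation (the effect size, sample size, and noise level are invented for illustration, not taken from any particular trial): with a small true effect, the usual significance-testing routine has low power, and the estimates that do reach significance exaggerate the true effect.

# Toy simulation of testing a small effect with a conventional two-arm comparison.
# All numbers below are illustrative assumptions, not from any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.1   # small true effect, in standard-deviation units (assumed)
n_per_arm = 100     # patients per arm (assumed)
n_sims = 10_000

significant_estimates = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    estimate = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        significant_estimates.append(estimate)

power = len(significant_estimates) / n_sims
exaggeration = np.mean(np.abs(significant_estimates)) / true_effect
print(f"power ≈ {power:.2f}; significant estimates average ≈ {exaggeration:.1f}x the true effect")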

With Pace there is an additional difficulty that systemic exertion intolerance disease, or chronic fatigue syndrome, is not so well defined, and so it seems perfectly plausible to me that graded exercise therapy and cognitive behavioral therapy could help some subset of people characterized as having chronic fatigue syndrome, even if useless to Julie Rehmeyer and others. But the larger problem is the disconnect between the development and evaluation of the treatments. To make progress, I think, we need to move beyond the idea that the idea and all the details of the treatment come from the outside, with the data and statistical analysis being only to evaluate (or, unfortunately, to “prove” or “demonstrate”) pre-existing ideas. One thing I like about working in pharmacometrics is that there’s more of a continuous path between data, science, and policy.

32 thoughts on “Clinical trials are broken. Here’s why.”

  1. I think that these fair concerns can seem a bit beside the point in the context of the PACE trial. In pharmacometrics, would they conduct a nonblinded trial relying on subjective self-report outcomes, and then mid-trial tell the participants receiving particular interventions that their treatment had been shown to be effective for symptoms x, y & z in previous trials?

    Also, the difficulty of making claims about treatment efficacy to patients with a diagnosis that is likely to include those suffering from a range of different issues is complicated further by the culture surrounding ‘CFS’. Some proponents of a biopsychosocial approach have promoted making positive and optimistic claims to patients about the nature of their ill health and their prognosis, as a form of ‘treatment’ in itself. If one believes that being honest with patients about how little is known, and how legitimately scary their situation is, risks worsening their health through some cognitive process, is honesty still the best policy? I would say ‘yes’, but some within UK medicine seem to disagree. I think that this form of ‘management’, necessarily conducted without informed consent, would be problematic even if those patients whose symptoms were perpetuated by ‘negative’ cognitions could be easily identified. It is easy to imagine how such an approach would come to be resented by patients.

    These sorts of moral issues are particularly significant in the UK, where government policies towards disabled people have recently been condemned by a UN inquiry, and those responsible for the research underpinning these policies are also often tied to the PACE trial. http://www.independent.co.uk/news/uk/uk-disability-rights-un-report-government-treatment-disabled-people-a7616101.html
    http://www.disabilitynewsservice.com/biopsychosocial-basis-for-benefit-cuts-is-cavalier-unevidenced-and-misleading/
    http://www.centreforwelfarereform.org/news/call-for-ret-publication/00320.html

    It seems that lots of people with prior concerns about the state of medical research can use PACE as a case-study, and that’s a valuable contribution that the PACE authors have made to academia. I expect that it will help to enliven lectures about waste in research for years to come!

      • Ben Goldacre studied for his PhD under Simon Wessely, and while PACE was going on claimed things like “i have to say i think medically unexplained symptoms and the mechanisms of psychological factors in ill health are officially the most interesting thing in medicine right now, a huge amount of very real distress, little clear knowledge, and i’m totally moving into it as a dayjob, research if not clinical.” http://www.badscience.net/2007/11/bmj-column-beware-of-mentioning-psychosocial-factors/#comment-17961

        Many patients see evidence of hypocrisy in the way that Goldacre has encouraged causing trouble for those promoting bad science through FOI requests, naming and shaming, mockery, etc., yet also worked to promote the post-PACE media campaign which promoted prejudices against patients who had been raising concerns about the claims made about the value of CBT and GET, and the way in which patients were being treated under the biopsychosocial model. This was presented as militant anti-science against rigorous research, yet minutes from a meeting where the PACE researchers and their colleagues discussed their ‘harassment’ report that “harassment is most damaging in the form of vexatious FOIs”. http://www.centreforwelfarereform.org/news/major-breaktn-pace-trial/00296.html

        The recent Tribunal which ordered the release of some of the PACE trial’s data found that claims about activist behaviour were “grossly exaggerated and the only actual evidence was that an individual at a seminar had heckled Professor Chalder.” The Information Commissioner reported of testimony under oath from Trudie Chalder, PACE Principal Investigator, that “she accepts that unpleasant things have been said to and about PACE researchers only, but that no threats have been made either to researchers or participants. The highest she could put it was that some participants stated that they had been made to feel “uncomfortable” as a result of their contact with and treatment from her, not because of their participation in the trial per se.”
        http://www.informationtribunal.gov.uk/DBFiles/Decision/i1854/Queen%20Mary%20University%20of%20London%20EA-2015-0269%20(12-8-16).PDF

        Examples of Goldacre tweets:

        Hate attacks by Chronic Fatigue extremists on researchers, & what those scientists rly say about CFS http://bit.ly/pAlXiU https://web.archive.org/web/20170712130528/https:/twitter.com/bengoldacre/status/107064835677753344

        How a small, vindictive community of CFS activists prevent discussion of the disease @maxpemberton http://qako.me/PZZmLP
        http://web.archive.org/web/20170712132832/https:/twitter.com/bengoldacre/status/250137025615917057

        The interconnected nature of the UK medical Establishment means that personal connections can be important, and the PACE researchers are well connected, with many influential figures having invested their own reputations in the claim that patient activism challenging PACE, and challenging claims about the value of CBT and GET, is ‘anti-science’. This history makes it very difficult for them to now look afresh at the evidence and admit that, largely, patients had got it right but prejudices about them led to their concerns being dismissed. That seems to explain why we are stuck in this strange situation of no-one really being willing to defend PACE in open debate, yet UK authority figures acting as if there are no real problems with it (“no trial is perfect…”, “I don’t have time to look into all the details, but…”, “none of the papers have been retracted”). To me, it seems that Ben Goldacre is a good example of the worst of this – presenting himself as a scourge of spun and hidden data, unless spun and hidden data serves the interests of his mates.

        • “To me, it seems that Ben Goldacre is a good example of the worst of this – presenting himself as a scourge of spun and hidden data, unless spun and hidden data serves the interests of his mates.” – Just to clarify this final point, although he has failed to criticise the way data from PACE was spun, or to support the patient who had to make huge sacrifices to fight to get a partial release of anonymised PACE data, Goldacre did eventually suggest that PACE data should be given to YODA / CSDR. Given his history on this issue I think that it’s fair to expect rather more from him.

        • In late 2015, neither Ben Goldacre nor I was actively involved in discussions of chronic fatigue syndrome. I was, however, in regular contact with Simon Wessely. When Simon saw me tweeting my very first tweets about chronic fatigue syndrome and the PACE trial, he expressed puzzlement as to why I would get involved in the controversy. He suggested I spend months familiarizing myself with the issues before commenting again. When I didn’t heed his advice, he direct-messaged me that he could arrange a debate between myself and the PACE investigators with Mental Elf as the moderator. But I would have to be quite nice to the PACE investigators on social media, especially because they were such very nice people. When I later met Mental Elf for the first time, he told me that he had raised the issue of a debate, but it was never given serious consideration by the PACE investigators.

          I also had a cordial relationship with Ben Goldacre. We had started on opposite sides when I criticized a terrible meta-analysis in the British Journal of Psychiatry supposedly showing that abortions were a major cause of mental disorder and suicide among women of childbearing age. I began publicly criticizing the meta-analysis, creating quite a controversy. The editor of BJP brought on Ben Goldacre as a special editor to mediate, but he then came down on my side. He later even proposed to me that I get involved in what became the Open Trials project.

          I would’ve expected Ben to jump in and criticize the PACE trial because of its switched outcomes and the investigators’ refusal to share data. The patients really looked to Ben to get involved. However, that became unlikely when Simon Wessely lavishly praised a new book by Goldacre in the Lancet, and in the long review described how Simon had saved Goldacre from a career doing standup comedy after he left psychiatry (seriously!).

          http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(15)60588-8.pdf

          Ben then became quite hostile whenever it came up on Twitter that he should have something to say about the PACE trial, and accused me of trolling him. There would be an exchange, and it would die down, but then a week or two later patients would be tweeting again, starting it anew. Neither Ben nor I understood the dynamic, and each thought the other one was unnecessarily causing trouble.

  2. I think there will always be a need for means of evaluating ideas that come from outside the data sources, such as from other areas of science. For example, if somebody comes out of the Amazon and says “I found a tribe that is using this leaf to cure malaria” are you going to need to know how many different leaves their ancestors tried before they developed the tradition?

  3. I think a better notion is feedback control systems / optimal control. Come up with an idea for a possible cause, and a treatment that addresses that cause; come up with measurement ideas to monitor what that treatment is doing, and the quality of life of the patient; recruit patients; take measurements through time while treatment progresses; look at the time series of results; and try to find ways to make the treatment affect the time series in the maximally positive way (i.e., adjust dosage, frequency, the subgroup being treated, the combination of therapies, whatever).
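    (A toy sketch in Python of what I mean, with a completely made-up dose-response curve and a deliberately crude adjustment rule; it illustrates only the monitor-and-adjust loop, not any real dosing protocol.)

    # Caricature of a feedback-control approach to treatment: measure an outcome
    # repeatedly and nudge the dose in whichever direction improves it.
    # The dose-response function and all numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    def measure_outcome(dose):
        # Hypothetical noisy outcome with an optimum (unknown to the controller) near dose = 3
        return -(dose - 3.0) ** 2 + rng.normal(0.0, 0.5)

    dose, step = 1.0, 0.5
    best = measure_outcome(dose)
    for week in range(20):
        trial_dose = dose + step
        outcome = measure_outcome(trial_dose)
        if outcome > best:             # outcome improved: accept the new dose
            dose, best = trial_dose, outcome
        else:                          # outcome worsened: reverse direction and shrink the step
            step = -0.5 * step

    print(f"dose after 20 weeks ≈ {dose:.2f} (true optimum assumed at 3.0)")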

  4. It seems what you are recommending here is some sort of new statistical paradigm for ‘Precision Medicine’ (personalized/stratified/alternative jargon are available). This is much needed as the standard paradigm, as you describe it, will fare worse and worse as more data from electronic records, self-monitoring, all the ‘omics, etc allow greater sub-division of patient groups. To put it another way, a lot of “well understood” medical conditions are going to start looking as poorly understood as chronic fatigue syndrome for at least some sub-groups of patients.

    Most of the potential of Precision Medicine type advances will be squandered if researchers are not comfortable with exploratory analysis and embracing variation. Regulators and payers will also need to adapt to allow evidence to come in more nuanced forms. I think there is a long way to go to be able to have the same kind of continuous path between data, science, and policy as exists for standard pharmaceuticals.

  5. I am a pharmaceutical statistician and have been for over 40 years, with a 4-year interlude during which I held a chair in Medical Statistics at King’s College in London. I have also spent over 10 years on grant bodies of the National Institute for Health Research and the Medical Research Council in the UK.

    I do not recognise your description of the process of developing ideas for new treatments, either within the pharmaceutical industry or in academic research medicine. The idea that coming up with a “cool idea” without supporting data from pre-clinical investigations would somehow automatically lead to a grant for a clinical trial in publicly-funded medical research is in my experience nowhere near the reality. Nor in the pharmaceutical industry would human clinical trials be started without a biological rationale supported by in-vitro and/or in-vivo data.

    Whilst there are notable exceptions, such as antibiotics and Viagra, modern medical research is not a serendipitous process.

    • Andy Grieve:

      See the last sentence of my post above: “One thing I like about working in pharmacometrics is that there’s more of a continuous path between data, science, and policy.”

    • Andy Grieve:

      It might depend on where you work – when I was in teaching hospitals there was very little real collaboration between basic and clinical scientists (and often some disdain). I am not expecting all the basic research and clinical research to be done in the same teaching hospital (bench to bedside), but rather that the clinicians involve basic researchers to review the basic research being cited in their grant applications, and that basic researchers involve clinicians regarding possible applications of their work.

      I think there also is a big difference between citing others’ basic science work in a grant application to support a proposed clinical trial and actually being involved or connected with it. Why else did the recognition of selective reporting and sloppy research methods arise in animal studies well after concerns were raised about this in clinical trials? (My sense: clinical trials mid-1980s, and animal studies late 1990s or early 2000s, when clinical trialists wondered why the trial failed.)

      Now I did work in Oxford for two years, so I would not be too surprised if it’s different on the other side of the pond, but Andrew is talking about the PACE trial, and “The PACE trial was funded by the UK Medical Research Council, Department of Health for England, Scottish Chief Scientist Office”.

  6. Well, the pharma industry is likely one of the few areas where the vested interests generally line up across the development and evaluation of the treatments – as without some credible evidence of an effect over placebo, a reasonable cost/benefit, and, increasingly, on-market safety, they can’t make good profits. On the other hand, my old strategy professor thinks the marketing departments have largely subverted this into me-too products of little real benefit that they can get away with promoting.

    In academic research, the various interests are diverse and so funding and effects are dispersed into largely isolated islands – an archipelago of research empires and inefficiencies.

    There was a recent panel report trying to address that in Canada – “overall stewardship of the federal research ecosystem needs to be strengthened” http://www.sciencereview.ca/eic/site/059.nsf/vwapj/ScienceReview_April2017.pdf/$file/ScienceReview_April2017.pdf

  7. The PACE trial’s central flaw is the combination of subjective outcome measures and lack of blinding. In the author’s preferred treatment arms (CBT and GET), patients were encouraged to expect improvement or perhaps even recovery, and to see their condition as reversible, while this was not done in the APT or the ill-defined control group (SMC). During the trial, a newsletter was circulated which praised the efficacy of the treatment and while the treatment wasn’t named, it was clear from the context that it was CBT/GET.

    A more detailed discussion of these problems by Jonathan Edwards: “PACE team response shows a disregard for the principles of science”
    http://journals.sagepub.com/doi/full/10.1177/1359105317700886

    The statistical analysis of results by the authors deviated from the original protocol in such a way that previously nonsignificant recovery rates increased by a factor of about four and became statistically significant.
    A detailed discussion of this can be found here: Can patients with chronic fatigue syndrome really recover after graded exercise or cognitive behavioural therapy? A critical commentary and preliminary re-analysis of the PACE trial
    http://www.tandfonline.com/doi/abs/10.1080/21641846.2017.1259724?journalCode=rftg20
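    To illustrate the mechanism with invented numbers (not the PACE data): relaxing a “recovery” threshold on a continuous score can multiply the apparent recovery rate several-fold without any change in the underlying scores.

    # Hypothetical post-treatment scores on a continuous scale, higher = better.
    # The distribution and both thresholds are illustrative assumptions, chosen so the
    # change comes out at roughly a factor of four.
    import numpy as np

    rng = np.random.default_rng(2)
    scores = rng.normal(60, 15, size=600)

    strict_threshold = 78   # stricter, protocol-style recovery criterion (illustrative)
    relaxed_threshold = 60  # relaxed, post-hoc criterion (illustrative)

    strict_rate = np.mean(scores >= strict_threshold)
    relaxed_rate = np.mean(scores >= relaxed_threshold)
    print(f"strict: {strict_rate:.1%}, relaxed: {relaxed_rate:.1%}, "
          f"ratio ≈ {relaxed_rate / strict_rate:.1f}x")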

    There are other problems with PACE. It’s bad science. The full extent of the problem is however much bigger: these issues are the norm in trials of psychological or behavioural interventions for CFS, and from what I’ve seen, they are also widespread in research concerning other conditions. I know of only a single double-blind, placebo-controlled trial that included a CBT treatment group, and the result was null: “Immunologic and psychologic therapy for patients with chronic fatigue syndrome: a double-blind, placebo-controlled trial.”
    https://www.ncbi.nlm.nih.gov/pubmed/8430715

    Psychological medicine is in deep trouble and doesn’t realize it yet.

    It took 5 years and a legal battle over the right to perform an independent re-analysis of a portion of the data to expose these problems with the PACE trial. Awareness of these problems is still limited. Meanwhile, more similarly flawed studies are being published.

    Patients are being harmed in various ways by incorrect information produced with such poor methodology:
    – By waste of research funding.
    – By waste of treatment funding (in response to claims that CBT/GET are effective treatments, clinics have been set up to treat patients with these interventions).
    – Multiple patient surveys report that GET was found harmful by the majority of patients. CBT may also be harmful.
    – By creating the public perception that CFS is not a devastating serious illness or that it is imaginary (CBT and GET are based on the idea that the condition is perpetuated by beliefs and behaviour, and the authors promote a definition of CFS that is considered excessively broad by international researchers).
    – By creating the public perception that there is no pressing need to fund biomedical research.
    – By creating the public perception that biological aspects of CFS are unknown or not important (by never acknowledging them).

    CFS is indeed severely underfunded considering the impact on patients and the economic losses caused by disability: “Estimating the disease burden of ME/CFS in the United States and its relation to research funding”
    https://oatext.com/Estimating-the-disease-burden-of-MECFS-in-the-United-States-and-its-relation-to-research-funding.php

    There are about 17 million CFS patients world wide and there has been very little progress in a generation. In my view this is primarily due to lack of investment in research. Health authorities are unlikely to invest in biomedical research when CBT/GET appear to be proven effective by the evidence base.

    I’m writing this as a patient who has been ill for nearly two decades. We are disabled, suffering, some of us are dying quietly, and we all need real treatments. In its current state, research into our condition is dysfunctional. Please help us fix this mess.

    There are signs that the situation is improving. PACE is being talked about and critics have been able to speak freely. Biomedical research is making progress, although more due to luck than anything else. The CDC has apparently become aware that CBT/GET have been promoted with dubious evidence and has retracted its recommendation for these treatments. Yet there is still a long way to go, and it is clear that there is a bubble in psychological medicine that will pop sooner or later. The sooner it happens the better.

    PS: The Journal of Health Psychology has published a series of articles on the topic: http://journals.sagepub.com/action/doSearch?AllField=PACE&SeriesKey=hpqa which are useful for understanding the problem better, although the broader context in which this is all occurring is also very important.

    • “The PACE trial’s central flaw is the combination of subjective outcome measures and lack of blinding.”

      This is a big problem in many, many studies, particularly in device trials (where blinding is often impossible).

      Other than asking for objective outcome measures, what are reasonable solutions? I can’t think of any….

      • It should generally be possible to use objective outcomes, or at least outcomes that are less susceptible to biased self reporting. Part of the problem is that authors intentionally choose the least reliable outcome measures.

        In CFS, reduced activity levels are an important feature and can be measured with an actometer. The patient may adjust their behaviour initially but the impact of this should be limited if the device is worn for a sufficient amount of time. The Institute of Medicine concluded in 2015 that there is sufficient evidence that CFS patients experience something called “post-exertional malaise” which limits their activity levels (with worsening the more a patient exceeds their safe activity level).

        Cardiopulmonary exercise testing is another possibility. There are standardized protocols.

        Both of these will probably work in a number of other conditions where disease severity cannot yet be measured directly.
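        As a rough sketch of how the actometer idea could be analysed (the data layout and all numbers here are hypothetical, just to make the point concrete): drop an initial adjustment period, average each participant’s daily activity over the remaining wear time, and compare arms on that objective summary rather than on self-report.

        # Hypothetical actometer analysis: a participants x days matrix of daily activity counts.
        # Arm sizes, wear period, burn-in length, and count distributions are assumptions.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n_per_arm, n_days, burn_in_days = 50, 28, 7

        def per_person_activity(daily_counts):
            # Average daily activity after discarding the initial adjustment period
            return daily_counts[:, burn_in_days:].mean(axis=1)

        # In this example the arms are generated identically, i.e. no real treatment effect
        control = per_person_activity(rng.normal(5000, 1500, size=(n_per_arm, n_days)))
        treated = per_person_activity(rng.normal(5000, 1500, size=(n_per_arm, n_days)))

        _, p = stats.ttest_ind(treated, control)
        print(f"mean daily activity: control {control.mean():.0f}, treated {treated.mean():.0f}, p = {p:.3f}")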

        If bias can’t be designed out or minimized to insignificant levels, then strategies to equalize biasability must be used. For example if two psychotherapies are compared, by equalizing therapist contact time as well as other relevant factors. Then again one could argue that depression and anxiety will affect outcomes such as education and employment status, relationships, and again probably activity levels.

        These are just some of my thoughts as a patient. I’m sure if experts put their minds to this problem they can do much better.

        At the very least, it needs to be acknowledged that there is no sense in doing trials with a PACE trial style design.

          • The PACE trial had purchased actometers to use as an outcome measure, but these were dropped, only being used at baseline to be assessed as a predictor. Soon after this was announced, results from actometers for three earlier CBT for CFS RCTs were released, showing that CBT did not allow patients to increase their activity levels. It is a shame that this data was not collected for PACE, and that for these three earlier trials this data had not been released with their main results, but only in a separate paper on mediation which argued that these results showed that CBT was so useful that it could successfully treat patients’ severe and disabling fatigue without them even needing to increase their activity levels: https://www.ncbi.nlm.nih.gov/pubmed/20047707

          I had started writing to respond to some of the wider issues, but then realised that Wilshire’s recent paper on PACE provides a better summary than I could manage, so I hope it will be okay if I just quote a few paragraphs from that:

          “The next step is to ensure that therapists present all treatments to participants as equally likely to lead to improvement. This is especially important when the primary outcomes are self-report measures, since these measures can be strongly influenced by patients’ expectations (Hróbjartsson et al., 2014). Unfortunately, in PACE, CBT and GET were promoted to patients during therapy as highly effective. For example, CBT participants were told that CBT was ‘a powerful and safe treatment which has been shown to be effective in … CFS/ME’ and that ‘many people have successfully overcome CFS/ME using cognitive behaviour therapy, and have maintained and consolidated their improvement once treatment has ended’ (Burgess and Chalder, 2004: 123). GET participants were told that ‘in previous research studies, most people with CFS/ME felt either “much better” or “very much better” with GET’ and that GET was ‘one of the most effective therapy strategies currently known’ (Bavinton et al., 2004: 28). No such information was given to the remaining two groups. In any rigorous trial, researchers need to at least acknowledge this potential confound and consider its possible impact on results.”

          “Specific problems and limitations

          A researcher’s enthusiasm for a particular treatment can also lead them to overinterpret their findings or overlook limitations. One limitation of the PACE trial – which has been pointed out by critics, but never fully acknowledged by the investigators – is that the treatment effects were almost entirely limited to self-report measures. Most of the objectively measurable outcomes did not yield significant treatment effects, for example, fitness and employment status did not differ across treatment groups when measured an entire year after trial commencement, and although mean walking distances were higher after treatment with GET than after medical care only, this difference was small (approximately 30 m, less than 10% of the baseline walking distance1), and no such benefit was observed for the CBT group.

          Again, the problem here is that, in a non-blinded study, self-report measures are highly vulnerable to response bias. The size of this bias is not trivial. A recent meta-analysis of clinical trials for a range of disorders calculated that when participants were non-blinded to treatment allocation, self-reported improvements associated with treatment were inflated by an average of 0.56 standard deviations relative to comparable blinded trials. Importantly, no such inflation was observed when the outcomes involved objectively measurable indices (Hróbjartsson et al., 2014). Therefore, in order to securely demonstrate the efficacy of any intervention within a non-blinded design, researchers need to show that self-reported improvements are supported by evidence based on more objectively measurable outcomes.

          It would be unreasonable to expect the PACE investigators to solve the problem of participant response bias single-handedly. But in a trial of this size and importance, we can reasonably expect them to take some simple measures, such as balancing the information they provide to different participant groups about effectiveness. We can also reasonably expect them to minimise – or at the very least acknowledge – potential sources of bias. And we can reasonably expect researchers to acknowledge and discuss potential red flags, such as a lack of agreement between self-reported improvements and objectively measurable outcomes. Instead, the PACE investigators did not even report the most worrying results until several years after publication of the main findings, and when they did, they dismissed them as unimportant (see especially, Chalder et al., 2015; McCrone et al., 2012).”

          http://journals.sagepub.com/doi/full/10.1177/1359105317700885
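          To make the size of that bias concrete, here is a small simulation of my own (not from Wilshire and colleagues): a reporting bias of roughly 0.56 SD in an unblinded arm produces an apparently significant benefit on a self-report measure even when the objective outcome is unaffected. The arm sizes and the zero objective effect are assumptions for illustration.

          # Simulated unblinded trial: the treatment does nothing to the objective outcome,
          # but the unblinded treated arm over-reports improvement by 0.56 SD (the figure
          # quoted above from Hróbjartsson et al.). Sample sizes are illustrative.
          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(4)
          n = 160  # per arm (illustrative)

          objective_control = rng.normal(0.0, 1.0, n)
          objective_treated = rng.normal(0.0, 1.0, n)          # no objective effect assumed

          self_report_control = objective_control + rng.normal(0.0, 1.0, n)
          self_report_treated = objective_treated + rng.normal(0.0, 1.0, n) + 0.56  # response bias

          for label, a, b in [("objective", objective_treated, objective_control),
                              ("self-report", self_report_treated, self_report_control)]:
              _, p = stats.ttest_ind(a, b)
              print(f"{label}: difference = {a.mean() - b.mean():.2f}, p = {p:.3f}")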

  8. I may have misunderstood, but (I guess like Grieve) I can’t see the basis of the claim:
    “The trouble is that part 1 [come up with a cool idea] is not well integrated with data — there’s not really a quantitative approach to developing potential treatment ideas.”
    The much maligned and yet common process of data dredging – less pejoratively, EDA or screening for effects – has for decades been a source of ‘cool ideas’ that is completely integrated with data. Admittedly it is often done badly but many methods for doing it well (e.g., cross-validation, hierarchical modeling) are now extensively developed. And a tradition going back much further looked for ‘cool ideas’ in the less formal data comprising folk and clinical observations.

    As for this comment:
    “part 2 [test in a trial] has the problem that classical statistical methods don’t work when studying small effects; see here.”
    – stat-smart epidemiologists have been remarking for decades that stat methods – and fixes that supposedly adapt them to observational research – don’t work well for small effects (albeit what is ‘small’ varies with context). Not only because feasible studies often lack the accuracy or power to distinguish among harmful, unimportant, and beneficial effects, but also because standard analytic methods use models that assume away uncontrolled sources of variation (“biases”) with no justification.

    It appears the PACE example is a focused study of disease treatments, not about effect screening, so my question is: If one had the data, how would the problem with it differ from those I just mentioned in my response part 2?

    • Sander:

      In the Pace example, there were good prior reasons to consider graded exercise therapy and cognitive behavioral therapy as possible treatments, but the difficulty is that there are lots of patients for whom we’d expect these treatments to not be effective. It’s hard to handle this in the usual “average treatment effect” framework.
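      (As a toy illustration of this, with invented numbers: if a therapy genuinely helps only a modest subset of a heterogeneous diagnostic group, the average treatment effect is diluted and a conventional two-arm comparison can easily miss it.)

      # Toy simulation of a heterogeneous patient group in which only a subset can respond.
      # Responder fraction, effect size, and arm sizes are illustrative assumptions.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      n = 300                     # per arm (assumed)
      responder_frac = 0.2        # fraction of the diagnostic group the therapy can help (assumed)
      effect_in_responders = 0.8  # benefit for that subset, in SD units (assumed)

      control = rng.normal(0.0, 1.0, n)
      is_responder = rng.random(n) < responder_frac
      treated = rng.normal(0.0, 1.0, n) + effect_in_responders * is_responder

      ate = treated.mean() - control.mean()   # diluted by the non-responders
      _, p = stats.ttest_ind(treated, control)
      print(f"average treatment effect ≈ {ate:.2f} SD (vs 0.8 in responders), p = {p:.3f}")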

      Looking forward, and considering systemic exertion intolerance disease more generally, there are various research teams pursuing ideas for therapies, and it’s my impression that what happens is that once a therapy is proposed, it’s tested in this clinical-trial approach which is pretty brute-force in that there’s not much effort put into understanding how the treatments work, during the trial. That is, there’s close to a complete separation between (1) the research used to develop the treatments, and (2) the research used to evaluate the treatments. And I think that’s a problem.

      Regarding data analysis: yes, I agree with your points. And, when treatment effects are small and heterogeneous, these can be problems not just with observational studies but also with perfectly clean randomized experiments.

  9. From my experience I would agree that, at least in academic trials, there can be problems of what interventions get evaluated. It’s something that has bothered me for a while. It’s not too hard to get funding based on some selective reporting plus a dose of spin and optimism, especially in areas where patients don’t have many treatment options, and often the supporting data might not be of the best quality (and they are often just presented as true, without much critique – it’s published, right? so it’s a “finding”). And it’s true that the background data don’t often play much formal role in trial planning and execution; the trial is seen as a stand-alone piece of research. That’s a consequence of the traditional methodology. So I think it’s true that sometimes trials are based on not a lot more than a “cool idea”. Not always, but it happens.

    As Andy Grieve and others say above, I think the pharma industry is generally much better at this than academia.

    • Simon: Largely agree, but it would be nice to have a better assessment than “not always” – what percentage of trials were conducted based on a faulty sense of the existing theory and evidence?

      I do remember being involved in planning a clinical trial for a new combination of drugs where the basic scientists that we engaged (which was unusual) figured out that the motivation for combining them was simply bogus – so the trial was shut down. And the clinician who had the cool idea, and had been giving patients the combination, was told not to do something like that again. Had that clinician applied for their own funds rather than going to our group that already had funds, the trial likely would have been done.

  10. Keith: I agree. It’s a piece of work I’ve thought about trying to do, but not really thought through how it could be done. It would involve looking at the research underlying the justification for doing the trial. Wouldn’t apply to all trials – often there is the situation that doctors are doing A and B, and we just don’t know which is better (or which is better for whom), so resolving that uncertainty seems a good justification. But other times you see trials proposed based on animal/lab/clinical data where there might be lots of problems of forking paths, p-hacking and the rest. I don’t really know but it seems pretty likely that there are big issues there.

  11. Andrew,

    You seem to think it’s reasonable that GET and CBT may work in a subset of ME/CFS patients. This is a flawed assumption: by definition, an ME/CFS patient is intolerant to exertion.

    Likewise, there is nothing to suggest their symptoms are false. Can you name any other medical area where a patient presents with symptoms and, based on that presentation alone, the patient is deemed to have false illness beliefs?

    Does that seem reasonable?

    Another point: rather than testing one’s “cool” idea in a clinical trial, one simply has to try to disprove it.

    This would have been a very simple approach had any objective measurements been taken.

    I suggest you reread your blog from last year and ask yourself if Simon Wessely pulled some wool over your eyes.

    Perhaps also you should familiarise yourself with the basics of ME/CFS and the PACE interventions before writing any more blog posts on this topic?

    • Burn:

      (a) SEID, or whatever it is called, is not defined precisely. I do find it reasonable to believe that CBT can be effective for some people who are diagnosed with this condition.

      (b) Regarding “cool ideas,” I think you misinterpreted my post above. What I wrote was:

      The standard paradigm of statistically based science goes like this:

      1. Come up with a cool idea.

      2. Test it in a clinical trial.

      But this is the approach to science that I think is broken. I don’t endorse “coming up with a cool idea” as a research strategy.

      • Surely after coming up with a cool idea any sensible researcher would run the idea past patients and test its validity in the real world before moving on to a 5-million-pound study. Since, long before the trial started, the PACE authors were well aware that patients find uncontrolled exercise harmful, and physiological abnormalities are described in the diagnostic criteria (i.e. the CCC and ICC), shouldn’t attention have been paid to measuring the known physiological abnormalities and to objective measures? That is, if the aim was to move science forward and help patients rather than feather the nests of the insurance industry.

        • Henriette:

          Again, I’m not recommending the “cool idea” approach to research. In any case, I agree with you that researchers should work with patients. And I also agree with you that it’s not the best use of scarce research funds to attempt to perform a definitive randomized trial. That was the main point of my New Yorker article linked to above.

        • “Surely after coming up with a cool idea any sensible researcher would run the idea past patients and test its validity in the real world before moving on to a 5-million-pound study.”

          Agreed. (Unfortunately, my impression is that involving patients in design of a study does not seem to be common in clinical trials of any sort.)

  12. The PACE manual mentions lots of objective monitoring measures that were ignored by the PACE investigators and that would have objectively determined the physical health of the patients at baseline. Heart rate monitoring is recommended by NICE and mentioned in the PACE manual, but was ignored by the PACE investigators. The 6-minute walking data has been released, but nearly 1/3 of the final 6-minute walking test results at 52 weeks were not released. What was released showed that 2/3 of the participants were very little better at 52 weeks.

    The step test data, heart rate for the step test, and Borg exertion test have not been released to date. It is a travesty of science that an exercise trial was run with limited clinical exercise assessments or physiological outcome measures, especially given the known heart rate abnormalities.

    It is a travesty that 5 million pounds has been spent and so, so little knowledge gleaned. The amount of money spent on 640 patients, in a very low-tech trial, is at odds with what would reasonably be expected to justify the $$$$.
