The New England Journal of Medicine wants you to “identify a novel clinical finding”

Mark Tuttle writes:

This is worth a mention in the blog.

At least they are trying to (implicitly) reinforce re-analysis and re-use of data.

Apparently, some of the re-use efforts will be published soon.

My reply: I don’t know enough about medical research to make any useful comments here. But there’s one bit that raises my skepticism: the goal is to “use the data underlying a recent NEJM article to identify a novel clinical finding that advances medical science.”

I’m down on the whole idea that the role of statistics and empirical work is to identify novel findings. Maybe we have too much novelty and not enough reproducibility.

I’m not saying that I think the whole project is a bad idea, just that this aspect of it concerns me.

P.S. A lot more from Dale Lehman in the comments below.

41 thoughts on “The New England Journal of Medicine wants you to “identify a novel clinical finding””

  1. Well, this is old news, probably due to Andrew’s time lag to posting. But since it is a challenge I entered and am still mad about, here are some pertinent details:

    1. The NEJM editors had published an anti-sharing editorial which attracted much criticism. They felt pressured to do something that either appeared pro-sharing or actually might move data sharing (from clinical trials) forward. So, they started this Challenge.
    2. There were a number of awkward impediments to participating – including the need to get IRB approval (even though the data was anonymized and had already been used in publications) and to have an officer with financial authority at your institution/organization sign off (for what?).
    3. 279 teams entered, 143 completed (there was a qualifying round and then a challenge round – ostensibly to make sure that entrants into the latter knew what they were doing enough to be allowed to participate), and 3 winners were selected.
    4. I entered but did not win. My own “discovery” was that the results of the more aggressive blood pressure treatment depended greatly on whether or not participants in the trial had missed any of their scheduled visits – particularly if they missed one of the first 3 monthly visits that were in the protocol.
    5. Since it appeared to me that compliance with the protocol was important, I was particularly interested in data about noncompliance, so I asked about data on “adherence to antihypertensive medications,” which the protocol said was collected in the trial. I was told that the original publication did not use that data, so I could not have it (so much for “novel” findings).
    6. To make matters worse, I subsequently discovered that a different article had been published in a different journal (by some of the same authors) using the very adherence scale data I had asked for.
    7. To make matters even worse, I sent a note to the editors complaining about this, and saying that either the authors misled the NEJM or the journal was complicit in this. I got no response.
    8. The final winners did some nice work, but 2 of the 3 winners created decision tools (one was an app) providing a rating for a prospective patient as to whether or not more aggressive blood pressure treatment was recommended. I did not (and do not) think this is such a novel finding, and it disturbs me that these entries focused on discrete (binary) choices – the uncertainty about the estimated effects disappeared. By contrast, I submitted a way to view the confidence intervals (yes, sorry, I still live in that world) for the primary effects and adverse events simultaneously (a rough sketch of that kind of display follows below).
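
    As a rough illustration of that kind of display (not Lehman’s actual entry – all the estimates below are made-up placeholders), a minimal forest-plot-style sketch:

    ```python
    # Show the 95% CI for the primary effect and for adverse events on one
    # axis, so benefit and harm can be read together. All numbers are
    # hypothetical placeholders, not actual trial results.
    import matplotlib.pyplot as plt

    # (outcome, point estimate of risk difference, CI low, CI high)
    outcomes = [
        ("Primary outcome",    -0.016, -0.026, -0.006),
        ("Hypotension",         0.010,  0.003,  0.017),
        ("Syncope",             0.006,  0.000,  0.012),
        ("Acute kidney injury", 0.018,  0.009,  0.027),
    ]

    fig, ax = plt.subplots(figsize=(6, 3))
    for i, (label, est, lo, hi) in enumerate(outcomes):
        ax.plot([lo, hi], [i, i], color="steelblue")  # the interval
        ax.plot(est, i, "o", color="steelblue")       # the point estimate
    ax.axvline(0, color="gray", linestyle="--")       # no-effect reference line
    ax.set_yticks(range(len(outcomes)))
    ax.set_yticklabels([o[0] for o in outcomes])
    ax.set_xlabel("Risk difference (treatment minus control), 95% CI")
    ax.invert_yaxis()                                 # first outcome on top
    fig.tight_layout()
    plt.show()
    ```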

    So, yes, I am upset by the experience, as were a number of other participants. The conference they held afterwards was also quite interesting – the panel of trial patients was unanimous in supporting open data sharing and was shocked that researchers were not enthralled by the idea. Of course, I am a sore loser, and perhaps that is what all the other disgruntled losers feel. But it is hard to escape the bad taste the whole thing left in my mouth.

    When all the dust settles, it may still prove to be a small step toward more open sharing of clinical trial data, and the difficulties may be due to the hard work of changing established and entrenched ways of doing things. But at this point, I can’t bring myself to support that conclusion.

      • Very nice. While some lament that all the low-hanging big-effect fruit has been picked, ripe and juicy “compliance” seems within easy reach. Yet it gets little attention other than when it is blamed for obscuring the view of much smaller effects higher up. Maybe the problem is that, unlike beauty, people can’t figure out how to monetize “compliance”.

    • “the panel of trial patients was unanimous in supporting open data sharing and was shocked that researchers were not enthralled by the idea.”

      I’ve been thinking this for a while. People I know freely give all sorts of info to Facebook, Google, Uber, etc., and when asked about it they simply do not care about their privacy. I bet it is the same for sharing most medical data so that is just an excuse to do less work, get more publications out of it, or whatever.

      • “I bet it is the same for sharing most medical data so that is just an excuse to do less work, get more publications out of it, or whatever.”

        I’m confused by what you mean by this statement. IRBs make the general use of human subjects data almost impossible. So generally speaking, it’s not an issue of laziness but of law. It’s been my experience that in most human subjects studies more time goes into figuring out just exactly what the rules are about handling the data than into actually handling the data.

        • Who is on the IRBs?

          I’ve done IACUC-approved research (the IACUC is the IRB for animals). The primary goal of the law is to minimize damage/suffering. Yet open up any pre-clinical paper that makes some claim like “molecule x is related to behavior y in animal z.” More often than not you will find there is no scatter plot of x vs y. This is either because molecule x and behavior y were measured in different animals (too many animals), or because they don’t want other people to learn how noisy the relationship was (misleading future researchers into misusing animals). Either way, it is against the law and no one cares about it.

          If medical researchers collectively just started putting patient 1, patient 2, etc instead of the names I doubt anyone would stop them as long as they got consent.
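
          Mechanically, the naive pseudonymization described above is only a few lines of code – which is also why, as the replies below point out, it is not real anonymization on its own. A minimal sketch with hypothetical records:

          ```python
          # Naive pseudonymization as described above: swap names for
          # sequential IDs and keep the name->ID map private. (As the replies
          # below note, the remaining fields can still re-identify people.)
          records = [
              {"name": "Alice Smith", "zip": "10001", "weight_kg": 142},
              {"name": "Bob Jones",   "zip": "60614", "weight_kg": 80},
          ]

          id_map, released = {}, []
          for i, rec in enumerate(records, start=1):
              pseudonym = f"patient {i}"
              id_map[rec["name"]] = pseudonym  # held back by the study team
              released.append({"id": pseudonym,
                               **{k: v for k, v in rec.items() if k != "name"}})
          print(released)  # names gone, but zip+weight remain quasi-identifiers
          ```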

        • Anon:

          IRB is like “security theater” at the airport: it persists as a chronic cost because of a mix of inertia, paranoia or worst-case thinking, legitimate security concerns, and the interests of people who benefit from the regime.

        • I am definitely not against IRB approval, but for sure a lot of it is just bureaucratic cruft at this point. Like “lab inspections” where you know the exact date and time months beforehand, or “continued training” that is satisfied by a video playing on your screen to an empty room. It will only catch the very lowest-hanging fruit.

        • Anoneuoid:

          Yes, IRBs are often within institutions, but I still don’t understand your point. It’s really, really difficult to properly handle human subjects research data. This isn’t because the IRBs are trying to keep us from sharing our data; it’s because the IRBs are following the law as written. To be quite honest, it seems like you’ve never worked on human subjects data or you wouldn’t have these opinions. I definitely agree with Andrew about the causes of the dysfunction of IRBs, but with a heavy discount on the last one; I’ve never had an IRB try to claim protection of our data for our own benefit. I suppose experiences can vary, but I would be quite surprised if it were at all widespread for IRBs to attempt to maximize publications by claiming the need to protect data.

          “If medical researchers collectively just started putting patient 1, patient 2, etc instead of the names I doubt anyone would stop them as long as they got consent.”

          This is just so far off base, I don’t know where to start. Lots of researchers have done work showing that even though data set X was supposed to be anonymous, it was not too hard to figure out who the subjects were (especially when augmented with some other public dataset). As a classic, but far from isolated, example, you are forced to censor overweight subjects in many datasets, as it’s speculated that knowing someone’s weight (if it’s unusual) plus something like a ZIP code could uniquely identify them.
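
          That risk can be checked mechanically with a k-anonymity-style count over the quasi-identifiers (ZIP code and weight, as in the example above): any combination that occurs only once is potentially uniquely identifying. A rough sketch with hypothetical data:

          ```python
          # k-anonymity-style check: a (zip, weight) combination seen only
          # once can uniquely identify someone when joined with another
          # dataset. The records below are hypothetical.
          from collections import Counter

          released = [
              {"id": "patient 1", "zip": "10001", "weight_kg": 142},
              {"id": "patient 2", "zip": "60614", "weight_kg": 80},
              {"id": "patient 3", "zip": "60614", "weight_kg": 80},
          ]

          counts = Counter((r["zip"], r["weight_kg"]) for r in released)
          for r in released:
              if counts[(r["zip"], r["weight_kg"])] == 1:
                  print(f"{r['id']}: unique on (zip, weight) -> re-identifiable")
          # Typical mitigations: coarsen the fields (3-digit ZIP, weight bands)
          # or suppress rows until every combination appears at least k times.
          ```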

        • “Lots of researchers have done work showing that even though data set X was supposed to be anonymous, it was not too hard to figure out who the subjects were (especially when augmented with some other public dataset).”

          Sure, that’s why I brought up the IACUC “minimize suffering of animals” law. Lots of researchers know that if A is causing B then we should measure both A and B in the same animal and see that on a scatter plot. Still, these scatter plots seem to be missing the majority of the time due to bad (and, if it involves animals, technically illegal) science.

          The point is that these laws are obviously only enforced to the standards chosen by researchers in the field. The law only stops them when it is collectively convenient for them to allow it. I don’t know much at all about who lobbied for these laws, but that would be interesting to check as well…

          Also, it isn’t like there are any perfect implementations of anonymization, just as there are no perfect studies. How many of these datasets are kept on hospital networks anyway? When I had to use one, viruses were so common that I just reformatted the flash drive after every single use. I’ve seen medical exams where a requirement to pass is handing in some of your answers on a *virus-free* flash drive.

        • Data sets on hospital networks are not for research purposes. In the US they are governed by HIPAA, and then, if someone wanted to use them for research, also by the relevant rules for research, meaning the IRB.

          The Obama administration actually tried to put through a big Common Rule (IRB) reform, but the current administration blocked it the day before it would have gone into effect this past January.

          People are justifiably worried that their data will have to be destroyed and that they will lose their jobs if they do not follow the approved procedures; it’s really not the loosey-goosey situation you seem to think it is.

        • “If you would like your data made available to the wider research community with only a numeric identifier, please initial here. Please note that it may be possible to infer your identity using your age, weight, and ZIP code, or similar methods….”

          But does anyone even give the patient this option? No.

        • +1 to Daniel’s suggestion.

          I am sensitive to the possibility of being identified if complete data is available — in the past, I have refused to fill out well-meaning questionnaires that have asked for data that would allow someone to identify me (e.g., academic surveys at my university that asked for my gender, department, and rank — there was a time when that would have been enough information to identify me).

        • Martha:

          I agree that the rules protecting health data are not always helpful. But blaming the researchers who follow the law is a bit misguided, in my opinion.

        • A reader,

          I’m not blaming researchers who follow the law — if the law is the problem, then the law needs to be changed.

        • Martha:

          I completely agree, but will also say ethics at scale is not easy!

          But I read Daniel and Anoneuoid’s comments as blaming the researchers in such cases, which I believe to be a misinformed opinion. Maybe I misinterpreted what they meant.

        • I’m just pointing out that it’s trivial to ask patients what they want done with their data, and yet it’s not done. In my opinion that’s an unethical lack of respect for the patients’ wishes and should be changed, at whatever level is necessary.

        • In fact, in some studies patients are offered this option, or a range of options that include possible restrictions on the uses to which their data can be put. It isn’t common, not nearly as common as it should be, but it is allowable and it is used from time to time.

          I have been, personally, on all three sides of these issues. I am, by profession, an investigator. I have also personally been a subject in 6 biomedical research studies. And I spent 21 years serving on two different institutional IRBs.

          Frankly, I resent Andrew Gelman’s characterization of IRBs as security theater. He has not seen, as I have, the study protocols that were clearly dangerous to subjects before the IRB insisted on revisions. He has not seen the consent documents that are materially misleading about risks and benefits and alternatives. He has not seen the consent documents that are either written in such jargon that most professionals have a hard time understanding them, or reduced to such elementary language that they are practically as long as War and Peace and unreadable for that reason. He has not seen the research protocols that are such vague descriptions of the research that nobody could reasonably even assess the risks and benefits (and probably nobody could sensibly interpret the “results” of such a study either – the garden of forking paths is larger than most people imagine). I suspect he has not seen these things because he’s an outstanding researcher who doesn’t do those things, and those in his circle are similarly so. It’s also the case that the kind of research that Andrew does involves no real risks other than those associated with disclosure, if that, so of course it feels irksome to have to go through this process. But there is a strong selection bias in his experience, and he has allowed it to color his judgment on this matter.

          There are researchers out there who are not like him. At least one materially problematic study was reviewed at nearly every monthly IRB meeting I attended over those 21 years. It’s a small fraction of the research, but it’s a lot bigger than the proportion of people trying to smuggle bombs onto airplanes.

          I do agree that the IRB process can be unnecessarily cumbersome and there is much that could be done to improve it. Broader categories of studies could be exempt from review (and new rules are coming that will do that). The rules were mostly designed with medical research in mind, but have been applied, sometimes with insufficient modification or flexibility, in other contexts where lower levels of safety-checking would suffice. Moreover, it cannot be denied that sometimes IRB members make unreasonable demands based on personal pet peeves. But I don’t know of any human judgment process that isn’t subject to problems like that.

          From my perspective as a former subject in 6 biomedical studies, I value the safety check provided by the IRB. And I have an MD degree – which gives me a leg up on the ordinary subject in terms of ability to judge the risks involved and the potential benefits either to me personally or to knowledge gained. But even with my background as an investigator and a formerly practicing physician, some studies are just too hard for a typical patient to understand. Knowing that the protocol has been independently reviewed by a panel of disinterested people with sufficient expertise to appraise it is an important reassurance.

          Also, as a research subject, I am concerned about the privacy of my data. (OK, I’ll admit I’m also something of a fanatic about my data in other contexts and I avoid social media because of it. But I do use Gmail!) I know enough about what goes on in medical research to know that there are entire branches of it that I deem ethically questionable and would not want my data to contribute to. A lay person could have even stronger feelings of distrust and not want his or her data used for purposes other than what he or she originally consented to: plenty of people out there distrust Big Pharma, some think the health care system is largely racist – I could go on with the reasons. While I do think that data sharing should be much more common than it is, and opt-in consent to use of data outside the original study (either conditionally or unconditionally) should be used far more often, I would never want to see research volunteers lose control over their data.

          Finally, from my perspective as an investigator, two obstacles to data sharing arise most often in my work. The first is that I am often using data provided by other sources who, for various reasons, forbid me from disseminating the data or using it for other purposes. I even know of a colleague who collected data on his own institution’s patients as part of a multi-institutional consortium study. Some years later, he noted that the data could be useful for another purpose. But the governing body of the consortium denied him permission to re-use his own data, still sitting on his computer, for this other purpose.

          The second obstacle is simply that it takes a long time to shepherd papers through the publication process. You don’t want to just give out your data when you still have more papers planned from the same data collection. So you need to hold onto your data for a while, till you’re done with your planned articles.

        • Clyde, I think your post was on the money about almost everything – IRBs are important in medical research, opt-in is an important option that should be used widely, and so on –

          until I got down to the last paragraph:

          “The second obstacle is simply that it takes a long time to shepherd papers through the publication process. You don’t want to just give out your data when you still have more papers planned from the same data collection. So you need to hold onto your data for a while, till you’re done with your planned articles.”

          And I think this is clearly the way the incentives work for individual researchers, but it is in conflict with the public good. Data collected for publicly funded research (and most in the US is) isn’t an asset “belonging” to a researcher to be used to better their career. Data is a public good paid for with public money, and it properly belongs to the public (assuming appropriate release is ethical, opt-in, etc.). It should be released ASAP.

        • Clyde: “Knowing that the protocol has been independently reviewed by a panel of disinterested people with sufficient expertise to appraise it is an important reassurance.”

          Could there be a strong selection bias in your experience here?

          My experience of IRBs and of being on an IRB (roughly 1995 to 2005) was somewhat different. When one or more members did not have sufficient expertise, security theater often resulted.

          Also, I am going to agree with Daniel: regardless of the pressures involved in individual research careers, data hoarding for planned publications is, in my view, unethical.

        • My experience of 15 years on an IRB is similar. I saw many proposals where the researchers clearly had not thought through the potential harms to their participants or third parties, and really bad designs that could not justify the level of risk participants were taking. I also saw some board members who would get very worried about things that could have been managed, but sometimes those lone voices, often from the public member, were really important to hear. There are still many scientists who do not understand the lasting impact of experiences like those of Henrietta Lacks and her family (and of course Tuskegee), not only on all other researchers but on actual health, since they make people afraid to get health care at research institutions.

          That said, I think that in the raft of papers people are now asked to sign as part of HIPAA there is often an agreement that an anonymized version of your data can be used for some research purposes.

          @Daniel you might be interested in https://www.openhumans.org/

        • “If medical researchers collectively just started putting patient 1, patient 2, etc instead of the names I doubt anyone would stop them as long as they got consent.”

          Very funny – really, laugh-out-loud funny.

          Anyway, IRBs include scientists, non-scientists, and members of the public.

        • “Anyway, IRBs include scientists, non-scientists, and members of the public.”

          Which of those groups is going to stop them? “Failure to use sufficient anonymization techniques” is actually a more technical concept than “limit animal suffering by measuring both x and y in the same group of animals” (not to mention that the latter would also be better science). Anyone can easily spot-check the literature to see there is pretty much no concern for the latter even though it is illegal. I don’t see why basic anonymization (with consent) should be any different.

          If the researchers all just did what I said and claimed it to be “standard in the field,” who would dislike them for it? How exactly do you imagine this would go down?

          Dale Lehman claims (probably correctly, imo) that most subjects (primarily “non-scientists” and “members of the public”*) actively want their info to be open to anyone. This convinces me even more that it is mostly for show. Maybe it is just a matter of lawyers making money?

          *Interesting how “member of the public” and “scientist” are mutually exclusive categories here. I wonder what the definition of “scientist” actually is. Someone with an NIH grant?

    • “Since it appeared to me that compliance with the protocol was important, I was particularly interested in data about noncompliance, so I asked about data on ‘adherence to antihypertensive medications,’ which the protocol said was collected in the trial. I was told that the original publication did not use that data, so I could not have it (so much for ‘novel’ findings).”

      Wow. What’s the point of re-analyzing data if you’re forbidden from re-analyzing it correctly?

      “The final winners did some nice work, but 2 of the 3 winners created decision tools (one was an app) providing a rating for a prospective patient as to whether or not more aggressive blood pressure treatment was recommended.”

      This is really surprising. I would think a lot more research would be required to legally deploy these tools, so doesn’t that mean that 2 of the 3 winners built something useless? I also agree that building an app is not really a novel contribution. Unfortunately, in my experience lots of government agencies *love* apps, because an app is something they can easily show others as an accomplishment of the money spent (even if it’s a silly accomplishment). Don’t get me wrong – I like that government agencies feel pressure to show their accomplishments, but that pressure comes with its own set of problems.

    • I can understand your frustration, but accusing them of complicity almost guarantees they will see you as hostile and not respond. At least wait for them to completely drop the ball before making such accusations.

        • Sure, there used to be another anon who posted here who eventually started a “Bayesian philosophy” blog. I can’t find it now (maybe they took it down), but I remember a good discussion there about whether/how the timing of the data could be incorporated into Bayes’ theorem.

        • For some reason, I thought you meant replication as in someone else getting new data and testing the same model (with no adjustments). Thinking about it more, that is likely not what you meant. The key is to check the performance of the original model without fiddling with it.

  2. Anoneuoid
    The patients’ wish to share is far from careless – they subject themselves to a clinical trial, first in the hope of getting better treatment, and second to fulfill their wish to help others. Their desire to share data appears to result from the latter. The clinical trialists resist sharing to protect their “rights” to enhance their careers with the data they have laboriously collected. The interests of other researchers may well be to free-ride on these efforts. But the problem is one often discussed here – poor research incentives that promote not sharing your data while not rewarding its collection.

    Pointeroutguy
    I think overfitting is not really a valid criticism of the NEJM Challenge. The focus was on clinically interesting findings, not merely prediction “accuracy” within sample. Aside from that, overfitting is a well-known problem facing all machine learning algorithms, and I think Kaggle does a good job of protecting against it. I think a more valid worry about Kaggle competitions is the focus on marginal gains in predictive accuracy at the expense of much larger questions about data measurement accuracy, refinement of research questions, and other higher-level tasks that these competitions steer away from.
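
    For readers unfamiliar with the mechanism: the protection comes from scoring entries on held-out data the entrants never see, so a model tuned too tightly to the training data gets caught. A toy sketch of that effect (all data synthetic; this is an illustration, not Kaggle’s actual procedure):

    ```python
    # An overfit model looks great on the data it was tuned to and worse
    # on unseen data; scoring on a held-out set exposes the gap.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 60)
    y = 2 * x + rng.normal(0, 0.5, 60)   # true relationship is linear

    x_train, y_train = x[:40], y[:40]    # what entrants get to fit on
    x_test,  y_test  = x[40:], y[40:]    # held-out "private" test set

    for degree in (1, 9):                # sensible model vs. overfit model
        coefs = np.polyfit(x_train, y_train, degree)
        mse_train = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
        mse_test  = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {mse_train:.2f}, "
              f"held-out MSE {mse_test:.2f}")
    ```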

    • “a more valid worry about Kaggle competitions is the focus on marginal gains in predictive accuracy at the expense of much larger questions about data measurement accuracy, refinement of research questions, and other higher-level tasks that these competitions steer away from.”
      +1. You said it much better than I ever could have.

    • +1 to the part about not sharing data really being rent-seeking on the part of the researchers, masquerading as concern for the patients. It’s trivial to ask patients to fill out a checklist of how they want their data shared. But various people who benefit from not sharing data also make sure there are a bunch of rules they can hide behind, so they can pretend to be deeply concerned with ethical ideals when in fact they are just seeking rent.

    • “The patients’ wish to share is far from careless – they subject themselves to a clinical trial, first in the hope of getting better treatment, and second to fulfill their wish to help others.”

      Not always. I mentioned in another comment above that I have been a subject in 6 research studies. Only in one of them was any treatment being offered (and I ended up randomized to the placebo group in that one). The other studies were basically studies of normal human biology and offered no direct benefit to participants.

      • So, one presumes you did it either for some kind of payment or specifically for the general benefit of humanity – in other words, all the more reason why many people in that situation would want the data *shared widely* to maximize the benefit to humanity.

        I still think “far from careless” applies in most cases.

        • In my instance, it was for the benefit of humanity; I was never paid for any of these studies. And, yes, I would want the data shared widely, though, as I noted earlier, there are some purposes for which I would not want my data used.
