This one’s for the Lancet editorial board: A trolley problem for our times (involving a plate of delicious cookies and a steaming pile of poop)

A trolley problem for our times

OK, I couldn’t quite frame this one as a trolley problem—maybe those of you who are more philosophically adept than I am can do this—so I set it up as a cookie problem instead.

Here it is:

Suppose someone were to knock on your office door and use some mix of persuasion, fomo, and Harvard credentials to convince you to let them in so they can deliver some plates of handmade cookies. You tell everyone about this treat and you share it with your friends in the news media. Then a few days later some people from an adjoining office smell something funny . . . and it seems that what those charming visitors left on your desk was not delicious cookies but actually a steaming pile of poop!

What would you do?

If you’re the management of Lancet, the celebrated medical journal, then you might well just tell the world that nothing was wrong, sure, there was a minor smell, maybe a nugget or two of poop was in the mix, but really what was on your desk were scrumptious cookies that we should all continue to eat.

Background

People have been sending me lots of emails regarding this recent Lancet study of chloro-hydro-oxy-spirograph or whatever it’s called.

For example, see the blog post by Peter Ellis (“A peer-reviewed study that probably used fabricated data”):

The data in that study, and in at least one preprint on a second treatment, were provided by an Illinois firm called Surgisphere. Allegedly the data represents the treatment and health outcomes of 96,032 patients from 671 hospitals in six continents. However, there is simply no plausible way I [Ellis] can think of that the data are real. . . .

If Surgisphere can name the 671 participating hospitals or otherwise prove that the data is real I [Ellis] will retract that statement, delete this post or write whatever humbling apology they desire. . . .

Surgisphere has five employees with LinkedIn accounts. Other than the CEO and co-author of the Lancet paper, these are a VP of Business Development and Strategy, a Vice President of Sales and Marketing, and two Science Editors (actually, one Science Editor and one Scoence Editor, which does not inspire confidence in their attention to detail while editing). LinkedIn also records one employee of QuartzClinical – a director of sales marketing. . . .

Surgisphere should be a billion dollar company if it’s done this 670 times, but it clearly is not. In fact, Dun and Bradstreet estimate its revenue at $45,245. You couldn’t even do the discovery stage of an EHR integration project at a single hospital for that, never mind deploy anything. . . .

And the news article by Catherine Offord (“Disputed Hydroxychloroquine Study Brings Scrutiny to Surgisphere”):

Surgisphere is currently headquartered in Palatine, Illinois, and run by Desai, who trained in vascular surgery, a subject on which he has published many scientific articles and books. Until February 10 of this year, Desai was employed by Northwest Community Hospital (NCH) in suburban Arlington Heights. He tells The Scientist that he resigned for family reasons.

Court records in Cook County, Illinois, show that Desai is named in three medical malpractice lawsuits filed in the second half of 2019. . . .

He also sent a comment purporting to be from Alan Loren, the executive vice president and chief medical officer of NCH: “Dr. Desai was employed at NCH and resigned in February 2020. We did not have any problems with him while he was here.”

Asked by The Scientist if he made this statement, Loren says, “What I can tell you is that he was employed here and he did resign. I can’t speak to whether or not there were any problems.” . . .

Reviews of the company’s products on Amazon are polarized, and a handful of positive reviews that appeared to impersonate actual physicians were removed when those doctors complained to Amazon. Kimberli S. Cox, a breast surgical oncologist based in Arizona, tells The Scientist that she was one of several practicing physicians who in 2008 discovered five-star reviews next to names that were identical or very similar to their own, that they had not written. She and her colleagues successfully persuaded Amazon to take the reviews down. . . .

This is all in addition to the statistical concerns from James Watson that we discussed last week. But it makes sense that the paper would have statistics problems, given that there’s no evidence that anyone on this team has any statistical expertise—and these particular adjustments would present a challenge even to an expert.

The most damning criticism

But the most general and comprehensive takedown (I want to call it “trenchant,” but I’ve never used the terms “trenchant,” “tendentious,” or “disingenuous” in my life, so I don’t want to start now) comes from this guy who edits a medical journal in England. He took a look at this Surgisphere controversy and drew some important general conclusions:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”.

And he doesn’t just target the individual doctors involved in the studies. He also slams the medical journals:

Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. . . . unhealthy competition . . . Our love of “significance” pollutes the literature with many a statistical fairy-tale.

He continues:

Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative. . . . Instead of changing incentives, perhaps one could remove incentives altogether. . . . Or reward better pre and post publication peer review. . . .

Lots of ideas. The author concludes:

Those who have the power to act seem to think somebody else should act first. . . . The good news is that science is beginning to take some of its worst failings very seriously. The bad news is that nobody is ready to take the first step to clean up the system.

It’s too bad it took this latest Surgisphere scandal to get all these ideas out there.

If only . . .

If prominent medical journal editors had been thinking about all this five years ago, just think of how much good could’ve been done. Between, say, 11 Apr 2015 and today, there would’ve been time to open up the main medical journals and really improve scientific communication, in advance of this pandemic we’ve been dealing with.

OK, not all the journals, not all at once. But, if back in early 2015, the editor of at least one major medical journal had been on board, that would’ve been amazing.

I hope . . .

I hope the author of the above-linked article can get in touch with the editor of the Lancet as soon as possible so that his ideas can be implemented right away. It’s not too late! Sure, some of these ideas are radical, but I’ve heard that the government in England is looking for “weirdos and misfits with odd skills,” so no problem.

I hope that radical journal editor can persuade the Lancet editor to do something already!

It would be so easy for the Lancet editorial board to just publicly ask the authors of the Surgisphere paper to respond directly to their critics on pubpeer and other forums.

The Lancet editorial board could also require the authors release their data and code. If necessary, de-identified data. Or data summaries. Or a paper trail of the data. And the code could be released in any case.

If the authors refuse to respond seriously to criticism and supply code and some data, the editors could follow previous Lancet practice, as in the Wakefield vaccine episode, and just retract the paper 12 years from now. Actually, they don’t have to wait 12 years! Or, if they don’t want to retract, they could keep the paper up on their website and watermark it with some note like this:

Serious questions have been raised about the data and analysis in this paper. The authors have refused to share data and code or to respond to the post-publication review.

Instead, we get this (from the above-linked news article by Offord):

On May 29, Jessica Kleyn, a press officer at The Lancet journals, informed The Scientist in an emailed statement that the authors had corrected the Australian data in their paper and redone one of the tables in the supplementary information with raw data rather than the adjusted data Desai said had been shown before.

“The results and conclusions reported in the study remain unchanged,” Kleyn adds in the email. “The original full-text article will be updated on our website. The Lancet encourages scientific debate and will publish responses to the study, along with a response from the authors, in the journal in due course.”

The sad thing is, the authors of that paper don’t even work for Lancet! The Surgisphere team published a paper with questionable statistics and questionable data in their journal. You’d think that, instead of coming to the defense of the authors, the editors would be annoyed! See the trolley, er, cookie problem at the top of this post.

The real scandal . . .

To recap:

The real scandal of that Lancet/Surgisphere paper is not that the authors may have done a poor statistical analysis, thus making their causal claims empty at best and misleading at worst. Bad statistical analyses get done all the time, and they get published in top journals all the time. Statistics is hard, there aren’t enough competent statisticians to go around (and certainly not if you restrict your author list to M.D.’s), and otherwise-competent statisticians make mistakes too.

Nor is the real scandal that the data were misreported and may (or may not) be faked in some way. Cheats, fakers, and plain old sloppy researchers abound, and the fact that they’ve reached whatever status they have at top universities suggests that they’ll be around for awhile. Errors, sloppiness, and out-and-out fraud will continue to get in print. Recall Clarke’s law.

No, the real scandal is that the respected medical journal Lancet aids and abets in poor research practices by serving as a kind of shield for the authors of a questionable paper, by acting as if secret pre-publication review has more validity than open post-publication review.

This was a paper whose conclusions absolutely relied on complicated statistical adjustment, which is exactly the sort of problem where open data, open code, and open post-publication review are most needed.
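To see how much can ride on the adjustment, here’s a minimal simulation, with every number invented. This is a generic illustration of confounding and inverse-probability weighting, not a reconstruction of the Surgisphere analysis: in the simulated data the treatment has exactly zero effect, yet the naive comparison shows a large apparent harm, and only the adjusted estimate recovers the truth.

```python
import random

random.seed(1)

def simulate(n=40000):
    """Simulate hypothetical patients (all numbers invented).
    The true treatment effect on death is exactly zero."""
    rows = []
    for _ in range(n):
        severity = random.random()            # confounder
        p_treat = 0.2 + 0.6 * severity        # sicker patients get treated more often
        treated = random.random() < p_treat
        p_death = 0.05 + 0.4 * severity       # death depends on severity only
        died = random.random() < p_death
        rows.append((severity, treated, died))
    return rows

def naive_diff(rows):
    """Unadjusted death-rate difference, treated minus untreated."""
    t = [d for _, tr, d in rows if tr]
    c = [d for _, tr, d in rows if not tr]
    return sum(t) / len(t) - sum(c) / len(c)

def ipw_diff(rows):
    """Inverse-probability-weighted difference, using the known propensity."""
    tw = td = cw = cd = 0.0
    for s, tr, d in rows:
        p = 0.2 + 0.6 * s
        if tr:
            tw += 1 / p
            td += d / p
        else:
            cw += 1 / (1 - p)
            cd += d / (1 - p)
    return td / tw - cd / cw

rows = simulate()
print("naive:", round(naive_diff(rows), 3))    # biased upward by confounding
print("adjusted:", round(ipw_diff(rows), 3))   # close to the true effect, zero
```

In this toy example we get to use the true propensity score; with real data it must be estimated, and every choice in that estimation moves the answer. That is why conclusions that hinge on the adjustment are uncheckable without the code.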

Check out this recent online exchange:

The author of the controversial Surgisphere paper applauds Lancet and its editor for “incisive reviews and bold decisions in publishing.”

But what did the “incisive reviews” say? Under what grounds were those questionable results published? We don’t know, and we’ll probably never know, because peer reviews are secret!

To put it another way, if it’s true that they “intensively review all COVID-19 research papers,” but we see that what gets published has serious problems . . . then this tells us something about the limitations of the intensive review process. And the behavior of the journal after the criticisms have been raised tells us something about the limitations of the system. Again, the author of this article should get in contact with the Lancet editorial board, toot sweet.

P.S. Zad shares the above photo, captioned “When someone finally shares their raw data and code.”

P.P.S. See here for a reminder of why I feel the need to waste time on this, rather than just spending my work time “doing science.”

58 thoughts on “This one’s for the Lancet editorial board: A trolley problem for our times (involving a plate of delicious cookies and a steaming pile of poop)”

  1. The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness

    Not a single thing mentioned is the actual cause of the problems; every single one is a distraction from, or a result of, the problems. The problems are:

    1) It is very uncommon for another group to attempt to directly replicate a study

    2) The theories are very imprecise and weak, of the form “x should be positively correlated with y” (but then even if not we can still fiddle with the theory afterwards to make it fit).

    Fix those problems and you won’t even be talking about tiny effects and conflicts of interest.

  2. Whole Foods is in business to make money, not to sell groceries. To the extent that journals are run like businesses, they are not there to make good science available to readers, any more than Whole Foods is there to make groceries available to customers. How else can you explain the Richard Horton tweet?

    • John:

      I would guess that the personal goal of the Lancet editor is not to make money for Lancet; it’s to advance science. Being a journal editor is a means to that end. Sure, there will be some institutional loyalty and not wanting to admit error, and we’re seeing that here, but I think it misses the point to focus on the commercial incentives.

      • Please note that my comment was conditional; “To the extent …”

        I think all of us have mixed motives. We want to do science, but we also want to get ahead, help people close to us, etc., and institutional motives also play into things. My wife was a virologist, and knew good people working on vaccines for pharmaceutical companies. Their main intention was to develop vaccines to help people, but they also wanted to advance in their institutions, and they worked for institutions for which helping people was a secondary consideration. I talked recently to a statistics prof from Berkeley who reviews for medical journals, and had one editor explain to him that the journals have to compete for high impact papers. Some journals, such as the Elsevier journals, are by their own description “a global information analytics business,” and making money surely seems to be important to them.

        • I am missing your point. Sure, incentives are mixed and it is naive to think that money is not at stake. But, in the end, it is individuals that must be accountable for their actions. If they would be honest enough to say “I don’t like this paper or this research but it will sell subscriptions,” then I’d at least respect their honesty, but not their actions. But when they are not honest – instead they seem to say “our motives are pure” – and then they act in ways that disparage the scientific process in order to promote those messy profit incentives – I don’t respect their actions or their honesty.

      • I was an associate editor of a (bio)medical journal. A weird sort of thing happened to me. While trying my hardest to do a good job editing, I felt myself becoming a “fan” of the journal – like a college student becomes a fan of her school’s football team.

        I wanted our impact factor to go up, I wanted papers to get publicity. These sorts of things are dangerous – and seemed so at the time.

        In the end, I resigned. I didn’t feel I had lost my way ethically – but I became so worried about the challenges in obtaining high-quality peer review that I felt it was only a matter of time before I got embroiled in something awful. By the way, in addition to the hard statistics and the rare/busy/frequently-unhelpful-and-unfriendly statistician problem, a reality is that there are waaaaay too many papers submitted for the number of good reviewers. I believe most editors will tell you that this ratio (submissions : reviewers) is the number 1 problem for scientific journals today.

        I don’t really know the moral of this story – and I am 100% not defending the Lancet or Horton – but being an editor can be a pretty tough gig.

        • One way to cut down on the submissions would be to pick some threshold value for publication. Traditionally I guess it has been p=0.05, but would something like n=100 generate higher quality results and reduce the number of submissions? I wonder by how much? n<100 could have a special journal, something like the Journal of Meaningless Analysis.

  3. I for one will not see the noble profession of Scoence trashed, particularly in a section titled “Backgrond” (or was that ironic as well?) Cancel my subscription.

    • I remember the heady days I spent in groduate school, absorbing every detail in my reosearch methods and statostics courses, filled with optimism that these tools would allow me—indeed, all of humonity—to lay bare the inner workings of Natore and build a better world for our choldren.

      Sadly, the growth of the scoence-indostrial complex, rife with incentives to publish flashy but ephemeral resolts in order to chase the dwindling funds available from govornment agencies and corporotions, has corrupted the once noble field of Scoence. The rising prominence of sociol media and the TOD talk circuit have only quickened its decline.

      Luckily, we have venues like this blog to explore—and hopefully initiate!—means of reforming incentives and improving the practice of Scoence so that we might one day enable it to achieve its true porpose.

      • “in order to chase the dwindling funds available from government agencies and corporations, has corrupted the once noble field of Science.”

        I can’t help but wonder if it isn’t the other way around: the funds aren’t dwindling so much as the number of people and institutions chasing them is increasing, as one-time “teachers colleges” want MS and PhD Programs so they can get those overhead $$$$. As the competition becomes stiffer and overhead rates higher, more schools have been providing professional assistance to faculty to increase the chances on every grant, and pumping up the publicity on results to help leverage both grants and donations. These professional grant-getters and publicists have far more interest in the dollars than in the science and they create an atmosphere where the trappings of science are more important than the reality. Scientists are playing into it too, pushing hard for policy relevance to get more exposure and increase the power of their own names.

        • For the record, my comment was written ironically (note the strategic typos), but you bring up an interesting point.

          Of course, “dwindling” funds in practice means that there are fewer dollars for the number of people trying to use them, and that ratio can be reduced either by decreasing the numerator or, as you suggest, increasing the denominator.

          That said, my experience having worked in several different types of academic departments—some with a large amount of grant support, some without, though all with an emphasis on research—is that the grant staff are actually the ones who care more about the quality and integrity of the submission and it is the faculty who play the necessary games to become “professional grant-getters”.

          To me, this makes sense: You become a grant person if you genuinely believe that the grant mechanism supports quality research and education. People who become faculty often see the grant system as an irritating bureaucratic exercise that gets in the way of their goal of doing good research and teaching, so they are willing to take shortcuts to get to the good stuff.

          This is all to say that the situation “on the ground” is complex, and I’m not sure that specific types of people are to blame for what is ultimately a problem of incentives that we have all bought into, however unwittingly.

  4. Zad’s cat is great… But surely there was more uptake to the message for cat submissions several months ago? Could have sworn my feline is somewhere in the backlog.

    • If I recall correctly, after that call for cat photos, there were quite a few different cat photos on this blog, but as time went on, it seems the frequency of people sending in photos of their cats went down. For me, though, it’s become a sort of habit… I usually send in stuff as backup just in case Andrew has nothing to post! Guess what we need is a more reliable way to get people to send in enough cat photos to match Andrew’s writing speed.

  5. Andrew, if you check your email, I corresponded with you back in 2018 regarding similar “evidence” in the legal system. This is simply more of the same. If you are truly interested in being a part of the solution, I would like to chat. The biggest problem with this publication by Lancet is that it sets a new lower bar for what is considered acceptable process for “scientific findings”. Left unchallenged, this Lancet article would set the precedent for “author attestation of privatized databases with opaque provenance”. While the chances are greater than not that this contains fraudulent data, please understand that these “hidden registries” and “private exchanges” do exist and are carefully protected by MOUs and legal NDAs. I have a software application that can easily handle 96,000 patient records, whether from APIs of Common Data Sets or individual messages in what is called the HL7 format. It truly is time for this groundbreaking work and I hope you will consider discussing my proposal to the Lancet with me. If you can Google Hangout, I am happy to demo capabilities as well. I eat, sleep, and breathe medical data. Also, check out the publication by Dr. Desai from 2013 under the Journal of Surgical Radiology regarding fraudulent research submissions. Thanks either way. You have my email.

    • Unless we consider that right out fraud is involved, I suspect nothing will really be achieved to deal with this.
      The-scientist.com (if you hadn’t heard of it, like me, see Wikipedia for its many awards) reports that Surgisphere founder Sapan Desai faces 3 lawsuits. These were initiated at the end of last year and concern medical malpractice in Cook County, Illinois. I’ve checked several online profile search engines – and two indicated ‘multiple criminal records’ for the only Illinois-listed Sapan Desai. It seems worth finding out more, but I’m not well placed for that, being in the UK.
      Not recommending payment to profile search companies, but rather checking directly with County records.

      • Andrew, sorry – I was rushed putting my comment in; I see now you covered The Scientist article already. NB: I looked at the profile engines up to the payment stage and the criminal-record indicator was shown without detail.

      • @ Eric

        Yes, there is a very real possibility that “right out fraud” is involved as well. As I responded to Dale Lehman, Dr. Desai has previously published (posted?) an article which addresses fraud in research, in which his abstract states:

        “While there are multiple checks and balances in medical research to prevent fraud, the final enforcement lies with journal editors and publishers. There is an ethical and legal obligation to make careful and critical examinations of the medical research published in their journals. Failure to follow the highest standards in medical publishing can lead to legal liability and destroy a journal’s integrity.”

        He then proceeds to violate his own suggested guidelines. Journal editors really have no comprehensive tool with which to run the “raw” patient data, and there is absolutely no way an editor can take the raw HL7 or digitized records (or even Common Data Sets) and efficiently validate them for processing. This data validation needs to occur prior to any statistical evaluation, to be sure you are working with accurate data. The article referenced is:

        Combating Fraud in Medical Research: Research Validation Standards Utilized by the Journal of Surgical Radiology, by Bhavin Patel, Anahita Dua, Tom Koenigsberger, and Sapan S. Desai

        Please do not underestimate the importance that this debacle has going forward. Please understand the importance of this article’s “impact” alone (meaning the Lancet article). If left to stand, it will change the basic “standard operating procedures” for publishing medical data. And please consider that no one can simply “eyeball” 96,000 patient records, more so if unstructured and unorganized, which in this case is where you would want to start. I would not trust anything from these folks that comes “pre-aggregated”. That ship has sailed.
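        As an illustration of the kind of pre-review validation being described, here is a minimal sketch; the record format and field names are invented for the example, and parsing real HL7 messages is far more involved than this:

```python
from collections import Counter

# Hypothetical field names for de-identified records; a real
# Common Data Set or HL7 feed would look quite different.
REQUIRED = ("patient_id", "hospital", "admit_date", "outcome")

def validate(records):
    """Return a list of human-readable problems found in a list of
    dict-shaped patient records, before any statistics are run."""
    problems = []
    for i, rec in enumerate(records):
        missing = [f for f in REQUIRED if f not in rec or rec[f] in ("", None)]
        if missing:
            problems.append(f"record {i}: missing {missing}")
    dupes = [pid for pid, n in
             Counter(r.get("patient_id") for r in records).items() if n > 1]
    if dupes:
        problems.append(f"duplicate patient ids: {sorted(dupes)}")
    # a single hospital contributing the bulk of all records is worth flagging
    for h, n in Counter(r.get("hospital") for r in records).items():
        if n > 0.5 * len(records):
            problems.append(f"hospital {h!r} supplies {n}/{len(records)} records")
    return problems

suspicious = [
    {"patient_id": 1, "hospital": "A", "admit_date": "2020-03-01", "outcome": "alive"},
    {"patient_id": 1, "hospital": "A", "admit_date": "2020-03-02", "outcome": "dead"},
    {"patient_id": 2, "hospital": "A"},
]
for p in validate(suspicious):
    print(p)
```

        Checks like these are cheap to run at scale, which is the point: 96,000 records cannot be eyeballed, but they can be screened mechanically before anyone computes a hazard ratio.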

        • ‘Raw data’ was the term used in one or two critiques of the article to describe the absent hospital level data – so I used that. Not ideal but the situation is pretty unprecedented to find the right term for.

          Surgisphere.com states:
          ‘We found that in the original publication, one of our data tables of propensity score matched and weighted data was being misinterpreted as raw data, which would, if raw, make the data appear overly homogeneous. While the data were accurate, we are providing The Lancet an updated table with unadjusted data to help resolve the confusion. There was no error in analysis, and none of the results of the paper are affected.’
          So if data accumulated from around the world is supposed to count as ‘raw’, it seems worth giving up using the term ‘raw data’ for standard hospital-level results.

          The raw-raw data is supposed to have been ‘anonymised’, and it’s difficult to see a problem with knowing which hospitals, let alone the country-level data. But if this data even exists at hospital level, Surgisphere has found a way to stitch things up so we can’t confirm the electronic results independently.

          And I don’t see how the attitude of the Lancet – nor the NEJM https://www.nejm.org/doi/full/10.1056/NEJMe2020822?query=featured_home&fbclid=IwAR1S6PzI45f5_1ZKBzr8K5mgimWWliMboGq6-irGH9n_s2ewu_vSspQ9mGw – is necessarily going to help in dealing with this properly. The Lancet may yet republish this as ‘corrected’ yet still without hospital-level data (I can’t see that happening) nor any attempt at hospital corroboration. Though 200+ signatures have achieved a significant amount so far, I’m not confident this will get properly resolved.

        • .. (I can’t see that happening) ..
          – I mean I doubt that the paper will subsequently get to include hospital-level data, given the contracts Surgisphere has arranged.

    • I do not appreciate this comment. You can easily communicate with Andrew privately, but there is no value in posting this for the rest of us. I did find your reference to the Journal of Surgical Radiology interesting – I did not know it existed, and it is a product of Surgisphere, with Dr. Desai as managing editor. However, I don’t see the publication you are mentioning in 2013 – there is only one issue in 2013 (the last issue of the Journal) and nothing authored by Dr. Desai appears. I’m getting tired of anonymous (Harold?) “contributions” like this. Of course, I’m old enough to remember that “on the Internet, nobody knows you are a dog,” so I am not naive enough to believe that when people provide their “real” names it necessarily means they are who they claim to be.

      • @ Dale Lehman

        No, I am not anonymous. Yes, your reply is offensive. My credentials are a BA/MD from UMKC, an MPH from Emory, and an MBA from Vanderbilt. No, I am not a statistician. I have developed software that has actually interfaced with live HL7, and I understand hospital IT architecture. I have zero conflicts with this article, the article’s publisher, or the publisher’s owner, nor am I employed by any of the data submitters or “originators”. If you found my reference to the prior fraud article enlightening, then good. That was part of my purpose for posting here. There is a statement previously about only MDs being able to publish. Here is an MD reaching out and asking for assistance from a statistician who laments the lack of true statisticians in the process. The title of the article you cannot seem to locate is:
        Combating Fraud in Medical Research: Research Validation Standards Utilized by the Journal of Surgical Radiology, by Bhavin Patel, Anahita Dua, Tom Koenigsberger, and Sapan S. Desai

        PS: Dale, are all of Dr. Gelman’s blog followers this (insert whatever colorful adjective you wish)? Signed Not Anonymous

        • I apologize for the tone – and please don’t attribute it to any other commentators on this site. If you had said what you say here to begin with, it would not have provoked me the way it did. Just using your first name is essentially anonymous. The article you cite is indeed interesting – but I only found it by going to Google scholar – it was supposedly published in a special issue of the Journal of Surgical Radiology in 2013, but I can find no record that such a special issue exists. The Journal appears on the Surgisphere website and archives the issues of the Journal, with only one issue in 2013 – not the one that this article supposedly appeared in.

          I hope you can appreciate that a number of people have devoted time to trying to understand much of the recent work that has come out of this same group of authors, including Dr. Sapan Desai. I take your message to mean that the concerns that have been expressed about whether the data could be real or not are misplaced – you seem to indicate that you are capable of obtaining data such as the authors have now used in numerous papers. I have been withholding my own judgement concerning the authenticity of the Surgisphere data – you may have something useful to tell us regarding that. However, the continued silence from Surgisphere and the other authors, along with all of the suspicions raised by the people trying to track down information about this database, have made many of us even more suspicious. This was the origin of my initial comment in response to your comment. Again, I apologize for the tone – but please try to understand why I reacted the way that I did.

          For anyone continuing to follow the Surgisphere saga, I found the article Harold Duke mentions worth looking at – but I think you will have to find it through Google Scholar. I’ve also discovered that the original Lancet paper we have been discussing was funded by the “William Harvey Distinguished Chair in Advanced Cardiovascular Medicine at Brigham and Women’s Hospital.” That particular Chair is held by Dr. Mandeep Mehra, one of the authors on all of these papers. Finding that this prestigious hospital has provided the funding for this work, coauthored by the holder of this Chair, is just another of the continuing strange features of this research. This is why I react strongly when anonymous postings are made regarding this research, and why citations that don’t appear to be valid raise red flags for me.

          I only wish the authors and the Lancet editors would directly respond to all of the questions being raised. It is possible that the database is, in fact, real and valuable and that this research is providing valuable contributions to our pressing needs regarding COVID treatments. But a lot of smart people (present commentator excluded) have been spending time – perhaps unnecessarily – trying to unravel an increasingly tangled bunch of knots. If the authors believe what they have written about fraud and research, then they should be more forthright in addressing these concerns.

        • > The article you cite is indeed interesting – but I only found it by going to Google scholar – it was supposedly published in a special issue of the Journal of Surgical Radiology in 2013, but I can find no record that such a special issue exists.

          I don’t know if it was supposed to appear in the Journal of Surgical Radiology, but it was published in a journal called “Publications” (open access): https://www.mdpi.com/2304-6775/1/3/140/htm

        • MDPI journals do not have a good reputation (https://en.wikipedia.org/wiki/MDPI), and I can’t imagine wanting to read, let alone submit a paper to, a journal called “Publications”.

          As it happens, a few days ago I was invited to review an article for one of the MDPI journals. The abstract was obvious junk, so I declined the invitation on the grounds of “not enough time”, and commented that it was not worth anyone’s time to pay any further attention to the paper.

        • Dale:

          You write: “Finding that this prestigious hospital has provided the funding for this work, coauthored by the holder of this Chair, is just another of the continuing strange features of this research.”

          But maybe this is not strange at all.

          This article had many evident problems. Beyond the data issues, which may be clear even to an outsider, there are the concerns about causal inference: causal adjustment is tricky in the best of circumstances, and none of the authors seemed to have expertise in the topic. Conditional on the paper being accepted by the journal, it must have had something going for it. The Harvard affiliation could be that “something.” Perhaps the editorial board figured that a Distinguished Chair at Harvard would have to know what he was doing.

        • @ Dale

          I appreciate your statements. To be clear, YES, I think there is fraud involved. Please see my comment responding to Eric posted above.

          There is a very real opportunity for Dr. Gelman and / or other interested persons to help evaluate these claims by Surgisphere, as well as usher in a new standard for data validation PRIOR TO handing such information off to the “peer reviewers”.

          Yes, the authors’ responses are cryptic and therefore unsettling. And this behavior, though new to this audience, has been occurring for years in other forums as well. You cannot appreciate how true the statement of “Harvard researchers with bs (I meant dubious) data convincing decision makers that their cookies taste good” actually is. It boggles the mind.

          To recap, yes, the peer review process has merit. But there needs to be another layer to the peer review process, because one cannot expect a peer reviewer to be able to eyeball this data. Nor would I expect a statistician to fully appreciate how to navigate around all of the garbage (or as the French would say, GarBaaage) created with electronic health records. The wheat can be separated from the chaff – but the review process HAS to start with the raw data, unobfuscated.

        • ‘“William Harvey Distinguished Chair in Advanced Cardiovascular Medicine at Brigham and Women’s Hospital.” That particular Chair is held by Dr. Mandeep Mehra, one of the authors on all of these papers. ‘

          But is this funding statement not a conveniently roundabout way of acknowledging that the funding came personally from Dr Mehra? Otherwise why would it not say ‘Brigham and Women’s Hospital ..’? To do that would then involve non-authors in the funding process – and the use of non-open-source AI software and no hospital-level data might not have been so readily accepted. And while I’m no hospital accounts expert, I don’t see how a position can itself be a source of funding either.

        • I believe others can speak more definitively than me about endowed chairs. But, to the extent I am aware, most endowed chairs come with some discretionary research funds. If that is the source of the funding for this work, I don’t think it is correctly characterized as funded “personally by Dr Mehra.” It would be the use of endowed hospital funds, at the discretion of Dr. Mehra – and this is usually subject to some constraints.

          By the way, I’ve inquired directly to Dr. Mehra and another of the authors about the Ivermectin paper. Neither have responded. But, then again, I’m not at Harvard (or Columbia, for that matter).

        • Okay, sounds credible – so then likely with some constraints, but still no need for non-authors to sign off on the funding. And a co-author appears to have solely decided on the funding.

          Again, I don’t know, but from my experience of looking at papers that’s highly unusual. I’d assume agreement for funding is a major part of the checks and balances outside of peer review.

        • I’m certainly not suggesting this is routine or innocent. I found it strange, and a bit disturbing, that the Chair’s funds were used to fund a study that they authored.

  6. Having the data as evidence is important. I am reminded of Miyakawa (2020, “No raw data, no science: another possible source of the reproducibility crisis,” Molecular Brain, 13, doi: 10.1186/s13041-020-0552-2), who, as an editor, requested raw data (or other evidence) from the authors of 41 submissions and received satisfactory information from only 1. I displayed this as an icon array (41 stick figures with 40 of them X-ed out), since I thought this showed the problem if he had not done this: a reader facing 41 papers with no way of knowing which one would actually provide its data. My plot should be published soon, but a pre-pub is at https://www.researchgate.net/publication/341381098_Improving_Trust_in_Research_Supporting_Claims_with_Evidence, and the figure is on p. 2.
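     The icon array described above is easy to mock up in text form – this is a hypothetical sketch (the published figure uses stick figures rather than characters), just to make the 1-in-41 ratio concrete:

     ```python
     def icon_array(total, satisfactory, ok="O", bad="X", per_row=10):
         """Render a text icon array: one 'ok' icon per satisfactory paper,
         one 'bad' icon for each of the rest, wrapped at per_row per line."""
         icons = [ok] * satisfactory + [bad] * (total - satisfactory)
         rows = [" ".join(icons[i:i + per_row]) for i in range(0, len(icons), per_row)]
         return "\n".join(rows)

     # 41 submissions, only 1 with satisfactory raw data:
     print(icon_array(41, 1))
     ```

     The single “O” against forty “X”s makes the point at a glance, which is exactly what an icon array is for.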

  7. Enrolment on the HCQ arm in the Solidarity trial has resumed.

    https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19—03-june-2020

    “As you know, last week the Executive Group of the Solidarity Trial decided to implement a temporary pause of the hydroxychloroquine arm of the trial, because of concerns raised about the safety of the drug.

    “This decision was taken as a precaution while the safety data were reviewed.

    “The Data Safety and Monitoring Committee of the Solidarity Trial has been reviewing the data.

    “On the basis of the available mortality data, the members of the committee recommended that there are no reasons to modify the trial protocol.

    “The Executive Group received this recommendation and endorsed the continuation of all arms of the Solidarity Trial, including hydroxychloroquine.

    “The Executive Group will communicate with the principal investigators in the trial about resuming the hydroxychloroquine arm.

    “The Data Safety and Monitoring Committee will continue to closely monitor the safety of all therapeutics being tested in the Solidarity Trial.

    “So far, more than 3500 patients have been recruited in 35 countries.

    “WHO is committed to accelerating the development of effective therapeutics, vaccines and diagnostics as part of our commitment to serving the world with science, solutions and solidarity.”

  8. I’m expecting the worst – Lancet will be as accepting of Surgisphere as they were the first time. Lancet is supposed to have an agreement that its articles involve raw data, but it doesn’t look like anything will be different on that score even now.

    Just as Surgisphere need only persuade their own appointed auditors that the report is legit, they need only persuade the Lancet team likewise – behind closed doors. Suppose the team even wants to corroborate with a hospital – that hospital can’t discuss the details (even though the raw data should already be anonymised). Then WHO gets back to blocking HCQ research, the bans continue, “it’s all been resolved,” and we have a new practice of fabricatable results relying on proprietary AI software and hidden data – even while other research and reports indicate dramatic benefit when the drug is used with proper heart monitoring, at low dose, with zinc, and particularly at the pre-hospital stage when the disease is still dominantly viral.

    • Clarification (with my capitalisations):

      The Lancet –
      ‘Although an independent audit of the provenance and validity of the data HAS BEEN COMMISSIONED BY THE AUTHORS not affiliated with Surgisphere and is ongoing, with results expected very shortly.’

      NEJM –
      ‘We have asked the AUTHORS TO PROVIDE EVIDENCE that the data are reliable.’

      Surgisphere.com –
      ‘This process will follow strict boundaries as it relates to our data use agreements, among other considerations. We ARE PURSUING SUCH AN INDEPENDENT AUDIT with all due haste while ensuring compliance with legal and regulatory concerns.’

      So, no independent checking beyond hand-picked auditor(s), along with Surgisphere’s ongoing commitment to keeping the data hidden through the non-disclosure contracts. Also notice that while Surgisphere refers to their own auditor, The Lancet refers to only the other authors finding the auditor. So will The Lancet just ignore what Surgisphere’s auditor says? Or there’s only one auditor and Surgisphere did also approve it. Either way it is not looking great – and I suspect the reality is the latter.

  9. The doctor’s business is listed as consulting. We need to discover who the nefarious party was that funded this alleged research (sic).
