Harvard-laundering (the next stage of the Lancet scandal)

We’ve been talking a lot recently about how the Lancet brand has been used to launder questionable research.

Things are changing, though! People have sent me links showing that Lancet and the New England Journal of Medicine have retracted the controversial Surgisphere papers, or issued expressions of concern, or whatever.

It’s good to see a scientific journal admit mistakes. It took Lancet 12 years to retract the Wakefield vaccine-denial paper and only 3 weeks to retract the latest paper. At this rate they’ll soon be retracting papers in 15 seconds or less!

One thing that’s too bad about these extreme cases is that they can be used to get run-of-the-mill bad-but-not-fraudulent science off the hook. I don’t think that in a million years Lancet would consider retracting or expressing concern about that gun control paper (see criticisms here and here)—no fraud was involved there, it was just bad science, useless research that happened to come to a political conclusion that fit the journal editors’ preconceptions. So let’s not forget that, just cos some papers are retracted amid massive irregularities, that doesn’t mean that the un-retracted papers are necessarily any good at all.

But that’s not what I wanted to talk about here. What I wanted to talk about today is the Harvard connection.

Here’s the affiliation section of that Lancet paper:

Hmmmm . . . the journal editors may have thought. It’s an observational analysis of coronavirus treatments, one of the authors is from Surgisphere Corporation (?), one is from some heart center in Switzerland—is coronavirus a heart disease?—one is in a department of biomedical engineering (kinda strange, as there’s no biomedical engineering involved here), and one is at HCA Research Institute, which seems to be the research division of a big health-care company. The first author, though, he’s listed as the “William Harvey Distinguished Chair in Advanced Cardiovascular Medicine at Brigham and Women’s Hospital,” which is affiliated with . . . Harvard!

Harvard—hey, we’ve heard of them. They do good work at Harvard! You don’t get to be a professor at Harvard Medical School by just taking an analysis that someone else did, of data that someone else collected and you’ve never seen, and then putting your name on it. It doesn’t work that way! This isn’t Cornell or Ohio State or the University of California, it’s Harvard, goddamn it. Even the people who did that Stanford study, flawed as it was, they collected and analyzed the data themselves.

It says in the Contributors section of the paper that “The study was conceived and designed by MRM and ANP. Acquisition of data and statistical analysis of the data were supervised and performed by SSD. MRM drafted the manuscript and all authors participated in critical revision of the manuscript for important intellectual content. MRM and ANP supervised the study.” MRM is the Harvard guy. But I really don’t know what it means to have “conceived,” “designed,” and “supervised” the study. It seems that SSD (the Surgisphere guy) did the actual work.

Aaaaand, the analysis involves some tricky statistical adjustment but all the authors were M.D.’s with no apparent statistical expertise, so who did that statistical adjustment? It says, “We acknowledge Jide Olayinka (Surgisphere) for their helpful statistical review of the manuscript,” but that’s just a review. Presumably this person didn’t actually do the work, or he would’ve been a coauthor, no?

So what did the Lancet editors think about the statistics when they received this paper? I guess we’ll never know because reviews are TOP SECRET because secrecy. But maybe they thought: Hey, the first author is a professor at HARVARD! We can trust him. A Harvard professor would never just sign off on questionable work as a way of getting a publication in Lancet. It doesn’t work that way.

I have no idea. I don’t know any of these people. I don’t even know the “sci-fi writer and adult content model” who are said to work for Surgisphere. The data and analysis may all be sitting somewhere, maybe in the same place as Brian Wansink’s bottomless soup bowl, Mary Rosh’s original survey data, the original version of that wikipedia article that Ed Wegman plagiarized, and the “real killer” who O.J. is still hunting down.

But I do wonder if Harvard was being used to launder this work. If I were a Lancet editor, I’d be kind of annoyed at Harvard. If I worked at Harvard, I’d be kind of annoyed that some part of my university’s reputation is being spent in this way.

More from the U.K. Science Media Centre here. They emphasize the value of controlled experiments, which I agree with. I do think that observational studies can be useful, but you have to take the adjustment problems seriously. It’s not enough to have zero authors with statistical expertise in such a difficult problem. Even if one of the authors is the so-and-so Chair at You-Know-Where.

And more from Catherine Offord here.

P.S. If only someone could send me a picture of an adorable cat spinning in the dryer, that would be a perfect illustration of reputation laundering.

P.P.S. Conflict of interest: I went to MIT so maybe all this Harvard-bashing is sour grapes. Also I was a student at Harvard and I’ve taught at Harvard, so maybe I’m too protective of Harvard’s reputation.

37 thoughts on “Harvard-laundering (the next stage of the Lancet scandal)”

    • Ulc:

      Harvard guys aren’t known for their numeracy. Remember “the replication rate in psychology is quite high—indeed, it is statistically indistinguishable from 100%”?

      They’re good at getting published in top journals, though, and that’s the important thing, right?

  1. > But I do wonder if Harvard was being used to launder this work. If I were a Lancet editor, I’d be kind of annoyed at Harvard.

    Maybe that Lancet editor would find some consolation in the fact that the New England Journal of Medicine was also fooled by this Harvard-laundering. From geopolitical considerations I’m not sure if we should expect the NEJM to be more or less susceptible than the Lancet.

  2. I think you are overstating the Harvard-MIT rivalry. Just a few years ago I heard the then-president of Harvard refer to MIT as “the best university in Cambridge” . . . “east of Central Square.”

    Actually, I found the rivalry to be asymmetric with MIT students expressing antagonistic sentiments regarding Harvard—but little or nothing similar from Harvard students.

    Bob76

  3. Not quite a cute cat in a dryer, but a photo of an adorable dog is quite germane here. Cloning fraudster Hwang Woo-suk used eminent US scientist Gerald Schatten for similar reputation-laundering purposes in a paper about the first cloned dog, [Snuppy](http://doi.org/fh92z4). Once it became obvious Hwang was going down in flames, Schatten tried to downplay his role. An institutional investigation [found](http://web.archive.org/web/20060506104516/newsbureau.upmc.com/PDF/Final%20Public%20Report%202.08.pdf):
    “we have no reason to doubt Schatten’s statement to us that his major contribution to the paper was a suggestion that a professional photographer be engaged, so that Snuppy would appear with greater visual appeal. It is less clear that this contribution fully justifies co-authorship.”

    • Thomas:

      Interesting story. Greed + trust in the wrong people = trouble! I followed the link and read the story. I was curious about Schatten’s later career so I looked him up on Wikipedia. His page had this bit on the scandal:

      Schatten himself called for an investigation by his university, the University of Pittsburgh, in 2005.[5] Finished in February 2006, the investigation committee concluded that Schatten was not guilty of scientific misconduct, because he was not involved in the falsification of data.[5]

      This is a misrepresentation of the committee’s finding. What the committee actually wrote was, “Dr. Schatten shirked these responsibilities, a serious failure that facilitated the publication of falsified experiments in Science magazine. While this failure would not strictly constitute research misconduct as narrowly defined by University of Pittsburgh policies, it would be an example of research misbehavior.”

      I will fix the wikipedia page.

  4. Seriously, do people really think editors and reviewers have the time to look up all this and see where the data came from and who is behind the study? We are not talking 1 paper a week, right?

    I think all this calls for open peer review, like the BMJ does, and maybe paying peer reviewers for their time. Lancet and NEJM make a ton of money, so they can easily pay people to really go digging.

    As much as having the data uploaded sounds ideal, I don’t think it will happen anytime soon in big medical journals.

    • Anoop:

      What the hell? Lancet rejects zillions of papers because they judge them not good enough or not important enough. If they can’t get their act together enough to see big problems in their papers, they should just fold up and go home. You don’t have to “really go digging” to see that this paper has serious issues. If they’re accepting this paper, what’s their justification for rejecting all those others? If it’s really true that editors and reviewers don’t have the time to look at the papers they’re publishing, they should cut out the middleman and just accept all their submissions. Or maybe just accept all their submissions from Harvard.

      • Aren’t you being a tad simplistic here?

        This is akin to saying: oh, your test failed to detect cervical cancer in this particular instance, so why don’t we just stop all testing?! Let people go to astrologers.

        • Rahul:

          There’s lots of good medical research being published. I don’t think we should go to astrologers. I do think maybe Lancet should think twice next time about publishing bad work that happens to be newsworthy.

        • I sympathise with Anoop’s POV insofar as it’s inevitable that at least some papers like this slip through the cracks. Four reviewers, including maybe one statistician, will sometimes miss issues like this, even though they are quite obvious in hindsight. And the review process will have difficulty catching outright falsification like in this situation, since I suspect most reviewers would be looking for methodological faults and assuming good faith on the side of the authors.

        • Zhou:

          Oh, sure, everyone makes mistakes. I’ve been fooled by fakes too.

          What’s important is that people and organizations are open to the possibility that they made a mistake, rather than following the standard pattern of deny-and-defend.

  5. Andrew,

    Do you really think they should do an independent analysis for every SINGLE paper that comes in, to check the data provenance and how credible the source of the data is, by looking into the company’s history, how many people work for it, and such? If this was as easy and “so serious” as you claim, they wouldn’t have needed an independent analysis, would they? They could have rejected it right away without having to do an investigation.

    And none of the other ‘major’ concerns like confounders and dose response would warrant a retraction or rejection. If that was the case, none of the observational studies would ever see print.

    • It is probably impractical to do an “independent analysis for every SINGLE paper.” But they could require that the data be made available. It won’t cure all problems, but it will go a long way towards helping. I increasingly believe that if somebody wants to publish research that can influence policy or clinical decisions, then they should be obligated to provide the data. If they want to use it for their own purposes, then they can just keep it secret.

      • That may not happen in big journals any time soon.

        In my personal experience, big multi-center studies take years to conduct and fund, and the last thing the investigators want to do is give their data up so easily. Hundreds of retrospective analyses are conducted and published and serve as preliminary data for the next set of grants, so they can keep a job! Once they have milked the data, then they will upload it. And we are still talking about studies funded by NIH grants. So it would probably solve a lot of problems, but it may not happen in the near future.

        • And so what? Are you expecting journal reviewers to go back and conduct the statistical analysis to see if they can reproduce all the findings? Not practical. And that’s not what people are saying when they say the data are not available.

        • I have (a few times) as a reviewer told the editors that I could not recommend publication due to data issues that could not be addressed without seeing the data – data which had not been provided. The response of editors has been to ignore the concern. Fortunately, I don’t review that many papers – but I would investigate the data a bit if it had been provided. For those who do a lot of reviews, it is impractical. But there might be a valuable deterrence effect anyway. If the data were provided to reviewers, it might instill some useful discipline on authors that seems to be missing in the current process.

    • Anoop:

      As I wrote in my earlier post, doing statistical analysis of observational data is difficult even if done by experts. There was no indication that any of the authors of this paper had any expertise at all in statistical analysis of observational data. So, yeah, that’s a problem. I don’t really care about whether a paper “warrants a retraction or rejection.” I’m concerned that the Lancet reputation is being used to give credence to crap papers.

      • I liked the previous conclusion better (https://statmodeling.stat.columbia.edu/2020/06/01/this-ones-the-lancet-editorial-board-a-trolley-problem-for-our-times-involving-a-plate-of-delicious-cookies-and-a-steaming-pile-of-poop/):

        “This was a paper whose conclusions absolutely relied on complicated statistical adjustment, which is exactly the sort of problem where open data, open code, and post-publication”

        I’m not sure “There was no indication that any of the authors of this paper had any expertise at all in statistical analysis of observational data” matters.

        If the authors had a record of crappy stuff, that would matter (I don’t know if they do or don’t). If the authors didn’t have a record, then rejecting them because their titles weren’t fancy enough seems very similar to accepting something just coz it comes from Harvard.

        • Ben:

          I agree that open data, open code, and post-publication review is the way to go.

          Also, nowhere did I recommend that the paper be rejected because the authors don’t have a fancy title. For one thing, having experience in statistical adjustment is not the same as a fancy title. Beyond that, the lack of demonstrated statistical expertise is just one more clue that something may be wrong. I’m reacting here to the implicit idea that the Lancet editors had no choice but to accept this paper on an important topic. It’s the same way I felt when people defended JPSP’s decision to publish that ESP paper in 2011. These journals reject dozens of submissions a week; they don’t have any obligation to publish any particular submission.

        • > Also, nowhere did I recommend that the paper be rejected because the authors don’t have a fancy title. For one thing, having experience in statistical adjustment is not the same as a fancy title.

          Hmm you’re right; I conflated these two things. My bad.

    • This was not just ANY single paper, but a paper whose conclusions could and actually did change the WHO recommendations on the clinical management of the COVID-19 pandemic, including the interruption of clinical trials. As a result, MOHs of different countries have also made decisions based on the conclusions of this paper. Very predictable. Yes, it should have been thoroughly analyzed.

  6. 96K patients, 4 M.D. authors, and no statisticians. Wondering who verified that the verified data was actually verified. Tedious stuff you’d think busy people would be too busy to do. Equally interesting: SSD’s Surgisphere, which specializes in machine learning and data analytics, doesn’t seem to have an IT staff. Maybe they contract out the IT infrastructure and db stuff. SSD not only is a Man for All Seasons, but doesn’t sleep much, either.

  7. I’m thinking of conducting an HCQ (hydroxychloroquine) study because, why not? Regardless of the study design, analytical quality, or value of the results, the editors of high-impact journals find them unusually interesting. The papers receive wide press coverage, which, paired with my other efforts, creates the elusive synergy required to boost my twitter followers. The paper may even confer benefits to my career and social standing. A finance career and engineering degrees were holding me back. And I feel better about only having an MS. I will learn from Surgisphere’s mistakes. I feel certain the journals will not. /s

    (Sober Editorial)
    Hydroxychloroquine for the Prevention of Covid-19 — Searching for Evidence (Myron S. Cohen, M.D.)
    https://www.nejm.org/doi/full/10.1056/NEJMe2020388

    (A Log for the campfire)
    A Randomized Trial of Hydroxychloroquine as Postexposure Prophylaxis for Covid-19
    https://www.nejm.org/doi/full/10.1056/NEJMoa2016638?query=recirc_curatedRelated_article

  8. “We can no longer vouch for the veracity of the primary data sources,” Mandeep Mehra of Brigham and Women’s Hospital, Frank Ruschitzka of University Hospital Zurich, and Amit Patel of University of Utah said in a statement. “Due to this unfortunate development, the authors request that the paper be retracted.”
    GIGO
    In any endeavor using data — research, data conversion for a new system — you’d expect in this day and age it would be SOP to apply procedures, e.g., sampling and auditing, before the data is used.
    R. A. Fisher:
    “To consult a statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” (Quoted in Statistics Done Wrong: The Woefully Complete Guide, by Alex Reinhart.)

    • Sergio:

      The general ideas are discussed in various textbooks in causal inference, for example chapter 20 of Regression and Other Stories. The basic idea is to start with the comparison of treated and control group in the observed data, then compare the two groups with respect to demographics and pre-treatment health status, then do some combination of matching and regression to estimate the treatment effect among the subset of people that had a chance of getting either option, then see how the estimate changes as you adjust for more things, then consider the effects of adjustment for important but unmeasured pre-treatment predictors. It’s all assumption-based, but if you do it carefully and make your assumptions clear, you can learn something. It’s not a button that can be pushed in a statistics package.
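
      To make that concrete, here is a minimal sketch of that kind of workflow in Python. The dataframe and its columns (treated, died, age, sex, comorbidity_score) are made up for illustration; this is not the paper’s actual analysis, just the generic sequence of steps described above:

      ```python
      # Sketch of an observational-adjustment workflow on a hypothetical dataframe.
      import pandas as pd
      import statsmodels.formula.api as smf

      def adjustment_sketch(df: pd.DataFrame) -> None:
          # 1. Raw comparison of outcomes in the treated and control groups.
          print(df.groupby("treated")["died"].mean())

          # 2. Compare the groups on pre-treatment covariates (balance check).
          print(df.groupby("treated")[["age", "comorbidity_score"]].mean())

          # 3. Propensity scores from a logistic regression on pre-treatment variables.
          ps_model = smf.logit("treated ~ age + sex + comorbidity_score", data=df).fit(disp=0)
          df = df.assign(pscore=ps_model.predict(df))

          # 4. Restrict to the overlap region: people who plausibly could have
          #    received either option.
          lo = df.loc[df.treated == 1, "pscore"].min()
          hi = df.loc[df.treated == 0, "pscore"].max()
          overlap = df[(df.pscore >= lo) & (df.pscore <= hi)]

          # 5. Regression adjustment within the overlap sample; watch how the
          #    estimated treatment coefficient moves as more adjusters are added.
          for rhs in ["treated",
                      "treated + age + sex",
                      "treated + age + sex + comorbidity_score"]:
              fit = smf.ols(f"died ~ {rhs}", data=overlap).fit()
              print(rhs, "->", round(fit.params["treated"], 3))
      ```

      The point of the loop at the end is the sensitivity check: if the estimate moves around a lot as you adjust for more observed things, that’s a warning about the important pre-treatment predictors you could not measure.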

      • Andrew,

        Thank you for the answer. Would it be correct to say that they, in fact, did causal-inference work, but: 1) their assumptions are not laid out transparently, and 2) the assumptions (from what we can gather, given the lack of transparency) won’t hold in any population of interest?

  9. “Never assume conspiracy when stupidity will suffice.” Was this a simple case of wanting to be first rather than being right, for a career boost, or something darker? Can you get an EUA if a therapeutic or prophylactic exists? [“that there is no adequate, approved, and available alternative to the product for diagnosing, preventing, or treating such disease or condition”] https://www.law.cornell.edu/uscode/text/21/360bbb-3

    The timing of the piece smells bad…the tie back to a shady company with an even shadier “CEO” smells worse.

    Oh, and what’s the termination date for the EUA again? I thought so.
