Unlike MIT, Scientific American does the right thing and flags an inaccurate and irresponsible article that they mistakenly published. Here’s the story:

Scene 1

A few months ago I wrote about a really bad article that appeared in Undark, MIT’s science magazine. The article was so bad it lowered my opinion of MIT, my alma mater, in that it showed such poor judgment by the administration to sponsor this kind of irresponsible anti-scientific crap. MIT’s done worse things, of course, but I knew enough about this case to be particularly disappointed in the institution. (I’m not naming the authors of the article here, not because it’s any kind of secret—just follow the links—but because my problem is not with the people, it’s with the institutions. Bad actors have power because they’re empowered by others, and well-meaning people can become bad actors when their mistakes are not corrected.)

The problem in this story was not so much the errors in the published article—nobody’s perfect, even the best writers make mistakes, and as a journalist it’s hard not to be swayed by your primary contacts. No, the big, big problem was that the magazine refused, absolutely refused, to correct their article even after the errors were pointed out to them. I’ve come to expect this sort of unscholarly behavior from the Association for Psychological Science—after all, they have their bigshots to protect. But I was disturbed to see this coming from MIT.

As I wrote at the time, beyond its misrepresentations, that Undark article was horrible because it offered a science-empty take on science. They took a science dispute and tried to turn it into a political dispute. Part of the job for Politico magazine, maybe; not part of the job for MIT. The authors tried their best to disparage the work of real science reporter Stephanie Lee, but all they did was make themselves look bad—and muddy the waters for casual readers who didn’t know the whole story.

Scene 2

That was too bad. I sent some emails to MIT people who didn’t respond, and I was preparing yet another but didn’t bother to send it. Life is too short.

Then a couple weeks ago the authors of the bad article rehashed their story in Scientific American magazine.

It was the same crap as before: taking a scientific dispute and making it personal (referring to a paper where a particular scientist was 16th author as “his science”), attempting to discredit Lee’s reporting by labeling Buzzfeed as “left-leaning” (I’m guessing that Buzzfeed is actually about as left-leaning as Scientific American!), using loaded language (critics are “accusing,” “public shaming,” “complaining,” “below-the-belt,” “attacking”), dismissing statisticians’ analyses on the ground of “absurdity,” and saying Lee’s “charges” were “wrong on all accounts” without refuting anything that she said. The bit where the guy donated $5000 to support the study but that doesn’t count because the 16th author said he received “zero dollars” . . . huh? I guess they call it “below-the-belt” because it’s true.

I wasn’t sure what could be done about this. There’s little to be gained by getting into a fight with journalists who refuse to act in good faith and who only declare their blatant conflicts of interest when forced to do so. It’s frustrating, though: they demonstrate a willingness to mislead, and that lets them win. If they were reasonable people, we could publicly disagree with them. But since they don’t care about getting things right, they get to set the agenda.

Really sad to see venerable institutions such as MIT and then Scientific American getting conned by these people. Martin Gardner would be spinning in his grave.

Scene 3

Stephanie Lee responded to the bad article on twitter! I hate twitter—it just seems like a terrible format for telling stories, laying out an argument, or having a discussion—but, given the constraints of that form, Lee makes her points well. I was glad to see that she went to the trouble of doing this.

Scene 4

Scientific American flagged the article! Here’s what the editors added:

Editor’s note: This article was originally published on November 30, 2020 with a number of errors and misleading claims. First, it should have been labeled “Opinion,” but was not. Second, the authors’ bylines were omitted. Third, the authors failed to note that they have collaborated in the past with both John Ioannidis and Vinay Prasad, who are discussed in this essay, and also in this accompanying story. This, we now understand, was also the case with a similar opinion piece by the same authors in Undark magazine in June. Fourth, the authors did not disclose that there were other problematic issues raised about the design of a study co-authored by John Ioannidis, most notably how the study authors recruited study participants and how independent faculty at Stanford said that they were unable to verify the accuracy of their test.
Other specific errors or omissions are noted with asterisks in the text below. Scientific American sincerely regrets all of these errors.

Wow. Good job, Scientific American. It’s such standard practice for institutions to circle the wagons and defend wrongdoers, often using bureaucratic language (see this recent example), and it was such a breath of fresh air to see Scientific American do the right thing. The authors weren’t willing to correct their misrepresentations, so the magazine did it for them.

P.S. One annoying thing about this whole episode is that the authors of this article aren’t even serving their own cause by misrepresenting the science and ethics issues. It would be so easy for them to have made their case directly, something like this:

There’s been a lot of confusion about the coronavirus, and, since the spring, the Stanford team has been trying to keep their heads while all about them were losing theirs. They’ve taken some unpopular positions and sometimes they’ve been right, a lot more often than the people who were telling us that our cities would become covid-driven graveyards. Criticism of the Stanford team has been overblown and ideological. Sure, they’ve made mistakes—but who hasn’t? Their antibody study in April was sloppy in design and analysis, but this is excusable. They—and everyone else—were in a rush. Statisticians made valuable criticisms and that’s how science moves forward. It’s a mistake to blame the authors for an imperfect study—especially given that their larger claims are supported by other evidence. As for conflicts of interest: yes, there was some of that, and we appreciate the work of the Buzzfeed reporters who uncovered the involvement of the JetBlue founder. But this doesn’t invalidate the science. This work was no more invalid for being partly funded by commercial interests than other work that is funded by pharmaceutical companies, the U.S. Department of Defense, or, for that matter, college tuition. As for the criticisms of the Stanford team for presenting their work on Fox TV: that’s just silly. They are doing solid (if imperfect) science, they have concerns about policy, and of course they want to share that with the world. Wouldn’t you? And yes, we, the authors of this article, have personal and professional connections with some of the Stanford team. We’re just offering our perspective. But we think our perspective is a valuable counterweight to knee-jerk criticism that focuses on some particular issues and failed predictions without seeing the big picture.

So easy. You can make your case and make it firmly without denying the science or misrepresenting the journalism.

13 thoughts on “Unlike MIT, Scientific American does the right thing and flags an inaccurate and irresponsible article that they mistakenly published. Here’s the story:”

  1. This is impressive and nice to see.

    It makes me wonder about the recent discussions on what should get published/retracted. On the one hand, maybe this piece just shouldn’t have been published at all. But on the other hand, having the piece remain available but inextricably attached to this notice of wrongdoing actually seems more useful than not publishing it at all. It’s useful to students as an example of what not to do; it’s an example for readers of popular science of what to look for to detect signs of distortion in science reporting; and I suppose it warns any future editors not to accept anything from these authors without scrutiny.

    • Gec:

      I agree. One trouble with retraction is that it’s seen as a kind of “death penalty,” something so severe that it’s only done in cases of pizzagate-level malfeasance, and not always even then. Once we accept that the vast majority of bad science—even really really bad science of the beauty-and-sex-ratio variety—will stay forever unretracted in the public record, this pushes us to go beyond thoughts of retraction and consider ways of flagging the stuff.

      In any case, I’m just really glad that the Scientific American editors didn’t just automatically side with their authors, in the way that scientific journal editors typically do.

      • Oddly enough, it was the (again undisclosed) COI in the Sci Am article that led to one of the authors being removed from Undark’s board.
        I find it slightly amusing because non-disclosure of COI is something like #10 on the list of things these op-eds get wrong.

    • Joseph:

      There is some connection to MIT, for sure. For example, the mailing address listed on Undark’s contact page is: Undark Magazine, E19-623, 77 Massachusetts Ave., Cambridge, MA 02139. That’s on the MIT campus.

      Beyond that, there’s what I was told by one of the authors of that notorious article. The email I received in May, subject line “Media query,” said:

      Dear Andrew Gelman,

      I have interviewed John Ioannidis and Eran Bendavid regarding the Santa Clara study. I am writing for Undark (MIT’s science magazine) and wonder if you would be willing to chat briefly today or tomorrow morning?

      So no ambiguity there! If the reporter was actually misrepresenting what Undark was, that’s yet another problem.

    • Well, it’s an MIT science magazine. It seems to be published by the Knight Science Journalism program at MIT. See https://ksj.mit.edu/about/faculty-staff/ and https://ksj.mit.edu/about/. The latter webpage contains the following text.

      “Undark

      As part of its mission to promote excellence in science journalism, KSJ publishes an award-winning digital science magazine, Undark which reaches millions of readers annually. The magazine’s work is routinely republished by some of the world’s most respected media outlets, including The Atlantic, Scientific American, Smithsonian, Time, Newsweek, NPR, Quartz, Salon, and Slate. Renowned for its rigorous and comprehensive fact-checking process, Undark has won numerous national awards, including the 2018 George K. Polk Award in Environmental Reporting. (Learn more about Undark here.)” (emphasis added)

      The MIT directory lists the director of the program as having office address E19-622—essentially the same address that Joseph found for Undark. Maybe Andrew should email his thoughts to her.

      Bob76

  2. Hi Andrew,
    Let me first off say that I have been a member of the Right Care Alliance, DC, founded by Shannon Brownlee, author of the book Overtreated and VP at the Lown Institute. Her book was excellent. I know that John Ioannidis gave a keynote at a Lown Symposium on Evidence-Based Medicine, which can be found on YouTube.

    I think that past collaborations with a particular expert don’t mean that there can’t be debates and differences between experts subsequently. Nor do I think it is so bad if any of us favors the views of one or another expert. That goes on here on this blog.

    I am not privy to the conversation you had with whomever. I have not raised this issue with Shannon or anyone else because I was busy writing testimony to be presented before the DC Council. I have tremendous respect for you, as you may surmise.

    I myself disagree with people whom I respect and admire. I retweet perspectives I disagree with, too.

    In short, I’m not really understanding what happened, even from the Editor’s Note. In all fairness, each of us has biases and even loyalties that may prevent us from being fully open about our opinions. I think Sander Greenland has made note of these proclivities.

    OK that’s my spiel.

    I believe there are circles that gravitate toward each other.

    • Sameera:

      From my perspective, what happened is that they misrepresented what I wrote, they ignored other equally valuable statistical criticisms of that Stanford paper, and they pretty much contradicted themselves when writing about Lee’s reporting. The originally-unstated conflict of interest was the starting point, but I’d have problems with that article even if it had zero conflicts of interest. The authors had a story they wanted to tell, and they didn’t let the facts get in the way.

      I would not take this to imply that other things written by these reporters are wrong, or that they are particularly unethical, or whatever. Everyone makes mistakes. It’s too bad that Undark didn’t flag the mistakes (I contacted the authors of the article directly, so it’s not like they didn’t know about the problems), it’s good that Scientific American flagged them, and I hope this can be a model going forward.

      I agree with you regarding disagreement. As noted in my P.S. above, I think those articles could easily be written to strongly express the authors’ views but without misrepresenting or ignoring the relevant work of others.

  3. Thanks for the response. I’ll review the articles when I have some time. It takes a lot of effort to be fully apprised, I admit. I consider myself a consumer of expertise and a questioner sometimes. I love your blog b/c it is unique among blogs.

    I hope that some of the angst can be resolved at some point. But then again angst can be useful too. Anyway just wanted to read your take.

  4. Actually,

    > But we think our perspective is a valuable counterweight to knee-jerk criticism that focuses on some particular issues and failed predictions without seeing the big picture.

    Not to suggest that someone inclined toward their view wouldn’t think Ioannidis et al.’s work solid and dismiss criticism of that work as “knee-jerk” and nit-picky.

    But there was a ton o’ legit criticism, and would it be too much to hope that even people who support Ioannidis et al.’s work could come up with something more specific and productive than reflexive tribalism?

    Below is an interview with Lee about the conflict-of-interest thingy.

    https://www.theopennotebook.com/2020/06/23/stephanie-lee-unravels-the-conflicts-of-interest-behind-a-controversial-covid-19-study/

  5. While it is good to see retractions, most people aren’t going to see the process, detailed in the blog posts, with the same attention to detail, nor will they see it as non-personal, if only because there’s almost no way to present such a dispute without detailing personal interactions on a timeline. Most people view that as tinging the entirety with personal animus of some kind. That fits you, since for you ‘animus’ is equivalent to ‘caring about truth’, and particularly truth as a process, but it fits nearly everyone else as some version of a spat. If it involves women at all, then many characterize it in their minds as a ‘spat’ because they downgrade how women think. If it involves someone who is a stickler for truth, then it becomes a ‘spat’ because they downgrade the necessity of truthfulness. (Perhaps down-regulating might be a better word. The medical usage, referring to a system tendency, may be more appropriate because any new issue down-regulates in their heads.)

    There are a lot of good reasons why this occurs, other than just saying human nature. One is the shame factor. You see this with the greatest: Einstein had to retract! Einstein was publicly shown to be wrong! That is supposed to mean: everyone makes mistakes, so don’t believe genius is perfect, but it is apparently internalized as having large negative connotations in which trying something and failing is bad. That is a purification form, which you see all over academia. My dad’s advice, back in the 60s, was, ‘Don’t go into academia, the stakes are low so the politics are high.’ In that form of struggle, nearly all the people will not be Einstein. So they will tend to coalesce around purification, meaning a model of behavior which they as a group can fit to themselves. That model rather clearly tends to spotlessness, because the ultimate middle of the distribution goal is then to be the ultimate middle of the road example of the ultimate middle, meaning a distribution on top of the distribution. That purifies across layers to become conformities.

    When you have conformities, admitting error shows up right away when the purification function runs: it finds the 0’s, meaning the error halts there and various other ways to describe the best way to find something in a field, so finding 0’s means you just have to run around the edge looking for holes which appear because the threads coming out terminated at error. That just takes the concept of an increasing graph and treats it as growing into a space which envelops the graph in the necessary dimensions. So then you look for holes, not bumps. So when people read a resume, they scan for basic fit to the job and then look for errors because they eliminate the impure by identifying the threads that don’t rise, that terminate and thus terminate consideration.

    I called this a shame factor because it becomes shame, as in ‘shame on you’ to ‘it’s a shame about …’ And I like how it works externally as imposed shaming, which is how error recognition is treated, and internally. Internal expression of shame ranges from suicidal despair to violent anger. It’s a big label.

    So, most people fit themselves to a degree – another distribution! – to a model that has a purification form or modularity, which is adept at identifying impurities such as errors. (That last reminds me of fielding stats and the work done to show that a player with fewer errors may not be a very good fielder, that the player with more errors had much more range, etc. That’s interesting because it’s an example of how if you saw this from the inverted position, instead of seeing errors sticking up, you’d see the explanatory reasons like range generating an expected number of errors and comparing that so those below stand out as 0’s. Never saw that before. Thank you for that. So reading out takes you to errors, while reading in uses those as values in the function that generates errors generalized. In my terms, that’s GS:gs or the General to the generalized application or representation.)

    My ‘best’ comment may be this: you are presenting yourself as an aspirational example in which success in your field means your obligations to truth expand consciously past your career concerns. It is not rational to ask most of the people who are striving hard to harm their chances in a competitive race. And I mean mathematically; they would be taking on the error code of the purification modularity of the model they need to follow. It’s like the model says we count by 1s and sorry you have a square root of 2 there. But of course, we also know that some people are able to square the irrational to make 2, which is like saying that some follow a different pathway through the model. It is a distribution, right?

    As for me, I complain about the national news but not the local news. I expect that a half-hour show produced for a national audience will have higher standards of thoughtful writing, so the reliance on facile clichés bothers me. I thought about using them as a Covid example of chaotic threading: they go from a story essentially crying for restrictions to a story doing the same about the cost to businesses of restrictions, from a story crying about the loss of life to a story crying about the difficulties of isolation on families of children with special needs, back and forth, back and forth. They have no coherent voice. How can they? It’s a real messy situation. But do they ever admit this? No. What bothers me is they act like they’re coherent. Everything is to cry over. And because it is infused with partisan election competition, anything anyone says gets attacked. It annoys me.

  6. “Their antibody study in April was sloppy in design and analysis, but this is excusable”

    I think it was a favorably slanted report, bent to avert infection-protection policy that was perceived as going against business interests.
    If scientific rigor is your personal brand, as with John Ioannidis, why would you want to rush ahead with a biased study design?
    They set out to influence the narrative with a PR stunt… the same thing happened in Germany with Hendrik Streeck’s “COVID-19 Case-Cluster-Study”… in German there is a word for this kind of study: “Gefälligkeitsgutachten” (roughly: a favorable report done to serve somebody’s agenda).
    I don’t want to excuse that.
