More on that 4/20 road rage researcher: Dude could be a little less amused, a little more willing to realize he could be on the wrong track with a lot of his research.

So, back on 4/20 we linked to the post by Sam Harper and Adam Palayew shooting down a silly article, published in JAMA and publicized around the world, that claimed excess road deaths on 4/20 (“cannabis day”).

I googled the authors of that silly JAMA paper and found that one of them, Dr. Donald Redelmeier, was featured in a New York Times article from 2010, where he’s referred to as “perhaps the leading debunker of preconceived notions in the medical world.”

Kind of ironic that he’s called a “debunker,” considering that his work needed to be debunked.

Anyway, here’s how that NYT article begins:

Presidential elections can be fatal.

Win an Academy Award and you’re likely to live longer than had you been a runner-up.

Interview for medical school on a rainy day, and your chances of being selected could fall.

Such are some of the surprising findings of Dr. Donald A. Redelmeier, a physician-researcher and perhaps the leading debunker of preconceived notions in the medical world.

In his 20 years as a researcher, first at Stanford University, now at the University of Toronto, Dr. Redelmeier, 50, has applied scientific rigor to topics that in lesser hands might have been dismissed as quirky and iconoclastic. In doing so, his work has shattered myths and revealed some deep truths about the predictors of longevity, the organization of health care and the workings of the medical mind.

OK, let me be clear. Wacky ideas are fine. Some of the ideas might be wrong, and that’s ok. Science needs its careful researchers and also its eccentrics. I have no problem with that, nor do I have problems with journals publishing speculations.

Here are the three problems I have with the above news article:

1. “Scientific rigor.”

2. “Shattered myths.”

3. “Deep truths.”

Let me go through these things one at a time.

1. This work is not scientifically rigorous. Check out Harper and Palayew for more on this. Yes, it’s possible that Redelmeier’s work was rigorous in the past and now he’s switched to non-rigorous work, but I doubt it.

2. The news article does not mention any myths that were shattered. I have no idea what they’re talking about.

3. What are the “deep truths”? That medical school interviews are different on rainy days??

First of all, it’s an open question which of these claims are “truths” in the sense of being at all applicable to the wider world beyond the particular datasets this guy was studying. Second, the only way you can possibly characterize these claims, if true, as “deep” is if they reveal some general insight about human nature. But in what sense does a grab-bag of data patterns represent depth?

Also this:

“He’ll go totally against intuition, and come up with a beautiful finding,” said Eldar Shafir, a professor of psychology and public affairs at Princeton University who has worked with Dr. Redelmeier on research into medical decision-making.

I’m not sure how Redelmeier’s claims go “totally against intuition”—all of them have convenient stories attached—but, setting that aside, my point here is the problem with the traditional heroic mode of science reporting, the kind of thing we’re familiar with from Ted talks and the Edge foundation.

The scientist-as-hero narrative seems to me to be a disaster, and I think we need something new. Indeed, lots has changed since 2010, and I’d like to think that a modern version of this science reporting would show a bit more skepticism.

Who cares?

Why do I write about this?

The reason I write about this is not to pick on some doctor in Toronto who has a sideline in writing amusing noise-mining epidemiology papers. As I’ve said many times, I have no problem with journals publishing papers on 4/20 Day, or ESP, air rage, himmicanes, etc. (I have more of a problem with that paper on homicides and NRA conventions because it had political content, so it wasn’t just a harmless feature story.) But there is a problem in that similar statistical fallacies arise in more serious medical research. So it’s interesting to break down this particular news story and see how it presses all the “scientist as hero” buttons. We can do better now, and that’s great.

And I can’t blame Redelmeier for the fact that he made statistics errors and got hyped in the newspaper. We all make statistics errors, and we don’t control what is written about us. But I’m concerned that the “scientist as hero” treatment can encourage a lack of self-questioning, and that’s too bad.

P.S. To her credit, Katie Hafner, author of that news article, does present a dissenting view:

Professor Tibshirani, for instance, has reservations about some of Dr. Redelmeier’s choices, and declined to collaborate on the Academy Awards study.

“I honestly thought it was frivolous, and we’ve argued about it,” Professor Tibshirani said. He also questioned the Election Day research. “Of course there’s more traffic, so it seemed self-evident,” he said.

The article continues:

That perspective amuses rather than offends Dr. Redelmeier.

Dude could be a little less amused, a little more willing to realize he could be on the wrong track with a lot of his research.

18 thoughts on “More on that 4/20 road rage researcher: Dude could be a little less amused, a little more willing to realize he could be on the wrong track with a lot of his research.”

  1. I’d like to bring up a question that seems natural given the framing:

    Are you simply against heroism, overall?

    It’s a serious question. I’ve seen quite a bit of current postmodern discussion
    that is simply against the concept, the identity, the meaning of heroism. Some of
    it takes on ‘white knights’ or the idea that anybody could be doing untrammeled
    good. There are efforts to coopt heroism for victims, but this seems a half-measure
    and pretty soon the erstwhile heroes are being ‘debunked.’

    If this is the instinct that is driving you away from ‘scientist as hero’ – it
    would be good to get the framing entirely into view. Nobody perhaps is a hero.
    That raises questions of what makes a hero, what is the definition, etc; and
    what it is replaced with, perhaps (collectivism?). But it also places the
    question outside science, per se. It says that entrepreneurs aren’t heroes,
    nor are physicians, nor first responders, nor soldiers, nor parents.

    If, on the other hand, only scientists can’t be heroic – that raises
    questions about the place of science in society and the nature of the
    enterprise.

    Alternatively, the problem lies with what is deemed heroic science; such
    that so many claimants to the mantle are actually deluded or frauds, not even
    anti-heroes or arch-villains, but simply foolish or grubby.

    • What even makes it “heroic science”?

      Is it just that one person can start from some different way of looking at things and come to some revolutionary conclusions, vs. an army of people who have been working tirelessly yet hopelessly from the same shared wrong premises they teach each other? I see no problem with that happening. I don’t see it in this type of research, though.

      • It is a good question – what makes science heroic – if science can be heroic?
        Does it mean standing alone or taking personal risks? Or expending extreme effort?

        Perhaps we can deduce from examples?
        Was Walter Reed heroic for the risks he took in pursuing the vector hypothesis of yellow fever? How about the self-experimentation in addressing the question of Helicobacter? Or can wrong science even be heroic – like Pettenkofer carefully making an effort to refute Koch?

        Or do we deny that these were heroic? Or that they were only heroic because they were medical?

        In the vein of denying the existence of any heroism in science, critics of individuals frequently advance the idea that additional individuals were pursuing related ideas, that the resistance was less than it appears, that in reality the scientist was small, petty, paranoid, incorrect in other fields, beset by various sins, or otherwise insignificant. Which, if any of these, is a valid grounds for dismissing a person and personal achievements?

        Only once we have clarity about whether heroism exists, whether scientific heroism can exist, can we decide whether some people are rightly or wrongly included.

        • I don’t think “took great risks and succeeded” is really what Andrew means by “heroic”. It seems to be more like “one person is largely responsible for a big step forward in understanding, and everyone else is just left to work out the details”. I just don’t see a problem with that happening, but I really wouldn’t call it “heroic” unless it did include a large degree of personal risk-taking.

          I mean, just last week I’m pretty sure I resolved “a great mystery of physics”. Afaict, the problem in that case is that the physicists are calculating what they expect to see on average and comparing that to the individual instance they observe, two different things with the same units. It doesn’t take an army of researchers to figure something like that out, or even an expertise in the subject matter. It just requires someone who isn’t stuck in the same old way of thinking about the problem.

        • I think the issue is not with the concept of “hero.” It is a view of the history of science that is incorrect and therefore misleading. We are told a narrative: Europe was in the dark ages, then Galileo, through his courageous battles with the dogmatic church, started the scientific revolution. Then Newton unlocked the mysteries of the universe, blah blah blah. It is this narrative, which misses the reality of slow progress and the constant accumulation of experimental results, that leads us to think scientific progress depends on courage and genius. It then elevates those with credentials as being beyond question, which is the very opposite of the scientific spirit. The heroic-genius narrative also makes us search for some enormous breakthrough rather than plodding along. And it elevates the surprising breakthrough finding, which is the very kind of finding of which we ought to be most suspicious.

        • I think the “hero narrative” is pretty accurate for that period though. You had basically millions of monks/etc philosophizing and making no progress for ~1500 years (assuming the standard chronology is correct…) until Tycho Brahe collected good enough data to figure out what was going on.

          Soon enough Kepler used that data to come up with his laws. Next Newton derived Kepler’s laws from some postulates, followed by Halley, who used Newtonian physics to accurately predict when a comet would return. That was a feat any layperson could understand and verify without needing to know any jargon, etc. Then “science” really took off.

          It really was a process dominated by just a couple important people… and Galileo didn’t even play a crucial role (because he ignored Kepler’s laws which did play a crucial role). He was more of a political figure or “science popularizer” like the people we see on the discovery channel today.

  2. Funnily enough, his earlier study claiming that winning an Oscar prolongs your life was itself debunked years before the NYT article. It is perhaps the most notorious and widely derided example of “immortal time bias”.
    An early debunking is available here:
    http://www.medicine.mcgill.ca/epidemiology/hanley/bios601/CandHchapter06/Sylvestre_et_al_OSCARS.pdf
    Later debunkings include:
    http://doi.org/cbhbxh
    It’s since become the go-to example when epidemiologists want to ridicule this bias, e.g.:
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4052143/
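    To make the mechanism concrete, here is a minimal simulation of immortal time bias (the numbers are hypothetical, chosen only for illustration: everyone’s lifespan is drawn from the same distribution, and an award is handed out at age 60 to a random 10% of those still alive). “Winners” appear to live longer for the sole reason that one must survive long enough to win:

    ```python
    import random

    random.seed(1)

    AWARD_AGE = 60  # hypothetical: awards go only to people still alive at 60

    winners, others = [], []
    for _ in range(100_000):
        death_age = random.gauss(75, 10)  # same longevity distribution for everyone
        # you can only win if you survive to the award ceremony
        wins = death_age > AWARD_AGE and random.random() < 0.1
        (winners if wins else others).append(death_age)

    # Naive comparison of mean lifespans: winners look longer-lived,
    # even though winning has no causal effect here at all.
    naive_gap = sum(winners) / len(winners) - sum(others) / len(others)
    print(f"naive lifespan gap: {naive_gap:.1f} years")

    # Conditioning on survival to the award age removes the artifact:
    # among people alive at 60, winners and non-winners are alike.
    alive_at_award = [a for a in others if a > AWARD_AGE]
    fair_gap = sum(winners) / len(winners) - sum(alive_at_award) / len(alive_at_award)
    print(f"gap after conditioning on survival to {AWARD_AGE}: {fair_gap:.1f} years")
    ```

    The naive gap is entirely an artifact of the pre-award “immortal” person-time being credited to the winners, which is exactly the error the Sylvestre et al. reanalysis identified in the Oscar study.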

  3. Andrew refers to “shattered myths” so here is one from this week’s NYT:

    https://www.nytimes.com/2019/10/04/well/eat/scientist-who-discredited-meat-guidelines-didnt-report-past-food-industry-ties.html

    The opening sentence is

    “A surprising new study challenged decades of nutrition advice and gave consumers the green light to eat more red and processed meat.”

    The second sentence is

    “But what the study didn’t say is that its lead author has past research ties to the meat and food industry.”

    A bit further down

    “Dr. Johnston also indicated on a disclosure form that he did not have any conflicts of interest to report during the past three years. But as recently as December 2016 he was the senior author on a similar study that tried to discredit international health guidelines advising people to eat less sugar.”

    And,

    “He said he did not report his past relationship with ILSI because the disclosure form asked only about potential conflicts within the past three years. Although the ILSI-funded study publication falls within the three-year window, he said the money from ILSI arrived in 2015, and he was not required to report it for the meat study disclosure.”

    • How does that debunk the study? If that were enough to debunk a study, then every single one out of academia run by grad students who need to “get results” should be debunked.

      • I agree, that doesn’t debunk the study. But their conclusions, or as they call them, guidelines, are not supported even by their own analyses, and that throws me off. You can’t just state that the evidence is weak and therefore recommend the opposite without any evidence at all.

        • That’s not how evidence works.

          As for ‘past research ties’, that’s right up there with ‘links’. Give me a break.

  4. Coincidentally enough, I just read an article by Redelmeier and Tibshirani on distracted driving. I guess there were some studies they agreed about in the past.

  5. OK, meant to comment on the earlier post but must have forgotten.

    Now, I worked with Don Redelmeier fairly often when I was at U of T in the 1990s and thought highly of his work.

    For instance, I initially was working with him on the distracted driving topic. When Rob Tibshirani joined in on the work, I continued as a second opinion on it. I remember spending a couple of days reviewing their drafts and answering many questions from Don (I don’t know if Rob ever knew about that). So initially his work was careful and circumspect.

    Now, what I found strange was that he never acknowledged me for anything I ever did for him, or ever asked me to be a coauthor. Everyone else I worked with at U of T invariably did so. Also, later, when I had left U of T, he once phoned me to discuss time-dependent covariates. When I mentioned the potential confounding issues with that, he dismissed my arguments with “Look, very good people are writing papers on this topic – and they wouldn’t be if there was that problem. You must be wrong.” I ended the conversation by saying I needed to get back to my work.

    He never called again and I was glad. I did not feel it was in my best interest to interact with him further. Apparently Rob Tibshirani came to the same view?

    I recall noticing that the 4/20 (“cannabis day”) paper seemed to use what looked like what Tibshirani had worked out for him 20+ years ago. But methodology evolves, and one needs to keep current and continue to work with up-to-date methodologists.
