The causal hype ratchet

Noah Haber informs us of a research article, “Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review,” that he wrote with Emily Smith, Ellen Moscoe, Kathryn Andrews, Robin Audy, Winnie Bell, Alana Brennan, Alexander Breskin, Jeremy Kane, Mahesh Karra, Elizabeth McClure, and Elizabeth Suarez, and writes:

The study picked up the 50 most shared academic articles and their associated media articles about any exposure vs. some health outcome (i.e. “Chocolate is linked to Alzheimer’s”, “Going to the ER on a weekend is associated with higher mortality”, etc.). We recruited a panel of 21 voluntary scientific reviewers from 6 institutions and multiple fields of study to review these articles, using novel systematic review methods developed for this study.

We found that only 6% of studies exhibited strong causal inference, but that 20% of academic authors in this sample used language strongly implying causality. The most shared media articles about these studies overstated this evidence even further, and were likely to inaccurately describe the study and its implications.

This study picks up on a huge number of issues salient in science today, from publication-related biases, to issues in scientific reporting, all the way down to social media. While this study can’t identify the degree to which any specific factor is responsible, we can identify that by the time we are likely to see health science, it is extremely misleading.

A public-language summary of the study is available here.

I’ve not read the article in detail, but I thought it might interest some of you, so I’m sharing it here. Their conclusion is in accord with my subjective experience that exaggerated claims slip in at every stage of the reporting process. Also, I don’t think we should only blame journalists for exaggerated claims in news articles and social media. Researchers often seem all too willing to spread the hype themselves.

15 thoughts on “The causal hype ratchet”

    • Pedanticmonk:

      I think it’s safe to say that the vast majority of people, when they look at an article and draw conclusions from it, do not read it in detail. That’s ok, it’s the way it is. The title of an article is the first line of defense, then the abstract, then the figures and tables, then the article itself, then ultimately the raw data. A good article conveys its message fractally in all these ways. It’s important to have the full article and also the raw data available in order to check the claims and avoid pizzagate scenarios, but when writing articles we should be aware that readers have limited time, so we should aim for accuracy in our titles, abstracts, and figures.

  1. So it sounds like a good heuristic is to ignore anything you read in the media. A report of a very high causal relationship doesn’t even rule out that the causal inference was actually low.

    • “So it sounds like a good heuristic is to ignore anything you read in the media.”

      Yes, or perhaps better phrased: you should always (try to) be skeptical and think for yourself, etc. This is another reason why I think rational, logical reasoning should be taught as early as possible, and definitely at universities. It could also come in handy for non-scientists, of course, who could use rational/logical reasoning for many things in life, for instance when listening to or reading the news :)

      Why I wasn’t taught anything about rational, logical reasoning at school and university, and why adding and/or emphasizing this in university curricula isn’t part of all the proposed “solutions” to “improve” science, is incomprehensible to me.

      Anyway, I recently started to think that scientists should maybe not talk to the media at all. Talking to the media necessarily involves presenting and/or emphasizing a subset of facts and/or opinions, selected by a subset of people, which I reason could in a way become problematic. Perhaps scientists should only make their work, opinions, etc. openly available to the public (and maybe ideally give the public a chance to participate as well, as this blog does), and that’s it.

      In light of this pondering, I’d like to refer to a quote I recently read in a book by Bruce Lee called “Striking Thoughts”:

      “A teacher, a good teacher that is, functions as a pointer to truth, but not a giver of truth. He employs a minimum of form to lead his students to the formless. Furthermore, he points out the importance of being able to enter a mold without being imprisoned by it, or to follow the principles without being bound by them.”

      In light of all this, I always cringe when I hear the same “open science”/“let’s improve science” people talking in the media about their “solutions” and “improvements”, as if nothing has been learned in the past decade about scientists talking to the media about their “sexy findings”.

      I recently came across a “Registered Replication Reports” page on the APS site (https://www.psychologicalscience.org/publications/replication), where the following is written (in all seriousness, I presume) about the “broader benefits to Psychological Science” of “Registered Replication Reports”, which caused me to die a little inside:

      “Authors and journalists will have a source for vetted, robust findings, and a stable estimate of the effect size for controversial findings.”

      In my view, sentences like these should not even be thought of (let alone written down) in science:

      1) you shouldn’t be thinking/talking about journalists at all on a page that is about a certain research format, and
      2) you shouldn’t suggest that you, and/or your research format, provide some sort of stable or robust estimate of certain controversial findings (as if you were the “best” and/or “most objective” and/or most “accurate” source?).

    • “So it sounds like a good heuristic is to ignore anything you read in the media.”
      You are probably okay with the hockey and baseball results. Anything that mentions chocolate is immediately suspect.

    • Terry: Yes, and the hyped Genovese story is still repeated in psychology texts as evidence of the “bystander effect.” (For those who don’t know what we mean, in brief, there were about 8 ear- or eyewitnesses to the 1964 attack (not 38), some of whom promptly called police; some neighbors couldn’t discern what the cries were or where they came from; one witness shouted at the murderer and chased him off; police arrived minutes after being dispatched, etc.)

  2. I think it’s really interesting to compare what’s said in three different places: A) the analysis section of an academic paper; B) the conclusions of the academic paper; and C) quotes from the scientists in the general media. At each step the strength of the relationship and the imperative to act increase, and this is before the journalist puts a spin on it. Often the biggest jump is between the analysis and the conclusions, so big that I commonly have to look back to see if I missed something. If the scientist makes such large jumps, why wouldn’t the journalist follow suit?

  3. One aspect that bothers me is seeing researchers (not just in published articles, but in blogging as well) drawing, or strongly implying without caveats, causality from cross-sectional data. As a non-scientist, that seems highly problematic to me. Longitudinal data seem pretty much a prerequisite for making a confident assertion of causality, and while cross-sectional data can imply causality, such implications should be explicitly caveated.

    I also think that assertions of causality without a clearly outlined and probable mechanism should be explicitly caveated.
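
    To make the cross-sectional point concrete, here is a minimal simulation sketch (the variable names and coefficients are made up for illustration, not taken from any particular study): two opposite causal structures, x causes y and y causes x, produce the same cross-sectional correlation, while lagged measurements of the same units can tell them apart.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Scenario A: x causes y.  Scenario B: y causes x.
      x_a = rng.normal(size=n)
      y_a = 0.7 * x_a + rng.normal(scale=0.71, size=n)
      y_b = rng.normal(size=n)
      x_b = 0.7 * y_b + rng.normal(scale=0.71, size=n)

      # A single cross-sectional snapshot cannot tell the two scenarios apart:
      print(round(np.corrcoef(x_a, y_a)[0, 1], 2))   # ~0.70
      print(round(np.corrcoef(x_b, y_b)[0, 1], 2))   # ~0.70

      # Lagged (longitudinal) measurements can: in this x-causes-y world, the
      # exposure at time t predicts the outcome at time t+1, but the outcome
      # at time t does not predict the exposure at time t+1; in the reversed
      # world the lagged pattern would flip.
      x_t = rng.normal(size=n)                             # exposure at time t
      y_t1 = 0.7 * x_t + rng.normal(scale=0.71, size=n)    # outcome at time t+1
      y_t = rng.normal(size=n)                             # outcome at time t
      x_t1 = rng.normal(size=n)                            # exposure at time t+1
      print(round(np.corrcoef(x_t, y_t1)[0, 1], 2))   # large: cause precedes effect
      print(round(np.corrcoef(y_t, x_t1)[0, 1], 2))   # ~0: effect does not precede cause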

  4. This is certainly true. But let’s think about who has the megaphone and distributes the hype most widely to the public, since that is what ultimately produces the misinformation and overstated claims. A university press release has a limited audience among the general public, I’m sure.

    And don’t discount that for some articles, industry uses them (with six-figure support) to push for media coverage. I have an example of that here: the organic industry peddled a milk study to the media with great ferocity. It did not deserve the large amount of coverage it got. https://twitter.com/mem_somerville/status/989670687446065152

    • The purpose of a press release is to interest the press in disseminating an idea (with the university’s name in the story). Of course it is not for the general public; it is for people who can reach the general public. Universities “peddle” research, just as you say in your second paragraph that “industry” does, but we are entitled to expect better of universities.
