More on that credulity thing

I see five problems here that together form a feedback loop with bad consequences:

1. Irrelevant or misunderstood statistical or econometric theory;
2. Poorly-executed research;
3. Other people in the field being loath to criticize, taking published or even preprinted claims as correct until proved otherwise;
4. Journalists taking published or even preprinted claims as correct until proved otherwise;
5. Journalists following the scientist-as-hero template.

There are also related issues, such as fads in different academic fields.

When I write about regression discontinuity, the discussion often focuses on item #2 above, because it helps to be specific. But the point of my post was that maybe we could work on #3 and #4. If researchers could withdraw from their defensive position, in which anything written by a credentialed person in their field is considered good work until proved otherwise, and if policy journalists could withdraw from their default deference to anything with an identification strategy and statistical significance, then maybe we could break that feedback loop.

Economists: You know better! Think about your own applied work. You’ve done bad analyses yourself, even published some bad analyses, right? I know I have. Given that you’ve done it, why assume by default that other people haven’t made serious mistakes in their understanding?

Policy journalists: You can know better! You already have a default skepticism. If someone presented you with a pure observational comparison, you’d know to be concerned about unmodeled differences between treatment and control groups. So don’t assume that an “identification strategy” removes these concerns. Don’t assume that you don’t have to adjust for differences between cities north of the river and cities south of the river. Don’t assume you don’t have to adjust for differences between schools in the hills and schools in the flatlands, or the different ages of people in different groups, or whatever.
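To make that worry concrete, here’s a toy simulation sketch in Python. The setup and numbers (hill vs. flatland schools, baseline scores, a zero true treatment effect) are invented for illustration and have nothing to do with any particular study: treatment happens to be more common in hill schools, hill schools start out with higher scores, and a raw difference in means looks like a big effect until you adjust for the baseline difference.

```python
# Toy simulation (made-up numbers): a zero treatment effect can look large
# when treatment assignment is correlated with an unmodeled covariate.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hill schools (1) vs. flatland schools (0); hill schools have higher baseline scores.
hills = rng.integers(0, 2, n)
baseline = 50 + 10 * hills + rng.normal(0, 5, n)

# Treatment is more common in hill schools; the true treatment effect is zero.
treated = (rng.random(n) < 0.2 + 0.6 * hills).astype(float)
outcome = 0.0 * treated + 1.0 * baseline + rng.normal(0, 5, n)

# Naive comparison of group means: looks like a substantial "effect".
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusting for the baseline covariate with least squares recovers roughly zero.
X = np.column_stack([np.ones(n), treated, baseline])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference in means: {naive:.1f}")
print(f"adjusted treatment coefficient: {beta[1]:.2f}")
```

The point is not that adjusting for one covariate fixes everything; it’s that a comparison can come with a compelling story attached and still be confounded.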

I’m not saying this is all easy. Being skeptical without being nihilistic—that’s a challenge. But, given that you’re already tooled up for some skepticism when comparing groups, my recommendation is to not abandon that skepticism just because you see an identification strategy, some robustness checks, and an affiliation with a credible university.

10 thoughts on “More on that credulity thing”

  1. Idk about #4. Or at least I’m not sure about what the solution is. Asking journalists to critically evaluate technical details of scientific studies seems like a lot to ask. On one hand, they already do this to some extent – after all they have to decide what to report on and not to report on in the first place. But on the other hand, criticizing science is squarely in the domain of scientists and asking journalists to be both seems like a not very practical ask. Or maybe I’m understating the training of science journalists?

    • “Or maybe I’m understating the training of science journalists?”

      I think you’re misunderstanding what journalism is.

      If journalists simply reported what happened or was said (‘facts’), it would be appropriate to expect them not to criticize or critique. But that’s *NOT* what they do. They spin the facts into a story and convey the story. Ostensibly the spun story is a kind of simplified narrative of reality that puts the facts into an accessible context for everyday readers. Spinning facts into stories requires interpreting, assessing, and critiquing facts and claims, and choosing which facts to include and how to represent them.

      And despite their lack of credentials, journalists are freely creating their own pseudoscience with their own statistical incantations, so if they’re going to create it they better be willing to critique it and accept critiques. Every column and article I read is full of claims about percentages of who has/does what and what it means. These aren’t taken from scholarly articles. They come direct from the data source. In fact I learned SQL from just such a writer – so confident in his own ability to statisticate that he wrote a book on it.

      We need a name for this new breed: Journo-stagician?

    • jim and Martha:

      Fair enough. I guess I was letting journalists off the hook too much (and not recognizing the praise they deserve when they do appropriately represent uncertainty and give balanced narratives). Thanks for the comments.

  2. Andrew said, “4. Journalists taking published or even preprinted claims as correct until proved otherwise.” He didn’t say that journalists should do all the critiquing themselves; my understanding is that he is saying that journalists should not say that claims are “correct” unless they have been thoroughly vetted (presumably by scientists who are well qualified to critique them).

  3. As a former journalist, I am frustrated by the failure of the profession to adapt to the increasing role that scientific debate plays in politics, economics and culture. Journalism is increasingly about peddling stories, and scientist-as-hero and the-breakthrough-that-changes-everything are stories that dazzle and sell. Not surprisingly, some scientists buy in too and shape their work as journalist bait.

    What I think journalists should do when a new piece of research appears:

    1. Read it themselves, which means having the chops to do that. You don’t have to be a specialist to make general sense of a scientific research paper; some fluency in algebra and basic stats, plus a handy laptop to look up key technical terms, is enough. You won’t be able to evaluate the paper, but you’ll have a general idea of its strategy and what it claims to be doing.

    2. Try to identify what’s new or different, whether it’s new data, a new analytical technique or whatever. This may entail contacting the author(s).

    3. By asking around and thinking about the what’s different part, identify potential fault lines — points of potential contention or doubt. Use your networks to locate researchers likely to position themselves at some distance from the paper and get their perspective.

    4. Finally, write it up. Explain to lay readers the message and possible significance of the paper, *and then explain the issues that separate those who agree from those who don’t, or not yet.* Only in the rarest of cases will a new paper sweep the field, leaving no doubt at all. The two takeaways for the reader should be “how things would be different if the paper is right” and “what are the continuing questions that (some) people in the field have about work of this sort”? That’s journalism that actually reports on the science the way good journalism would report on politics, sports, etc.

  4. Here’s my advice to journalists, from a few years ago:

    When you see a report of an interesting study, contact the authors and push them with hard questions: not just “Can you elaborate on the importance of this result?” but also “How might this result be criticized?”, “What’s the shakiest thing you’re claiming?”, “Who are the people who won’t be convinced by this paper?”, etc. Ask these questions in a polite way, not in any attempt to shoot the study down—your job, after all, is in part to promote this sort of work—but rather in the spirit of fuller understanding of the study.

    And if the authors don’t want to help you in this way by criticizing their own study, be suspicious!

    • Unfortunately, most journalists today just get paid for enough time to skim and paraphrase the press release and maybe have a quick phone conversation with one or two other people :( So even if they knew how to ask good questions about quantitative social science papers, they might not be allowed to.

  5. … taking published or even preprinted claims as correct until proved otherwise;

    This “until proved otherwise” seems overly optimistic to me.
