Advice for science writers!

I spoke today at a meeting of science journalists, in a session organized by Betsy Mason, also featuring Kristin Sainani, Christie Aschwanden, and Tom Siegfried.

My talk was on statistical paradoxes of science and science journalism, and I mentioned the Ted Talk paradox, Who watches the watchmen, the Eureka bias, the “What does not kill my statistical significance makes it stronger” fallacy, the unbiasedness fallacy, selection bias in what gets reported, the Australia hypothesis, and how we can do better.

Sainani gave some examples illustrating that journalists with no particular statistical or subject-matter expertise should be able to see through some of the claims made in published papers, where scientists misinterpret their own data or go far beyond what was implied by their data. Aschwanden and Siegfried talked about the confusions surrounding p-values and recommended that reporters pretty much forget about those magic numbers and instead focus on the substantive claims being made in any study.

After the session there was time for a few questions, and one person stood up and said he worked for a university; he wanted to avoid writing up too many stories that were wrong, but he was too busy to do statistical investigations on his own. What should he do?

Mason replied that he should contact the authors of the studies and push them to explain their results without jargon, answering questions as necessary to make the studies clear. She said that if an author refuses to answer such questions, or seems to be deflecting rather than addressing criticism, that is itself a bad sign.

I expressed agreement with Mason and said that, in my experience, university researchers are willing and eager to talk with reporters and public relations specialists, and we’ll explain our research at interminable length to anyone who will listen.

So I recommended that, when the reporter sees a report of an interesting study, he contact the authors and push them with hard questions: not just “Can you elaborate on the importance of this result?” but also “How might this result be criticized?”, “What’s the shakiest thing you’re claiming?”, “Who are the people who won’t be convinced by this paper?”, etc. Ask these questions in a polite way, not in any attempt to shoot the study down—your job, after all, is to promote this sort of work—but rather in the spirit of fuller understanding of the study.

25 thoughts on “Advice for science writers!”

  1. Formulating the right questions is hard but doable; but were it not for platforms like Twitter & LinkedIn, we wouldn’t have extended opportunities to delve deeply into the underlying drawbacks of our current knowledge & knowledge-acquisition systems.

  2. I don’t know whether any one specific profession is better able to ask this caliber of questions. A basic course in logic would help to uncover some of these claims. Continuous exposure to these controversies in science, medicine, and law also engenders a critical mindset.

    • Well, you know, I’m a country geologist, not a damned statistician! But, that being said, when I hear or read an incredible claim – e.g., power pose – I rarely need any statistical knowledge to call it out as false. If you read the study design for power pose you have to wonder how such an incredibly subjective method could generate any reliable data in the first place, even before you dive into the interactions and the small sample size.

      I guess the moral of the story is that every study has a weak spot and it’s usually not hard to find. It’s usually right in the method, so the first thing you should ask about is not the result but the method. If it sounds bad (like a play job interview where play interviewees are getting play-judged by play interviewers), then don’t be afraid to ask “isn’t that highly subjective?” Or “isn’t survey data usually unreliable?” Or some other such question.

  3. I wish authors would do this directly in their papers. It’s a disservice to science to only report things that work. Authors are shy for the obvious reason—they think it’ll make it less likely that their paper will be accepted. Given that the reward structure in academia is deeply tied to paper publication, you can see the conflict of interest. This intrinsic bias is one reason I’ve completely lost faith in academic publications. That, and many failed replications in computer science (machine learning and before) where authors put some secret sauce into their code that’s not in the paper, so the paper isn’t detailed enough.

    I think part of the problem is that journals feel they need to be “selective” so that faculty can be ranked by where their papers are published. This is out of control in computer science conferences, where acceptance rates are very low and hence acceptance/rejection is a very noisy process, given the limited number of accepted papers and the fragmentation of the field (see the NIPS experiment for some details of just how noisy the process is).

    People often ask me to be less negative in the summary sections of my own papers, and I’m ashamed to say that I’ve complied in the past because I just couldn’t be bothered. Luckily, I’m not on the tenure track and seem to be evaluated based on fundraising and building software. At least I hope that’s the case or my upcoming review isn’t going to go well.

  4. “journalists… should be able to see through”: I disagree. A reporter should simply interview one or two other people in the same field who had nothing to do with the study and ask them if the results are plausible, if there are weak links or limitations, etc.

    It is not the reporter’s job to dissect a study, and I would consider it presumptuous if a reporter claimed to know better than a PhD expert (maybe I’m thin-skinned!). An exception is fraud, plagiarism, publishing the same thing multiple times, friends publishing friends… then yes a reporter can do a great job bringing this into the open.

    • Jack:

      Just to be clear, it was Sainani who said that, not me.

      But, regarding your other point: it’s fine for a reporter or public relations writer to contact other people in the same field. But I think that in many cases they can get even more out of contacting the original authors and asking not just “Can you elaborate on the importance of this result?” but also “How might this result be criticized?”, “What’s the shakiest thing you’re claiming?”, and “Who are the people who won’t be convinced by this paper?”

      Not that these questions are magic, but if a scientist isn’t interested in answering them, that’s a bad sign. Not a sign of fraud, plagiarism, etc., just a sign that this isn’t someone willing to explore their ideas fully in front of you, the reporter.

      • This is exactly right. It’s not that the journalist should pretend to know better than the researcher, but there are some simple questions that should have direct answers. If the researcher skirts those issues, it is, as Andrew says, a bad sign.

        Also, I think people in the same field are often reluctant to point out the mistakes of their peers. I’m sure many researchers would feel uncomfortable having to do a post-publication peer review for a journalist. It should be part of the job, but I just don’t think it’s something people like doing, and I think they very often pull their punches.

    • Jack, I think your argument is bunk. Should journalists treat politicians the same way? Oh, well goodness me, the politician is the expert in government! How dare a journalist challenge an expert!

      If someone is a “science journalist” then they should have more than a pedestrian knowledge of science, and in fact many science journalists have good science backgrounds – good enough to ask skeptical questions.

      And, frankly, if all a journalist is going to do is gather quotes from scientists with opposing opinions, then Google will run them all out of work soon enough. They need to distill the story, not just be parrots.

  5. “in my experience, university researchers are willing and eager to talk with reporters and public relations specialists,”

    This is the opposite of my experience in England, where everyone unfortunate enough to interact with university PR spent a lot of time and energy fruitlessly trying to rein them in.

  6. An outside perspective: we use statistics for inference and decision-making in a corporate environment. Usually the analysis is presented to people who are not familiar with statistical terminology but are responsible for the final decision. So the most important guideline we use when presenting results is this: “Is this all reasonable?” The decision-maker should be able to evaluate exactly this reasonableness on his or her own. In many cases, even complicated ones, we are able to come up with charts and text that convey both certainties and uncertainties in the right way.
    This “reasonableness check” happens to be the same question I ask myself when looking at science papers, especially when I’m not familiar with the field. Plotting all the data together with the fitted regressions helps a lot in evaluating reasonableness (a minimal sketch of this kind of plot appears after this comment). See Fig. 2 here for an example (completely unrelated to my own work): http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0185809

    “Ask these questions in a polite way, not in any attempt to shoot the study down—your job, after all, is to promote this sort of work—but rather in the spirit of fuller understanding of the study.”
    I’m assuming you’re basically suggesting the same here, to encourage everyone to do a reasonableness-check.

    Cheers, Daniel
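
Below is a minimal sketch of the kind of reasonableness check Daniel describes: plotting the raw data together with a fitted regression line so a reader can judge the claimed relationship by eye. The data, variable names, and simple linear fit here are illustrative assumptions, not taken from any study discussed above.

```python
# Minimal "reasonableness check": show all the raw data alongside a fitted
# regression so a non-statistician can see how much scatter the estimated
# relationship is riding on. Data below are simulated purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical predictor and noisy outcome (simulated, not from any real study)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=2.0, size=50)

# Simple least-squares line; np.polyfit returns [slope, intercept] for deg=1
slope, intercept = np.polyfit(x, y, deg=1)
x_grid = np.linspace(x.min(), x.max(), 100)
y_fit = intercept + slope * x_grid

# Plot every data point together with the fit
plt.scatter(x, y, alpha=0.7, label="raw data (simulated)")
plt.plot(x_grid, y_fit, color="black",
         label=f"least-squares fit: y = {intercept:.2f} + {slope:.2f}x")
plt.xlabel("predictor (arbitrary units)")
plt.ylabel("outcome (arbitrary units)")
plt.legend()
plt.show()
```

If the fitted line only looks convincing because of one or two extreme points, or the scatter dwarfs the slope, that is exactly the kind of thing a reader without statistical training can notice from a plot like this.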
