Yes.

Reflecting on the recent psychology replication study (see also here), journalist Megan McArdle writes an excellent column on why we fall for bogus research:

The problem is not individual research papers, or even the field of psychology. It’s the way that academic culture filters papers, and the way that the larger society gets their results. . . .

Journalists . . . easily fall into the habit (and I’m sure an enterprising reader can come up with at least one example on my part), of treating studies not as a potentially interesting result from a single and usually small group of subjects, but as a True Fact About the World. Many bad articles get written using the words “studies show,” in which some speculative finding is blown up into an incontrovertible certainty.

I’d just replace “Journalists” by “Journalists and researchers” in the above paragraph. And then there are the P.R. excesses coming from scientific journals and universities. Researchers are, unfortunately, active participants in the exaggeration process.

McArdle continues:

Psychology studies also suffer from a certain limitation of the study population. Journalists who find themselves tempted to write “studies show that people …” should try replacing that phrase with “studies show that small groups of affluent psychology majors …” and see if they still want to write the article.

Indeed. Instead of saying “men’s upper-body strength,” try saying “college students with fat arms,” and see how that sounds!

More from McArdle:

We reward people not for digging into something interesting and emerging with great questions and fresh uncertainty, but for coming away from their investigation with an outlier — something really extraordinary and unusual. When we do that, we’re selecting for stories that are too frequently, well, incredible. This is true of academics, who get rewarded with plum jobs not for building well-designed studies that offer messy and hard-to-interpret results, but for generating interesting findings.

Likewise, journalists are not rewarded for writing stories that say “Gee, everything’s complicated, it’s hard to tell what’s true, and I don’t really have a clear narrative with heroes and villains.” Readers like a neat package with a clear villain and a hero, or at least clear science that can tell them what to do. How do you get that story? That’s right, by picking out the outliers. Effectively, academia selects for outliers, and then we select for the outliers among the outliers, and then everyone’s surprised that so many “facts” about diet and human psychology turn out to be overstated, or just plain wrong. . . .

Because a big part of learning is the null results, the “maybe but maybe not,” and the “Yeah, I’m not sure either, but this doesn’t look quite right.”

Yup. None of this will be new to regular readers of this blog, but it’s good to see it explained so clearly from a journalist’s perspective.
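To put a number on the “selecting for outliers” mechanism, here is a small toy simulation (my own sketch, not anything from McArdle’s column; the true effect size, the study size, and the significance cutoff are all made-up values for illustration). Many small studies estimate the same modest effect; only the estimates that clear a conventional significance threshold get “published”; and the average published estimate ends up far larger than the truth.

    # Toy simulation of selection for outliers / the winner's curse.
    # All numbers here are illustrative assumptions, not from the post.
    import random
    import statistics

    random.seed(0)

    TRUE_EFFECT = 0.1      # modest true effect, in standard-deviation units
    N_PER_STUDY = 50       # small study, so each estimate is noisy
    N_STUDIES = 10_000

    def run_study():
        """Return the estimated effect from one small study."""
        sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
        return statistics.mean(sample)

    estimates = [run_study() for _ in range(N_STUDIES)]

    # The standard error of the mean is about 1/sqrt(N); call an estimate
    # "significant" (publishable) if it sits about two standard errors above zero.
    se = 1.0 / N_PER_STUDY ** 0.5
    published = [e for e in estimates if e > 1.96 * se]

    print(f"true effect:                     {TRUE_EFFECT:.2f}")
    print(f"mean of all estimates:           {statistics.mean(estimates):.2f}")
    print(f"mean of 'significant' estimates: {statistics.mean(published):.2f}")
    print(f"fraction 'published':            {len(published) / len(estimates):.2%}")

Run it and the “significant” estimates average several times the true effect, even though every individual study was honestly run; the exaggeration comes entirely from the filter.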

6 thoughts on “Yes.”

  1. I disagree with some of her arguments. A lot of progress is outlier-based and driven by the legitimate urge to discover something fantastic. Take industrial chemistry, for example: someone discovers that by doping a conventional catalyst with traces of a promoter metal you get super-duper catalytic activity. Now this is a fantastically useful result and an outlier in most senses. People did not expect it; it is counterintuitive.

    Blaming outliers is focusing on the wrong problem. Outlier results are welcome so long as there is a robust system in place to rapidly and independently verify outliers and penalize artifact & false-positive generation.

    A fantastic catalyst would immediately be assigned by the company bosses to several independent chemists to verify. If none of the others could replicate your outlier results, *you* would definitely feel the repercussions of the false alarm. There are disincentives in place to penalize exuberant overoptimism, sloppy experimentation, or even sheer naivete.

    And (unlike in psychology) the excuse that “all the other conditions were not identically controlled” doesn’t really cut it. If you couldn’t tell other chemists what *exactly* you did, it is *your* fault, not theirs. Ensuring external validity becomes the researcher’s responsibility and is integral to his reputation.

    The bigger problem in this epidemic of crappy results is that no one really cares. The funding agencies, the science journalists, the university departments: no one actually has any interest in verifying whether a fantastic result holds up. No skin in the game.

    Vilifying outliers is throwing the baby out with the bathwater. Without outlier discovery there is little progress. The key problems lie elsewhere.

    • > The bigger problem in this epidemic of crappy results is that no one really cares.

      This is increasingly, and depressingly, my experience. The senior folk pay lip service but ultimately just do not give enough of a crap to do anything. They’re likely too busy trying to get a good review for a grant that they know hasn’t followed through, and can’t follow through, on what it claims but is ‘too big to fail’. Behind closed doors their advice to the next generation is ‘no one likes it but it’s just what you gotta do’. If that’s how it is, fine, but it’s this politics and pragmatism that has led to this.

  2. “The bigger problem in this epidemic of crappy results is that no one really cares. The funding agencies, the science journalists, the university departments: no one actually has any interest in verifying whether a fantastic result holds up. No skin in the game.”

    Arina K. Bones worded it perfectly (http://shell.cas.usf.edu/~pspector/ORM/Bones-12.pdf):

    “However, orchestrating such a large-scale hoax would require the coordination and involvement of thousands of researchers, reviewers, and editors. Researchers would have to selectively report those studies that “worked” or reengineer those that did not for other purposes. Reviewers and editors would have to selectively accept positive, confirmatory results and reject any norm-violating negative result reports. The possibility that an entire field could be perpetrating such a scam is so counterintuitive that only a psychologist could predict it if it were actually true.”

    I hope Bones will publish an article again soon! Can’t wait.

  3. “The bigger problem in this epidemic of crappy results is that no one really cares. The funding agencies, the science journalists, the university departments: no one actually has any interest in verifying a fantastic result.”

    It is even worse than no one caring. They do care: they care about avoiding the checking, because it would make too many people look bad. Imagine if only a small portion of results is reproducible. The researchers in that area have spent decades not checking (because the majority used p-values without understanding what they meant), and those unchecked results have been used to determine what medical treatments people receive, government policy, etc.
