This one belongs in the statistical lexicon. Kaiser Fung nails it:
In reading [news] articles, we must look out for the moment(s) when the reporters announce story time. Much of the article is great propaganda for the statistics lobby, describing an attempt to use observational data to address a practical question, sort of a Freakonomics-style application.
We have no problems when they say things like: “There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.”
Or this: “On average, Smith’s students slide under his instruction, losing 14 percentile points in math during the school year relative to their peers districtwide, The Times found. Overall, he ranked among the least effective of the district’s elementary school teachers.”
Midway through the article (right before the section called “Study in contrasts”), we arrive at these two paragraphs (Kaiser’s italics):
On visits to the classrooms of more than 50 elementary school teachers in Los Angeles, Times reporters found that the most effective instructors differed widely in style and personality. Perhaps not surprisingly, they shared a tendency to be strict, maintain high standards and encourage critical thinking.
But the surest sign of a teacher’s effectiveness was the engagement of his or her students — something that often was obvious from the expressions on their faces.
At the very moment they tell readers that engaging students makes teachers more effective, they announce “Story time!” With barely a fuss, they move from an evidence-based analysis of test scores to a speculation on cause–effect. Their story is no more credible than anybody else’s story, unless they also provide data to support such a causal link.
I have only two things to add:
1. As Jennifer frequently reminds me, we (researchers and the general public alike) generally do care about causal inference. So I have a lot of sympathy for researchers and reporters who go beyond the descriptive content of their data and start speculating. The problem, as Kaiser notes, arises when the line isn't drawn clearly: in the short term this leads the reader astray, and in the longer term it may discredit social-scientific research more generally.
2. “Story time” doesn’t just happen in the newspapers. We see it in journal articles all the time, too. It’s that all-too-quick moment when the authors pivot from the estimates they have actually established to speculations that, as Kaiser says, are “no more credible than anybody else’s story.” Maybe less credible, in fact, because researchers can fool themselves into thinking they’ve proved something when they haven’t.