Story time

This one belongs in the statistical lexicon. Kaiser Fung nails it:

In reading [news] articles, we must look out for the moment(s) when the reporters announce story time. Much of the article is great propaganda for the statistics lobby, describing an attempt to use observational data to address a practical question, sort of a Freakonomics-style application.

We have no problems when they say things like: “There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.”

Or this: “On average, Smith’s students slide under his instruction, losing 14 percentile points in math during the school year relative to their peers districtwide, The Times found. Overall, he ranked among the least effective of the district’s elementary school teachers.”

Midway through the article (right before the section called “Study in contrasts”), we arrive at these two paragraphs (Kaiser’s italics):

On visits to the classrooms of more than 50 elementary school teachers in Los Angeles, Times reporters found that the most effective instructors differed widely in style and personality. Perhaps not surprisingly, they shared a tendency to be strict, maintain high standards and encourage critical thinking.

But the surest sign of a teacher’s effectiveness was the engagement of his or her students — something that often was obvious from the expressions on their faces.

At the very moment they tell readers that engaging students makes teachers more effective, they announce “Story time!” With barely a fuss, they move from an evidence-based analysis of test scores to a speculation on cause–effect. Their story is no more credible than anybody else’s story, unless they also provide data to support such a causal link.

I have only two things to add:

1. As Jennifer frequently reminds me, we (researchers and also the general public) generally do care about causal inference. So I have a lot of sympathy for researchers and reporters who go beyond the descriptive content of their data and start speculating. The problem, as Kaiser notes, is when the line isn't drawn clearly, in the short term leading the reader astray and, in the longer term, perhaps discrediting social-scientific research more generally.

2. “Story time” doesn’t just happen in the newspapers. We also see it in journal articles all the time. It’s that all-too-quick moment when the authors pivot from the causal estimates they’ve proved to their speculations, which, as Kaiser says, are “no more credible than anybody else’s story.” Maybe less credible, in fact, because researchers can fool themselves into thinking they’ve proved something when they haven’t.

3 thoughts on “Story time”

  1. I know this is old territory but in case anyone missed this finding (reported by EPI):

    "A study designed to test this question used VAM methods to assign effects to teachers after controlling for other factors, but applied the model backwards to see if credible results were obtained. Surprisingly, it found that students’ fifth grade teachers were good predictors of their fourth grade test scores. Inasmuch as a student’s later fifth grade teacher cannot possibly have influenced that student’s fourth grade performance, this curious result can only mean that VAM results are based on factors other than teachers’ actual effectiveness."

    http://epi.3cdn.net/b9667271ee6c154195_t9m6iij8k….

  2. Did I miss something? It seems to me when the Times said "sign" of a teacher's effectiveness, they were speaking in entirely appropriate statistical language — observation of one variable (engagement) conveys probabilistic information about another (effectiveness). What is the causal language that is raising ire?

  3. But "engagement", as observed by the reporter, was purely a variable introduced by the reporter, without definition or objective measurement.

    It's entirely the reporter's opinion that there were differences in engagement, and he is a biased observer – he does a "better" job if he makes the story more interesting by explaining what is going on.

    And anyway, my interpretation of "sign" in the reporting is that it's meant as one of a number of attributes that make a good teacher, rather than the statistical sense that "engagement" is associated with good teaching but might not be the cause, with some other variable it's associated with being what actually makes a good teacher.
