
Time-release pedagogy??


Mark Palko points to this report and writes:

Putting aside my concerns with the “additional years of learning” metric (and I have a lot of them), I have the feeling that there’s something strange here or I’m missing something obvious. That jump from the 3-year impact to the 4-year impact seems excessive.

The press release links to a full report that might answer the questions but I can’t get it to come up.


  1. Rahul says:

    Agree with Palko. Smells fishy. Very fishy. My bet is on a data entry / coding error.

  2. Adam Schwartz says:

I think this is what you’re looking for. The figure above can be found on p. 7.

  3. Radford Neal says:

    This is one of those misleading graphs where the bottom doesn’t correspond to zero. It seems to be the “additional” learning produced. This might have been zero or negative, so thinking in terms of it jumping by a factor of three is the wrong approach. It might have jumped by a factor of infinity if the additional learning in year 3 had been zero!

  4. zbicyclist says:

    Ignore the “x year effect” labels and just concentrate on the cohort labeling.

    Looks like the first year of the program (2009), perhaps filled with the enthusiasm of starting something new, was highly successful (or, perhaps, had measurement issues). But subsequent cohorts (2010, 2011, 2012) have been less successful.

  5. Rahul says:

    So far as I can tell, the paper maps raw test scores to something they call “impacts” and further maps those to “additional years of learning” (using some outside-of-this-study benchmarks)

    I think the devil resides in these mappings / transformations. My guess is noise introduced there overwhelms whatever feeble signal there is to start with.

    • jrc says:

Yeah – it looks like they are multiplying the effect size estimate (which comes in SDs of… ¿control group?) by about 2.5 or so. The effect on the 3rd cohort is 0.17sd; the effect on the fourth is about 0.64. They are getting the transformations (or possibly the method for their own transformation) from this paper, I think:

      Over half a standard deviation (which is a unit I see used in this literature a lot more than effective schooling, or whatever that is) is a really large impact, on the order of giving near-sighted Chinese kids eyeglasses (I could probably reasonably assume you could give any poor school children eyeglasses and up their grades, but I’ve only seen the research in China).

But I don’t think it is noise in the transformation. The effect is there in the z-scored exam scores, and I think they just transform the years-gained estimate from that. It could be that sustained exposure to the program has cumulative impacts; it could be that the teachers liked the program a whole lot (why wouldn’t they?) and, when they realized they weren’t upping test scores, started teaching to the test (or just cheating); it could be “luck”; it could be a funny symptom of the matching procedure they use. But I don’t think it is the transformation.

      • Rahul says:

        Did they plot / tabulate the raw, un-mutilated test-scores of these cohorts anywhere in that paper?

        After all their wizardry I’m dying to see a simple scatter plot of median scores or something.
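The conversion jrc describes can be sketched in a few lines. This is a hypothetical illustration, not the report’s actual method: the `YEARS_PER_SD` factor of 2.5 is an assumed benchmark (jrc’s “about 2.5 or so”), and the cohort effect sizes are the figures quoted in the comment above.

```python
# Illustrative sketch of mapping effect sizes (in control-group SDs)
# to "additional years of learning" via an outside benchmark.
# YEARS_PER_SD = 2.5 is an assumed conversion factor, not the
# report's actual figure.

YEARS_PER_SD = 2.5


def sd_to_years(effect_sd: float) -> float:
    """Convert an effect size in SD units to 'years of learning'."""
    return effect_sd * YEARS_PER_SD


# jrc's quoted numbers: cohort-3 effect 0.17 SD, cohort-4 effect 0.64 SD
cohort3_years = sd_to_years(0.17)
cohort4_years = sd_to_years(0.64)

# The map is linear, so the 3.8x jump between cohorts is already
# present in the raw SD estimates, not introduced by the transform.
print(round(cohort3_years, 3), round(cohort4_years, 3))
```

Because the map is linear, it rescales but cannot amplify the relative jump between cohorts, which is the substance of jrc’s point that the transformation is not where the noise enters.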

  6. Eric Loken says:

    From page 23 of the report (30 of the pdf), the first cohort had the same teacher for 7th and 8th grade math (the last two years of the program). They mention that individual teachers can have large specific influences.

    “The importance of individual teachers does not mean, however, that the estimated impacts should be
    discounted as good luck, because hiring and developing effective teachers is the most important
    component of the TEP model.”

  7. Keith O'Rourke says:

    > But in exchange [for higher salary], teachers, who are not unionized, would accept additional responsibilities, and the school would keep a close eye on their work.

    > (The study was funded by the Gates Foundation.)

    If for some strange reason you wanted to make that foundation happy, what sort of result from this study might you guess would do that?

Given that the funding source was acknowledged, any thorough review should take it into account (and expect its influence).

  8. Dan Wright says:

I was confused by something. In Appendix B it states “all students who participated in the lottery were eventually admitted to TEP”. I was hoping that, because there was a lottery, other analyses could have been conducted. Was the lottery not well advertised, or was there just not much interest in it? If only a small percentage of the people who knew about it were interested, that seems more problematic than if it just wasn’t advertised well.

    • Elin says:

      I happened to have a discussion in my class today about public/charter comparisons. I have a lot of students who have children and almost all of them are in charters or have applied to be in charters. As one of them put it “the lottery is a joke as long as there is a sibling admissions policy.” And another one said, when I asked how she got her 16 year old into a charter, “I just kept trying and trying every year.” The report says that eventually everyone in the lottery was admitted and I think that is what happens.

  9. Elin says:

Well, I think one of the issues is that they started with 4th graders and then are looking at 8th graders. Everyone in math ed knows that whether or not you get algebra in 8th grade is crucial in multiple ways. If they had all of their students in algebra and the comparison group was almost all students in schools that do not have algebra in 8th grade, that would explain a big part of the difference. Also I bet they were doing a ton of test prep for the sci high tests, and you really have to get good at right triangles and some very specific algebra to perform well on those, so they are studied like crazy in the test prep.
