
How effective (or counterproductive) is universal child care? Part 1

This is the first of two posts.

We’ve talked before about various empirically-based claims of the effectiveness of early childhood intervention. In a much-publicized 2013 paper based on a study of 130 four-year-old children in Jamaica, Paul Gertler et al. claimed that a particular program caused a 42% increase in the participants’ earnings as young adults. (It was a longitudinal study, and these particular kids were followed up for 20 years.) At the time I expressed skepticism based on the usual reasons of the statistical significance filter, researcher degrees of freedom, and selection problems with the data.

A year later, Gertler et al. released an updated version of their paper, this time with the estimate downgraded to 25%. I never quite figured out how this happened, but I have to admit to being skeptical of the 25% number too.

One problem is that a lot of this research seems to be presented in propaganda form. For example:

From the published article: “A substantial literature shows that U.S. early childhood interventions have important long-term economic benefits.”

From the press release: “Results from the Jamaica study show substantially greater effects on earnings than similar programs in wealthier countries. Gertler said this suggests that early childhood interventions can create a substantial impact on a child’s future economic success in poor countries.”

These two quotes, taken together, imply that (a) these interventions have large and well-documented effects in the U.S., but (b) these effects are not as large as the 25% reported for the Jamaica study.

But how does that work? How large, exactly, were the “important long-term economic benefits”? An increase of 10% in earnings, perhaps? 15%? If so, do they really have evidence that the Jamaica program had effects that were not only clearly greater than zero, but clearly greater than 10% or 15%?

I doubt it.

Rather, I suspect they’re trying to have it both ways, to simultaneously claim that their results are consistent with the literature and that they’re new and exciting.

I’m perfectly willing to believe that early childhood intervention can have large and beneficial effects, and that these effects could be even larger in Jamaica than in the United States. What I’m not convinced of is that this particular study offers the evidence that is claimed. I’m worried that the researchers are chasing noise. That is, it’s not clear to me how much they learned from this new experiment, beyond what they already knew (or thought they knew) from the literature.

This was the first of a series of two posts. Tune in tomorrow for part 2.


  1. Kyle C says:

    I know you wrote part II months ago, but I’ll add that Gertler et al. didn’t study “universal child care”; they studied a program that, if it worked, would give some kids an employment advantage over others who weren’t in childcare. Universalize the program and (again, assuming for the sake of argument that it works as advertised) by definition that relative advantage disappears, and you are in Lake Wobegon, where all children grow up to get better jobs on average than all the other children.

  2. Chris Auld says:

    I don’t quite understand what the series of objections to this paper is based on. There is no pre-registered design, but the authors go to great lengths to do what they can to address the small sample size, endogenous attrition, the impact of migration, and multiple comparisons (although, oddly to me, you suggest in the Symposium article that correcting for multiple comparisons is not necessary; I’d be interested in an expansion of that argument). They also did some of the analyses you suggest they ought to have done, such as evaluating effects only on the subset of participants who did not emigrate (e.g., panel III of Table 3 in the working paper version).

    Much of the issue seems to be that the estimates of the ATE are higher than your priors. 42% (for all workers) or 25% (for full-time jobs) do seem at first glance like implausibly large effects, but they’re not, to me at least, surprisingly large once we view them in context. The sample is not merely from a developing country (where we might expect such interventions to have larger effects); critically, the sample was of stunted children: the average study participant had a height three standard deviations below the average child’s height, conditional on age. The literature on potential “catch-up” of children subject to early adversity is small but consistent with the idea that these children may respond particularly well to interventions such as the one studied here. The result is more or less that the intervention increased the earnings of stunted children such that those children eventually had earnings comparable to those of non-stunted children in this developing country, which doesn’t seem incredible. I’ve read extensively in the relevant literatures, and my prior on the effect of this particular intervention would be both very diffuse and place substantial weight on very high outcomes, like 25% or 42%. I am not sure why yours places so little weight on large outcomes.

    Can they reject a 10% effect? An annoyance with the paper is that they only report p-values, so there is rounding error, but if we consider the highlighted estimate from the published version: the effect on log earnings is 0.3 with p ≈ 0.01, which implies we should reject nulls of returns lower than about +7%.

    • Andrew says:

      Part of the issue is effects that are higher than my priors. Not just my priors, but lots of prior information. Recall this quote from Charles Murray: “To me, the experience of early childhood intervention programs follows the familiar, discouraging pattern …small-scale experimental efforts staffed by highly motivated people show effects. When they are subject to well-designed large-scale replications, those promising signs attenuate and often evaporate altogether.” Charles Murray isn’t always right, but the point is that it’s not so unreasonable to be skeptical about large claims from small studies. You refer to the literature, but we know that just about any published literature will overestimate effect sizes, sometimes by extreme amounts.

      And that brings me to my second concern, which is that by seeking and publishing statistically significant estimates, researchers are biasing their effect size estimates; see here. When people present results from biased estimates and don’t try to correct these biases, I’m concerned.
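      As a minimal sketch of that bias (with purely illustrative numbers, not values from the study): suppose the true effect on log earnings is a modest 0.05 with a standard error of 0.12, and only statistically significant estimates get reported. Conditioning on significance inflates the average reported estimate well above the truth:

```python
import random
from statistics import NormalDist, mean

random.seed(1)
true_effect = 0.05  # assumed modest true effect on log earnings (illustrative)
se = 0.12           # assumed standard error, roughly the scale implied by the study

z_crit = NormalDist().inv_cdf(0.975)  # two-sided 5% threshold

# Simulate many replications of the same noisy study
estimates = [random.gauss(true_effect, se) for _ in range(100_000)]

# Keep only the "publishable" estimates, i.e. those with |z| > 1.96
significant = [b for b in estimates if abs(b / se) > z_crit]

print(mean(estimates))    # close to the true 0.05
print(mean(significant))  # several times larger: the significance filter inflates estimates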

    • Chris,

      Following up on your last point: if I am doing the same calculation you are, then I get a 95% uncertainty interval (to use Andrew’s preferred term) of [0.07, 0.53] for the effect on log earnings. That’s pretty wide. I don’t think it’s exactly kosher to do it this way (p. 11 of the working paper version), but I’d guess that if you did it properly you’d get something similar.

      So … effect imprecisely estimated, and the interval includes modest as well as very large effects. My complaint is similar to yours (Chris’s): had they reported intervals, they would have been doing readers a service.
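      For what it’s worth, that back-of-envelope interval can be reproduced in a few lines, assuming the reported p ≈ 0.01 comes from a two-sided z-test on the log-earnings coefficient of 0.3 (that assumption is mine, not the paper’s):

```python
import math
from statistics import NormalDist

beta_hat = 0.30  # reported effect on log earnings
p_value = 0.01   # reported two-sided p-value (rounded, per the paper)

nd = NormalDist()
# Back out the implied standard error from |z| = Phi^{-1}(1 - p/2)
se = beta_hat / nd.inv_cdf(1 - p_value / 2)

# 95% interval on the log scale
z95 = nd.inv_cdf(0.975)
lo, hi = beta_hat - z95 * se, beta_hat + z95 * se
print(f"log scale: [{lo:.2f}, {hi:.2f}]")  # about [0.07, 0.53]

# Endpoints as percent changes in earnings
print(f"percent:   [{100 * (math.exp(lo) - 1):.0f}%, {100 * (math.exp(hi) - 1):.0f}%]")
```

      The lower endpoint corresponds to roughly a 7% increase in earnings, matching Chris’s back-of-envelope rejection threshold above.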

  3. elin says:

    For all the issues in this paper, I don’t think the authors make any claim about universal childcare. Their data isn’t about childcare (it is about stimulation), and it isn’t about a universal program; it is about a population that is extremely high need. The policy implication of the paper is that resources should be targeted toward very-high-need populations.
