More on the Heckman curve

David Rea writes:

A slightly more refined version of our paper on the Heckman Curve [discussed on the blog last year] has been published in the Journal of Economic Surveys.

The journal will also publish a response by James Heckman, as well as a reply from us.

As you predicted, James Heckman’s critique of our work is fairly strong!

I have attached the drafts of both papers in the exchange. Heckman says that we, along with other people, have misinterpreted this aspect of his work. From our point of view it is useful to clarify that the Heckman Curve is not what many people think it is.

One of the other interesting aspects of Heckman’s critique of our paper is some strong statements about meta-analysis.

In the draft version I have attached, Heckman says ‘I also want to use this opportunity to warn readers about the dangers of the pseudoscience of meta-analysis … meta-analysis replaces substantive rigorous comparisons of studies with arbitrary statistical procedures, ignoring potentially important aspects of how the reported effects are generated … There are numerous prior critiques of meta-analysis, see e.g. Anderson and Kichkha (2017); Feinstein (1995); Eysenck (1978). It is a lazy person’s way to evaluate and compare programs. Uncritical use of statistical meta-analysis will cause more harm than good in evaluating and guiding policy’.

These are interesting statements from one of the leaders of the economics profession, and perhaps worthy of a wider discussion? I’d be interested in your thoughts on the usefulness of, and best practice for, comparing and summarising studies.

I agree with Heckman that we should not in general trust meta-analysis of published results to give us a good estimate of a treatment effect. Published results are biased in so many ways! The file drawer is the least of the problems; I’m much more worried about selection bias within studies, which leads to implausible results such as the claim of 42% higher earnings from early childhood interventions (for discussion, see section 2.1 of this paper).
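
To see how big the within-study selection bias can get, here’s a quick simulation, with made-up numbers rather than the actual early-childhood data: if an estimate only gets reported when it clears the statistical-significance bar, the reported estimates run at a multiple of the true effect.

    import numpy as np

    # Simulation of the "statistical significance filter": when a noisy
    # estimate is only reported if it is statistically significant, the
    # reported estimates systematically overstate the true effect.
    # All numbers here are invented for illustration.
    rng = np.random.default_rng(0)

    true_effect = 0.10  # hypothetical true effect (say, a 10% earnings gain)
    se = 0.15           # hypothetical standard error of each study's estimate

    estimates = rng.normal(true_effect, se, size=100_000)
    published = estimates[estimates > 1.96 * se]  # keep only "significant" results

    print(f"true effect:              {true_effect:.2f}")
    print(f"mean of all estimates:    {estimates.mean():.2f}")
    print(f"mean of published subset: {published.mean():.2f}")
    # With these numbers the published estimates average around 0.37,
    # more than three times the true effect.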

The thing that bugs me is that Heckman is slamming meta-analysis but then accepting obviously biased estimates from individual studies. He’s routinely using statistical methods that overstate effect sizes.

Also amusing that one of the experts Heckman cites is Eysenck.

Anyway, good to see Heckman taking a stance against pseudoscience. That’s a start. We’ll take allies wherever we can find them.

17 thoughts on “More on the Heckman curve”

  1. I know we’re supposed to minimize ad hominem arguments, but two of the three being Feinstein and Eysenck? Let’s all go smoke a few packs while we ponder their wisdom about evaluating data.

  2. I find it ironic that Eysenck was a critic of meta-analyses, and now the implausible results of one of his ridiculous papers are messing up modern meta-analyses. Revenge from beyond the grave?

  3. For purposes of evaluating a single article, it might be useful to catalogue the biases within it. I did that occasionally with international relations articles, as I was also taking critical thinking and logic courses. I was on the fence, though, about how to use that cataloguing of biases to forge better theorization and therefore better policies. Some assumptions are so entrenched axiomatically, and some are patently a-contextual and false. It is frustrating to engage others because dislodging false assumptions may require therapeutic skills. Seriously. Experts thrive on argumentation and binary thinking.

    • I think that David Kennedy of Harvard Law School sets out a useful framework for evaluating the sociology of expertise within different fields in his book A World of Struggle: How Power, Law, and Expertise Shape the Global Political Economy. His other book, The Dark Side of Virtue, is a masterpiece. Both are worth reading.

      One of the disheartening features of Twitter is that some experts seem to discount the observations of the public, even excluding them, because they see themselves as the primary gatekeepers of the epistemic environment. That’s fine: after going through the grind of a PhD, one should get some return, and respect and prestige are fine ambitions. But, as we see, arrogance and condescension are unbecoming and frankly silly. And it’s worse when it is patently obvious that an expert is constantly framing their tweets to elicit adulation.

  4. Yesterday, I followed gec’s link and read Shalizi’s hilarious “A Simple Model of the Evolution of Simple Models of Evolution” (https://arxiv.org/abs/adap-org/9910002).

    Then today, I am reading Heckman’s response, and I suddenly thought “wait, am I still reading parody?”

    “[Meta-analysis] is widely and uncritically used to summarize very diverse studies. It uses statistics to avoid the hard work of carefully comparing studies or, better yet, re-estimating the studies in a common framework.”

    OK, I’ll bite, how do you “re-estimate the studies in a common framework” without the use of statistics?

    This one is even better:

    “Rea and Burton use statistical procedures in an attempt to make up for their lack of understanding of the details”

    I suppose you can use statistical procedures for the main thrust of a paper, but I feel that they are really useful when you are trying to understand the details. I guess the corollary to “if your effect is strong enough, you don’t need statistics,” is “if you understand the details well enough, you don’t need statistics.”

  5. (Note: the link to the authors’ reply, right after Heckman’s reply, does not point to the actual reply but to an email exchange between the authors and Andrew Gelman.)

  6. Andrew wrote, “Also amusing that one of the experts Heckman cites is Eysenck.”

    According to Wikipedia, Eysenck’s doctoral advisor was Cyril Burt, who is famous for the fictitious ladies Miss Margaret Howard and Miss Jane Conway. And if those names are opaque, some catching up is needed.

  7. > I agree with Heckman that we should not in general trust meta-analysis of published results to give us a good estimate of a treatment effect. Published results are biased in so many ways! The file drawer is the least of the problems; I’m much more worried about selection bias within studies,…

    Seems to me that meta-analyses can be extremely valuable (that there’s a kind of multiplier effect) when they apply statistical analyses to categorize the various individual studies by methodological characteristics, sample size, whether they are cross-sectional or longitudinal, etc. (a toy sketch of what I mean is below).

    Saying that published studies are “biased” seems to me to underplay an important point. “Bias” isn’t necessarily a subjective or black/white designation – sometimes it’s an artifact or a function of perspective. “Bias” is actually information – information that can be useful within a meta-analysis or meta-survey.
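
    For instance, here is a pooling-by-subgroup sketch (all numbers invented) of what categorizing by study design might look like:

        import numpy as np

        # Toy moderator analysis: pool estimates separately by study design
        # using inverse-variance weights, rather than one grand average.
        # Every number below is invented for illustration.
        studies = [
            # (estimate, standard error, design)
            (0.45, 0.20, "cross-sectional"),
            (0.38, 0.15, "cross-sectional"),
            (0.12, 0.10, "longitudinal"),
            (0.08, 0.12, "longitudinal"),
        ]

        for design in ("cross-sectional", "longitudinal"):
            est = np.array([e for e, _, d in studies if d == design])
            se = np.array([s for _, s, d in studies if d == design])
            w = 1 / se**2                        # inverse-variance weights
            pooled = (w * est).sum() / w.sum()
            pooled_se = (1 / w.sum()) ** 0.5
            print(f"{design}: pooled estimate {pooled:.2f} (SE {pooled_se:.2f})")

    If the subgroups disagree this sharply, the split itself is the finding: the “bias” has been turned into information about which design features drive the headline number.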

  8. As for meta-analysis, here is a nice quote from Richard Berk and David Freedman in “Statistical Assumptions as Empirical Commitments”:

    “Finally, with respect to meta-analysis, our recommendation is simple: just say no. The suggested alternative is equally simple: read the papers, think about them, and summarize them. Try our alternative. Trust us: you will like it. And if you can’t sort the papers into meaningful categories, neither can the meta-analysts.”

    • Robert:

      Yes, the problem is not with meta-analysis or lack thereof; the problem is with researchers such as Heckman taking grossly biased estimates and treating them as unbiased. That will cause trouble whether or not these numbers are put into a formal meta-analysis.
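
      To make that concrete, here’s a little simulation (numbers invented): feed estimates that all share an upward bias into standard inverse-variance pooling, and the meta-analysis returns a tight interval around the wrong answer.

          import numpy as np

          # Sketch with invented numbers: inverse-variance pooling of
          # estimates that each carry the same upward bias. Pooling shrinks
          # the standard error, but the bias passes straight through to the
          # pooled estimate.
          rng = np.random.default_rng(1)

          true_effect, bias, se = 0.10, 0.25, 0.15
          n_studies = 20
          estimates = rng.normal(true_effect + bias, se, size=n_studies)

          w = np.full(n_studies, 1 / se**2)   # inverse-variance weights
          pooled = (w * estimates).sum() / w.sum()
          pooled_se = (1 / w.sum()) ** 0.5

          print(f"true effect:     {true_effect:.2f}")
          print(f"pooled estimate: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
          # The pooled interval is tight (about +/- 0.07) but centered near
          # 0.35, nowhere near the true 0.10: a precise estimate of a biased
          # quantity. The formal meta-analysis just makes the bias look exact.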

  9. Apropos “meta-analysis”:

    My understanding is that the conclusion that cigarette smoking causes cancer came from an early but very careful meta-analysis, before the term existed: a synthesis of many papers, each of which could be dismissed individually but which collectively could not be dismissed.

    Can someone else confirm or refute this memory?

    Thanks.

    • Probably William Cochran on the Surgeon General’s report, as he and Yates did most of what’s done in meta-analysis back in the 1930s and published warnings that the methods had to be very different from those for randomized studies.

      When Sander Greenland and I wrote the chapter on meta-analysis for the 2007 edition of Modern Epidemiology, we again stressed how different the challenge was (primarily systematic rather than random error). We seem to have been alone in that for years.

  10. Clearly, Sir Heckman has never worked on a meta-analysis. There is nothing lazy about such studies. A lot, and I mean a lot, of work goes into them. But of course… some researchers, say of certain social categories, have strong blind spots. #weneeddiversityinacademia
