3 examples of post-publication review (ecology, the underground economy, and “lockdowns”)

1. Manuel Lerdau writes:

This might interest you as an example of a public post-publication review that will, I hope, make a difference. As a bonus, one of the key issues with the original paper is a statistical problem. We have this issue frequently in Ecology (the field, not the journal of that name in particular): an over-emphasis on mean values and a corresponding underappreciation of the importance of variability.
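To illustrate the mean-versus-variability point, here’s a minimal simulation sketch (the numbers and the response curve are made up for illustration, not taken from the paper Lerdau is discussing): when the response to an environmental driver is nonlinear, two sites with the same mean can have very different average outcomes, so summarizing by means alone throws away the information that matters.

```python
# A minimal sketch of why variability matters, not just the mean.
# Numbers and the response curve are hypothetical, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def growth_response(temp):
    # hypothetical concave response of growth to temperature, peaking at 25
    return 10 - 0.05 * (temp - 25) ** 2

# Two sites with the same mean temperature but different variability
temps_stable = rng.normal(loc=20, scale=1, size=100_000)
temps_variable = rng.normal(loc=20, scale=6, size=100_000)

print("mean temperature:", round(temps_stable.mean(), 2), round(temps_variable.mean(), 2))
print("mean growth, stable site:  ", round(growth_response(temps_stable).mean(), 2))
print("mean growth, variable site:", round(growth_response(temps_variable).mean(), 2))
# Same mean driver, lower average growth at the variable site (Jensen's inequality):
# an analysis that compares only means would miss this entirely.
```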

2. Asher Meir writes:

This should warm your heart:

Uncovering errors in measuring the underground economy: Manuel A. Gómez and Adrián Ríos-Blanco uncover problems in the Journal of Macroeconomics article “Measuring the Size of the Shadow Economy Using a Dynamic General Equilibrium Model with Trends” by Mario Solis-Garcia and Yingtong Xie. The original authors thank them for pointing out the errors and offer a revised procedure said to meet the goals of the original paper.

I think I already wrote to you about a well-known case in which an RA (David Reitman) discovered an error in a simulation that formed the basis for an article by my Rebbe, Marty Feldstein. Marty immediately acknowledged the error and explained that, while it did change the conclusion of that particular simulation, the simulation did not carry enough weight to change his view on the big question (namely, whether Social Security reduces private savings). Marty was a paragon of good judgment.

3. Nicolas Banholzer writes:

You may be interested in sharing your thoughts on a recent meta-analysis by a group of economists, which concluded that “lockdowns [during the COVID-19 pandemic] have had little to no public health effects” and that “lockdowns should be rejected out of hand as a pandemic policy instrument”.

The strong conclusions from this meta-analysis were quickly taken up by public media, in particular conservative outlets in the US. However, several epidemiologists expressed strong criticism of the analysis (e.g., Seth Flaxman, Samir Bhatt, and Neil Ferguson). I was contacted by a science editor from a popular newspaper in the UK to comment on the above meta-analysis by Herby et al. After sharing my initial thoughts, I teamed up with two colleagues to look more closely into the methodology of the meta-analysis. Together, we identified several methodological issues, including but not limited to:

1. Definition: You would expect to conduct a meta-analysis for an exposure that is clearly defined. However, Herby et al. have a very vague definition of “lockdown”, which is not surprising considering that “lockdown” is an umbrella term for different types of non-pharmaceutical interventions. The term “lockdown” is thus not a good starting point for a meta-analysis, but we understand that using this term will draw more attention to the analysis.

2. Measurement: You would expect that the lockdown is considered to be a combination of multiple non-pharmaceutical interventions such as the cancellation of events, the closure of schools, and orders to stay at home. Accordingly, you would expect that the effect of lockdown is measured as the combined effect of these interventions. However, Herby et al. measure it separately as the effect of single interventions or as the change in a stringency index (which is generally difficult to interpret).

3. Exclusion criteria: You would expect that, given the abundance of studies on the effects of non-pharmaceutical interventions, a corresponding meta-analysis would have a large sample of studies with a variety of outcomes and methods. However, Herby et al. made their life a bit easier by simply excluding all studies with an outcome other than mortality and with a methodological approach other than a “counterfactual difference-in-difference approach” (whatever that is; we couldn’t exactly figure it out based on the manuscript). As a result, they exclude a large number of studies, including the one by Flaxman et al. in Nature that was discussed on your blog.

We elaborate on these issues in more detail in our comment on SSRN. Overall, we believe that strong conclusions should be backed up by equally strong evidence. For that reason we think that, unless Herby et al. can resolve these methodological issues, their conclusions should not be taken as policy advice. Moreover, we suspect that their results will not hold up when they revise their work, but we may be wrong (of note, we informed Herby et al. about our comment).

More generally, we think that meta-analyses on the effects of non-pharmaceutical interventions are extremely difficult. Most studies analyzed the same data from the same time period and the same populations, yet they differed greatly in the methods that they used to analyze the effects of non-pharmaceutical interventions. It is thus unclear how any meta-analysis can adequately account for this.
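That last point can be made concrete with a small calculation. Here is a minimal sketch (all numbers are made up, not taken from Herby et al. or from the underlying studies) of what goes wrong when you pool estimates that are not independent: standard inverse-variance meta-analysis assumes independent studies, so if the “studies” are largely re-analyses of the same data, the nominal pooled standard error is far too small and the meta-analysis looks much more precise than it really is.

```python
# A minimal sketch (made-up numbers) of pooling estimates that are not independent.
# Standard fixed-effect meta-analysis weights estimates by 1/SE^2 and assumes they
# are independent; if they largely re-analyze the same data, that assumption fails.
import numpy as np

k = 10      # number of "studies"
se = 0.20   # standard error of each study's effect estimate
rho = 0.8   # assumed correlation between estimates (same data, overlapping methods)

# Covariance matrix of the k estimates: equal variances, common correlation rho
cov = se**2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))

# Equal weights (all SEs equal), so the pooled estimate is the simple average
w = np.full(k, 1.0 / k)

naive_pooled_se = se / np.sqrt(k)          # what the meta-analysis would report
true_pooled_se = np.sqrt(w @ cov @ w)      # actual SE of the average under correlation

print("pooled SE assuming independence:", round(float(naive_pooled_se), 3))   # ~0.063
print("pooled SE with correlated studies:", round(float(true_pooled_se), 3))  # ~0.181
# With rho = 0.8 the reported SE is roughly three times too small, so the
# meta-analysis claims far more precision than the evidence can deliver.
```

None of this says what the right correlation is; the point is just that ignoring the dependence between studies is not a neutral choice.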

I looked at this paper by Herby et al., and I agree that it has problems, and I say that even though, according to page 1 of this article, its third author “was named a Knight of the Order of the Flag.” (Really.)

First, yeah, their definition of “lockdown” is unclear. From the abstract:

Lockdowns are defined as the imposition of at least one compulsory, non-pharmaceutical intervention (NPI). NPIs are any government mandate that directly restrict peoples’ possibilities, such as policies that limit internal movement, close schools and businesses, and ban international travel.

How, for example, would you characterize a rule that you have to wear a mask in the supermarket? This is unambiguously a compulsory non-pharmaceutical intervention, but does it “restrict peoples’ possibilities”? Later on in the article they do include “mandated face masks.” I don’t know if this restricts anyone’s possibilities, whatever that means, but it’s no “lockdown” in any sense of the word in English.

Beyond this, yeah, I agree with the other criticisms coming from Banholzer et al., and I agree with their point that it’s not clear what you can get by taking a bunch of different studies all fit to the same data and putting them through this sort of meta-analysis. In summary, I would not take the paper by Herby et al. seriously. It does seem that they worked hard on it, but putting in lots of effort is not enough if the approach has fundamental flaws.

9 thoughts on “3 examples of post-publication review (ecology, the underground economy, and “lockdowns”)”

  1. These observational data can be used as inspiration to abduce a theory. And if a sufficiently precise/surprising prediction has been earlier deduced from a theory, they can be used to check that.

    But trying to conclude something like group A > group B from this data simply does not work. There are too many possible explanations and arbitrary choices. You can interpret the numbers to mean whatever you want.

    Is there a single counterexample that is not fatally flawed?

    • He is an ecologist, but his stated #1 research interest is “the regulation of ozone exchange between forests and the atmosphere”, and he’s based in an interdisciplinary department of environmental sciences which has “atmospheric sciences” as one of its research foci, so you can see why he’d be paying attention to climate models. (See https://evsc.as.virginia.edu/ )

  2. WRT 2) I had never heard of Econ Journal Watch before; it seems to be some Mercatus Center thing. Anyway, I was poking around some of the other pieces and found this quote from Morgan Kelly which will make regular readers of this blog face-palm themselves into oblivion:
    “A substantial literature on deep origins or persistence finds that many modern outcomes such as income or social attitudes strongly reflect the characteristics of the same places in the more or less distant past, often centuries or millennia previously…
    Naturally, such findings are open to various charges of p hacking, of publication bias, of answers in search of questions, of scepticism about monocausal and largely atheoretical explanations of complex phenomena, about the mechanisms driving persistence, and so on. However, all of these crumble into irrelevance in the face of one blunt fact: the unusual explanatory power of these persistence variables.”

    WHAT??

    (Morgan Kelly, “Understanding Persistence,” CEPR Discussion Paper, 2020; quoted in Jason Briggeman, “Long-Run Determinants of Economic Growth: Putterman and Weil Revisited,” Econ Journal Watch, 2022.)

    • I don’t know where you got the impression that Econ Journal Watch is associated with the Mercatus Center. As far as I know, there is no connection other than the fact that Marginal Revolution (whose authors are associated with Mercatus) frequently reports what is in Econ Journal Watch. It appears that the association is with a Canadian think tank. Unlike Mercatus, Econ Journal Watch has never struck me as reflecting any particular ideological school of thought.

  3. “We think that meta-analyses on the effects of non-pharmaceutical interventions are extremely difficult.” Maybe so, but given the extremely harmful secondary effects of these interventions on society, it’s hardly trivial whether they deliver the promised primary effect. Or are we just supposed to take Ferguson, Flaxman et al.’s word for it, because epidemiology?

    • Daniel:

      I don’t know what you mean by “because epidemiology.” Flaxman et al. write papers with data analyses. The sources of their data are clear. You can replicate what they do and then do whatever additional analyses you’d like to do. No need to take anyone’s word for it.

      • – The Ferguson report (https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/report-9-impact-of-npis-on-covid-19/) that pushed the initial lockdowns was based on very limited data and the code wasn’t published.

        – The early Ferguson + Flaxman paper claiming to show lockdown effectiveness was intensely criticized for how it treated Sweden. It was also published far too early to give conclusive results.

        None of this would be such a big deal if we hadn’t launched worldwide lockdowns on the basis of the report, and kept them going on the basis of the second paper. But we did (there were other bases, of course). And much of the credence given to those papers was not due to their excellence, but to their authors’ status as epidemiologists.

        So yes, there was “no need to take anyone’s word for it”, but we did exactly that.
