The effects of fiscal consolidation

José Iparraguirre writes:

I’ve read a recent paper by the International Monetary Fund on the effects of fiscal consolidation measures on income inequality (Fiscal Monitor October 2012, Appendix 1).

They run a panel regression of a measure of income inequality (the Gini coefficient) on a number of covariates, including a measure of fiscal consolidation, using annual data for 48 countries over 30 years.

Footnote 39 (page 51) notes that they employed seemingly unrelated regression and panel-corrected standard errors, and that as a robustness check they also ran ordinary least squares and fixed-effects panel regressions, all with similar results.
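For concreteness, a minimal sketch of that kind of fixed-effects check, in Python with statsmodels on simulated placeholder data, might look like the following. The covariate names and all numbers are made up for illustration (they are not the IMF's actual variables), and standard errors clustered by country stand in here for panel-corrected standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in panel: 48 countries x 30 years (all values are fake placeholders)
rng = np.random.default_rng(0)
n_countries, n_years = 48, 30
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "year": np.tile(np.arange(1980, 1980 + n_years), n_countries),
})
df["consolidation"] = rng.normal(size=len(df))       # fiscal consolidation measure (placeholder)
df["unemployment"] = rng.normal(7, 2, size=len(df))  # example control (placeholder)
df["gini"] = 35 + 0.5 * df["consolidation"] + rng.normal(size=len(df))

# Fixed-effects check: Gini on consolidation plus a control, with country and
# year dummies; standard errors clustered by country (a common stand-in for
# panel-corrected standard errors).
fe = smf.ols("gini ~ consolidation + unemployment + C(country) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]}
)
print(fe.params["consolidation"], fe.bse["consolidation"])
```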

So far, so good. However, the footnote goes on to explain that “Some of the results (e.g. the causal relationship between consolidation and inequality) may be subject to endogeneity and should be interpreted with caution”. (Italics are mine).

Therefore, it seems that the crux of the exercise, i.e. estimating the effect of fiscal consolidation on inequality, should be interpreted with caution because the direction of causality might run the other way round, and they haven't controlled for or tested this possibility.

This is not a peer-reviewed paper, but it is nonetheless very influential in policy circles (as much as, if not more than, the average academic paper). After such a huge caveat, in general and without delving into this particular publication, how valid would you say the reported results, and the conclusions drawn from them, would be?

My reply: I’ll retreat to the statistical view of causality, in which all effects are defined in terms of potential interventions. In this case, “fiscal consolidation” sounds like a reasonably clearly-defined treatment, so the comparison problem seems pretty clear: you’d want to use matching and regression to control for as many pre-treatment variables as possible.
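As a hypothetical illustration of that matching-plus-regression strategy, here is a minimal sketch in Python (scikit-learn for propensity-score matching, statsmodels for the regression adjustment). The covariates, the binary consolidation indicator, and all numbers are simulated placeholders, not anything from the IMF analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Simulated country-year data: pre-treatment covariates and a binary
# "consolidation episode" indicator (all values are fake placeholders).
n = 1440
X = pd.DataFrame({
    "debt_to_gdp": rng.normal(60, 20, n),
    "gdp_growth": rng.normal(2, 2, n),
})
p_treat = 1 / (1 + np.exp(-(0.03 * (X["debt_to_gdp"] - 60) - 0.3 * X["gdp_growth"])))
treat = rng.binomial(1, p_treat)
gini = 35 + 0.4 * treat + 0.05 * X["debt_to_gdp"] - 0.3 * X["gdp_growth"] + rng.normal(0, 1, n)

# 1. Estimate propensity scores from pre-treatment covariates only.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Match each treated unit to its nearest control on the propensity score.
treated_idx = np.where(treat == 1)[0]
control_idx = np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matched = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matched.ravel()]

# 3. Regression adjustment on the matched sample (treated + matched controls),
#    controlling again for the same pre-treatment covariates.
sample = np.concatenate([treated_idx, matched_controls])
design = sm.add_constant(pd.DataFrame({
    "treat": treat[sample],
    "debt_to_gdp": X["debt_to_gdp"].values[sample],
    "gdp_growth": X["gdp_growth"].values[sample],
}))
fit = sm.OLS(gini.values[sample], design).fit()
print(fit.params["treat"])
```

The design choice here is the standard one: match on pre-treatment covariates only, then adjust for those same covariates in the outcome regression on the matched sample.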

5 thoughts on “The effects of fiscal consolidation”

  1. 1. Controlling for everything and the kitchen sink is not a good idea. You may include a collider. (Pearl will have a fit if he reads Andrew’s advice…)

    2. The discussion of robustness is framed in terms of which estimators are used. Better to have it in terms of what really matters: the hypothesized causal relations, confounding, and so on. Identification and estimation are often separable problems.

    3. Nothing wrong with caveats. Causal identification is always and everywhere based on assumptions, even in RCTs. But not all caveats are created equal: some go directly against our (pre-specified) theory and so are more worrisome.

    • Ck:

      No, good point. For example, suppose that, the year before any country did fiscal consolidation, they formally announced that they would be doing consolidation in a year. Then, even though the “announcement of consolidation” variable (call it X) came before the actual consolidation (call it T), I would not want to control for X when estimating the effect of T. In practice I don’t see people trying to control for this sort of thing; rather, I see a bunch of background variables to be matched on.

      My point in the post above was that, compared to some settings, this is a relatively straightforward causal question because I can picture the treatment as a potential intervention. (In contrast, something like “the effect of inflation” is less clear because inflation can (and does) change without any direct treatments.)

  2. Assume you are interested in conducting a study that allows you to legitimately make some causal inferences concerning relationships among a set of constructs. What sorts of things can you, as the researcher, do to bolster your ability to make such causal inferences? In answering this question, be sure to pay attention to issues pertaining to both study design and data analysis.
