Matt Fox writes:

I teach various Epidemiology courses in Boston and in South Africa and have been reading your blog for the past year or so and used several of your examples in class . . . I am curious to know why you are skeptical of structural models. Much of my training has been in how essential these models are and I rarely hear the other side of the debate.

I’ve never used structural models myself. They just seem to require so many assumptions that I don’t know how to interpret them. (Of course the same could be said of Bayesian methods, but that doesn’t seem to bother me at all.) One thing I like to say is that in observational settings I feel I can interpret at most one variable causally. The difficulty is that it’s hard to control for things that happen after the variable that you’re thinking of as the “treatment.”

To put it another way, there’s a research paradigm in which you fit a model–maybe a regression, maybe a structural equations model, maybe a multilevel model, whatever–and then you read off the coefficients, with each coefficient telling you something. You gather these together and those are your conclusions.

My paradigm is a bit different. I sometimes say that each causal inference requires its own analysis and maybe its own experiment. I find it difficult to causally interpret several different coefficients from the same model.

As a total outsider, your paradigm makes perfect sense to me.

My suspicion of the other approach would be that you end up hiding assumptions and hypotheses inside the model, and that the pretty readouts at the end would mask the fact that the numbers are all conditional on things that aren't explicit. Basically, that people would turn "if [lots of complicated things] then X" into just "X".

I'd be curious to hear how those who use or promote structural models deal with that.

Structural models are extremely common in economics, not only in macro but also in micro, because the world is structural. Single-equation models can be useful, but when we're looking at an inter-related set of variables, we're almost forced into looking at structural relationships. (I don't use structural models much in my own empirical work, but it's all but impossible to teach macro without them.)

Donald: I have no problem with multiple-equation models, including measurement-error models, multilevel models, and instrumental variables. But I'm skeptical of trying to answer several causal questions by fitting a single model to a dataset.

Are you skeptical of any and all approaches to testing for mediation? Even in an experimental setting in which the manipulation is designed to exert causal effects on both the mediator and the dependent variable? How would you recommend that the investigator estimate the direct and indirect effects in such a situation, if he or she had good reason to think that only part of the experimental effect was mediated? (Often that will be of practical interest, since one might want to draw inferences or form hypotheses about the effect of other potential treatments that one has reason to believe can influence the mediator, too.)
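To make the question concrete, here is a minimal sketch (not anything from the discussion above) of the classic product-of-coefficients decomposition on simulated data, assuming randomized treatment, linear models, and no unmeasured mediator–outcome confounding. All variable names and effect sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated experiment: randomized treatment z affects the mediator m,
# and the outcome y both directly and through m.
z = rng.integers(0, 2, n).astype(float)
m = 0.5 * z + rng.normal(size=n)             # true z -> m path: 0.5
y = 0.3 * z + 0.8 * m + rng.normal(size=n)   # true direct: 0.3, m -> y: 0.8

# Two OLS fits: m ~ z, then y ~ z + m.
X1 = np.column_stack([np.ones(n), z])
a = np.linalg.lstsq(X1, m, rcond=None)[0][1]     # z -> m
X2 = np.column_stack([np.ones(n), z, m])
coef = np.linalg.lstsq(X2, y, rcond=None)[0]
direct, b = coef[1], coef[2]                     # z -> y given m, and m -> y

indirect = a * b          # product-of-coefficients indirect effect
total = direct + indirect
print(direct, indirect, total)  # roughly 0.3, 0.4, 0.7 up to sampling noise
```

The decomposition is only as good as its assumptions: the mediator is not randomized here, so the second regression leans on exactly the kind of "control for things that happen after the treatment" step that the skepticism above is about.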

Andrew Gelman wrote:

I've never used structural models myself. They just seem to require so many assumptions that I don't know how to interpret them. (Of course the same could be said of Bayesian methods, but that doesn't seem to bother me at all.)

I recently wrote about this phenomenon among Bayesians (i.e., why certain assumptions are more frightening than others). What happened was that I was asked to write an article for Statistics Surveys

http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf

and the editor insisted that I add a section on Bayesianism and causality.

So I asked readers: why did Lindley and Novick not use causation in their attempt to explain Simpson's paradox? (They used "exchangeability" instead, which no one understands to this very day.) Lindley and Novick answered the question in 1981, essentially saying: we could not express the assumption that treatment does not change gender in any language except "exchangeability"; a plain causal statement like "treatment does not cause gender" is not well defined.

Conclusion: Bayesians have no problem making assumptions, however bold, as long as those assumptions are cast in the comfortable language of some probability function, however imaginary. Ask them to make those same assumptions in a language that everyone understands (e.g., "treatment does not cause gender to change"), and they will answer: "causation is not well defined" or "I don't know how to interpret them."

Luckily, this habit seems to be changing, slowly but steadily (e.g., see discussions with Kadane and Lindley).

Judea
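The Simpson's paradox at issue is easy to exhibit numerically. Below is a small illustration with hypothetical recovery counts (the stratum labels and numbers are invented for this sketch, in the spirit of the textbook examples): treatment looks better than control within each stratum of a pre-treatment covariate, yet worse in the aggregate, which is why the "treatment does not change the covariate" assumption matters.

```python
# Hypothetical counts: (recovered, total) for treated and control,
# within each stratum of a pre-treatment covariate.
strata = {
    "stratum_1": ((81, 87), (234, 270)),
    "stratum_2": ((192, 263), (55, 80)),
}

# Within each stratum, the treated recovery rate beats control.
for name, ((rt, nt), (rc, nc)) in strata.items():
    print(name, round(rt / nt, 3), ">", round(rc / nc, 3))

# Pool across strata: the comparison reverses.
rt = sum(v[0][0] for v in strata.values())
nt = sum(v[0][1] for v in strata.values())
rc = sum(v[1][0] for v in strata.values())
nc = sum(v[1][1] for v in strata.values())
print("pooled", round(rt / nt, 3), "<", round(rc / nc, 3))
```

Whether the stratified or the pooled comparison is the right one cannot be decided from the numbers alone; it depends on the causal assumption (is the covariate pre-treatment, like gender, or affected by treatment?) that Lindley and Novick could only express as exchangeability.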

When we want to estimate quantities of interest from data, there are often many assumptions involved. Some assumptions come from theory; others are statistical. Structural models are useful in that they take advantage of the additional structure implied by our theory, allowing the analyst to estimate parameters that are otherwise unidentified. Structural models also allow for direct estimation of a theoretical parameter in ways that other methods, no matter how constructed, do not. That said, structural models carry the explicit baggage of the theory that generates them. This is in some sense good. In order to make inferences about relationships in the world from data, we need some combination of theoretical and statistical assumptions. This long and sometimes arbitrary road from pure theory to empirical test is often obscured in non-structural estimation, but making it explicit is a necessary condition for even writing down a structural estimator. In these ways structural models are good: they make all the little assumptions one needs to do inference explicit.

In many settings, however, our theory may not be strong, or the analyst is looking for a quantity of interest like the average treatment effect, which is not the product of a specific theory. In these environments the analyst makes different assumptions, theoretical (like the exclusion restriction) and statistical, to draw inferences about causation. This link between theoretical credibility and structural estimation is why we tend to find structural estimation in areas like industrial organization and auction theory. In these fields the theoretical models are well developed and widely held to be reasonable models of the world. Moreover, in these settings we are often directly interested in theoretical parameters and can only estimate them using structural techniques.

Anyone interested in a detailed discussion of these issues is encouraged to see Reiss and Wolak's nice Handbook of Econometrics chapter, "Structural Econometric Modeling: Rationales and Examples from Industrial Organization" (v. 6): http://www.sciencedirect.com/science/article/B7GX…