No comment

How come, when I posted a few entries last year on Pearl’s and Rubin’s frameworks for causal inference, I got about 100 comments, but when yesterday I posted my 12-page magnum opus on the topic, only three people commented?

My theory is that the Pearl/Rubin framing of the earlier discussion personalized the topic, and people get much more interested in a subject if it can be seen in terms of personalities.

Another hypothesis is that my recent review was so comprehensive and correct that people had nothing to say about it.

P.S. The present entry is an example of reverse causal inference, in the sense described in my review.

19 thoughts on “No comment”

  1. I liked it so much that I'm considering assigning it to my structural equation modeling class next term. The only problem is that the week where it fits best in the syllabus is prior to the drop deadline. I'm worried that if my students think about it too carefully I'll have to cancel the course.

  2. Wow, that is some seriously empty drawing. What's the point of making a comic strip and then not drawing the people's faces? Seems like a bit of a lost opportunity, no?

  3. Last year you posted in July and this year in March. Could that be the reason? I don't know what it's like "up there", but here in the land down-under we're all writing grants in February-March.

  4. long entry: needs some time to be read; also, you already discussed this, right? Maybe people need some time to read and then discuss. The problem is that by the time they are done with the reading, you have already written many other new posts.

    Anyway I'm very curious about the point I raised in the other post….

  5. It's not the type of thing you want to read quickly while eating a sandwich, and then comment on.

    Few blog posts anywhere have 27 scholarly references, run to 5000+ words, and have an average sentence length of over 30 words.

  6. The causality people and the statistics people are talking past each other, your 12 page magnum opus included.

    Point 0) Sense of responsibility -> decision -> commitment to action/inaction -> action/inaction ==> implies you possess a general description of reality, unless you are limiting yourself to a very narrow sphere of responsibility.

    Point 1) Statistics cannot be the basis for a general description of reality because of Simpson's Paradox. When it arises, the paradox can only be eliminated by an appeal to plausible causality, directly or indirectly.

    Point 2) Causality cannot be the basis for a general description of reality because reality violates the assertion of independent variables needed for effective causal analysis ("no true zeroes" as you put it). Reality doesn't even adhere to the laws of conditional probability [ http://www.stat.columbia.edu/~cook/movabletype/ar… ] much less the structure of independence needed for causal analysis.

    Point 3) There are no other contenders for general descriptions of reality besides statistics or causality.

    Point 4) SOL

    So people, under the burden of responsibility, must maintain several models of reality, over smaller and larger domains of applicability: some statistical, some causal, some based on symmetry and curve fitting, some based on the laws of probability, some based on scientific laws, some hybrids. These models compete against each other, at the cost of maintenance, data collection, computation, and comparison, with the benefit of correct probabilistic predictions of the consequences of action/inaction, or the benefit of demonstrating a range of uncertainty broad enough to swamp any discernment between decisions.

    And the sense of responsibility is made of shifting sands, and human values and goals are not static. So you could pay all the costs for a model, just to dispense with it.

    But all this *still* can be done for individuals or small groups. Once you get past 30 members, what is rewarded are techniques for rubber stamping decisions already taken by the politically powerful, under the name of "objective analysis" for political cover.

    So "small" decisions can be made quite well, with effort. And "large" decisions are made quite poorly, because evidence of a cold calculated analysis would be blood on the hands of the politically powerful (besides, the ability to perform such analysis is in opposition to dumb loyalty, which is the most prized character trait of the in-group). But these "large" lousy decisions possess notoriety, and thus human appeal. So a thousand pages each describing a thousand theories chase after a relatively small number of very poor decision-making processes.

    The consequences of all this may dent my sparkling optimism, so I must leave that as an exercise for others.
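    [Point 1 above is easy to make concrete. A minimal sketch using the classic textbook kidney-stone figures (not from the review itself): treatment A beats treatment B within every subgroup, yet B looks better when the groups are pooled, and only a causal story about how stone size drives treatment assignment resolves the contradiction.]

    ```python
    # Simpson's paradox with the standard kidney-stone numbers.
    # Within each stone-size group, treatment A has the higher success
    # rate, but pooled over groups, treatment B comes out ahead.
    data = {
        # group: (a_success, a_total, b_success, b_total)
        "small stones": (81, 87, 234, 270),
        "large stones": (192, 263, 55, 80),
    }

    def rate(success, total):
        return success / total

    # Success rates within each group: A wins both.
    within = {g: (rate(a, an), rate(b, bn)) for g, (a, an, b, bn) in data.items()}

    # Pooled success rates: B wins.
    a_pooled = rate(sum(v[0] for v in data.values()), sum(v[1] for v in data.values()))
    b_pooled = rate(sum(v[2] for v in data.values()), sum(v[3] for v in data.values()))

    for g, (ra, rb) in within.items():
        print(f"{g}: A={ra:.3f} > B={rb:.3f}")
    print(f"pooled: A={a_pooled:.3f} < B={b_pooled:.3f}")
    ```

    [No statistical summary of these eight numbers tells you which comparison to trust; that takes an appeal to the causal structure, exactly as the comment says.]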

  7. I didn't comment on it because it is far too reasonable. If you want comments, a good place to start is to call somebody's favorite approach "non-scientific ad hockery." That will get our blood flowing!

    I wonder if you are being too reasonable, however. Even if Pearl's approach provides some insight, it splits the causal literature in half. The two approaches may be mathematically isomorphic, but their language is fundamentally different. I doubt we can or should teach both. A common language matters for progress.

    So what language is best? Researchers familiar with statistics and Rubin's formulation of causal effects will, rightfully I think, resist adopting a new set of operators and learning about super-conducting super colliders, d-separation, and backdoor criteria, unless absolutely necessary. Particularly when there's an alternative that uses standard probability statements and adopts a missing data perspective that is now familiar from an enormous range of other statistical problems.

    I do admit that there is something nice about simple graphs that show omitted confounding variables. Students seem to get it. But even simple graphs can be confusing. Does T->Y imply a constant treatment effect? Does the arrow exist for all units or just some? Are these questions even embedded in the graphs? While potentially a tad more work to digest, Rubin's tables answer all these questions easily.

    But perhaps, as Pearl suggests, statistics and tables are a poor language to talk about causality. Rubin's approach does seem largely agnostic about the best language to describe the "science" of the problem. It's all about missing data. Sure, Rubin talks about the "science," but he doesn't provide a language to think about its structure. To Pearl (and Deaton and Heckman) this is a great deficiency.

    I'm not so sure.

    Pearl's graphs are one language to describe scientific problems, but they are not the only one. As Heckman's and Deaton's writing shows, economists think in terms of agents' information sets and decision processes, perhaps including the strategic behavior between agents. Causal graphs are nowhere to be found. In education and sociology there's some graphical modeling, but it typically results in fuzzy thinking. Natural scientists have their own domain-specific languages. When they do speak in terms of graphs, they already have their own definitions, often unrelated to Pearl's.

    What is the unifying language of science? That's where Rubin, I think, succeeds. He just sticks to the basics and leaves the scientific language to the scientists.

  8. I forwarded to several econ colleagues for comment. I guess we're just giving it the long digestion it deserves!

  9. The format of a blog post is not as conducive to reading such in-depth analysis as a PDF would be. Also, no pictures / graphs like in much of your other work. Is there a pdf version? Printing out and reading offline is much easier with these sorts of things.

  10. whoops, somehow I missed that pdf link right at the top, I guess I dove into the meat and got tired before noticing it.

  11. I wouldn't be at all surprised if the main variable in how many comments you receive is the length of the initial post. A nice negative correlation between number of words and number of comments would be fun to test sometime.
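    [The test proposed above is quick to sketch. This uses entirely hypothetical (word count, comment count) pairs, since the blog's real numbers aren't in front of us; the Pearson correlation is computed by hand to keep the sketch self-contained.]

    ```python
    # Hypothetical (post_words, n_comments) pairs; the real data would
    # have to be scraped from the blog archives.
    posts = [(500, 42), (1200, 30), (3000, 12), (5200, 3), (800, 35), (2500, 18)]

    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    r = pearson([w for w, _ in posts], [c for _, c in posts])
    print(f"correlation(words, comments) = {r:.2f}")  # negative on this toy data
    ```

    [With made-up numbers like these the correlation is of course whatever you bake in; the point is only that the check the commenter proposes is a one-afternoon exercise once the archive data are in hand.]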

Comments are closed.