Archive of posts filed under the Causal Inference category.

Causal inference and within/between person comparisons

There’s a meta-principle of mathematics that goes as follows. Any system of logic can be written in various different ways that are mathematically equivalent but can have different real-world implications, for two reasons: first, because different formulations can be more directly applied in different settings or are just more understandable by different people; second, because […]

“Machine Learning Under a Modern Optimization Lens” Under a Bayesian Lens

I (Yuling) read this new book Machine Learning Under a Modern Optimization Lens (by Dimitris Bertsimas and Jack Dunn) after I grabbed it from Andrew’s desk. Apparently machine learning is now such a wide-ranging area that we have to access it through some sub-manifold so as to evade the curse of dimensionality, and it is the same […]

What’s the evidence on the effectiveness of psychotherapy?

Kyle Dirck points us to this article by John Sakaluk, Robyn Kilshaw, Alexander Williams, and Kathleen Rhyner in the Journal of Abnormal Psychology, which begins: Empirically supported treatments (or therapies; ESTs) are the gold standard in therapeutic interventions for psychopathology. Based on a set of methodological and statistical criteria, the APA [American Psychological Association] has […]

My talk at Yale this Thursday

It’s the Quantitative Research Methods Workshop, 12:00-1:15 p.m. in Room A002 at ISPS, 77 Prospect Street Slamming the sham: A Bayesian model for adaptive adjustment with noisy control data Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University It is not always clear how to adjust for control data in causal inference, […]

What happens to your metabolism when you eat ultra-processed foods?

Daniel Lakeland writes: Hey, you wanted examples of people doing real science for the blog! Here’s a randomized controlled trial with a within-subjects crossover design, and completely controlled and monitored conditions, in which all food eaten by the subjects was created by the experimenters and measured carefully, and the participants spent several weeks in a […]

“Causal Processes in Psychology Are Heterogeneous”

Niall Bolger sends along this article he wrote with Katherine Zee, Maya Rossignac-Milon, and Ran Hassin, which begins: All experimenters know that human and animal subjects do not respond uniformly to experimental treatments. Yet theories and findings in experimental psychology either ignore this causal effect heterogeneity or treat it as uninteresting error. This is the […]

We’re hiring an econ postdoc!

It’s for hierarchical modeling for policy analysis in Stan. We’re really excited about this project. Will share more details soon, but wanted to get this out right away.

Are statistical nitpickers (e.g., Kaiser Fung and me) getting in the way of progress or even serving the forces of evil?

As Ira Glass says, today we have a theme and some variations on this theme. Statistical nitpickers: Do they cause more harm than good? I’d like to think we cause more good than harm, but today I want to consider the counter-argument, that, even when we are correct on the technical merits, we statisticians should […]

Bank Shot

Tom Clark writes: I came across this paper and thought of you. You might be aware of some papers that have been published about the effect of military surplus equipment aid that is given to police departments. Some economists have claimed to find that it reduces crime. My coauthors and I thought the papers were […]

Challenge of A/B testing in the presence of network and spillover effects

Gaurav Sood writes: There is a fun problem that I recently discovered: Say that you are building a news recommender that lists relevant news items in each person’s news feed. Say that your first version of the news recommender is a rules-based system that uses signals like how many people in your network have […]

Let’s try this again: It is nonsense to say that we don’t know whether a specific weather event was affected by climate change. It’s not just wrong, it’s nonsensical.

This post is by Phil Price, not Andrew. If you write something and a substantial number of well-intentioned readers miss your point, the problem is yours. Too many people misunderstood what I was saying a few days ago in the post “There is no way to prove that [an extreme weather event] either was, or was […]

More golf putting, leading to a discussion of how prior information can be important for an out-of-sample prediction or causal inference problem, even if it’s not needed to fit existing data

Steve Stigler writes: I saw a piece on your blog about putting. It suggests to me that you do not play golf, or you would not think this was a model. Length is much more important than you indicate. I attach an old piece by a friend that is indeed the work of a golfer! […]

“There is no way to prove that [an extreme weather event] either was, or was not, affected by global warming.”

This post is by Phil, not Andrew. It’s hurricane season, which means it’s time to see the routine disclaimer that no single weather event can be attributed to global warming. There’s a sense in which that is true, and a sense in which it is very wrong. I’ll start by going way back to 2005. […]

“I feel like the really solid information therein comes from non or negative correlations”

Steve Roth writes: I’d love to hear your thoughts on this approach (heavily inspired by Arindrajit Dube’s work, linked therein): This relates to our discussion from 2014: My biggest takeaway from this latest: I feel like the really solid information therein comes from non or negative correlations: • It comes before • But it doesn’t […]

“Beyond ‘Treatment Versus Control’: How Bayesian Analysis Makes Factorial Experiments Feasible in Education Research”

Daniel Kassler, Ira Nichols-Barrer, and Mariel Finucane write: Researchers often wish to test a large set of related interventions or approaches to implementation. A factorial experiment accomplishes this by examining not only basic treatment–control comparisons but also the effects of multiple implementation “factors” such as different dosages or implementation strategies and the interactions between these […]

Multilevel Bayesian analyses of the growth mindset experiment

Jared Murray, one of the coauthors of the Growth Mindset study we discussed yesterday, writes: Here are some pointers to details about the multilevel Bayesian modeling we did in the Nature paper, and some notes about ongoing & future work. We did a Bayesian analysis not dissimilar to the one you wished for! In section […]

“Study finds ‘Growth Mindset’ intervention taking less than an hour raises grades for ninth graders”

I received this press release in the mail: Study finds ‘Growth Mindset’ intervention taking less than an hour raises grades for ninth graders Intervention is first to show national applicability, breaks new methodological ground – Study finds low-cost, online growth mindset program taking less than an hour can improve ninth graders’ academic achievement – The […]

Causal inference workshop at NeurIPS 2019 looking for submissions

Nathan Kallus writes: I wanted to share an announcement for a causal inference workshop we are organizing at NeurIPS 2019. I think the readers of your blog would be very interested, and we would be eager to have them interact/attend/submit. And here it is: The NeurIPS 2019 Workshop on “Do the right thing”: machine learning […]

Causal Inference and Generalizing from Your Data to the Real World (my talk tomorrow, Sat., 6pm in Berlin)

For the Berlin Bayesians meetup, organized by Eren Elçi: Causal Inference and Generalizing from Your Data to the Real World Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Learning from data involves three stages of extrapolation: from sample to population, from treatment group to control group, and from measurement to the […]

Concerned about demand effects in psychology experiments? Incorporate them into the design.

Johannes Haushofer sends along this article with Jonathan de Quidt and Christopher Roth, “Measuring and Bounding Experimenter Demand,” which begins: We propose a technique for assessing robustness to demand effects of findings from experiments and surveys. The core idea is that by deliberately inducing demand in a structured way we can bound its influence. We […]