Archive of posts filed under the Causal Inference category.

Online Causal Inference Seminar starts next Tues!

Dominik Rothenhäusler writes: We are delighted to announce the creation of the Online Causal Inference Seminar (OCIS)! Our goal in creating this seminar series is to provide a platform for our community to continue interacting and growing in spite of the current health crisis. The causal tent is a big one, and we hope to […]

Are we ready to move to the “post p < 0.05 world”?

Robert Matthews writes: Your post on the design and analysis of trials really highlights how now more than ever it’s vital the research community takes seriously all that “nit-picking stuff” from statisticians about the dangers of faulty inferences based on null hypothesis significance testing. These dangers aren’t restricted to the search for new therapies. I’m […]

Some recommendations for design and analysis of clinical trials, with application to coronavirus

Various people have been contacting me lately about recommendations for design and analysis of clinical trials, with application to coronavirus. Below are some quick thoughts, or you can scroll down to the Summary Recommendations at the end. I’m sure there’s lots more to say on this topic but I’ll get my quick thoughts down here. […]

New dataset: coronavirus tracking using data from smart thermometers

Dan Keys writes: I recently came across the new coronavirus tracker website which is based on data from Kinsa smart thermometers. Whenever someone takes their temperature with one of these thermometers, the data is sent to Kinsa. Thermometer users also input their location, age, and gender. The company has been using these data for a […]

Is it really true that candidates who are perceived as ideologically extreme do even worse if “they actually pose as more radical than they really are”?

Most of Kruggy’s column today is about macroeconomics, a topic I’m pretty much ignorant of. But I noticed one political science claim: It’s easy to make the political case that Democrats should nominate a centrist, rather than someone from the party’s left wing. Candidates who are perceived as ideologically extreme usually pay an electoral penalty; […]

MRP Conference at Columbia, April 3–4, 2020

The Departments of Statistics and Political Science and Institute for Social and Economic Research and Policy at Columbia University are delighted to invite you to our Spring conference on Multilevel Regression and Poststratification. Featuring Andrew Gelman, Beth Tipton, Jon Zelner, Shira Mitchell, Qixuan Chen and Leontine Alkema, the conference will combine a mix of cutting […]

Causal inference in AI: Expressing potential outcomes in a graphical-modeling framework that can be fit using Stan

David Rohde writes: We have been working on an idea that attempts to combine ideas from Bayesian approaches to causality developed by you and your collaborators with Pearl’s do calculus. The core idea is simple, but we think powerful, and it allows some problems that previously had known solutions only via the do calculus to be […]

The latest Perry Preschool analysis: Noisy data + noisy methods + flexible summarizing = Big claims

Dean Eckles writes: Since I know you’re interested in Heckman’s continued analysis of early childhood interventions, I thought I’d send this along: The intervention is so early, it is in their parents’ childhoods. See the “Perry Preschool Project Outcomes in the Next Generation” press release and the associated working paper. The estimated effects are huge: […]

American Causal Inference Conference, May 2020, Austin, Texas

Carlos Carvalho writes: The ACIC 2020 website is now up and registration is open. As a reminder, proposal information can be found on the front page of the website. Deadline for submissions is February 7th. I think that we organized the very first conference in this series here at Columbia, many years ago!

Will decentralised collaboration increase the robustness of scientific findings in biomedical research? Some data and some causal questions.

Mark Tuttle points to this press release, “Decentralising science may lead to more reliable results: Analysis of data on tens of thousands of drug-gene interactions suggests that decentralised collaboration will increase the robustness of scientific findings in biomedical research,” and writes: In my [Tuttle’s] opinion, the explanation is more likely to be sociological – group […]

No, I don’t think that this study offers good evidence that installing air filters in classrooms has surprisingly large educational benefits.

In a news article on Vox, entitled “Installing air filters in classrooms has surprisingly large educational benefits,” Matthew Yglesias writes: An emergency situation that turned out to be mostly a false alarm led a lot of schools in Los Angeles to install air filters, and something strange happened: Test scores went up. By a lot. […]

The Generalizer

I just saw Beth Tipton speak at the Institute of Education Sciences meeting on The Generalizer, a tool that she and her colleagues developed for designing education studies with the goal of getting inferences for the population. It’s basically MRP, but what is innovative here is the application of these ideas at the design stage. […]

DAGS in Stan

Macartan Humphreys writes: As part of a project with Alan Jacobs we have put together a package that makes it easy to define, update, and query DAG-type causal models over binary nodes. We have a draft guide and illustrations here. Now I know that you don’t care much for the DAG approach BUT this is […]
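The excerpt above mentions querying DAG-type causal models over binary nodes. As a rough illustration of the underlying idea (a hypothetical two-node example by brute-force enumeration, not the package’s actual API), an observational query and an interventional do-query can be computed from the DAG factorization:

```python
# Hypothetical sketch: a binary-node DAG X -> Y with assumed
# conditional probability tables, queried by enumeration.
# (Illustrative numbers; not from the package described above.)

p_x1 = 0.4                          # P(X = 1)
p_y1_given_x = {0: 0.2, 1: 0.7}     # P(Y = 1 | X = x)

def joint(x, y):
    """Joint probability P(X=x, Y=y) under the DAG factorization."""
    px = p_x1 if x == 1 else 1 - p_x1
    py = p_y1_given_x[x] if y == 1 else 1 - p_y1_given_x[x]
    return px * py

# Observational query: P(Y = 1), marginalizing over X
p_y1 = sum(joint(x, 1) for x in (0, 1))

# Interventional query: P(Y = 1 | do(X = 1)) -- the intervention
# cuts any edges into X, so this is just the CPT entry for X = 1
p_y1_do_x1 = p_y1_given_x[1]

print(round(p_y1, 2))   # 0.6*0.2 + 0.4*0.7 = 0.4
print(p_y1_do_x1)       # 0.7
```

Here the observational and interventional answers differ from each other only when X has parents or confounders; with richer DAGs the same enumerate-and-marginalize logic applies node by node.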

External vs. internal validity of causal inference from natural experiments: The example of charter school lottery studies

Alex Hoffman writes: I recently was discussing/arguing about the value of charter schools lottery studies. I suggested that their validity was questionable because of all the data that they ignore. (1) They ignore all charter schools (and their students) that are not so oversubscribed that they need to use lotteries for admission. (2) They ignore […]

Causal inference, adjusting for 300 pre-treatment predictors

Linda Seebach points to this post by Scott Alexander and writes: A recent paper on increased risk of death from all causes (huge sample size) found none; it controlled for some 300 confounders. Much previous research, also with large (though much smaller) sample sizes, found very large increased risk, but used under 20 confounders. This […]
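The post title above refers to adjusting for hundreds of pre-treatment predictors. A minimal sketch of what such regression adjustment looks like (synthetic data with an assumed true effect of 1.5, nothing from the paper being discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 300  # many pre-treatment covariates, as in the post title

X = rng.normal(size=(n, p))        # pre-treatment predictors
t = rng.binomial(1, 0.5, size=n)   # binary treatment indicator
# Outcome depends on treatment (true effect 1.5) and a couple of covariates
y = 1.5 * t + X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)

# Regression adjustment: regress y on an intercept, treatment, and
# all 300 covariates; the treatment coefficient is the adjusted effect
design = np.column_stack([np.ones(n), t, X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(round(coef[1], 1))  # estimated treatment effect, close to 1.5
```

With n much larger than p this is routine least squares; the debates in the post are about whether the 300 variables are actually pre-treatment and whether adjusting for them removes or introduces bias, which no amount of fitting code settles.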

Causal inference and within/between person comparisons

There’s a meta-principle of mathematics that goes as follows. Any system of logic can be written in various different ways that are mathematically equivalent but can have different real-world implications, for two reasons: first, because different formulations can be more directly applied in different settings or are just more understandable by different people; second, because […]

“Machine Learning Under a Modern Optimization Lens” Under a Bayesian Lens

I (Yuling) read this new book Machine Learning Under a Modern Optimization Lens (by Dimitris Bertsimas and Jack Dunn) after I grabbed it from Andrew’s desk. Apparently machine learning is now such a wide-ranging area that we have to access it through some sub-manifold so as to evade the curse of dimensionality, and it is the same […]

What’s the evidence on the effectiveness of psychotherapy?

Kyle Dirck points us to this article by John Sakaluk, Robyn Kilshaw, Alexander Williams, and Kathleen Rhyner in the Journal of Abnormal Psychology, which begins: Empirically supported treatments (or therapies; ESTs) are the gold standard in therapeutic interventions for psychopathology. Based on a set of methodological and statistical criteria, the APA [American Psychological Association] has […]

My talk at Yale this Thursday

It’s the Quantitative Research Methods Workshop, 12:00-1:15 p.m. in Room A002 at ISPS, 77 Prospect Street Slamming the sham: A Bayesian model for adaptive adjustment with noisy control data Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University It is not always clear how to adjust for control data in causal inference, […]

What happens to your metabolism when you eat ultra-processed foods?

Daniel Lakeland writes: Hey, you wanted examples of people doing real science for the blog! Here’s a randomized controlled trial with a within-subjects crossover design, and completely controlled and monitored conditions, in which all food eaten by the subjects was created by the experimenters and measured carefully, and the participants spent several weeks in a […]