Archive of posts filed under the Causal Inference category.

Fitting multilevel models when the number of groups is small

Matthew Poes writes: I have a question that I think you have answered for me before. There is an argument to be made that HLM should not be performed if a sample is too small (too small level 2 and too small level 1 units). Lots of papers written with guidelines on what those should […]
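As a companion to that question, here is a minimal sketch, not taken from the post, of the usual workaround when the number of groups is small: fit the multilevel model in a Bayesian framework so that weakly informative priors keep the group-level variance away from degenerate estimates. The simulated data and the rstanarm call are illustrative only.

```r
# Minimal sketch: a varying-intercept model with only a handful of groups.
# With few level-2 units the maximum-likelihood estimate of the group-level
# variance is noisy and can collapse to zero; a Bayesian fit with rstanarm's
# default weakly informative priors keeps partial pooling sensible.
library(lme4)      # classical multilevel fit
library(rstanarm)  # Bayesian multilevel fit

set.seed(123)
J <- 5                                   # small number of groups
n <- 10                                  # observations per group
group <- rep(1:J, each = n)
a <- rnorm(J, 0, 0.5)                    # true group-level effects
y <- 1 + a[group] + rnorm(J * n, 0, 1)
d <- data.frame(y, group = factor(group))

fit_ml <- lmer(y ~ 1 + (1 | group), data = d)            # may estimate sd(group) = 0
fit_bayes <- stan_lmer(y ~ 1 + (1 | group), data = d,
                       adapt_delta = 0.99, refresh = 0)   # prior regularizes the variance
VarCorr(fit_ml)
print(fit_bayes, digits = 2)
```

With only five groups, the maximum-likelihood variance estimate can sit on the boundary at zero; the Bayesian fit instead partially pools the group means without silently dropping the group structure.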

New estimates of the effects of public preschool

Tom Daula writes: You blogged about Heckman and the two 1970s preschool studies a year ago here and here. Apparently there are two papers on a long-term study of Tennessee’s preschool program. In case you had an independent interest in the topic, a summary of the most recent paper is here, and the paywalled paper […]

Principal Stratification on a Latent Variable (fitting a multilevel model using Stan)

Adam Sales points to this article with John Pane on principal stratification on a latent variable, and writes: Besides the fact that the paper uses Stan, and it’s about principal stratification, which you just blogged about, I thought you might like it because of its central methodological contribution. We had been trying to use computer […]

No, I don’t buy that claim that Fox news is shifting the vote by 6 percentage points

Tyler Cowen writes: This is only one estimate, from Gregory J. Martin and Ali Yurukoglu, but nonetheless it is backed by a plausible identification strategy and this is very interesting research: We find that in a hypothetical world without Fox News but with no other changes, the Republican vote share in the 2000 election would […]

The butterfly effect: It’s not what you think it is.

John Cook writes: The butterfly effect is the semi-serious claim that a butterfly flapping its wings can cause a tornado halfway around the world. It’s a poetic way of saying that some systems show sensitive dependence on initial conditions, that the slightest change now can make an enormous difference later . . . Once […]
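For readers who would rather see sensitive dependence on initial conditions than read about it, here is a minimal sketch using the logistic map, a standard textbook example that is not from Cook’s post:

```r
# Logistic map x[t+1] = r * x[t] * (1 - x[t]) with r = 4: two starting values
# that differ by 1e-10 end up on completely different trajectories, which is
# what "sensitive dependence on initial conditions" means.
logistic_trajectory <- function(x0, r = 4, steps = 60) {
  x <- numeric(steps)
  x[1] <- x0
  for (t in 2:steps) x[t] <- r * x[t - 1] * (1 - x[t - 1])
  x
}
a <- logistic_trajectory(0.2)
b <- logistic_trajectory(0.2 + 1e-10)
round(abs(a - b)[c(1, 20, 40, 60)], 6)  # the gap grows roughly exponentially
```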

Causal inference data challenge!

Susan Gruber, Geneviève Lefebvre, Tibor Schuster, and Alexandre Piché write: The ACIC 2019 Data Challenge is Live! Datasets are available for download (no registration required) at https://sites.google.com/view/ACIC2019DataChallenge/data-challenge (bottom of the page). Check out the FAQ at https://sites.google.com/view/ACIC2019DataChallenge/faq The deadline for submitting results is April 15, 2019. The fourth Causal Inference Data Challenge is taking place […]

Does Harvard discriminate against Asian Americans in college admissions?

Sharad Goel, Daniel Ho and I looked into the question, in response to a recent lawsuit. We wrote something for the Boston Review: “What Statistics Can’t Tell Us in the Fight over Affirmative Action at Harvard.” Its sections: Asian Americans and Academics; “Distinguishing Excellences”; Adjusting and Over-Adjusting for Differences; The Evolving Meaning of Merit; Character and Bias […]

Coursera course on causal inference from Michael Sobel at Columbia

Here’s the description: This course offers a rigorous mathematical survey of causal inference at the Master’s level. Inferences about causation are of great importance in science, medicine, policy, and business. This course provides an introduction to the statistical literature on causal inference that has emerged in the last 35-40 years and that has revolutionized the […]

“The Book of Why” by Pearl and Mackenzie

Judea Pearl and Dana Mackenzie sent me a copy of their new book, “The book of why: The new science of cause and effect.” There are some things I don’t like about their book, and I’ll get to that, but I want to start with a central point of theirs with which I agree strongly. […]

“She also observed that results from smaller studies conducted by NGOs – often pilot studies – would often look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.”

Robert Wiblin writes: If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else? Dr Eva Vivalt . . . compiled a huge database of impact evaluations in […]

Matching (and discarding non-matches) to deal with lack of complete overlap, then regression to adjust for imbalance between treatment and control groups

John Spivack writes: I am contacting you on behalf of the biostatistics journal club at our institution, the Mount Sinai School of Medicine. We are working Ph.D. biostatisticians and would like the opinion of a true expert on several questions having to do with observational studies—questions that we have not found to be well addressed […]
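As context for that discussion, here is a minimal sketch of the strategy named in the post title, using the MatchIt package; the simulated data and variable names (y, treat, x1, x2) are illustrative, not from the journal club’s studies.

```r
# Sketch of the strategy in the post title: match to restrict the analysis to
# the region of overlap, discard unmatched units, then run a regression on the
# matched sample to adjust for remaining imbalance.
library(MatchIt)

set.seed(1)
n <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
treat <- rbinom(n, 1, plogis(x1 + 0.5 * x2))   # treatment depends on covariates
y <- 2 * treat + x1 + x2 + rnorm(n)            # true treatment effect = 2
d <- data.frame(y, treat, x1, x2)

# 1. Propensity-score matching; controls without a match are discarded.
m_out <- matchit(treat ~ x1 + x2, data = d, method = "nearest", distance = "glm")
summary(m_out)                                 # check balance and how many units were dropped

# 2. Regression on the matched sample, still adjusting for the covariates,
#    since matching rarely removes all of the imbalance.
matched <- match.data(m_out)
fit <- lm(y ~ treat + x1 + x2, data = matched, weights = weights)
summary(fit)
```

The matching step restricts the comparison to the region of overlap; the regression step adjusts for whatever imbalance remains after matching.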

Debate about genetics and school performance

Jag Bhalla points us to this article, “Differences in exam performance between pupils attending selective and non-selective schools mirror the genetic differences between them,” by Emily Smith-Woolley, Jean-Baptiste Pingault, Saskia Selzam, Kaili Rimfeld, Eva Krapohl, Sophie von Stumm, Kathryn Asbury, Philip Dale, Toby Young, Rebecca Allen, Yulia Kovas, and Robert Plomin, along with this response […]

A potential big problem with placebo tests in econometrics: they’re subject to the “difference between significant and non-significant is not itself statistically significant” issue

In econometrics, or applied economics, a “placebo test” is not a comparison of a drug to a sugar pill. Rather, it’s a sort of conceptual placebo, in which you repeat your analysis using a different dataset, or a different part of your dataset, where no intervention occurred. For example, if you’re performing some analysis studying […]
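To make the statistical point concrete, here is a toy calculation (the estimates and standard errors are made up, not from any particular study) showing how a significant main estimate and a non-significant placebo estimate can be entirely consistent with no difference between them:

```r
# Toy numbers: a "significant" main estimate next to a "non-significant"
# placebo estimate does not imply that the two estimates differ from each other.
est_main    <- 2.0; se_main    <- 1.0   # z = 2.0: statistically significant
est_placebo <- 1.0; se_placebo <- 1.0   # z = 1.0: not statistically significant

z_diff <- (est_main - est_placebo) / sqrt(se_main^2 + se_placebo^2)
round(z_diff, 2)   # 0.71: the contrast the placebo test implicitly relies on
                   # is nowhere near statistically significant
```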

What to do when your measured outcome doesn’t quite line up with what you’re interested in?

Matthew Poes writes: I’m writing a research memo discussing the importance of precisely aligning the outcome measures to the intervention activities. I’m making the point that an evaluation of the outcomes for a given intervention may net null results for many reasons, one of which could simply be that you are looking in the wrong […]

Don’t get fooled by observational correlations

Gabriel Power writes: Here’s something a little different: clever classrooms, according to which physical characteristics of classrooms cause greater learning. And the effects are large! Moving from the worst to the best design implies a gain of 67% of one year’s worth of learning! Aside from the dubiously large effect size, it looks like the […]

Discussion of effects of growth mindset: Let’s not demand unrealistic effect sizes.

Shreeharsh Kelkar writes: As a regular reader of your blog, I wanted to ask you if you had taken a look at the recent debate about growth mindset [see earlier discussions here and here] that happened on theconversation.com. Here’s the first salvo by Brooke McNamara, and then the response by Carol Dweck herself. The debate […]

The gaps between 1, 2, and 3 are just too large.

Someone who wishes to remain anonymous points to a new study by David Yeager et al. on educational mindset interventions (link from Alex Tabarrok) and asks: On the blog we talk a lot about bad practice and what not to do. Might this be an example of how *to do* things? Or did they just […]

John Hattie’s “Visible Learning”: How much should we trust this influential review of education research?

Dan Kumprey, a math teacher at Lake Oswego High School, Oregon, writes: Have you considered taking a look at the book Visible Learning by John Hattie? It seems to be permeating and informing reform in our K-12 schools nationwide. Districts are spending a lot of money sending their staffs to conferences by Solution Tree to […]

When anyone claims 80% power, I’m skeptical.

A policy analyst writes: I saw you speak at ** on Bayesian methods. . . . I had been asked to consult on a large national evaluation of . . . [details removed to preserve anonymity] . . . and had suggested treading carefully around the use of Bayesian statistics in this study (basing it […]
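A quick way to see the source of the skepticism: a power claim is conditional on an assumed effect size, and optimistic assumptions do most of the work. The numbers below are illustrative, not from the evaluation in question.

```r
# Illustrative numbers only: the same design is at "80% power" or "29% power"
# depending entirely on the effect size you are willing to assume.
power.t.test(n = 100, delta = 0.4, sd = 1, sig.level = 0.05)$power  # ~0.80, optimistic effect
power.t.test(n = 100, delta = 0.2, sd = 1, sig.level = 0.05)$power  # ~0.29, more realistic effect
```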

Let’s be open about the evidence for the benefits of open science

A reader who wishes to remain anonymous writes: I would be curious to hear your thoughts on motivated reasoning among open science advocates. In particular, I’ve noticed that papers arguing for open practices have seriously bad/nonexistent causal identification strategies. Examples: Kidwell et al. 2017, Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method […]