Archive of posts filed under the Decision Theory category.

Steven Pinker on torture

I’ve recently been thinking about that expression, “A liberal is a conservative who’s been arrested.” Linguist and public intellectual Steven Pinker got into some trouble recently when it turned out that he’d been offering expert advice to the legal team of now-disgraced financier Jeffrey Epstein. I would not condemn Pinker for this. After all, everybody […]

How to get out of the credulity rut (regression discontinuity edition): Getting beyond whack-a-mole

This one’s bugging me. We’re in a situation now with forking paths in applied-statistics-being-done-by-economists where we were, about ten years ago, in applied-statistics-being-done-by-psychologists. (I was going to use the terms “econometrics” and “psychometrics” here, but that’s not quite right, because I think these mistakes are mostly being made, by applied researchers in economics and psychology, […]

Four projects in the intellectual history of quantitative social science

1. The rise and fall of game theory. My impression is that game theory peaked in the late 1950s. Two classics from that era are Philip K. Dick’s “Solar Lottery” and R. Duncan Luce and Howard Raiffa’s “Games and Decisions.” The latter is charming in its retro attitude that all that remained were some minor […]

Linear or logistic regression with binary outcomes

Gio Circo writes: There is a paper currently floating around which suggests that, when estimating causal effects, OLS is better than any kind of generalized linear model (e.g., binomial). The author draws a sharp distinction between causal inference and prediction. Having gotten most of my statistical learning using Bayesian methods, I find this distinction […]
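The claim at stake here is easy to poke at with a quick simulation. Here is a hypothetical sketch (mine, not from the paper or the post) of why OLS on a binary outcome can still recover an average causal effect: with a single binary treatment, the OLS slope is just the difference in observed proportions, which converges to the true effect on the probability scale even when the data were generated by a logistic model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Data generated from a true logistic model with a binary treatment
t = rng.integers(0, 2, n)                    # treatment indicator
p = 1 / (1 + np.exp(-(-0.5 + 1.0 * t)))     # true P(y=1 | t)
y = (rng.random(n) < p).astype(float)

# OLS with a single binary regressor reduces to a difference in means,
# i.e., the average treatment effect on the probability scale
ols_effect = y[t == 1].mean() - y[t == 0].mean()

# True effect on the probability scale: sigmoid(0.5) - sigmoid(-0.5)
true_effect = 1 / (1 + np.exp(-0.5)) - 1 / (1 + np.exp(0.5))
```

With n this large, `ols_effect` lands within a fraction of a percentage point of `true_effect`; the linear probability model and the logistic model disagree about functional form, but not about this average.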

The Generalizer

I just saw Beth Tipton speak at the Institute of Education Sciences meeting on The Generalizer, a tool that she and her colleagues developed for designing education studies with the goal of getting inferences for the population. It’s basically MRP, but what is innovative here is the application of these ideas at the design stage. […]

Votes vs. $

Carlos Cruz writes:

On deck for the first half of 2020

Here goes: Smoothness, or lack thereof, in MRP estimates over time Open forensic science, and some general comments on the problems of legalistic thinking when discussing open science Votes vs. $ Of book reviews and selection bias This graduate student wants to learn statistics to be a better policy analyst How to “cut” using Stan, […]

The long pursuit

In a comment on our post, Using black-box machine learning predictions as inputs to a Bayesian analysis, Allan Cousins writes: I find this combination of techniques exceedingly useful when I have a lot of data on an indicator that informs me about the outcome of interest but where I have relatively sparse data about the […]

How many lobsters would you trade off for a human?

Neil Dullaghan writes: I have a strange set of correlations and am wondering if they are due to some oddity of statistics rather than real associations, but I am quite lost as to an answer. The study in brief: 3 independent surveys asking respondents how many [insert animal] would they trade for 1 human. e.g […]

External vs. internal validity of causal inference from natural experiments: The example of charter school lottery studies

Alex Hoffman writes: I recently was discussing/arguing about the value of charter schools lottery studies. I suggested that their validity was questionable because of all the data that they ignore. (1) They ignore all charter schools (and their students) that are not so oversubscribed that they need to use lotteries for admission. (2) They ignore […]

‘Sumps and rigor

Assumps and rigor, ‘sumps and rigor Go together with utmost vigor. This I tell you brother You can’t have one without the other. Assumps and rigor, ‘sumps and rigor It’s an institute you can’t figger. Ask the local gentry And they will say it’s elementary. Try, try, try to separate them It’s an illusion. Try, […]

How to think about “medical reversals”?

Bill Harris points to this press release, “Almost 400 medical practices found ineffective in analysis of 3,000 studies,” and asks: The intent seems good; does the process seem good, too? For one thing, there is patient variation, and RCTs seem focused on medians or means. Right tails can be significant. This seems related to the […]

“Pfizer had clues its blockbuster drug could prevent Alzheimer’s. Why didn’t it tell the world?”

Jon Baron points to this news article by Christopher Rowland: Pfizer had clues its blockbuster drug could prevent Alzheimer’s. Why didn’t it tell the world? A team of researchers inside Pfizer made a startling find in 2015: The company’s blockbuster rheumatoid arthritis therapy Enbrel, a powerful anti-inflammatory drug, appeared to reduce the risk of Alzheimer’s […]

Unquestionable Research Practices

Hi! (This is Dan.) The glorious Josh Loftus from NYU just asked the following question. Obviously he’s not heard of preregistration. Seriously though, it’s always good to remember that a lot of ink being spilled over hypothesis testing and its statistical brethren doesn’t mean that if we fix that we’ll fix anything. It all comes to […]

In short, adding more animals to your experiment is fine. The problem is in using statistical significance to make decisions about what to conclude from your data.

Denis Jabaudon writes: I was thinking that perhaps you could help me with the following “paradox?” that I often find myself in when discussing with students (I am a basic neuroscientist and my unit of counting is usually cells or animals): When performing a “pilot” study on say 5 animals, and finding an “almost significant” […]

The default prior for logistic regression coefficients in Scikit-learn

Someone pointed me to this post by W. D., reporting that, in Python’s popular Scikit-learn package, the default prior for logistic regression coefficients is normal(0,1)—or, as W. D. puts it, L2 penalization with a lambda of 1. In the post, W. D. makes three arguments. I agree with two of them. 1. I agree with […]
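W. D.’s identification of the penalty with a prior can be checked directly: minimizing the L2-penalized logistic loss with lambda = 1 is the same as finding the posterior mode under independent normal(0, 1) priors on the coefficients, since the two objectives differ only by an additive constant. A minimal sketch (the function names are mine, not Scikit-learn’s; y is coded as +1/−1):

```python
import numpy as np

def penalized_loss(beta, X, y, lam=1.0):
    """L2-penalized logistic loss: sum of log(1 + exp(-y * X @ beta))
    plus (lam / 2) * ||beta||^2."""
    margins = y * (X @ beta)
    return np.sum(np.log1p(np.exp(-margins))) + 0.5 * lam * beta @ beta

def neg_log_posterior(beta, X, y):
    """Negative log posterior: same logistic likelihood plus independent
    normal(0, 1) log priors, dropping the (d/2) * log(2*pi) constant."""
    margins = y * (X @ beta)
    return np.sum(np.log1p(np.exp(-margins))) + 0.5 * beta @ beta
```

With lam = 1 the two functions are identical term by term, so the penalized estimate is exactly the posterior mode under that default prior, which is W. D.’s point.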

“Machine Learning Under a Modern Optimization Lens” Under a Bayesian Lens

I (Yuling) read this new book Machine Learning Under a Modern Optimization Lens (by Dimitris Bertsimas and Jack Dunn) after I grabbed it from Andrew’s desk. Apparently machine learning is now such a wide-ranging area that we have to access it through some sub-manifold so as to evade the curse of dimensionality, and it is the same […]

“Whether something is statistically significant is itself a very random feature of data, so in this case you’re essentially outsourcing your modeling decision to a random number”

I happened to come across a post of mine that’s not scheduled until next April, and I noticed the above line, which I really liked, so I’m sharing it with you right now. The comment relates to a common procedure in statistics, where researchers decide to exclude potentially important interactions from their models, just because […]
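The “very random feature” claim is easy to demonstrate by simulation: replicate the exact same study many times and watch the significance verdict flip back and forth. A hypothetical sketch, assuming a modest true effect and a normal-approximation test (the numbers here are illustrative, not from any post):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
true_effect, n, reps = 0.2, 100, 1000   # effect of 0.2 sd, n = 100 per study

significant = []
for _ in range(reps):
    x = rng.normal(true_effect, 1.0, n)                # identical design each time
    z = x.mean() / (x.std(ddof=1) / math.sqrt(n))      # one-sample z-statistic
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    significant.append(p < 0.05)

power = np.mean(significant)
# With this effect size and sample size, roughly half the replications cross
# p < 0.05 and half don't: same underlying truth, different verdicts.
```

Basing a modeling decision on which side of 0.05 one such replication happens to land is, as the quote says, outsourcing the decision to a random number.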

No, Bayes does not like Mayor Pete. (Pitfalls of using implied betting market odds to estimate electability.)

Asher Meir points to this amusing post from Greg Mankiw, who writes: Who has the best chance of beating Donald Trump? A clue can be found using Bayes Theorem. Here is the logic. Let A be the event that a candidate wins the general election, and B be the event that a candidate wins his […]
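The arithmetic in that logic is just a conditional probability: since winning the general election implies having won the nomination, P(win general | win nomination) = P(win general) / P(win nomination). A sketch with made-up market prices (the candidate names and numbers are hypothetical, not actual 2020 odds):

```python
# Hypothetical betting-market implied probabilities (not real odds)
p_nomination = {"Candidate A": 0.40, "Candidate B": 0.10}
p_general = {"Candidate A": 0.20, "Candidate B": 0.07}

def electability(name):
    # P(win general | win nomination) = P(win general) / P(win nomination),
    # because winning the general is a subset of winning the nomination
    return p_general[name] / p_nomination[name]
```

Here the front-runner (A) gets a conditional probability of 0.5 while the long shot (B) gets 0.7, so this measure favors the long shot; whether that ratio of two noisy, thinly traded prices means anything is exactly the pitfall the post’s title warns about.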

What does a “statistically significant difference in mortality rates” mean when you’re trying to decide where to send your kid for heart surgery?

Keith Turner writes: I am not sure if you caught the big story in the New York Times last week about UNC’s pediatric heart surgery program, but part of the story made me interested to know if you had thoughts: Doctors were told that the [mortality] rate had improved in recent years, but the program […]