Archive of posts filed under the Decision Theory category.

Coronavirus “hits all the hot buttons” for promoting the scientist-as-hero narrative (cognitive psychology edition)

The New York Times continues to push the cognitive-illusion angle on coronavirus fear. Earlier this week we discussed an op-ed by social psychologist David DeSteno; today there’s a news article by that dude from Rushmore: There remains deep uncertainty about the new coronavirus’ mortality rate, with the high-end estimate that it is up to 20 […]

Expert writes op-ed in NYT recommending that we trust the experts

Asher Meir points us to this op-ed by social psychologist David DeSteno entitled, “How Fear Distorts Our Thinking About the Coronavirus: The solution isn’t to try to think more carefully. It’s to trust the experts.” DeSteno writes: When it comes to making decisions that involve risks, we humans can be irrational in quite systematic ways […]

Is there any scientific evidence that humans don’t like uncertainty?

Avram Altaras asks: Is there any scientific evidence that humans don’t like uncertainty? I think I saw that in one of Gigerenzer’s articles, and the guest lecturer talked about it last week. It’s def conventional wisdom but I’m having difficulty accepting it. I replied that I’m not sure. I guess a statement such as “humans […]

The fallacy of the excluded rationality

Malcolm Bull writes: Thanks to the work of behavioural economists there is a lot of experimental evidence to show what many of us would have suspected anyway: that people are not the rational, utility-maximisers of neoclassical economics, but loss-averse sentimentalists who, faced with even the simplest cognitive problem, prefer dodgy short cuts to careful analysis. […]

Forget about multiple testing corrections. Actually, forget about hypothesis testing entirely.

Tai Huang writes: I am reading this paper [Why we (usually) don’t have to worry about multiple comparisons, by Jennifer, Masanao, and myself]. I am searching for how to do multiple comparisons correctly under Bayesian inference for A/B/C testing. For the traditional t-test approach, a Bonferroni correction is needed to adjust the alpha value. I am confused with […]
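As a rough illustration of the two approaches (the group effects and standard errors below are made up), here is what the Bonferroni arithmetic looks like for an A/B/C test, next to a crude empirical-Bayes version of the partial pooling the paper recommends:

```python
# Bonferroni: shrink the significance threshold as comparisons multiply.
import numpy as np

alpha = 0.05
n_comparisons = 3                       # A vs B, A vs C, B vs C
print("Bonferroni-adjusted alpha:", alpha / n_comparisons)

# Partial-pooling alternative (crude empirical-Bayes sketch): instead of
# adjusting thresholds, shrink each group's estimate toward the grand mean.
estimates = np.array([0.12, 0.05, -0.02])   # hypothetical group effects
ses = np.array([0.04, 0.04, 0.04])          # hypothetical standard errors
tau2 = max(np.var(estimates, ddof=1) - np.mean(ses**2), 0.0)  # between-group variance
shrinkage = tau2 / (tau2 + ses**2)
pooled = estimates.mean() + shrinkage * (estimates - estimates.mean())
print("Partially pooled estimates:", pooled.round(3))
```

The point of the paper is that fitting one multilevel model and interpreting the shrunken estimates replaces the threshold-adjustment step entirely.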

Steven Pinker on torture

I’ve recently been thinking about that expression, “A liberal is a conservative who’s been arrested.” Linguist and public intellectual Steven Pinker got into some trouble recently when it turned out that he’d been offering expert advice to the legal team of now-disgraced financier Jeffrey Epstein. I would not condemn Pinker for this. After all, everybody […]

How to get out of the credulity rut (regression discontinuity edition): Getting beyond whack-a-mole

This one’s buggin me. We’re in a situation now with forking paths in applied-statistics-being-done-by-economists where we were, about ten years ago, in applied-statistics-being-done-by-psychologists. (I was going to use the terms “econometrics” and “psychometrics” here, but that’s not quite right, because I think these mistakes are mostly being made, by applied researchers in economics and psychology, […]

Four projects in the intellectual history of quantitative social science

1. The rise and fall of game theory. My impression is that game theory peaked in the late 1950s. Two classics from that era are Philip K. Dick’s “Solar Lottery” and R. Duncan Luce and Howard Raiffa’s “Games and Decisions.” The latter is charming in its retro attitude that all that remained were some minor […]

Linear or logistic regression with binary outcomes

Gio Circo writes: There is a paper currently floating around which suggests that, when estimating causal effects, OLS is better than any kind of generalized linear model (i.e., binomial). The author draws a sharp distinction between causal inference and prediction. Having gotten most of my statistical learning using Bayesian methods, I find this distinction […]
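As a rough illustration of the comparison (simulated data, not the paper’s analysis), here is how the OLS “linear probability model” slope lines up with the logistic regression’s average marginal effect:

```python
# Simulated binary-outcome data: compare OLS (the "linear probability
# model") with logistic regression. Both target the effect of x on Pr(y=1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 * x - 0.2)))   # true logistic model
y = rng.binomial(1, p_true)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
logit = sm.Logit(y, X).fit(disp=0)

# The OLS slope estimates the average effect on the probability scale
# directly; for the logit we compute the average marginal effect.
phat = logit.predict(X)
ame = logit.params[1] * np.mean(phat * (1 - phat))
print("OLS slope:                 ", round(float(ols.params[1]), 4))
print("Logit avg. marginal effect:", round(float(ame), 4))
```

With this data-generating process the two numbers land close together; the debate is over which to trust when the model is less well behaved.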

The Generalizer

I just saw Beth Tipton speak at the Institute of Education Sciences meeting on The Generalizer, a tool that she and her colleagues developed for designing education studies with the goal of getting inferences for the population. It’s basically MRP, but what is innovative here is the application of these ideas at the design stage. […]
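The Generalizer itself isn’t reproduced here, but as a minimal sketch of the poststratification step that MRP builds on (the cell estimates and population counts below are invented):

```python
# Poststratification: average cell-level estimates, weighted by how
# common each cell is in the target population. In MRP the cell
# estimates come from a multilevel regression; here they are made up.
import numpy as np

cell_estimates = np.array([0.62, 0.48, 0.55, 0.40])   # per-cell outcomes
cell_counts = np.array([1200, 800, 1500, 500])        # population cell sizes

print("Poststratified estimate:",
      round(np.average(cell_estimates, weights=cell_counts), 3))
```

The design-stage twist is to choose the sample so these cells are covered well in the first place, rather than patching up coverage after the data are in.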

Votes vs. $

Carlos Cruz writes:

On deck for the first half of 2020

Here goes: Smoothness, or lack thereof, in MRP estimates over time; Open forensic science, and some general comments on the problems of legalistic thinking when discussing open science; Votes vs. $; Of book reviews and selection bias; This graduate student wants to learn statistics to be a better policy analyst; How to “cut” using Stan; […]

The long pursuit

In a comment on our post, Using black-box machine learning predictions as inputs to a Bayesian analysis, Allan Cousins writes: I find this combination of techniques exceedingly useful when I have a lot of data on an indicator that informs me about the outcome of interest but where I have relatively sparse data about the […]
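One simple version of the idea, sketched below with made-up numbers, is to treat the black-box prediction as a noisy measurement of the quantity of interest and combine it with a prior via a conjugate normal update:

```python
# Treat an ML prediction as data: a noisy measurement of theta with
# a known error scale, combined with a normal prior on theta.
import numpy as np

prior_mean, prior_sd = 0.0, 1.0   # prior on the quantity of interest
ml_pred, ml_sd = 0.8, 0.5         # black-box prediction and its error scale

post_var = 1 / (1 / prior_sd**2 + 1 / ml_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + ml_pred / ml_sd**2)
print(f"Posterior: mean = {post_mean:.3f}, sd = {np.sqrt(post_var):.3f}")
```

Sparse direct data, as in Cousins’s setting, could then enter as additional measurements in the same update.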

How many lobsters would you trade off for a human?

Neil Dullaghan writes: I have a strange set of correlations and am wondering if they are due to some oddity of statistics rather than real associations, but I am quite lost as to an answer. The study in brief: 3 independent surveys asking respondents how many [insert animal] they would trade for 1 human, e.g. […]

External vs. internal validity of causal inference from natural experiments: The example of charter school lottery studies

Alex Hoffman writes: I recently was discussing/arguing about the value of charter school lottery studies. I suggested that their validity was questionable because of all the data that they ignore. (1) They ignore all charter schools (and their students) that are not so oversubscribed that they need to use lotteries for admission. (2) They ignore […]

‘Sumps and rigor

Assumps and rigor, ‘sumps and rigor Go together with utmost vigor. This I tell you brother You can’t have one without the other. Assumps and rigor, ‘sumps and rigor It’s an institute you can’t figger. Ask the local gentry And they will say it’s elementary. Try, try, try to separate them It’s an illusion. Try, […]

How to think about “medical reversals”?

Bill Harris points to this press release, “Almost 400 medical practices found ineffective in analysis of 3,000 studies,” and asks: The intent seems good; does the process seem good, too? For one thing, there is patient variation, and RCTs seem focused on medians or means. Right tails can be significant. This seems related to the […]

“Pfizer had clues its blockbuster drug could prevent Alzheimer’s. Why didn’t it tell the world?”

Jon Baron points to this news article by Christopher Rowland: Pfizer had clues its blockbuster drug could prevent Alzheimer’s. Why didn’t it tell the world? A team of researchers inside Pfizer made a startling find in 2015: The company’s blockbuster rheumatoid arthritis therapy Enbrel, a powerful anti-inflammatory drug, appeared to reduce the risk of Alzheimer’s […]

Unquestionable Research Practices

Hi! (This is Dan.) The glorious Josh Loftus from NYU just asked the following question. Obviously he’s not heard of preregistration. Seriously though, it’s always good to remember that a lot of ink being spilled over hypothesis testing and its statistical brethren doesn’t mean that if we fix that we’ll fix anything. It all comes to […]

In short, adding more animals to your experiment is fine. The problem is in using statistical significance to make decisions about what to conclude from your data.

Denis Jabaudon writes: I was thinking that perhaps you could help me with the following “paradox?” that I often find myself in when discussing things with students (I am a basic neuroscientist and my unit of counting is usually cells or animals): When performing a “pilot” study on say 5 animals, and finding an “almost significant” […]
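A quick way to see the problem is to simulate the scenario under a true null effect, with hypothetical thresholds and group sizes: test at 5 animals per group, and if the result is merely “almost significant,” add 5 more per group and test again:

```python
# Under a true null, topping up the sample whenever p is "almost
# significant" and re-testing inflates the false-positive rate above
# the nominal 0.05. Thresholds and sample sizes here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, false_pos = 20_000, 0
for _ in range(n_sims):
    a, b = rng.normal(size=5), rng.normal(size=5)
    p = stats.ttest_ind(a, b).pvalue
    if p < 0.05:
        false_pos += 1
    elif p < 0.10:   # "almost significant": collect 5 more per group
        a = np.concatenate([a, rng.normal(size=5)])
        b = np.concatenate([b, rng.normal(size=5)])
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_pos += 1

print("False-positive rate:", false_pos / n_sims)   # noticeably above 0.05
```

As the title says, the extra animals aren’t the problem; the significance-based stopping rule is.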