Archive of posts filed under the Decision Theory category.

We want certainty even when it’s not appropriate

Remember the stents example? An experiment was conducted comparing two medical procedures; the difference had a p-value of 0.20 (after a corrected analysis the p-value was 0.09), and so it was declared that the treatment had no effect. In other cases, of course, “p less than 0.10” is enough for publication in PNAS and multiple […]

Election forecasts: The math, the goals, and the incentives (my talk this Friday afternoon at Cornell University)

At the Colloquium for the Center for Applied Mathematics, Fri 18 Sep 3:30pm: Election forecasts: The math, the goals, and the incentives Election forecasting has increased in popularity and sophistication over the past few decades and has moved from being a hobby of some political scientists and economists to a major effort in the news […]

They want “statistical proof”—whatever that is!

Bert Gunter writes: I leave it to you to decide whether this is fodder for your blog: So when a plaintiff using a hiring platform encounters a problematic design feature — like platforms that check for gaps in employment — she should be able to bring a lawsuit on the basis of discrimination per se, […]

Information, incentives, and goals in election forecasts

Jessica Hullman, Christopher Wlezien, and I write: Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. We discuss challenges in understanding, communicating, and evaluating election predictions, using as examples the Economist and Fivethirtyeight forecasts of the 2020 election. Here are […]

Automatic data reweighting!

John Cook writes: Suppose you are designing an autonomous system that will gather data and adapt its behavior to that data. At first you face the so-called cold-start problem. You don’t have any data when you first turn the system on, and yet the system needs to do something before it has accumulated data. So […]

More on that Fivethirtyeight prediction that Biden might only get 42% of the vote in Florida

I’ve been chewing more on the above Florida forecast from Fivethirtyeight. Their 95% interval for the election-day vote margin in Florida is something like [+16% Trump, +20% Biden], which corresponds to an approximate 95% interval of [42%, 60%] for Biden’s share of the two-party vote. This is bugging me because it’s really hard for me […]
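The margin-to-share conversion in the excerpt is easy to check: if the two-party margin is m points (positive meaning Biden), Biden's two-party share is 50 + m/2. A minimal sketch, using only the interval endpoints quoted in the excerpt:

```python
# Convert a two-party vote margin (in percentage points, positive = Biden)
# into Biden's share of the two-party vote.
def biden_share(margin):
    return 50 + margin / 2

# Endpoints of the 95% margin interval quoted from Fivethirtyeight:
print(biden_share(-16))  # Trump +16 -> 42.0, the low end that seems implausible
print(biden_share(20))   # Biden +20 -> 60.0
```

This reproduces the approximate [42%, 60%] share interval that the post is questioning.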

Florida. Comparing Economist and Fivethirtyeight forecasts.

Here’s our current forecast for Florida: We’re forecasting 52.6% of the two-party vote for Biden, with a 95% predictive interval of approx [47.0%, 58.2%], thus an approx standard error of 2.8 percentage points. The 50% interval from the normal distribution is mean +/- 2/3 s.e., thus approx [50.7%, 54.5%]. Yes, I know these predictive distributions […]
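The interval arithmetic in the excerpt can be verified in a few lines, under the excerpt's own normal-approximation shorthand (95% interval as mean ± 2 s.e., 50% interval as mean ± 2/3 s.e.); all numbers below come from the excerpt:

```python
# Economist forecast for Biden's two-party share in Florida (from the excerpt).
mean = 52.6
lo95, hi95 = 47.0, 58.2

# Treating the 95% predictive interval as mean +/- 2 s.e.:
se = (hi95 - lo95) / 4
print(round(se, 1))  # approx 2.8 percentage points

# 50% interval under normality: mean +/- (2/3) s.e.
lo50 = round(mean - (2 / 3) * se, 1)
hi50 = round(mean + (2 / 3) * se, 1)
print(lo50, hi50)  # approx 50.7 54.5
```

The exact normal multipliers are 1.96 and 0.674 rather than 2 and 2/3; the rounded values match the [50.7%, 54.5%] interval stated in the post.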

53 fever!

One thing statisticians love is a story about people acting foolish around probability . . . Paul Alper points us to this news article by David Robson: Fifteen years ago, the people of Italy experienced a strange kind of mass hysteria known as “53 fever”. The madness centred on the country’s lottery. Players can choose […]

Decision-making under uncertainty: heuristics vs models

This post is by Phil Price, not Andrew. Sometimes it’s worth creating a complicated statistical model that can help you make a decision; other times it isn’t. As computer power has improved and modeling capabilities have increased, more and more decisions shift into the category in which it’s worth making a complicated model, but often […]

Heckman Curve Update Update

tl;dr: “The policy conclusion we draw from our analysis is that age is not a short cut for identifying where governments should, or should not, invest. There are many well‐studied interventions for children that are worthy candidates for public funding based on efficiency considerations. However, the same is also true of many interventions targeting youth […]

Probabilistic forecasts cause general misunderstanding. What to do about this?

The above image, taken from a site at the University of Virginia, illustrates a problem with political punditry: There’s a demand for predictions, and there’s no shortage of outlets promising a “crystal ball” or some other sort of certainty. Along these lines, Elliott Morris points us to this very reasonable post, “Poll-Based Election Forecasts Will […]

Getting all negative about so-called average power

Blake McShane writes: The idea of retrospectively estimating the average power of a set of studies via meta-analysis has recently been gaining a ton of traction in psychology and medicine. This seems really bad for two reasons: 1. Proponents claim average power is a “replicability estimate” and that it estimates the rate of replicability “if […]

On deck through Jan 2021

This should keep you busy through the end of the year . . . Many of these posts were originally scheduled earlier but then got bumped because of coronavirus and other topical material. The typical set and its relevance to Bayesian computation; Getting negative about the critical positivity ratio: when you talk about throwing out […]

Thinking about election forecast uncertainty

Some twitter action: Elliott Morris, my collaborator (with Merlin Heidemanns) on the Economist election forecast, pointed me to some thoughtful criticisms of our model from Nate Silver. There’s some discussion on twitter, but in general I don’t find twitter to be a good place for careful discussion, so I’m continuing the conversation here. Nate writes: […]

Negativity (when applied with rigor) requires more care than positivity.

Tyler Cowen writes: Avoid criticizing other public intellectuals. In fact, avoid the negative as much as possible. However pressing a social or economic issue may be, there is almost always a positive and constructive way to reframe your potential contribution. This also will force you to keep on thinking harder, because it is easier to […]

Would we be better off if randomized clinical trials had never been born?

This came up in discussion the other day. In statistics and medicine, we’re generally told to rely when possible on the statistical significance (or lack of statistical significance) of results from randomized trials. But, as we know, statistical significance has all sorts of problems, most notably that it ignores questions of cost and benefit, and […]

Further debate over mindset interventions

Following up on this post, “Study finds ‘Growth Mindset’ intervention taking less than an hour raises grades for ninth graders,” commenter D points us to this post by Russell Warne that’s critical of research on growth mindset. Here’s Warne: Do you believe that how hard you work to learn something is more important than […]

“To Change the World, Behavioral Intervention Research Will Need to Get Serious About Heterogeneity”

Beth Tipton, Chris Bryan, and David Yeager write: The increasing influence of behavioral science in policy has been a hallmark of the past decade, but so has a crisis of confidence in the replicability of behavioral science findings. In this essay, we describe a nascent paradigm shift in behavioral intervention research—a heterogeneity revolution—that we believe […]

Priors on effect size in A/B testing

I just saw this interesting applied-focused post by Kaiser Fung on non-significance in A/B testing. Kaiser was responding to a post by Ron Kohavi. I can’t find Kohavi’s note anywhere, but you can read Kaiser’s post to get the picture. Here I want to pick out a few sentences from Kaiser’s post: Kohavi correctly points […]

No, I don’t believe that claim based on regression discontinuity analysis that . . .

tl;dr. See point 4 below. Despite the p-less-than-0.05 statistical significance of the discontinuity in the above graph, no, I do not believe that losing a close election causes U.S. governors to lose 5-10 years of life, as was claimed in this recently published article. Or, to put it another way: Despite the p-less-than-0.05 statistical significance of […]