How to think about “medical reversals”?

Bill Harris points to this press release, “Almost 400 medical practices found ineffective in analysis of 3,000 studies,” and asks: The intent seems good; does the process seem good, too? For one thing, there is patient variation, and RCTs seem focused on medians or means. Right tails can be significant. This seems related to the […]

“There is this magic that our DNA enables”

I was just at a talk where a computer scientist was making dramatic claims for the value of human decision making. This is typically a crowd-pleasing position to take—after all, we in the audience are humans and we want to hear how great we are. What really riled me was when the speaker said, “There […]

The tone problem in psychology

Are you tone deaf? Find out here. P.S. Link updated. I guess things can change in 6 months!

Measuring Fraud and Fairness (Sharad Goel’s two talks at Columbia next week)

MONDAY DSI TALK: One Person, One Vote. Abstract: About a quarter of Americans report believing that double voting is a relatively common occurrence, casting doubt on the integrity of elections. But, despite a dearth of documented instances of double voting, it’s hard to know how often such fraud really occurs (people might just be good […]

“Pfizer had clues its blockbuster drug could prevent Alzheimer’s. Why didn’t it tell the world?”

Jon Baron points to this news article by Christopher Rowland: Pfizer had clues its blockbuster drug could prevent Alzheimer’s. Why didn’t it tell the world? A team of researchers inside Pfizer made a startling find in 2015: The company’s blockbuster rheumatoid arthritis therapy Enbrel, a powerful anti-inflammatory drug, appeared to reduce the risk of Alzheimer’s […]

What’s wrong with null hypothesis significance testing

Following up on yesterday’s post, “What’s wrong with Bayes”: My problem is not just with the methods—although I do have problems with the methods—but also with the ideology. My problem with the method: You’ve heard this a few zillion times before, and not just from me. Null hypothesis significance testing collapses the wavefunction too soon, […]

“Would Republicans pay a price if they vote to impeach the president? Here’s what we know from 1974.”

I better post this one now because it might not be so relevant in 6 months . . . Bob Erikson answers the question, “Would Republicans pay a price if they vote to impeach the president? Here’s what we know from 1974.” The conclusion: “Nixon loyalists paid the price—not Republicans who voted to impeach.” This […]

What’s wrong with Bayes

My problem is not just with the methods—although I do have problems with the methods—but also with the ideology. My problem with the method: It’s the usual story. Bayesian inference is model-based. Your model will never be perfect, and if you push hard you can find the weak points and magnify them until you get […]

Hey—the 2nd-best team in baseball is looking for a Bayesian!

Sarah Gelles writes: We are currently looking to hire a Bayesian Statistician to join the Houston Astros’ Research & Development team. They would join a growing, cutting-edge R&D team that consists of analysts from a variety of backgrounds and which is involved in all key baseball decisions at the Astros. Here’s a link to the […]

What’s wrong with Bayes; What’s wrong with null hypothesis significance testing

This will be two posts: tomorrow, “What’s wrong with Bayes”; the day after tomorrow, “What’s wrong with null hypothesis significance testing.” My problem in each case is not just with the methods—although I do have problems with the methods—but also with the ideology. A future post or article: Ideologies of Science: Their Advantages and Disadvantages.

“Some call it MRP, some Mister P, but the full name is . . .”

Jim Savage points us to this explainer, How do pollsters predict UK general election results?, by John Burn-Murdoch of the Financial Times. It’s bittersweet seeing my method described by some person I’ve never met. Little baby MRP is all grown up! Being explained by the Financial Times—that’s about as good as being in the Guardian […]

Don’t believe people who say they can look at your face and tell that you’re lying.

Kevin Lewis points us to this article, Lessons From Pinocchio: Cues to Deception May Be Highly Exaggerated, by Timothy Luke, which begins: Deception researchers widely acknowledge that cues to deception—observable behaviors that may differ between truthful and deceptive messages—tend to be weak. Nevertheless, several deception cues have been reported with unusually large effect sizes, and […]

What comes after Vixra?

OK, so Arxiv publishes anything. But some things are so cranky that Arxiv won’t publish them, so they go on Vixra. Here’s my question: where do the people who can’t publish on Vixra publish? The cranks’ cranks, as it were? It’s a Cantor’s corner kinda thing.

When speculating about causes of trends in mortality rates: (a) make sure that what you’re trying to explain has actually been happening, and (b) be clear where your data end and your speculations begin.

A reporter writes: I’d be very interested in getting your take on this recent paper. I am immensely skeptical of it. That’s not to say many Trump supporters aren’t racist! But we’re now going to claim that this entire rise in all-cause mortality can be attributed to the false sense of lost status? So so […]

In short, adding more animals to your experiment is fine. The problem is in using statistical significance to make decisions about what to conclude from your data.

Denis Jabaudon writes: I was thinking that perhaps you could help me with the following “paradox?” that I often find myself in when discussing with students (I am a basic neuroscientist and my unit of counting is usually cells or animals): When performing a “pilot” study on say 5 animals, and finding an “almost significant” […]
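
To illustrate the problem with significance-based stopping (a sketch of my own, not from the post; the numbers are made up): if you test after a pilot of 5 animals and, when that is not significant, add 5 more and test again, the false-positive rate exceeds the nominal 5% even when there is no true effect.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims = 5000
false_positives = 0

for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, size=10)  # simulated null: no true effect
    p_pilot = stats.ttest_1samp(data[:5], 0.0).pvalue  # look 1: pilot, 5 animals
    p_full = stats.ttest_1samp(data, 0.0).pvalue       # look 2: add 5 more animals
    if p_pilot < 0.05 or p_full < 0.05:                # declare a "finding" at either look
        false_positives += 1

print(f"false-positive rate with two looks: {false_positives / n_sims:.3f}")
print("nominal rate for a single test: 0.050")

The point is not that the extra animals are bad (the extra data genuinely help estimation); it is the accept/reject decision rule that misbehaves.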

The default prior for logistic regression coefficients in Scikit-learn

Someone pointed me to this post by W. D., reporting that, in Python’s popular Scikit-learn package, the default prior for logistic regression coefficients is normal(0,1)—or, as W. D. puts it, L2 penalization with a lambda of 1. In the post, W. D. makes three arguments. I agree with two of them. 1. I agree with […]
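
For concreteness, here is a minimal sketch of the default in question (my own toy example, not W. D.’s): scikit-learn’s LogisticRegression silently applies L2 penalization with C = 1.0, where C is the inverse regularization strength, i.e., lambda = 1.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)  # toy data, purely for illustration
X = rng.normal(size=(100, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=100) > 0).astype(int)

penalized = LogisticRegression().fit(X, y)         # defaults: penalty="l2", C=1.0
unpenalized = LogisticRegression(C=1e8).fit(X, y)  # effectively no penalty

print("default (penalized) coefficients:", penalized.coef_)
print("nearly unpenalized coefficients: ", unpenalized.coef_)

The shrinkage toward zero in the first fit is the normal(0,1)-prior behavior that a user who never asked for regularization may not expect.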

Controversies in vaping statistics, leading to a general discussion of dispute resolution in science

Episode 2. Brad Rodu writes: The Journal of the American Heart Association on June 5, 2019, published a bogus research article, “Electronic cigarette use and myocardial infarction among adults in the US Population Assessment of Tobacco and Health [PATH],” by Dharma N. Bhatta and Stanton A. Glantz (here). Drs. Bhatta and Glantz used PATH Wave […]

“Life Expectancy and Mortality Rates in the United States, 1959-2017”

A reporter pointed me to this article, Life Expectancy and Mortality Rates in the United States, 1959-2017, by Steven Woolf and Heidi Schoomaker, and asked: Are the findings new? Can you subdivide the data, looking at small populations such as middle-aged people in Wyoming, and still have validity? Can you make valid inferences about causes and […]

Why “bigger sample size” is not usually where it’s at.

Aidan O’Gara writes: I realized when reading your JAMA chocolate study post that I don’t understand a very fundamental claim made by people who want better social science: Why do we need bigger sample sizes? The p-value is always going to be 0.05, so a sample of 10 people is going to turn up a […]
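
A quick simulation of the standard answer (my sketch, not from the post): the 0.05 threshold stays fixed, but the noise in the estimate shrinks like 1/sqrt(n), so larger samples give estimates precise enough to mean something, while tiny samples only reach significance when the estimate is wildly exaggerated.

import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.1  # a small true effect, in sd units (made up for illustration)

for n in [10, 100, 1000, 10000]:
    draws = rng.normal(true_effect, 1.0, size=(5000, n))  # 5000 studies of size n
    estimates = draws.mean(axis=1)
    print(f"n = {n:5d}: sd of the estimate = {estimates.std():.3f} "
          f"(theory: 1/sqrt(n) = {1 / np.sqrt(n):.3f})")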

Some recent progress in the Stan community

Bob writes in with a partial list of recent developments in the Stan community. Governance: The interim Stan governing body stepped down and was replaced with a new board elected by the developer community. Funding: Stan receives millions of dollars annually in grants, gifts, and in-kind contributions across its global developer base. Releases: Stable quarterly […]