Archive of posts filed under the Zombies category.

The status-reversal heuristic

A while ago we came up with the time-reversal heuristic, which was a reaction to the common situation that there’s a noisy study, followed by an unsuccessful replication, but all sorts of people want to take the original claim as the baseline and construct high walls to make it difficult to move away from that claim. […]

A heart full of hatred: 8 schools edition

No; I was all horns and thorns / Sprung out fully formed, knock-kneed and upright — Joanna Newsom. Far be it from me to be accused of liking things. Let me, instead, present a corner of my hateful heart. (That is to say that I’m supposed to be doing a really complicated thing right now and […]

On the term “self-appointed” . . .

I was reflecting on what bugs me so much about people using the term “self-appointed” (for example, when disparaging “self-appointed data police” or “self-appointed chess historians”). The obvious question when someone talks about “self-appointed” whatever is, Who self-appointed you to decide who is illegitimately self-appointed? But my larger concern is with the idea that being […]

Elsevier > Association for Psychological Science

Everyone dunks on Elsevier. But here’s a case where they behaved well. Jordan Anaya points us to this article from Retraction Watch: In May, [psychology professor Barbara] Fredrickson was last author of a paper in Psychoneuroendocrinology claiming to show that loving-kindness meditation slowed biological aging, specifically that it kept telomeres — which protect chromosomes — […]

Are statistical nitpickers (e.g., Kaiser Fung and me) getting in the way of progress or even serving the forces of evil?

As Ira Glass says, today we have a theme and some variations on this theme. Statistical nitpickers: Do they cause more harm than good? I’d like to think we cause more good than harm, but today I want to consider the counter-argument, that, even when we are correct on the technical merits, we statisticians should […]

More on that 4/20 road rage researcher: Dude could be a little less amused, a little more willing to realize he could be on the wrong track with a lot of his research.

So, back on 4/20 we linked to the post by Sam Harper and Adam Palayew shooting down a silly article, published in JAMA and publicized around the world, that claimed excess road deaths on 4/20 (“cannabis day”). I googled the authors of that silly JAMA paper and found that one of them, Dr. Donald Redelmeier, […]

Kaiser Fung suggests “20 paper ideas pre-approved for prestigious journals”

I got to thinking about this after reading a post from Kaiser Fung “offering up 20 paper ideas pre-approved for prestigious journals.” What happened is that JAMA published a silly paper claiming a 12 percent increase in fatal car crashes on April 20 (“420 day,” the unofficial marijuana holiday). Following Sam Harper and Adam Palayew, […]

P-value of 10^-74 disappears

Nick Matzke writes: Given the recent discussion of p-values, you or colleagues might find this interesting: “Population Genetics: Why structure matters,” by Nick Barton, Joachim Hermisson, and Magnus Nordborg: One possibility is to compare the population estimates with estimates taken from sibling data, which should be relatively unbiased by environmental differences. In one of many examples of […]

Schoolmarms and lightning bolts: Data faker meets Edge foundation in an unintentional reveal of problems with the Great Man model of science

Hey—I happened to run across an article by Virginia Heffernan on the now-notorious Edge foundation, and it contained a link to all sorts of people . . . including Marc Hauser, the disgraced primatologist who we’ve discussed in this space from time to time. Here’s an Edge article by Hauser in 2002—almost a decade before […]

Junk science and fake news: Similarities and differences

Jingyi Kenneth Tay writes: As I read your recent post, “How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” . . . and still stays around even after it’s been retracted, I realized that there are many similarities between this and fake news: how it is much easier to put fake news out […]

They misreport their experiments and don’t fess up when they’ve been caught.

Javier Benitez points us to this paper, “COMPare: Qualitative analysis of researchers’ responses to critical correspondence on a cohort of 58 misreported trials,” by Ben Goldacre, Henry Drysdale, Cicely Marston, Kamal Mahtani, Aaron Dale, Ioan Milosevic, Eirion Slade, Philip Hartley and Carl Heneghan, who write: Discrepancies between pre-specified and reported outcomes are an important and […]

Dan’s Paper Corner: Can we model scientific discovery and what can we learn from the process?

Jesus taken serious by the many / Jesus taken joyous by a few / Jazz police are paid by J. Paul Getty / Jazzers paid by J. Paul Getty II — Leonard Cohen. So I’m trying a new thing because like no one is really desperate for another five thousand word essay about whatever happens to be on my […]

“Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science”

As promised, let’s continue yesterday’s discussion of Christopher Tong’s article, “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.” First, the title, which makes an excellent point. It can be valuable to think about measurement, comparison, and variation, even if commonly-used statistical methods can mislead. This reminds me of the idea in decision analysis […]

Harking, Sharking, Tharking

Bert Gunter writes: You may already have seen this [“Harking, Sharking, and Tharking: Making the Case for Post Hoc Analysis of Scientific Data,” John Hollenbeck, Patrick Wright]. It discusses many of the same themes that you and others have highlighted in the special American Statistician issue and elsewhere, but does so from a slightly different […]

“Boston Globe Columnist Suspended During Investigation Of Marathon Bombing Stories That Don’t Add Up”

I came across this news article by Samer Kalaf and it made me think of some problems we’ve been seeing in recent years involving cargo-cult science. Here’s the story: The Boston Globe has placed columnist Kevin Cullen on “administrative leave” while it conducts a review of his work, after WEEI radio host Kirk Minihane scrutinized […]

Deterministic thinking (“dichotomania”): a problem in how we think, not just in how we act

This has come up before:
– Basketball Stats: Don’t model the probability of win, model the expected score differential.
– Econometrics, political science, epidemiology, etc.: Don’t model the probability of a discrete outcome, model the underlying continuous variable
– Thinking like a statistician (continuously) rather than like a civilian (discretely)
– Message to Booleans: It’s […]

Exchange with Deborah Mayo on abandoning statistical significance

The philosopher wrote: The big move in the statistics wars these days is to fight irreplication by making it harder to reject, and find evidence against, a null hypothesis. Mayo is referring to, among other things, the proposal to “redefine statistical significance” as p less than 0.005. My colleagues and I do not actually like […]

A world of Wansinks in medical research: “So I guess what I’m trying to get at is I wonder how common it is for clinicians to rely on med students to do their data analysis for them, and how often this work then gets published”

In the context of a conversation regarding sloppy research practices, Jordan Anaya writes: It reminds me of my friends in residency. Basically, while they were med students for some reason clinicians decided to get them to analyze data in their spare time. I’m not saying my friends are stupid, but they have no stats or […]

It’s not just p=0.048 vs. p=0.052

Peter Dorman points to this post on statistical significance and p-values by Timothy Taylor, editor of the Journal of Economic Perspectives, a highly influential publication of the American Economic Association. I have some problems with what Taylor writes, but for now I’ll just take it as representing a certain view, the perspective of a thoughtful […]

He says it again, but more vividly.

We’ve discussed Clarke’s third law (“Any sufficiently crappy research is indistinguishable from fraud”) and that, to do good science, honesty and transparency are not enough. James Heathers says it again, vividly. I don’t know if Heathers has ever written anything about the notorious study in which participants were invited to stick 51 pins into a […]