Archive of posts filed under the Zombies category.

How post-hoc power calculation is like a shit sandwich

Damn. This story makes me so frustrated I can’t even laugh. I can only cry. Here’s the background. A few months ago, Aleksi Reito (who sent me the adorable picture above) pointed me to a short article by Yanik Bababekov, Sahael Stapleton, Jessica Mueller, Zhi Fong, and David Chang in Annals of Surgery, “A Proposal […]

MRP (multilevel regression and poststratification; Mister P): Clearing up misunderstandings about

Someone pointed me to this thread where I noticed some issues I’d like to clear up: David Shor: “MRP itself is like, a 2009-era methodology.” Nope. The first paper on MRP was from 1997. And, even then, the component pieces were not new: we were just basically combining two existing ideas from survey sampling: regression […]

On deck for the first half of 2019

OK, this is what we’ve got for you: “The Book of Why” by Pearl and Mackenzie Reproducibility and Stan MRP (multilevel regression and poststratification; Mister P): Clearing up misunderstandings about Becker on Bohm on the important role of stories in science This is one offer I can refuse How post-hoc power calculation is like a […]

What to do when you read a paper and it’s full of errors and the author won’t share the data or be open about the analysis?

Someone writes: I would like to ask you for advice regarding obtaining data for reanalysis purposes from an author who has multiple papers with statistical errors and doesn’t want to share the data. Recently, I reviewed a paper in which some of the reported statistics were mathematically impossible. As the […]

Authority figures in psychology spread more happy talk, still don’t get the point that much of the published, celebrated, and publicized work in their field is no good (Part 2)

Part 1 was here. And here’s Part 2. Jordan Anaya reports: Uli Schimmack posted this on facebook and twitter. I [Anaya] was annoyed to see that it mentions “a handful” of unreliable findings, and points the finger at fraud as the cause. But then I was shocked to see the 85% number for the Many […]

A couple of thoughts regarding the hot hand fallacy fallacy

For many years we all believed the hot hand was a fallacy. It turns out we were all wrong. Fine. Such reversals happen. Anyway, now that we know the score, we can reflect on some of the cognitive biases that led us to stick with the “hot hand fallacy” story for so long. Jason Collins […]

Latour Sokal NYT

Alan Sokal writes: I don’t know whether you saw the NYT Magazine’s fawning profile of sociologist of science Bruno Latour about a month ago. I wrote to the author, and later to the editor, to critique the gross lack of balance (and even of the most minimal fact-checking). No reply. So I posted my critique […]

Niall Ferguson and the perils of playing to your audience

History professor Niall Ferguson had another case of the sillies. Back in 2012, in response to Stephen Marche’s suggestion that Ferguson was serving up political hackery because “he has to please corporations and high-net-worth individuals, the people who can pay 50 to 75K to hear him talk,” I wrote: But I don’t think it’s just […]

These 3 problems destroy many clinical trials (in context of some papers on problems with non-inferiority trials, or problems with clinical trials in general)

Paul Alper points to this news article in Health News Review, which says: A news release or story that proclaims a new treatment is “just as effective” or “comparable to” or “as good as” an existing therapy might spring from a non-inferiority trial. Technically speaking, these studies are designed to test whether an intervention is […]

“Using numbers to replace judgment”

Julian Marewski and Lutz Bornmann write: In science and beyond, numbers are omnipresent when it comes to justifying different kinds of judgments. Which scientific author, hiring committee-member, or advisory board panelist has not been confronted with page-long “publication manuals”, “assessment reports”, “evaluation guidelines”, calling for p-values, citation rates, h-indices, or other statistics in order to […]

The State of the Art

Christie Aschwanden writes: Not sure you will remember, but last fall at our panel at the World Conference of Science Journalists I talked with you and Kristin Sainani about some unconventional statistical methods being used in sports science. I’d been collecting material for a story, and after the meeting I sent the papers to Kristin. […]

Robustness checks are a joke

Someone pointed to this post from a couple years ago by Uri Simonsohn, who correctly wrote: Robustness checks involve reporting alternative specifications that test the same hypothesis. Because the problem is with the hypothesis, the problem is not addressed with robustness checks. Simonsohn followed up with an amusing story: To demonstrate the problem I [Simonsohn] […]

Chocolate milk! Another stunning discovery from an experiment on 24 people!

Mike Hull writes: I was reading over this JAMA Brief Report and could not figure out what they were doing with the composite score. Here are the cliff notes: Study tested milk vs dark chocolate consumption on three eyesight performance parameters: (1) High-contrast visual acuity (2) Small-letter contrast sensitivity (3) Large-letter contrast sensitivity Only small-letter […]

“Recapping the recent plagiarism scandal”

Benjamin Carlisle writes: A year ago, I received a message from Anna Powell-Smith about a research paper written by two doctors from Cambridge University that was a mirror image of a post I wrote on my personal blog roughly two years prior. The structure of the document was the same, as was the rationale, the […]

The purported CSI effect and the retroactive precision fallacy

Regarding our recent post on the syllogism that ate science, someone points us to this article, “The CSI Effect: Popular Fiction About Forensic Science Affects Public Expectations About Real Forensic Science,” by N. J. Schweitzer and Michael J. Saks. We’ll get to the CSI Effect in a bit, but first I want to share the […]

Cornell prof (but not the pizzagate guy!) has one quick trick to getting 1700 peer reviewed publications on your CV

From the university webpage: Robert J. Sternberg is Professor of Human Development in the College of Human Ecology at Cornell University. . . . Sternberg is the author of over 1700 refereed publications. . . . How did he compile over 1700 refereed publications? Nick Brown tells the story: I [Brown] was recently contacted by […]

An actual quote from a paper published in a medical journal: “The data, analytic methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure.”

Someone writes: So the NYT yesterday has a story about this study. I am directed to it and am immediately concerned about all the things that make this study somewhat dubious: forking paths in the definition of the independent variable, sample selection in who wore the accelerometers, ignorance of the undoubtedly huge importance of interactions […]

“Fudged statistics on the Iraq War death toll are still circulating today”

Mike Spagat shares this story entitled, “Fudged statistics on the Iraq War death toll are still circulating today,” which discusses problems with a paper published in a scientific journal in 2006, and errors that a reporter inadvertently included in a recent news article. Spagat writes: The Lancet could argue that if [Washington Post reporter Philip] […]

Statistical Modeling, Causal Inference, and Social Science Regrets Its Decision to Hire Cannibal P-hacker as Writer-at-Large

It is not easy to admit our mistakes, particularly now, given the current media climate and general culture of intolerance on college campuses. Still, we feel that we owe our readers an apology. We should not have hired Cannibal P-hacker, an elegant scientist and thinker who, we have come to believe, after serious consideration, does […]

Don’t calculate post-hoc power using observed estimate of effect size

Aleksi Reito writes: The statement below was included in a recent issue of Annals of Surgery: But, as 80% power is difficult to achieve in surgical studies, we argue that the CONSORT and STROBE guidelines should be modified to include the disclosure of power—even if less than 80%—with the given sample size and effect size […]