A world of Wansinks in medical research: “So I guess what I’m trying to get at is I wonder how common it is for clinicians to rely on med students to do their data analysis for them, and how often this work then gets published”

In the context of a conversation regarding sloppy research practices, Jordan Anaya writes: It reminds me of my friends in residency. Basically, while they were med students, for some reason clinicians decided to get them to analyze data in their … Continue reading

(People are missing the point on Wansink, so) what’s the lesson we should be drawing from this story?

People pointed me to various recent news articles on the retirement from the Cornell University business school of eating-behavior researcher and retraction king Brian Wansink. I particularly liked this article by David Randall—not because he quoted me, but because he … Continue reading

I fear that many people are drawing the wrong lessons from the Wansink saga, focusing on procedural issues such as “p-hacking” rather than scientifically more important concerns about empty theory and hopelessly noisy data. If your theory is weak and your data are noisy, all the preregistration in the world won’t save you.
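To make the noisy-data point concrete, here is a minimal simulation sketch (all numbers are hypothetical, not drawn from any study discussed here): with a small true effect and noisy measurements, even a single fully preregistered comparison has low power, and the estimates that do cross the significance threshold overstate the true effect several-fold. This is the type M (magnitude) error problem.

```python
# Minimal sketch: preregistration without statistical power.
# A small true effect, noisy data, and one fixed (no p-hacking) analysis.
# All parameter values below are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1   # small real effect (assumed)
sigma = 1.0         # noisy measurements
n = 50              # per-group sample size
n_sims = 10_000

exaggeration = []
for _ in range(n_sims):
    treat = rng.normal(true_effect, sigma, n)
    control = rng.normal(0.0, sigma, n)
    est = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / n + control.var(ddof=1) / n)
    if abs(est / se) > 1.96:            # "statistically significant"
        exaggeration.append(abs(est) / true_effect)

print(f"power ~ {len(exaggeration) / n_sims:.2%}")
print(f"significant estimates overstate the effect by ~{np.mean(exaggeration):.1f}x")
```

With these settings the comparison reaches significance only about 7% of the time, and the significant estimates average roughly four times the true effect. No amount of procedural hygiene fixes that.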

Someone pointed me to this news article by Tim Schwab, “Brian Wansink: Data Masseur, Media Villain, Emblem of a Thornier Problem.” Schwab writes: If you look into the archives of your favorite journalism outlet, there’s a good chance you’ll find … Continue reading

The connection between junk science and sloppy data handling: Why do they go together?

Nick Brown pointed me to a new paper, “The Impact of Incidental Environmental Factors on Vote Choice: Wind Speed is Related to More Prevention-Focused Voting,” to which his reaction was, “It makes himmicanes look plausible.” Indeed, one of the authors … Continue reading

What data to include in an analysis? Not always such an easy question. (Elliott Morris / Nate Silver / Rasmussen polls edition)

Someone pointed me to a recent post by Nate Silver, “Polling averages shouldn’t be political litmus tests, and they need consistent standards, not make-it-up-as-you-go,” where Nate wrote: The new Editorial Director of Data Analytics at ABC News, G. Elliott Morris, … Continue reading
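As a toy illustration of why the inclusion question has no neutral answer, here is a sketch with made-up numbers (the pollster names, margins, and house effect below are hypothetical, not taken from the Silver/Morris dispute): dropping a pollster with a known house effect, keeping it as-is, and adjusting for the house effect give three different averages, so "just average the polls" is itself a modeling choice.

```python
# Toy sketch: the same polls, three different averages depending on the
# inclusion rule. All pollsters, margins, and house effects are made up.
polls = [
    {"pollster": "A", "margin": 2.0},
    {"pollster": "B", "margin": 3.0},
    {"pollster": "C", "margin": 2.5},
    {"pollster": "Outlier", "margin": -1.0},  # hypothetical house effect of -3
]
house_effects = {"Outlier": -3.0}

def average(ps):
    return sum(p["margin"] for p in ps) / len(ps)

keep = average(polls)
drop = average([p for p in polls if p["pollster"] not in house_effects])
adjust = average([
    {**p, "margin": p["margin"] - house_effects.get(p["pollster"], 0.0)}
    for p in polls
])
print(f"keep as-is: {keep:+.2f}, drop: {drop:+.2f}, adjust: {adjust:+.2f}")
```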

They came in through the window: The migration of tech hype from the fringes to the media and academic mainstream

Palko points to a ten-year-old post on 3-D printing. Here he is back in 2013: We’re talking about people (particularly journalists) who have an emotional, gee-whiz reaction to technology without really thinking seriously about the functionality. [They] can be spotted … Continue reading

Joe Simmons, Leif Nelson, and Uri Simonsohn agree with us regarding the much-publicized but implausible and unsubstantiated claims of huge effects from nudge interventions

We wrote about this last year in our post, "PNAS GIGO QRP WTF: This meta-analysis of nudge experiments is approaching the platonic ideal of junk science," and our follow-up PNAS article, No reason to expect large and consistent effects of … Continue reading
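Here is a minimal sketch of the GIGO mechanism (parameters hypothetical, not estimated from the actual nudge literature): when studies enter a meta-analysis only if they found a positive, statistically significant effect, a naive average of the published estimates is inflated several-fold even when the true effect is tiny.

```python
# Minimal sketch: meta-analyzing studies selected for significance.
# The true effect, sample sizes, and selection rule are all hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.05   # tiny true effect (assumed)
n = 100              # per-arm sample size in each study
n_studies = 5_000

published = []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    est = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / n + control.var(ddof=1) / n)
    if est / se > 1.96:   # only positive, "significant" studies get published
        published.append(est)

print(f"true effect: {true_effect}")
print(f"naive mean of published effects: {np.mean(published):.3f} "
      f"({len(published)} of {n_studies} studies published)")
```

The average of what got published says more about the selection rule than about the interventions.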

Gaurav Sood’s review of the book Noise by Kahneman et al.: In writing about noise in human judgment, the authors didn’t wrestle with the problem of noise in behavioral-science research. But behavioral-science research is the product of human judgment.

Here it is. This should interest some of you. Gaurav makes a convincing case that: 1. The main topic of the book—capriciousness in human judgment—is important, it’s worth a book, and the authors (Kahneman, Sibony, and Sunstein) have an interesting … Continue reading