Search results for “file drawer”

“Identification of and correction for publication bias,” and another discussion of how forking paths is not the same thing as file drawer

Max Kasy and Isaiah Andrews sent along this paper, which begins: Some empirical results are more likely to be published than others. Such selective publication leads to biased estimates and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study’s results, the first based on […]
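
The basic mechanism is easy to see in a toy simulation. Here’s a minimal sketch (all numbers made up; this is not the Andrews–Kasy estimator, just an illustration of why selective publication biases the published record):

```python
import random
import statistics

random.seed(1)

# Made-up numbers: every study estimates the same true effect of 0.2
# with standard error 0.5. "Significant" results (|z| > 1.96) always get
# published; nonsignificant ones get published only 10% of the time.
true_effect, se = 0.2, 0.5
published = []
for _ in range(50000):
    estimate = random.gauss(true_effect, se)
    if abs(estimate / se) > 1.96 or random.random() < 0.10:
        published.append(estimate)

# The mean of the published estimates lands far above the true effect.
print(round(statistics.mean(published), 2))
```

In this setup the average published estimate comes out well above the true 0.2, which is exactly the “biased estimates and distorted inference” the abstract is talking about.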

No, I don’t think it’s the file drawer effect

Someone named Andrew Certain writes: I’ve been reading your blog since your appearance on Econtalk . . . explaining the ways in which statistics are misused/misinterpreted in low-sample/high-noise studies. . . . I recently came across a meta-analysis on stereotype threat [a reanalysis by Emil Kirkegaard] that identified a clear relationship between smaller sample […]

The file drawer’s on fire!

Kevin Lewis sends along this article, commenting, “That’s one smokin’ file drawer!” Here’s the story, courtesy of Clayton Velicer, Gideon St. Helen, and Stanton Glantz: We examined the relationship between the tobacco industry and the journal Regulatory Toxicology and Pharmacology (RTP) using the Truth Tobacco Industry Documents Library and internet sources. We determined the funding […]

When we talk about the “file drawer,” let’s not assume that an experiment can easily be characterized as producing strong, mixed, or weak results

Neil Malhotra: I thought you might be interested in our paper [the paper is by Annie Franco, Neil Malhotra, and Gabor Simonovits, and the link is to a news article by Jeffrey Mervis], forthcoming in Science, about publication bias in the social sciences given your interest and work on research transparency. Basic summary: We examined […]

Will decentralised collaboration increase the robustness of scientific findings in biomedical research? Some data and some causal questions.

Mark Tuttle points to this press release, “Decentralising science may lead to more reliable results: Analysis of data on tens of thousands of drug-gene interactions suggests that decentralised collaboration will increase the robustness of scientific findings in biomedical research,” and writes: In my [Tuttle’s] opinion, the explanation is more likely to be sociological – group […]

Controversies in vaping statistics, leading to a general discussion of dispute resolution in science

Episode 2. Brad Rodu writes: The Journal of the American Heart Association on June 5, 2019, published a bogus research article, “Electronic cigarette use and myocardial infarction among adults in the US Population Assessment of Tobacco and Health [PATH],” by Dharma N. Bhatta and Stanton A. Glantz (here). Drs. Bhatta and Glantz used PATH Wave […]

“I feel like the really solid information therein comes from non or negative correlations”

Steve Roth writes: I’d love to hear your thoughts on this approach (heavily inspired by Arindrajit Dube’s work, linked therein): This relates to our discussion from 2014: My biggest takeaway from this latest: I feel like the really solid information therein comes from non or negative correlations: • It comes before • But it doesn’t […]

When anyone claims 80% power, I’m skeptical.

A policy analyst writes: I saw you speak at ** on Bayesian methods. . . . I had been asked to consult on a large national evaluation of . . . [details removed to preserve anonymity] . . . and had suggested treading carefully around the use of Bayesian statistics in this study (basing it […]
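
One reason for that skepticism: a claimed 80% power is conditional on an assumed true effect size, and optimistic assumptions buy you optimistic power. A quick sketch with made-up design numbers (simple two-sided, two-sample z-test):

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def power(effect, n_per_group, sd=1.0, alpha=0.05):
    """Power of a two-sided two-sample z-test (illustrative numbers only)."""
    se = sd * sqrt(2.0 / n_per_group)
    z_crit = nd.inv_cdf(1 - alpha / 2)
    z = effect / se
    # reject if the observed z lands beyond the critical value on either side
    return (1 - nd.cdf(z_crit - z)) + nd.cdf(-z_crit - z)

# With n = 100 per group, the "80% power" claim holds only if the true
# effect is a hefty 0.4 sd; at smaller, more realistic effects it collapses.
for effect in (0.1, 0.2, 0.4):
    print(effect, round(power(effect, 100), 2))
```

Same design, three different assumed effects, and the power claim swings from roughly 10% to roughly 80%. The study didn’t change; the assumption did.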

On deck through the rest of the year

July: The Ponzi threshold and the Armstrong principle Flaws in stupid horrible algorithm revealed because it made numerical predictions PNAS forgets basic principles of game theory, thus dooming thousands of Bothans to the fate of Alderaan Tutorial: The practical application of complicated statistical methods to fill up the scientific literature with confusing and irrelevant analyses […]

The p-curve, p-uniform, and Hedges (1984) methods for meta-analysis under selection bias: An exchange with Blake McShane, Uri Simonsohn, and Marcel van Assen

Blake McShane sent me some material related to a paper of his (McShane et al., 2016; see reference list below), regarding various methods for combining p-values for meta-analysis under selection bias. His remarks related to some things written by Uri Simonsohn and his colleagues, so I cc-ed Uri on the correspondence. After some back and […]
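
For readers who haven’t followed this literature: these methods all build on a simple fact about the distribution of statistically significant p-values, which a toy simulation can show (this is just the underlying intuition, not any of the actual estimators discussed in the exchange):

```python
import random
from statistics import NormalDist

random.seed(3)
nd = NormalDist()

def significant_ps(true_z, n_studies=50000):
    """Two-sided p-values of simulated studies that came out p < .05."""
    ps = []
    for _ in range(n_studies):
        p = 2 * (1 - nd.cdf(abs(random.gauss(true_z, 1))))
        if p < 0.05:
            ps.append(p)
    return ps

# With no true effect, significant p-values are uniform on (0, .05):
# about half fall below .025. With a real effect, they pile up near zero.
for true_z in (0.0, 2.0):
    ps = significant_ps(true_z)
    print(true_z, round(sum(p < 0.025 for p in ps) / len(ps), 2))
```

Meta-analytic methods of this family work backward from that skew, asking whether a literature’s pile of significant results looks more like noise or like a real effect.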

The Night Riders

Retraction Watch linked to this paper, “Publication bias and the canonization of false facts,” by Silas Nissen, Tali Magidson, Kevin Gross, and Carl Bergstrom, which is in the Physics and Society section of arXiv, which is kind of odd since it has nothing whatsoever to do with physics. Nissen et al. write: In the […]

Consider seniority of authors when criticizing published work?

Carol Nickerson writes: I’ve written my fair share of letters to the editor and commentaries over the years, most of them languishing in the file drawer. It used to be impossible to get them published. The situation has improved a bit, but not enough. In any case, I never think about the sex of the […]

How does a Nobel-prize-winning economist become a victim of bog-standard selection bias?

Someone who wishes to remain anonymous writes in with a story: Linking to a new paper by Jorge Luis García, James J. Heckman, and Anna L. Ziff, the economist Sue Dynarski makes this “joke” on Facebook—or maybe it’s not a joke: How does one adjust standard errors to account for the fact that N of […]

Journals for insignificant results

Tom Daula writes: I know you’re not a fan of hypothesis testing, but the journals in this blog post are an interesting approach to the file drawer problem. I’ve never heard of them or their like. An alternative take (given standard practice in academia) is “Journal for XYZ Discipline papers that p-hacking and forking paths could […]

What has happened down here is the winds have changed

Someone sent me this article by psychology professor Susan Fiske, scheduled to appear in the APS Observer, a magazine of the Association for Psychological Science. The article made me a little bit sad, and I was inclined to just keep my response short and sweet, but then it seemed worth the trouble to give some […]

Publication bias occurs within as well as between projects

Kent Holsinger points to this post by Kevin Drum entitled, “Publication Bias Is Boring. You Should Care About It Anyway,” and writes: I am an evolutionary biologist, not a psychologist, but this article describes a disturbing scenario concerning oxytocin research that seems plausible. It is also relevant to the reproducibility/publishing issues you have been discussing […]

The Puzzle of Paul Meehl: An intellectual history of research criticism in psychology

There’s nothing wrong with Meehl. He’s great. The puzzle of Paul Meehl is that everything we’re saying now, all this stuff about the problems with Psychological Science and PPNAS and Ted talks and all that, Paul Meehl was saying 50 years ago. And it was no secret. So how is it that all this was […]

The Fault in Our Stars: It’s even worse than they say

In our recent discussion of publication bias, a commenter linked to a recent paper, “Star Wars: The Empirics Strike Back,” by Abel Brodeur, Mathias Le, Marc Sangnier, and Yanos Zylberberg, who point to the notorious overrepresentation in scientific publications of p-values that are just below 0.05 (that is, just barely statistically significant at the conventional level) […]
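
That pattern is easy to reproduce: if significant results are much more likely to be written up, the published distribution of p-values shows a cliff at 0.05 even when no effects exist at all. A toy sketch with made-up publication probabilities:

```python
import random
from statistics import NormalDist

random.seed(2)
nd = NormalDist()

# Null is true everywhere, so honestly reported p-values would be uniform.
# Here p < .05 always gets published; anything else, one time in ten.
published = []
for _ in range(100000):
    p = 2 * (1 - nd.cdf(abs(random.gauss(0, 1))))
    if p < 0.05 or random.random() < 0.10:
        published.append(p)

just_below = sum(0.04 <= p < 0.05 for p in published)
just_above = sum(0.05 <= p < 0.06 for p in published)
# Equal-width bins that would match under honest reporting;
# after selection, the bin just below .05 dwarfs the one just above.
print(just_below, just_above)
```

In this sketch the discontinuity comes from publication selection alone; add forking paths on top and the bunching just under 0.05 gets even more pronounced.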

Thinking of doing a list experiment? Here’s a list of reasons why you should think again

Someone wrote in: We are about to conduct a voting list experiment. We came across your comment recommending that each item be removed from the list. Would greatly appreciate it if you take a few minutes to spell out your recommendation in a little more detail. In particular: (a) Why are you “uneasy” about list […]

Preregistration: what’s in it for you?

Chris Chambers pointed me to a blog by someone called Neuroskeptic, who suggested that I preregister my political science studies: So when Andrew Gelman (let’s say) is going to start using a new approach, he goes on Twitter, or on his blog, and posts a bare-bones summary of what he’s going to do. Then he […]