Archive of posts filed under the Miscellaneous Statistics category.

Age-period-cohort analysis.

Chris Winship and Ethan Fosse write with a challenge: Since its beginnings nearly a century ago, Age-Period-Cohort analysis has been stymied by the lack of identification of parameter estimates resulting from the linear dependence between age, period, and cohort (age = period − cohort). In a series of articles, we [Winship and Fosse] have developed a […]
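The identity in the excerpt (age = period − cohort) is exactly a linear dependence among regressors. A minimal numpy sketch, with made-up ages and periods, shows the design matrix dropping a rank, which is the identification problem Winship and Fosse describe:

```python
import numpy as np

# Hypothetical data: each row is one observation with an age and a period;
# the birth cohort is determined exactly by the other two.
age = np.array([30, 40, 50, 30, 40, 50])
period = np.array([1990, 1990, 1990, 2000, 2000, 2000])
cohort = period - age  # the linear dependence: age = period - cohort

# Design matrix: intercept plus the three linear terms.
X = np.column_stack([np.ones_like(age), age, period, cohort])

# Rank is 3, not 4: the three linear effects cannot be separately
# identified without imposing an extra constraint.
print(np.linalg.matrix_rank(X))  # -> 3
```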

“Banishing ‘Black/White Thinking’: A Trio of Teaching Tricks”

Richard Born writes: The practice of arbitrarily thresholding p values is not only deeply embedded in statistical practice, it is also congenial to the human mind. It is thus not sufficient to tell our students, “Don’t do this.” We must vividly show them why the practice is wrong and its effects detrimental to scientific progress. […]

What a difference a month makes (polynomial extrapolation edition)

Someone pointed me to this post from Cosma Shalizi conveniently using R to reproduce the famous graph endorsed by public policy professor and A/Chairman @WhiteHouseCEA. Here’s the original graph that caused all that annoyance: Here’s Cosma’s reproduction in R (retro-Cosma is using base graphics!), fitting a third-degree polynomial on the logarithms of the death counts: […]
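For readers who don't want to rerun Cosma's R code, here is a rough Python sketch of the failure mode, using synthetic S-shaped log counts rather than the real death data: a cubic fits the observed window well but has no reason to respect the plateau once you extrapolate past it.

```python
import numpy as np

# Synthetic stand-in for the series (not the actual data): log counts
# follow an S-curve that rises and then flattens out.
days = np.arange(20, dtype=float)
log_counts = 2.5 + 2.5 * np.tanh((days - 10.0) / 6.0)

# Fit a third-degree polynomial on the log scale, as in the graph
# under discussion, then extrapolate three weeks past the data.
coefs = np.polyfit(days, log_counts, deg=3)
pred_log = np.polyval(coefs, 40.0)

# In-window the cubic tracks the data closely; out-of-window the
# polynomial wanders far from the plateau the data are approaching.
in_window_err = np.max(np.abs(np.polyval(coefs, days) - log_counts))
print(in_window_err, pred_log, log_counts[-1])
```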

Is JAMA potentially guilty of manslaughter?

No, of course not. I would never say such a thing. Sander Greenland, though, he’s a real bomb-thrower. He writes: JAMA doubles down on distortion – and potential manslaughter if on the basis of this article anyone prescribes HCQ in the belief it is unrelated to cardiac mortality: – “compared with patients receiving neither drug […]

Uncertainty and variation as distinct concepts

Jake Hofman, Dan Goldstein, and Jessica Hullman write: Scientists presenting experimental results often choose to display either inferential uncertainty (e.g., uncertainty in the estimate of a population mean) or outcome uncertainty (e.g., variation of outcomes around that mean). How does this choice impact readers’ beliefs about the size of treatment effects? We investigate this question […]
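The distinction can be made concrete in a few lines of Python on simulated outcomes (toy numbers of my own, not the authors' experiment): the same data support two very different-looking error bars.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical treatment-group outcomes.
outcomes = rng.normal(loc=5.0, scale=2.0, size=400)

mean = outcomes.mean()
sd = outcomes.std(ddof=1)          # outcome uncertainty: spread across individuals
se = sd / np.sqrt(len(outcomes))   # inferential uncertainty: precision of the mean

# An error bar of +/- 2*se says "the mean is precisely estimated";
# one of +/- 2*sd says "individual outcomes vary a lot". With n = 400
# the two bars differ by a factor of 20.
print(f"mean = {mean:.2f}, 2*se = {2 * se:.2f}, 2*sd = {2 * sd:.2f}")
```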

Standard deviation, standard error, whatever!

Ivan Oransky points us to this amusing retraction of a meta-analysis. The problem: “Standard errors were used instead of standard deviations when using data from one of the studies”! Actually, I saw something similar happen in a consulting case once. The other side had a report with estimates and standard errors . . . the […]
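To see why the mix-up matters, here's a toy calculation (invented summary numbers, not from the retracted paper) of what happens when a reported standard error is misread as a standard deviation in an inverse-variance meta-analysis:

```python
import numpy as np

# One study's summary statistics: SD 8, n = 64.
n, sd = 64, 8.0
se = sd / np.sqrt(n)  # the correct standard error of the mean: 1.0

# Meta-analyses typically weight each study by 1 / SE^2. If the
# reported SE is mistaken for the SD, the recomputed "standard error"
# becomes SE / sqrt(n), and the study's weight is inflated by a
# factor of n.
wrong_se = se / np.sqrt(n)  # treats the SE as if it were the SD
print(se, wrong_se, (se / wrong_se) ** 2)  # weight inflated 64-fold
```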

University of Washington biostatistician unhappy with ever-changing University of Washington coronavirus projections

The University of Washington in Seattle is a big place. It includes the Institute for Health Metrics and Evaluation (IHME), which has produced a widely-circulated and widely-criticized coronavirus model. As we’ve discussed, the IHME model is essentially a curve-fitting exercise that makes projections using the second derivative of the time trend on the log scale. […]
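As a cartoon of that curve-fitting idea (my sketch, not IHME's actual model or data): a constant second derivative on the log scale is just a quadratic in time, which bakes a symmetric rise-and-fall curve into every projection.

```python
import numpy as np

# Model log daily deaths as a quadratic in time: constant (negative)
# second derivative on the log scale. Illustrative parameters only.
days = np.arange(40, dtype=float)
peak_day, peak_log, curvature = 25.0, 6.0, -0.01
log_deaths = peak_log + curvature * (days - peak_day) ** 2

deaths = np.exp(log_deaths)
# The peak date and the symmetric decline are consequences of the
# functional form, not of the data: projections inherit them.
print(days[np.argmax(deaths)])  # -> 25.0
```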

Statistics controversies from the perspective of industrial statistics

We’ve had lots of discussions here and elsewhere online about fundamental flaws in statistics culture: the whole p-value thing, statistics used for confirmation rather than falsification, corruption of the pizzagate variety, soft corruption in which statistics is used in the service of country-club-style backslapping, junk science routinely getting the imprimatur of the National Academy of […]

How scientists perceive advancement of knowledge from conflicting review reports

Kevin Lewis pointed me to this article. It seemed kinda familiar, I took a look at the abstract, and I realized . . . I reviewed this article for the journal! Here was my referee report:

Hey, you. Yeah, you! Stop what you’re doing RIGHT NOW and read this Stigler article on the history of robust statistics

I originally gave this post the title, “Stigler: The Changing History of Robustness,” but then I was afraid nobody would read it. In the current environment of Move Fast and Break Things, not so many people care about robustness. Also, the widespread use of robustness checks to paper over brittle conclusions has given robustness a […]

Controversy regarding the effectiveness of Remdesivir

Steven Wood writes: There is now some controversy regarding the effectiveness of Remdesivir for treatment of Covid, with the inadvertent posting of results on the WHO website. https://www.statnews.com/2020/04/23/data-on-gileads-remdesivir-released-by-accident-show-no-benefit-for-coronavirus-patients/ One of the pillars of hope for this treatment is the monkey treatment trial (the paper is here). As an experienced clinical trialist I was immediately skeptical of […]

New analysis of excess coronavirus mortality; also a question about poststratification

Uros Seljak writes: You may be interested in our Gaussian Process counterfactual analysis of Italy mortality data that we just posted. Our results are in a strong disagreement with the Stanford seropositive paper that appeared on Friday. Their work was all over the news, but is completely misleading and needs to be countered: they claim […]

Concerns with that Stanford study of coronavirus prevalence

Josh Rushton writes: I’ve been following your blog for a while and checked in today to see if there was a thread on last week’s big-splash Stanford antibody study (the one with the shocking headline that they got 50 positive results in a “random” sample of 3330 antibody tests, suggesting that nearly 2% of the […]
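A back-of-the-envelope check (with an assumed specificity of my own choosing, not a number from the study) shows why a raw positive rate this low is fragile:

```python
# With 50 positives out of 3330, the raw rate is about 1.5%. If the
# antibody test's specificity were 98.5% (assumed for illustration),
# false positives alone could produce roughly that many hits even at
# zero true prevalence.
positives, n = 50, 3330
raw_rate = positives / n
specificity = 0.985  # hypothetical value
expected_false_positives = (1 - specificity) * n

print(raw_rate, expected_false_positives)
```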

RCT on use of cloth vs surgical masks

Barry Dehlin writes: See this blog post by “Scott Alexander” about an RCT on the use of cloth and surgical masks. As he points out, this issue is monumentally important RIGHT NOW and a real statistical expert should be evaluating this study and the conclusions the authors draw. Here, I think the value of your […]

“The Generalizability Crisis” in the human sciences

In an article called The Generalizability Crisis, Tal Yarkoni writes: Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned—that is, that the […]

Model building is Lego, not Playmobil. (toward understanding statistical workflow)

John Seabrook writes: Socrates . . . called writing “visible speech” . . . A more contemporary definition, developed by the linguist Linda Flower and the psychologist John Hayes, is “cognitive rhetoric”—thinking in words. In 1981, Flower and Hayes devised a theoretical model for the brain as it is engaged in writing, which they called […]

Structural equation modeling and Stan

Eric Brown asks: How does Stan and its Bayesian modeling relate to structural equation modeling? Do you know of a resource that attempts to explain the concepts behind SEM in terms of Stan nomenclature and concepts? Some research that I’ve looked into uses SEM to evaluate latent factors underlying multiple measurements with associated errors; or […]

I’m still struggling to understand hypothesis testing . . . leading to a more general discussion of the role of assumptions in statistics

I’m sitting at this talk where Thomas Richardson is talking about testing the hypothesis regarding a joint distribution of three variables, X1, X2, X3. The hypothesis being tested is that X1 and X2 are conditionally independent given X3. I don’t have a copy of Richardson’s slides, but here’s a paper that I think is related, […]
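To make the hypothesis concrete (in a far simpler, Gaussian setting than Richardson's), here is a partial-correlation check on simulated data where X1 and X2 depend only on X3, so conditional independence holds by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# X1 and X2 are each noisy functions of X3 alone, so they are
# conditionally independent given X3 (though marginally correlated).
x3 = rng.normal(size=n)
x1 = 2 * x3 + rng.normal(size=n)
x2 = -x3 + rng.normal(size=n)

def residualize(y, x):
    """Residuals of y after a least-squares regression on x."""
    slope = np.dot(x, y) / np.dot(x, x)
    return y - slope * x

# The partial correlation of X1 and X2 given X3 is the correlation
# of their residuals; under the null it should be near zero.
r1, r2 = residualize(x1, x3), residualize(x2, x3)
partial_corr = np.corrcoef(r1, r2)[0, 1]
print(partial_corr)  # near zero under conditional independence
```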

“For the cost of running 96 wells you can test 960 people and accurately assess the prevalence in the population to within about 1%. Do this at 100 locations around the country and you’d have a spatial map of the extent of this epidemic today. . . and have this data by Monday.”

Daniel Lakeland writes: COVID-19 is tested for using real-time reverse-transcriptase PCR (rt-rt-PCR). This is basically just a fancy way of saying they are detecting the presence of the RNA by converting it to DNA and amplifying it. It has already been shown by people in Israel that you can combine material from at least 64 […]
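The pooling arithmetic can be sketched in a few lines (my reconstruction of the idea, with an assumed pool size of 10 and a made-up 3% prevalence, not Lakeland's exact scheme): a well comes up positive if any of its pooled samples is positive, and prevalence is recovered from the fraction of negative wells.

```python
import numpy as np

rng = np.random.default_rng(2)
true_prev = 0.03          # assumed prevalence for the simulation
pool_size, n_wells = 10, 96   # 96 wells x 10 people = 960 people

# A well tests positive if any of its pooled samples is positive.
samples = rng.random((n_wells, pool_size)) < true_prev
well_positive = samples.any(axis=1)

# P(well negative) = (1 - p)^pool_size, so invert:
# p_hat = 1 - f_neg^(1 / pool_size).
f_neg = 1 - well_positive.mean()
p_hat = 1 - f_neg ** (1 / pool_size)
print(p_hat)
```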

Are we ready to move to the “post p < 0.05 world”?

Robert Matthews writes: Your post on the design and analysis of trials really highlights how now more than ever it’s vital the research community takes seriously all that “nit-picking stuff” from statisticians about the dangers of faulty inferences based on null hypothesis significance testing. These dangers aren’t restricted to the search for new therapies. I’m […]