On deck through Jan 2021

This should keep you busy through the end of the year . . .

Many of these posts were originally scheduled earlier but then got bumped because of coronavirus and other topical material.


  • The typical set and its relevance to Bayesian computation (a quick numerical illustration appears just below)
  • Getting negative about the critical positivity ratio: when you talk about throwing out the bathwater, really throw out the bathwater! Don’t try to pretend it has some value. Give it up. Let it go. You can do this and still hold on to the baby at the same time!
  • “100 Stories of Causal Inference”: My talk tomorrow at the Online Causal Inference Seminar
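
An aside on the typical-set item above, since the point is easy to see numerically: draws from a high-dimensional standard normal concentrate in a thin shell at radius about sqrt(d), far from the mode at the origin, which is why samplers need to explore the typical set rather than hang around the highest-density point. A minimal illustration in Python (numpy only; the dimension and number of draws are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                          # dimension; arbitrary illustrative choice
draws = rng.standard_normal((2000, d))

# Squared distance from the mode (the origin) for each draw.
r2 = (draws ** 2).sum(axis=1)

# For a d-dimensional standard normal, r2 ~ chi-squared with d degrees
# of freedom, which concentrates around d with spread ~ sqrt(2d).
print(r2.mean())            # close to d = 1000
print(r2.std())             # close to sqrt(2 * d), about 45
# Essentially no draw lands near the mode: the typical set is a thin
# shell at radius ~ sqrt(d), not the single highest-density point.
print((r2 < d / 2).mean())  # ~ 0.0
```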

  • Which experts should we trust?
  • Kafka comes to the visa office
  • Updates of bad forecasts: Let’s follow them up and see what happened!

  • Don’t say your data “reveal quantum nature of human judgments.” Be precise and say your data are “consistent with a quantum-inspired model of survey responses.” Yes, then your paper might not appear in PNAS, but you’ll feel better about yourself in the morning.
  • Wow, just wow. If you think Psychological Science was bad in the 2010-2015 era, you can’t imagine how bad it was back in 1999
  • Guttman points out another problem with null hypothesis significance testing: It falls apart when considering replications.

  • Know your data, recode missing data codes
  • That “not a real doctor” thing . . . It’s kind of silly for people to think that going to medical school for a few years will give you the skills necessary to be able to evaluate research claims in medicine or anything else.
  • Heckman Curve Update Update

  • Thomas Basbøll will like this post (analogy between common—indeed, inevitable—mistakes in drawing, and inevitable mistakes in statistical reasoning).
  • My reply: Three words. Fake. Data. Simulation.
  • How much of public health work “involves not technology but methodicalness and record keeping”?

  • Some thoughts inspired by Lee Cronbach (1975), “Beyond the two disciplines of scientific psychology”
  • Where are the collaborative novels?
  • Theorizing, thought experiments, fake-data simulation

  • Tessa Hadley on John Updike
  • “I just wanted to say that for the first time in three (4!?) years of efforts, I have a way to estimate my model. . . .”
  • “MIT Built a Theranos for Plants”

  • David Spiegelhalter wants a checklist for quality control of statistical models?
  • Do we trust this regression?
  • 53 fever!

  • We should all routinely criticize our own work.
  • Hilarious reply-all loop
  • “Psychology’s Zombie Ideas”

  • There is only one reality (and we cannot demand consistency from any other)
  • Blast from the past
  • “How We’re Duped by Data” and how we can do better

  • Automatic data reweighting
  • “Small Steps to Accuracy: Incremental Updaters are Better Forecasters”
  • Parallel in Stan

  • Battle of the open-science asymmetries
  • Econ grad student asks, “why is the government paying us money, instead of just firing us all?”
  • The textbook paradox: “Textbooks more than a very few years old cannot even be given away, but new textbooks are mostly made by copying from former ones”

  • Here’s a question for the historians of science out there: How modern is the idea of a scientific “anomaly”?
  • “There ya go: preregistered, within-subject, multilevel”
  • They want “statistical proof”—whatever that is!

  • 2 econ Nobel prizes, 1 error
  • Why we kept the trig in golf: Mathematical simplicity is not always the same as conceptual simplicity
  • “Reversals in psychology”

  • What are my statistical principles?
  • “Figure 1 looks like random variation to me” . . . indeed, so it does. And Figure 2 as well! But statistical significance was found, so this bit of randomness was published in a top journal. Business as usual in the statistical-industrial complex. Still, I’d hope the BMJ could’ve done better.
  • Taking the bus

  • In case you’re wondering . . . this is why the U.S. health care system is the most expensive in the world
  • We want certainty even when it’s not appropriate
  • The challenge of fitting “good advice” into a coherent course on statistics

  • Derived quantities and generative models
  • His data came out in the opposite direction of his hypothesis. How to report this in the publication?
  • Bill James is back

  • “this large reduction in response rats”
  • (1) The misplaced burden of proof, and (2) selection bias: Two reasons for the persistence of hype in tech and science reporting
  • “Pictures represent facts, stories represent acts, and models represent concepts.”

  • This awesome Pubpeer thread is about 80 times better than the original paper
  • Who are the culture heroes of today?
  • “Smell the Data”

  • Fake data simulation: Why does it work so well? (the basic recipe is sketched just below)
  • A question of experimental design (more precisely, design of data collection)
  • Smoking and Covid
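
Since fake-data simulation shows up several times on this list, here's the basic recipe promised above, as a minimal sketch in Python: pick parameter values, simulate data from the model, fit, and check that you recover what you put in. (Ordinary least squares here is just a stand-in for whatever model you're actually fitting.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: pick "true" parameters and simulate data from the model.
a_true, b_true, sigma = 0.5, 2.0, 1.0
n = 500
x = rng.uniform(0, 10, n)
y = a_true + b_true * x + rng.normal(0, sigma, n)

# Step 2: fit the model to the fake data.
X = np.column_stack([np.ones(n), x])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 3: compare estimates to the known truth. If the procedure can't
# recover parameters you chose yourself, it can't be trusted on real
# data, where the truth is unknown.
print(f"a: true {a_true}, est {a_hat:.2f}")
print(f"b: true {b_true}, est {b_hat:.2f}")
```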

  • Covid-19 -> Kovit-17 (following the himmicanes principle)
  • Luc Sante on Nick Hornby and Geoffrey O’Brien on pop music
  • What is the relevance of “bad science” to our understanding of “good science”?

  • Uri Simonsohn’s Small Telescopes
  • The view that the scientific process is “red tape,” just a bunch of hoops you need to jump through so you can move on with your life
  • Reasoning under uncertainty

  • Alexey Guzey plays Stat Detective: How many observations are in each bar of this graph?
  • Misrepresenting data from a published source . . . it happens all the time!
  • “Stop me if you’ve heard this one before: Ivy League law professor writes a deepthoughts think piece explaining a seemingly irrational behavior that doesn’t actually exist.”

  • “Everybody wants to be Jared Diamond”
  • “Men Appear Twice as Often as Women in News Photos on Facebook”
  • Should we judge pundits based on their demonstrated willingness to learn from their mistakes?

  • Fiction as a window into other cultures
  • She’s wary of the consensus-based transparency checklist, and here’s a paragraph we should’ve added to that zillion-authored paper
  • Stanford prison experiment

  • Reference for the claim that you need 16 times as much data to estimate interactions as to estimate main effects (the back-of-the-envelope arithmetic appears after this group)
  • “Model takes many hours to fit and chains don’t converge”: What to do? My advice on first steps.
  • Meg Wolitzer and George V. Higgins
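
And for anyone who hasn't seen it, here is the back-of-the-envelope arithmetic behind that “16 times” claim; this is the standard argument, not necessarily the exact reference the upcoming post will give. With n units split evenly between conditions, the interaction contrast has twice the standard error of the main-effect contrast, and if the interaction is also assumed to be half the size of the main effect, the required sample size scales by (2 × 2)² = 16:

```latex
% Balanced design, n units total, residual sd \sigma.
% Main effect: difference between two half-samples of size n/2.
% Interaction: difference of differences across four cells of size n/4.
\[
\operatorname{se}(\hat\beta_{\text{main}})
  = \sqrt{\frac{\sigma^2}{n/2} + \frac{\sigma^2}{n/2}}
  = \frac{2\sigma}{\sqrt{n}},
\qquad
\operatorname{se}(\hat\beta_{\text{int}})
  = \sqrt{4 \cdot \frac{\sigma^2}{n/4}}
  = \frac{4\sigma}{\sqrt{n}}.
\]
% Twice the se, and half the assumed effect size: matching the relative
% precision of the main-effect estimate requires (2 \cdot 2)^2 = 16 times n.
```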

  • Statistical fallacies as they arise in political science (from Bob Jervis)
  • Pundits continue to push the white-men-are-dying story, even though the real pattern is occurring among women.
  • We have really everything in common with machine learning nowadays, except, of course, language.

  • An odds ratio of 30, which they (sensibly) don’t believe
  • Lakatos was a Stalinist
  • “The presumption of wisdom and/or virtue causes intellectuals to personalize situations where contending ideas are involved.”

  • Stephen Wolfram invented a time machine but has been too busy to tell us about it
  • Body language and machine learning
  • Public health researchers: “Death by despair” is a thing, but not the biggest thing

  • Yes, there is such a thing as Eurocentric science
  • Ahhhh, Cornell!
  • Why is this graph actually ok? It’s the journey, not just the destination.

  • My proposal is to place criticism within the scientific, or social-scientific, enterprise, rather than thinking about it as something coming from outside, or as something that is tacked on at the end.
  • “Fake Facts in Covid-19 Science: Kentucky vs. Tennessee.”
  • The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time (a toy simulation appears just below)
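
The forking-paths point is easy to check by simulation: even with pure noise and a hypothesis posited ahead of time, letting the data pick which of several defensible comparisons to report inflates the “significance” rate well past the nominal 5%. A toy sketch in Python (the three subgroupings and the report-the-strongest rule are hypothetical, just to make the point):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
hits = 0
n_sims, n = 5_000, 100

for _ in range(n_sims):
    y = rng.standard_normal(n)      # pure noise outcome
    g1 = rng.integers(0, 2, n)      # three plausible post-hoc groupings,
    g2 = rng.integers(0, 2, n)      # e.g. sex, an age split, region
    g3 = rng.integers(0, 2, n)
    pvals = [stats.ttest_ind(y[g == 0], y[g == 1]).pvalue
             for g in (g1, g2, g3)]
    # The forking path: report whichever comparison came out strongest.
    if min(pvals) < 0.05:
        hits += 1

# Nominal rate is 5%; three data-dependent looks nearly triple it.
print(hits / n_sims)   # ~ 0.14
```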

  • The 5-sigma rule in physics
  • No, I don’t like talk of false positive false negative etc but it can still be useful to warn people about systematic biases in meta-analysis
  • Here’s why rot13 text looks so cool.

  • Dow 72,000
  • The problem with p-hacking is not the “hacking,” it’s the “p”
  • Stop-and-frisk data

  • “When Should Clinicians Act on Non–Statistically Significant Results From Clinical Trials?”
  • “In the world of educational technology, the future actually is what it used to be”
  • Claim of police shootings causing low birth weights in the neighborhood

  • The accidental experiment that saved 700 lives
  • No, I don’t believe etc etc., even though they did a bunch of robustness checks.
  • The rise and fall and rise of randomized controlled trials (RCTs) in international development

  • You don’t need a retina specialist to know which way the wind blows
  • How much granularity do you need in your Mister P?
  • Authors repeat same error in 2019 that they acknowledged and admitted was wrong in 2015

  • “Not statistically significant” is not the same as zero
  • Is vs. ought in the study of public opinion: Coronavirus “opening up” edition
  • Today in spam

  • Best comics of 2010-2019?
  • Estimating the college wealth premium: Not so easy
  • Greek statistician is in trouble for . . . telling the truth!

  • Further formalization of the “multiverse” idea in statistical modeling
  • A new hot hand paradox
  • A very short statistical consulting story

  • Astrology, whatever.
  • Unfair to James Watson?
  • Instead of comparing two posterior distributions, just fit one model including both possible explanations of the data (a small demonstration appears just below).
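
A quick illustration of the fit-one-model advice above, with least squares standing in for the Bayesian version (the variable names and the correlation between the two “explanations” are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.standard_normal(n)               # explanation A
x2 = 0.6 * x1 + rng.standard_normal(n)    # explanation B, correlated with A
y = 1.0 * x1 + 0.0 * x2 + rng.standard_normal(n)

# Fit each explanation separately: both look "successful," because
# x1 and x2 overlap and each soaks up the other's signal.
for x in (x1, x2):
    X = np.column_stack([np.ones(n), x])
    print(np.linalg.lstsq(X, y, rcond=None)[0][1])

# The joint model lets the data apportion credit between them.
X = np.column_stack([np.ones(n), x1, x2])
print(np.linalg.lstsq(X, y, rcond=None)[0][1:])  # ~ [1.0, 0.0]
```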

  • Basbøll’s Audenesque paragraph on science writing, followed by a resurrection of a 10-year-old debate on Gladwell
  • Many years ago, when he was a baby economist . . .
  • Quine’s be Quining

  • “The 100 Worst Ed-Tech Debacles of the Decade”
  • “Published estimates of group differences in multisensory integration are inflated”
  • The 2019 project: How false beliefs in statistical differences still live in social science and journalism today

  • From the Archives of Psychological Science
  • If you put an o on understo, you’ll ruin my thunderstorm.
  • How to convince yourself that multilevel modeling (or, more generally, any advanced statistical method) has benefits?

  • Debate involving a bad analysis of GRE scores
  • When can we challenge authority with authority?
  • You can figure out the approximate length of our blog lag now.

  • Probabilistic feature analysis of facial perception of emotions
  • PNAS is just another blog
  • More on that credulity thing

  • “Losing one night’s sleep may increase risk factor for Alzheimer’s, study says”
  • Deterministic thinking meets the fallacy of the one-sided bet
  • If—

  • One reason why that estimated effect of Fox News could’ve been so implausibly high.
  • “Like a harbor clotted with sunken vessels”: update
  • More on the Heckman curve

  • Publishing in Antarctica
  • Chess.com cheater-detection bot pisses someone off
  • Red Team prepublication review update

  • “Analysis challenges slew of studies claiming ocean acidification alters fish behavior”
  • Question on multilevel modeling reminds me that we need a good modeling workflow (building up your model by including varying intercepts, slopes, etc.) and a good computing workflow
  • “The Multiverse of Methods: Extending the Multiverse Analysis to Address Data-Collection Decisions”

  • Does regression discontinuity (or, more generally, causal identification + statistical significance) make you gullible?
  • “End of novel. Beginning of job.”: That point at which you make the decision to stop thinking and start finishing
  • Why We Sleep—a tale of non-replication.

  • Three unblinded mice
  • More institutional failure by universities that refuse to grapple with potential research misconduct by their faculty
  • “Translation Plagiarism”

  • Rasslin’ over writin’ teachin’
  • “Maybe the better analogy is that these people are museum curators and we’re telling them that their precious collection of Leonardos, which they have been augmenting at a rate of about one per month, include some fakes.”
  • Megan Higgs (statistician) and Anna Dreber (economist) on how to judge the success of a replication

  • NYT editor described columnists as “people who are paid to have very, very strong convictions, and to believe that they’re right.”
  • xkcd: “Curve-fitting methods and the messages they send”
  • Weakliem on air rage and himmicanes

  • “Enhancing Academic Freedom and Transparency in Publishing Through Post-Publication Debate”: Some examples in the study of political conflict
  • Include all design information as predictors in your regression model, then poststratify if necessary. No need to include survey weights: the information that goes into the weights will be used in any poststratification that is done. (A bare-bones numerical example appears after this group.)
  • This one is for fans of George V. Higgins
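
To unpack the poststratification advice a couple of items up: once the design variables are predictors in the model, population estimates come from averaging cell-level predictions against known population cell counts, which is the same information survey weights encode. A bare-bones numerical example in Python (the cells, predictions, and counts are all made up):

```python
import numpy as np

# Hypothetical design cells (e.g., age group x education), with
# estimates from the fitted model and counts from the census.
cells      = ["young/no-degree", "young/degree",
              "old/no-degree",   "old/degree"]
cell_est   = np.array([0.42, 0.55, 0.38, 0.61])   # model predictions
pop_counts = np.array([30_000, 20_000, 35_000, 15_000])

# Poststratified estimate: population-weighted average of cell
# predictions. No unit-level survey weights needed; the design
# information enters through the predictors and these counts.
estimate = (cell_est * pop_counts).sum() / pop_counts.sum()
print(f"{estimate:.3f}")   # about 0.46
```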

  • “They adjusted for three hundred confounders.”
  • MRP and Missing Data Question

9 thoughts on “On deck through Jan 2021”

  1. You should really set up a poll to find out (a) which blog post is most eagerly anticipated and (b) which blog post seems most likely to be fake. Then see if it replicates.
