On deck for the first half of 2020

Here goes:

  • Smoothness, or lack thereof, in MRP estimates over time
  • Open forensic science, and some general comments on the problems of legalistic thinking when discussing open science
  • Votes vs. $

  • Of book reviews and selection bias
  • This graduate student wants to learn statistics to be a better policy analyst
  • How to “cut” using Stan, if you must

  • Two good news articles on trends in baseball analytics
  • Linear or logistic regression with binary outcomes
  • Of Manhattan Projects and Moonshots

  • Four projects in the intellectual history of quantitative social science
  • Making fun of TED talks
  • Steven Pinker on torture

  • Will decentralised collaboration increase the robustness of scientific findings in biomedical research? Some data and some causal questions.
  • Against overly restrictive definitions: No, I don’t think it helps to describe Bayes as “the analysis of subjective beliefs” (nor, for that matter, does it help to characterize the statements of Krugman or Mankiw as not being “economics”)
  • Is it accurate to say, “Politicians Don’t Actually Care What Voters Want”?

  • In Bayesian inference, do people cheat by rigging the prior?
  • Graphs of school shootings in the U.S.
  • Some Westlake quotes

  • Pocket Kings by Ted Heller
  • Top 5 literary descriptions of poker
  • What are the famous dogs? What are the famous animals?

  • Are the tabloids better than we give them credit for?
  • The latest Perry Preschool analysis: Noisy data + noisy methods + flexible summarizing = Big claims
  • The intellectual explosion that didn’t happen

  • Deterministic thinking meets the fallacy of the one-sided bet
  • My review of Ian Stewart’s review of my review of his book
  • Are GWAS studies of IQ/educational attainment problematic?

  • David Leavitt and Meg Wolitzer
  • They added a hierarchical structure to their model and their parameter estimate changed a lot: How to think about this?
  • Don’t talk about hypotheses as being “either confirmed, partially confirmed, or rejected”

  • Forget about multiple testing corrections. Actually, forget about hypothesis testing entirely.
  • The fallacy of the excluded rationality
  • Is there any scientific evidence that humans don’t like uncertainty?

  • The importance of measurement in psychology
  • The importance of descriptive social science and its relation to causal inference and substantive theories
  • Advice for a Young Economist at Heart

  • The hot hand fallacy fallacy rears its ugly ugly head
  • A Collection of Word Oddities and Trivia
  • Evidence-based medicine eats itself

  • “It just happens to be in the nature of knowledge that it cannot be conserved if it does not grow.”
  • Intended consequences are the worst
  • The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time

  • “Repeating the experiment” as general advice on data collection
  • “Sometimes research just has to start somewhere, and subject itself to criticism and potential improvement.”
  • Researcher offers ridiculous reasons for refusing to reassess work in light of serious criticism

  • How many patients do doctors kill by accident?
  • This study could be just fine, or not. Maybe I’ll believe it if there’s an independent preregistered replication.
  • An article in a statistics or medical journal, “Using Simulations to Convince People of the Importance of Random Variation When Interpreting Statistics.”

  • Making differential equation models in Stan more computationally efficient via some analytic integration
  • “MIT Built a Theranos for Plants”
  • David Spiegelhalter wants a checklist for quality control of statistical models?

  • Do we trust this regression?
  • Question on multilevel modeling reminds me that we need a good modeling workflow (building up your model by including varying intercepts, slopes, etc.) and a good computing workflow
  • Deep Learning workflow

  • We should all routinely criticize our own work.
  • Hilarious reply-all loop
  • Birthdays!

  • Different challenges in replication in biomedical vs. social sciences
  • The Paterno Defence: Gladwell’s Tipping Point?
  • Conditioning on a statistical method as a “meta” version of conditioning on a statistical model

  • Theorizing, thought experiments, fake-data simulation
  • As usual, I agree with Paul Meehl: “It is not a reform of significance testing as currently practiced in soft-psych. We are making a more heretical point than any of these: We are attacking the whole tradition of null-hypothesis refutation as a way of appraising theories.”
  • “What is the conclusion of a clinical trial where p=0.6?”

  • You don’t want a criminal journal… you want a criminal journal
  • Junk Science Then and Now
  • They want “statistical proof”—whatever that is!

  • “Non-disclosure is not just an unfortunate, but unfixable, accident. A methodology can be disclosed at any time.”
  • Woof! for descriptive statistics
  • Why we kept the trig in golf: Mathematical simplicity is not always the same as conceptual simplicity

  • No, you’re not (necessarily) obliged to share your data and methods when someone asks for them. Better to post it all publicly.
  • Computer-generated writing that looks real; real writing that looks computer-generated
  • 100 Things to Know, from Lane Kenworthy

  • The Road Back
  • “Are Relational Inferences from Crowdsourced and Opt-in Samples Generalizable? Comparing Criminal Justice Attitudes in the GSS and Five Online Samples”
  • Breaking the feedback loop: When people don’t correct their errors

  • We want certainty even when it’s not appropriate
  • The New Yorker fiction podcast: how it’s great and how it could be improved
  • His data came out in the opposite direction of his hypothesis. How to report this in the publication?

  • My best thoughts on priors
  • He’s annoyed that PNAS desk-rejected his article.
  • “As a girl, she’d been very gullible, but she had always learned more that way.”

  • (1) The misplaced burden of proof, and (2) selection bias: Two reasons for the persistence of hype in tech and science reporting
  • “Pictures represent facts, stories represent acts, and models represent concepts.”
  • This awesome Pubpeer thread is about 80 times better than the original paper

  • The value (or lack of value) of preregistration in the absence of scientific theory
  • Let’s do preregistered replication studies of the cognitive effects of air pollution—not because we think existing studies are bad, but because we think the topic is important and we want to understand it better.
  • I’m still struggling to understand hypothesis testing . . . leading to a more general discussion of the role of assumptions in statistics

  • A question of experimental design (more precisely, design of data collection)
  • Structural equation modeling and Stan
  • “Everybody wants to be Jared Diamond”

  • What is the relevance of “bad science” to our understanding of “good science”?
  • Noise-mining as standard practice in social science
  • Toward understanding statistical workflow

  • Career advice for a future statistician
  • “Men Appear Twice as Often as Women in News Photos on Facebook”
  • “The Generalizability Crisis” in the human sciences

  • Should we judge pundits based on their demonstrated willingness to learn from their mistakes?
  • Upholding the patriarchy, one blog post at a time
  • Effects of short-term exposure to null hypothesis significance testing on cognitive performance

  • Stanford prison experiment
  • Given that 30% of Americans believe in astrology, it’s no surprise that some nontrivial percentage of influential American psychology professors are going to have the sort of attitude toward scientific theory and evidence that would lead them to have strong belief in weak theories supported by no good evidence.
  • The checklist manifesto and beyond

  • She’s wary of the consensus-based transparency checklist, and here’s a paragraph we should’ve added to that zillion-authored paper
  • Yes, there is such a thing as Eurocentric science
  • Body language and machine learning

  • Marc Hauser: Victim of statistics?
  • Megan Higgs (statistician) and Anna Dreber (economist) on how to judge the success of a replication
  • Statistical fallacies as they arise in political science (from Bob Jervis)

  • MRP with R and Stan; MRP with Python and Tensorflow
  • We have really everything in common with machine learning nowadays, except, of course, language.
  • An odds ratio of 30, which they (sensibly) don’t believe

  • Lakatos was a Stalinist
  • Include all design information as predictors in your regression model, then poststratify if necessary. No need to include survey weights: the information that goes into the weights will be used in any poststratification that is done. (A sketch of this workflow appears after the list.)
  • Frustrating science reporting: I get quoted but misunderstood

  • The return of red state blue state
  • Public health researchers: “Death by despair” is a thing, but not the biggest thing
  • Ahhhh, Cornell!

  • Why is this graph actually ok? It’s the journey, not just the destination.
  • “Then the flaming sheet, with the whirr of a liberated phoenix, would fly up the chimney to join the stars.”
  • Hey, you. Yeah, you! Stop what you’re doing RIGHT NOW and read this Stigler article on the history of robust statistics

  • How scientists perceive advancement of knowledge from conflicting review reports
  • The 5-sigma rule in physics
  • No, I don’t like talk of false positives, false negatives, etc., but it can still be useful to warn people about systematic biases in meta-analysis

  • Here’s why rot13 text looks so cool.
  • The problem with p-hacking is not the “hacking,” it’s the “p”
  • We need better default plots for regression.

  • Stop-and-frisk data
  • Standard deviation, standard error, whatever!
  • Uncertainty and variation as distinct concepts

  • Claim of police shootings causing low birth weights in the neighborhood
  • The accidental experiment that saved 700 lives
  • The rise and fall and rise of randomized controlled trials (RCTs) in international development

  • You don’t need a retina specialist to know which way the wind blows
  • How much granularity do you need in your Mister P?
  • Are informative priors “[in]compatible with standards of research integrity”? Click to find out!!

  • Authors repeat same error in 2019 that they acknowledged and admitted was wrong in 2015
  • No, average statistical power is not as high as you think: Tracing a statistical error as it spreads through the literature
  • Best comics of 2010-2019?

  • Today in spam
  • Some thoughts on another failed replication in psychology
  • Estimating the college wealth premium: Not so easy

  • Create your own community (if you need to)
  • “Banishing ‘Black/White Thinking’: A Trio of Teaching Tricks”
  • Be careful when estimating years of life lost: quick-and-dirty estimates of attributable risk are, well, quick and dirty.

  • A very short statistical consulting story
  • Basbøll’s Audenesque paragraph on science writing, followed by a resurrection of a 10-year-old debate on Gladwell
  • Blast from the past

  • “The Moral Economy of Science”
  • “Not sure what the lesson for data analysis quality control is here, but interesting to wonder about how that mistake was not caught pre-publication.”
  • Three unblinded mice
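
A quick aside on the regression-and-poststratification item above (the one about including all design information as predictors): here is a minimal sketch of that workflow in Python. Everything in it is hypothetical, the data, the variable names, and the population cell counts, and the simple linear regression stands in for whatever model you would actually fit. The point is only that the design variables enter the model as predictors, and the known population cell counts do the reweighting that survey weights would otherwise do.

```python
# Minimal sketch, with hypothetical data and variable names: include the
# design variables (age_group, region) as regression predictors, then
# poststratify over known population cells instead of using survey weights.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical survey data: outcome y, two design variables.
survey = pd.DataFrame({
    "age_group": rng.choice(["18-34", "35-64", "65+"], size=n),
    "region": rng.choice(["urban", "rural"], size=n),
})
survey["y"] = (
    0.2
    + 0.3 * (survey["age_group"] == "65+")
    - 0.1 * (survey["region"] == "rural")
    + rng.normal(0, 0.5, size=n)
)

# Step 1: regress the outcome on all the design information.
fit = smf.ols("y ~ C(age_group) * C(region)", data=survey).fit()

# Step 2: poststratify. Predict for every population cell, then average
# the predictions using the (made-up, assumed) population cell counts.
cells = pd.DataFrame(
    [(a, r) for a in ["18-34", "35-64", "65+"] for r in ["urban", "rural"]],
    columns=["age_group", "region"],
)
cells["N"] = [30e6, 25e6, 60e6, 55e6, 25e6, 20e6]  # hypothetical census counts
cells["pred"] = fit.predict(cells)

estimate = np.average(cells["pred"], weights=cells["N"])
print(f"Poststratified population estimate: {estimate:.3f}")
```

Note that no survey weights appear anywhere: the information that would have gone into them (the design variables and the population counts) is used directly, first in the model and then in the poststratification step.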

Enjoy!

P.S. I listed the posts in groups of 3 just for easier readability. There’s no connection between the three posts in each batch.

7 thoughts on “On deck for the first half of 2020”

  1. Many to look forward to, but “Estimating the college wealth premium: Not so easy” might be near and dear to my heart.

    A) It should be obvious that as the supply of degrees increases, their value falls. So we should expect the wage premium to be falling even in the best-case scenario.

    B) Also, as you dig deeper into the “ability distribution,” you’re going to be forced to lower the standards for a degree, so the people who have them aren’t as valuable skill-wise.

    So how do you disentangle decreasing mean graduate ability from increasing supply as factors driving down the wage premium? And of course there’s Bryan Caplan’s point of view: educated people are nothing more than Star-Belly Sneetches:

    “And honestly, when was the last time you actually used any of the things you learned in college at your job?”

    So education is just a place to meet the right people and slap the right backs.

    • “And honestly, when was the last time you actually used any of the things you learned in college at your job?”

      I’ve been mostly retired for about 10 years, but a lot of what I learned in college was very useful in my job. The most obvious way: I was a mathematician, so doing research used the math skills (not just “techniques,” but the process of understanding a problem, looking at it from another perspective, and writing coherent explanations of solutions) that I learned in college and graduate school. Teaching did too, and not just when teaching courses I had taken in college or grad school: the skills I developed and used in college gave me a basis for learning new things to teach. And when I volunteered to teach statistics, I used the critical eye I developed in college to say, “Wait a minute” to quite a lot of the things I found in statistics textbooks, so I chose textbooks very carefully and supplemented them as needed to give more of the “why’s, wherefore’s, and when not’s” that I realized were needed to give the students a good (not superficial, not misguided) view of statistics.

      But I also used what I learned in some college courses other than math (physics, anthropology, Asian history, and languages in particular come to mind) in helping me learn to teach people from a variety of backgrounds, especially by helping me learn to question my preconceptions and be open to new information and new perspectives.

  2. Adede/jim:

    I would be interested in hearing the specific details of what you learned in school that you are using at your job. I think recent graduates are often surprised by how little of what they learn in math/statistics classes gets used day to day. The classic example is math of all kinds: despite working in technical areas for many years, I never needed to calculate an integral, and I only had to take a derivative a handful of times.
