
He wants some readings on the replication crisis that are accessible to college freshmen in economics

Harvey Rosen writes:

My query is similar to the one from André Ariew that you posted on August 7, in which he asked if you could suggest readings for his graduate course in philosophy. I occasionally teach an undergraduate course on introductory microeconomics. I like to devote some time to discussing challenges to economists’ conventional model of rational decision-making. As you know, some of the most salient challenges come from psychologists, who argue that people’s decisions are driven by unconscious biases rather than sensible tradeoffs of costs and benefits. I’d like my students to learn something about the replicability crisis and its relevance to assessing these claims. The problem is that the great majority of college freshmen have had no exposure to statistics whatsoever. Hence, I’m looking for a self-contained discussion that does not rely heavily on statistical jargon. Last time I taught the course I sent the students a link to “What has happened down here is the winds have changed,” but even that was too technical for most of them. My question, then, is whether you have any articles to recommend.

My reply: I’m not sure. I guess it could make sense to start with some specific examples of unreplicated or unreplicable studies such as beauty and sex ratio, early childhood intervention (see here for a completely nontechnical version), shark attacks and voting, and elite performance in sports. Give students a choice of examples, and they can pick a topic that particularly interests them and pursue it further.


  1. Kyle MacDonald says:

    The first place I learned about the replication crisis, other than that there was one, was from a video that Derek Muller made last year, “Is Most Published Research Wrong?”, available on YouTube and deliberately echoing Ioannidis’ 2005 article. The discussion has definitely changed since then, and Muller’s video is not perfect, but it’s the best introduction that I know to the general idea that the wind needed changing.

    It also mentions the diphoton bump at CERN from a couple years back, which is good, because I think it does something to dispel the idea, prevalent among many scientifically inclined young people, that social scientists who get this stuff wrong are just morons who need to learn how to count. If physicists are getting this stuff wrong, too, the problem is not as simple as researchers being bad at math — careless with quantitative data, sure, but not simply incapable of calculating a z-score without bursting into tears.

  2. Anonymous says:

    “As you know, some of the most salient challenges come from psychologists, who argue that people’s decisions are driven by unconscious biases rather than sensible tradeoffs of costs and benefits. I’d like my students to learn something about the replicability crisis and its relevance to assessing these claims”

    * Unconscious Thought Theory (UTT):

    “Researchers Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren, and Rick B. van Baaren tested this hypothesis in a series of studies measuring choice quality and post-choice satisfaction after participants used conscious and unconscious deliberation. The studies supported the deliberation-without-attention effect: conscious thinkers were better able to make normatively more desirable choices between simple products, but unconscious thinkers were better able to choose between complex products.”

    * Failed replication of UTT and meta-analysis:

    “Consistent with the reliability account, the large-scale replication study yielded no evidence for the UTA, and the meta-analysis showed that previous reports of the UTA were confined to underpowered studies that used relatively small sample sizes. Furthermore, the results of the large-scale study also dispelled the recent suggestion that the UTA might be gender-specific. Accordingly, we conclude that there exists no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.”

    * Why the original published UTT findings did not offer (strong) evidence in the first place:

    “As a case study, we perform a rigorous post-publication peer review of the theoretical core of Unconscious Thought Theory (UTT). We present several uncomplicated indices that quantify the quality of evaluation of the results and conclusions by experts who reviewed the article and the amount of interpretation bias on behalf of the authors. The results reveal a failure of expert peer-review to detect empirical reports of sub-standard quality. The analyses reveal there is in fact hardly any empirical evidence for the predictions of UTT in important papers that claim its support. Our advice is to evaluate before you replicate.”
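
    The “underpowered studies” point above can be made concrete without any statistical jargon. Below is a minimal Python sketch (my own illustration, not from the papers quoted; the effect size, sample sizes, and significance cutoff are arbitrary choices) of why small studies that only get reported when they cross p < .05 will, on average, exaggerate the effect they claim to measure:

```python
import random
import statistics

random.seed(1)

def significant_studies(n, true_effect=0.1, sims=20000):
    """Simulate many two-group studies with n subjects per group, where the
    true effect is small (0.1 standard deviations). Return the average
    estimated effect among the studies that reached p < .05, and the power
    (the share of studies that reached significance at all)."""
    se = (2 / n) ** 0.5  # standard error of the difference in means
    estimates = [random.gauss(true_effect, se) for _ in range(sims)]
    significant = [d for d in estimates if abs(d / se) > 1.96]
    return statistics.mean(significant), len(significant) / sims

for n in (20, 2000):
    mean_sig, power = significant_studies(n)
    print(f"n per group = {n}: power ~ {power:.2f}, "
          f"average significant estimate ~ {mean_sig:.2f}")
```

    With 20 subjects per group the test rarely detects the real but tiny effect, and on the occasions it does, the published estimate is several times larger than the truth; with 2000 per group the significant estimates sit close to 0.1. That is the mechanism behind “previous reports were confined to underpowered studies” in one loop.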

  3. Lee Jussim says:

    Science Going Bad and How to Improve it:

    Talk I gave to a large number of undergrads last year. The video shows lots of slides quite well, and the titles of all sorts of general interest science articles are there. Stuff with titles like:
    Psychology’s Replication Crisis Can’t Be Wished Away
    Lies, Damned Lies, and Medical Science (the article in The Atlantic on Ioannidis)
    Nine Circles of Scientific Hell
    Why Political Polls are So Often Wrong

    Other accessible stuff:
    Loeb, A. (2014). Benefits of Diversity. Nature Physics. About a dozen accessible stories of how presumption and arrogance obstructed progress in astronomy.
    Podcast about 3 renegade scientists who got screwed by dysfunctions in the science system (and were ultimately vindicated). One is an economist.

    NYTimes, Our Feel-Good War on Breast Cancer

    I also have a slew of very readable articles on dysfunctions in politicized areas, strongly implying that the leftist/social justice narratives that dominate the social sciences routinely distort conclusions. These use fairly objective metrics, such as citation counts (e.g., a study finding bias gets cited orders of magnitude more frequently than a study on the same topic that doesn’t, even when the latter study is superior by most basic social science criteria, such as a massively larger sample size). But I am not sure if he wanted that.

    Or they could just read The Blank Slate. Whole book, but very accessible.


  4. Allan Cousins says:

    I would be concerned that introducing this topic to students who, on average, have little hope of even a moderate understanding of what’s really going on would be unduly influential, in the most negative of ways. My explanation of why I think this is so is relegated to the end of this comment (it’s somewhat long-winded, and I wanted to maximize the likelihood that people read the following recommendation in lieu of the author’s original approach).

    Instead of introducing the replicability crisis as some sort of cursory topic… why not have students participate in ways where they can explore the validity and impact of the modelling assumptions for themselves? One way this could be accomplished is through simulation. You could ask your students to write a very basic program (in R, let’s say) where agents come to buy and sell goods or groups of goods. Once they have it set up, they could alter some of the parameters: say, randomly change utilities among agents, increase/decrease bankrolls, relax the rationality assumption (e.g., program some agents to buy because of momentum), etc. They could then record what happens and graph buying/selling as a function of these parameter changes. Finally, they could compare those graphs to trading of real-world goods and draw inferences from visually comparing them (under your guidance, so that inferences are tempered with the proper amount of uncertainty). If the programming requirement is too great, you could program it yourself and ask the students to alter the assumptions in a lab setting.

    The simulation exercise may not be the optimal way to go, but my point is that at the freshman level it’s far more valuable to get students to think about how to test and evaluate assumptions (even if only in a rudimentary way) than it is to say “hey look, some people questioned these assumptions… their field is rife with credibility issues… now back to economics.”
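
    To make the exercise concrete, here is a rough sketch of the kind of toy market described above. The comment suggests R; this version is in Python, and every detail (the number of agents, the price-adjustment rule, the “momentum buyer” behaviour) is an arbitrary assumption chosen for illustration, not anything prescribed in the comment:

```python
import random
import statistics

random.seed(0)

def run_market(n_agents=100, rounds=50, momentum_share=0.0):
    """Toy market: each round a single price is posted and agents decide
    whether to buy one unit. Rational agents buy when the price is below
    their private value; 'momentum' agents ignore value and buy whenever
    the price rose last round. Excess demand then nudges the price.
    Returns the per-round (price, quantity bought) history."""
    values = [random.uniform(0, 100) for _ in range(n_agents)]
    is_momentum = [i < int(momentum_share * n_agents) for i in range(n_agents)]
    price = last_price = 50.0
    history = []
    for _ in range(rounds):
        price_rose = price > last_price
        buys = sum(
            1 for value, momentum in zip(values, is_momentum)
            if (price_rose if momentum else value > price)
        )
        history.append((price, buys))
        # crude tatonnement: demand above half the agents pushes the price up
        last_price, price = price, price + 0.1 * (buys - n_agents / 2)
    return history

baseline = run_market(momentum_share=0.0)   # all agents rational
relaxed = run_market(momentum_share=0.3)    # 30% momentum buyers
```

    Students could then plot the two price paths: with fully rational agents the price settles near the median valuation, while relaxing the rationality assumption visibly changes the aggregate price dynamics, which is exactly the before/after comparison the exercise calls for.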


    Freshmen are impressionable, and if you gloss over fairly technical details, students will be left without understanding much and will more than likely be left thinking exactly whatever it was that their professor told them (in this case, that there are issues with certain areas of research that purport to challenge some assumptions of basic economic models… translated into freshman terms: economic models are probably pretty good as is).

    Think about it from a first-year undergraduate’s perspective. They don’t know statistics. They don’t know psychology. Hell, they don’t know economics! At this point, they don’t know anything of research, of life, of uncertainty. They take pretty much whatever their professors tell them as truth; in the undergraduate’s eyes, professors are the gatekeepers of knowledge.

    So when you teach them neo-classical economics… then tell them some psychological research has attempted to cast doubt on utility maximization, rational agents, etc.… then tell them there has been a fairly substantial crisis in recent years over the use and abuse of statistical practices in psychology papers that has made replication of many results problematic… that this is not a trivial problem… what are they likely to think?

    To me, students are likely to think something along the lines of “yeah… that’s cool, but why would I even care what psychologists have to say about economics? They can’t even do basic math! Why did we spend 2 lectures on this nonsense?” You’ll set them up with a mind-set that certain challenges to classical economic theory (namely those coming from psychologists) are of sloppy character and that they shouldn’t put too much thought into them.

    • Anonymous says:

      “They take pretty much whatever their professors tell them as truth”

      Perhaps that’s (partly) the professor’s fault. If I were a professor, I would start each lecture with something along the lines of “always question everything, including what I am about to tell you.” I reason that (recent) history has taught us this is the correct thing to do, and, most importantly, that it fits with teaching science and how to become a scientist.

      “Why did we spend 2 lectures on this non-sense?”

      I had that same thought, though not after 2 lectures but after an entire 2-year Master’s degree. Hiding abuse of statistical practices, replication problems, etc. from students is not the way forward and could even be considered unethical.

  5. I thought that The Cult of Statistical Significance, by Stephen Ziliak and Deirdre McCloskey, was helpful. Rex Kline mentioned Ziliak’s name in a podcast.

  6. I like your recommendations, Allan Cousins.

  7. Jack PQ says:

    I would suggest a different approach, one that avoids statistics entirely (!). Princeton students must have had high-school exposure to experiments in physics, chemistry, etc., where they were asked to replicate classic experiments by following instructions; if they did it right, they confirmed an effect or a result. Now explain to them that you have a new experiment with a new finding, but when it is assigned to the class, almost nobody is able to replicate it. What’s going on? Did everyone fail to follow the steps? Or, more likely, was the original result just never there?

    That’s what’s happening in much of social psychology. Science should be reproducible. If it isn’t, there is a problem.
