2 perspectives on the relevance of social science to our current predicament: (1) social scientists should back off, or (2) social science has a lot to offer

Perspective 1: Social scientists should back off

This is what the political scientist Anthony Fowler wrote the other day:

The public appetite for more information about Covid-19 is understandably insatiable. Social scientists have been quick to respond. . . . While I understand the impulse, the rush to publish findings quickly in the midst of the crisis does little for the public and harms the discipline of social science. Even in normal times, social science suffers from a host of pathologies. Results reported in our leading scientific journals are often unreliable because researchers can be careless, they might selectively report their results, and career incentives could lead them to publish as many exciting results as possible, regardless of validity. A global crisis only exacerbates these problems. . . . and the promise of favorable news coverage in a time of crisis further distorts incentives. . . .

Perspective 2: Social science has a lot to offer

42 people published an article that begins:

The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping.

The author list includes someone named Nassim, but not Taleb, and someone named Fowler, but not Anthony. It includes someone named Sander but not Greenland. Indeed it contains no authors with names of large islands. It includes someone named Zion but no one who, I’d guess, can dunk. Also no one from Zion. It contains someone named Dean and someone named Smith but . . . ok, you get the idea. It includes someone named Napper but no sleep researchers named Walker. It includes someone named Rand but no one from Rand. It includes someone named Richard Petty but not the Richard Petty. It includes Cass Sunstein but not Richard Epstein. Make of all this what you will.

As befits an article with 42 authors, there are a lot of references: 6.02 references per author, to be precise. But, even with all these citations, I’m not quite sure where this research can be used to “support COVID-19 pandemic response,” as promised in the title of the article.

The trouble is that so many of the claims are so open-ended that they don’t tell us much about policy. For example, I’m not sure what we can do with a statement such as this:

Negative emotions resulting from threat can be contagious, and fear can make threats appear more imminent. A meta-analysis found that targeting fears can be useful in some situations, but not others: appealing to fear leads people to change their behaviour if they feel capable of dealing with the threat, but leads to defensive reactions when they feel helpless to act. The results suggest that strong fear appeals produce the greatest behaviour change only when people feel a sense of efficacy, whereas strong fear appeals with low-efficacy messages produce the greatest levels of defensive responses.

Beyond the very indirect connection to policy, I’m also concerned because, of the three references cited in the above passage, one is from PNAS in 2014 and one from Psychological Science in 2013. That’s not a good sign!

Looking at the papers in more detail . . . The PNAS study found that if you manipulate people’s Facebook news feeds by increasing the proportion of happy or sad stories, people will post more happy or sad things themselves. The Psychological Science study is based on two lab experiments: 101 undergraduates who “participated in a study ostensibly measuring their thoughts about ‘island life,’” and 48 undergraduates who were “randomly assigned to watch one of three videos” of a shill. Also a bunch of hypothesis tests with p-values like 0.04. Anyway, the point here is not to relive the year 2013 but rather to note that the relevance of these p-hacked lab experiments to policy is pretty low.
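
To see why those researcher degrees of freedom matter, here’s a minimal simulation (a toy sketch of my own, not anything from either paper, with all numbers invented) of how testing several outcomes on pure noise inflates the chance of turning up at least one p-value below 0.05:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def any_significant(n_subjects=100, n_outcomes=5, alpha=0.05):
        # Two-group lab "experiment" with no true effect on any outcome.
        group = rng.integers(0, 2, n_subjects)                # random assignment
        outcomes = rng.normal(size=(n_subjects, n_outcomes))  # pure noise
        pvals = [stats.ttest_ind(outcomes[group == 0, j],
                                 outcomes[group == 1, j]).pvalue
                 for j in range(n_outcomes)]
        return min(pvals) < alpha  # did any comparison come out "significant"?

    rate = np.mean([any_significant() for _ in range(2000)])
    print(f"Share of null experiments with at least one p < 0.05: {rate:.2f}")
    # Roughly 1 - 0.95**5, about 0.23, not the nominal 0.05 -- and that's before
    # counting other forking paths (subgroups, exclusions, alternative tests).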

Also, the abstract of the 42-author paper says, “In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues.” But then the paper goes on to make unqualified statements that the authors themselves don’t even seem to agree with.

For example, from the article, under the heading, “Disaster and ‘panic’” [scare quotes in original]:

There is a common belief in popular culture that, when in peril, people panic, especially when in crowds. That is, they act blindly and excessively out of self-preservation, potentially endangering the survival of all. . . .However, close inspection of what happens in disasters reveals a different picture. . . . Indeed, in fires and other natural hazards, people are less likely to die from over-reaction than from under-reaction, that is, not responding to signs of danger until it is too late. In fact, the concept of ‘panic’ has largely been abandoned by researchers because it neither describes nor explains what people usually do in disaster. . . . use of the notion of panic can be actively harmful. News stories that employ the language of panic often create the very phenomena that they purport to condemn. . . .

But, just a bit over two months ago, one of the authors of this article wrote an op-ed titled, “The Cognitive Bias That Makes Us Panic About Coronavirus”—and he cited lots of social-science research in making that argument.

Now, I don’t think social science research has changed so much between 28 Feb 2020 (when this pundit wrote about panic and backed it up with citations) and 30 Apr 2020 (when this same pundit coauthored a paper saying that researchers shouldn’t be talking about panic). And, yes, I know that the author of an op-ed doesn’t write the headline. But, for a guy who thinks that “the concept of ‘panic'” is not useful in describing behavior, it’s funny how quickly he leaps to use that word. A quick google turned up this from 2016: “How Pro Golf Explains the Stock Market Panic.”

All joking aside, this just gets me angry. These so-called behavioral scientists are so high and mighty, with big big plans for how they’re going to nudge us to do what they want. Bullfight tickets all around! Any behavior they see, they can come up with an explanation for. They have an N=100 lab experiment for everything. They can go around promoting themselves and their friends with the PANIC headline whenever they want. But then in their review article, they lay down the law and tell us how foolish we are to believe in “‘panic.'” They get to talk about panic whenever they want, but when we want to talk about it, the scare quotes come out.

Don’t get me wrong. I’m sure these people mean well. They’re successful people who’ve climbed to the top of the greasy academic pole; their students and colleagues tell them, week after week and month after month, how brilliant they are. We’re facing a major world event, they want to help, so they do what they can do.

Fair enough. If you’re an interpretive dancer like that character from Jules Feiffer, and you want to help with a world crisis, you do an interpretive dance. If you’re a statistician, you fit models and make graphs. If you’re a blogger, you blog. If you’re a pro athlete, you wait until you’re allowed to play again, and then you go out and entertain people. You do what you can do.

The problem is not with social scientists doing their social science thing; the problem is with them overclaiming, overselling, and then going around telling people what to do. [That wasn’t really fair of me to say this. See comment here. — AG]

A synthesis?

Can we find any overlap between the back-off recommendation of Fowler and the we-can-do-it attitude of the 42 authors? Maybe.

Back to Fowler:

Social scientists have for decades studied questions of great importance for pandemics and beyond: How should we structure our political system to best respond to crises? How should responses be coordinated between local, state and federal governments? How should we implement relief spending to have the greatest economic benefits? How can we best communicate health information to the public and maximize compliance with new norms? To the extent that we have insights to share with policy makers, we should focus much of our energy on that.

Following Fowler, maybe the 42 authors and their brothers and sisters in the world of social science should focus not on “p less than 0.05” psychology experiments, Facebook experiments, and ANES crosstabs, but on some more technical work on political and social institutions, tracing where people are spending their money, and communicating health information.

On the plus side, I didn’t notice anything in that 42-authored article promoting B.S. social science claims such as beauty and sex ratio, ovulation and voting, himmicanes, Cornell students with ESP, the critical positivity ratio, etc etc. I choose these particular claims as examples because they weren’t just mistakes—like, here’s a cool idea, too bad it didn’t replicate—but were quantitatively wrong, and no failed replication was needed to reveal their problems. A little bit of thought and real-world knowledge was enough. Also, these were examples with no strong political content, so there’s no reason to think the journals involved were “doing a Lancet” and publishing fatally flawed work because it pushed a political agenda.

So, yeah, it’s good that they didn’t promote any of these well-publicized bits of bad science. On the other hand, the article doesn’t make it clear that not all the science they do promote can be trusted.

Also, remember the problems with the scientist-as-hero narrative.

P.S. More here from Simine Vazire.

54 thoughts on “2 perspectives on the relevance of social science to our current predicament: (1) social scientists should back off, or (2) social science has a lot to offer”

  1. This post reminded me a little bit of the recent discussions on this blog about covid models. I wonder how many models are better than a Fermi estimate (https://maybury.ca/the-reformed-physicist/2020/04/09/no-better-than-a-fermi-estimate/), and more generally, I wonder how many social science (or other science?) studies offer more than a ‘good guess’ would before they are even carried out.
    (BTW – I do use models. I do analyze data from studies. And I do think social science is worth studying.)

    • Jd:

      I dunno. The phrase “Fermi estimate” doesn’t really mean much until you tell me how you got the estimate. We can rename all our statistical models and call them Fermi estimates and we still have the problem of constructing good models—sorry, good “Fermi estimates.” I assume that some statistical understanding and subject-matter knowledge can help people improve their models and their predictions. Sure, there’s high uncertainty, but the principles of statistics still apply.

      To put it another way: At the link, David Maybury writes, “‘models’ that offer forecasts that are no better than Fermi estimates aren’t really models.” But what makes him so sure that these forecasts “are no better than Fermi estimates”? It depends on the Fermi estimate in question. You can come up with some really bad Fermi estimates, just like you can come up with some really bad models.
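
      For concreteness, here’s the sort of thing I’d count as a Fermi estimate: multiply a few rough ranges together and see how wide the answer comes out. A toy sketch with made-up inputs (not a real forecast):

        import numpy as np

        rng = np.random.default_rng(1)

        # Total infections = population x eventual attack rate, where the
        # attack rate is known only as "somewhere between 5% and 40%".
        population = 10_000_000
        attack_rate = rng.uniform(0.05, 0.40, size=10_000)
        infections = population * attack_rate

        lo, hi = np.percentile(infections, [10, 90])
        print(f"Fermi-style 80% range: {lo:,.0f} to {hi:,.0f} infections")
        # The output is only as good as those made-up ranges: a Fermi estimate
        # is itself a (very simple) model, and it can be a bad one.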

      • I guess I took it more to mean that models that don’t offer forecasts better than a back-of-the-envelope or finger-counting exercise aren’t of much added value. In other words, if your fancy model’s uncertainty is so large that it is not better than your back-of-the-envelope approximation, then why bother? (Technically, I think any type of calculation, “Fermi estimate” or otherwise, might conceivably be called a “model” of some kind.)
        For the extension to studies, if the Facebook experiment, high researcher degrees of freedom, p-value less than .05 study would really offer the same uncertainty as a good guess, then why bother?

        • > much added value

          At first I agreed, and then disagreed, so now I don’t know and will share my confusion.

          On the one hand, I take the point that, if your model doesn’t offer any insight/predictive power beyond your initial intuitions and background knowledge, then it doesn’t seem like the model is doing much work.

          But on the other hand, the model was almost certainly built based on your intuitions and existing knowledge, so if done well it represents a formalization of ideas that, up until that point, existed only in your head. So there is added value in the model as either an aid for communication (to get those ideas into a form that others can see/use them) or as an aid to critical thinking (clarifying and testing assumptions).

        • ” So there is added value in the model as either an aid for communication (to get those ideas into a form that others can see/use them) or as an aid to critical thinking (clarifying and testing assumptions).”

          That is a good point. I would agree with that. Once that is put out there, then the modeler could say, ‘this model does not make predictions with lower levels of uncertainty than this simple XYZ calculation’. Maybe it is more an issue of transparency, then. ‘I did this fancy model, here are the 3 pages of Methods describing it, but it doesn’t really offer much more than a rough estimate.’ That is valuable information, I guess. I think it would be helpful to see those types of disclaimers (in words as well as the numerical uncertainty estimate from the model).

          So what about a study? Does a Facebook experiment, high researcher degrees of freedom, p-value less than .05 study offering similar uncertainty as a good guess, have anything to offer (other than blog examples)? Is it added value in terms of information, or is it just added noise that detracts?

        • “So there is added value in the model as either an aid for communication (to get those ideas into a form that others can see/use them) or as an aid to critical thinking (clarifying and testing assumptions).”

          This is true but it’s not the same as having predictive value on which to base public policy. A formalized model requires formalization of many parameters that wouldn’t be necessary in just making a good guess, and how those formalizations affect the output will take time to understand. In the meantime the model prediction could be significantly worse than a good guess.

          The appropriate way to think of a model is that it’s an *attempt* to formalize existing intuitions and knowledge. That doesn’t mean it’s a successful attempt. Until it’s clear that the model output is better than a good guess, the model should be relegated to research only, and not used for public policy.

        • Responding to both jd and jim, since I think your points are somewhat related:

          First, I agree with jim that just having a model does not make anything better on its own, and as jd says it is incumbent on modelers to show, at minimum, that their model does better than a simpler/clearer set of assumptions would allow. It’s like R-squared: it measures how much “better” your model does (in terms of squared error) than just using the mean; if you can do as well or better with just the mean, you might as well stick with that. (There’s a small sketch of that baseline comparison just after this comment.)

          But both jim and jd also point out that models often take time to understand and require transparency. I would say that this transparency is part of what jim calls “having predictive value on which to base public policy.” If you have made the assumptions and mechanisms of your model transparent, people can more easily evaluate whether they are reasonable or not. And if they judge them reasonable, this gives a policy maker a reason to trust the model beyond just its predictive capacity. So transparent modeling gives a way for decision makers to know how much weight to give that model’s predictions.

          I suspect we would agree that most COVID modeling would not meet these criteria, as jd says.

          And as to the types of studies jd mentions, I think they also fail in the same way: it is not clear what assumptions/mechanisms would be needed to tie their results to our understanding of the world. They scratch related itches. Fancy but inscrutable models scratch the itch for what Box called “mathematistry”, claiming that a real-world problem has been solved by recourse to an ill-fitting formal version of it. Underpowered atheoretical noise-mining studies scratch the itch for what I might as well call “truthiness” (a la Colbert): claiming a revolution in our understanding of the world by poor application of the forms of science.
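
          Picking up the R-squared point above, here’s that baseline comparison as a tiny sketch (my own illustration, nothing specific to the COVID models being discussed):

            import numpy as np

            def r_squared(y, y_pred):
                # 1 - SS_residual / SS_total: the fraction of squared error
                # removed relative to the constant prediction y.mean().
                ss_res = np.sum((y - y_pred) ** 2)
                ss_tot = np.sum((y - np.mean(y)) ** 2)
                return 1.0 - ss_res / ss_tot

            y = np.array([2.0, 4.0, 5.0, 8.0])
            print(r_squared(y, np.full_like(y, y.mean())))       # 0.0: the mean baseline
            print(r_squared(y, np.array([2.5, 3.5, 5.5, 7.5])))  # ~0.95: beats the mean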

        • gec:

          Sounds like we generally agree but I don’t agree with this part:

          “If you have made the assumptions and mechanisms of your model transparent, people can more easily evaluate whether they are reasonable or not.”

          That’s true but it doesn’t tell you anything about whether the model makes accurate predictions! :) The whole point of building a model is to test it and find out if what’s perceived to be “reasonable” with existing knowledge is, in fact, true.

          In the late 1960s everyone agreed that Ehrlich’s model of worldwide starvation in the 1980s was “reasonable”. The conundrum of what to do about the approaching famine was resolved when it turned out that what seemed like a “reasonable” model was in fact *badly* wrong.

        • That seems like a good way to summarize it. I certainly think it’s a good discussion to keep in mind anytime I am thinking about an analysis.
          As far as covid information goes, it reminds me of those freestone streams I fish in the mountain west after a big runoff – blown out and muddy. It is hard to sift through all of the information. I certainly aim/hope that any research our team puts out doesn’t just add to the silt in the stream. So I think this is a good discussion to keep in mind.

        • Did “everyone” really agree that Ehrlich’s models were reasonable?

          I would say it should have been brought up even in the 60s that there aren’t any clear examples anywhere in history of a classic “Malthusian catastrophe” in humans, and that there is at least one pretty good reason to think it cannot happen.

          Sure, there have been many very bad famines in history, including some that lasted many years and were deadly enough to decrease the overall population of a region. But these are due to an external change, usually climate (though for the shorter ones it can be war or policies), sharply dropping the food-producing capacity of the region – not due to the population outgrowing a fixed food-producing capacity.

          A simple population-overshooting-the-carrying-capacity-followed-by-a-crash “Malthusian catastrophe” may not be possible given how long human generation times are relative to other species (in which this does happen).

        • > it doesn’t tell you anything about whether the model makes accurate predictions

          That’s true, and those are different, if not strictly independent, criteria. But even within predictive evaluation, there are two senses: In the more colloquial sense, we can ask what a model predicts about the future and then wait and see if it’s right. That’s mostly what it seems we’re doing now with most COVID models. But in another sense, we can set the model in an initial state representing the past and ask if it correctly “predicts” what happened then.

          In experimental sciences, this latter sense of “prediction” is easy to test, because we can set the parameters of the model to exactly correspond to some prior experimental setup. In epidemiology, many social sciences, and sciences like hydrology or geology involving rare events (like floods, earthquakes, elections, etc.) it is less clear since we have fewer past events and it is not clear whether those same past settings will ever really recur.

          So while I don’t know enough about the Ehrlich models to know if this is feasible, it would have been reasonable to ask if they “predicted” the correct interaction between food consumption and population growth in a comparable past situation. But I doubt that any such situation would really be “comparable”, in the same way that the 1919 flu is not really comparable to the current COVID pandemic. So testing past “predictions” may not be feasible in many settings, forcing us to rely on our judgments about whether the assumptions are reasonable until we can wait for the future to arrive.
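
          In case it helps, here is the kind of past-data check I have in mind, as a toy sketch with invented numbers (not any of the actual COVID models): fit a simple model on an early window, then see how its retrodictions compare with the later data we already know.

            import numpy as np

            # Invented daily case counts: early exponential growth that then slows.
            cases = np.array([10, 13, 18, 24, 31, 42, 55, 71, 90, 110,
                              128, 141, 150, 155], dtype=float)
            train, test = cases[:10], cases[10:]
            days_train, days_test = np.arange(10), np.arange(10, 14)

            # "Model": constant exponential growth, fit on the early window only.
            slope, intercept = np.polyfit(days_train, np.log(train), 1)
            pred = np.exp(intercept + slope * days_test)

            for day, obs, p in zip(days_test, test, pred):
                print(f"day {day}: observed {obs:.0f}, retrodicted {p:.0f}")
            # The constant-growth fit overshoots once growth slows; seeing that
            # on known data is the point of the check, before trusting the
            # model's forward forecasts.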

        • Yeah, given technological/social changes, the lack of historical equivalents isn’t enough by itself to dismiss the Population Bomb model.

          I think that, combined with the “mechanism” problem, it is enough: given how long human generation times are, even if people didn’t simply adapt by having fewer children or improving food-production technology [which is what actually happened*], I think the bad outcomes would look more like “increased infant mortality, infectious disease in vulnerable populations, etc.” rather than “population way overshoots the carrying capacity and then 3 billion people starve to death”.

          *IIRC, since 1960 or so, the total number of hungry people in the world has basically varied with economic cycles, long-term roughly constant; which means the percentage of the world population living with hunger has actually gone down.

        • confused says:

          “Did “everyone” really agree that Ehrlich’s models were reasonable?”

          Well maybe not *everyone*. From what I’ve read, however, it was widely agreed among academics that there was a major problem.

        • OK. I wasn’t alive then, and it hasn’t been clear to me from what I’ve read whether it was a real consensus among experts or a case of “media amplification” of the scarier view.

          But the Population Bomb is just one of many failed models that people accepted because its mechanism made sense, even though it had never been tested or shown to be accurate. There are tons of other models in science history that people thought made sense, but failed when they were subjected to a comparison to reality.

          Imagine if Boeing or Airbus made flight models based on what engineers thought “made sense,” with no testing at all!! The Naderites would have an ultra high frequency terraspasm, but if we make public policy with a virus model that we just dreamed up yesterday, whoa, that’s amazing science!!

        • Well, to be fair, the difference is that in the case of COVID a decision had to be made very quickly to be useful.

          I am not sure the models actually used gave “better” answers than looking at historical evidence from the last few pandemics and adjusting for what was known at the time about this one would have, though. (E.g. “Well, we’re seeing reports of CFR in the 1-4% range; given that CFR is generally over-estimated early in the epidemic, this isn’t going to be as bad as 1918-19 but will almost certainly be worse than 2009-10. It might be comparable to 1957 and 1968, slightly less severe, or somewhat worse.”)

  2. Re: ‘Panic’

    A friend of mine once took lessons to be a scuba diving instructor. One of the things she learned in the course was that when something goes wrong, “Don’t panic.” She quite sensibly asked the instructor: since panic is defined as a counterproductive response, when is it ever correct to panic? And if it is never correct to panic, why should we constantly need to advise people not to panic? And if panic is caused by not rationalizing the correct thing to do, wouldn’t one panic no matter how many times one has heard that one shouldn’t panic?

    She passed the course after learning to ask fewer questions.

    • Jonathan:

      It’s tricky. In many settings, the first thing that comes to mind is not a good thing to do. For example, it’s natural to scream at your kids, but screaming at your kids is counterproductive (I think). The advice to not scream at your kids, or to count to ten before saying something harsh, etc., is good advice—but can be hard advice to follow, which is why we hear it so often. If the advice were easy to follow, nobody would need to remind us of it. I do think it’s worth asking why such advice is so difficult to follow. But the fact that the advice is hard to follow should not immediately be taken as a sign that sometimes it’s a good thing to scream at your kids. (Here I’m excluding emergency situations such as screaming to warn your kid that he or she is about to get run over by a bus.)

      Similarly, I can well believe that panicking comes naturally to people, that it can be hard to follow the advice not to panic, but that panicking is pretty much never a good idea.

      • The defining characteristic of panic is that it is uncontrollable. People can learn not to panic or be trained not to panic, but I don’t think it is ever possible to get someone not to panic simply by telling them not to panic.

        I think that when people say “don’t panic” they don’t really mean “don’t panic”, they mean something more like “think carefully rather than doing the thing that seems instinctively right to you.”

        • I think the real problem is that when people say “Don’t panic”, they are often being condescending and dismissive of genuine concerns. When I hear “don’t panic”, I ask myself is the person being told not to panic raising an issue that could be expensive to deal with, and is the person saying “Don’t panic” the one who’s going to have to pay for fixing it. If the answer to both is yes, time to panic.

      • You are correct. In the current debate, though, this is not about those concerns. There are innumerable examples of public officials withholding information from citizens to “avoid panic.” This isn’t like the scuba diving example though, in which the lesson precedes the event by some long passage of time. In the same breath with which they pass on the potentially panic-inducing information, they could explain what they feel to be the correct response, AND WHY. Note that if the WHY is unpersuasive, they will inherit the panic. But the alternative is to tell people to celebrate Lunar New Year with typical abandon and see 10,000 people die as a result. Maybe a little panic would have been better.

  3. According to my “You Can Be A Shadetree Psychologist” book…

    It doesn’t work to tell yourself not to do [fill in the blank]. Negative instructions tend to increase stress or tension without decreasing the proscribed behavior.

    But you can train yourself to identify incipient panic and respond to it with some positive instruction. As in, if you feel like you’re about to panic then check all your gauges and then take a second to mentally rehearse the emergency procedures you were trained in.

  4. The sad truth is that so much research is actually worth so little. There are gems, of course, but they are not easy to pick out – at least not when faced with a crisis and a need to act (hopefully on good advice/reasoning). The academic research mill produces volumes, much of it for purposes of promotion and tenure, getting grants, etc. To be sure, there are potential implications of all this research that may be relevant to COVID issues, but there is no time to figure out what these are nor to do the extra research to see which pieces actually yield useful insights for COVID. It is much the same with the multitude of models being produced. I’m still looking for the user guide for all this work.

    • Dale said, “The sad truth is that so much research is actually worth so little. ”

      Agreed. I often wish someone would start a “Review this paper” website, to maybe help separate the wheat from the chaff.
      (Although Andrew’s blog to some extent does this.)

  5. Just to have one post a little on the “lighter side”, I wonder how many readers have no idea what you mean when you mention “interpretive dancer like that character from Jules Feiffer”. Your age is showing

  6. “insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts.”

    ???

    Has this ever been shown to be true? Have “insights from the social and behavioural sciences” *ever* been used to “align human behaviour with the recommendations of” some public official?

    • Yeah, they have.

      Default settings on retirement savings look like a big deal, an insight from psychology and behavioral economics: https://www.nber.org/papers/w12009.pdf

      Checklists seem to work, in keeping people alive AND killing them, from psychology, public health and organizational/management science: https://www.hsph.harvard.edu/news/magazine/fall08checklist/

      And of course, there’s Mutual Assured Destruction, from game theory: https://en.wikipedia.org/wiki/Mutual_assured_destruction

      I think that the more ingrained these “insights from social and behavioral sciences” become, the less we think of them as insights from science and the more we just think of them as truth or things we know. But they didn’t just appear out of a vacuum. Maybe hand-washing and covering sneezes and so forth are even better examples, but they blur the line between “medicine” and “social science”, so I’ll leave them out of the list and just argue that most of what we “know” about health behaviors is stuff that came down to us from social science research (even if that research was the pragmatic/operationalizing work of turning insights from medicine into effective behaviors for people).

      I feel like I’m probably missing some pretty obvious examples, but that’s exactly the point – after a while, they are so ubiquitous that we don’t even see them anymore.

      • “I think that the more ingrained these “insights from social and behavioral sciences” become, the less we think of them as insights from science and the more we just think of them as truth or things we know. ”

        There’s also the other possibility: social scientists simply quantified well known common sense.

        Of your examples, the game theory example might be a unique insight from social sciences.

        The other two are good examples of how social science claims to have discovered things that everyone already knew. Checklists probably go back to the dawn of writing and may well have been among the first things written, testifying to people’s knowledge of their efficacy. Default settings on IRAs, 401ks and 403bs had to await the invention of these kinds of accounts but doesn’t the pension – hundreds of years old at least – seem like a blatantly obvious example of “default settings on retirement savings”?

        I can hardly credit those discoveries to social science.

          I think we agree that a lot of the things social scientists look at have been thought about for a long time by smart people who noticed things. And we probably both agree that social science’s contribution lies in establishing whether the general insight is applicable and effective for the purpose at hand.

          Checklists for highly-qualified and carefully-trained well-disciplined technicians (surgeons) can have incredibly large impacts on important outcomes (like people’s being alive-ness). It isn’t super obvious that would be the group to target if you were thinking “checklist”*. Social scientists can help measure the scale of the problem and propose important policy solutions…like a checklist in the operating room. Which helps some people stay alive.

          Solving the social problem of caring for old people using some foresight is not new, and individuals’ inability to save has led to it being viewed as a collective action problem. Part of the innovation with default savings was in making a private action consistent with solving the social problem and reducing the collective action problem (read that as you will). You know, tricking people into helping their future-selves in ways they are (at various points in time) grateful for.

          And on the other hand, while game theory is legitimately weird and interesting compared to most ways we think, it didn’t come out of thin air. It wasn’t new to consider the optimal strategy of your opponent. People have been playing Go and fighting wars forever.

          So everything is a matter of scale here. Maybe we just disagree on what counts as a big contribution to how we do things.

          *Or if you do think hospitals first, then that’s because you learned it from social science! See how awesome social science is!

  7. Are we including economics as a social science? Why not.

    Social psychologists think they understand the world with N=100 experiments. Economists just need n=100 parameters x equations.

    I’m not really convinced that macroeconomics — having not really predicted the 2008 financial meltdown, or understood why inflation has been so low (and wages risen so little) during the long economic expansion since then, are doing more airy punditry now.

    • That last sentence should be:

      I’m not really convinced that macroeconomists — having not really predicted the 2008 financial meltdown, or understood why inflation has been so low (and wages risen so little) during the long economic expansion since then — are doing more than airy punditry now.

  8. I propose an experiment: take those 42 people and isolate them from each other but not from the books and journal articles of their professions, and ask each of them to make predictions about the effect of the same set of very specific interventions/situations relevant to the current pandemic. Then compare the sign and magnitude of their predictions.
    What’s your prior on the number of predictions for which >50% agree on the sign and order of magnitude of the effect?

  9. I like your article.

    1. Pour lots of taxpayer money into academia
    2. Set the most important kpi to keep nice academia job to be number of papers published in peer reviewed journals
    3. Enjoy the inevitable result

  10. This article doesn’t flatter the existing social science literature. We face a once-in-a-lifetime event, and the existing literature gives mostly vapid-sounding guidance. Take this gem at the beginning of the article:

    One of the central emotional responses during a pandemic is fear. Humans, like other animals, possess a set of defensive systems for combating ecological threats [4,5]. Negative emotions resulting from threat can be contagious [6], and fear can make threats appear more imminent [7].

    But the pandemic seems to be a huge opportunity for future work delving into the details of how it will change society and behavior. Forget the vague, overarching studies of generalities and focus instead on the myriad of details. Things like how the meat-packing industry is going to change its assembly-line procedures. How will STD transmission change? Look carefully at working remotely: when does it work and when does it not? How can distance education be made better? How can the collegiate experience be replicated among distance learners? Etc.

  11. Andrew’s new concept of “doing a Lancet” is worth thinking about. The main idea seems to be dropping scientific standards to publish stuff that they favour politically. This does seem to be the key dynamic when, e.g., Lancet editor Richard Horton gave this emotional speech:

    https://socialistworker.co.uk/art/9594/Video+of+Lancet+editor+Richard+Hortons+speech+at+Time+to+Go+demonstration

    and then a few days later the Lancet published a preposterously bad paper backing up his speech, debunked here:

    https://www.tandfonline.com/doi/pdf/10.1080/10242690802496898

    But some of the bad stuff they publish seems to be just a grab at sensationalism/publicity as when they published the paper linking autism with the MMR vaccine. This stuff seems to be just garden variety tabloid science.

    It may be true, though, that junk science in the Lancet is more likely to have a political dimension than junk science in, say, PNAS does.

  12. Is anyone else concerned by the fact that the article was accepted only 14 days after submission? Did it even go through peer review? Simine Vazire on Twitter has made note of omissions of relevant literature.

    • John:

      I doubt peer review would have helped, in that a key flaw of the article was a reliance on the literature which had already been endorsed by the peers. If the reviewers had been people like Simine Vazire and Nick Brown, then, sure, they would’ve caught problems. But I’m guessing that most peer reviewers would just think something like: Hey, they’re citing papers from top journals like PNAS and Psychological Science, so it’s all good.

  13. “Now, I don’t think social science research has changed so much between 28 Feb 2020 (when this pundit wrote about panic and backed it up with citations) and 30 Apr 2020 (when this same pundit coauthored a paper saying that researchers shouldn’t be talking about panic).”

    There’s actually a preprint of this article dated March 24th: https://psyarxiv.com/y38m9

    So the research would have had to have changed in less than a month.

  14. I respect your work and enjoy your blog, but it’s hard to take something like this seriously:

    > the problem is with [social scientists] … telling people what to do.

    when 24 hours later, you publish a blog post with the headline:

    > Here’s what academic social, behavioral, and economic scientists should be working on right now.

    I read the article and I did not see any prescriptions about what people “should” do for COVID and they never once called me “foolish”. The authors are not policy makers. They published a lit review, and a pretty unbiased one at that (see your “Negative emotions” example).

    A lit review is not meant to be an arbiter of facts. If your policy makers treat them as unassailable facts, maybe the issue is with those policy makers and not with the researchers who summarized a bunch of experiments that were run before COVID-19 even existed.

    As an aside, I think it’s an unfair slight to single out Cass and make such a big deal about the word “panic”. If your blog posts were co-written with 41 other authors, I suspect there might be a few things in there that don’t perfectly align with your views either.

    • Jeff:

      Fair enough. The article was not telling anyone what to do. I’m annoyed that this review article claims relevance for some research that I don’t think has much relevance to the real world, and I’m annoyed that Sunstein’s op-ed drew inferences from a silly experiment on electric shocks to claim that people were responding wrong to a public health risk. I’m also annoyed that the review specifically disparaged the idea of talking about panic, while Sunstein writes op-eds, which I assume get many more readers than that review article, where he explicitly talks about panic, and not with scare quotes either.

      I don’t think it’s unfair to write about Sunstein, not at all. What’s his position on “panic”? Is it a word that’s not good enough for science but good enough for op-eds? If it’s really true, as claimed in the review, that “News stories that employ the language of panic often create the very phenomena that they purport to condemn,” then isn’t it irresponsible of him to write news stories that employ the language of panic?

      Nobody forced Sunstein to sign on to that review article. If there are things in it he disagrees with, he should let us know. I agree with you that a literature review is not an arbiter of facts. But what’s the point of a literature review that (a) contains studies that are not relevant to the points being made, and (b) makes claims that one of the most prominent authors of the article disagrees with?

      That all said, you’re right that they’re not telling anyone what to do. I take that part back.
