“Curing Coronavirus Isn’t a Job for Social Scientists”

Anthony Fowler wrote a wonderful op-ed.

You have to read the whole thing, but let me start with his most important point, about “the temptation to overclaim” in social science:

One study estimated the economic value of the people spared through social-distancing efforts. Essentially, the authors took estimates from epidemiologists about the number of lives that could be saved, then multiplied them with estimates of the statistical value of a life from economists. The researchers admittedly did not consider any of the potential costs of social distancing. Yet, in the concluding sentence of their abstract, they write, “Overall, the analysis suggests that social distancing initiatives and policies in response to the Covid-19 epidemic have substantial economic benefits.” To an economist, this sentence might simply convey that they computed large benefits but did not consider costs. But to a layperson or policy maker, it sounds like they have conducted a thorough analysis and concluded that social distancing is, on net, economically beneficial. Not surprisingly, many news outlets have cited this study to support the claim that there is no trade-off between saving lives and economic recovery.

We discussed some of the problems with the dollar-value-of-life analysis in the comments section here. But the key “sociological” point here is that social scientists are trading off our collective reputation—they’re spending our cultural capital on these claims. If some dude on the street said that social distancing was worth 8 trillion dollars, you (or a newspaper editor) would be like, huh? But if a respected social scientist says it, then, hmmm . . . maybe we’re on to something here. I also have a problem with this sort of crude dollars-and-cents reasoning because it ignores how things are implemented. A pause in the economy, if done haphazardly and with uncertainty, can wreak economic havoc. A pause that’s well coordinated could cause much less disruption. The relevant social science subdiscipline here is not cost-benefit analysis; it’s whatever it is that helps you understand how different levels of government and different private-sector entities can coordinate. The problems are solved by cooperation, not by turning on the money spigot.
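The arithmetic Fowler describes (an epidemiological lives-saved estimate multiplied by an economic value per statistical life) is easy to sketch. All numbers below are hypothetical placeholders chosen for illustration, not figures from the study:

```python
# Benefits-only value-of-life arithmetic, of the kind Fowler critiques:
# lives saved (from epidemiologists) times the statistical value of a
# life (from economists). Note what is missing: no costs of distancing
# are subtracted anywhere.

def distancing_benefit(lives_saved, value_per_life):
    """Gross benefit of distancing: lives saved times value per life."""
    return lives_saved * value_per_life

# Hypothetical inputs: 1M lives saved at an assumed $8M value each.
benefit = distancing_benefit(1_000_000, 8_000_000)
print(f"${benefit / 1e12:.0f} trillion in 'benefits'")  # → $8 trillion in 'benefits'
```

The point is that a trillion-dollar headline number falls out of a one-line multiplication; whether it supports any policy conclusion depends entirely on the costs left off the other side of the ledger.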

This is all to say that “social science expertise” or “management expertise,” considered broadly, is extremely important here. Political science is relevant too! But not so much the kind of social and political science that I do, or the sorts of studies that Fowler critiques in his op-ed.

OK, now back to the beginning. Fowler writes:

The public appetite for more information about Covid-19 is understandably insatiable. Social scientists have been quick to respond. . . . While I understand the impulse, the rush to publish findings quickly in the midst of the crisis does little for the public and harms the discipline of social science.

Even in normal times, social science suffers from a host of pathologies. Results reported in our leading scientific journals are often unreliable because researchers can be careless, they might selectively report their results, and career incentives could lead them to publish as many exciting results as possible, regardless of validity. . . .

He gives some examples:

In one recent study, survey respondents were asked to self-report their social distancing, but people often misreport their beliefs and behaviors in political surveys. Another study used GPS data to measure visits to places of interest like restaurants and movie theaters, but this seems like a poor test of social distancing at a time when many such places are closed (especially in more Democratic places). A second challenge is that even if we find a clear difference between Democratic and Republican behavior, it’s difficult to say whether this difference is explained by political attitudes or other factors. Democrats tend to live in more urban places, where the pandemic has been more severe and local governments have implemented more stringent policies and guidelines; neither of these studies accounted for these alternative explanations.

Another recent study [parodied here] investigated the extent to which watching “Hannity” versus “Tucker Carlson Tonight” may have increased the spread of Covid-19. This is the kind of study that might make one skeptical in normal times. An extra concern now is that the paper was likely written in just a few days. Although the authors write that they used variation in sunset times to estimate the effect of watching “Hannity,” a closer reading suggests that they’re mostly using variation in how much people in different media markets watch television and how much Fox News they watch. Maybe conservative commentators like Sean Hannity have exacerbated the spread of Covid-19, but it’s dangerous for social scientists to publicize these kinds of results before they have been carefully vetted.

But what about the value of this work? Fowler writes:

One possible reason for rushing science in the midst of a crisis is that the benefits of quickly getting new information to the public and to policy makers outweigh the potential costs of giving them less reliable information. Perhaps one could make this argument for those studying how to cure or prevent the spread of Covid-19. But most of the work being done by social scientists on Covid-19, while interesting and important, is not urgent. Understanding how political attitudes affect social distancing may be relevant for understanding political psychology, for example, and it might even help us design better solutions in a future pandemic, but it doesn’t significantly benefit society to have this information today.

Good point. This work could be valuable, but it’s not urgent.

Fowler continues:

The second troubling trend is the temptation of social scientists to speak outside their areas of expertise. . . . I’ve recently seen scholars in fields as varied as political philosophy and macroeconomics giving public-health advice and predicting the future trajectory of the pandemic without seriously discussing the limits of their knowledge or the credibility of their assumptions. A legal scholar first predicted 500 deaths in the U.S., then appeared to revise that to 5,000, and most recently revised it again to 50,000. Despite the scholar’s lack of any relevant expertise or experience, these woefully optimistic early projections reportedly influenced decisions in the White House.

Just one thing. Law school professors get lots of attention, which maybe they deserve and maybe they don’t. But no need to call them “social scientists.” Social science has enough problems without shouldering the burdens of those smooth-talking b.s. artists.

And let me again remind you of the problems with the scientist-as-hero narrative. Some scientists really can be heroes right now. Those of us who run social experiments, analyze surveys, and publish papers using cool datasets, maybe not so much.

50 Comments

  1. We need more of a patients-as-heroes narrative, as they have skin in the game, to use a characterization that Nassim Taleb resorts to.

    • No, we need to stop thinking that heroes are going to save us and step up and start doing what is actually required, which is a crapload of boots on the ground work.

      • Hi Daniel.

        I believe patients and consumers of sciences are recognizing that they have to walk in a lot of crap to forge accountability.

        Frankly, I think the ‘scientists as heroes’ narrative has lessened in its influence on many people. I think Vinay Prasad’s new book Malignant should be a cautionary tale.

  2. Peter Dorman says:

    The funny thing about the Greenstone piece Fowler links to (social distancing saves us $8 trillion) is that the only thing it adds to the mortality numbers is “this is what economists think Americans would be willing to pay to avoid these extra deaths” — it puts a monetary value on how we’re supposed to think about the lives saved by cutting transmission rates. In what universe is this an addition to the fund of knowledge?

    • Andrew says:

      Peter:

      To be fair to Greenstone etc., not every published article needs to be “an addition to the fund of knowledge.” It’s ok also for experts to write articles where they apply existing ideas in useful ways to real problems. For example, on this blog, sometimes I present new ideas, or at least new thoughts; other times I regurgitate explanations that I’ve given many times before, in the hopes that I can make useful connections.

      I assume that Greenstone etc. think they’re presenting settled, well-understood ideas in economics and applying them to important new problems.

      You and I disagree with Greenstone etc.; we think the problem of valuing life is much more complicated than presented by those authors, and so we’re inclined to see such papers as glib, missing the key points, and potentially misleading to policymakers. That’s our perspective. But I’m guessing that the authors of those articles don’t even recognize that our perspective exists! They’re ensconced within mainstream economics, and, to them, our objections are just, oh, I dunno, uninformed something-or-another. To them, their statements about the monetary value of life are roughly on a par with 2 + 2 = 4, or the theory of gravity. They can ignore us flat-earthers, just like they ignore us when we say that “identification strategy + statistical significance = discovery” or when we say that just because something is published in PNAS or comes from Harvard, you can’t believe it, etc.

      That’s what’s so frustrating to us: it’s not that they disagree with our objections, it’s that our objections don’t even register with them. From their perspective, we’re annoying because we don’t accept that 2 + 2 = 4, or the equivalent.

      • Martha (Smith) says:

        “But I’m guessing that the authors of those articles don’t even recognize that our perspective exists! They’re ensconced within mainstream economics, and, to them, our objections are just, oh, I dunno, uninformed something-or-another. To them, their statements about the monetary value of life are roughly on a par with 2 + 2 = 4, or the theory of gravity. They can ignore us flat-earthers, just like they ignore us when we say that “identification strategy + statistical significance = discovery” or when we say that just because something is published in PNAS or comes from Harvard, you can’t believe it, etc.

        That’s what’s so frustrating to us: it’s not that they disagree with our objections, it’s that our objections don’t even register with them. From their perspective, we’re annoying because we don’t accept that 2 + 2 = 4, or the equivalent.”

        My description of this type of thing is “They’re clueless that they’re clueless”. Maybe it deserves an acronym: “TCTTC” — or just “CTTC”?

      • Dale Lehman says:

        +100%
        I was trained as a traditional economist and find this standard economic view deeply troubling. You are absolutely correct that many economists do not even realize that there are other points of view or that their perspective needs to be defended at all (I am generalizing here, as there are many exceptions, such as Peter Dorman above). And, for such smart people, their views are incredibly shallow. They equate the fact that whatever we decide (regarding how much/long to lock down) implicitly involves a tradeoff between lives and other values with the cost-benefit approach (such as Greenstone’s). Implicit and explicit tradeoffs are not the same thing, and if they had any knowledge of or respect for psychology, sociology, or political science they might understand that. But I can tell you that many economists are trained to believe they are superior to other social sciences, and this allows them to both disregard those fields and to claim expertise well beyond what they should.

        • Specialization has its drawbacks within and across disciplines. To complicate the matter, the argument culture prevails to an egregious degree, resulting in tautological argumentation that is neither nuanced nor interesting.

          • Richard Posner has suggested that we need more generalists. Not sure that I would characterize the need that way. I gather that Posner’s suggestion rests on Isaiah Berlin’s “The hedgehog knows one big thing and the fox knows many things.” My question is what is meant by “thing,” really. So much information is noise, IMO.

            • Steve says:

              Specialization doesn’t have to have these drawbacks. It’s just that professors and academics can’t be sued for talking about things they don’t know about. Lawyers and physicians can. Lawyers and doctors give referrals, which provides economic benefits as well. We just have to figure out how to give academics skin in the game, so there is a price for their negligence.

              • Andrew says:

                Steve:

                Huh? Dr. Oz, Richard Epstein, and Cass Sunstein are doctors and lawyers, and nothing seems to be stopping them from making ridiculous public pronouncements. David Boies is a lawyer and that didn’t stop him from doing all sorts of things in the Theranos case. Maybe you have to rethink the idea that doctors and lawyers have skin in the game. Or, to put it another way, if they have skin in the game, “the game” is not truth or the public interest or in a reputation for truth-telling; “the game” is fame, book contracts, TV appearances, and consultation with clients who appreciate the idea of having a lawyer who is not constrained by ethics.

              • Steve says:

                Andrew:

                I am talking about when lawyers and physicians give advice to clients not when they act as celebrities speaking in public. That is the problem. Bad advice to a client results in litigation, but bad advice to the public has zero cost.

        • Martha (Smith) says:

          Dale said, “But I can tell you that many economists are trained to believe they are superior to other social sciences, and this allows them to both disregard those fields and to claim expertise well beyond what they should.”

          Yes, I sensed something like this as far back as when I was an undergraduate. I did take ECON 101 and it seemed like the easiest course I took in college (although there were a few things that were somewhat interesting). But then toward senior year, it seemed like the economics profs were “courting” the math majors who were above average (but not at the top of the class) to go into economics. That image of economists has pretty much stuck with me: They know more math than most people (including most other social scientists), but never seemed to assimilate what to me is a fundamental concept of mathematics: the difference between conjecture and proof.

          • gec says:

            > They know more math than most people (including most other social scientists), but never seemed to assimilate what to me is a fundamental concept of mathematics: the difference between conjecture and proof.

            I’ve had similar experiences, in the sense that I think many economists are enamored with using mathematical models (“conjectures”) but are less interested in checking whether those models describe anything in reality (“proof”; obviously I’m using these words metaphorically!).

            It reminds me of the “magic” that Stigler ascribed to least squares—you seem to get so much for so little. In this case, having mathematical models gets you the ability to make rigorous and precise statements/predictions, and all for the trivial price of labeling the components of the models after things that people care about (like money). But the real “hidden cost” (oh no econ pun!) for this ability is that you have to check whether those components actually do a good job describing the things they’re named after. Most economists just have little interest in paying that additional cost. (And why should they when it seems that no one makes them?)

        • Baruch says:

          Dale, could you elaborate on this point a bit? I mean, I am an experimental psychologist and the WTP approach to determining the worth of a policy really turns my stomach, but when I speak with economists (who actually need to make choices between policies involving the government’s limited budget) they say things like you need a metric to decide where and what to fund, and when people are *really* making life-choices they don’t value their life as worth more than 1.5 million dollars (this maxes out at 35 or something).
          Is there a substantive psychological (or political science) objection to the method (not necessarily this paper, which I didn’t read)? What can we offer in return?

          • Martha (Smith) says:

            Baruch said, “they say things like you need a metric to decide where and what to fund”

            Well — it is *convenient* to use a metric. But it’s not realistic to use just any old metric. There are lots of choices. At the very least, I would hope they would do their analysis with several metrics, to help keep in mind how the result depends on the choice of metric. And, typically, choice of metric involves choice of priorities — in other words, a lot of subjectivity. But so often they seem to treat whatever metric they choose as something “preordained” — but it’s not; it’s a choice.

          • Dale Lehman says:

            Many of the gut reactions to WTP as measuring the value of a statistical life really reflect a lack of understanding of what is being measured – that is what allows economists to keep feeling superior, since people don’t understand what is being measured. Assuming you understand that we are looking at either max WTP for a marginal reduction in the risk of dying or min WTA for a marginal increase in the risk of dying, then the objection should be that such a measure is inappropriately applied in different contexts. It is one thing for a worker to willingly accept higher risks of injury/death in exchange for higher pay and quite another thing for society to place such values on people exposed to an infectious disease.

            The next step, however, is to recognize that we make many social/political decisions that have similar contexts: how much to spend on eliminating railroad crossings (on roads), safety measures for Air Force pilots, etc. A lot of research by economists has focused on the widely varying values of statistical lives implicit in these social decisions (for example, we can save far more lives without spending more by eliminating railroad crossings and relaxing occupational safety standards). The psychological research appears to indicate that people evaluate such risks differently – i.e., the value of a statistical life varies immensely depending on qualitative aspects of the risks – e.g., how voluntary is the risk? how many people are exposed simultaneously? how much knowledge do they have of the risks? This raises what I believe to be the real decision issues: how should we incorporate such considerations? The economic approach is not all wrong – people may behave irrationally and it costs real lives. However, there is an inconsistency in economic theory here – economists want to rely on people’s revealed preference to establish their values, but when revealed preference shows that these qualitative aspects of risk matter, economists tend to dismiss them as “irrational.”

      • David Walker says:

        I may not be a representative sample, but as a journalist and editor who spends a lot of time talking to economists, a large number of them seem pretty well aware of the shortcomings of a just-the-maths approach.

        Certainly some economists are of the “They’re clueless that they’re clueless” type. But there’s also a substantial group of non-economists of the type “They’re clueless that many economists aren’t clueless”.

  3. yyw says:

    Extrapolating the monetary value of a small reduction in risk to the value of a life just doesn’t make sense. The former is a local derivative of a cost/risk tradeoff that may make sense under a certain range of societal conditions. You can’t assume that it will stay constant across a range of risk even for one person. That paper assumed a life value of $15 mil for a young person, which is more than the average one-year GDP of over 200 US citizens. Somehow I don’t see a government willing to spend over $15 mil to rescue a typical young person. Extrapolating that over millions of lives is even more absurd.

    • Willingness to pay and willingness to accept are two different things, actually. I think a relevant question is how much you would have to give a young person, so they could leave that money to their heirs, to let you feed them a cyanide pill. And I absolutely think that most young people wouldn’t accept $15M for their friends and family in order to eat a cyanide pill… This means, *to the young person*, their own life is worth more than $15M.

      That other people aren’t willing to pay $15M to save your life is not necessarily the relevant question, though it could be.

      • Dale Lehman says:

        Willingness to pay (WTP) and willingness to accept (WTA) are actually defined differently than most people think. In economics, WTP is really “maximum WTP” and WTA is really “minimum WTA” and there is a body of economic theory that bounds the difference between the two. The bounds have to do with the income elasticity of demand (how responsive demand is to changes in income). The basic idea is that a poor person doesn’t have much money, so their max WTP is low – but that same low income means their min WTA is also low. The difference “should not” be very large, unless the income elasticity of demand is very large. Economists used to do (perhaps still do) sophisticated surveys to establish values of things not normally traded on markets (clean air, endangered species, statistical lives, etc.) and when respondents reported huge differences between max WTP and min WTA, these responses were excluded and labelled as “protest” bids. Then, of course, the remaining responses confirmed the relatively close values for max WTP and min WTA.

        I haven’t followed these efforts over the past few years, so perhaps someone can update my information. I lost interest in this standard economic approach long ago, as it seemed incredibly short-sighted, shallow, and arrogant. I suspect this type of training, which allows one discipline to give the others short shrift, is not unique to economics.

        • When it comes to letting someone kill you in exchange for money, it seems totally natural for people to have a large gap between the two.

          While I think there is a moral case for valuing lives quite high, there is also a simple economic-theoretical case. A young person loses many future years, years in which the possibilities for new and better ways of doing things are large. New technologies, policies, new kinds of businesses, new medicines, etc. The option value of the future is high. On the other hand, young people are generally poor, having had only limited time to build wealth. It makes no sense to me to think these two quantities should be close together.

          • Dale Lehman says:

            Within the economic framework, you are making a mistake. The minimum (emphasize) WTA will still be low for a young person with little income. In theoretical terms, it really means the minimum. Recognize that this is the minimum WTA for a marginal increase in risk, not the minimum WTA for the loss of their actual life. Now, the things you are thinking about are absolutely relevant for policy as well as for estimating “values,” but not necessarily for estimating “economic values.” That is, the rules in economic theory impose restrictions on how value is determined, and these restrictions ensure that the difference between max WTP and min WTA is bounded as a function of the marginal utility of income.

            Not all economics is neoclassical economic theory. In a more encompassing view of economics, anything and everything could be included, but that is not the way the theory is practiced by most economists. In particular, the economic estimates that would hold up in a court or regulatory proceeding are much more limited than this broader view.

            I think you are trying to make economic theory say what you would like it to say – but it does not work that way. Along the same lines, the value of an African American life is less than the value of a White American life (on average, their expected lifetime earnings are lower) – if you stick to the rules of economic theory. Yet no economist advocates this, and no such difference appears in any published economic paper that I am aware of. What I believe is that economists circumscribe their analysis by some ethical principles. As they should. But they don’t make those ethical principles explicit, nor do they admit that these principles take precedence over their theories. This is how economists overstate what their analyses demonstrate, as well as how economists can stay within their silos rather than recognizing the relevance (dare I say, importance?) of ethics for economic analysis.

            • > Recognize that this is the minimum WTA for a marginal increase in risk, not the minimum WTA for the loss of their actual life.

              WTA for a marginal increase is one thing. I was talking about making them take an actual cyanide pill.

              If the theory doesn’t match reality, evidently the theory is wrong. I can’t imagine a typical 18 year old high school graduate with $2000 in their bank account would accept anything close to $2000 to take a cyanide pill.

              • confused says:

                I think what people actually do is going to have a lot to do with perception of risk, not necessarily actual risk. I mean, a 1 in 2,000 risk of dying is 10 times worse than a 1 in 20,000 risk, but I’m not sure people would ACT as if it is.

                Sure, hardly anyone young would agree to die to get their survivors money, but many young people take significant risks just for enjoyment with no other benefit. I think a lot of people (especially males) in that age category would treat, say, a 0.2% risk of death as basically negligible if there was significant reward – even though it’s WAY higher than their otherwise expected risk of dying that year.

              • confused: yes and this is partly why we shouldn’t necessarily look for a “positive” answer to the question “what do people actually do” when looking at the societal question “what should we do en mass through policy?”

                To me, policy should treat things as somewhat ideally rational. The cost to society of losing a young person is all their lifetime income (the value of stuff they would produce) plus all the public goods and personal goods they would produce for society and their family and friends, minus all the externalities/pollution/public “bads” they would produce (such as carbon emissions and river pollution and crimes they’d commit and so forth).

                Willingness to pay and willingness to accept are not directly the appropriate measures I believe (and I think Dale agrees here).

                When asking what policy we should choose, if we are going to use a maximize-some-utility-function methodology, the methodology shouldn’t treat poor people as less worthwhile simply because they can’t pay, or because they would accept a lower amount to be given a cyanide pill. To the extent that we think in terms of margins, these are societal margins (a few people out of billions), not individual margins (a few percent increase in risk of death for a given person). The better solution is probably to *average across people first* and then construct a utility based on each person treated equal to the average. In other words, a “justice is blind” kind of solution.

        • mks@math.utexas.edu says:

          Dale said, “when respondents reported huge differences between max WTP and min WTA, these responses were excluded and labelled as “protest” bids.”

          Throwing out data that doesn’t fit your pet theory! Definitely a no-no for intellectual honesty.

      • yyw says:

        Not sure I agree that WTA is more relevant here. For a substantial portion of the population, WTA is probably infinitely large.

    • Zhou Fang says:

      You aren’t accounting for economic growth. A young person could contribute $15 mil in GDP alone on the basis of (admittedly optimistic) choices of long-term growth rate, life expectancy, and discount rate.

      • yyw says:

        That requires some very optimistic assumptions. Not an economist, but I imagine the discount rate will be closely coupled with the growth rate. We also have to consider living costs over the expected life span.
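This exchange is easy to make concrete with a present-value sketch of a lifetime output stream. The starting output, horizon, growth rates, and discount rates below are all assumptions chosen for illustration, not estimates from any paper:

```python
def pv_lifetime_output(annual_output, years, growth, discount):
    """Present value of an annual output stream that grows at `growth`
    and is discounted back to today at `discount`, over `years` years."""
    return sum(
        annual_output * (1 + growth) ** t / (1 + discount) ** t
        for t in range(years)
    )

# Hypothetical: ~$65k starting annual output, 60 remaining years.
neutral = pv_lifetime_output(65_000, 60, growth=0.03, discount=0.03)   # $3.9M
cautious = pv_lifetime_output(65_000, 60, growth=0.02, discount=0.05)  # ~$1.9M
rosy = pv_lifetime_output(65_000, 60, growth=0.06, discount=0.02)      # ~$15M
```

When the discount rate matches or exceeds the growth rate, the total stays in the low millions; the figure only approaches $15M if growth is assumed to outrun the discount rate for six straight decades, which is exactly the coupling yyw is pointing at.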

    • malcolmkass says:

      Well, you are comparing stock vs. flow variables. And thinking in averages here probably isn’t the best idea.

  4. Zhou Fang says:

    I think my rebuttal to Fowler is:

    https://statmodeling.stat.columbia.edu/2020/04/13/america-is-used-to-blaming-individuals-for-systemic-problems-lets-try-to-avoid-that-this-time/

    I think the article is too much about individual social scientists doing bad things, and too little about the systematic amplification of those social scientists by the overall infrastructure of the media. Careful and responsible social scientists no doubt exist. But how will they appear on NPR?

    • Joshua says:

      Where does the responsibility for the systematic amplification and overall infrastructure of the media lie?

      I ask that because if you don’t identify the responsibility, I don’t see how it can get fixed.

      Maybe I’m too uncharitable, but to me there’s an “old man yelling at clouds” element to the complaints both about “the social scientists” and those about “the media.”

      • Ben says:

        The thing that bugs me specifically about yelling at social scientists for stepping out of bounds, or generally unqualified people having opinions about the whole covid-19 thing is:

        1. I like people engaging with the problem even if they are wrong
        2. I don’t think that being quiet while the experts figure things out will work. Who are the experts anyway?

        I generally think the more people scraping covid19 data, making plots, and coming to conclusions and generally just screwing around, the better. It’s more engagement with the problem. Lots of those people will learn. Lots of people watching those people will learn.

        If government officials and random reporters are getting conned by random medium posts, as someone who has been conned a time or two, I sympathize, but the engagement with the problems seems more useful in the long term.

        The idea that if we sorta clear out the lane for Good Researchers this will benefit anyone seems absurd. Like, what’s the theory? That politicians read those papers and change their opinions (assuming the conclusions in those papers are all perfectly calibrated)? Doubtful.

        That’s my rant against complaining about social scientists, at least.

        • Steve says:

          No one has to be quiet. And, no one is saying that social scientists can’t speak. I think Andrew is just saying one has to acknowledge one’s lack of expertise. As an attorney, if someone asks me a tax question, even one I think has an easy answer, I am professionally obligated to tell the client that I don’t know a damn thing about tax law. It’s that simple. You want to give your opinion, fine. Let people know that you don’t know what you are talking about first.

        • Mendel says:

          > “I don’t think that being quiet while the experts figure things out will work. Who are the experts anyway?”

          You think drowning out the real experts works better?

          • Ben says:

            > You think drowning out the real experts works better?

            Hmm, good point.

            But a couple counterpoints,

            1. If we accept the premise of the article, experts probably won’t be publishing anything anytime soon anyway, so there’s nothing to drown out on the research side.

            2. On the decision side, it seems like government people would know where to look in an emergency, or at least have a number to call to figure that out. Likewise it seems like experts would know their chains of command and how to get heard (if possible).

    • Ben says:

      > is too much about individual social scientists doing bad things

      Yeah this.

      I think the title of this article and the content (or at least the summary) are wildly different as well:

      > Curing Coronavirus Isn’t a Job for Social Scientists

      I disagree with this. Presumably there is a role for social science somewhere in this process, be it in decision making, or political maneuvering, or marketing, or whatever.

      > The rush to publish results in this pandemic isn’t doing the public any favors. In the end, it might hurt the efforts of researchers, too.

      That’s a fair point though.

      It’s the difference between application and research. Seems like social sciences would be important for thinking about how societies react to something as big as covid-19. How much will we learn from it? Eh, that’s another question.

  5. Ben says:

    > I also have a problem with this sort of crude dollars-and-cents reasoning because it ignores how things are implemented. A pause in the economy, if done haphazardly and with uncertainty, can wreak economic havoc. A pause that’s well coordinated could cause much less disruption.

    Reminded me of this: http://observationalepidemiology.blogspot.com/2020/04/its-not-about-economy-versus-public.html

    Like, we shut down to save the economy, not to hurt it.

    > The relevant social science subdiscipline here is not cost-benefit analysis, it’s whatever it is that helps you understand how different levels of government and different private-sector entities can coordinate.

    > The problems are solved by cooperation, not by turning on the money spigot.

    Could you expand on what you mean about the money spigot and cooperation? Maybe a cathedral/bazaar part 3 :D?

    • jim says:

      “Like, we shut down to save the economy, not to hurt it.”

      Yes but the expectation was that some knowledge or plan to continue could be developed over X weeks, not over months and months. As we head toward three months of economic inactivity, absolutely no progress has been made on how to manage the transmission of the virus, with perhaps the sole exception of the public forcing the experts to eat their words about wearing masks.

      Here the Gov has extended stay-home for another month and has no fixed plan for reopening, but may reopen if the Model Woo makes his butt hairs tingle and his mood ring turns yellow when Virgo is rising and Jesus’ likeness emerges in his mashed potatoes.

      So far the pandemic is an unmitigated disaster for American science.

  6. anon poster says:

    I see a trend among many economists, technologists, and financial people of publishing poorly thought out models and blog posts, often heavily biased. These either show that the writers don’t know what they don’t know, or they are claiming ideas from the medical field as their own.

    One example: GMU economists have heavily promoted variolation, i.e., giving people small doses of a virus to cause a mild illness that results in immunity. Somehow they seem to neglect the problem of how one chooses the dosing. Choosing the dosing is essentially a simplified version of what is done when a real vaccine is developed (and yes, a lot more is done to make a vaccine safer and more effective than variolation).

  7. Ea says:

    I think this op-ed is dangerously wrong. Social scientists have a lot to add to models of disease spread. See, for example, https://johnhcochrane.blogspot.com/2020/05/an-sir-model-with-behavior.html?m=1

    • jim says:

      The chance that any modelling by anyone is going to do anything useful is absolute zero.

    • Dale Lehman says:

      Can you explain exactly how your linked model adds a lot to our understanding? It is one model out of hundreds that appear virtually daily. It is from a reputable source, as are most of the others. I don’t have the time to analyze each model and then try to see how they differ from each other and whether the differences are important or not. You seem to think it is self-evident that this model adds to our understanding, but what do you base that upon? I am asking seriously, as I am having difficulty figuring out how to triage my examination of the daily release of new and updated models.

    • Ea says:

      jim + Brent, I think that disease spread models can and should inform the policy actions that we take to limit disease spread. This statement is almost a truism.

      Dale, Disease spread is largely determined by human behavior. This model, and many other recently released models, attempt to provide better predictions through accounting for human behavior.
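
      To make the idea concrete (this is a minimal sketch of the general mechanism, not a reproduction of the linked model; the functional form and every parameter value here are illustrative assumptions):

      ```python
      # Discrete-time SIR with a simple behavioral feedback: the effective
      # contact rate falls as prevalence rises, i.e. people distance more
      # when more people are visibly sick. Setting k=0 recovers plain SIR.

      def simulate_sir(beta0=0.3, gamma=0.1, k=50.0, days=300, i0=0.001):
          """Return (peak prevalence, final attack rate) after `days` steps.

          beta0: baseline transmission rate (assumed)
          gamma: recovery rate (assumed)
          k:     strength of the behavioral response (assumed; 0 = no response)
          """
          s, i, r = 1.0 - i0, i0, 0.0
          peak = i
          for _ in range(days):
              beta = beta0 / (1.0 + k * i)  # behavior damps contacts as i grows
              new_inf = beta * s * i        # new infections this step
              new_rec = gamma * i           # recoveries this step
              s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
              peak = max(peak, i)
          return peak, r

      # Behavioral response vs. the same epidemic without one:
      peak_behav, attack_behav = simulate_sir(k=50.0)
      peak_plain, attack_plain = simulate_sir(k=0.0)
      ```

      The point such models try to capture is visible even in this toy version: with the feedback turned on, the epidemic peak is flatter and the attack rate lower than the fixed-behavior SIR would predict, which is exactly why ignoring the behavioral response biases the baseline projections.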

      • Dale Lehman says:

        Of course. That is the intent of such models. But I’ve seen at least a dozen different models like this – all done by reputable social scientists – all with different assumptions and different results. That is no different from other policy issues, except that the time frame is weeks rather than the usual years (decades in some cases, such as the effects of minimum wages) for pre-publication, peer review, post-publication review, etc. We are missing any kind of discipline for when new models are needed, what criteria should be used for their release and documentation, and guidance for how readers should use their results. What I’d like to see is a clear section right up front about how their model differs from, and improves upon, prior attempts. Then all the code and data should be released, along with the paper. The latter is being done (not always, but usually), but few of these attempts make it clear why I should pay attention to this paper rather than all the others. Just stating that they are accounting for human behavior is not enough. I think this concern speaks directly to the point of this post.

Leave a Reply