“The Evidence and Tradeoffs for a ‘Stay-at-Home’ Pandemic Response: A multidisciplinary review examining the medical, psychological, economic and political impact of ‘Stay-at-Home’ implementation in America”

Will Marble writes:

I’m a Ph.D. student in political science at Stanford. Together with colleagues from the Stanford medical school, law school, and elsewhere, I recently completed a white paper evaluating the evidence for, and the tradeoffs involved in, shelter-in-place policies. To our knowledge, our paper contains the broadest review of the relevant COVID-19 research to date. It summarizes research from a number of fields, including epidemiology, economics, and political science.

I just have a few comments:

1. You write, “By summer, we hope to identify effective medical treatments. By next year, we hope to find a safe and effective vaccine.” Do you have evidence for these claims? References?

2. I think the dollar-value-of-life thing (p. 7) is bogus. Not because I think it’s incorrect or immoral to assign a dollar value to a life-year—I agree that this is just a formalization of an inevitable tradeoff of risks—but because this tradeoff depends a lot on the risk and the scale of the loss of life. The dollar value appropriate to 10 lives cannot simply be multiplied by 1 million to get to the dollar value of 10 million lives. The numbers just don’t work out. So I don’t think those calculations on the top of page 7 make any sense.

3. I don’t quite follow the argument on the bottom of page 7 comparing to the 1918 flu. The economic depression of 1918 was not as severe as what we’ve already seen here, right? If things are already worse now economically, despite the death toll still being low, then, sure, that suggests things could get a lot worse, but it also suggests to me that the 1918 economic experience isn’t so relevant to what’s happening today. The level of economic interdependence is so much higher today.

4. On page 10, I think “causes” should be “is associated with”.

5. I didn’t notice any discussion of economic risks that can’t be priced in dollars and cents, for example a breakdown of the food supply chain, essential workers not wanting to go to work, etc. Or, conversely, the question of what would happen if people start going back to work everywhere: will businesses still fail because nobody wants to go to the store to buy anything? Will people stay at home anyway? Would there be a worst-of-both-worlds scenario in which lots of careful people stay home, but “superspreaders” go out and cause disproportionate damage?

In general I think the economic analysis is valuable but I feel like some gaps need to be filled in, as ultimately the problem is not a lack of “the economy” or even “economic output,” so much as potential problems with key sectors of the economy failing, along with economic insecurity. I’ve been thinking about this regarding government policy, that just throwing money at people doesn’t resolve these problems. Without at least some discussion of this, I think your report is missing something.

Just by analogy, it’s as if you wrote an article about fighting a war and you talked all about casualty rates and budgets and public opinion, and a very small amount about military strategy, but nothing about logistics. A war economy is not just GDP, it’s also what gets built, how resources are allocated, and how risks and burdens are shared. Just as a country can’t win a war by just throwing money at the army, I’m skeptical of an economic strategy against the epidemic that’s just a generic stimulus.

P.S. This article by Ed Yong seems reasonable. Too bad it didn’t appear 3 months ago . . . As we’ve discussed, it’s taken us too long to think about societal rather than individual solutions.

133 thoughts on ““The Evidence and Tradeoffs for a ‘Stay-at-Home’ Pandemic Response: A multidisciplinary review examining the medical, psychological, economic and political impact of ‘Stay-at-Home’ implementation in America””

  1. “The dollar value appropriate to 10 lives cannot simply be multiplied by 1 million to get to the dollar value of 10 million lives.”

    How come? Is it because it’s an undercount? An overcount? Or are we not sure of the direction?

    But if we are sure of the direction of bias, can we not say it is a floor/ceiling value?

    • Gabbyd:

      First, we should really talk about years of life rather than lives. Let’s suppose a year of life is worth, say, $200,000, in that a person would pay $20,000, but no more, for a medical treatment that would increase survival rate by 1% with the alternatives being dying now or living for 10 more years. (That’s $20,000 for 0.1 expected years of life, or $200,000 per year of life.)

      Next, we have to recognize that all of this must be averaged over some population—when it comes to policy, it’s not a question of me paying to prolong my own life; rather, it’s all of us paying to prolong everyone’s lives (in expectation). If you do it at an individual level, the “value of life” has no stable value: lots of people don’t have $20,000 in the first place, and, at the other extreme, a rich person could easily pay millions to reduce avoidable risks. (Yes, that rich person might still ride in a car and take risks in other ways, but that so-called revealed preference wouldn’t stop him or her from paying millions for risk reduction in other settings.)

      Anyway, let’s go with the $200,000 per year of life as a societal average: how much we would pay in taxes, health insurance, etc., to reduce everyone’s risks by that amount. This works fine as a yardstick for relatively small decisions. For example, suppose you want to assign a social value to a cure for a rare disease that kills 1000 people a year, with these people otherwise having an average future lifespan of 20 years. Then you can multiply $200,000 x 1000 x 20 and say that this cure is worth 4 billion dollars a year.

      But these numbers break down when they get too large, because at some point assets are no longer liquid. Suppose we want to save an expected 20 million years of life by spending $4 trillion. How exactly do we do it? You could translate this into goods and services: $4 trillion is approximately $12,000 per American (unless I’m doing a Brian-Williams-style division mistake here), so this would roughly be equivalent to every person giving up, oh, I dunno, a car and a nice vacation, and if we do things right we could spread the loss so that people with 1 car only have to give up half a car while people with 3 cars have to give up 2 of their cars, etc . . . but it doesn’t really work that way. We can’t just convert goods and services to cash and convert these to savings. That’s kind of what I was getting at in the last two paragraphs of the above post.
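
      A minimal sketch of the arithmetic above (every input is this comment’s illustrative assumption; the 330 million population figure is mine):

      ```python
      # Back-of-envelope from the comment above; all inputs are illustrative assumptions.
      wtp = 20_000                # $ a person would pay for the treatment
      survival_gain = 0.01        # 1% increase in survival probability
      years_if_survive = 10       # extra years lived if the treatment works
      per_life_year = wtp / (survival_gain * years_if_survive)
      print(f"Implied value per life-year: ${per_life_year:,.0f}")      # $200,000

      # Rare-disease cure: 1,000 deaths/year averted, 20 future years each.
      print(f"Cure value: ${per_life_year * 1_000 * 20:,.0f}/year")     # $4 billion

      # The case where the numbers break down: 20 million life-years at the same rate.
      total = per_life_year * 20_000_000                                # $4 trillion
      print(f"Total: ${total:,.0f}; per American: ${total / 330_000_000:,.0f}")
      ```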

      • “We can’t just convert goods and services to cash and convert these to savings.” But we have! We have diminished the savings and earnings of millions of people by telling them to stay home, and we move this cost around in time and across people through various rebate policies. It’s exactly what we’re doing.

        I do agree with your intuition about extrapolating the value of QALYs, though, but I think it’s a problem of human preferences, not economics. We will willingly allocate a lot more resources to rescue the little girl in the well than we will, per person, to stop malaria mortality in Africa. Basing the value of a QALY on the personal introspection of what individuals willingly spend on themselves is an exercise that allows a calibration of sorts (though, as both you and Dale mention, there are lots of issues in this calibration, which can be no better than rough).

        But humans are not Kantian saints. As the numbers rise and the geographic spread of the victims moves farther and farther from personal knowledge, the willingness to spend falls sharply. This is not necessarily an admirable human trait, but it has been observed time and again in many contexts. You might say that a function of government is to nudge our better angels to combat this human failing, but that has nothing to do with the economics of willingness to pay, which underlies the cost-benefit calculus of QALYs.

      • The paper used an age-adjusted value, which means it was ultimately based on years of life. Also, I agree that there is potentially nonlinear scaling, but I don’t think that’s a problem for the linear extrapolation, because I think as the numbers grow the synergistic social costs mount. For example, the cost of 60,000 Vietnam War deaths was very high in terms of a decade of social disruption.

        Also, the alternative to a decent attempt at statistics is not no statistics, it’s scrappy statistics… I’d take this estimate over some innumerate politician’s gut instinct in a heartbeat.

        Finally, the appropriate measure isn’t willingness to pay. That would get us that homeless people are worth the $3.19 that they have in their pocket. The appropriate metric is willingness to accept. How much do we have to pay “the heirs” of a person to have them agree to accept a risk of death? Values will vary, of course, but the average is likely to be closer to your $200k per expected life-year than to $3.19.

        At $200k per expected life-year, the cost of millions dying is tens of trillions. Keeping things closed makes good sense, but it doesn’t make sense to squander the opportunity to mitigate. The real outrage here isn’t that we are closing everything… it’s that we are closing everything and doing diddly-squat with the resulting time we bought.

      • Boy, $200K seems high. The UK’s NICE (National Institute for Health and Care Excellence—a slightly Orwellian name for what Sarah Palin might call death panels) appears to use a value for a QALY in the ballpark of £20K to £30K. My knowledge on this point may be out of date and the current value may be somewhat higher—but I think these were the values used in 2018.

        I think Andrew’s underlying point is that $200K is too high to use consistently. If aliens appeared tomorrow and offered to sell the U.S. 350 million doses of a treatment that would add 1 QALY to a person’s life, the U.S. could not afford to pay $200K per capita—that would be three times per capita GDP. On the other hand paying $200K per patient for a cancer treatment for a handful of patients might be a sound investment if it moves the industry down the product-experience curve.

        Bob76
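
        A quick check of that thought experiment, under the assumed round numbers above (plus a rough $65K per-capita GDP, my assumption):

        ```python
        # The alien-treatment thought experiment; all numbers are assumed round figures.
        doses = 350_000_000          # one dose (one QALY) per American
        price_per_qaly = 200_000     # $ per dose
        gdp_per_capita = 65_000      # rough US per-capita GDP

        print(f"Total bill: ${doses * price_per_qaly / 1e12:.0f} trillion")          # $70 trillion
        print(f"Per person: {price_per_qaly / gdp_per_capita:.1f}x per-capita GDP")  # ~3x
        ```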

        • Willingness to pay is often very different from willingness to accept. A person with $3 in their pocket can’t be willing to pay more than $3, but they’re not going to let you put a bullet in their head if you first pay their friend $3.

          GDP per capita is about $60k in the US. A, say, 40-year-old person is, on average, creating $60k of value for the world through their *wage-based labor* alone, so £20-30k per QALY would be insanely low. But that’s just the *measured* value. 9-year-old children don’t pay their parents for the labor of taking care of them… but it’s not of zero value. Your elderly mother doesn’t pay you to call her and check in on how she’s doing… but it’s valuable. Your college buddy doesn’t pay you to come out and play softball with him… but he loses that when you die.

          So well over $60k/life year lost is the right number in the US.

        • But it seems to me that the net economic loss isn’t simply the amount of the income; it is the difference between their pay and the amount that they are spending (or maybe the amount that they are spending on the basics that are typically required to keep themselves breathing). The average person who is earning $60k is also spending at least $50k, so it seems arguable that the ECONOMIC value of such a life is less than $10k per year.

      • I agree with your conclusion, but I think there may be a simpler way to frame it: the relationship starts out linear, but as the numbers increase greatly the linearity is lost and the correlations no longer apply. Is this simplistic? Not in statistics, actually not in anything (retired).

    • There are many objections, so I think the answer is that we are not sure of the direction.

      1. We might adopt the “a million lives is a statistic” argument of Stalin, J and say that there’s a decline in the value of human life as losses mount.
      2. We might in contrast adopt ideas of community disruption, that suggest there’s a threshold for losses that would cause collapse, especially in terms of loss of key capabilities. So increasing death tolls would actually have an escalating cost, because you have greater risk of losing irreplaceable personnel.
      3. There’s likely a constant floor effect independent of lives lost that is basically the inherent cost of the government adopting the principle of “it’s okay for you to die”.

      TBH I think Andrew is splitting hairs somewhat with this critique though. At some point assumptions have to be made in this analysis, and the authors have made their decision, others might make different ones.

      • Zhou:

        As I wrote above, I agree that there is a tradeoff between dollars and lives. I just don’t buy the argument that we can take some off-the-shelf “value of life” number and multiply it by a million or whatever, and get a number that has any meaning.

        • If we accept that “there is a tradeoff between dollars and lives” then there is some dollar value D(N) for N lives. This means that there is a value D(N)/N which, when multiplied by the lives lost, gives the proper value D(N).

          So all you’re saying is that the “off the shelf” number isn’t the right number it should be a more carefully thought out number. I’m fine with that, but let’s not apparently pooh-pooh the basic calculation.

          If we got even within a factor of 5 of “the right number” it’d be better than ANYTHING else out there. The knee jerk innumerate responses are the big danger, because they can easily be off by a factor of 1000. Compared to that being off by a factor of 2 in the dollars/life-year calculation is small potatoes.

        • Well, the implicit assumption is that a linear approximation is appropriate within the range of values we are considering. Looking at the values they are talking about, they link to a PM2.5 policy analysis where they were looking at mortality changes of about 2000 per year for a period of 15 years. Extrapolating that out to hundreds of thousands of deaths in one year is a strong assumption, but not quite as bad as 10 lives -> 10 million.

        • Daniel, Zhou:

          My problem is not with the linear scaling of lives so much as with the linear scaling of money. Spending $4 billion on a rare disease is one thing; “spending $4 trillion” in the form of wrecking the economy is another thing entirely.

        • No, I don’t agree with this. The whole point of dollars is that they are the closest thing we have to a linear scale for utility. Spending $1M is basically a million times as bad as spending $1, by definition.

          What I think it *is* possible to say is that we aren’t calculating the actual dollar value of “all this real stuff we just did” when we assign $4T to it.

          Pricing things “correctly” is important. But once they’re priced correctly, the dollars are linear.

        • “Spending $1M is basically a million times as bad as spending $1, by definition.”

          No.

          You have to run each of them through a utility function (or something similar) to make a statement about the relative badness of spending $1M versus $1. This is Andrew’s point.

        • Terry:

          Almost. I’m not saying anything about a utility function. Rather, I’m saying that “spending $1M” is not, in and of itself, a “thing.” It depends how the $1M is spent. The relevance here is that “spending $4 trillion to fight coronavirus” is just a different set of actions than would be done by “spending $4 billion to fight a rare disease,” a thousand times. They’re different things.

          To put it another way, ultimately the economy is about goods and services, not about money. Money is a measurement, but its meaning depends on how it’s spent.

        • For a single person, yes. For an entire society, no, at least not until you get upwards of several tens of percent of global wealth.

          Global wealth is in the range of $360 Trillion so to the globe $1 vs $1M is an epsilon difference and a linear approximation is fine.

          You could go into the 30T range before you needed to deal with declining marginal utility.

        • Andrew said,
          “I’m not saying anything about a utility function. Rather, I’m saying that “spending $1M” is not, in and of itself, a “thing.” It depends how the $1M is spent. The relevance here is that “spending $4 trillion to fight coronavirus” is just a different set of actions than would be done by “spending $4 billion to fight a rare disease,” a thousand times. They’re different things.

          To put it another way, ultimately the economy is about goods and services, not about money. Money is a measurement, but its meaning depends on how it’s spent.”

          Well put.

        • Just riffing on your off-the-cuff $4 trillion thing…

          We (meaning here in the USA) may have already incurred $4 trillion in economic damage from the shutdown to date. If things stay shut down another couple of months, the losses will certainly be far beyond that.

          And if, as some supposed experts point out, we are meant to keep it up for two years or more until a vaccine arrives, I doubt there’s any meaningful number we could extrapolate to describe that scenario.

          So let’s suppose someone were to impose a decision rule that once the expected future value of damage to the USA economy reaches $4 trillion then everyone goes back to work, kids go back to school and only the very, very highest risk things (like putting 50,000 people in a sports stadium) remain off limits.

          If that dooms 20,000 more people to dying from COVID-19, then we’d be at $200 million per life lost. If one were willing to just assume numbers, one could even construct some sort of utility function. On one side, an ever-increasing amount of long-term economic damage from prolonging the shutdown (surely that is monotonically increasing over any time frame we’d care to imagine).
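
          The implied exchange rate of that stopping rule, using the assumed round numbers above:

          ```python
          damage_cap = 4e12       # assumed economic-damage threshold before reopening, $
          extra_deaths = 20_000   # assumed additional deaths from reopening
          print(f"Implied cost per life lost: ${damage_cap / extra_deaths:,.0f}")  # $200,000,000
          ```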

          But the function of lives saved may not have such a simplistic shape, it seems to me. Stopping the lockdown next week would cost more lives than stopping it next month. But at some point you switch from lowering the area under the curve for *this* pandemic cycle to forestalling the (assumed) loss of life from a second or third wave of infections.

        • The paper cites a Goldman Sachs estimate of a 24% decrease in GDP for the second quarter, so the cost is $1.2 trillion, on the assumption that services-consumption effects fade out at the rate of 10% per month and the US doesn’t return to March levels of services-sector activity until December. If you include the recovery expected later, the number is smaller.

        • This is why we need people to do the analysis so we can point out the assumptions that are flawed.

          For example, the idea was that we’d only have 20,000 deaths from turning everything back on. There is no question that with uncontrolled spread in the U.S. we could blow through 150M infections before December. Even if you take 60% asymptomatic, as per the USS Roosevelt as of today, the other 40% would be symptomatic cases, and the death rate there is ~4% or so. So we are talking 150M * .4 * .04 = 2.4M deaths, or 2 orders of magnitude higher than your ridiculous 20k assumption.

          The situation is terrible enough as it is; let’s not make a terrible decision by underestimating deaths by a factor of 100.
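
          The worst-case arithmetic above, spelled out (each input is an assumption stated in the comment):

          ```python
          infections = 150_000_000    # assumed uncontrolled spread by December
          symptomatic = 0.40          # 60% asymptomatic, per the USS Roosevelt figure cited
          death_rate = 0.04           # assumed fatality rate among symptomatic cases
          print(f"{infections * symptomatic * death_rate:,.0f} deaths")  # 2,400,000
          ```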

        • Daniel,

          You do a thing over and over that makes it difficult for some of us to interact with you reasonably. Things you don’t agree with you term “assumptions that are flawed” or “ridiculous”.

          But then you posit extremely dire, apocalyptic scenarios based on assumptions of your own that are just as made-up as Andrew’s “$4 trillion” or my “20,000”.

          You may find 20,000 deaths unbelievable. I certainly make no claim that it’s a better number than 10,000 or 50,000 would be for purposes of that discussion. Likewise, the $4 trillion thing might be $1.2 trillion or $8 trillion. And different people might want to use other numbers than $200 million for the “cost” of a life lost.

          That’s fine with me, I was musing over the PROCESS of creating utility functions to describe the tension between a longer shutdown doing more economic damage vs. a shorter shutdown resulting in more deaths. I really do wish you’d dial back your reflexive disparagement of any discussion of scenarios you find insufficiently pessimistic.

        • I just made the specific numbers up. They seem plausible to me. So far something like 50,000 people have died in the USA from the pandemic. If it looks like the death rate has declined to a much lower level so we start phasing out the shutdown, it seems reasonable to me that the plausible range of results is somewhat lower than what was seen in the first wave.

          If you don’t find that sort of range to be convincing, that’s up to you. But I feel it’s in the general ballpark of likely real-world scenarios.

          What I personally don’t find convincing are the pseudo “Drake Equation” type extrapolations using percentages of infected people in one setting multiplied by death rates estimated from some other context, multiplied by this that and the other to come up with wildly pessimistic (or optimistic for that matter) scenarios.

        • So far 50k have died *under the lockdown*; there is no real reason why the deaths after terminating the lockdown should be proportionate to that. Compare, say, Sweden to Finland, and you have mortality 10x higher under voluntary schemes, and growing. The proper estimation methodology would be to look at the counterfactual, for example comparing business-as-usual estimates for the death toll.

          FWIW I think 2.4M is a high estimate; I would pick about 1M. If we make a conservative estimate that we have about 2% current prevalence (which I suspect is publication-biased to be too high) and we go to 20% prevalence after the end of lockdown, then if the CFR stays constant we get to 500k easily.

        • Zhou:

          “FWIW I think 2.4M [deaths] is a high estimate, I would pick about 1M.”

          Assume a life is worth $200,000. Assume 1M deaths. $200,000 * 1M = $200 billion, which is peanuts. So we should do hardly anything? This startles me. Is there something wrong with my calculator? Did I drop an “illion” somewhere?

          (I’m not arguing with you. The entire “value of life” debate seems to end up with way too small a number judging by the rhetoric. Even using $1 million per life produces $1 trillion, which is still kind of peanuts, albeit rather large peanuts.)
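
          One reconciliation suggested by the earlier comments: the thread’s $200,000 figure was per life-year, not per life. A sketch, with the remaining life-years per death as an assumption:

          ```python
          deaths = 1_000_000
          per_life_year = 200_000   # the thread's value per life-year
          remaining_years = 10      # assumed average future life-years per death
          print(f"Valued per life:      ${deaths * per_life_year:,.0f}")                    # $200 billion
          print(f"Valued per life-year: ${deaths * per_life_year * remaining_years:,.0f}")  # $2 trillion
          ```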

        • “We (meaning here in the USA) may have already incurred $4 trillion in economic damage from the shutdown to date. If things stay shut down another couple of months, the losses will certainly be far beyond that.”

          Some useful benchmarks when talking about these things.

          1. US GDP is about $2 trillion per month.
          2. The total value of the US stock market is (or at least was) about $30 trillion.
          3. The US stock market dropped about 35% ($10 trillion or so) in reaction to the virus. It has bounced back since then so the drop is about 17% now ($5 trillion or so).

    • To be quite technical, this is not the problem per se. Framing the question in terms of a full statistical life is also potentially misleading.
      The values come from willingness to pay for marginal reductions in the risk of mortality. We calculate, say, that the average worker is willing to accept a $100 decrease in pay to be in a job with a 1 in 100,000 lower risk of dying this year. If we aggregate 100,000 workers and sum up their forgone pay, we find that collectively they forgo $10M to have one fewer expected death.

      The problem with jumping from this simple math to a scenario with hundreds of thousands of deaths is that the change in individual risk of mortality is MUCH greater than 1/100,000. Reducing the risk from 500/100,000 to 499/100,000 has a very different willingness to pay than, say, 3/100,000 to 2/100,000, because people are risk averse. Just taking the VSL and scaling up linearly implicitly assumes risk neutrality, and we have plenty of evidence that that’s not the case.

      (BTW, I’m preparing a working paper that proposes a way to correct for these changes in risk far from the average low-risk case)
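
      A sketch of the aggregation described above, with the linear-scaling step flagged (the worker numbers are the comment’s; the 200,000-death scenario is an assumed illustration):

      ```python
      # Standard VSL aggregation from marginal willingness to pay.
      wtp = 100                  # $ of pay forgone per worker
      delta_risk = 1 / 100_000   # reduction in annual mortality risk
      vsl = wtp / delta_risk
      print(f"VSL from marginal WTP: ${vsl:,.0f}")   # $10,000,000

      # Scaling this linearly to a mass-casualty scenario implicitly assumes the
      # same WTP per unit of risk at risk changes hundreds of times larger --
      # i.e., risk neutrality, which is the step the comment argues against.
      deaths = 200_000           # assumed scenario size
      print(f"Naive linear total: ${vsl * deaths:,.0f}")  # $2 trillion
      ```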

    • I get the impression that the paper is not so strong on broad economic issues… I would hope that the authors considered both upside and downside outcomes. Or is it more like press coverage, where an economic analysis is half reported and the ranges of outcomes, sensitivity analyses, and confidence considerations are ignored (or not understood)?

    • Pretty sure that Andrew is saying the marginal value to a single life is different than the marginal value to a million lives. We need to think about how the lives lost clump together depending on which uncertain scenario ends up actually materializing in order to assess the costs accurately. The comment below seems like it’s trying to get at this, with its mention of liquidity, though it also seems confused, with its mention of life-hours – which is understandable, since it’s a position I’ve never come across before, and articulating new ideas is hard. I like it a lot, or at least I’m arrogant enough to like my own version of it.

      • Typo: “marginal value to a single life is different than the marginal value to a million lives” should be “marginal value to a single life is different than one millionth of the value of a million lives”. I trust that on this blog we all know even statistics can be tragedies.

      • Nope, I’m wrong! I suck at my title, and should have tried reading a few more comments down first. Seems he thinks we should look at the lumpiness of dollars spent, not the lumpiness of lives lost. I’m more on Lakeland’s side of that argument, but hope my comments get approved despite being inaccurate, as I still like my own version of the objection.

  2. A bit more on the value of a statistical life: I agree with Andrew that the concept can be useful, but it is not as straightforward as having an estimate and applying it in all circumstances and at all scales. Economists like to respect “consumer sovereignty,” where consumers decide the values to put on things rather than bureaucrats. The traditional examples are things like communist governments declaring cosmetics taboo (while consumers really wanted them).

    However, the evidence is overwhelming that people value statistical lives differently in different circumstances (I can dig up the references, but I don’t think the idea will surprise anyone): the cost of saving a statistical life varies by orders of magnitude when we compare actions like eliminating railroad/road crossings, designing child car seats, making air force pilots safer, etc. In other words, the qualitative dimensions of risk matter to people (is the risk voluntary? are many people exposed to the risk simultaneously? etc.).

    Yet many economists like to point out how inefficient our safety policies are: we could save more lives without spending any more on safety by reallocating our spending patterns. This is undoubtedly correct – but largely irrelevant. At least it is inconsistent with the idea of consumer sovereignty. Consumer tastes for cosmetics are king (unfortunate term, but it has been used in economics for a long time); but somehow consumer tastes for safety are irrational. There is a lot of ethical and philosophical debate hidden within the use of the value of a statistical life.

    • I disagree that we should accept consumer sovereignty in all circumstances. After all, the post-Reconstruction South put a very low dollar value on Black lives. That was a real preference, but it’s not one we should accept as OK. The right calculation is to use willingness to accept: how much do you have to pay someone to let society kill them through policy? That cost is very high for most people. It’s well above GDP per capita per life-year lost, imho.

      • That may well be your moral view, and it may well be right in some moral sense, but we have to marshal society’s resources in a way that society agrees to do, not necessarily in a way that accords with your (or my) moral preferences. That’s what the consumer sovereignty view is based on.

        • Huge swaths of society would happily PAY to get a license to be able to kill off all the X people for X = some racial/social/national group. That’s literally the history of North America from like 1492 to about 1860.

          And yet, there are laws against murder. So, we override people’s preferences all day long in the modern world.

        • I agree, unadulterated democracy is tyranny by the masses. But that’s exactly why we have checks and balances: the Supreme Court and the Constitution and all that….

          Say a cost-benefit analysis was made and it showed that starting the economy in a certain way (faster than you personally like!) is economically optimal (from a cost-of-life calculation) yet leads to X extra deaths, biased toward a certain age cohort. Then let this be judged by the Supreme Court and decided for its constitutionality.

          My hunch is, the courts would allow it. What the appropriate startup policy is, sure, is a matter of debate, and different experts have different views (and I myself am not sure what the right one is!). But I doubt any of the range of options being discussed is abjectly unconstitutional.

          That’s why I think comparisons to American black-injustice history etc. are a bit of a slippery-slope fallacy.

        • Apparently we agree! Let’s see high-quality estimates under a variety of assumptions, and narrow in on which assumptions people actually believe by determining how much pushback there is against them!

          But what we’re going to get is just nowhere near that if we start shooting down cost-benefit analysis as fundamentally flawed… “bogus,” as Andrew said. The alternative to C-B analysis is basically “Mayor Joe says god bless everyone, we’ll be protected by our faith, you can all go back to work today, I’m planning to have a beer with the whole town in a celebration, and a parade!” and “Mayor Andy says no one can leave their home even to get food, and if you don’t have food just go hungry for a couple weeks” and so forth… a lot of terrible gut instincts not trained by experience.

          The cost of life-years is not *fundamentally flawed*; it’s an uncertain parameter for which they plugged in a single number that doesn’t respect the realities of the uncertainty, including not just the uncertainty in the outcomes but uncertainty in the *VALUES*.

          Andrew should have been more careful in his criticism.

        • The question is not whether we override some people’s preferences, or even a lot of people’s preferences. The question is whether we override the median voter’s preference, suitably adjusted for constitutional constraints.

        • Well said. I was trying to formulate this idea, but didn’t come close to your clarity.

          Daniel believes the model is the bedrock truth. But it’s not. Society’s reaction has to be factored in, and that reaction is probably not linear. 10,000 deaths may cause no negative societal consequences at all, while 11,000 might cause rioting in the streets. You can’t ignore this nonlinear response even if you think it is irrational.

        • You’re asking for a positive estimate (what will happen… rioting after 11k but not 10k). What I’m asking for is a normative estimate. What tradeoff *should* we make so as to place our resources where they do the most good, as defined by rational tradeoffs…

          People *should* put on seat belts. When they don’t, the rest of us suffer the costs: loss of life, excess medical care, etc. The fact that you’d have to pay people $10,000 to accept seat belts in 1955 or whatever is not evidence that we shouldn’t force people to wear seat belts.

          The whole point is to answer the question *what SHOULD we do*. If you just want “what is likely to happen,” you can stick to the natural experiment and see.

          We don’t want an experiment, or a “gut feeling”; we want a well-reasoned plan.

        • I was not advocating for consumer sovereignty. In fact, it is an Achilles’ heel of economics. If you believe in it, then the value of a statistical life becomes unusable. If you ignore it, then you have to face the enormous empirical evidence of societies that ignored it. So the idea has to be that consumer tastes are to be respected, but up to limits – limits that depend on values. And, since values differ, that requires political and social institutions to make decisions. Unfortunately, those appear to be failing us when we most need them.

        • This is in my opinion a big problem with Economics today. It’s totally devoid of moral character. Economists are trained by their brethren to NOT use their powers for good. It’s like “with great power comes the responsibility to NEVER EVER USE IT FOR GOOD”.

          psh..

          MOST people can’t do the math. They just can’t. It’s like asking them to dunk on an NBA court. They just can’t. Maybe with decades of education they’d get there, but at the moment they can’t. So economists do their decades of training, and then they sit back and just quantify what people do and say “what they do is king.”

          The reason people do what they do is that they are *literally incapable* of doing the math. They’re handicapped, so they rely on heuristics that often work, but fail in extreme circumstances.

          Is the sunk cost fallacy a fallacy or not? Is the broken window fallacy a fallacy or not? “Consumer behavior is king” say the Economists “if people act like the sunk costs are important to the decision, then the sunk costs ARE important, if people act like breaking windows stimulates the economy, then it stimulates the economy”.

          It’s the ultimate in wanking. We don’t need a science of “seeing what people do and declaring it the optimum by definition” we need a science of predicting what would happen under different circumstances and offering up those predictions for consideration in policy.

        • Andrew had a number of posts some time back on the two types of pop economists: (a) those who find that the things people do, seemingly irrational, are really rational (think Levitt), and (b) those who suggest better ways to do things that people would find rational ex post and would have found rational ex ante if only they had the brainpower to do so (think Thaler).

          The latter group, to which you proudly declare your allegiance, assumes they have the true answer that the rest of the world is too dumb to see. But (a) if this thing that everyone is too dumb to see requires some massive mobilization of people, you have to convince them before you get them to do it, and that means actually appealing to their actual brains, not your superior one; and (b) you’d better be damn sure you’re right. I know *you’re* sure, but the rest of us are operating under a lot more uncertainty than you seem to be.

        • Jonathan:

          I’m glad you remember my old posts. Just to clarify: Levitt engages in both modes of economics! Sometimes he says that behavior that seems irrational is actually rational; other times he does the technocrat thing and says that behavior that seems rational is actually irrational.

          Here’s the rule of thumb: If someone’s acting like a jerk, an economist will tell you that his behavior is rational and, in fact, beneficial to society. If someone’s acting in a way that’s beneficial to society, an economist will tell you that his behavior is irrational and, in fact, he’s a jerk.

          And sometimes the economists are right about this! That’s what makes this all so difficult! If pop economists were always wrong, we could just follow the simple rule of doing the opposite of whatever they say. But sometimes they’re right, which unfortunately means that we have to evaluate each case on its own merits.

        • I’m not declaring I have the true answer. I am just claiming to know the proof of Wald’s theorem, which says that there really isn’t any better *method* for coming up with which option you should choose than the class of Bayesian decision rules. For any method outside the class, there’s always a method in the class that does uniformly at least as well or better. (There are some technicalities; I don’t think they matter here.)

          Now, what the model for value should be, or what the parameters are, or how to predict outcomes effectively, those are all up in the air. We can all debate those things openly. I claim one value of life years, someone else claims another… I claim a certain spreading rate, or a certain asymptomatic case rate, others disagree, we can debate. But ignoring the fact that there’s a well known optimal *methodology* would be stupid.

        • “You’re asking for a positive estimate (what will happen… rioting after 11k but not 10k). What I’m asking for is a normative estimate. What tradeoff *should* we make so as to place our resources where they do the most good, as defined by rational tradeoffs…”

          But rioting in the streets is real and bad. When deciding what we should do, we should take into account all effects, including rioting in the streets. D(11,000) ≠ 11 × D(1,000).

        • Sure, yes, but D(11,000)/11,000 × 11,000 = D(11,000),

          so if you say that we’re not doing a good job estimating D(), I’m fine with that. What I’m not fine with is “there weren’t riots with 10,000, and we couldn’t measure all the complicated knock-on consequences of 10,000 deaths, so they must be zero.”

          A big problem with Economics is assuming that “things we don’t measure are zero.” Consider the value of home work: child care, food prepared at home, medical care provided at home, and the option/resiliency value of people who are available to deal with family emergencies and the like. Where is it in GDP?

        • Daniel:

          So we pretty much agree.

          1. We cannot assume D(N) is linear in N.
          2. We cannot assume D(10,000) = 0. (I never said that.)

        • Yes, I agree with you. However, I think we can go farther. I think D(N) > k*N for some positive k and N larger than a few thousand. So we can absolutely get a lower bound by linear extrapolation with some price k.

          The reason is that a human life can’t be less valuable than the income that person is producing, and the average income is GDP/capita; but when there are rapid losses of life, the world clearly reacts strongly, and the consequences are substantially more than just the loss of the N lives (9/11, for example, or the Vietnam War).

          So it seems to me we’re pretty safe in saying that for large N, D(N) > GDP/capita * N

          sometimes substantially more.
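
          A minimal sketch of that lower bound, reading the GDP/capita figure as applying per life-year lost (as elsewhere in this thread); N and the remaining life-years are assumptions:

          ```python
          gdp_per_capita = 60_000   # $/year, the rough US figure used in this thread
          deaths = 1_000_000        # assumed N in the intermediate range
          remaining_years = 10      # assumed average future life-years per death
          bound = gdp_per_capita * deaths * remaining_years
          print(f"D(N) lower bound: ${bound / 1e12:.1f} trillion")   # $0.6 trillion
          ```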

        • Another reason to think that the nonlinearity is upwards from linear is network effects in families and communities, and public goods not priced by the market.

          When one parent dies, the children suffer a lot, when both parents die, the children may be split up in foster homes or whatever, the children suffer a WHOLE LOT. When a grandparent dies, all the grandchildren suffer. When a small business person dies, the business goes bankrupt and all the employees and business partners suffer “transaction” type costs where they have to find new work or new partners. Economic activity that would have occurred is lost at other places than just the now defunct business.

          A lot of these values are not captured in economic transactions. Kids don’t pay their grandparents to take them for weekend trips, for example. YouTubers generate a lot of real entertainment value that isn’t captured by the advertising payments to the YouTuber, just like the Census Bureau creates a lot of value by publishing its data, which it doesn’t capture in sales of data. We should be happy about the consumer surplus, but we should also acknowledge that probably we should be paying people a UBI, which could be considered a way to compensate them for the public goods they create, so that we get a relatively more optimal quantity of such public goods.

        • Daniel:

          “a human life can’t get less valuable than the income that person is producing”

          But the person also consumes resources. If a person produces $1,000,000 in income, but consumes $100 trillion in resources, is $1,000,000 still a lower bound?

          “So it seems to me we’re pretty safe in saying that for large N, D(N) > GDP/capita * N”

          So for some large N, everyone should pay every penny they have to avoid N deaths where N < total population?

          You've got a lot of strong claims here, and especially strong claims about having definitively solved very difficult moral questions using fairly simple mathematical calculations.

        • Terry, there are three asymptotic regimes of interest in my opinion.

          The first is asymptotically for small N in the several hundred people range… This is subject to a lot of noise, it seems to depend on which people… it’s problematic. “Luckily” we’re not in that situation.

          Next is asymptotic for N approaching the total population… obviously this is problematic because at some point we just don’t have resources. We can’t spend $200k/person-year for every person in the world, because we don’t have that kind of resources to spend.

          So what we’re left with is what’s called the intermediate asymptotics: larger than small N and yet still smaller than some moderate fraction of the whole population. So maybe from 10,000 people to 10% of the population. Let’s suppose we’re talking 1 million people in the US, which is 0.3% or so. It’s reasonable to think that society breaks even if it spends resources equal to everything those one million people will ever produce *NET* in their lifetimes in market exchanges, plus some additional uncaptured value. After all, anyone who thinks a 1950s housewife had zero value because she never once worked for a wage can eat my shoeleather.

          So, in the intermediate asymptotic range from 0.1% to say 10% of the population, if we use as a *lower bound* the GDP/capita times the number of people, we get a perfectly reasonable order-of-magnitude estimate, way better than anything you’ll get from the “gut instinct” of some random politician.

          Outside this range, I agree it’s problematic, and that’s particularly difficult for things like worrying about extinction-level events, asteroid collisions, out-of-control global warming, etc. But we’re not in that situation either. 10% of the world is 700M people; I think anywhere from 100k people to 100M people, my approximation is still better than any alternative I’ve heard.

        • Also, I should say that I’m totally open to some alternative model. All that matters is that we pick the best one available. I’m not claiming to have “definitively solved” the value of a life, just that the number GDP/capita * N is a lower bound for medium size N and that this calculation is far superior to anything anyone else I’ve seen is willing to write down.

          We go to war with the army we have. We calculate the values of different options in a pandemic with the best available calculation method at the moment… Come up with a better method. I’m eager to hear it.

        • Consider the alternative logic… People are worth substantially less than even their lifetime earnings… So we should shoot people and keep the money we were going to pay them for the work they were doing.

          This is actually more or less the philosophy of Nazism. Auschwitz was the outcome. I have no problem stating that I am firmly against this view.

          I think this is also the logical conclusion of the “robot apocalypse” under our current social thinking. Given a replicator and unlimited energy, the marginal value of a life as measured against wages is $0, since no one produces anything. Therefore all people aboard the starship Enterprise should immediately take a cyanide pill, since they’re producing nothing and consuming something.

          Sorry. I don’t buy that either.

    • All that discussion a few days ago about how we could do mass testing to estimate prevalence, and it turns out that we only had to perform some tests in Bellingham Square in Chelsea, MA to know that one-third of the world population was infected.

    • Btw, here is the daily data: https://i.ibb.co/y5jsCxp/covid-Daily.png

      Here is the output of a simple SIR model that takes into account testing roughly matched to the above: https://i.ibb.co/rdrJSnx/sir-Results1.png

      Here is all the model output from start to finish: https://i.ibb.co/gZ5M10Y/sir-Results1b.png

      I didn’t try fitting it to the data using ABC or anything at this point (and I see there’s a bug/artifact in how deaths were counted that I will fix later, plus some optimizations). The point is that qualitatively it is possible to get results very similar to what we see by assuming the virus was already widespread by the time testing began, and that by the time the first death was reported in the US (~March 1st) it had already nearly peaked.
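
      For readers who want to reproduce the qualitative point, here is a minimal SIR sketch (not the commenter’s actual model; the linked images show that one). The population matches the 4 million mentioned below; the transmission and recovery rates are assumptions:

      ```python
      # Minimal SIR sketch. Illustrates how an epidemic seeded before testing
      # begins can already be near its peak by the time the first deaths are
      # reported. All parameters are assumptions.
      N = 4_000_000            # population (matches the sim size mentioned below)
      beta, gamma = 0.5, 0.2   # assumed transmission/recovery rates (R0 = 2.5)
      S, I, R = N - 100.0, 100.0, 0.0

      for day in range(1, 181):
          new_inf = beta * S * I / N   # new infections per day
          new_rec = gamma * I          # new recoveries per day
          S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
          if day % 30 == 0:
              print(f"day {day:3d}: infectious {I:12,.0f}  recovered {R:12,.0f}")
      ```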

      • What I see from your “full” view is that you’re suggesting that around the time we had the first detected community spread case, there might have been ~ 10-20k cases.

        I agree with that. It could have been true.

        10-20k cases out of 330M people is still a negligible percentage.

        We might have today anywhere from, say, 0.2% (the official count) to, say, 5-10% of the US already infected. We do NOT have 25%+ infected. There’s no reason to think that and plenty of reason to think otherwise. For example, they did antibody testing of 6,000 people around Telluride and found ~1.5% positive.

        https://khn.org/news/a-colorado-ski-community-planned-to-test-everyone-for-covid-19-heres-what-happened/

        the article goes into lots of details irrelevant to my point… which is that 6,000 people is a pretty good sample, and you absolutely could not have 10 or 20% or more with antibodies and test positive at only 1.5%.
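
        The sampling arithmetic behind that claim (treating the ~6,000 tests as a simple random sample and ignoring test error, both simplifying assumptions):

        ```python
        import math

        n, p_hat = 6000, 0.015                   # ~1.5% positive out of ~6,000 tests
        se = math.sqrt(p_hat * (1 - p_hat) / n)  # binomial standard error
        print(f"95% CI: {p_hat - 1.96*se:.2%} to {p_hat + 1.96*se:.2%}")              # ~1.2% to 1.8%
        print(f"A true 10% prevalence would sit {(0.10 - p_hat) / se:.0f} SEs away")  # ~54
        ```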

          “10-20k cases out of 330M people is still a negligible percentage.”

          There is no way you can tell from the charts, but the pop for the sim was 4 million; I wouldn’t read too much into the exact numbers at this point.

          “We might have today anywhere from, say, 0.2% (the official count) to, say, 5-10% of the US already infected. We do NOT have 25%+ infected. There’s no reason to think that and plenty of reason to think otherwise. For example, they did antibody testing of 6,000 people around Telluride and found ~1.5% positive.”

          It fits with 25% of the NYPD out sick. I’d say for dense urban areas where everyone is using public transport, elevators, etc., it is going to be something like that. And we still don’t know how long antibodies are detectable in people with mild/asymptomatic illness.

          “After adjusting for population and test performance characteristics, we estimate that the seroprevalence of antibodies to SARS-CoV-2 in Santa Clara County is between 2.49% and 4.16%, with uncertainty bounds ranging from 1.80% (lower uncertainty bound of the lowest estimate), up to 5.70% (upper uncertainty bound of the highest estimate). […] The most important implication of these findings is that the number of infections is much greater than the reported number of cases. Our data imply that, by April 1 (three days prior to the end of our survey) between 48,000 and 81,000 people had been infected in Santa Clara County. The reported number of confirmed positive cases in the county on April 1 was 956, 50-85-fold lower than the number of infections predicted by this study.”

          https://www.medrxiv.org/content/10.1101/2020.04.14.20062463v1

          So for example in Chicago there have so far been 11k reported cases in a city with a population of 2.6 million. 50-85x under-ascertainment yields 21-36%.
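
          The extrapolation step, made explicit (whether the Santa Clara under-ascertainment factor transfers to Chicago is exactly what is in dispute):

          ```python
          # Scaling reported cases by an assumed under-ascertainment factor.
          reported, population = 11_000, 2_600_000   # Chicago figures cited above
          for factor in (50, 85):
              print(f"{factor}x under-ascertainment: {reported * factor / population:.0%}")
          # 50x -> 21%, 85x -> 36%
          ```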

        • There’s no question we need widespread, accurate antibody testing. I just don’t see how we can have a peak in death rates now-ish if 48-81k people had the disease and recovered enough to express antibodies in Santa Clara before April 1.

          More likely they contaminated their tests, or have a huge false positive rate or some other issues.

          Stanford Univ found a much lower rate of antibodies in their tests. https://news.cgtn.com/news/2020-04-13/Coronavirus-in-California-circulates-months-earlier-than-anyone-knew-PECgMGS2cw/index.html

          They did a thing where they went back and looked at samples collected at student health centers etc. You can maybe google that up. So it seems likely that the Santa Clara people screwed up the testing.

        • Yea, I posted this here when it came out:

          We performed a retrospective study that evaluated all nasopharyngeal and bronchoalveolar lavage samples collected between January 1, 2020, and February 26, 2020, from inpatients and outpatients who had negative results by routine respiratory virus testing (respiratory pathogen or respiratory viral panels [GenMark Diagnostics] or Xpert Xpress Flu/RSV [Cepheid]) and had not been tested for SARS-CoV-2.

          https://jamanetwork.com/journals/jama/fullarticle/2764364

          They limited the tests to people who tested negative for everything else. Why, when coinfection is supposed to be common? Additionally, I suspect that limitation is actually going to select for bad swabs that didn’t collect enough fluid or got exposed to something that degrades RNA, etc.

          Also, that is RNA, which is ideally like a snapshot of current infection, while these other results are for antibodies, which are ideally cumulative (assuming no waning to below the detection threshold).

        • You’re right, I had misremembered the Stanford tests as being antibody tests from blood, but it was just RNA from swabs.

          Anyway, I still think it’s more likely that Santa Clara had a bad round of testing than that the whole world missed a massive spike in coronavirus cases back in Dec or Jan, as the Santa Clara people would have you believe.

        • From the Chelsea data:

          “Chelsea had 315 COVID-19 cases on April 7 which, with a population of 40,000, means they are seeing roughly 79 cases per 10,000 residents — four times the rate of surrounding neighborhoods.”

          So with 50-85x under-ascertainment that yields 40-67%, when we saw 30% out of 200 antibody tests (which were done on people walking around who had not previously tested positive).

          In Iceland they reported ~13% of people with symptoms and ~0.5-1% of people without symptoms tested positive for the RNA, and this was stable over the course of a month (March to April): https://www.ncbi.nlm.nih.gov/pubmed/32289214

          Assuming the 1,221/9,199 positives in the targeted screening are comparable to the numbers from other countries, 50-85x under-ascertainment would be 61k to 104k out of a 364k population, or 17-29% testing positive on antibodies.

          Let’s say each infection yields detectable RNA for 2 weeks, and a constant 1% of the population has been infected at any given time since Jan 1 (about 14 weeks); then we would expect about 14/2 × 1% = 7% to test positive for antibodies. I think that makes sense, since Iceland has the lowest population density in Europe.
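
          The point-prevalence-to-cumulative conversion in that last paragraph, spelled out (the 2-week detectability window and the steady 1% are assumptions):

          ```python
          point_prevalence = 0.01   # assumed fraction RNA-positive at any given time
          weeks = 14                # roughly Jan 1 to early April
          detectable_weeks = 2      # assumed duration of RNA positivity per infection
          print(f"Expected antibody-positive: {point_prevalence * weeks / detectable_weeks:.0%}")  # 7%
          ```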

        • Anoneuoid says:

          “In Iceland they reported ~13% of people with symptoms and ~0.5-1% of people without symptoms tested positive for the RNA, and this was stable over the course of a month (March to April): https://www.ncbi.nlm.nih.gov/pubmed/32289214”

          This interpretation is not correct. The distinction is between targeted and non-targeted testing. Most people in the targeted group had symptoms (it is a requirement in most cases) but not all. The people in the non-targeted group were not necessarily without symptoms.

          “Assuming the 1,221/9,199 positives in the targeted screening are comparable to the numbers from other countries, 50-85x under-ascertainment would be 61k to 104k out of a 364k population, or 17-29% testing positive on antibodies.”

          I don’t understand your reasoning. Why are you applying the percentage from the targeted testing to the whole population instead of using the percentage from the random sample (0.6%)?

          “Let’s say each infection yields detectable RNA for 2 weeks, and a constant 1% of the population has been infected at any given time since Jan 1 (about 14 weeks); then we would expect about 14/2 × 1% = 7% to test positive for antibodies. I think that makes sense, since Iceland has the lowest population density in Europe.”

          This makes no sense. Targeted testing started at the beginning of February and the first confirmed case was on February 28th. Also, about 2/3 of the population live in the metropolitan area, so the population density doesn’t tell you very much.

        • I think the issue with the Santa Clara case is the study recruitment. They placed Facebook ads offering coronavirus testing. It seems obvious that individuals who respond to these ads and then show up to be tested would disproportionately be individuals who suspect they have the virus, biasing estimates upwards.

          “I think the issue with the Santa Clara case is the study recruitment. They placed Facebook ads offering coronavirus testing. It seems obvious that individuals who respond to these ads and then show up to be tested would disproportionately be individuals who suspect they have the virus, biasing estimates upwards.”

          Thank you for contributing a useful insight.

        • So Santa Clara finds a prevalence of about 3% and you think the thing that can be generalised from that to Chicago is not the prevalence but the ratio of confirmed cases to prevalence, while Chelsea MA finds a prevalence of about 33% and you think the thing that can be generalised from that *is* the prevalence, and you think it can be generalised to the entire planet?

        • Anon:

          Disagreement is fine, debate is wonderful, disrespect can be ok, but please no rudeness as this can lead to spirals of unproductive discourse. Thanks for understanding.

        • Andrew, I understand. But this poster has been harassing me. They don’t post useful content like data or analysis, at least not anymore. When they were new to the blog they tried, and it ended up being a paper where 50% of the subjects were children, which they used to make claims about the rate of smoking in China. Since that loss of face they have harassed me.

          They only post critiques of strawmen they made up and attribute to me.

        • The example of the “study of 50% children” was an illustration that you cannot compare study demographics of self-reported smoking to population-level demographic statistics to imply that any difference has a causal basis. The presence of 50% children is an example of the confounding variables that make such an analysis deeply suspect, similar to how in the SARS/SARS2 situation you might have things like health-worker status, exposure to public health campaigns, trust in doctors, and so forth.

          Anyway, I don’t post for your benefit. Others can judge whether my critiques are worthwhile.

          “The example of the ‘study of 50% children’ was an illustration that you cannot compare study demographics of self-reported smoking to population-level demographic statistics to imply that any difference has a causal basis.”

          Ok, this is just a lie:

          Here’s a Beijing study of influenza giving only 8% of influenza patients (and 15% of uninfected patients) saying they are current smokers. I suppose it must have slipped your “we only see it in SARS and SARS2 studies” sweep.

          https://www.ncbi.nlm.nih.gov/pubmed/28456530

          What exactly would it take to show your methodology is silly? How far do people have to go to *disprove* a thing that you haven’t remotely proven?

          https://statmodeling.stat.columbia.edu/2020/03/07/coronavirus-age-specific-fatality-ratio-estimated-using-stan/#comment-1258522

          Back to ignore with you.

        • That’s literally exactly my point. You asserted that only studies involving SARS/SARS2 show lower smoking prevalence than population survey values. So I immediately found one. And then you shifted the criteria to say well obviously this sample has lots of children and that explains that. But you never pre-specified that the sample has to be representative of the population, because that was my original critique – that there’s no proof *any* of these samples are representative, see e.g. your criticism that the original SARS study adjusted for health worker status.

        • No, at the point where I pointed out the particular paper I did not notice the demographic breakdown. But that’s a rather peripheral point to my argument, which was not, as you deceptively present, some fact about “the rate of smoking in China”, but rather the need for experimental trials and analysis techniques to *adjust or control for confounding variables*, which as demonstrated, have large effects in the headline value of “self reported smoking” proportion that you were focusing on.

          The particular claim you made that drew my ire was that such adjustment is somehow cheating or part of some larger anti-smoking conspiracy(?), when it is a very important part of good-faith statistical investigation, and a step that you persistently fail to take.

        • Is the infection rate in NYC above 20%? Could be, because the lab-confirmed death rate is 0.1% and the excess death rate is 0.2%. An infection fatality rate of 1% is plausible, and deaths have not stopped yet; they are obviously delayed relative to infections.

          Is the infection rate in the rest of the US the same as in NYC? Seems unlikely, because the 600,000 excess deaths that we would expect to see are apparently not there.
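
          In code, that back-of-envelope (all inputs are the figures in the comment above, not estimates of mine):

          excess_death_rate = 0.002  # 0.2% of NYC's population, as above
          assumed_ifr = 0.01         # 1% infection fatality rate, as above

          implied_infected_share = excess_death_rate / assumed_ifr  # 20%

          us_population = 330_000_000
          # If the whole country matched NYC's excess death rate:
          expected_us_excess_deaths = excess_death_rate * us_population  # ~660k

          print(f"implied NYC infection rate: {implied_infected_share:.0%}")
          print(f"US excess deaths at NYC's rate: {expected_us_excess_deaths:,.0f}")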

        • Thanks for supporting my point that New York City or Chelsea, MA are not representative of the infection rates across the country.

        • For another example: in New York City there have so far been 126k reported cases, in a city with a population of 8.4 million. A 50-80x under-ascertainment factor yields a prevalence of 75%-120%.
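
          Spelled out, with the figures from that comment (the impossible result is the point):

          reported_cases = 126_000
          nyc_population = 8_400_000

          for factor in (50, 80):  # Santa Clara-style under-ascertainment factors
              implied_prevalence = factor * reported_cases / nyc_population
              print(f"{factor}x under-ascertainment: {implied_prevalence:.0%} of NYC infected")
          # 50x gives 75% and 80x gives 120%; a prevalence over 100% is impossible,
          # so the Santa Clara ratio cannot generalise to New York City.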

  3. Agree with Andrew on the need to carefully tease this out, and on the role of academics, including economists, statisticians, public health researchers and political scientists, working diligently toward consensus. But I think it’s important not to lose the forest for the trees. As Deming said, “Put a good person in a bad system and the bad system wins, no contest.” And as I write in my piece in THCB this week – https://thehealthcareblog.com/blog/2020/04/13/covid-19-makes-the-case-for-a-national-health-care-system/ – “As we are tragically realizing, there is no ‘United States’ without a safe, secure and reliable health care system.”

    • Thanks Mike. In Canada it’s the federal/provincial divide that will need to be addressed (though, I’m biased: they are trying hard to co-operate).

  4. Some quibbles with the paper’s wording. Skip this if you’re not in the mood for niggling.

    “It is difficult to understate the economic impact of the COVID-19 pandemic.”

    I think the authors meant to say “overstate”. “Understate” doesn’t logically fit with the rest of the paragraph.

    Plus, I disagree. It is easy to understate or overstate the effect on the economy.

    “But the most important factor in the long-term health of the economy is the health of the population.”

    This needs at least a cite. A lot of very bad things can happen to an economy and it is far from obvious that health effects of the type we face now are the most important factor.

    To be fair, I applaud the authors’ tone overall. These are minor quibbles.

  5. Andrew writes:

    “By summer, we hope to identify effective medical treatments. By next year, we hope to find a safe and effective vaccine.” Do you have evidence for these claims? References?

    Well, Will Marble does say “hope”, right? Not that he’s sure. And for that claim I would say there’s good evidence, no? The antiretrovirals, the chloroquine trials, past experience of vaccine development timelines, announcements by Gilead, Johnson & Johnson, etc.

    At least for “hope” what stronger evidence could one demand?!

    • Rahul:

      I assume they have some place where they’re getting these hopes. So I’d like to see the references. If these are to news reports of drug company announcements, that’s fine. I just think that in this sort of policy-focused document, it’s good to give your sources and your lines of reasoning.

  6. >The dollar value appropriate to 10 lives cannot simply be multiplied by 1 million to get to the dollar value of 10 million lives. The numbers just don’t work out.<

    So how could this be estimated better? Any ideas? It looks like something involving diminishing marginal costs…
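
    One way to make the nonlinearity concrete is to let the marginal value depend on the scale of the loss and integrate over it, rather than multiplying a small-risk value by N. A minimal sketch in Python; the functional form and every number below are invented purely for illustration:

    import numpy as np

    def marginal_value(n, v0=10e6, n0=1e6):
        # Invented curve: the value of the marginal life declines as the
        # cumulative number of lives at stake grows. v0 is a commonly cited
        # order of magnitude for small-risk VSL, assumed here, not sourced.
        return v0 / (1 + n / n0)

    n_lives = 10e6
    n_grid = np.linspace(0, n_lives, 1_000_001)
    total = np.trapz(marginal_value(n_grid), n_grid)  # area under the curve
    naive = marginal_value(0) * n_lives               # small-risk value times N

    print(f"integrated total: ${total:.2e}; naive linear scaling: ${naive:.2e}")

    With this made-up curve the linear extrapolation overstates the integrated total by about 4x; the sign and size of the gap depend entirely on the assumed curve, and estimating that curve is exactly the hard part.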

  7. “it also suggests to me that the 1918 economic experience isn’t so relevant…”

    I don’t think that’s correct. There was no equivalent shut-down in 1918, that’s the key difference at this point.

    The shut-down itself has caused most of the damage in 2020. That was the plan. Take a deep early hit, get control of the disease, then move forward with mitigation procedures in place, lessening the long term damage.

    But now there’s a big risk: if we don’t get control and put mitigation measures in place we can’t reopen, and then continuing with the shut-down really will create an economic disaster.

    So I think, in the absence of clear leadership at the federal and state level on when they’re going to get their shit together on testing, people are right to start questioning the shut-down. It’s a temporary measure. It has to be lifted at some point, and that point has to be relatively soon.

    • Why does it have to be relatively soon?

      I agree 100% that the diddling around while Rome burns has been horrible. We should have millions of tests a day in capacity by now, and antibody tests, and be doing all sorts of hiring and so on to put our plan in place.

      But we don’t have a plan, because … well because of how broken our country is and has been for a decade or more.

      In the absence of the plan, it’s fine to be angry, but it makes no sense to say “well, might as well let this thing blow through the population”.

      The answer is still mitigation, not either full shutdown or full return to “normal”. Until we get a plan for mitigation there can be hardly any relaxation. Even during massive lockdowns we are still accumulating cases at tens of thousands a day, and deaths at thousands a day. A year at 3,000 deaths per day is over 1M deaths.

      Without significant physical isolation, we will go back to doubling every 3 or maybe 5 days. We have ~700k cases now. Unchecked, and mitigated only down to, say, doubling every 5 days, we would be back at 10M cases well within 3 months. The US is confirming 37k deaths against 700k confirmed cases; at 10M confirmed cases we might expect 530k deaths, about the same as the entire Civil War. If you think hospitals in NY and bodies stacked in temporary morgues on the street were bad a week ago, imagine that 20 times worse.

      Saying “we have to open up soon” is just the wrong way to think about it… “we have to substantially mitigate the economic cost soon” yes… open up NO.
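
      The arithmetic behind those numbers, as a quick sketch (inputs are the figures above; the naive deaths-per-confirmed-case ratio ignores under-ascertainment and reporting lags):

      import math

      confirmed_cases = 700_000
      confirmed_deaths = 37_000
      naive_cfr = confirmed_deaths / confirmed_cases  # ~5.3% of confirmed cases

      target_cases = 10_000_000
      doubling_days = 5
      days_to_target = doubling_days * math.log2(target_cases / confirmed_cases)
      print(f"700k to 10M at 5-day doubling: {days_to_target:.0f} days")  # ~19

      print(f"naive deaths at 10M confirmed: {naive_cfr * target_cases:,.0f}")  # ~530k

      Note that at a sustained 5-day doubling time the 10M mark arrives in under three weeks, not three months, so the timeline above is if anything conservative.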

      • “Saying “we have to open up soon” is just the wrong way to think about it”

        No. “shutdown until no one knows when” is the worst kind of thinking we could possibly have. We need a deadline.

        All we need is the damned testing and a plan of how to implement it. If we can’t get that in place by May 15, it’s time for new leadership on every level.

        • Exactly! And there is no rational decision rule that doesn’t try to evaluate how good one choice is vs. another.

          We have NO choice but to try to quantify how good different options are unless we want to just put up with “hey my gut instinct is …”

          Saying “we need a deadline” is just another way of saying “we need to come up with this decision rule as soon as possible” which is true… But given the situation, it seems unlikely we’ll get anything good any time soon. I’m afraid it’ll come down to some people’s gut instincts.

          So far I’m thinking CA is way ahead of the curve, and it comes from Newsom having access to a lot of technology folks who have some clue about how to evaluate options.

      • Too many people are getting lost in modelling. SCREW MODELLING. So far all the modelling hasn’t provided a dime’s worth of practical value. It’s a waste of time.

        We need test kits. That’s all we need.

        • I agree and would add “We need test kits along with the political will and the manpower to use them in a systematic and rigorous manner”.

        • “We need test kits. That’s all we need.”

          I feel like there is a disconnect here:

          https://www.youtube.com/watch?v=NmRlvX3VrAQ
          https://www.youtube.com/user/emcrit/videos

          I have to say, I am still not convinced this test is for the actual thing that is (solely) responsible for the severe illness. The story I’m hearing from those critical care doctors is that this test isn’t providing them with much useful info. There is some kind of novel illness with very characteristic symptoms they are dealing with for sure.

        • There’s no such thing as ‘no modeling.’ As Bill James might put it: The alternative to a good model isn’t no model, it’s a bad model.

          Every decision is based on a model, either explicit or implicit. If you say “we are ready to let people go back to work as long as they do xyz”, you have at least an implicit model of what the economic effects of that will be, and how those will compare to the health effects, deaths, etc.

          This is, in fact, my big beef with the people saying LIBERATE MICHIGAN, arguing for an end to the lockdowns, etc. etc.: they won’t tell you their model or its predictions. I’m not saying they need a quantitative model, although I think there are many advantages to having one, but they should at least tell us what they think are the answers to some of the basic questions, like: (1) if we follow your plan, at what rate will the virus spread if we lift the lockdown, and what does that imply as far as number of deaths over the upcoming months; and (2) taking into account people’s reluctance to work when they might get sick themselves, and to expose a loved one to a fatal illness, what extent of economic activity do you think we can expect under your plan? Without some kind of answers to these, there’s no way to claim the reopening plan is better than what we are doing now. But the people demanding an early end to the lockdown will not answer these questions, or at least I haven’t seen them do so. It’s very frustrating. Mostly it makes me feel like they are ‘thinking with their gut’ rather than reasoning things out.

        • “Mostly it makes me feel like they are ‘thinking with their gut’ rather than reasoning things out.”

          That seems overly generous to me. I’d say they are acting like spoiled adolescents.

        • When I was an adolescent I thought with a different body part altogether, but I’m not sure that’s what’s going on here!

        • Well, some people might say they’re acting like that body part (perhaps in English, perhaps in Yiddish, perhaps in …)

  8. Thanks for the feedback, Andrew and other commenters. There’s been lots of good discussion already. I’ll just add one more thing here:

    I agree that the analogy to 1918 is not perfect. That said, it does seem like the best analog we have, and research on the effects of interventions during that pandemic should be informative about what we might experience now. Absent the historical evidence, we’re mostly relying on macroeconomic models to estimate the economic costs of public health interventions. There are lots of objections one could make to the macro models that do not apply to historical research, and vice versa. So in the review we tried to highlight findings from both.

  9. Will: First, thank you for posting this — we need more of these sorts of comprehensive analyses. Quickly skimming, though, there are several things that puzzle me. First, “stay at home” or not is presented as a binary choice, with nothing in between (e.g. businesses open with physical distancing; no gatherings > N people). This is a false choice, and one that feeds into the increasing politicization of the issue (at least in the US). Second, and more important, there are lots of nice statements about the factors that feed into the evaluation, but nothing I can tell about quantitatively combining them. For example, I did not realize that 30% of people quarantined for SARS developed PTSD (by some measure)! As you note, things like this, infection rates, etc. should feed into decision making, but how? Where’s the phase diagram of decisions versus the various parameters? Clearly it must exist, since you (the authors) quite boldly state that the present path is correct. How different would the parameter values have to be to reach different conclusions?

  10. I did read the paper and I find it a valuable effort. I do wish the value of a statistical life were left out of the discussion. I realize such values are used in many government decisions – and the original valuation was based on work done by Sherwin Rosen (he died a few years ago – I had the good fortune to have had him as a professor). His hedonic model estimated the value of a statistical life from wage premiums associated with differential risks in occupational choice. When he heard that the values he estimated had been used in federal highway decisions, he was horrified – he thought they had been applied in circumstances that were different from the risks voluntarily taken in the workplace.

    My concern is that once a statistical value of life is adopted, the ethical issues are swept under the rug. What if the value is determined to be low enough that rapid opening of business is “justified” by a cost-benefit analysis? Would that be a good reason for doing so? Would it even be a relevant data point to use in making the decision? I would point out that much of environmental economics is like that – what is the value of good visibility? Night-time dark skies? Economists have lots of fancy ways to measure such things (all ultimately based on willingness to pay or willingness to accept compensation), but the valuation requires that we view these things as just another commodity to be compared with other commodities. [fortunately, for environmentalists, the economists manage to generate high enough values to suggest we need cleaner air, better visibility, etc. – just as the current paper suggests that stay-at-home policies are “justified” in an economic sense] If we adopt the cost-benefit framework, it is entirely possible that we could “justify” the continuation of slavery, just as we would be unlikely to “justify” the Endangered Species Act.

    This would seem to leave little room for ethics to govern such decisions. While ethics is far messier than cost-benefit analysis, and less amenable to mathematical analysis, I still think there is a place for ethics. In fact, it may be a key factor in the current crisis. My problem with using the statistical value of life is that it diminishes, rather than elevates, the ethical dimension of these decisions.

    • Yes, you can justify anything you like by just claiming a low or high enough value for the thing you dislike/like.

      I don’t think we can get away from balancing one thing vs. another. It’s fundamentally impossible. Everything we do implicitly makes some tradeoff even if we don’t calculate it. The only thing we can do is try to do a good job of making that tradeoff, by putting in values that make sense and by setting up the rules for the evaluation so that we can’t discriminate in ways we find repugnant… It’s not ok to do the Nazi thing and have one set of values for Aryan lives and another for Jewish lives, for example. So there are, in my opinion, rules of the game: the model must be indifferent to race, birthplace, hair color, whatever. And then there are requirements to do a good job of matching the price per life-year to realistic values… ones that don’t lead to absurdities.

      If we set the value of a life too low, then it makes sense to, whatever, shoot sick people, or, if you hit someone with your car, stop and run over them again to make sure they’re dead instead of just injured… etc.

      None of those are ok things. So there are many good debugging tools we can use to debug the value of a life-year.

      Ultimately, though, there is such a value. If there weren’t, we’d just make our building codes say “no one can live in any building less durable than the Great Pyramid of Cheops” or “do what you want, make your house out of toothpicks and spit with spiky knives embedded in the walls”, but we don’t do either of those things.

      • I agree that tradeoffs are necessary and choices must be made. But that does not require that everything be reduced to a single dimension. Multicriteria decision making is more complex and may not lead to an unambiguous choice compared with unidimensional reasoning. But that does not mean the former is incorrect or somehow inferior to the latter. There are rules in economics (not always followed, but good economists know what they are) – they do not permit you to “justify anything you like by just claiming a low or high enough value for the thing you dislike/like.” My problem is that I don’t think those rules matter much for certain decisions. Economics tells us a lot about the world – but it does not include everything. And attempts to make economics cover everything have led to some very bad practices, in my opinion. Most economists (of course I am generalizing now, without good evidence, but I’m on a roll) are quite comfortable relegating politics and ethics to the back-burner by claiming that their analysis has included all the relevant considerations.

        • I’ve never seen a multicriteria decision making scheme that makes sense to me that wasn’t ultimately equivalent to reducing the problem to a unidimensional scale. There is no unambiguous way to put a total ordering on anything but a unidimensional number, so… I don’t know. Would be happy to have you post or send me relevant references.

          I personally think that explicit modeling of life values is important because if you don’t explicitly model it, you implicitly model it, and it’s easy for people to argue “I think we should do X” when in fact it’s equivalent to some abhorrent moral choice to say value a homeless person at $3.27 or value Jews less than others or value foreigners less than others, or etc etc.

          When you have to write down the number, you subject your decision to criticism on the basis that the value you placed is too small / too large and there’s explicit negotiation.

        • Well, you could have a tree-based decision scheme. Or maybe some kind of probabilistic decision, for some reason. Not saying those apply now.

        • This I don’t agree with. To say that you implicitly assume a number is tantamount to requiring a one dimensional measure. What is the cutoff value of a year of life for someone 60 years old that would “justify” losing 6 months of pay for someone 30 years old? The question is ridiculous to me – it doesn’t really matter whether you think the number is high or low. Some questions are not meant to be asked (like the trolley problem). Of course, whatever decision you make can be translated into a value X that answers that question, but that is not the same as saying that your value must be X (or less, or more) if you decide to end quarantining sooner (or later). Not everything is commensurable even if you can calculate a one dimensional measure after the fact. The question is how you reach the decision.

          Multicriteria decision making used to be popular in Europe and never seemed to catch on in the US. People didn’t like it because it was messy and often ambiguous. It forces you to make tradeoffs without measuring them in one dimension (money). You may see it as a way of avoiding declaring the value you put on lives, but I see it as an invitation to discuss values and engage in discourse. Reducing ethics to determining the appropriate value of X just seems wrong to me.
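
          A toy version of this ambiguity, with options and numbers invented purely for illustration; pairwise dominance gives only a partial order:

          # Each option scored on two incommensurable criteria (invented numbers).
          options = {
              "A": {"deaths": 1_000, "cost_bn": 400},
              "B": {"deaths": 5_000, "cost_bn": 100},
              "C": {"deaths": 6_000, "cost_bn": 150},
          }

          def dominates(x, y):
              # x dominates y if it is no worse on every criterion (and differs)
              return x != y and x["deaths"] <= y["deaths"] and x["cost_bn"] <= y["cost_bn"]

          for a in options:
              for b in options:
                  if dominates(options[a], options[b]):
                      print(f"{a} dominates {b}")
          # Only "B dominates C" prints. A vs. B stays undecided: settling it
          # requires a deaths-vs-dollars exchange rate, i.e., one dimension.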

        • The thing I find worse is if we let people get away with treating some lives “as if” they were of very low or even negative value.

          Suppose we have two options, A and B, the first of which is tantamount to murdering a bunch of x-type people and giving their assets to y-type people, and the second of which considers all people of equal value… If the situation is complex it may not be obvious what is going on in the first scenario. Particularly if we shun formal analysis, it can be relatively easy for groups to argue for doing A. Some of these people may even figure out that in the end they’re murdering X and paying Y, but they approve of that… Most of them may be basically unaware of this, as it’s not salient to them.

          This is how we wind up with terrible decisions.

          Instead we should attempt to very carefully analyze our situation and be explicit about our values. Only then can we be sure that the assumptions are not hidden behind a veil.

        • I agree this is a concern. Despite recent discussion of algorithms of mass destruction and their ability to create various injustices, it strikes me that at least with such methods we can see clearly what the assumptions are and substitute others, whereas under other decision-making methodologies such choices may be much harder to uncover. In the current context, where, say, the US Supreme Court can rule that the Wisconsin election in April is not substantially different from an ordinary election without looking at risk at all, this is a serious issue.

        • Suppose you want to make a decision between two distinct plans of action, A and B. The results will differ in many dimensions: total cost, cost to different people, total lives lost, whose lives they are, total jobs lost and which jobs, etc. etc. You have many things to take into account…but the decision has to come down to A vs B, which means you have to have a metric — a utility function — that lets you decide that, taking it all into account, A is better than B, or vice versa. I’m afraid there’s no escaping it, everything has to be compressed to a single dimension somehow. Or rather it _should_ be: Zhou Fang is right that you could introduce a probabilistic element, like flip a coin if you can’t decide, but that’s really a way of giving up on trying to optimize.

          I think Paulos (probably among many others) pointed out that it’s fine to go with your gut or to make a random choice if the answer doesn’t matter very much, but the more important the decision the more important it is to be rational about it. In current circumstances, huge numbers of lives and livelihoods are on the line, so it’s really important to make rational choices.

        • Phil:

          I think you’re missing something here. Suppose you are evaluating a decision in two dimensions, x and y (for example, x is dollars and y is lives). And suppose that, for any particular x, you have a threshold for y. If things are linear so you can write your decision in terms of a utility function, ax + by, then this corresponds to a “value per life,” and you can map back and forth between utilities, value of life, and threshold. I know you know this because it’s in our 1999 radon paper!

          But the part you’re missing is that this equivalence falls apart if a and b are different for different scenarios. So, yes, we can talk about the “dollars per life” rule corresponding to any particular decision, but that does not mean that we can port this “dollars per life” value to other decision problems. There are several reasons for this. One reason is that we just think differently about mass epidemics than about individual medical treatments; i.e., the “lives” mean different things. Another reason is that the economic consequences of $4 trillion are not the same as those of 1000 interventions of $4 billion each. Or maybe they could be, but there’s no reason to think they would be. I.e., the “dollars” mean different things.

          I agree with you that it’s important to make justified choices (what I sometimes call the “paper trail” for the decision); I’m just skeptical about calculations that are made by generalizing “value of life” from one setting to another. And I also get annoyed by economists doing this sort of thing without even seeming to recognize the many shaky assumptions underlying these procedures. I’m with Peter Dorman on this one.
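
          In symbols, a sketch of the linear case being discussed (notation mine, not the paper’s): for dollars x and lives y in scenario s,

          U_s(x, y) = a_s x + b_s y
          dU_s = 0  =>  -dx/dy = b_s / a_s   (the implied “dollars per life” in scenario s)

          Within one scenario the utility weights, the value per life, and the decision threshold are interchangeable descriptions of the same tradeoff. But nothing forces b_s / a_s to equal b_s' / a_s' in a different scenario s', which is why a value per life calibrated on small individual risks need not transfer to a mass epidemic.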

        • If a and b are different for different scenarios, then I think I would be tempted to say: ah, this just means there’s a latent unobserved variable z, so our real decision should be based on a(z)x + b(z)y, say. Then if z (or indeed the functions a and b) are not known, we could try to estimate them and perhaps make some kind of posterior-mean-based argument… I think that’s still kind of a univariate decision framework, though. I think we all appreciate the difficulties and heroic assumptions at play; I suppose the point where we differ is how significant we think the flaws are at this point. Sometimes the Money Man is at your door demanding your answer, and if you don’t give him one soon enough he’ll go to someone less scrupulous…

          There are good arguments for probabilistic decisions, I think, in cases where, for example, you are making decisions against some adversary and you don’t want the bad guys to be able to predict your action – in the case of easing lockdowns, perhaps you want to reduce incentives for data manipulation, for instance. Thinking more broadly, there might be some value in randomisation in the ending of lockdowns, so that we can do A/B testing of the outcomes of different policies. I wouldn’t just call it giving up on rational decision making.

    • Perhaps it’s just my Civil Engineering background that makes this seem so obvious to me.

      Every day, hundreds of times, CEs look up some societal estimate of the appropriate tradeoff between lives lost and economic costs, in terms of how many nails you have to put in each OSB panel, what sorts of bolts you need in the sill plate of a house to prevent earthquake damage, and what the allowable spacing is between floor joists of a given size under a given load.

      The magnitudes involved are nothing like the “we saved 2 people’s lives by having 100,000 people wear kneepads” sort of thing the econ profession likes to play around with. This is literally: there’s an earthquake in Los Angeles and 10 million people are living in homes that were designed to code. If we had put a low value on life, we’d have let people build very shoddy buildings, and then some large fraction of those 10M people would have died… but instead we build buildings that cost hundreds of thousands of dollars per person, so that every hundred years we lose a few thousand people instead of millions.

      Building a house for 4 people to live in in the LA area costs ~ 200-400k (building cost) and large earthquakes only happen every 20-100 years. Why don’t we let people build $40k houses out of plaster and rubble? Because every 20 years 5M people would die all at once.

      The appropriate place to start figuring out how to spend against a pandemic is to figure out how much we spend to prevent large dams from collapsing and snuffing out 500k people, or stadiums, or hospital buildings, or ports, or how we set setbacks from sea walls to prevent inundation during hurricanes etc.

      What we can’t do is just say “hey, you know what, build whatever kind of dam you want”. We have to make the tradeoff and codify it into the building code. In general, the way we’ve gotten the building codes we have is that shoddier construction was allowed, and then large-scale devastation made people realize they needed to strengthen the rules. This suggests that people’s bias against paying for lives is not because they don’t value the lives, but because they’re not very good at estimating the right value to plug in for uncertain future lives, and they’re very good at estimating how much the upfront cost in dollars hurts. It points towards making the amounts bigger than whatever people’s gut instinct is, like the ~£20k per QALY the NHS uses in the UK, or whatever.
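
      Backing the implied number out of the comment’s own hypotheticals, as a sketch (none of these are real engineering or actuarial figures):

      people_per_house = 4
      sturdy_house_cost = 300_000  # midpoint of the ~200-400k quoted above
      shoddy_house_cost = 40_000   # the "plaster and rubble" alternative
      extra_cost_per_person = (sturdy_house_cost - shoddy_house_cost) / people_per_house

      population = 10_000_000      # LA-area figure from the comment
      deaths_averted = 5_000_000   # "every 20 years 5M people would die"

      implied_cost_per_life = extra_cost_per_person * population / deaths_averted
      print(f"implied spend per life saved: ${implied_cost_per_life:,.0f}")  # ~$130k

      Per hypothetical 20-year earthquake cycle, that works out to roughly $130k of incremental construction spending per life saved; the point is the exercise of backing the number out, not the number itself.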

      • I like this justification for your views on the dollar value of a life. It is simple, intuitive, practical, and reasonably robust.

        1. The dollar value of a life is theoretically a very difficult topic.
        2. But we have to act, so we need to, at least implicitly, decide how much a life is worth.
        3. Engineers do this all the time. They have to make real decisions in the real world that affect real lives.
        4. So let’s look at how engineers value a life.
        5. Since engineers are responding to societal cues about the value of life, this gives us some sense (albeit imperfect) of how society values life.

    • I heard something on NPR earlier today about allocating scarce resources — in particular, ventilators for use by coronavirus patients. But I haven’t been able to locate it by a web search. One thing I remember is that the resources are not all allocated by the same rule — they are split into groups to be allocated by different rules, but a “case” might be switched to another group if there are no resources (e.g., ventilators) in the group to which the case was originally assigned. Can anyone else locate it?

  11. Many of the comments to this post dwell on issues related to the “value of a statistical life” (VSL). I wrote a book about that a while back, so I suppose I should weigh in. Here are a bunch of observations which I’ll just state for the record; I can explain their basis if there’s interest.

    1. Andrew’s point about scaling is correct and applies to *all* applications of marginal valuations to situations where the total is non-marginal. The linear scaling of the height of a curve at a particular point, like the present size and allocation of health risks, is a poor indicator of the area under the curve between that point and some other distant point. This is true not only of VSL (if it exists) but the price of fish, for example. (My book on climate change, which I hope will appear some day, works through that example.)

    2. There are large methodological issues with the estimation of VSL. My own work has focused on revealed preference methods involving the labor market, but problems abound in stated preference studies as well. If you look closely, the estimation problems have at least part of their basis in conceptual problems that are intrinsic to the VSL itself.

    3. Why should there be such a thing as a VSL?

    a. We face tradeoffs between mortality risk and other objectives, so it would be convenient if we had a VSL we could plug in to do cost benefit analysis. But this is not an argument for the coherence of VSL. Moreover, the convenience argument rests on the prior belief that the preferred solution to poor political decision-making is to supplant it by algorithmic decision-making by trained experts—as if the political system would defer unconditionally to this delegation and the implementation of the algorithm could be perfectly sequestered from political bias or interference.

    b. Additional years of life increase utility by providing additional opportunities for consumption; thus willingness to pay for greater expected longevity is a precise measure of this benefit. (This is the theoretical argument employed by Viscusi and other mainstream economic advocates of VSL.) But this is a grossly impoverished account of how people value their own longevity, the longevity of their loved ones, and the meaning they derive from life. It’s embarrassing we should even have to argue about whether it’s all about consumption.

    c. Expected longevity is a useful metric for health status. (If you don’t think it’s informative enough, you can modify it as expected QALY’s.) So let’s put a value on it. But this assumes what needs to be demonstrated, that people really do place a single value on a year of life (quality adjusted or not), irrespective of the context in which health status is determined. How should we value the expected health impacts borne by a kidney donor? A long-haul truck driver? A nurse on the front lines of Covid-19? Why should we assume a year is a year, or a QALY is a QALY? FWIW, I have argued that, in employment situations, workers typically distinguish between what they regard as discretionary versus non-discretionary risks. The first are the result of employer choice; the second are intrinsic to the work itself. For the first they demand changes in employer behavior, and only for the second do they demand hazard pay. (I believe we are seeing this sort of response among warehouse workers, Instacart shoppers, etc. right now.) The larger point is that all material impacts on our well-being, including health risks, are mediated by hermeneutics, the systems of meaning we live by.

    Well, I’m violating my own strictures on brevity, so it’s time to stop.

    • Thanks Peter, I think your 3b states the same thing as I say elsewhere above, which is that the value of a life-year, if it’s meaningful at all, has to be more than consumption (I put it basically as: GDP per capita × N is a lower bound on the value of N life-years).

      As for 3a, I don’t think it requires perfect sequestering. There is plenty of politics in the building code, but I still think the building code is a vastly better decision-making scheme than whatever we’re doing right now re: COVID-19. And I think the building code would be better still if, as engineers, we did some explicit calculations on life-years, because we’d probably find out that we are being inconsistent, and that we should reduce certain building requirements and use the money saved to increase others… thereby improving safety at constant cost. For example, we maybe should reduce certain requirements for parking, and put more money into keeping school buildings standing during earthquakes.

      • Your point about partial sequestration is correct — clearly there is a spectrum from perfect sequestration, which is what one sees in a cost-benefit analysis textbook, to complete suborning. I’m pushing back against that first position, since it’s implicit in most CBA advocacy; see Sunstein, for instance. It’s really an empirical question, and my sense is that it’s quite common for methodological choices to reflect the party that pays for the analysis or has predominant political influence. This is often asserted, but I don’t know if it has ever been studied carefully.

        But there is another aspect to this, which is an implicit excluded third (or fourth or fifth) approach. We are supposed to think there are only two alternatives on offer, politicians acting on some combination of their personal bias and donor interest versus algorithmic decision-making. I’m for a third: public reason. These are structured decision-making processes in which arguments need to be made, assumptions and values made explicit, etc. We use something like this for judicial oversight of regulatory agencies, and my impression is that it works well enough overall. I’m interested in approaches that extend this model to broader decisions over rule-writing as well as enforcement. If we used such an approach, expertise would go primarily into methods to distill and communicate technical factors, like health and other risk impacts, and not into assigning values that pre-empt deliberation.

        • Deliberation *about values* is very very important. I agree with you that we shouldn’t as experts say “x is the value of a life year”… we need to *argue* that “x should be the value of a life year because of …”

          Only by doing that can we engage a broad enough range of people to get to “public reason”.

          Algorithmic decision making is just a tool, like a computer algebra system. Maxima or Mathematica are tools. *once you decide which equation should be solved* then they can solve it for you… It is up to us as people to have a conversation about what that equation is.

        • Look, this is very hard. There is no objective (invariant) answer. But, insisting on one is worse than accepting that no answer exists.

  12. Rant; skip to Rant off below. I just came here to write that while I long merely suspected that our institutions are failing us I am now certain of it.

    Our leaders treat us like children and justify their various and sometimes conflicting pronouncements with little more than “Because I said so.” Thus, rather than honestly discussing the limitations of respiratory protection and contextualizing its role in reducing risk, masks are presented as useless one week and adamantine the next. The various RT-PCR tests (assays, actually) for SARS-CoV-2 are likewise presented as something akin to an oil dipstick. It took more than a month for science journalists to start talking about sensitivity/specificity, and it wasn’t until yesterday that I happened across, while trying to find something shiny in the gigantic landfill that is COVID-19 preprints, a sensible (to me, a lawyer, so YMMV) analysis of the efficacy of RT-PCR and the multiple seroconversion tests being rolled out, given their sensitivities/specificities, using different sampling strategies and assuming different prevalences in different populations.
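
    For readers who haven’t seen the sensitivity/specificity point worked through, it is just Bayes’ rule; a sketch with assumed test characteristics (illustrative numbers, not any particular assay’s):

    def ppv(sens, spec, prev):
        # P(infected | positive test), by Bayes' rule
        true_pos = sens * prev
        false_pos = (1 - spec) * (1 - prev)
        return true_pos / (true_pos + false_pos)

    sens, spec = 0.90, 0.95  # assumed, for illustration only
    for prev in (0.01, 0.05, 0.20):
        print(f"prevalence {prev:.0%}: P(infected | positive) = {ppv(sens, spec, prev):.0%}")
    # ~15% at 1% prevalence, ~49% at 5%, ~82% at 20%: the same assay is nearly
    # uninformative or quite informative depending on who gets sampled.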

    Elsewhere a pharma company touting a miracle drug added thousands more patients to its uncontrolled “trial”, and people who dared point out on social media that continuing to add N when hunting for p-less-than-magic-number is p-hacking are publicly called idiots and fifth columnists by not just a few people (including several MDs and epidemiologists) who two months ago I thought were pretty smart. Of course people like Senn and Harrell have shone all the brighter, but they’re drowned out by all the peddlers of statistical certitude who have come out of the woodwork.

    Finally, even though pneumonia (“the old man’s friend”), progressing to what is now loosely called ARDS, has been known for millennia to sweep like a scythe through the elderly, and it has been known for decades that nursing homes, senior living facilities, clinics and hospitals are playgrounds for viruses and bacteria, nobody thought to completely lock down the nursing homes until they began stacking the dead like cords of wood. (Note: in what you scientist types would, I think, call a natural experiment, many are already observing that, following the institution of strong infectious-disease control measures, the number of other infections, e.g. C. difficile, has dropped dramatically.) What a mess. And increasingly I despair that we’ll learn anything from it.

    Rant off. So, anyway after reading the excellent post and all these well-reasoned comments I just wanted to again express my gratitude for this blog and its many commenters; for being a little island of sanity in a sea of intellectual chaos.
