New coronavirus forecasting model

Kostya Medvedovsky writes:

I wanted to direct your attention to the University of Texas COVID-19 Modeling Consortium’s new projections.

They’re very similar to the IHME model you’ve covered before, the one that had some calibration issues.

However, per the writeup by Spencer Woody et al., they do three things you may be interested in:
1. They fix what looks to be a serial-correlation error in the IHME model.
2. They’re fit in RStan (which you suggested the IHME model should have been written in, in lieu of the non-linear least squares approach IHME took).
3. They’re fit using U.S. data, as opposed to the IHME model, which was originally fit using data from Wuhan.
They also add in some stuff about cell phone data, but I frankly don’t know if that’s material or razzle-dazzle. That data could be meaningful, or it could not be. I’m vaguely skeptical, but who knows.

At a first pass, their projections look like a clear smell-test improvement over the IHME model because they get less confident in their projections over time:

By contract, IMHE gets more confident over time:

I’m glad they’re using Stan, and I really like the transparency of what they’re doing, but I’m skeptical of these forecasts because of their high certainty level. For example, here’s what’s on the home page right now:

I can believe that their best estimate is that we’ve already reached the peak. But a probability of 97% that the peak will have passed within 7 days? That seems so high. But I guess the point is that (a) they’re curve fitting, and as the curve above shows, the second derivative has gone to zero, and (b) the results depend on policy, so there’s an implicit stationarity assumption: if deaths go up in any particular location, the local government can shut things down for a while to stabilize the rates.
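To see what a number like that 97% means mechanically: with posterior draws of the death curve in hand, the headline probability is just the share of draws whose peak falls in the stated window. Here’s a minimal numpy sketch with fabricated trajectories (this is not the consortium’s model; every number here is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of a daily-death curve (toy data): each
# row is one simulated 60-day trajectory, with uncertain peak timing.
days = np.arange(60)
peak_locs = rng.normal(30, 4, size=1000)
draws = np.exp(-0.5 * ((days - peak_locs[:, None]) / 10) ** 2)

today = 32                                     # suppose it is day 32
peak_day = draws.argmax(axis=1)                # peak location per draw

p_passed = (peak_day <= today).mean()          # P(peak already passed)
p_within_week = (peak_day <= today + 7).mean() # P(peak passed within 7 days)
```

The headline probability inherits whatever overconfidence the simulated curves have, which is why the width of those uncertainty bands matters so much.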

Anyway, it’s good to have all the code so we can see the mapping from assumptions and data to conclusions.

67 Comments

  1. Minor typo/preposition

    “By contract,”

    should be

    In contrast,

  2. Thanks for the review. A static policy assumption seems highly flawed given our highly partisan environment, complete with anti-science demagoguery. My one wish is for their graphs by region to allow multiple choices. Then I could group the sanely led states that are in the same region together, and see data/projections for all the crazy states separately.

    • Adrian says:

      But adding that dynamic policy response to the model is not easy. There is not enough data to train it, and if you want to model it you’ll necessarily end up making very strong assumptions about how governments respond conditional on the data, assumptions that may not be useful for forecasting.

      I think it’s better to think of these models as an educated guess on what will happen if policy remains constant, which is what they are, rather than demanding something from them that’s pretty much impossible to fit (given the data we have so far) and holding them accountable for not predicting the evolution of KPIs over time. This still makes them very useful.

  3. Garrett says:

    The model’s performance in the upcoming week will go a long way towards establishing their credibility.

  4. Brent Hutto says:

    FWIW, in my home state the Texas model’s cumulative death projection through April 30 is 171 (145, 219) while the IHME model projects 188 (125, 375). So in our case the uncertainty range for IHME is considerably wider.

    Looking at the USA as a whole, I also see wider uncertainty for IHME cumulative deaths through April 30.
    Texas 52K (48K, 58K)
    IHME 60K (34K, 140K)

    So I’m not sure how to reconcile the large uncertainty charts you posted with what I can get from the Texas web site.

  5. jim says:

    All of the forecast models we’ve seen so far should be prefaced with words like “preliminary” or “experimental” or even “speculative”.

    • Brent Hutto says:

      The IHME model was pretty explicitly intended as a planning model for governments to use in forecasting health care resource usage. So it would rather defeat the purpose if they said basically “But don’t trust this it may not be right”.

      It has seemed to me they (IHME) are updating not just the underlying data but the list of data sources and even the modeling parameters and assumptions on a week-to-week basis. So yes, at any moment their model is simply their best guess about what will happen state-by-state over the next several months. But they seem to consider it a feature and not a bug that each update may radically change the predictions. That’s what happens when you just show your best guess.

      In other words, this is more akin to some business planning tool than to a scientific undertaking. I don’t think the IHME folks view each update of their model as something to be peer reviewed and critiqued before moving on to next week’s best guess.

      • jim says:

        “So it would rather defeat the purpose if they said basically “But don’t trust this it may not be right”. “

        To me it’s unethical to assert that a speculative model is an accurate tool for planning public policy.

        • Brent Hutto says:

          When the IHME group was asked (according to them) by government agencies all over the country to make forecasts for all 50 states, they had two options. They could simply say, “No, it is not possible”. Or they could do their best to produce the forecasts and then be as transparent and honest about their methods as they can.

          If they had declined to do it, some other group would. Every agency on the planet is using some basis for their planning. It isn’t like they’re going to take “Can’t be done” as an answer.

          • jim says:

            In the case you describe it’s appropriate to inform the requester that the results aren’t likely to provide any better outcome than a guess and – yes – decline to provide the model in the context of a guide for government policy.

            It’s OK to still create the model and make the forecasts public. But just as is done with speculative investment forecasts the output needs to be labeled for what it is. It should be clearly stated that the output has not been verified.

      • Joseph Candelora says:

        They’ve been light years off on their resource-use projections from the jump. No good decisions could have been made from those projections. Is giving someone a terrible estimate really better than no estimate?

        • Brent Hutto says:

          Even if in retrospect the resource usage predictions did not pan out, they didn’t know that when they started, did they?

          And FWIW, in my state their hospital bed usage predictions a week or two out have pretty much ended up matching what the local health department tells us. That’s just from my eyeballing it every day, but nothing really “light years” off has occurred.

          Then again, both the predictions and actual usage have turned out to be pretty low for us which no doubt makes it easier for predictions (low) to match reality (low).

        • Phil says:

          Joseph,
          Decisions have to be made somehow. Even if you decide to just pick a random number, you have to decide what distribution to draw it from.

          Giving someone a terrible estimate might be worse than giving them no estimate, but only if there’s somewhere they can go and get an estimate that is likely to be less terrible.

          When viral infections can double every three days or so, it’s really easy to be off by a factor of 8 or more when asked to predict how many X will happen, or how many Y will we need. A factor of 8, that’s pretty terrible. But it’s better than being off by a factor of 80, which a lot of pundits and blowhards have been.
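          The doubling arithmetic here is worth making explicit; a toy calculation, not tied to any particular model:

```python
# With a 3-day doubling time, mistiming the epidemic (or the response)
# by 9 days multiplies counts by 2**(9/3) = 8.
doubling_time = 3                        # days
factor_8 = 2 ** (9 / doubling_time)      # off by 9 days -> factor of 8

# Being off by about 19 days gives roughly a factor of 80.
factor_80 = 2 ** (19 / doubling_time)
```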

          • Brent Hutto says:

            At its root, the IHME approach is basically saying to those requiring a forecast, something like…

            We don’t know much about your actual state or region. But if the pattern is similar to Wuhan and Lombardy (since updated to include more areas past the peak) here’s how it will play out for your area.

            Each forecast is basically a plot of one from a family of generic trajectories. The main thing they do for each state is pick which curve from that family to show, based on a few very generic descriptive variables.
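            That “pick one curve from the family” step can be sketched in a few lines. Below is a toy Gaussian-CDF fit in the general spirit of such curve-fitting models — it is not IHME’s actual code, and the data and parameter names are fabricated:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def cum_deaths(t, K, t_peak, scale):
    """One generic family: cumulative deaths as a scaled Gaussian CDF."""
    return K * 0.5 * (1 + erf((t - t_peak) / (scale * np.sqrt(2))))

# Fabricated observed cumulative deaths for one location.
t_obs = np.arange(30, dtype=float)
rng = np.random.default_rng(1)
y_obs = cum_deaths(t_obs, K=500, t_peak=25, scale=8) + rng.normal(0, 5, 30)

# Fitting selects one member (K, t_peak, scale) out of the family.
(K_hat, t_peak_hat, scale_hat), _ = curve_fit(
    cum_deaths, t_obs, y_obs, p0=[400, 20, 5]
)
```

            The fragility shows up when a fit like this is extrapolated far beyond the observed window, which is exactly where the longer-term forecasts come from.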

          • jim says:

            “Decisions have to be made somehow.”

            True but in the case of total uncertainty dressing that uncertainty up with bad models isn’t a benefit; it could just as well distort the outlook and mislead people into doing the wrong things.

            Aside from the fact that significant decisions are being delayed pending the results of unreliable models.

            • Phil says:

              Yes, but this. And true, but that. And on the other hand, this other thing. And yet: decisions have to be made somehow.

              I’m not arguing for any specific model, or for any family of models, or indeed for formalized models at all.

              A decision requires a comparison between what you think would happen if you do A vs what you think would happen if you do B (or C, or D…), and that comparison requires a model. That model can be formal or informal, explicit or implicit. It may or may not involve an uncertainty estimate. It might come from eyeballing a curve on a plot, or using ‘common sense’, or asking Jimmy the truck driver what he thinks will happen if we choose A vs B, but somehow that comparison must be made.

              In other words, I’m making a point that I would have thought was so trivial and obvious that it wouldn’t have to be made: you cannot make a decision without a model. You can pick which model you want to use, but you have to use something.

              • jim says:

                Apologies, Phil, I foolishly presumed you were specifically referring to mathematical/quant modelling. I agree: any conceptual idea of anything is a “model”.

  6. Zhou Fang says:

    The current chart I’m seeing on their page has “probability peak has already passed” at 99%.

    Something has clearly gone terribly wrong.

  7. Dale Lehman says:

    If you look at their predictions for Alaska, a couple of strange and worrying things appear. First, the death data is so sparse that I don’t think they have any business making any projections. Of course, once you have a model you can always make predictions. But, in my mind, it lessens their trustworthiness when they go ahead and provide forecasts anyway. Second, the Alaska graphs show different values for actual deaths on April 5. The first graph seems to show actual deaths of 1, and the second graph (illustrating potential uncertain forecasts) appears to show actual deaths of 3 on that day. I don’t understand why the actual data varies between the 2 graphs. Another red flag for me.

    More seriously, as more and more models are rolled out, I am getting overwhelmed. Too little time to try to distinguish the wheat from the chaff (if that is the right metaphor). I’m overwhelmed with even how to decide which models to bother with – reputation, degrees, titles of the authors? Decide on the basis of who does the best job of providing their data and methods? There seem to be almost as many criteria for choosing as there are models.

    Part of the dilemma may be resolved by considering how the models may be used. Despite inadequate data, decisions must be made about how much to shut down, when and how to relax shut-downs, etc. I think it is better to make these decisions based upon some modeling rather than none. So, then the political question becomes: whose model will be used to help make these decisions? How are policy makers choosing which model(s) to use? Are they in a better position than me to make these choices (my guess is that Texas politicians will be likely to use a model produced by reputable Texas academics, provided that the model results comport with their political wishes).

    I started playing with modeling almost 2 months ago but stopped awhile back. Then I started playing with trying to evaluate other people’s models, but have since given up. Then I started reading other people’s evaluations of the models, but I’m giving up on that as well. What would be your advice, as the number of models appears to be exceeding the number of reliable data points?

    • Terry says:

      I started playing with modeling almost 2 months ago but stopped awhile back. Then I started playing with trying to evaluate other people’s models, but have since given up. Then I started reading other people’s evaluations of the models, but I’m giving up on that as well.

      I share your pessimism about these models; indeed, I am pessimistic about almost all analyses of what is going on. Reading the comments on this blog has convinced me that the data is just too horrible and the uncertainties are too huge.

      The only reasonable approach seems to be eyeballing plots of excess deaths. I don’t understand why we see so little of that.
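      For what it’s worth, the eyeball version of excess deaths is easy to make explicit: subtract a five-year-average baseline from this year’s all-cause counts. A toy sketch with fabricated weekly numbers:

```python
import numpy as np

# Fabricated weekly all-cause deaths: five baseline years and the
# current year, four weeks each (numbers invented for illustration).
baseline_years = np.array([
    [1000, 1020,  990, 1010],   # year -5
    [1005,  995, 1015, 1000],   # year -4
    [ 998, 1010, 1002,  995],   # year -3
    [1012,  990, 1008, 1003],   # year -2
    [1001, 1004,  997, 1011],   # year -1
])
current = np.array([1010, 1150, 1400, 1350])   # this year

expected = baseline_years.mean(axis=0)   # five-year average, per week
excess = current - expected              # weekly excess deaths
total_excess = excess.sum()
```

      The virtue of this calculation is that it sidesteps cause-of-death attribution entirely, which is exactly why it dodges the worst of the data problems.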

      • Brent Hutto says:

        Doing a curve-fit model (assuming it’s a good curve-fit model and not one with obviously flawed methods) is fine for what it is. You keep adding data as it comes in, your short-term extrapolations are probably reasonably good but also of limited use (because they aren’t providing much information beyond the raw data). Longer-term extrapolations are very questionable. But to me, when something is presented as a curve-fitting extrapolation, I know what I’m dealing with and not to read too much into it.

        It’s the attempts to do highly mechanistic multilevel models in the almost complete absence of reliable data at any of those levels that strikes me as spitting into the ocean. Any model like that at this point in time had better show an uncertainty interval that’s so wide as to be almost laughable. If it doesn’t, then I think the modelers are reaching way beyond their data.

    • Ben says:

      A curve fitting model seems like a nice utility to plot data.

      Like the tables at https://covidtracking.com/data are nigh-overwhelming. Get on the plotting machine folks!

      A few things I like about these plots:
      1. Data points aren’t connected with lines
      2. Can choose to do log scale or not
      3. Daily vs. cumulative stuff
      4. Plotting individual predictions and intervals

      I guess a couple things missing that might be helpful would be:
      1. A way to look at predictions made with data only up to a chosen day (I know this isn’t quite the same as making predictions from today — but it’d give us a way to think about the Alaska data and this model, for instance, though I guess we should kinda know what to expect from a curve-fitting thing)
      2. Fits of the data itself

      That doesn’t say much about the utility of the plots for epidemiology or policy making, though, I suppose.
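      Item 1 on that wish list (predictions made with data only up to a chosen day) amounts to a rolling-origin backtest. A toy sketch with a made-up series and a hypothetical stand-in model; any real model could be slotted into `forecast`:

```python
import numpy as np

# Made-up cumulative-death series.
y = np.array([10, 14, 20, 28, 40, 56, 75, 95, 110, 120], dtype=float)

def forecast(y_train, horizon):
    """Hypothetical stand-in model: fit log-linear growth to the last
    four points and extrapolate. A real model would go here."""
    t = np.arange(len(y_train), dtype=float)
    slope, intercept = np.polyfit(t[-4:], np.log(y_train[-4:]), 1)
    t_future = len(y_train) + np.arange(horizon)
    return np.exp(intercept + slope * t_future)

# Rolling origin: pretend "today" is day `cutoff`, forecast the next
# 3 days using only data up to the cutoff, then score against reality.
errors = []
for cutoff in range(5, 8):
    pred = forecast(y[:cutoff], horizon=3)
    actual = y[cutoff:cutoff + 3]
    errors.append(float(np.abs(pred - actual).mean()))
```

      Plotting `errors` against the cutoff day would show whether the model’s short-range accuracy is stable over time.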

      > Second, the Alaska data shows different data for actual deaths on April 5. The first graph seems to show actual deaths of 1, and the second graph (illustrating potential uncertain forecasts) appears to show actual deaths of 3 on that day. I don’t understand why the actual data varies between the 2 graphs. Another red flag for me.

      Oof, that’s quite a problem. It shows up for me too. This e-mail address was in the tech report: utpandemics@austin.utexas.edu . Maybe just shoot them a note?

    • jim says:

      Dale Lehman says:

      “I think it is better to make these decisions based upon some modeling rather than none.”

      I don’t.

      The data is highly suspect and the number of assumptions – whether they are explicitly coded for or not – is huge, so the potential for bias is extraordinary.

      The “indecisiveness” of the models is supporting political indecisiveness that’s exacerbating social impacts. Political leadership is waiting for data (Oh, if we just had better data we could get it right!!) and models to tell them what to do, rather than planning for reasonable contingencies and making the necessary decisions to mitigate the situation.

      There’s nothing wrong with modeling, but there’s a lot wrong with using these models for policy, and a lot wrong with releasing them without appropriate caveats.

      • Dale Lehman says:

        I am inclined to agree with your comments. But it really shows up how almost all of our institutions have failed. Ultimately, in the absence of reliable data and models, it should be a political/social/ethical decision. And it is. But what faith do I have in our institutions (Congress, elected officials, community organizations, etc.)? I’ll treat that as a rhetorical question rather than committing myself to an answer.

        • jim says:

          I see your POV. But I guess my tendency is to put the responsibility on the people who are elected and therefore directly accountable. If they can’t handle it, the electorate can rid us of them (fingers crossed).

        • Ben says:

          Maybe this is a pedantic point, but looking at any of these numbers involves some level of modeling. Like, looking at state level deaths vs. national deaths, or young deaths vs old deaths.

          I could see trying to get away from predictions, or trying to stop generalizing from Italy to the US. The difference between now and 1.5 months ago is there’s a bunch of data from the US.

          Sure there are things like the positive/negative test rates that are kinda meaningless (especially in NYC), but the data about deaths and hospitalization (how many, who) should mean something (and putting that in tables and plots and shipping it around and whatnot involves modeling).

      • Brent Hutto says:

        Or put another way, the Powers That Be are demanding data on which to base their enormously costly (in either direction) policy decisions. When no data is forthcoming, they accept dodgy models as a surrogate for data.

        Big mistake.

  8. Carlos Ungil says:

    If you want to see how the IHME forecasts have changed across releases (European countries and US states): http://ungil.com/ihme.html

  9. Ernie says:

    This is not my area of expertise, but it looks to me like the SafeGraph cell phone data is very important for this model. For instance, it seems that if the cell phone data were not informative at all, the model would effectively fit a single scaled Gaussian kernel across all times and areas. (i.e., in Equation 5, the learned Gamma would be a 0 matrix and the learned mu would be the single set of Betas.)

    If that’s right, that seems sub-optimal. Even if the cell phone data is somewhat informative, it doesn’t seem right for the model to lean on it so heavily. I’d think just the recent trajectory of death counts for a particular area would allow for reasonable death count estimates for a few days into the future, and I don’t see how this model can take advantage of that.

    But, as I said, this isn’t my area of expertise, so it’s extremely possible that I’m missing something.

    • Ernie says:

      I should have said, “I don’t see how this model can _directly_ take advantage of that.” IIUC, it can only take advantage of the recent trajectory to the extent that it’s reflected in the cell phone data.
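      If I’m reading that right, the structure is roughly beta_s = mu + Gamma x_s for each area s, with x_s the cell-phone covariates. A toy numpy sketch of that reading (my notation, not the paper’s; all numbers fabricated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dimensions: 5 areas, 3 curve parameters, 2 cell-phone covariates.
n_areas, n_params, n_cov = 5, 3, 2
mu = np.array([2.0, 30.0, 8.0])                  # shared curve parameters
Gamma = rng.normal(0.0, 0.5, (n_params, n_cov))  # covariate loadings
X = rng.normal(0.0, 1.0, (n_areas, n_cov))       # covariates per area

beta = mu + X @ Gamma.T          # area-level curve parameters

# Ernie's limiting case: uninformative cell-phone data drives Gamma to 0,
# collapsing every area onto the single shared parameter vector mu.
beta_collapsed = mu + X @ np.zeros_like(Gamma).T
```

      Which makes the point concrete: with Gamma at zero every area shares one curve, so any area-to-area variation has to flow through the covariates.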

  10. Carlos Ungil says:

    From a bunch of physicists (it seems): “Total COVID-19 Mortality in Italy: Excess Mortality and Age Dependence through Time-Series Analysis”

    “We perform a counterfactual time series analysis using two different Data Science methods applied to 2020 mortality data reported from towns in Italy, with data from the previous five years as control. We find an excess mortality that is correlated in time with the COVID-19 reported death rate time series. Our analysis shows good agreement with reported COVID-19 mortality for age<70 years, but an excess in total mortality increasing with age above 70 years, suggesting there is a large population of predominantly old people missing from the official fatality statistics. We estimate that the number of COVID-19 deaths in Italy is 52,000 ± 2000 as of April 18 2020, more than a factor of 2 higher than the official number. The Population Fatality Rate (PFR) has reached 0.22% in the most affected region of Lombardia and 0.57% in the most affected province of Bergamo, which constitutes a lower bound to the Infection Fatality Rate (IFR). We estimate PFR as a function of age, finding a steep age dependence: in Lombardia (Bergamo province) 0.6% (1.7%) of the total population in age group 70-79 died, 1.6% (4.6%) in age group 80-89, and 3.41% (10.2%) in the age group above 90. We combine this with the Test Positivity Rate to estimate the lower bound of 0.84% on the IFR for Lombardia. We observe IFR to trace the Yearly Mortality Rate (YMR) above 60 years, which can be used to estimate the IFR for other regions in the world. We predict an IFR lower bound of 0.5% for NYC and 26% of total COVID-19 mortality arising from the population below 65 years, in agreement with the existing data and several times higher than Lombardia. Combining PFR with the Princess Diamond cruise ship IFR for ages above 70 we estimate the infection rates (IR) of regions in Italy, which peak in Lombardia at 23% (12%-41%, 95% c.l.), and for provinces in Bergamo at 67% (33%-100%, 95% c.l.). This suggests that Bergamo may have reached herd immunity, and that the number of infected people greatly exceeds the number of positive tests, by a factor of 35 in Lombardia.”

    https://www.medrxiv.org/content/10.1101/2020.04.15.20067074v2.full.pdf

  11. Peter Dorman says:

    My concern with curve-fitting of the IHME variety is that it appears to assume what needs to be demonstrated, that infections and deaths are monotonically increasing up to a peak and monotonically decreasing after it. Clearly there will be spikes based especially on changes in policy, but that’s not the same as a cumulative curve with a single inflection point on its way to an asymptote. If we were rapidly approaching prevalence in the vicinity of herd immunity (say 80%), then yes we could reason backward from that asymptote, but it seems instead we have a far smaller prevalence amid a very large pool of susceptibles. In this case why couldn’t new infections approximately plateau and remain elevated for a considerable time?

    To put it differently, if you look at Italy, for instance, there is a large spike associated (with a lag) with the transition from insufficient to tight policy. The dropoff after the peak reflects this. But why should we assume this drop will continue down to zero? Maybe there’s a plateau ahead corresponding to the degree of policy now in place, with the potential for subsequent spikes as the guard is let down?

    To be clear, I’m not saying my alternative scenario is demonstrated in the data, only that I don’t see why it helps to assume one pattern (the IHME’s curve-fitting) is the framework rather than the other or — my preferred — we just don’t know at this point.

    ps: I suppose we would have a basis if credible data from Wuhan showed steady progress toward zero or negligible new infections, but do we have that?

    • Steve says:

      + 1 I think we have credible data from Singapore that there will be second (third, fourth) waves of infection. We have no reason to believe that we will continue on a path to zero. Also, even if the Wuhan data were taken naively, their lock-down was much more “rigorous.” They also tested, traced and isolated (not just a gentle suggestion to “self-quarantine”) all positive results. Finally, it continued longer. They are only easing up now, while we are discussing getting back to work. It’s crazy.

      • Brent Hutto says:

        It’s not crazy. It’s necessary so that the economy doesn’t crater completely.

        I completely agree that everywhere is going to see multiple waves of this thing. And unless a miracle produces a vaccine that will probably continue for many years into the future.

        Public policy can not afford to be “Keep a third of the economy shut down until there are zero COVID-19 deaths”. That day will quite literally never arrive. If it’s not crashing a region’s health care system and it’s not killing more people than other leading causes of death then it is irresponsible not to start resuming normal economic activity.

        Maybe some places are doing it a week or a month too soon. Maybe some will wait a week or month too long. But an indefinite shutdown is not among the available options. We’re quibbling over a judgement call.

        • Phil says:

          The shutdown bought us time. Actually that understates it. The shutdown stopped the rest of the country from experiencing the tragedies experienced in New York (and in Spain and northern Italy), with sick people dying because there wasn’t enough medical help available, and health care workers dying because of the sheer magnitude of the viral load they were facing plus the fatigue of spending hour after exhausting hour trying to save lives.

          But Brent is right that it’s not feasible or even possible to simply outwait the virus, remaining in lockdown until it goes away completely. Not going to happen.

          That said, I favor a late and gradual restart, and not, I think, just because I’m one of the privileged people who can afford the economic hit. Two things happen with every day we wait: the number of people who are actually infected goes down (assuming we are past the peak…I guess that’s a tautology), and the capacity to deal with COVID-19 victims increases. We are (collectively) getting results on whether chloroquine helps, whether megadoses of Vitamin C help, whether smokers or asthmatics or people with COPD are at elevated risk of infection or death, and on and on. We are restocking facemasks and face shields and medical gowns and building new ventilators. We are working on a vaccine. And perhaps most important of all, we are developing the capacity to do widespread, rapid testing.

          Yes, I know, we are also losing our jobs and running up debt and suffering from social isolation and loneliness. There are big, big negatives of the shutdown, too. But I think that at the moment the benefits of one more day of shutdown outweigh the costs, and I think I’ll feel that way tomorrow too, and the day after that. And I think there’s a big asymmetry: loosening restrictions a day too early is worse than loosening a day too late. I don’t always think it’s better to err on the side of caution, but in this situation that’s the way to go.

      • Echoing Bruno: The original (correct) motivation for lockdown was to “flatten the curve,” noting that the area under the curve is fixed (barring vaccines, which won’t come rapidly). We seem to have forgotten this, for reasons I don’t understand, and “discourse” (to be kind) is polarizing between the idiotic “don’t do anything” and the almost as bizarre “make the infection rate zero forever.” The goal of a lockdown is to manage the spread in the best way possible, which requires acknowledging the heterogeneity of the disease, putting resources (much more than we’re doing) into helping the hardest hit groups, etc.

        • Brent Hutto says:

          I live in an area where, if you believe IHME and/or your own eyeballs looking at the daily raw counts, our rate of deaths has been flat for a week now and the new-positive-test rate has declined for a few days in a row. So we are at or past the peak of this “wave”.

          Neither our hospital bed utilization nor ICU utilization for COVID-19 cases ever exceeded 20% (I don’t think it even exceeded 10%, except very briefly), and the PFR is currently sitting at .003% or less.

          At this point, any continuation of extensive shutdown measures is purely political. There is no remaining “flattening the curve” rationale. But surely as day follows night, there are starting to be calls for waiting until some vague later date when either there are no new cases or there’s a vaccine or we magically gin up a huge Public Health workforce to do South Korea levels of testing/tracing/segregation of new cases.

          The “flattening the curve” thing is beginning to look quaintly nostalgic.

        • Steve says:

          Raghuveer writes: “for reasons I don’t understand, and “discourse” (to be kind) is polarizing between the idiotic “don’t do anything” and the almost as bizarre “make the infection rate zero forever.”

          To clarify, what I was referring to as crazy is the view that we have flattened the curve and can return to normal after a few days of positive numbers. There will be outbreaks if we let up. This pattern will repeat until we reach herd immunity, which requires half the population to be infected and will cost hundreds of thousands of lives or more.

          It is not “bizarre” to want to make the infection rate zero forever. The only thing bizarre is that seemingly educated people think that the goal of completely containing the virus (which is the goal in every infectious disease outbreak) is somehow unattainable. We have an Ebola outbreak now, and it is being contained in Central Africa with the goal of zero infections, which they are going to achieve. Maybe you think, “Well, that’s Central Africa, we don’t have their medical expertise or resources.” Maybe you think, “Testing, tracing and isolating everyone with the virus would take a small army.” True.

          I remember a few years ago, walking to work, looking up at a building on fire, and hearing a woman scream that we were under attack. It was 9/11, of course, and what did we do? In short order, we overhauled security at all of our airports (it was virtually overnight), we overhauled our financial system to track terrorist funding (coordinating internationally), we made Libya give up nuclear weapons, we deployed an army all over the world, etc. That was for approximately 2,700 deaths. Nobody pointed out that that was fewer people than the flu kills. Today, the loss of thousands of Americans is a daily event.

          The idea that we could not completely contain this virus if resources were properly deployed and coordinated is laughable. It would cost considerably less than we spent in response to 9/11. You could hire large numbers of out-of-work people, give them training, and have them run all over testing, tracing contacts, and setting up isolation housing for the exposed. That would employ people and bring a quicker end to the virus with less economic pain. We are collectively making a choice to let people die because we have leaders who don’t know how to do difficult things and don’t care. We are spending trillions to bail out industry that could just use bankruptcy laws to properly deal with its losses, while we could spend billions to eradicate the virus, save lives, and bail out individuals.

          • Brent Hutto says:

            Did we shut down a sizable chunk of the global economy until we drove terrorism to zero after 9/11? Or did we briefly reel in horror, pause to mourn, then get back to normal life while attempting to address terrorism as prudently as possible?

            I agree with Raghuveer’s characterization of your desire to shut down until COVID-19 is eradicated as “bizarre”.

            • Steve says:

              Brent writes: “I agree with Raghuveer’s characterization of your desire to shut down until COVID-19 is eradicated as “bizarre.”

              That is not a serious or honest response to anything I wrote. I never said we should shut down until the disease is eradicated. In fact, the only reason that we have had to shut down at all is because of the mishandling of the crisis. Hong Kong, Singapore, S. Korea, Taiwan, and New Zealand have had far less disruption to their daily lives and far less death because they tested and isolated. Even the President’s own guidelines state that social distancing measures should not be let up until we see 14 days of a downward trend. If proper testing and isolation were put in place, we could reopen immediately. But they are not.

              With regard to 9/11, no, we never returned to “normal” (pre-9/11) life. The entire security apparatus changed. As I said, security around airports permanently changed. As just one example, virtually every office building in New York has an extra layer of security that wasn’t there before. There need to be permanent changes as a result of COVID-19 too. After SARS-1, Hong Kong cleaned up considerably. Every time I travelled through the Hong Kong airport there were temperature guns. All children’s temperatures were measured, along with a random sample of everyone else’s. None of these things are happening in NYC yet, and it’s April. You can’t even get temperature guns, a minimal measure to make a return to work safe. Why?

            • Phil says:

              The analogy to terrorism is completely specious.

              To try to take it as seriously as possible, though: If we imagine that somehow the deaths from terrorism were increasing exponentially, doubling every few days, and that the only way to stop that from happening was to shut down much of the economy, then yes, we would have done that.

              • Steve says:

                Exactly my point. In the past, we have taken even non-exponential threats more seriously than some people are taking this threat.

      • jim says:

        I’m not clear on why we should expect more than a few waves of infection. My understanding is that there were only three waves of infection in the 1918 pandemic, all occurring within a year, and that typically the virulence of pandemic pathogens moderates relatively quickly due to selective pressure.

        • Brent Hutto says:

          I sure hope you are right.

          There may have been only three waves of the specific H1N1 variant that caused the 1918 pandemic, but not-too-distant relatives have killed people many times since.

          If we’re fortunate, the descendants of COVID-19 in circulation three or four years hence will have lost their appetite for killing old people and those middle-aged but vulnerable due to comorbidity. But I expect something very like COVID-19 will be circulating for a very long time.

        • Steve says:

          Jim writes: “I’m not clear on why we should expect more than a few waves of infection.”

          We should expect it because everything on earth that replicates does so at an exponential rate until it reaches the carrying capacity of its environment. This is the basis of Darwin’s theory of evolution. We don’t have any reason to believe some portion of the population is already immune or that the virus will magically mutate into a less infectious or deadly version. Sure, that could happen. But basically anything that replicates doubles until it can’t. Polio is still spreading despite decades of work, though only in pockets of the population without herd immunity and at a much slower rate, and it is no less debilitating than it was in 1940. Measles is still spreading (and killing) despite enormous efforts to contain it. Left unabated, this virus, like all viruses, will spread through about half the human population and then slowly spread to the rest over time.

          • jim says:

            “We don’t have any reason to believe [1] some portion of the population is already immune or that the virus [2] will magically mutate into a less infectious or deadly version. “

            [1] Some estimates say 25% of the population may be asymptomatic.

            [2] Yes, there could be selective pressure for a less virulent form: dead people don’t propagate the virus while asymptomatic people do. In evolutionary terms, the virus doesn’t get any benefit from being deadly or disfiguring unless that directly helps it transmit, so there certainly isn’t any selective pressure for it to become more virulent.

            I recognize that there have been continual outbreaks for many diseases. But the 1918 pandemic doesn’t seem to have produced waves of equally virulent outbreaks in subsequent years, so it’s not required.

  12. Jeff Zimmerman says:

    I believe they are making one major mistake: they are forcing nation-wide changes onto states. Each state/region needs its own model.

    I’ve been tracking KS and the eight surrounding states.

    https://twitter.com/jeffwzimmerman/status/1252664140407201793

    While there looked to be a possible flattening, the whole region is headed up … at the highest rate seen. I wish these tools would allow for multiple states to be selected.

  13. Jonathan (another one) says:

    A nice paper (from economists) discussing the modeling problem in general, with links to the models and to the data: https://www.nber.org/papers/w27007.pdf

  14. mbn says:

    There’s been a lot of talk about decision makers. I’ll make the point that different decision makers need different things. For example, if I’m making a decision on how many beds to prepare, I’ll ask for a 95% confidence interval of my bed needs and try to prepare for the upper end. If I am in charge of movement restrictions, I’ll want a prediction based on the status quo, a prediction based on voluntary compliance, and maybe a couple of levels of tightened restrictions. This will vary by state, as will the economic and other consequences (such as being returned to government next cycle). I’ll assert* that voluntary compliance would vary by state, which is also related to the economic consequences and to predominant values and voting intentions. A single prediction that is independent of both action and likelihood of compliance is not helpful for that type of decision maker.

    *Based on experience living in a rural area of a midwestern state, working with both very low and very high income individuals, where the war sometimes known as the Civil War is also known (more commonly?) as the War of Northern Aggression.

  15. James Scott says:

    E-mailed Andrew some things after he drew my attention to this post. Posting them here as a comment as well.

    Read the comments and a number of fair points raised. Appreciate your drawing my attention to them, and also appreciate the time your readers have taken to weigh in (even skeptically). Some brief clarifications:
    – we do assume that log E(y_it) is locally quadratic (and thus presumably locally log-concave, i.e. it will eventually turn around; the log-concavity isn’t baked in as a shape restriction).
    – we don’t regress on state-level policies. This is a big problem with the IHME model, because it’s a linear regression on policy timing. Moving further away in time from the date of policy implementation even more aggressively forces the parameters of the locally log-quadratic approximation to take a nose dive. On past projections this has resulted in some heroically optimistic guesses that didn’t pan out (see Italy, Spain in our writeup).
    – instead we regress on observed social-distancing behavior. It saturates at a reasonably low level in most states, but the timing matters a lot for forecasting deaths. The saturating vs nonsaturating nature of our covariates vs IHME covariates is an important difference.
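    In case it helps to make the "locally log-quadratic" idea concrete, here is a minimal sketch on synthetic data. This is an illustration only, not the consortium's actual code (which is a Bayesian model fit in RStan with social-distancing covariates); the data and parameter values below are made up.

```python
import numpy as np

# Synthetic daily death counts for one state (made-up numbers).
t = np.arange(30)
deaths = np.round(5 * np.exp(0.25 * t - 0.004 * t**2)).astype(int)

# Locally quadratic approximation: log E(y_t) ~ b0 + b1*t + b2*t**2.
# A negative b2 means the fitted curve bends over and eventually turns around;
# the sign is estimated from the data, not imposed.
b2, b1, b0 = np.polyfit(t, np.log(deaths + 1), deg=2)

# Implied peak day: the vertex of the fitted quadratic.
t_peak = -b1 / (2 * b2)
print(b2 < 0)
```

The point of the sketch is that curve fitting on the log scale lets the data determine when the curve turns around, rather than a regression on policy timing forcing it to.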

    It’s just a curve-fitting approach. We feel it’s a reasonable guess as to what will happen over the next two to three weeks, because most of the social distancing behavior relevant to predicting deaths over that time period has already been observed (best published guess: a 23.5-day average lag between infection and death).

    For what it’s worth, we ran a two-week backtest and beat the IHME model considerably (about a 40% reduction in mean-absolute forecasting error for daily deaths). Our 95% prediction intervals trained through April 2 covered 98% of the data from April 4–16. This is all going into a second version of the writeup you see on the website. Including the cell-phone social-distancing covariates makes a large improvement in forecasting error (about 40%). Not using covariates in our model gives performance roughly comparable to the IHME model, which uses policy as a covariate and estimates the model in a non-standard way.
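    The two backtest summaries above (mean absolute error and interval coverage) are simple to compute; a toy sketch with invented numbers, not the actual backtest data:

```python
import numpy as np

# Made-up daily death counts and two competing point forecasts.
actual     = np.array([50, 62, 71, 80, 77, 69, 58])
forecast_a = np.array([48, 60, 75, 83, 80, 66, 55])
forecast_b = np.array([40, 50, 58, 66, 70, 80, 90])

# Mean absolute forecasting error for daily deaths.
mae_a = np.mean(np.abs(actual - forecast_a))
mae_b = np.mean(np.abs(actual - forecast_b))

# Empirical coverage of a 95% prediction interval [lo, hi]:
# the share of actual values that fall inside the interval.
lo, hi = forecast_a - 15, forecast_a + 15
coverage = np.mean((actual >= lo) & (actual <= hi))

print(mae_a < mae_b)
```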

    To me it seems reasonable to borrow information from states further along in the curve to those less far along, at least insofar as how social distancing bends the curve. We picked covariates for which the epidemiologists believed this to be approximately true. E.g. going to a Wegmans in NY is qualitatively similar to going to an HEB in Texas. You’ve seen distance traveled from cell phones as an SD metric as well, but a mile of distance traveled in NY is probably not qualitatively similar to a mile of distance in Texas. One’s walking or subway, the other’s in an F150. So we didn’t use that metric.

    Our approach definitely inherits many of the flaws of the IHME model: it’s just curve fitting, it can’t be used to reason counterfactually, and it’s certainly not useful in the event that social distancing restrictions are lifted. But it’s a local improvement to the model in the sense that it doesn’t inherit ALL its problems. Since people seem to actually be quoting the IHME model, we wanted to provide something better in some obvious ways.

    We’re all hands on deck trying to fix issues with the website and improve the model as rapidly as humanly possible.

    The point you raise about growing vs shrinking error bars is an interesting one. Some error bars widen over time, some narrow. I can make a reasonable guess about the weather tomorrow, but not next month (widening errors). I can’t tell you whether the Yankees will beat the Orioles on opening day, but I can make a reasonable guess they’ll finish ahead in the standings after 162 games. I guess whether you _qualitatively_ believe our error bars or IHME’s (completely aside from the statistical points we raise) comes down to whether you think predicting deaths is more like the weather or more like a baseball season. We and the IHME have different answers, at least on the time scale of a few weeks.

    Also, the point about the US P(peak past) seeming too high — so much of the contribution of the US peak is the NY and NJ peaks. Roughly a quarter of all deaths in the US two days ago were in NY, and NY seems clearly to have peaked in first-wave deaths.

    Our US posterior predictive draws are just summing up the state posterior predictive draws.
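    Mechanically, that aggregation looks something like the following sketch. The draws here are simulated placeholders (Poisson noise around an arbitrary flat level), purely to show the bookkeeping; the shapes and numbers are not the consortium's output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_states, n_days = 1000, 50, 21

# Stand-in posterior predictive draws of daily deaths, per state.
state_draws = rng.poisson(lam=20.0, size=(n_draws, n_states, n_days))

# National draws: sum over states within each posterior draw.
us_draws = state_draws.sum(axis=1)            # shape (n_draws, n_days)

# P(peak falls within the first 7 days), estimated as the share of
# national draws whose maximum lands on days 0-6.
peak_day = us_draws.argmax(axis=1)
p_peak_within_7 = (peak_day < 7).mean()
```

Summing within draws (rather than summing summaries) is what keeps the national uncertainty intervals consistent with the state-level ones.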

    • Dale Lehman says:

      What about the Alaska case: forecasts where most days have either 0 or 1 deaths, and the April 5 anomaly where you have two graphs, one showing 1 death on that day and the other showing 3 (both presumably “actual” values)? Is that just an isolated case, or are there others if we search the states?

    • Zhou Fang says:

      Maybe so, but I think dropping the probability from 11% to <1% on the basis of a single day's worth of data shows a disturbing amount of sensitivity that looks like overfitting. If you fit only to US data, I wonder if that has a negative impact, because the US doesn't offer many examples of behaviour beyond the exponential-growth phase – you miss out, for example, on Wuhan's case, where a few days of lowered deaths in mid-February were followed by the numbers going back up.

      I don't know to what extent your model captures the possibility of other locations in the US becoming new infection centers.

  16. Fabian says:

    Do they take the weekly pattern into account? The reported deaths within a week seem to decrease from Monday to Sunday (see the last graph, especially the last two weeks) and make a jump on Mondays.
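    One way to check for a reporting artifact like this is to estimate day-of-week effects directly. A hedged sketch on synthetic data (the reporting pattern below is invented to mimic the weekend-dip, Monday-jump shape, not taken from the actual series):

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 70
dow = np.arange(n_days) % 7                   # 0 = Monday, ..., 6 = Sunday

# Synthetic series: a flat true level of 100 deaths/day, with weekend
# reports running low and the backlog landing on Mondays (made up).
reporting = np.array([1.25, 1.0, 1.0, 1.0, 0.95, 0.85, 0.80])
reported = rng.poisson(100 * reporting[dow])

# Multiplicative day-of-week effect: weekday mean over overall mean.
# Dividing the series by these effects would roughly flatten the pattern.
weekday_mean = np.array([reported[dow == d].mean() for d in range(7)])
effect = weekday_mean / reported.mean()
```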

  17. Donald says:

    Some guys in Spain are sceptical about overconfident predictions. They argue (if I’m not wrong; see the manuscript here: https://arxiv.org/abs/2004.08842) that exponential growth is exponential growth in the end, and uncertainty grows exponentially too. There is a nice figure in the paper where they generate mock data and then do Bayesian inference (lognormal?). They see that even with perfect data you cannot tell where the epidemic is headed. Actually, with the same data, you can predict both the success and the failure of confinement measures.

    I like that they end up suggesting that we should be happy to predict the epidemic like the weather: take a lot of data and be humble about what you can say in a week.

    They claim that they are not selling the model, so I’m not sure whether the message is deep or trivial. Any thoughts?

    PS: A lot of mathematicians are going to make them an offer they can’t refuse…
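    The exponential-uncertainty point can be seen in a few lines: a small uncertainty in the growth rate produces an uncertainty in cumulative counts that itself grows exponentially with the forecast horizon. A toy Monte Carlo illustration (my own numbers, not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims, horizon = 10_000, 30
n0 = 100.0

# Growth rate known only up to a small error: r ~ Normal(0.20, 0.02).
r = rng.normal(0.20, 0.02, size=n_sims)

# N(t) = N0 * exp(r * t) for each simulated rate.
t = np.arange(horizon + 1)
paths = n0 * np.exp(np.outer(r, t))

# Relative width of the central 90% band at each horizon: it widens
# roughly like exp(const * t), so forecasts degrade fast.
lo, med, hi = np.percentile(paths, [5, 50, 95], axis=0)
rel_width = (hi - lo) / med
```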
