
Jordana Cepelewicz on “The Hard Lessons of Modeling the Coronavirus Pandemic”

Here’s a long and thoughtful article on issues that have come up with Covid modeling.

Jordana’s a staff writer for Quanta, a popular science magazine funded by the Simons Foundation, which also funds the Flatiron Institute, where I now work. She’s a science reporter, not a statistician or machine learning specialist. A lot of Simons Foundation funding goes to community outreach and math and science education. Quanta aims to be more like the old Scientific American than the new Scientific American; but it also has the more personal angle of a New Yorker article on science (my favorite source of articles on science because the writing is so darn good).

There’s also a film that goes along with Jordana’s article.

I found the comments on YouTube fascinating. Not as off the wall as replies to newspaper articles, but not the informed stats readership of this (Andrew’s) blog, either.


  1. jim says:

    “we respond to the model’s predictions and prove them wrong”

    To the extent that our behavior changes the outcome of a sequence of events, humans are mostly reacting to information obtained through the many channels we have to understand our world, and almost certainly *not* reacting to scientific model projections. :)

  2. James Annan says:

    An interesting mishmash of misdirection and excuses.

    The onset of a pandemic is largely characterised by a single parameter, the doubling rate. This is clearly identifiable in numerous data sets. The modellers simply didn’t do the elementary task of estimating this parameter and as a result (in the UK at least) grossly underestimated the problem through Feb and March. This doesn’t require advanced modelling skills, it takes nothing more than a spreadsheet or even a bit of easy mental arithmetic. Just fit a line on a log plot. That’s literally all it takes, and adequate data were readily available.

    This isn’t the only mistake they made, but it’s probably the most glaring one.
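[The estimate Annan describes — fit a straight line to log(cases) vs. time and read off the doubling time — can be sketched in a few lines of Python. The case counts below are synthetic (25% daily growth); real surveillance data would be noisier:]

```python
import math

# Hypothetical daily case counts growing 25% per day.
cases = [100 * 1.25 ** t for t in range(14)]
days = list(range(len(cases)))

# Least-squares fit of log(cases) against day: the slope is the
# exponential growth rate per day.
n = len(days)
x_mean = sum(days) / n
y = [math.log(c) for c in cases]
y_mean = sum(y) / n
slope = sum((x - x_mean) * (yi - y_mean) for x, yi in zip(days, y)) / \
        sum((x - x_mean) ** 2 for x in days)

# Doubling time follows directly: 2 = e^(slope * T)  =>  T = ln(2)/slope.
doubling_time = math.log(2) / slope
print(f"growth rate: {slope:.3f}/day, doubling time: {doubling_time:.2f} days")
```

[On these clean synthetic data the fit recovers a doubling time of about 3.1 days; the point of the log plot is that exponential growth becomes a straight line whose slope is easy to eyeball.]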

    • Ben Bolker says:

      Responding to this comment, not to the original post.

      I can assure you that I and all of the epidemic modelers I have ever interacted with do indeed know how to estimate a doubling rate. Can you please provide some evidence for your claim? What kind of mistakes do you think modelers made in estimating the doubling rate? And what is your suggested method for dealing with temporal changes in doubling rate due to changes in testing rate, changes in testing bias, changes in fraction susceptible (including effects of population heterogeneity) and changes in behaviour over time due to non-pharmaceutical interventions and individual choices?

    • Rahul says:

      Isn’t this too simplistic?

The doubling rate isn’t some static parameter. Identifying the rate right now is hardly the problem. The issue is predicting how the doubling rate will change in the future.

      If the doubling rate were some static parameter like the half life of radioactive decay then life would be easy. But it isn’t!
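[Rahul’s point can be illustrated with a rolling-window fit on hypothetical data where the growth rate decays as behaviour changes — each window gives a clean estimate, but no single window’s estimate extrapolates forward:]

```python
import math

# Hypothetical case counts whose daily growth rate decays from 25%
# toward zero as behaviour changes: the doubling time is not static.
rates = [0.25 * 0.9 ** t for t in range(28)]
cases = [100.0]
for r in rates:
    cases.append(cases[-1] * (1 + r))

def doubling_time(window):
    """Least-squares slope of log(cases) over the window -> ln(2)/slope days."""
    n = len(window)
    xs = range(n)
    y = [math.log(c) for c in window]
    xm, ym = (n - 1) / 2, sum(y) / n
    slope = sum((x - xm) * (yi - ym) for x, yi in zip(xs, y)) / \
            sum((x - xm) ** 2 for x in xs)
    return math.log(2) / slope

# Successive 7-day windows: the estimated doubling time drifts steadily
# upward, so projecting any one window's estimate would mislead.
for start in range(0, len(cases) - 7, 7):
    t = doubling_time(cases[start:start + 7])
    print(f"days {start}-{start + 6}: doubling time {t:.1f} days")
```

[This is the radioactive-decay contrast in miniature: a half-life is constant, so one fit suffices; an epidemic’s doubling time moves under your feet.]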

      • James Annan says:

        It certainly gets more complicated once we start to change our behaviour. At the outset, however, before people were doing anything on a large scale, the doubling rate was very stable and this is the primary determinant of the timing of the outbreak (and also strongly related to the height of the peak in the absence of mitigation). The lack of urgency in the UK response up to late March was directly based on the advice that the peak was still a couple of months off… even as Italian hospitals were overflowing.

        Of course there’s always some uncertainty, but the log plots (of both cases, and deaths) were all impressively close to straight lines with similar slopes, substantially steeper than the 5-6 day range that the UK modellers were fixated on.

      • jim says:

        Is the doubling rate a fundamental parameter or is it the expression of some other fundamental parameter?

        I saw an interview with Michael Osterholm the other day. While he missed the boat on masks, and although he’s not a modeler, his qualitative predictions have been pretty accurate. His view is that the progression of the pandemic from last winter until now is more or less what would be expected based on seasonal effects of coronaviruses in general and other factors such as the emergence of various mutations.

        A lot has been lost by focusing on the quantitative instead of the qualitative.

        • Ben Bolker says:

          Can you clarify what you mean by a “fundamental parameter”? This is epidemiology (= virology + immunology + ecology + evolution + sociology + political science), so it’s hard to know what would be “fundamental”. The nice thing about the doubling rate is that it’s an easily quantifiable number that can give you reasonable short-term predictions.

          • jim says:

            A “fundamental parameter” is a variable that measures a distinct natural process – as opposed to a number that’s an index or rough representation of a group of unknown processes.

  3. yyw says:

    Has anyone looked at Youyang Gu’s predictions? I stopped tracking covid predictions a long time ago, but this guy seems to be getting some positive attention lately.

  4. dhogaza says:

    Hmmm, apparently he’s started things up again; he had paused earlier. At the point when he said he was going to stop updating, his model was doing quite well.
