Someone pointed me to this post from Cosma Shalizi conveniently using R to reproduce the famous graph endorsed by public policy professor and A/Chairman @WhiteHouseCEA.

Here’s the original graph that caused all that annoyance:

Here’s Cosma’s reproduction in R (retro-Cosma is using base graphics!), fitting a third-degree polynomial on the logarithms of the death counts:

Cosma’s data are slightly different from those in the government graph, but they give the same statistical story.

It’s been a couple of weeks, so I’ll redo the graph, running Cosma’s code on current data (making a few changes in display choices):

Hey, a cubic still fits the data! Well, not really. According to that earlier graph, the number of new deaths should be approximately zero by now. What happened is that the cubic has shifted, now that we’ve included new data in the fit.

Anyway, here’s my real question.

Cosma is using the same x-axis as the U.S. government was using, going until 4 Aug 2020. But where did 4 Aug come from? That’s kind of a weird date to use as an endpoint. Why not, say, go until 1 Sept?

Cosma provided code, so it’s trivial to extend the graph to the end of the month, and here’s what we get:

Whoa! What happened?

But, yes, of course! A third-degree polynomial doesn’t just go up, then down. It goes up, then down, then up. Here’s the fitted polynomial in question:

                             coef.est coef.se
(Intercept)                      5.85    0.06
poly(weeks_since_apr_1, 3)1     17.45    0.54
poly(weeks_since_apr_1, 3)2    -10.94    0.54
poly(weeks_since_apr_1, 3)3      1.20    0.54
---
n = 75, k = 4
residual sd = 0.54, R-Squared = 0.95

The coefficient of x^3 is positive, so indeed the function has to blow up to infinity once x is big enough. (It blows up to negative infinity for sufficiently low values of x, but since we’re exponentiating to get the prediction back on the original scale, that just sends the fitted curve to zero.)
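The blow-up is easy to see in a few lines of code. Here is a sketch in Python rather than R, on purely synthetic data (the series below is a made-up stand-in for the death counts, generated from a cubic with a small positive leading term, so none of the numbers match the real fit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the log death counts (NOT the real data):
# generated from a cubic whose x^3 coefficient is slightly positive,
# so it rises, falls, and eventually rises again.
weeks = np.linspace(0, 9, 75)
true_log = 5.85 + 1.4 * weeks - 0.33 * weeks**2 + 0.015 * weeks**3
log_deaths = true_log + rng.normal(0, 0.3, weeks.size)

# Fit a third-degree polynomial to the logs, as in the post.
coefs = np.polyfit(weeks, log_deaths, deg=3)  # highest degree first

# Within the data the curve looks like a sensible rise-and-fall, but
# extrapolated past the data, the positive x^3 coefficient takes over
# and exp() of the fit blows up instead of declining to zero.
future = np.linspace(0, 22, 200)
fitted_deaths = np.exp(np.polyval(coefs, future))
```

Refit the same model to a truncated version of the series and the sign of `coefs[0]` can flip, which is exactly why the forecast changed so drastically between the two fits above.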

When I went back and fit the third-degree model just to the data before 5 May, I got this:

                             coef.est coef.se
(Intercept)                      5.57    0.07
poly(weeks_since_apr_1, 3)1     17.97    0.52
poly(weeks_since_apr_1, 3)2     -8.54    0.52
poly(weeks_since_apr_1, 3)3     -0.97    0.52
---
n = 63, k = 4
residual sd = 0.52, R-Squared = 0.96

Now the highest-degree coefficient estimate is negative, so the curve will continue declining to 0 as x increases. It would retrospectively blow up for low enough values of x, but this is not a problem as we’re only going forward in time with our forecasts.

Second wave. The model is perfect! And predicts the consequences of reopening too soon (recursively taking into account overconfidence caused by misreading the first part of the graph). Really quite impressive for an Excel function.

brilliant!

Ironically, the extended-range forecast appears consistent with Govt policy – or lack thereof.

> It would retrospectively blow up for low enough values of x, but this is not a problem as we’re only going forward in time with our forecasts.

Not fitting past data is as much of a problem as not fitting future data, with the “advantage” that this inadequacy can be detected ex ante; it suggests that the theoretical basis for the choice of model is not very solid, to put it mildly.

How are people comfortable just throwing some curve fitting at some data and making a prediction based on fit? Don’t we need some explicable underlying mechanism to support why that particular fitted curve is reasonable, other than that it fits?

From basic math, we do know we can fit a polynomial to approximate any arbitrary continuous function. So yeah we can always find a curve that fits some data. But does that meaningless curve actually have any predictive power?

This isn’t true in general. The Stone–Weierstrass theorem requires that the domain be compact and Hausdorff, and indeed *no* compactly supported non-zero function $f \colon \mathbb{R} \to \mathbb{R}$ can be well-approximated in any $L^p$ norm (including $p = \infty$) by polynomials, so the compactness condition on the domain cannot be dropped without replacing it with some other condition.

You can obviously have local approximations of suitably smooth functions, but it’s worth being precise about these things.
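The local-versus-global distinction can be checked numerically. A minimal Python sketch, using a least-squares polynomial fit as a stand-in for the Weierstrass guarantee:

```python
import numpy as np

# On the compact interval [0, pi], a degree-7 polynomial approximates
# sin very well (and Weierstrass says we can do arbitrarily well by
# raising the degree)...
x = np.linspace(0, np.pi, 500)
coefs = np.polyfit(x, np.sin(x), deg=7)
max_err_inside = np.max(np.abs(np.polyval(coefs, x) - np.sin(x)))

# ...but any non-constant polynomial runs off to +/- infinity, while
# sin stays in [-1, 1], so the approximation cannot be global on R.
err_outside = abs(np.polyval(coefs, 20.0) - np.sin(20.0))
```

Inside the interval the error is tiny; at x = 20 the polynomial has already left the neighborhood of [-1, 1] entirely.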

This is all true, but practically speaking any function of time of interest to humans only exists on a compact domain [big_bang,nowish]

I believe Anonymous is commenting as a representative of the Q Continuum and we should all broaden our minds a little….

What kinds of non-Hausdorff data do you got.

I’m reminded of Kuznets (sp?) cycles. And cyclic theory in economics in general.

Basic math: any function can be approximated as a sum of sine waves (or some flavor of cyclic function).

So the economists would look at the data, do a transform, find the largest of the low-order terms, and draw “cycles” of those frequencies. With no concern for historical reality, e.g. they’d do it for Japan, right through WWII. It didn’t matter to them that all Japan’s industry and most of its housing were destroyed in an (historically speaking) instant*. They just drew the graphs, looked at the low-frequency terms, and claimed explanatory power (or that they were doing/saying “meaningful” things, which they were not). Drove the historians nuts. (This was way back in the 1980s.)

*: Impulse-function components in the data need an infinite number of terms to represent as a sum of sines. No one appears to have told the economists this. (My best guess is that everyone here knows this math way better than I do. Sorry for redundancy.)
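The footnote’s point is easy to verify numerically. A sketch in Python with a synthetic series, keeping only a handful of low-frequency terms the way the cycle-hunters did:

```python
import numpy as np

# A unit impulse -- a stand-in for a sudden shock like the wartime
# destruction -- in the middle of a 256-point series.
n = 256
impulse = np.zeros(n)
impulse[n // 2] = 1.0

spectrum = np.fft.rfft(impulse)

# Keep only the 5 lowest-frequency terms: a low-order "cycle" model.
low_order = spectrum.copy()
low_order[5:] = 0.0
reconstruction = np.fft.irfft(low_order, n=n)

# The low-order reconstruction recovers almost none of the spike;
# in the limit you need all the frequencies to represent an impulse.
peak_recovered = reconstruction[n // 2]
```

With the full spectrum the inverse transform recovers the impulse exactly; with five terms the recovered peak is a few percent of the original.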

And here we are 40 years later using even worse approximating functions.

Perhaps you mean Kondratiev cycles? As debunked by Slutsky (1927; reprinted in Econometrica, 1937).

Given that the stated aim was to “help people visualize” what is going on: if you look at the NYT’s graphs, they are just using a 7-day moving average to smooth the data, and those smooths do a reasonable job of displaying overall trends. Seven days makes some sense given the 3–5 days it usually takes to become sick after being infected, the delays in reporting, and the weekend effect in reporting that many people have noticed in many places. More importantly, that kind of simple fit makes it clear you are not predicting the future, just helping to visualize trends in the data.
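The 7-day window works because any day-of-week reporting effects sum to zero over a full week. A minimal sketch with made-up numbers:

```python
import numpy as np

# Synthetic reported counts: a linear trend plus a day-of-week
# reporting pattern (the seven weekday effects sum to zero).
days = np.arange(70)
trend = 100 + 2.0 * days
weekday_effect = np.tile([5, 3, 0, -2, -4, -6, 4], 10)
reported = trend + weekday_effect

# A centered 7-day moving average cancels the weekly pattern exactly
# (and happens to reproduce a linear trend exactly as well).
smoothed = np.convolve(reported, np.ones(7) / 7, mode="valid")
```

A 5-day or 10-day window would leave a residual weekly ripple, which is why 7 (or a multiple of 7) is the natural choice here.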

Personally I am not smart enough to know with any certainty what will happen in the next few months. What I do know is that the worst flu season we have had in a long time saw about 60,000–70,000 deaths, but that was over an entire year and without lockdowns. So this would appear to call into question the analyses that conclude that COVID-19 is “just like the flu”. To have seen the number of deaths in the US that we have seen in roughly four months, with lockdowns, there must be something in the dynamics of COVID-19 and how it affects people that differs. As for policy, one epidemiologist I read a while ago put it very clearly: roughly, “the virus doesn’t travel, people travel”.

I read in another blog that the 60-70K flu-deaths figure was an estimate and that actual reported flu deaths that year was about 7000. So the apples-to-apples comparison is about 7000 to about 85,000 reported COVID-19 deaths and counting. Probably a higher fraction of COVID-19 deaths are reported, though.

In both cases I would like to see actual deaths (from all causes) versus some typical baseline.

On a death certificate, there is an immediate cause of death and then previous (primary, secondary, tertiary) causes that lead up to the immediate cause. Influenza would generally precede pneumonia. Same with Covid-19 preceding the immediate cause of death. So likely more appropriate to compare the Covid-19 numbers to the higher proximate Influenza numbers.

https://www.cdc.gov/nchs/data/dvs/blue_form.pdf

But the proximate flu numbers were the lower numbers, not the higher numbers.

I don’t understand what you’re saying.

Check the NYT, among other places that have looked at “excess” mortality. I have seen sites that have it by country and some that have it by state, and this is why some people feel that mortality in the US is already over 100,000. I haven’t looked in depth at the analyses nor at how the data were used (excess deaths compared to what – mean, median, maximum?), but given that the US is already over 88,000 deaths and looks like it will break 100,000 in a week or so, I’m not certain the exact number matters much. If your analysis says that COVID-19 is just like the flu, then you have a lot of explaining to do on why there are so many more deaths, not just compared to the mean number of flu deaths over a year but compared to the maximum number of flu deaths the US has seen in a long time.

I should add that I believe the NYT makes all the data they use in their COVID-19 analyses available for download, though I don’t want to be quoted on that.

Roy said,

“If your analysis says that COVID-19 is just like the flu, then you have a lot of explaining to do on why there are so many more deaths, not just compared to the mean number of flu deaths over a year but compared to the maximum number of flu deaths the US has seen in a long time.”

This sounds like you are not taking into account that there is a flu vaccine, which many people regularly get, so that *current* flu deaths are not an appropriate comparison with current COVID-19 deaths — you would need to compare current COVID-19 deaths to what current flu deaths would be in the absence of a vaccine — which would at best require a lot of guesswork.

Actually, I am taking into account things like the flu vaccine. It still comes down to whether, in policy decisions, we can treat COVID-19 like the flu. If one of the reasons we cannot is that there is a vaccine for the flu, that still puts the onus on the many analyses that say we can just get on with life because it is no worse than the flu. In terms of policy, the onus should be on the people who say it is just like the flu to explain the difference in mortality. Whether the cause is the lack of a vaccine or the features of the virus itself (R0, IFR, or any of its other relevant characteristics), the observed mortality is much higher than anything the flu has produced in a long time, so maybe our policy and behavior for COVID-19 should not be based on our policy and behavior toward the flu.

And the studies that say COVID-19 is just a flu have gotten a lot of ink, in fact some of the authors have gone to reasonable lengths to make certain they do.

I would add two points here.

First, I still do not understand why in these analyses we are so dismissive of the accounts from frontline HCW. Both from talking to the ones I know, and from any of the interviews I have seen or read, not a one says what they see in patients reminds them of what they see with the usual seasonal flu.

Second in one of the many related threads, a commenter said that if opening up states causes an increase in cases we should have seen an uptick in the number of cases and we hadn’t seen one (I personally think it is too early to really see the uptick), but there is this:

“Huge surge in COVID-19 cases in Texas today. The state is reporting 1,801 new cases since yesterday – the biggest one day surge we’ve had to deal with.”

One day in one place does not a trend make (or perhaps I can fit a cubic going off to infinity and put it on the CEA webpage) but it is food for thought.

What huge surge?

https://i.ibb.co/3pvyPgX/TX0517.png

https://covidtracking.com/data/state/texas

Here is the updated chart: https://i.ibb.co/zxJ2x3X/TX0517.png

A huge drop in rate of new cases, the lowest in at least a month!

* the biggest decrease in at least a month!

Roy said,

“First, I still do not understand why in these analyses we are so dismissive of the accounts from frontline HCW.”

I agree strongly that information from frontline HCWs needs to be taken into account in any analysis of medical problems. For example, information from frontline HCWs was crucial in Semmelweiss’s work in reducing deaths from childbirth fever, and in Nightingale’s work in reducing deaths in the Crimean war.

For some reason there isn’t an option to respond to Anoneuoid comments, but there is this:

https://www.newsweek.com/texas-reports-highest-single-day-rise-coronavirus-cases-state-continues-push-ahead-reopening-1504659

I know I am old, and getting slower every day, but I thought I understood “highest single day rise” to mean just that. To be fair, the Governor says it is from increased testing, and it may well be, but even more, as I said one day in one place does not a trend make.

It is a totally meaningless statistic that was followed by an even more exceptional dip in day-on-day cases that is not being talked about.

The big rise in Texas cases yesterday was largely because of a meatpacking plant outbreak near Amarillo (Potter County), I believe.

The higher cases in general over the past week are definitely due to increased testing — the % positive overall in Texas is about half of what it was a month ago.

So, the Amarillo thing is “real” (not purely an artifact of increased testing), but the rest of the ‘rise’ is not, and since meatpacking plants never closed (they’re “essential businesses”), even the real part of it doesn’t have much to do with whether reopening was a good or a bad idea.

It seems you have ready explanations for every mild rise in new daily cases but none for the even more extreme drop in cases yesterday.

I am not trying to “explain away” anything. The new cases reported were largely from a meatpacking plant outbreak.

I do think that the “real” trend (i.e., actual infections, including those not tested) in Texas is probably downward.

Confused: the ascertained cases-per-day trend is clearly upward, with upward concavity, so I don’t know, maybe there’s some way in which the actual infections are falling, but I rather doubt it.

I’ve got pdfs of the state-by-state trends on my blog, now with bug fixes.

http://models.street-artists.org/2020/05/17/updated-state-by-state-graphs-with-bug-fix/

I saw no indication you did. I saw you had multiple explanations for when cases/day goes up but none for when it went down by a much more extreme amount.

@Anoneuoid: I didn’t think today’s drop really needed to be explained, since it’s Sunday, and reporting is often very low on Sunday.

@Daniel Lakeland: The confirmed-cases-per-day trend is upwards, yes, but not by nearly as much as the number-of-tests-per-day.

As of today, we’re at 693,000 total tests; 300,000 of those were in the last two weeks.

In March and probably still in early April, if you didn’t have to go to the hospital/ER, you probably didn’t get tested. So there were huge numbers of uncounted mild cases.

The % positive is less than half what it was in early April. So I don’t see why we would expect true infections per day to be rising, vs. that we are just undercounting them less badly.

If you look at my tested-vs-confirmed graph you’ll see that the growth rate of tests is much bigger than the growth rate of cases. Whether dtest/dt far exceeds dpos/dt is what indicates whether testing is sufficient. Under those conditions I agree we are undercounting, but I don’t think the undercounting rate has changed dramatically at any time since April 1 or so. The cases per day probably reflect a relatively fixed fraction of all cases, so real infections are probably something like 5 to 10x the cases per day, growing in a similar pattern.
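The diagnostic being described can be sketched with toy numbers (these are synthetic, not the real Texas series, and the 7x ascertainment multiplier is a hypothetical stand-in for the "5 to 10x" range mentioned):

```python
import numpy as np

# Synthetic daily series: tests growing much faster than confirmed
# positives, as described for Texas above.
days = np.arange(30)
tests_per_day = 10_000 + 800 * days   # dtest/dt = 800
positives_per_day = 1_000 + 20 * days # dpos/dt = 20

# With dtest/dt >> dpos/dt, the positivity rate falls over time.
positivity = positives_per_day / tests_per_day

# Under an assumed roughly fixed ascertainment fraction, scale the
# confirmed cases by a hypothetical multiplier to estimate infections.
ASSUMED_MULTIPLIER = 7
estimated_infections = ASSUMED_MULTIPLIER * positives_per_day
```

The point of contention in the thread is exactly whether that ascertainment fraction stays fixed as testing expands, or whether falling positivity means the undercount is shrinking.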

OK, maybe I’m missing something. If “tests performed” are growing much faster than “cases confirmed” – ie positivity rate is dropping a lot – why should the undercounting rate (fraction of all infections that become “confirmed cases”) be staying pretty much the same?

I’m not an expert on this, so could be missing something relatively obvious. But that seems pretty counterintuitive.

JimV

I think your 7,000 figure for directly counted flu deaths is the average for the last decade. For the worst season, 2017–2018, the final CDC tally was 61,000, and the directly counted flu deaths were around 16,000.

As far as how CDC estimates total deaths from a flu season, they say:

“because not all deaths related to influenza occur in the hospital, we use death certificate data to estimate how likely deaths are to occur outside the hospital. We look at death certificates that have pneumonia or influenza causes (P&I), other respiratory and circulatory causes (R&C), or other non-respiratory, non-circulatory causes of death, because deaths related to influenza may not have influenza listed as a cause of death. We use information on the causes of death from FluSurv-NET to determine the mixture of P&I, R&C, and other coded deaths to include in our investigation of death certificate data. Finally, once we estimate the proportion of influenza-associated deaths that occurred outside of the hospital, we can estimate the deaths-to-hospitalization ratio.”

So they’ll attribute deaths to flu for death certificates that don’t list flu as a previous cause.

I don’t think there exists, today, an apples-to-apples set of data to compare covid-19 and flu deaths. I think “it’s obviously a lot worse, despite SIP measures, business close-downs, etc” is good enough though.

>>I don’t think there exists, today, an apples-to-apples set of data to compare covid-19 and flu deaths.

I agree. Estimated flu deaths definitely aren’t apples-to-apples; but directly counted ones won’t be, either, because there is much more urgency to track COVID deaths, so probably far fewer of them will be recorded as just “pneumonia”.

>> I think “it’s obviously a lot worse, despite SIP measures, business close-downs, etc” is good enough though.

It is clearly worse than seasonal flu, but “good enough” depends on what you want to use the estimate for. There’s a big difference between 3x as bad and 30x as bad. 3x as bad probably wouldn’t be worth the costs of the measures we have taken to mitigate the impact (the 1957 and 1968 pandemics were at least that bad – 100k to 120k deaths even in a much smaller US population – and we didn’t do anything like this then). 30x as bad would be worth it.

There are obvious day-of-week effects so 7 days is likely the “right” smoothing interval.

Agreed — using a smaller or larger interval could miss or obscure the fairly regular day-of-week effects.

PS: Please say hello to Anne for me.

Dear Martha:

email me at my work email and I’ll

give you contact info for Anne.

Nick

Andrew

“Cosma is using the same x-axis as the U.S. government was using, going until 4 Aug 2020. But where did 4 Aug come from? That’s kind of a weird date to use as an endpoint.”

The August 4th date came from IHME and I’ve seen no explanation myself as to why they picked that particular date out of a hat.

The IHME model starts on 4 Feb, so 4 Aug would be 6 months after that. I suspect that’s the reason. As for why 4 Feb, maybe that was when it was first run?

Can someone pastebin the R code? The link above is not directing properly. Thanks!

The Stone–Weierstrass Theorem states that every continuous curve on a closed interval can be approximated arbitrarily closely by a polynomial of order n. The Hassett Corollary (proof forthcoming) is that n <= 3 is good enough for government use.

Or, as they say around here, good ’nuff for gummint work.

“A third-degree polynomial doesn’t just go up, then down. It goes up, then down, then up.”

I’m feeling called out here.

https://statmodeling.stat.columbia.edu/2020/05/14/so-much-of-academia-is-about-connections-and-reputation-laundering/#comment-1337007

Should have stuck with the elephant model

Sorry, I haven’t followed this but what is supposed to be the point of this curve?

The red line drops down so it looks like we eradicated COVID a couple of days ago.

If US had started social distancing a week earlier, about 36,000 fewer people would have died, study says

https://www.cnn.com/2020/05/21/us/us-social-distancing-study-trnd/index.html