10 on corona

Here are some things people have sent me lately. They are in no particular order, except that I put the last item last so we could end with some humor. After this, I’ll write a few more blog posts, then it’ll be time to do some real work.

Table of contents

1. Suspicious coronavirus numbers from Turkey

2. Sensitivities and specificities

3. Unfrozen caveman law professor

4. Putting some model in your curve fitting

5. Why not give people multiple tests?

6. Fat or skinny: which is better?

7. Yet another forecasting competition

8. Kaiser goes medieval

9. A new way to do causal inference from observational data?

10. One of the worst studies I’ve ever seen in my life

1. Suspicious coronavirus numbers from Turkey

Abdullah Aydogan shares this short article he wrote on coronavirus data, comparing Turkey’s strangely constant total-death-to-total-case ratio of 0.021 over a 10-day period with the rest of the world. Now that the virus has taken root in the U.S., we haven’t been talking so much about international statistics.

In his article, Aydogan concludes:

The analysis proves it to be problematic to claim there is nothing unusual about Turkey’s 10-day long stability in total death to total case ratio. Simply put, excluding China and Iran’s trajectory after the 45th day since their first deaths, there is no country that has experienced variation as low as Turkey’s trajectory between the 19th and 28th days since its first death.

There may be a plausible explanation for this outcome besides allegations of data manipulation in official reports. But we will only learn about it if Turkey makes significant steps in improving transparency and accountability in data sharing.

I’m reminded of the suspiciously smooth time series of General Electric’s reported stock earnings.

2. Sensitivities and specificities

Joseph Candelora writes:

While looking for the new LA County serological study paper, I stumbled across this preprint, Estimating SARS-CoV-2 seroprevalence and epidemiological parameters with uncertainty from serological surveys, by Daniel Larremore et al., which looked interesting:

As I [Candelora] read it, my initial reaction was surprise at the statement on page 5 that for a simulated study with “sensitivity (93%) and specificity (97.5%) … when seroprevalence is 10% or lower, around 1000 samples are necessary to estimate seroprevalence to within two percentage points (Fig. 2)”.

That sounded great, like the Santa Clara County and LA County studies had a lot more to teach us than I thought: even though seroprevalence in those studies is in the low single digits and the test’s specificity was below 100%, we could still bound the actual seroprevalence within a reasonable interval. But then I read it again, and realized they were estimating within “two percentage points” and not “two percent”. Seriously, who cares? If the test seroprevalence is 1.5% and you’re trying to get a reasonable estimate of the number of infections in the population, bounding that seroprevalence within 2 percentage points is pretty worthless. Am I missing something? Why would they focus on percentage points?

My reply: if your test has a specificity of 97.5% and the underlying seroprevalence rate is, say, 1%, then you’re drawing dead. Your test can give you an upper bound on the rate in the population, but any point estimate or lower bound will be extremely sensitive to the specificity, which you don’t really know. Indeed, just as “seroprevalence” isn’t really a single number—it varies by geography and demographics—we can also say that “specificity” isn’t really a single number either: there’s some reason for the false positives; it’s not pure randomness. The point about estimating the rate within 2 percentage points is that this is about the best you’re gonna be able to do anyway.
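To see what I mean, here’s a minimal sketch (my own illustration, not from Larremore et al.) of how the standard corrected estimate swings with the assumed specificity when the true rate is low:

```python
# How the test-corrected prevalence estimate moves with the *assumed*
# specificity, when the true prevalence is low. Illustrative numbers
# matching the quoted simulation (sensitivity 93%, specificity 97.5%).
sens, spec_true, true_prev = 0.93, 0.975, 0.01

# Expected fraction of positive tests under the true parameters:
pos_rate = true_prev * sens + (1 - true_prev) * (1 - spec_true)
print(f"observed positive rate: {pos_rate:.4f}")

# Invert the standard correction under several assumed specificities:
for spec in [0.96, 0.97, 0.975, 0.98, 0.99]:
    est = (pos_rate + spec - 1) / (sens + spec - 1)
    print(f"assumed specificity {spec:.3f} -> estimated prevalence {est:+.4f}")
```

The point estimate swings from negative to well over 2% across plausible specificities, which is why a couple of percentage points is about the resolution you can hope for.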

3. Unfrozen caveman law professor

Daniel Hemel writes:

I’m a law professor—not an epidemiologist or a statistician. Just a note on the Santa Clara County and LA County tests that seems to have been overlooked. (Caveat: This isn’t remotely close to my area of expertise. Then again, teaching students how to poke holes in evidence is basically what we law professors do . . . . ):

— Tl;dr: The manufacturer of the test used in both studies appears to be reporting a specificity meaningfully lower than 99.5% (i.e., a false positive rate meaningfully higher than 0.5%). If this is correct, then infection rate estimates based on an assumption of a 0.5% false positive rate will be too high.

More detail:

— As best I can tell, the prevalence-rate estimates in the Stanford study are based on the total number of positive cases by either IgG or IgM (see p. 6 of preprint: “The total number of positive cases by either IgG or IgM in our unadjusted sample was 50, a crude prevalence rate of 1.50%”). I.e.: if you come back positive on either one, you’re counted as positive. The manufacturer’s package insert (https://imgcdn.mckesson.com/CumulusWeb/Click_and_learn/COVID19_Package_Insert_Rapid.pdf) reports a 99.5% specificity for the IgG version (369/371) and a 99.2% specificity for the IgM version (368/371). As one of your readers notes in the comments, it’s unclear whether these probabilities are independent. If independent, that suggests a combined specificity of 98.7%, if I’m doing the multiplication right.

— But that’s not all. The manufacturer also reports that when testing 150 known-negative samples at the Jiangsu Provincial Center for Disease Control and Research, 146 tests were negative and 4 were positive: https://imgcdn.mckesson.com/CumulusWeb/Click_and_learn/COVID19_CDC_Evaluation_Report.pdf. (All four false positives appear to have been false positives for IgM; one of the four was also a false positive for IgG.) 146/150 = 97.3% specificity. The same study finds a sensitivity of 95%.

— If we use those parameters and 50/3,330 samples come back positive, and if I’m remembering the Rogan-Gladen equation from my undergrad stats class correctly, p = (t+β-1)/(α+β-1); t = 0.015; α = 0.95; β = 0.973; p = -0.013 (a sketch below replicates this arithmetic). Well, of course that can’t be the true prevalence rate, but you get the point. The crude prevalence rate reported in the study is less than what we’d expect if the false positive rate were 2.7% and the true prevalence rate were zero.

— For LA County, if the 4.1% figure is their positive test frequency (and I can’t figure out what their 4.1% figure actually is), then we’re talking about an estimated prevalence using the JPCDCR parameters and the Rogan-Gladen equation of 1.5%. Which, given 617 deaths in LA County and a population of 10.04 million, would be a crude fatality rate of 0.4% — not so far off what we’ve seen from China.

— But of course, you know much more than I do about all this. The main point is just that if you dig around the manufacturer materials on the McKesson website, it looks like our point estimate of the false positive rate should maybe be 1.3% or maybe 2.7%, but there’s no apparent reason to think it’s 0.5%.

Then again, people who do this for a living are representing that it’s 0.5%, so who am I to say that they’re wrong?

My reply: I’m a statistician, not a biologist, and I don’t have any idea how these assays work. Indeed, the last time I did any research on lab assays was back in 2004. One thing I learned in that project is that the usual statistical analyses of assays—including the analyses done by expert biologists—are often crude oversimplifications of the measurement process.
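That said, Hemel’s arithmetic is easy to check. Here’s a minimal sketch (mine, using his numbers) of the Rogan-Gladen correction applied to both studies:

```python
def rogan_gladen(t, sens, spec):
    """Corrected prevalence from raw positive rate t, given test
    sensitivity and specificity: p = (t + spec - 1) / (sens + spec - 1)."""
    return (t + spec - 1) / (sens + spec - 1)

# Santa Clara: 50 positives out of 3,330; JPCDCR parameters from above.
print(rogan_gladen(50 / 3330, sens=0.95, spec=0.973))  # -> about -0.013
# LA County: 4.1% raw positive rate, same parameters.
print(rogan_gladen(0.041, sens=0.95, spec=0.973))      # -> about 0.015
```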

4. Putting some model in your curve fitting

Paul Cuff writes:

IHME did some useful footwork, collecting data on social distancing interventions by geography, adjusting deaths for demographics, etc. Then they fed it into a brain-dead model. As a consequence, instead of providing simple, intuitive, and actionable conclusions, they provide predictions of very little use or accuracy that mainly serve to reinforce misunderstandings about the ebb and flow of infections.

The intention of the IHME model is to understand the consequences of social distancing. They focused on four interventions, like closing schools, etc. They are modeling in the regime of no herd immunity. In this regime, R_t is memoryless. They could/should have provided a mapping from intervention cocktail to R_t. A good model would likely be multiplicative: each intervention multiplies R_t. The last thing to estimate would be the delay between an intervention and when it appears in the death data. Perhaps one additional set of parameters would be a mapping from population density to R0, or simply allow each locality a floating R0 parameter. Instead of giving “predictions,” they would be saying: “closing schools decreases R by a factor of x.”

IHME is not the only group to look at spread in an abstruse way. They did their sigmoid curve fitting in the wrong domain, but it’s also the domain that most graphics are presented in, including the popular NYT-style graphic that Phil used to analyze Sweden in his post on Monday. That is, cumulative deaths are presented on a log scale. The log scale serves a purpose for exponential growth, but that purpose is mostly lost when trying to track a time-varying R_t from cumulative numbers. As you know (in time units of incubation periods):

current infections = \exp (\int \log(R_t) dt)
\log(current infections) = \int \log(R_t) dt

So the logarithm of current infections reveals R_t in such a straightforward way that it can even be read off by eye from the slope. Daily deaths serve as a delayed proxy for infections.

On the other hand, the logarithm of cumulative deaths (or infections) does not yield any cancellation. Curves end up looking like sigmoids instead of straight lines, and you end up tempted to do really dumb curve fits. There is merit to looking at cumulative quantities, especially if we are trying to make inferences about immunity or give report cards to each locality. Also, by luck, if R_t is constant, then you still get straight lines. But there’s really no sense in looking at recent slopes of cumulative quantities on a log scale, or trying to understand the R_t dynamics in that way. I understand that daily quantities are noisier and less pleasant to look at, but stats handles that just fine.

A least squares fit (in the log-daily-death domain) with each intervention getting a coefficient would be a quick and dirty way to start, and a thousand times more useful than the IHME model. I would be tempted to do it myself, but I don’t know how to access the data. Also, now that I’m not in academia anymore, my time is a bit limited. I did throw together some plots here for my own sanity.

I did not read this message in detail, but I’m supportive of the general point that, even if your forecast is just curve fitting with no latent structure, it still makes sense to put some subject-matter information in the model where you can. This point may seem obvious, but not everyone gets it. I’ve often seen analyses by what might be called “regression purists” who seem to feel that once you put your predictors in your model, you can’t look back: a sort of poor man’s preregistration that, as a byproduct, can destroy your ability to learn.
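For concreteness, here’s roughly what Cuff’s quick-and-dirty fit could look like (a sketch on simulated data with made-up parameter values; I haven’t seen his data or code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate daily deaths where an intervention multiplies R_t by 0.7
# (made-up numbers; time is in units of one generation of spread).
days = np.arange(60)
intervention = (days >= 30).astype(float)      # e.g., schools close on day 30
log_r0, log_effect = np.log(1.3), np.log(0.7)
log_mean = 2 + np.cumsum(log_r0 + log_effect * intervention)
deaths = rng.poisson(np.exp(log_mean))

# Least squares in the log-daily-death domain: a slope change at the
# intervention recovers the intervention's multiplier on R_t.
y = np.log(np.maximum(deaths, 0.5))            # guard against zero counts
X = np.column_stack([np.ones_like(days, dtype=float), days,
                     np.maximum(days - 30, 0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated log R_t before:", beta[1])                  # ~ log(1.3)
print("estimated intervention multiplier:", np.exp(beta[2])) # ~ 0.7
```

With multiple localities and interventions, each intervention would get its own coefficient, and multiplicative effects on R_t become additive in the log domain, which is exactly why this regression is so convenient.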

5. Why not give people multiple tests?

Andrea Panizza writes:

Before the Stanford study came out, people on Twitter were already claiming that antibody tests cannot reliably tell whether a single individual contracted COVID-19 or not. Example.

This is a different topic, but somewhat related to the issues with serological studies on a sample of N individuals, like your coverage of the Stanford study.

I get your point about the possibility of the Stanford estimate being a statistical artifact, but I have more difficulties understanding why the test wouldn’t be reliable at an individual level. Basically, my reasoning is as follows: assume that in Italy (my country) the prevalence of the disease is 4.0%, with a [3.2%-5.1%] 95% CrI, as estimated here.

Estimates from other research groups are similar. Now, if we consider the Cellex test mentioned in the tweet thread I linked above, we have a sensitivity (true positive rate) of 93.8% and a specificity (true negative rate) of 95.6%. Suppose I get a test that comes back positive. Then a straightforward application of Bayes’ rule gives a posterior probability of me having contracted COVID-19 of 0.47, with a CrI [0.41, 0.53]. This is definitely not strong evidence of me having contracted the infection. However, can’t I just fix this by taking another test? If this second test also comes back positive, the posterior probability now becomes 0.95, with a CrI [0.94, 0.96]. What am I missing here? Is the inference wrong because the same person is being tested twice, and thus I cannot consider the results independent? I got this objection, but it doesn’t seem right to me at all.

For example, if we consider the classical “fair coin” problem, then even if we a priori have a strong belief that the coin is loaded (e.g., a Beta prior on p centered on 0.05 and quite tight, with p being the probability that we get “heads”), flipping the coin a few times and obtaining “heads” every time provides strong evidence against the coin being loaded toward “tails”. [Actually, you can load a die but you can’t bias a coin. — ed.]

Do you think my analysis is sound?

My reply: I don’t know how the tests work, but it’s not clear to me that the positive or negative result is a purely random event. If you are negative and test positive, there could be a reason for this positive test (maybe you have some antibody that is similar to coronavirus) so it could show up on the test again.
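Here’s Panizza’s calculation spelled out (a minimal sketch; the conditional-independence assumption in the second update is exactly the step in question):

```python
# Posterior P(infected | k positive tests), assuming the tests err
# independently given infection status -- the disputed assumption.
def posterior(prior, sens, spec, k):
    like_inf = sens ** k               # P(k positives | infected)
    like_not = (1 - spec) ** k         # P(k positives | not infected)
    return prior * like_inf / (prior * like_inf + (1 - prior) * like_not)

prior, sens, spec = 0.04, 0.938, 0.956  # Cellex figures from the email
print(posterior(prior, sens, spec, 1))  # ~0.47
print(posterior(prior, sens, spec, 2))  # ~0.95
# If your false positive has a cause (say, a cross-reacting antibody),
# the second test isn't independent and 0.95 is too optimistic.
```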

6. Fat or skinny: which is better?

Gustavo Novoa writes:

I’ve recently come across an ongoing debate on the effect of obesity on covid-19 health outcomes. Researchers at NYU published a report arguing that patients who died or developed serious complications from the virus were disproportionately obese—which was covered in the NYT.

An op-ed response in Wired argued that the reports were dubious because they were not adjusting for socioeconomic factors, nor for the discrimination that obese people face in hospitals (the latter is obviously significantly more difficult to adjust for).

I [Novoa] looked at one of the studies, and they were only presenting descriptive data, without running any kind of model.

Do you have data that can be analyzed or any existing fitted models that can speak to the effect of obesity on covid outcomes?

My reply: I have no idea! But it seems that our esteemed Columbia University colleague Dr. Oz is on the case (link courtesy of Paul Alper).

7. Yet another forecasting competition

Igor Grossmann writes:

Many people today are wondering about the societal changes that would follow the current crisis. What do scientists think about the societal changes in the months to come? Moreover, how accurate are social scientists in their forecasts? To address these questions, in collaboration with several colleagues (Phil Tetlock, Cendri Hutcherson, Michael Varnum, Lyle Ungar, and others), I am organizing a Forecasting Collaborative. I am inviting you and/or your students (alone or in a team with others) to participate in an investigation of the accuracy of forecasts for complex societal phenomena. If you know of others who may be interested, please share this note.

We plan to investigate the accuracy of forecasts made by social scientists about critical social issues in the US over the next 12 months: well-being, affect on social media, prejudice, gender bias, political ideology and polarization.

The basic idea is simple: participants in this study will receive past monthly data for the domains in which they would like to participate. They can use their expertise and/or data modeling to estimate a point forecast for each of the next 12 months, then answer a few questions about their rationale/model. After six months, we will contact them again to obtain possible updates on their forecasts based on new data. You can participate by yourself or in a team.

We are aiming to produce a registered-report-style manuscript for publication in an interdisciplinary high-impact journal, such that results from this initiative will be accepted in principle, irrespective of the outcomes. Our goal is also to summarize the forecasts for the general public.

A direct benefit of participation in this study is that participants will gain greater insight into the accuracy of their forecasts (which can be used to inform future forecasts). Participants also have the opportunity to contribute to the journal article as authors.
If you are interested in learning more about the Behavioral Science Forecasting Collaborative, please click here.

Boldface phrases were boldface in the original email. If you want to join, I guess you can just click on the link.

8. Kaiser goes medieval

Kaiser Fung points us to a post he wrote “about the terrible charts in that Oxford study that claimed half of UK got infected by mid/late March.” He adds:

Also, previously, I [Kaiser] used that study to write an explainer for understanding statistical modeling. I’ve always found it a challenge to explain Bayesian models in a non-mathematical way to the general audience – and that Oxford study finally made me do it.

9. A new way to do causal inference from observational data?

Bill Harris writes:

I saw this timely “Method for estimating effects of COVID-19 treatments outside randomized trials” in today’s ASA Connect Digest and wondered if you were familiar with the approach and had any observations.

The slides make it sound like an attractive approach, if it works. My first concern was that it might be too facile. My second, more substantive concern was that it seemed to ignore randomness in the data. That is, in their fake data example, the 13% in cohort 1 (and, for that matter, the 16% in cohort 2) are clearly treated as fixed (that part sounds a bit Bayesian), but pages 14 and 15 (slides 13 and 14) seem to bring in variation at the end.

If it is credible, it could at least provide a tool to assess claims made about observational study results.

Here’s the description, from David Wulf:

I [Wulf] share below an overview of a simple new method my team has developed that may help identify the causal effects of experimental COVID-19 treatments, in the hopes that (i) you have suggestions or criticisms, (ii) you have access to relevant data on which to run it, or (iii) you can connect us/the method to those who do.

A method my advisor (Dr Chad Hazlett, Asst. Prof. in Statistics and Political Science, UCLA) and I (PhD student, UCLA Statistics) have developed uses clinical, observational data to estimate the causal effect of new treatments given outside of randomized trials, and it works no matter how much unobserved confounding/selection bias there is. It accomplishes this using a different, more flexible assumption, which places limits on the change in outcomes we could have expected over time in the absence of the new treatment. We hope it can complement ongoing RCTs on COVID-19 treatments being used under emergency/expanded-access provisions (like hydroxychloroquine) by investigating a different population (those whom MDs choose to treat) and giving answers without waiting for trials to complete. Clearly the stakes are high – failing to learn from these experiences would be tragic, while allowing invalid and possibly misleading comparisons to influence medical or policy decisions would be just as problematic.

We made a slide presentation detailing this application, and that presentation includes links to a paper that introduced the method, and to an applied paper currently R&R. Importantly, the only data needed to run the analysis are patient subpopulation counts, which we hope helps avoid some issues about PHI/data access.

Harris follows up:

I found Hazlett and Wulf’s ResearchGate project that contains other interesting papers, including “Estimating causal effects of new treatments despite self-selection: The case of experimental medical treatments” and “Inference without randomization or ignorability: A stability-controlled quasi-experiment on the prevention of tuberculosis.”

I have not had a chance to read this, so I’m just forwarding it to all of you in case you’re interested.

10. One of the worst studies I’ve ever seen in my life

Jonathan Bydlak writes:

I wonder if you could comment on this take by a political scientist claiming that there is no statistical evidence that lockdowns reduce death rates, based on regressing state death rates on a simple lockdown dummy and a handful of other population characteristics.

While this particular article has just begun making the rounds, many have been making this argument. My instinct is that the analysis is very incomplete, as it ignores 1) the path the virus has taken in spreading, 2) the likely correlation between low case rates and the lack of a lockdown in the six states, and 3) the time between the first case and when the measures were adopted. It would be very helpful to hear your take.

My reply: This is one of the worst studies I’ve ever seen in my life. It’s a master class in stupid.

175 thoughts on “10 on corona”

  1. Evidence is starting to come out that many people with mild illness have a weak antibody response. Here 10/175 (6%) of patients had an undetectable response:

    It is also noteworthy that the levels of NAbs in patients were variable. About 30% of patients failed to develop high titers of NAbs after COVID-19 infection. However, the disease duration of these patients compared to others was similar. Notably, there were ten recovered patients whose NAb titers were very low, under the detectable level of this study (ID50: <40), suggesting that other immune responses, including T cells or cytokines, may contribute to the recovery of these patients. Whether these patients were at high risk of rebound or reinfection should be explored in further studies. On the other hand, two patients had very high titer of NAbs, which were over ID50: 15989 and 21567 respectively, but did not show any antibody-related adverse reactions.

    https://www.medrxiv.org/content/10.1101/2020.03.30.20047365v2

    NB: These “mild” patients were still hospitalized for a few weeks. So, I don’t know how mild that illness is or if that’s just how they do things in Shanghai right now.

    But this has three consequences I can think of.

    First, the longer you wait after infection, the fewer people will be detectable in these population antibody studies. So the estimates of past infection will be biased downwards. They didn’t see consistent waning after two weeks, so that gives us some idea that if antibodies are present originally, they should still be detectable for at least a month. But apparently 5-10% of “mild” infections may not be detectable via antibody tests. What about asymptomatics?

    Second, it poses a problem for any vaccine which basically mimics extremely mild illness to stimulate an antibody response. How long will the immunity last?

    Third, it means that letting the virus infect everyone at near the same time is more likely to lead to herd immunity than letting it spread over a longer period. Depending on the timing, “flattening the curve” could mean that many people become susceptible again before the virus stops spreading, so basically it becomes endemic.

    • I suspect this is closer to what I think of as “mild” illness:

      We then tested the 209 sera obtained from pauci-symptomatic individuals in Oise. Positivity rates varied from 27% to 36% between the assays, with a mean of 32% (Fig. 1 and Table 3). This range of variation was more marked than with hospitalized patients, likely because pauci-symptomatic COVID-19 individuals display lower viral loads than those requiring hospitalization and may generate lower levels and different patterns of antibodies. To our knowledge, these figures represent one of the first evaluations of SARS-CoV-2 prevalence in pauci-symptomatic individuals within a cluster of severe cases. The fact that only one third of the individuals were tested positive suggests that some of them may not have seroconverted at the time of sampling, and/or that other viruses or environmental causes were responsible for the reported symptoms.

      https://www.medrxiv.org/content/10.1101/2020.04.21.20068858v1

  2. #10:

    “‘we are making decisions without reliable data’, [Ioannidis] said. ”

    If all or even a majority of the data humans used to make decisions had to be reliable, we’d never have made it out of the Pleistocene.

    • Survival is experimental and adaptive by default. I don’t think it’s so much that data is unreliable as that, today, reliable data can be withheld in the service of some interest. And more and more insiders are acknowledging this basic reality by writing books.

      • Some people are withholding and restricting data to exploit political interests.

        Other people are misrepresenting data and data analyses to exploit political interests.

        I just read an editorial in the Seattle Times by the Dean of the environmental college claiming that modelling of the epidemic has amazing implications for the benefits of data and policy going forward, holding up the IHME model as a shining example of how modelling can be used to create policy; meanwhile the same model is being widely panned among data science and data driven health professionals, and even a shallow analysis shows it hasn’t provided anything of practical significance.

        Humans survive because they can make decisions in situations where uncertainty is high and data reliability is low.

  3. #2: For Joseph. I haven’t read that paper but I did a similar calculation. If they want to bound the specificity to within 1 percentage point (99-100%), they would need 300 samples instead of 30. It makes sense that they would need 1000 if they want 2 percentage points. A nice application of the rule of three (see the sketch after this comment).

    jim & #10: “we are making decisions without reliable data” That’s a favorite pushback of someone who dislikes the analyst’s conclusion. It’s always like that – order more analysis. But if you deliver them the interpretation that fits their pre-conceived course of action, then no more analysis required.

    #11: Does anyone find it concerning that California just released on a single day a huge pile of mostly negative test results? Test counts went up by 15% and percent positive dropped by more than 30% in a single day.
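    For readers who haven’t seen it, the rule of three says that if n known-negative samples produce zero false positives, an approximate 95% upper bound on the false positive rate is 3/n. A quick sketch (my own illustration):

    ```python
    # Rule of three: 0 false positives out of n known negatives gives an
    # approximate 95% upper bound of 3/n on the false-positive rate.
    for n in [30, 300, 1000]:
        print(f"n = {n:4d}: false-positive rate < {3 / n:.3f}, "
              f"i.e., specificity > {1 - 3 / n:.3f}")
    ```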

  4. #9 Help! Please!
    The slides describing the approach (that’s all I read) are interesting, and the approach would be immensely valuable – if it can be relied upon. But I’m not sure it makes any sense. My understanding is as follows: we have a nonrandomized trial of something (e.g., a drug applied to COVID). We know that if we compare the mortality rate for those on the drug and those not on the drug, there are large selection bias problems. So the method uses an assumption (they highlight this – it is an assumption) about what the mortality rate would be in a population not offered the experimental treatment. On the basis of this assumption, they are able to back out the effectiveness of the experimental treatment, along with standard errors, confidence intervals, etc.

    The transparent link between the main (and only) assumption – the mortality rate absent the trial – and the resulting estimate of the treatment’s effectiveness is a virtue of this method. It readily admits that we don’t have all the information we’d like, but links one key assumption to the potential effectiveness of the treatment. In that sense, it might be a practical and valuable way to view early and nonrandomized trials. But I am having a hard time seeing this as anything other than assuming the result and then deriving it. That is, once you make the assumption of what the mortality rate would be, absent the experimental treatment, haven’t you essentially assumed your result? Wouldn’t that make this methodology just an artificial way to assume the result, put it through some manipulations, to then end up with the assumption you made to begin with?

    I hope someone can take a look at this and tell me if there is more to this than I realize.
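    As best I can tell from the slides, the mechanics are simple enough to put in a few lines (this is my reconstruction, not the authors’ code; the 13%/16% mortality figures are the slides’ fake-data example, and the 30% treated fraction is made up):

    ```python
    # Stability-controlled quasi-experiment, as I read the slides
    # (my reconstruction; illustrative numbers, not the authors' code).
    def scqe_att(y1_bar, y2_bar, p_treated2, delta):
        """ATT given mean outcome in cohort 1 (treatment unavailable),
        mean outcome in cohort 2 (fraction p_treated2 treated), and an
        assumed shift delta in average non-treatment outcomes."""
        return (y2_bar - y1_bar - delta) / p_treated2

    # Slides' fake data: mortality 13% in cohort 1, 16% in cohort 2;
    # assume (hypothetically) 30% of cohort 2 got the new treatment.
    for delta in [-0.02, 0.0, 0.02]:
        print(f"delta = {delta:+.2f}: "
              f"ATT = {scqe_att(0.13, 0.16, 0.30, delta):+.3f}")
    ```

    The answer moves one-for-one (scaled by the treated fraction) with the assumed delta, which is Dale’s point: the assumption does the heavy lifting, but at least the assumption-to-result mapping is out in the open.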

    • That worries me, too, but I think (hope?) things might be a bit better than that. Yes, you do seem to have to assume the mortality rate absent the trial, but then on p. 15 you seem to get the chance to see what assumption you would have to make to get (NHST warning) statistical significance (/NHST warning). IOW, you might not have a good estimate of the cohort-to-cohort trend, but you might be willing to bet it’s not reasonable to exceed a couple of percentage points, which would suggest you couldn’t claim a statistically-significant improvement.

      Or is there random cohort-to-cohort variation that would cause the “trend” to be big enough to signal a significant event, even if the true trend were 0?

      And I agree that this could be a valuable tool, if it passes muster.

      • It seems to me that the rational/appropriate thing to do would be to do the analysis with various values of the mortality rate absent the trial, which should give at least some idea of how the uncertainty/variability depends on this parameter.

    • The authors seem to have discovered interrupted time series analysis with original errors. There are … a lot of prerequisites to doing a valid ITS analysis.

      • I don’t think this is really an interrupted time series model – it could be, if there was a long enough history prior to the intervention (and other prerequisites held). They appear to only require a single data point aside from the experimental treatment – it could be from a prior period or from a different place (at the same time).

        • Thank you Bill and Andrew for noting this work and for all of your questions and comments. It is a relief to come to a corner of the web where readers are reasoning and not just reacting. To round up thus far:

          1. What is the assumption being made and is it just doing all the hard work?
          Dale, you asked: “…once you make the assumption of what the mortality rate would be, absent the experimental treatment, haven’t you essentially assumed your result?”

          This gets right to the crux of the idea. Our presumption is that there will be some cases where we can make a reasonable assumption about the difference between two cohorts (specifically, their average non-treatment potential outcomes). For example if the sort of people getting sick in a two week period are much like those getting sick in the following two weeks, and there haven’t been other changes in treatment, we would expect only a small change in how fatal a disease is between these two cohorts. It would have been much harder or indefensible to have made assumptions about how the treated and control groups compare to each other, since selection-into-treatment operates there. Likewise we might not believe an assumption of no unobserved confounding.

          Where a narrow assumption on this difference in cohorts is not defensible, it can be widened, at the expense of widening the consequent range of ATT estimates. We have also come to appreciate the accessibility and transparency of such an assumption. And showing results over the plausible range of that assumption avoids the problem of anchoring on a single estimate and asking the reader to consider it as “only suggestive” as in the covariate-adjustment tradition. (We know you recognized these points in your post already, so thank you!)

          2. Uncertainty.
          In the slides we don’t explain where uncertainty comes from nor how estimation addresses it. In our application using this to study a drug for prevention of TB, we explain our probability model and derive the standard errors used on slides 13/14. There are both closed form and bootstrap options.

          3. Relationship to interrupted time-series, and more. As Ryan mentioned there are some parallels here to interrupted time-series, and as Dale mentioned, it remains a bit different. We don’t require high frequency pre-treatment measures nor expect some kind of smoothness or continuity in Y(0) over time. Not everybody is treated in the post-period, where instead selection-into-treatment is allowed to occur. And there is the allowance for \delta. We are also interested in treatments that take some time to unfold, so there are complications like follow-up windows that may not always emerge in ITS settings. There is much to say about connections between our approach and RDD, DID, and especially IV. We have an extensive (or perhaps tedious) discussion of these relationships in the same manuscript noted above.

          4. This is a long response…happy to come back as the conversation continues.

        • Given what you have said, I find this approach valuable – but only in particular circumstances. In the case of COVID treatments, I don’t think it helps. For example, take the French trial of hydroxychloroquine that prompted much of the recent attention (from He Who Shall Not Be Named). 14 of the 20 in the treated group tested negative after 6 days, while 2 out of 16 in the control group tested negative. We know there were selection problems. Your method would require some estimate of what the cure rate would have been if none of these 36 people had been in this trial. I can see ways I can make guesses, even based on pieces of evidence, but I see no way to feel comfortable with the assumption. I’m all for exploring sensitivity to that assumption, but I don’t see that we make any headway in this case. In fact, I’d say we have somewhat buried the uncertainty of what to make of this small trial. The analysis of the critical assumption makes it appear that we have more information than we really do. The truth is that we don’t know what to make of this small trial.

          So, I think your title and headline are over-hyped. This may be a valuable contribution to some analyses of non RCT data, but I think you are overselling what it can do. I really don’t think it helps in the current COVID situation, and it hurts in that it hides the precarious nature of what the “evidence” can tell us. It would be valuable to try to outline situations in which your technique would and would not be appropriate.

        • But doesn’t the approach get around the selection problems if handled carefully? Because you’d include the control group and everyone who was dismissed from the trial (or died) in the analysis as one group, and compare it to a (hopefully similar enough) group that was treated at the same hospitals in the time before.

          The difficulty would be in accounting for other factors: for example, I’d expect mortality to drop as doctors gain more experience with ventilation treatment. If you do time-shift, you have to pretty much capture all of the factors that could possibly be relevant, or you might miss the crucial one. A good randomized trial (ideally double-blind) simply randomizes the factors you don’t know about, so you can have some statistical confidence in your result not being affected by them.

        • As I think about it a bit more, I have a harsher criticism. Non-RCTs of treatments are often used because there is a particular subgroup that you believe a new treatment might be effective for. By definition, this means that the baseline data for the untreated (and non-control-group) population probably does not apply. So, where are we to get a reasonable assumption for what the death rate (or cure rate) would be had they not been part of this trial? A point that has been made frequently is that most treatments may work for some subgroups and not others – the average effect is of little interest. Even if a treatment does not “work” on average, it may very well work for particular subgroups. This would suggest that the circumstances under which a nonrandomized trial can rely on your key assumption are quite limited. Exploring sensitivity to that assumption may be a nice exercise for modelers, but I’m not sure it is of much use for clinicians. In fact, it still runs the danger of pretending that the non-RCT data is of more value than it really is.

        • Many of the points raised here are valid ones, that go into either deciding when the method should be applied, or into the range of plausible delta values to consider. I think what’s missing is that we are proposing a technology that makes valid assumption-result mappings, using assumptions that in many applications are more easy to understand and critically evaluate (e.g. on the baseline trend) than some standard ones (e.g. conditional ignorability/selection on observables, exclusion restrictions). This will not always lead to a narrow conclusion – and much of the time that might be exactly the point and the benefit!

          To say it “doesn’t work for COVID” would certainly be too general. In the infamous French study, absolutely – the authors don’t report several pieces of information we would need. We wouldn’t use it there. Was there a period of time, in the same facilities perhaps, when they weren’t giving HCQ, that we could have used? We don’t even know. But having discussed this with MDs treating COVID patients across several hospital groups, we know there to be many settings in which the method’s requirements are met.

          On Dale’s “harsher criticism”, which we read as “(i) ATEs are often not of interest, and (ii) certainly on broader populations than one would want to study,” perhaps we were not clear enough that we are (i) estimating an ATT, not an ATE, and (ii) you can define your population of interest to be as specific as you want. We certainly make no claims as to the effectiveness of a treatment in the populations that did not select into treatment, focusing instead only on whether those receiving treatment were helped by it. This is often a critical benefit of the method, and when used as a design-based strategy (useful if you don’t want to deny or force treatment in an RCT but still want to run a trial), it aligns the population receiving an estimate to those likely to receive the treatment in normal practice.

          But the most important thing we have to say here is in regard to the notion that using this approach somehow falsely elevates the status of non-RCT comparisons. Quite the opposite. People are, annoyingly and dangerously, believing results from naive or covariate adjusted comparisons in which there was neither randomization nor a good story to rule out unobserved confounders. In the TB paper we show how an apparently dramatic naive (or covariate-adjusted) comparison (16% of untreated vs 1% of treated getting TB) that will (and has, in several published papers) impress policymakers and physicians alike, can be powerfully undermined by thinking through this approach and showing the results under a range of assumptions. Similarly with HCQ we heard one doctor making conclusions from naive comparisons, changing their practice, and telling us HCQ is a “solved problem”. For real. I too am skeptical, to say the least, that HCQ is a miracle drug, but an approach that helps to reveal how biased these estimates could actually be under very mild assumptions (that people can actually understand, like a baseline trend) can help protect us against such dangerous conclusions being reached, whether our priors align with them or not.

        • No offense, but it seems like a method whose advantages stem from ignoring the parts of more complex analyses that make them checkable and useful. As discussed in the TB paper, when trying to account for any complexities you are just doing one of the existing classes of analyses and renaming it. I really don’t understand, after reading this, why it is anything other than pre-post IV with an attempt at detrending.

          In the TB paper, the notion that nothing, or only minor factors, are concurrently changing within and between clinics has little support, especially with such a long time gap. Unlike an ITS, we don’t get to see the behavior of shocks to the outcome for getting valid SEs, or changes in patient makeup within or across clinics; we just have to hope that they are independent, that the time period is long enough for disturbances to have averaged out to negligible effect on uncertainty (and particularly that there is no unit root).

          The suggested application to COVID brings the limitations out: there are many simultaneously changing treatment strategies and stressors on resource availability. The pre- and post-drug cohorts are likely not the same, as the denominators include changing populations due to epidemic dynamics.

          While there are possibly some applications where asking “did the outcome rate change after exposure became available” is an adequate identification strategy, this is the most basic epidemiological analysis, and the scenario “where treatment becomes newly available or popular, and an assumption on the stability of average non-treatment outcomes can be credibly made” will fail to hold much more often than not.

        • It seems like the major disagreement here is whether adding complexity in these situations always makes the model “checkable and useful” or whether it can in other cases make the model opaque and easy-to-be-fooled-by. The paper mentions the ability to add covariates to SCT/SCQE not as a standard practice recommendation, but because some people are so trusting of their covariate adjustment practices that they refuse to consider such adjustments may be generating overconfidence. Yes, COVID (and TB prevention) dynamics can be very complex. We trust the intuition of a chief ID doc (or TB program director) about cohort-to-cohort changes – including if their intuition is that they cannot make a narrow estimate of such changes – more than a regression’s ability to render a treatment unconfounded.

        • People do a lot of stupid things, especially when confronted with statistics. So, if your claim is that your method is better than what some (maybe even many) people wrongly do when faced with non-RCT data, then fine. That may be a genuine contribution, and in any case, we can disagree regarding what that contribution is. But, I do think your title is misleading and overstated. Much time has been spent on this blog about the need to not over-hype research. I think you are guilty of overstating what you method can do (and under what circumstances). If the major justification is that it is better than people ignoring the fact that non-RCT results were, in fact, not randomized, then why not label your study as a way to improve upon naive, incorrect, and dangerous inferences from non-RCT data?

        • It seems to me that Tukey, Stuart Hunter, or George Box have written about “pocket statistics”: methods that can be used while walking the shop floor, for example, to make sense of data you might encounter. The Tukey-Duckworth test (“quick test”) is one example.

          When I saw the SCT, I didn’t really think it would replace all the detailed work required by the FDA for approving new drugs–that’s what I meant by wondering if this were “too facile”–although I wouldn’t reject the approach if it turns out to be that good.

          What I really wanted was another “pocket statistic” (or pocket statistical method) to be used when encountering data, much as Ami and Chad seem to propose. I might be listening to claims of causality presented in a meeting or in casual conversation (“See? This table shows that changing the way we do X caused an increase in customer purchases” or “Wow. I know it’s early in the process, but reducing mortality from 16% to 1% has to show there’s something to our new medical procedure”!).

          If the SCT can provide that, I think it makes a valuable contribution. But what do we do with it? Take the 16% -> 1% example in the presentation. Based on the raw numbers, one might be tempted to think this was an astounding result and commission more in-depth research into this treatment. SCT should temper that enthusiasm. Sure, if there were other reasons (e.g., a PBPK-type model that suggested this might indeed be a good treatment), then one might still pursue it a while. Absent other evidence, though, SCT might save a lot of time exploring something that, in hindsight, looks like noise.

          So I see it as a reasonableness checker and preliminary screen, not a final arbiter except probably in cases with large effect size and low noise.

          Could the headline (title) be clearer? Perhaps. As they say in computer science, naming is hard.

          Incidentally, I wanted to reply to Dale’s comment starting “People do a lot of …”, but I think the blog wouldn’t allow that deep a hierarchy.

        • To my “It seems to me …” comment, what I’m really curious about is whether SCT meets the requirements for a useful pocket statistics method.

        • To Dale, 4/26, 8:28 (PT): Would you mind clarifying which title you see as overhyping the method? Chad and I would be happy to discuss how to temper any potentially overzealous claims. There is “Learning the effects of COVID-19 treatments given outside of randomized trials” for the slide deck and “Method for estimating effects of COVID-19 treatments outside randomized trials” for the title of my ASA connect posting, as well as “Estimating causal effects of new treatments despite self-selection: The case of experimental medical treatments” and “Inference without randomization or ignorability: A stability-controlled quasi-experiment on the prevention of tuberculosis” for the two full papers. There is also “A new way to do causal inference from observational data?” which was not a title written by either of us (I’m unsure whether Bill or Andrew wrote it).

          To Bill, 4/26, 12:18 and 12:19 (PT): I’m glad it didn’t come across like we wanted to replace randomized trials for approving new drugs. I don’t trust any expert’s suggested range for delta as much as I trust randomization. Though there is a discussion to be had about populations of interest (ATE vs ATT), and I think post-FDA-approval drug use increases do offer one of the most useful and believable applications for SCT.

          I found your “pocket statistics” (perhaps “statistical rules of thumb”?) search intriguing. While I don’t think we could adjust SCT to be as simple as the Tukey-Duckworth test, I do think it is simple to both calculate and grasp with a basic understanding of counterfactuals, and, as you say, potentially very helpful in countering biased causal claims.

        • Ami, it seems to me that Tukey and the other early EDA folk did a lot of testing of their methods with synthetic data. I think you did some of that in your papers, which I need to re-read (okay, /read/–I skimmed quickly the first time). Perhaps emphasizing that would help.

          To Dale and Ami: I’m not overly worried about headlines; after all, even Andrew seems to have mastered the skill of clickbait headlining from time to time. I am more concerned by PR pieces advertising research that seems a bit exaggerated in what has been done. To Dale’s specific comment (below or above this–one of those), would “Estimating the effects of COVID-19 …” have been better? Yes, mentioning COVID-19 does attract eyeballs, but that’s the purpose of a headline–I think I only mind that if it’s clearly lying. In this case, a claim about a malaria drug was at play, and SCT sounded like it might be useful.

          Finally, I see a “rule of thumb” as a guideline–like having 30 units in any statistical test you might do. I see a “pocket stat” as a process you can apply to answer a specific question when equipped at most with a small pad of paper and a pencil or maybe only with your brain.

          Which makes me wonder: how far can you go towards making it as easy as the Tukey example? Expressed appropriately, perhaps getting the location of the result is doable; if so, are there any rules of thumb to get the scale (sd) of the result?

        • David Ami Wulf
          I couldn’t indent this below your question, so this is as close as I can get it. The title I object to is “Learning the effects of COVID-19 treatments given outside of randomized trials.” In the current environment, where anything COVID immediately attracts attention, I think it is misleading. As you suggest, it may be possible to apply to COVID non-RCT, but it is far from certain that it can help there. In fact, the example I suggest (the French trial) you agreed would not be helped by your method. The fact that it potentially could be used is quite different from saying it helps. In fact, I still can’t think of a COVID circumstance where I could apply your technique meaningfully. And I think it runs the danger of obscuring how little we can say from any non-RCT at this point in time. I realize that the title is not quite inaccurate – and the COVID connection certainly grabs attention – but it is a slippery slope I think you are on.

  5. #3 “I’m a statistician, not a biologist, and I don’t have any idea how these assays work.”

    The key information for this assay is that it detects IgG and IgM, which appear at different times and at different prevalences in the virus infection life cycle. It does this by way of two separate stripes that each show a color if the corresponding antibody is detected.

    The manufacturer instructs that the test is considered positive when one or both of these stripes appear (EITHER).
    The maths used in the Santa Clara study suggests that they considered the test positive only when both stripes appear (BOTH).
    The BOTH method leads to a higher specificity, with fewer false positives (since a single stripe being falsely positive does not make the test positive), than the EITHER method, but to a lower sensitivity, with fewer true positives.
    If you are trying to detect a very small prevalence, the BOTH method is obviously better suited.

    The problem is that when the method is described verbally in the paper, the word “either” is used, which suggests the EITHER method. But the fact that the IgM specificity is not mentioned or used in the preprint suggests use of the BOTH method, as does taking the smaller of the two sensitivities as the test’s sensitivity.

    I am inclined to be charitable and hope for an editing error, but the final version should express itself more clearly.

    (The LA County study uses the words “First, we report the unweighted proportion of positive tests (either IgM or IgG) in the analysis sample (N=863).”, and this seems to be clear, but they reference the 100% sensitivity from the 30 tests, so I still wonder.)
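    To see how much the positivity rule matters, here’s a quick sketch (my own illustration; it treats the two stripes as erring independently, which they may well not, and the sensitivities are made up):

    ```python
    # Combined error rates under the EITHER and BOTH positivity rules,
    # assuming (questionably) the IgG and IgM stripes err independently.
    spec_igg, spec_igm = 369 / 371, 368 / 371   # from the package insert
    sens_igg, sens_igm = 0.95, 0.90             # made-up sensitivities

    # EITHER: positive if at least one stripe shows.
    spec_either = spec_igg * spec_igm                  # both must stay clean
    sens_either = 1 - (1 - sens_igg) * (1 - sens_igm)
    # BOTH: positive only if both stripes show.
    spec_both = 1 - (1 - spec_igg) * (1 - spec_igm)
    sens_both = sens_igg * sens_igm

    print(f"EITHER: specificity {spec_either:.4f}, sensitivity {sens_either:.4f}")
    print(f"BOTH:   specificity {spec_both:.5f}, sensitivity {sens_both:.4f}")
    # EITHER specificity ~0.987: at ~1.5% raw positives, false positives
    # can be most of the signal. BOTH specificity ~0.99996.
    ```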

    • I’m not an expert on the biochemistry, but here’s the basic idea for those who are even less expert than I am. I had my wife, who’s a biologist but not an expert in biochemical assays, read and edit.

      It’s based on the description here: https://www.assaygenie.com/rapid-covid19-antibody-detection-tests-principles-and-methods

      Blood, which contains the antibodies, is placed on one end of the strip.

      You attach the antigen (the thing that is on the virus that the antibody binds) to a strip that the antibodies flow past. This is bound to some extra atoms (gold in this example).

      Later on the strip are antibodies made by rabbits that bind to the human antibodies.

      The blood’s antibodies are brought into the strip by capillary action and bind to the antigen.

      Then as it flows along, the rabbit antibodies bind to the human antibodies. If the human antibodies have the conjugated viral particles… then they precipitate a colored line.

      False positives can occur if the color precipitates even without the antibodies due to manufacturing issues, or if cross-reacting antibodies flow through, or whatever.

    • This is very interesting. Your close reading of the language does suggest a disconnect between sampling method and development of sensitivity & specificity.

      I find the manufacturer’s numbers ( https://imgcdn.mckesson.com/CumulusWeb/Click_and_learn/Premier_Biotech_COVID19_Package_Insert.pdf , page 2) a bit confusing because they don’t describe the extent of the overlap between the IGG and IGM test groups. But maybe it doesn’t matter? For what it’s worth, the IgG specificity is 369/371 and IgM is 368/371.

  6. https://www.buzzfeednews.com/article/stephaniemlee/stanford-coronavirus-study-bhattacharya-email
    Not directly related but a final nail in the coffin for the Stanford Santa Clara County study.
    The wife of one of the coauthors sent a group e-mail to parents at a single junior high school to encourage people to sign up for the study before the Facebook ads went online.

    [The email, sent to a listserv for Ardis G. Egan Junior High School in the city of Los Altos on Friday, April 2, advertised a study set to begin that weekend. With the subject line “COVID-19 antibody testing – FREE,” the email described how participants could gain “peace of mind” and “know if you are immune.”]

    [“The email you reference was sent out without my permission or my knowledge or the permission of the research team,” [the author] wrote in an email to BuzzFeed News. He said that he believes the note was also shared on social media sites.]

    I don’t think there is any way you can salvage meaningful results from this study at this point.
    Statistics is not some magic tool you apply to get a result.
    Garbage in, garbage out.

    • “He added that a revised preprint would be released in the next two days and would address criticisms of the study’s sample selection.”

      The reddit post about the email that has been shared in the comments on medRxiv and here also suggests that they closed the recruitment survey down for people registering from that area, only accepting signups from other areas. (This might be the reason why some people didn’t pass the ID check on the day.)

    • That email doesn’t bother me as much. It went to a group that seems highly motivated to get tested just out of sheer curiosity.

      I’m more concerned about the groups where they got very poor response — hispanics and asians were two they noted. Seems like you’re much more likely to get self-selection bias in a population that doesn’t have a general interest in testing.

      • It seems like the “FDA approved” language would be problematic with respect to subject recruitment if the test was not FDA approved, as claimed in the article.

      • “It went to a group that seems highly motivated to get tested just out of sheer curiosity.”

        This is exactly why it should bother you.
        Bias works both ways.
        Moms of Ardis G. Egan Junior High School students are being interpreted as representative of the zip code and representative of white women in Santa Clara County.

        Honestly, the whole paper is such a mess that even without knowing about the email, the only thing you could reliably extract from the data is the upper bound.
        The fact that they fail to mention the email at all just shows that their rigor in research ethics matches their rigor in statistics.

    • One of many possible “Aargh!-worthy” quotes from the paper Funko linked:

      “A growing consensus in the social sciences plots the historical rise of societies with ‘tighter’ social norms and more conservative cultures to the presence of pathogens in the environment. “

  7. Yes, and unfortunately, most people are not keenly attuned to the bias problem. Even in science papers, we state the bias as “limitations” and keep moving along as if it isn’t there.

    Even without the email, this problem persists because tests are severely rationed. People who want to get tested, especially those without severe symptoms, simply cannot get tested. So, the first person who got the free test will text all their friends and family about it.

    This effect is quite clear, unlike other sources of bias (like what types of people go shopping), and it causes over-estimation of prevalence. The more suspicious one is of having the virus, the more incentive one has to seek out the test. Self-selection means they don’t have enough of the uninfected.

  8. https://www.medrxiv.org/content/10.1101/2020.04.05.20054361v1

    “Based on the data until April 4, for the whole COVID-19 fatality season to-date (starting with the date the first death was documented in each location) the risk of dying from coronavirus for a person <65 years old is equivalent to the risk of dying driving a distance of 9 to 415 miles by car per day during the COVID-19 fatality season. Most of the hotbed locations that we analyzed are on the lower side of this range, where the risk of death is in the same level roughly as dying from a car accident during daily commute. The highest risk (in New York City) corresponds to the risk of dying in a traffic accident while travelling daily from Manhattan to Baltimore round trip for these 25 days.”

    We can update the calculation based on data until April 23.

    In NYC, the risk for a person <65 years old corresponds to the risk of dying in a traffic accident while travelling daily from Manhattan to Indianapolis round trip for these 44 days.

    Or, given that we were already commuting to Baltimore on those first 25 days, it corresponds to travelling from Manhattan to Oklahoma round trip during these additional 19 days.
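    For what it’s worth, the conversion they appear to be doing is simple arithmetic. Here is a minimal sketch; the death count, population, and the ~1.1 US traffic deaths per 100 million vehicle-miles figure are my own assumptions for illustration, not the paper’s actual inputs:

      # Hypothetical sketch of the miles-driven equivalence (all inputs assumed)
      deaths_under_65 = 1000            # cumulative COVID deaths among <65 in a location
      population_under_65 = 7.0e6       # population <65 in that location
      season_days = 25                  # days since the first documented death
      daily_risk = deaths_under_65 / (population_under_65 * season_days)
      traffic_deaths_per_mile = 1.1e-8  # ~1.1 deaths per 100M vehicle-miles (rough US rate)
      print(daily_risk / traffic_deaths_per_mile)  # ~519 miles/day carries the same death risk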

    • “the risk of dying driving a distance of 9 to 415 miles by car per day during the COVID-19 fatality season”

      And just how did they arrive at the range 9 to 415? And why the heck did they give a range of miles driven, especially such a wide range? Weird.

        • Also –

          At about 1:07 in, he talks about the “collateral damage” from people not going to the hospital for needed care during mandated social distancing. His argument is that the mandated social distancing is what causes people to stay away from the hospital.

          Imagine that we had no mandated social distancing. Imagine that hospitals were rather overwhelmed with COVID-19 patients. Ignore the fact that healthcare workers would likewise be overwhelmed…

          Is the theory that those people who stay at home under the condition of mandated social distancing would be going to hospitals that are overwhelmed because there was no mandated social distancing?

          Also, he talks about people staying away because of the fear and panic from “all the news stories about how horrible this thing is…” under the mandated social distancing…

          Is the thinking that the panic and fear would be dissipated if there were no mandated social distancing?

          Maybe someone can help me understand his argument.

        • It seems like magical thinking to me: if we pretend it doesn’t exist, then… it’ll just “poof” go away and life will go back to normal.

          He seems committed to the idea that this is more or less like the common cold, and if we ignore it nothing bad will happen.

        • You know there’s a lot of room between “just like the common cold, ignore it and it will go away” and the extrapolations you prefer that show it decimating humanity with millions of deaths if we don’t well and truly crash the world’s economy by shutting down for the next [fill in the blank] years.

        • In all fairness, rereading the comment I just wrote, I see I missed a piece: he also attributed much of the panic and fear to the 2 million death estimates that he called “science fiction.”

          But I still think it’s totally unrealistic to argue that there wouldn’t be widespread panic and fear if hospitals were overwhelmed, a possibility that he seems to acknowledge would have been (at least more likely) the case absent mandated social distancing. (It seems his argument is that the same number of people will die in the end either way, so the difference is basically that there’s more economic hardship under the condition of mandated social distancing. I have many disagreements with that argument, one of them being that the greater the stress on the healthcare system, the greater, it seems to me, the disparity in outcomes in association with SES and ethnic/racial factors).

          I’m a bit torn. If you watch the video, there is some balance in what he says. But the failure of his logic, IMO, lies in what he doesn’t say. Basically, he doesn’t talk about the uncertainties that run against his conclusions.

          Perhaps magical thinking. I’m trying to avoid a pejorative characterization. But it’s hard for me to understand why he displays such a selective approach to uncertainty. And that’s on top of my being gobsmacked that he’s generalizing about national fatality rates based on extrapolating from infection rates among non-representative convenience samples in hotspots.

        • > You know there’s a lot of room between “just like the common cold, ignore it and it will go away” and the extrapolations you prefer that show it decimating humanity with millions of deaths if we don’t well and truly crash the world’s economy by shutting down for the next [fill in the blank] years.

          He’s not really saying “ignore it and it will go away.”

          But from what I’ve seen, he’s taken a very selective approach to addressing the uncertainties, and he always winds up on one side of the uncertainty ledger. Watch it and tell me if you disagree. I’d really like to see someone present arguments in his defense but haven’t been able to find anyone to do that yet.

        • Of course. The extrapolations with millions of deaths are for *uncontrolled* growth with knock-on effects of collapse of the health care systems and panic / riots etc. They provide a worst case.

          The obvious middle path is the one we’re doing now and that most of the public health people want: remain closed as long as needed to wait for infections to decline to manageable levels, do as much research as possible on treatments, alter the way the economy works to create new kinds of non-contact jobs, change public policy to provide for the welfare of people who have no jobs because those jobs are dangerous to public health…

          The part that’s really missing is the part where we ramp up testing capacity, contact tracing capacity, and have a well reasoned plan with models to trigger localized shelter-in-place when growth seems to be exponential. There’s just a lot of planning and coordination required that we are still fiddling about with.

        • Daniel –

          > Of course. The extrapolations with millions of deaths are for *uncontrolled* growth with knock-on effects of collapse of the health care systems and panic / riots etc. They provide a worst case.

          You may want to watch this interview with a noted Swedish epidemiologist:

          https://youtu.be/bfN2JWifLCY

          I think there are quite a few questionable aspects of his arguments…but one thing that he said that I thought hit home was that the 2 million dead extrapolation assumed no increase in hospital capacity, and that such an assumption was unrealistic. In the modeler discussion podcast here:

          https://castbox.fm/episode/Epidemiological-Modeling%2C-Policy%2C-and-Covid-19-(Daniel-Kaufman%2C-Eric-Winsberg%2C-%26-John-Symons)-id37106-id254740525

          They discuss some 750(?) rather uncertain parameters that were part of the Imperial College forecast. I suppose we’d have to expect that many of the parameters in such a model would turn out wrong, and it would just be unrealistic to expect otherwise. But if what the Swedish guy said is true – it does seem that just assuming no increase in hospital capacity is a rather large mistake that should have been fairly obvious to catch?

        • RE: “…remain closed as long as needed to wait for infections to decline to manageable levels, do as much research as possible on treatments, alter the way the economy works to create new kinds of non-contact jobs, change public policy to provide for the welfare of people who have no jobs because those jobs are dangerous to public health…”

          That list of requirements basically describes a lockdown that continues for at least several years, absent a miraculous appearance of an effective vaccine.

          RE: “…we ramp up testing capacity, contact tracing capacity, and have a well reasoned plan with models to trigger localized shelter-in-place when growth seems to be exponential. There’s just a lot of planning and coordination required that we are still fiddling about with.”

          And honestly, do you think all that stuff could possibly be in place and effective a year from now? After seeing how this has been (mis)handled these past six months?

          The question isn’t what pie in the sky scenario we’d like to see implemented over the course of this spring and summer. The question is what sort of intermediate measures we take the next few months when we have a choice between limiting economic damage (with more, potentially catastrophic coronavirus spread) versus absolutely certain catastrophic economic damage from an unsustainable lockdown (with less coronavirus spread).

          Unfortunately, what I suspect we will get in our current politicized culture is one side advocating ill-advised, needlessly risky “poof it will all go away” while the other side just repeats like a broken record “everybody stay at home until it goes away”. With each side sitting behind its own barricades refusing to budge, there’s nobody in a position to implement any sort of middle ground.

        • Daniel: In your “…remain closed as long as needed to wait for infections to decline to manageable levels”, the phrase “manageable levels” is doing a lot of work, and defining it may go a long way to reconciling your views and Brent’s. (I will admit that my views are pretty similar to Brent’s. I generally really like what you write, but on this it seems like you lunge into extreme and uncharitable statements, then back away, then repeat…)

          Anyway: how do we know when we’re at “manageable levels?” Is there a single level for a state, a country, or the whole planet? Are some places there already? Why is there, in what you write, a binary choice between “remain closed” or “open fully?”

        • Raghuveer: manageable means just that, the people doing the managing keep it under control. Control is defined as cases increasing at a linear rate, or maybe linear plus a small quadratic term, but definitely sub-exponential.

          South Korea and Taiwan and, until recently, Singapore were examples. Singapore ignored its low-wage workers and of course they had a big outbreak. Now lockdowns will occur there until they get cases back to “manageable”.

          The alternative is exponential growth with doubling every N days where N is a small number: 2.5 if it’s totally uncontrolled, maybe 8 to 10 if it’s like Sweden.

          Those simply aren’t sustainable, and with a ~10 day delay from infection to seeking treatment, if it goes doubling every 2.5 days somewhere, it’ll be 2^(10/2.5) = 16 times bigger than you realize before you even start seeing cases increase.

          CA is in lockdown now and seeing ~800 cases a day. Suppose they’re all lag cases from 2 weeks ago… Suppose there are a few thousand cases in Sacramento today, including asymptomatic. If we release the lockdown and it goes back to doubling every 2.5 days in Sacramento, we could have 32,000 cases next week but we wouldn’t even know it until the beginning of the week after.

          That kind of thing just *won’t* work.
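          Here is that arithmetic as a quick sketch (the Sacramento case count is a guess, as above):

            # Hidden growth during the infection-to-detection lag (assumed numbers)
            doubling_days = 2.5
            lag_days = 10                                 # infection -> seeking treatment
            print(2 ** (lag_days / doubling_days))        # 16: growth factor before cases even show up
            cases_now = 2000                              # assumed current infections in Sacramento
            print(cases_now * 2 ** (lag_days / doubling_days))  # ~32,000 before the data reflect it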

        • The key problem is the dimensionless ratio tseek/tdouble, where tseek is the time from infection to seeking care and tdouble is the doubling time.

          When tseek/tdouble >> 1 you’re looking “very far back in time” when you look at symptomatic cases. This virus can have this ratio as high as 3-4 or so. Which means the only way to keep it under control is either to have a fixed periodic lockdown with period ~ 2x doubling time on, followed by 2x tseek off, or to have lots of testing and tracing etc to prevent getting into the exponential phase in the first place.

          Since doubling time is maybe 5-8 days under Swedish-type control, you could maybe do 16 days on, and then 20 days off continuously, or something like that. But is that substantially better than what we have, particularly given what one assumes is a dramatic collapse of demand for most of the hardest hit industries… We’ll see what happens in GA and FL and so forth. Maybe everyone will go get tattoos and perm their hair and everything. But I kinda doubt it.
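          As a numeric sketch of that cycle, using the assumptions above:

            tseek = 10       # days from infection to seeking care (assumed)
            tdouble = 8      # doubling time under Swedish-type control (assumed)
            print(tseek / 2.5)             # 4.0: tseek/tdouble under uncontrolled ~2.5-day doubling
            print(2 * tdouble, 2 * tseek)  # ~16 days of lockdown on, then ~20 days off, per cycle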

        • Also, let’s look at the economic damage from hospitalization… Let’s suppose 10% of symptomatic cases need hospital treatment, and 2% need dramatic hospital treatment. So regular hospital treatment I’m guessing is ~$30,000 and dramatic would probably be in the range of $300,000. These are just orders of magnitude based on typical costs for hospital services. This suggests the hospitalization damage of letting it go through the whole country is about $9,000 per capita. So about $3T. That’s JUST the hospitalization costs, never mind the recovery costs, loss of work capacity, etc. I can’t imagine the knock-on costs being less than 2x that.
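          In code the arithmetic is just this (all inputs are the guesses above, and it implicitly assumes essentially everyone eventually becomes a symptomatic case):

            us_population = 330e6
            p_hospital, cost_hospital = 0.10, 30_000    # regular treatment (guess)
            p_dramatic, cost_dramatic = 0.02, 300_000   # dramatic treatment (guess)
            per_person = p_hospital * cost_hospital + p_dramatic * cost_dramatic
            print(per_person)                   # $9,000 per capita
            print(per_person * us_population)   # ~3.0e12, i.e. about $3T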

          We’re stuck between a rock and a hard place. There’s no way to avoid lots of losses. The right thing to do is try to quantify them and to come up with a management strategy that minimizes them.

          We have NO LEADERSHIP. So the practically available options are all bad.

        • Joshua said,
          “But the failure of his logic, IMO, lies in what he doesn’t say. Basically, he doesn’t talk about the uncertainties that run against his conclusions. … But it’s hard for me to understand why he displays such a selective approach to uncertainty. ”

          My impression is that it’s his habit to bring up only what supports his opinion, and dismiss the rest (e.g., as “fake news”). Some people are like that.

        • I’m not an expert in control theory, but I know enough about it to cover some basics.

          Imagine you are trying to drive a car down the street. Imagine that you do it remotely via a video stream. Imagine that video stream is delayed….

          Suppose it’s delayed 1ms, how fast can you drive?

          Suppose it’s delayed 10ms, how fast?

          Suppose it’s delayed…. 10 days, how fast?

          If you have a delay of t, and a position error of d is dangerous, and you’re traveling at velocity v, then intuitively (v*t)/d has to be much less than 1 to remain in control… so if a 1 m error is dangerous and you have a 10-day delay, the ceiling is 1 m / 10 days ≈ 1.2e-6 m/s ≈ 2.6e-6 mph, far slower than anyone could realistically drive (see the sketch at the end of this comment).

          Hence, the rovers on mars go really slow because the speed of light delay is ~ 15min. https://blogs.esa.int/mex/2012/08/05/time-delay-between-mars-and-earth/

          Same basic concept for control of this pandemic, we need to make our time to detect and react to spreading be much faster than the doubling time.

          So if we plan to have a few hundred infections per day, and R0 is ~3, then we need to be able to detect a few hundred people and trace down all their contacts (probably ~30 each on average) every day, which means finding thousands of people, and probably requires tens of thousands of contact tracers.

          For CA alone, for example.

          This number is on par with the number referred to by: https://twitter.com/propublica/status/1253351877787467779 which was linked by Joshua below.

          if anyone wants to get angry and complain about being locked down, basically the people to get angry with are politicians who have control over the development of this level of contact tracing capacity.

          Until we get it in place… no doughnut for us, because *any* exponential growth doubling in less than say 20 days time is unsustainable, and getting *this* virus down to 20 day doubling time takes almost all of what we’re doing now evidently.
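          To put rough numbers on both halves of that (a sketch; the inputs are the assumptions above):

            # Max controllable "speed" with a 10-day observation delay
            delay_s = 10 * 86400                   # 10 days in seconds
            danger_m = 1.0                         # a 1 m position error is dangerous
            print(danger_m / delay_s * 2.237)      # ~2.6e-6 mph ceiling: effectively, you can't drive

            # Contact tracing workload at a "manageable" infection level
            new_cases_per_day = 300                # assumed target for CA
            contacts_per_case = 30                 # assumed average contacts per case
            print(new_cases_per_day * contacts_per_case)  # ~9,000 contacts to find every day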

        • Daniel –

          > if anyone wants to get angry and complain about being locked down, basically the people to get angry with are politicians who have control over the development of this level of contact tracing capacity.

          > Until we get it in place… no doughnut for us, because *any* exponential growth doubling in less than say 20 days time is unsustainable, and getting *this* virus down to 20 day doubling time takes almost all of what we’re doing now evidently.

          ++++++++

          That’s how I see it. I see there as being an inverse relationship between the existence of contact tracing/testing and the need for mandated social distancing.

        • “His argument is that the mandated social distancing is what causes people to stay away from the hospital.” I think this is a reasonable argument — at least as reasonable as the opposite that seems to be implied, that no one will ever go to the hospital again.

          In most of the U.S., hospitals are fairly empty, so much so that layoffs and closures are a problem. More relevant, there’s real concern among doctors that people are avoiding important care. From my local newspaper: Link.

          You could argue, I suppose, that if there were a small, *not* total, lifting of stay-at-home orders, people would use exactly the same detailed calculations they presently use to avoid going to the hospital, but this is not how people work. There is a strong sense of “we’re being told not to leave the house, so I shouldn’t leave the house.” The messages coming from the state, and the moral criticism implicit in contradicting it, drives many people’s behavior.

        • Raghuveer –

          > I think this is a reasonable argument — at least as reasonable as the opposite that seems to be implied, that no one will ever go to the hospital again.

          I don’t know where you’re getting that implied argument. I certainly never implied it.

          My argument is that it isn’t the mandated social distancing that’s keeping people away from the hospital (in emergency situations), so much as it is the fear that the hospital is an unsafe place to be. Under the argument that mandated social distancing would actually keep the hospital from being overwhelmed (an argument that Ioannidis seems to accept as valid, at least to some degree), it seems likely that mandated social distancing would actually have the opposite effect. People didn’t need mandated social distancing to be concerned about going to the hospital. Nor did they need the Imperial College projections to feel that way. And I would say that without the mandated social distancing, the news about overwhelmed hospitals would actually be worse, as would shortages of beds and PPE.

          > In most of the U.S., hospitals are fairly empty,

          This ignores the potential role of mandated social distancing in making that so. Ioannidis seems to accept that mandated social distancing actually did play a role in keeping hospitals from being overwhelmed. Do you not see the logical problem in the argument you’re presenting?

          > so much so that layoffs and closures are a problem. More relevant, there’s real concern among doctors that people are avoiding important care.

          Of course, that’s a concern. But the closures are basically a separate issue. And the concern about people avoiding care is not conditioned on the mandated social distancing. That’s the same point that I addressed above, and you haven’t actually addressed my point.

          > You could argue, I suppose, that if there were a small, *not* total, lifting of stay-at-home orders, people would use exactly the same detailed calculations they presently use to avoid going to the hospital, but this is not how people work.

          Lol. I love it when people ‘splain to me how people work.

          You still haven’t addressed the argument I am making. I’m saying that panic and fear would have been pretty much the same, if not worse, without the mandated social distancing. I have to say, it looks to me like you’re starting with a conclusion and working backwards to justify your conclusion.

          Please address the point that I made. If you accept what Ioannidis seems to accept, then the condition of hospitals being overwhelmed would likely have been worse absent social distancing. In such a situation, why would the panic and fear be a product of the social distancing rather than the state of the hospitals?

          Perhaps you are convinced that hospitals would not have been overwhelmed absent the mandated social distancing? If so, then that’s not the argument that Ioannidis made, and so it isn’t really relevant to my questions about Ioannidis’ arguments – that you weighed in on.

          > There is a strong sense of “we’re being told not to leave the house, so I shouldn’t leave the house.” The messages coming from the state, and the moral criticism implicit in contradicting it, drives many people’s behavior.

          How do you know this? How do you know whether or not people said “the news is that the hospitals are being overwhelmed with sick people. I have heard that the nurses and doctors lack the needed resources and equipment. I have heard that if you go to the hospital there are people lying in the hallways and there are people who are admitted and die without ever seeing their family members again. Therefore, I will not go to the hospital out of fear of it being worse than staying home.”

          In fact, that was very much the scenario in other countries *prior* to social distancing or other mandates being put into place.

          I will say this, however. If you insist that you have some gifted insight into how people think – there really isn’t anything further for us to discuss.

        • Raghuveer –

          I have a response to you in moderation. I don’t quite get how the moderation filter works at this blog….

        • Joshua,

          Exactly, people don’t go to the hospital because they are afraid that’s where the sick people are.

          I don’t think this fear that there are sick people at the hospital goes away (in fact it probably gets worse) when you lift mandated distancing orders.

        • Raghuveer said,
          “In most of the U.S., hospitals are fairly empty, so much so that layoffs and closures are a problem. More relevant, there’s real concern among doctors that people are avoiding important care.”

          My understanding (based both on what I’ve read and what I’ve encountered) is that when the coronavirus crisis developed, there were mandates that postponed many types of medical care in order to focus medical resources on dealing with the crisis. For example, a friend who is an abdominal surgeon had most of his scheduled surgeries canceled and was reassigned to the emergency room until further notice. Also, at my routine eye exam a bit before the coronavirus crisis, my eye doctor said it was time for cataract surgery and referred me to a cataract surgeon, so I made an appointment with him. As the coronavirus crisis emerged, I thought maybe I should cancel or postpone the appointment. When I called his office to cancel the appointment, they told me that they had already canceled it.

          There seems to be a trend now to start to ease back on these policies. See for example: https://www.whio.com/home/coronavirus-pandemic-what-you-need-know-wednesday/SD5P6XXZI5BZNG5SE6GIENPAIE/

        • Pardon the interruption, but hospitals are laying off people who aren’t involved in direct care because there is no other business. I may be nitpicking, but that’s different from saying they’re empty; this is displacing demand from the larger set of services to a smaller set.

        • “Hospitals are fairly empty” is caused by a mandated cancelling of elective surgery to free beds for a possible Covid-19 surge. The only way to fill hospitals is to either
          a) overwhelm the hospital with Covid-19 patients, or
          b) control the epidemic (i.e., physical distancing) so that some of the safety reserve of ICU beds can be used for regular business again.

          The only sense in which social distancing measures cause hospitals to be empty is that we don’t have a raging epidemic. That is the argument that is implicitly being made here.

        • Ioannidis’ point right from the beginning was that there is a lack of data and therefore all countermeasures taken/planned were not based on evidence. He seems to be right in this regard – the WHO requested a rapid review of these measures from Cochrane, and reading it confirms that a lot is based more on modelling than on actual data.
          The fatality rates were quite crude estimates based on dubious data, the real transmissibility also; the sensitivity and specificity of the common PCR tests are (I think, up to now) not known; but it is known that high mortality figures are correlated with age and preexisting conditions.

          So, based on his assumption that countermeasures based on missing data could cause more harm than the disease if the real numbers aren’t as bad as estimated, he tried to find out, beginning with the cruise ship case. He found that even under those conditions only 25% got infected, and that only 9 out of ~721 cases died (as of today the number is 12 iirc). So he concluded that the transmissibility isn’t as high as expected and the fatality rate is lower than the estimate from the WHO, but already back then he assumed that the real fatality rate would probably be between 0.05 – 0.2%, pretty much like the yearly flu season numbers.

          Like you, I also suspect that he somehow got too biased by the comparison with the flu and seems to underestimate the real number of cases that can arise in a population with no immunity against a new virus. The other epidemiologist, Wittkowski, based his calculations on herd immunity of 60 – 70% in 100 days, and argued that this is common knowledge and basically no big deal.

          A quick calculation shows ~200 Mill. cases, which means 102,000 deaths (0.05%) to 408,000 (0.2%) in 100 days, combined with 30 Mill. people having severe symptoms.
          While the Swedish comment is correct that capacity can be enlarged, can it be enlarged to this extent?

          Even if the study results were correct and representative (numbers from New York seem to give some corroboration) with respect to the real incidence, it is still a long way to go. Am I missing something?

        • When you don’t have a lot of data, you rely on models that are derived from data. The WHO published that back on February 19, estimating an IFR of 0.3%-1.0%. I don’t yet see that proven wrong.
          https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200219-sitrep-30-covid-19.pdf

          “the sensitivity and specificity of the common PCR tests are (I think, up to now) not known” — irrelevant; specificity is expected to be close to 100%, and sensitivity is good for the test itself but bad as a diagnostic instrument since it depends on when people are shedding viruses and how well these are sampled and transported.

          “it is known that high mortality figures are correlated with age and preexisting conditions.” — known since Wuhan

          “countermeasures based on missing data could cause more harm than the disease” — I see no evidence for that, excess mortality and hospitalization rates show the harm done by the disease quite clearly

          “if the real numbers aren’t as bad as estimated” — don’t see evidence for that

          “beginning with the cruise ship case,” — observed deaths today fit within the fatality rates reported by China, within statistical uncertainty. The study was premature and underestimated future deaths, i.e. it was “based on bad data”

          “already back then he assumed that the real fatality rate would probably be between 0.05 – 0.2%” — that is also based on bad data, and I’ll bet you he will turn out to be wrong. This estimate is 0.1% lower than the WHO estimate, and like the WHO rate has an ~4x spread, so can we make that a fight: Ioannidis vs the WHO, under 0.2% vs over 0.3%?

          Wittkowski: “herd immunity of 60 – 70% in 100 days”, based on which starting date? Given unrestricted growth of the disease, assumed at x5.5 per week, the whole world has herd immunity in 100 days if I apply the standard formula for logistic growth. That’s in line with common estimates for the rate of spread and not in dispute. The dispute is about whether unrestricted spread is politically desirable, aka how much do we care about the at-risk population? Given that old age is a risk, and this won’t be the last epidemic, eventually we’re going to be in that position.
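          As a rough check of that figure, a sketch assuming x5.5 weekly growth from a single case and ~7.8 billion people:

            import math
            weeks = math.log(7.8e9) / math.log(5.5)  # ~13.4 weeks to saturate the world
            print(weeks * 7)                         # ~94 days, consistent with the 100-day claim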

          Your quick calculation uses the absolute lowest estimate and is not “based on evidence”, therefore it should not inform public strategy.

          You can’t put 10% of the population in hospital at the same time. Not even 1%. Germany has one of the highest per-capita TEST rates in the world, and we can’t even test 1% of our population per week, much less put them in hospitals. Why did you not look up hospital capacity and put that in the post, instead of leaving this an open question? That doesn’t seem fair.

          “It is still a long way to go” — yes. Well. If the NYC numbers were right (which we don’t know) and 20% were infected, you could probably tell everyone in NYC “go back to normal” and then have herd immunity in 10 days, but also incur at least 40000 more deaths and 120000 hospitalizations, 98% of which could probably NOT be hospitalized and might also die. This would raise the death toll to ~150000 or ~2% of the population. It would mean that 93% of these deaths would have been caused by the “let it spread” strategy, as compared to a strategy that manages infections to stay within hospital capacity.
          https://www1.nyc.gov/site/doh/covid/covid-19-data.page
          Let that sink in. “Let it spread” in NYC, based on official numbers only, which are unreliable (!), means 100,000 excess deaths within the month that can be avoided with a mitigation strategy.

        • I see Mendel has already replied better than I could, but I also wonder about the 0.05%-0.2% infection fatality rate that you used. Maybe you are not thinking of the general population but of a younger, healthier subset?

          If it’s supposed to apply to the general population it needs some justification. Numbers from New York City put a very hard lower bound (assuming everyone has already been infected and no one else is going to die) at 0.19%, which is already at the upper end of the 0.05%-0.2% range. Plausible numbers are obviously quite a bit higher. We could debate to what extent NYC is representative, but you can’t just simply assume 0.05%-0.2% given what we know.
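          The arithmetic behind that floor is one line; the NYC death count here is my rough approximation for late April, not an official figure:

            nyc_population = 8.4e6
            covid_deaths = 16_000                  # approximate NYC deaths at the time (assumed)
            print(covid_deaths / nyc_population)   # ~0.0019: a 0.19% IFR floor even if ALL were infected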

        • Jakob, never mind, I’ve seen your reply and understand now that you were just trying to show that even using Ioannidis’ range the risk should not be taken lightly, not necessarily defending that IFR estimate.

        • It’s his own paper: “Population-level COVID-19 mortality risk for non-elderly individuals overall and for non-elderly individuals without underlying diseases in pandemic epicenters” by John P. A. Ioannidis, Cathrine Axfors, Despina G. Contopoulos-Ioannidis

      • Follow the link in the comment to see the paper. It’s basic arithmetic (and lots of assumptions). The range is not related to uncertainty. The lower end is Germany, the upper end is New York City.

  9. #10… Ouch. Also, the author of that one:

    “WILFRED REILLY, associate professor at Kentucky State University, is the author of Hate Crime Hoax and The $50,000,000 Question. His writing has been featured in Commentary, Quillette, USA Today, and the Washington Times, as well as in Academic Questions and other scholarly journals. His research interests include modern American race relations and the use of modern quantitative methods to test “sacred cow” theories.”

    Alright then.

  10. The program differs from several others conducted elsewhere in the US in that it employs random sampling, weighted across the County’s 32 municipal statistical areas.
    […]
    Our data from this week and last tell a very similar story. In both weeks, 6% of participants tested positive for COVID-19 antibodies, which equates to 165,000 Miami-Dade County residents. This figure directly contrasts with testing site data, which indicated that there were 10,000 positive cases, suggesting that the actual number of infections is potentially 16.5 times the number of those captured through testing sites and local hospitals alone. Using statistical methods that account for the limitations of the test (sensitivity and specificity), we are 95% certain that the true amount of infection lies between 4.4% and 7.9% of the population, or between 123,000 and 221,000 residents. These results are similar but not identical to other recent, non-randomized testing programs that have been conducted throughout the United States.

    Importantly, of the individuals who tested positive for the antibodies each week, more than half had NO symptoms in the seven to fourteen days prior to screening.

    https://www.miamidade.gov/releases/2020-04-24-sample-testing-results.asp
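    For reference, the standard way to back out true prevalence from an imperfect test is the Rogan–Gladen correction. This sketch uses placeholder sensitivity and specificity values for illustration, not whatever Miami-Dade actually used:

      def rogan_gladen(p_obs, sens, spec):
          # true prevalence implied by the observed positive rate
          return (p_obs + spec - 1) / (sens + spec - 1)

      print(rogan_gladen(0.06, sens=0.88, spec=0.975))  # ~4.1%
      print(rogan_gladen(0.06, sens=0.88, spec=0.90))   # negative: false positives swamp a 6% signal

    The second line is the point made repeatedly in this thread: once (1 − specificity) approaches the observed positive rate, the point estimate collapses.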

    • Thanks for this! This is consistent with my basic assumptions that in many places we’re at around 5% or less. The false positives are still an issue; they give a calculation in there of 4-8% or so based on what they thought the false positive rate was.

      One thing this clearly shows is that your earlier models in which the pandemic is already like half-over and everyone had it already are clearly not viable. We’re left with the idea that although there’s under-ascertainment, for the most part we have a decent estimate of the growth rate in cities, of doubling every 2-4 days, and we’re in the “beginning” phases in most places other than around NYC.

      This suggests if we let-up but retain a bunch of “caution” in places like Georgia, or FL, we might have something sort of like the Swedish results: doubling every week, and viable for just a month or two before large numbers start dying.

      https://ourworldindata.org/grapher/covid-confirmed-deaths-since-5th-death?country=SWE+CHE+DEU

      Nothing makes me think that we’re anywhere near “herd immunity”

      • My take on the Swedish path is not that they underestimate the severity, but that they are looking at the vulnerability level by age, and thus are allowing faster spread among the age groups that are less at risk and which, they believe, will socially gather more anyway. There is in that sentence an assumption: the Swedes think the Swedes will act rationally. That is, they believe they’ll socially distance by vulnerability, with the young and healthy less, the young with compromised immune systems more, etc. Just a guess, but I think they under-estimated the virus’ ability to spread into the vulnerable elderly. I’ve read they have a similar issue with it getting into nursing homes.

        My guess is they also thought (and think) they can survive this wave and that tests will be more widely available soon so they can adjust their strategy to better protect the vulnerable. That in my opinion is what we should be working toward here. As in, I live a stone’s throw from a smallish nursing home which was reported today as being almost entirely infected (45 patients + 13 staff). I can poke at the security: it has never been ‘shut’ with a guard in front (perfect for the National Guard?), and I see people come and go often. That is, even though I only pass it by 3 or 4 times, over half the time someone is going in or out, and I can’t believe those are the only times that happens during the day. (I’ve noted before that the state so under-thought the issue that they actually started emptying nursing homes for use as extra hospital beds when they discovered pervasive infection had occurred. I can’t fathom why they thought moving the most vulnerable made sense, but they actually wheeled these ancient people out on gurneys.) But … does it matter?

        My belief bifurcates. On the one hand, I think we should have isolated the elderly as our first public health action. We even have structures in place now for doing that. I’m part of that: our Town has a network of people who check old people and other shut-ins during emergencies. And then I wonder how well that could have worked given what appears a high transmission rate among the elderly vulnerable. These nursing homes are little experiments: the population appears to share an attribute that means even a relatively rare injection of infection will break through the barriers you can erect until and unless you have reliable tests that can be quickly administered. As in, I kept shaking my head over all the screaming about ‘we need more tests!!!!’, when that kind of testing appeared to me to be of little immediate utility because it took several days or longer to get a result. What if you took the test a day later and it was positive? So testing with such a time lag would, to me, encourage riskier behavior by those tested. And those tested were most likely to be people who worked with the elderly in a nursing home. So if you have 20 people working there, and you test them all, and 1 or 2 would have tested positive in the intervening 3 or 4 or more days, then you may have just killed a bunch of old people.

        What that suggests is something else that sort of amazes me: we have done next to nothing to isolate the people working with the known sick. I know people who go home to families. They can’t truly isolate at home and that makes their families vectors. I’d have given healthcare workers a test every day so there would be a rolling negative counted each day. I think of unknowingly infected healthcare workers putting on their PPE, except of course they put on their PPE while they were infected.

        I’m sorry for wandering into this from Sweden, but the logical branchings of this kind of a problem fascinate me. We have groups with fairly well known vulnerability, and that profile has been developing with case load (confirming age and, for example, obesity-related comorbidity), and you need some form of herd immunity, whether by exposure or by vaccine (or both). Sweden sees a path through the young. Lots of people see that path. But you need to get that path to work.

      • > Nothing makes me think that we’re anywhere near “herd immunity”

        1) Herd immunity depends on the R0; I think outside places like NYC it is much lower than people have been saying. Since I was modelling the national data I figured the average R0 across the country (before the hysteria began in March) was more like Swine flu at 1.5. But now I’m thinking I should go lower, more like a normal flu strain at 1.1 or so. In that case herd immunity is reached at, let’s say, 10-30% of the population immune (see the sketch at the end of this comment). I’m still playing with it to calibrate to total US mortality, etc (obviously there aren’t ten million extra deaths or something).

        2) See my post above about the 5-10% of people in Shanghai who were hospitalized for a couple of weeks and tested positive yet had no antibodies.

        3) Check the claimed specificity (90%) and sensitivity (88%) of the test used in that Miami study:
        https://www.miamiherald.com/news/coronavirus/article241750556.html
        https://www.oxfordbiosystems.com/COVID-19-Rapid-test

        4) Here are our friends the WHO to share their evidence based medicine approach:

        “What the use of these tests will do will measure the level of antibodies. It’s a response that the body has a week or two later after they’ve been infected with this virus,” she said at a news conference at WHO’s Geneva headquarters. “Right now, we have no evidence that the use of a serological test can show that an individual is immune or protected from reinfection.”

        https://www.cnbc.com/2020/04/17/who-issues-warning-on-coronavirus-testing-theres-no-evidence-antibody-tests-show-immunity.html

        How could someone say there is no evidence that antibodies confer immunity? This organization needs to be shut down, they are entirely incompetent.
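        Returning to point 1, as flagged there: a minimal sketch of the classical herd-immunity threshold, 1 − 1/R0, for the R0 values mentioned:

          for r0 in (1.1, 1.5, 2.5):
              print(r0, 1 - 1 / r0)  # ~9%, ~33%, 60% of the population immune

        which is where a 10-30% range comes from for R0 between 1.1 and 1.5.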

        • Good find about the test! Here is the paper written about it; it explains in great detail (with pictures) how lateral flow assays work and how they’re used, and it gives the table that is missing from your source.
          https://onlinelibrary.wiley.com/doi/full/10.1002/jmv.25727

          We learn that EITHER IgM or IgG specificity is 116/128 = 90.6% (95% CI 84.2-95.1%), and that BOTH IgM and IgG specificity is 127/128 = 99.2% (95% CI 95.7-100%).
          The BOTH method reduces the sensitivity to 256/397 = 64.5%.

          If they went with the manufacturer’s instructions, their result is meaningless.
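          For anyone who wants to reproduce those intervals: they are exact (Clopper–Pearson) binomial CIs; a minimal sketch:

            from scipy.stats import beta

            def clopper_pearson(k, n, alpha=0.05):
                lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
                hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
                return lo, hi

            print(clopper_pearson(116, 128))  # ~(0.842, 0.951), the EITHER specificity
            print(clopper_pearson(127, 128))  # ~(0.957, 1.000), the BOTH specificity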

        • > “How could someone say there is no evidence that antibodies confer immunity? This organization needs to be shut down, they are entirely incompetent.”

          You are more of an expert than the technical lead researcher from the WHO with over 100 publications?

          The way I understand it, having listened to Prof. Drosten’s podcast and looking around a bit on the web just now, our bodies have 3 ways of fighting the virus.

          a) Innate immune response, using T-cells. In this case, antibodies might not appear at all.

          b) weak antibody response: antibodies bind to the virus, but don’t stop it from spreading. White blood cells digest the antibody-marked viruses and eliminate them. Or the T-cells win in time.

          c) strong antibody response: neutralizing antibodies bind to the SARS-CoV-2 virus in a way that blocks it from accessing the ACE2 receptor in human cells: the viruses can no longer enter body cells and replicate, the infection fades out.

          What you want for immunity is response c), but this is hard to test for. Normally, you’d run a neutralization test with a patient’s serum and see if it keeps the virus from growing in lung tissue in the lab. You can design an easier antibody test and check if it finds the neutralizing antibodies specifically, but this is difficult and takes time.

          And of course there’s cross-reactivity, i.e. the test might detect the immune response to a different virus.

          The Florida test was built using some genetic information from the virus to figure out what a surface protein looks like. It might only bind to weak antibodies, and that does not prove you have immunity.

          Further reading:
          https://www.sciencemediacentre.org/expert-reaction-to-comments-made-by-dr-maria-van-kerkhove-at-the-who-that-there-is-no-evidence-antibody-tests-can-show-that-an-individual-is-immune-or-is-protected-from-the-infection/

          P.S.: You sometimes assert things as facts that you are not sufficiently knowledgeable about and that you have not researched enough. That happens to all of us, but when you do so rudely and aggressively, it reflects badly on you.

        • Here is what she said:

          “Right now, we have no evidence that the use of a serological test can show that an individual is immune or protected from reinfection.”

          There is evidence from many different illnesses that an antibody test against something found more often in the sick than the not sick indicates protection. We don’t need to wait for a peer reviewed publication to consider that evidence.

          Whether these specific tests are inaccurate, or the antibodies wane, or whether this nCoV-19 they are testing for is even the cause of the characteristic severe HAPE-like illness, is not established. But I would not say there is “no evidence” for it.

          My paraphrase of that is: “Antibodies show up after challenge with the suspect infectious agent and have been shown to be protective in the past for other agents, but that is not evidence that antibodies protect in this case because, as we all know: correlation is not causation.”

          https://www.bmj.com/content/363/bmj.k5094

        • https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-17apr2020.pdf?sfvrsn=1bd9c4cd_2 page 8
          “ED Yes, thanks. Hi. I’m Eduardo from Teletresa; thanks for your time. Here in Chile a card will be applied for recovered patients from COVID-19. The Government said these patients stopped infecting the rest of the population. We know that it is a measure that is also analysed in other countries. My question is, what is the WHO opinion on this measure, is it recommended and should a PCR test be required to deliver this discharge card? What is the international experience like? Thanks”

          […]

          “MK To supplement, yes, this is an ongoing issue and we will be issuing some guidance over the weekend on this because there are a lot of countries that are suggesting to use rapid diagnostic – rapid serologic tests to be able to capture what they think will be a measure of immunity.

          “As Mike has said, right now we have no evidence that the use of a serologic test can show that an individual is immune or is protected from reinfection. What the use of these tests will do is measure the level of antibodies and it’s a response that the body has a week or two later after they’ve been infected with this virus and these antibody tests will be able to measure that level of seroprevalence, that level of antibodies but that does not mean that somebody with antibodies means that they’re immune.”

          This was not the statement of principle you are treating it as; it was a statement about the current evidence for the currently existing tests, and it is entirely true for the reasons I have explained.
          These tests can find antibodies, but it is highly likely that they can’t find neutralizing antibodies, and that means they won’t work to test for immunity. Now Ms. Kerkhove can’t say that, because she has no evidence that these tests do not work, so she says she has no evidence that they work to find immunity.

          (It’s also worth reading Mike Ryan’s answer that comes before Maria’s; he has a very thorough and well-thought-out approach to the more organizational side of that issue. I have watched some WHO press briefings from beginning to end on YouTube and have only the highest respect for these people.)

        • Plenty of people are taking the results of these tests as evidence for actual protective antibodies, as they should based on the track record of such tests for other diseases and the info presented to us (they are more often positive in people who tested positive via PCR, etc).

          Like I said, I am open to the idea these tests are not even for antibodies against the correct infectious agent. But saying they offer “no evidence” for protection is wrong.

          Calling the test results “no evidence” is totally irrational.

        • Anon,
          The antibody-producing cells respond to infection by multiplying. The part of their genome that codes for the antibody protein is prone, by design, to mutation, so a whole range of antibodies are produced, each with different affinity for the viral protein. During infection there is natural selection for higher-affinity antibodies, and the cells that make these high-affinity antibodies are the ones that persist after the infection is passed and provide ongoing immunity. This whole process is modulated by the host-virus interaction so that the residual antibody affinity varies depending on the virus and between individuals. The antibody tests generally detect both low- and high-affinity antibodies, but low-affinity antibodies may not provide long-lasting protection against re-infection.
          Hence the assertion that having detectable antibodies does not necessarily confer immunity – this is a cautious statement.

        • > Hence the assertion that having detectable antibodies does not necessarily confer immunity – this is a cautious statement.

          This was not the statement. The statement was that there was “no evidence” for protection. The presence of detectable antibodies is 100% evidence for immunity. Anyone who says otherwise has no idea what they are talking about.

          I can’t believe people are still giving the WHO some benefit of the doubt after every single thing that was obviously true they said there was still “no evidence” for was later shown to exist.

        • Anoneuoid wrote:
          > The presence of detectable antibodies is 100% evidence for immunity.

          Support it!
          Give me a citation, please!

        • The purpose of antibodies is immunity:

          An antibody (Ab), also known as an immunoglobulin (Ig),[1] is a large, Y-shaped protein produced mainly by plasma cells that is used by the immune system to neutralize pathogens such as pathogenic bacteria and viruses.

          https://en.wikipedia.org/wiki/Antibody

          We do not need to wait for a study of these specific antibodies to take them as evidence for immunity.

        • I think we need to maybe disambiguate the word “evidence” here.

          Merriam-Webster gives definitions

          1)
          a) an outward sign: INDICATION

          and

          b) something that furnishes proof: TESTIMONY

          Positive antibody tests are an INDICATION that there is immunity; they are not PROOF.

          Anoneuoid is using 1a: there is definitely evidence in favor of immunity, not conclusive evidence, but definitely evidence. Whereas before you take the antibody test we have NO evidence; afterwards we have SOME evidence.

          The problem is that “Evidence Based Medicine” has come to mean “conclusive proof through substantial RCT-based medicine” and is therefore essentially useless in this pandemic. Worse than useless: it ignores all historical precedent and models in favor of incorrect reasoning along the lines of (though not always quite so extreme as) “If you haven’t proven it through an RCT you should assume it’s incorrect”.

          That is very wrong.

        • Anoneuoid –

          > We do not need to wait for a study of these specific antibodies to take them as evidence for immunity.

          Why aren’t they just evidence of the body fighting off infection, as opposed to evidence of immunity?

          Do all antibodies in our system convey immunity?

        • > Why aren’t they just evidence of the body fighting off infection, as opposed to evidence of immunity?

          > Do all antibodies in our system convey immunity?

          For dengue fever the presence of an “anti-sweet spot” level of antibodies can mean you get sicker if exposed to a similar strain. That does not mean the presence of antibodies is not evidence for immunity.

          If you see antibodies, it makes immunity more likely to be the case. Anyone who says otherwise has no idea what they are talking about.

        • Anoneuoid –

          > This was not the statement. The statement was that there was “no evidence” for protection.

          I thought the statement was that there’s no evidence of immunity. There’s a pretty big difference there. Why did you change it?

        • Anoneuoid –

          > That does not mean the presence of antibodies is not evidence for immunity.

          I don’t think anyone said that. I know I didn’t.

        • Anoneuoid –

          > The purpose of antibodies is immunity:

          Wouldn’t it be valid to say “the” purpose of antibodies is to fight off a pathogen?

          Maybe the problem is with the use of “the” there?

        • Daniel –

          I think this is a pretty silly argument. We all know what the statement was intended to mean. Nonetheless…

          > a) an outward sign: INDICATION

          I think that an indication of immunity is when someone is exposed to a virus and doesn’t get sick. An antibody, it seems to me, is an indication that the body is fighting off a pathogen.

        • Daniel Lakeland:

          How would you characterize the evidence provided by these tests conditioned on our current understanding of covid-19?

          1. Very Weak
          2. Weak
          3. Moderate
          4. Strong
          5. Very Strong

        • I’d probably say moderate. Evidently they have some antibodies, so they will react more strongly to a second infection challenge. Sometimes this won’t be enough, sometimes it could even hurt (hyperinflammation), but there aren’t hordes of reports of second infections even though there is probably plenty of opportunity for such in, say, Italy or NYC. So, moderate.

        • What is your over-under percentage of immune/total-positive?

          All I know, is that the presence of antibodies is evidence for protection against whatever those antibodies were raised against. Saying that is “no evidence” for this is ridiculous. Unless you think all the people who came up with these tests and everyone who gave feedback on them are totally incompetent or scammers, the positive antibody tests are indeed evidence for immunity. It may be weak evidence, but it is evidence.

          But also I couldn’t make a bet like you describe, because most antibodies wane eventually. And this should be quicker if there is mild illness. Here is a report that IgG was stable from 18-51 days post symptom onset, but it looks like IgM was already waning. This is totally consistent with what we see in antibody responses to other viruses:

          The study included 211 confirmed COVID-19 patients in Chongqing Three Gorges Central Hospital from January 23rd to March 3rd, 2020, among which were 181 mild and moderate cases (the mild group) and 40 severe and critical cases (the severe group). There were 86 female and 135 male patients, with an average age of 47.8 (±15.1) years.

          https://www.medrxiv.org/content/10.1101/2020.04.20.20065953v1

          They say mild, but from other papers I saw, “mild” could mean in the hospital for 3 weeks. I don’t know if that is a cultural thing or if they use a more severe definition of “mild” than I would use (cold-like symptoms that you normally self-treat at home and even go to work with). I only skimmed the paper so maybe there is further detail in there.

        • > I thought the statement was that there’s no evidence of immunity. There’s a pretty big difference there. Why did you change it?

          This is the quote I am referring to:

          “Right now, we have no evidence that the use of a serological test can show that an individual is immune or protected from reinfection.”

          Also, please excuse any blank or double post. I think Firefox has some new shortkey that submits comments, which I hit by accident when trying to paste.

        • The confusion that can arise is an indicator that this was lousy messaging. I think responsible messaging from the WHO would have said something like

          “these antibody tests offer some evidence that people will have some level of immunity, but they are not strong evidence that people won’t get sick again, both because we don’t know how effective the response will be, and because there are a certain percentage of false positives”

          The WHO has consistently come out saying what sound like strong statements “there is no evidence that people will have immunity” or “there is no evidence that people will get this at hospitals” or etc etc. Sometimes it seems like they’ve walked back almost everything they’ve ever published to the public. Probably not, but there have been plenty of these bad messages.

        • Yes, thank you. But we both know that this messaging is not just the WHO playing politics but represents a philosophy that has been adopted by a seemingly influential subset of the medical profession that basically denies using any prior experience or information.

        • > “Whether these specific tests are inaccurate, or the antibodies wane, or whether this nCoV-19 they are testing for is even the cause of the characteristic severe HAPE-like illness, is not established. But I would not say there is “no evidence” for it.”

          We have evidence that the lateral flow assays are inaccurate (discussed here before).
          I don’t think we have evidence for the long-term prevalence of the antibodies except by analogy, I agree with you on that (at least I couldn’t find it).
          We have evidence that the SARS-CoV-2 virus (what is “nCov-19”?) causes the illness you seem to be referring to (everyone who has it shows the characteristic lung CT, and can be made to test positive for SARS-CoV-2).

          > “There is evidence from many different illnesses that an antibody test against something found more often in the sick than the not sick indicates protection.”

          Yes, because those tests aren’t rubbish; they’re designed to detect neutralizing antibodies. The tests on the market now are not so designed, or they would mention it and market that. The paper on the Miami test makes it clear that that’s not what the test was designed for. The chance of a test detecting neutralizing antibodies by accident is low.

          > “Plenty of people are taking the results of these tests as evidence for actual protective antibodies, as they should based on the track record of such tests for other diseases and the info presented to us (they are more often positive in people who tested positive via pcr, etc).”

          These results are not evidence of immunity. What people are taking them for is not evidence of what they do. Evidence doesn’t work by voting. The tests for other diseases are designed to detect protective antibodies, these are not. There is no information presented to us that these tests say something about immunity.

          We have evidence that these tests detect some antibodies to SARS-CoV-2, I agree with you on that. We have no evidence that these antibodies are neutralizing antibodies, the kind of antibody that would prevent a re-infection. People hope these tests prove immunity, but hope is not evidence.

          You are acquiring a reputation for holding on to unsupportable claims.

        • Ok, how much do you want to bet people who have antibodies are less likely to get sick again from the virus they are raised against?

          I will bet 1 bitcoin, escrow is available.

        • Anoneuoid –

          > Ok, how much do you want to bet people who have antibodies are less likely to get sick again from the virus they are raised against?

          Doesn’t seem to me that being less likely to be sickened by a pathogen (as a result of having antibodies) means that you’re immune from that pathogen.

          Nothing like blog arguments about semantics, eh?

        • Apparently it’s the word immunity that needed defining. The first Google result gives:

          >the ability of an organism to resist a particular infection or toxin by the action of specific antibodies or sensitized white blood cells.

          So, yes it does mean immunity.

        • Daniel –

          Sometimes dictionary definitions are of limited value in these discussions.

          I think that in common usage, we use being “immune” to a virus to mean that you won’t get sick from that virus.

          For example, we speak of “herd immunity” as meaning that the people with immunity won’t get sick. Not that they would be less likely to get sick.

          My guess is that the spokeswoman meant that it’s still very much up in the air as to whether people won’t get sick if they got sick once and recovered.

        • I think it highlights the diversity in this blog. Anoneuoid was a biomedical researcher in the past, and I’m married to a biomedical researcher and do analysis of biomedical data at times. So to me immunity means that the antibody mediated pathways for immune system reaction will be activated immediately as opposed to days to tens of days after challenge.

        • Daniel –

          Fair enough.

          But still, my sense is that the spokeswoman was making a statement that targeted the common use of the term immunity. IOW, she was saying we don’t have evidence that people who have been sick and recovered can’t get sick if they’re exposed to the virus again.

          Seems to me that’s what people are referencing when they say that the immunity might be limited – meaning the period during which you wouldn’t get sick if exposed again is limited.

          Or it’s what people mean when they say that it might take a while to develop immunity – meaning that it might take a while before you definitely won’t get sick if exposed again.

        • > Doesn’t seem to me that being less likely to be sickened by a pathogen (as a result of having antibodies) means that you’re immune from that pathogen.

          I’d define it as: the WHO will say “It turns out antibodies are protective”, or something along those lines.

          Also, this concept of absolute immunity does not exist. You have probably been reinfected with measles many times but only got sniffles for a day or a slight diarrhea or whatever.

        • The exact wording was “no evidence that [..] an individual is immune or is protected from reinfection”. This was a freely spoken answer, not a pre-written one. Therefore, I take “protected from reinfection” to be a clarification of what she means by “immune” — not a likelihood, but protection.
          Anoneuoid, you’ve said yourself there is no absolute protection from reinfection. Therefore, you agree with Van Kerkhove’s statement.

          You are using out-of-context semantics to somehow prove she is wrong, and that’s a terrible way to conduct a debate.

          It is my opinion that these point-of-care serological assays do not indicate with satisfactory certainty that an individual who tests positive will experience no symptoms when exposed to the virus again.
          (I was going to put that certainty at 95%, but we already have evidence that this is the lower bound: in January in Munich, only 11 of 217 people exposed to a virus carrier for over 15 minutes at under 1.5m showed symptoms.) The false positive rate on most assays alone, coupled with the presumed prevalence of immunity in the population, prevents this certainty, quite apart from our disagreement over whether a positive test might reflect only a weak immune response.

          It’s not medically responsible to give people a “you can’t get it again” pass on the basis of these tests, given the evidence that we have. That is the meaning of van Kerkhove’s statement in context, and I agree with her. (Your wikipedia quote did nothing to convince me otherwise.)

          It seems likely to me that we are going to have a serological test that can document a strong immune response with high confidence. Once such a test exists and has been documented with evidence, the WHO will change its stance. But that will not mean that they “walk back” their statement, it means that now, we do not have that evidence, but then, we do.

        • There’s an additional aspect that I think needs to be kept in mind…

          We’re not just hoping those who have recovered from an infection are immune from having symptoms if re-exposed. We are hoping that they are immune from beginning to shed virus again. From an epidemiological standpoint, if you can re-acquire the COVID-19 infection repeatedly and shed virus each time, that’s very bad news.

        • Daniel –

          What Mendel said.

          What Brent said.

          I actually think that the messaging that would have worked for you would have created more confusion for most people, who like me don’t have a technical background, and who take the word immunity to mean “can’t get sick again” and “won’t be infectious in the future.”

          But I learned something from the debate about the semantics.

        • Daniel –

          As it happens, I just surfed over to the FOX News website. Saw this sub-headline:

          > Coronavirus immunity not assured after infection, WHO says, imperiling ‘passport’ idea

          Below, was this:

          > Some governments have suggested that the detection of antibodies to the SARS-CoV-2, the virus that causes COVID-19, could serve as the basis for an ‘immunity passport’ or ‘risk-free certificate’ that would enable individuals to travel or to return to work assuming that they are protected against re-infection,” the WHO wrote. “There is currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection.”

          In that context, I think the important message got out. Would you disagree?

        • In that context, it isn’t so bad, but I still think the messaging should be about risk tradeoffs. A message like:

          “It is still plausible given our knowledge today that people who test positive for the antibodies could still spread the disease, making passports a dangerous idea that could ignite further waves of transmission”

          That’s the real issue IMHO. One of the mechanisms for asymptomatic spread today may well be that people with exposure to commonly circulating coronaviruses have partial immunity that keeps the infection from raging while still letting them spread. The same could be true for SARS-Cov-2 antibodies as well.

        • So no one wants to take the bet?

          That means you believe that antibodies are evidence of protection, but are fine with the WHO telling everyone the opposite. Apparently because “messaging” is a more important role of the WHO than providing correct information about health. I.e., you think it is some kind of propaganda organization, and that’s ok.

        • Anoneuoid, I agree with you that the tests provide evidence of immunity in the immunological sense. What you might call “at least partial immunity”.

          So when WHO goes out and talks to the public, that’s what they should say… “there is some evidence for at least partial immunity” but then I also think they should go on to say something like “however we don’t know whether that’s sufficient to keep people safe and prevent them from spreading the disease, so at this time we don’t recommend using these tests for a kind of passport”

          I think those are both good things to say.

          Lots of people *don’t even really understand what a virus is*. Even lots of people who are in charge of entire countries. You have to remember that you and I and others like us are at a totally different background education level than many of the people in the world.

          So, no I don’t want to take your bet, because I agree with you, but I still want to say that WHO should play an accurate education role, and as such they should be more accurate, and more clear in their education role than they are.

        • > So when WHO goes out and talks to the public, that’s what they should say… “there is some evidence for at least partial immunity” but then I also think they should go on to say something like “however we don’t know whether that’s sufficient to keep people safe and prevent them from spreading the disease, so at this time we don’t recommend using these tests for a kind of passport”

          I 100% agree. They should provide accurate information. I mean there are apparently people celebrating after testing negative on these antibody tests anyway. No matter what they say people are going to be confused, so just be accurate.

        • At this point, except as an academic matter, I don’t really give a shit if people have antibodies. I care about whether or not they can go back to work with a very high degree of confidence that they won’t get sick and don’t present a risk to others.

          I’m not able to evaluate what level of which antibodies conveys what level of immunity.

          As a matter of communicating the state of the science to scientists, of course the WHO should be presenting the details of the science in full context. If they are making broad statements to the public in the middle of a pandemic, where people are talking about issuing “passports,” I’m fine with overly broad characterizations of the state of the science – within some parameters. What those parameters should be is tricky, and I’m willing to try to assess whether they’re close enough to the balance point.

          This isn’t a unique context. I think of Stephen Schneider talking about how to communicate the uncertainties in climate science. Yes, if people are so inclined, they can find fault with his position – but basically he was talking about the conundrum.

          As Andrew says about statistics, communicating with the public about this pandemic is hard. I’m willing to accept that there is a range of acceptable public messaging.

          Providing detailed evidence to the expert community about the state of knowledge is a different matter. I would have very little sympathy for crafting messaging in that context.

          I look at something like that Fox News article, and in context, I think that what the WHO is communicating gets the job done reasonably well.

          You are in a very special category.

        • The WHO published a “Scientific Brief” on the topic of immunity passports on Friday, it’s worth reading in full.
          https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19

          Partial quote:

          “The measurement of antibodies specific to COVID-19

          The development of immunity to a pathogen through natural infection is a multi-step process that typically takes place over 1-2 weeks. The body responds to a viral infection immediately with a non-specific innate response in which macrophages, neutrophils, and dendritic cells slow the progress of the virus and may even prevent it from causing symptoms. This non-specific response is followed by an adaptive response where the body makes antibodies that specifically bind to the virus. These antibodies are proteins called immunoglobulins. The body also makes T-cells that recognize and eliminate other cells infected with the virus. This is called cellular immunity. This combined adaptive response may clear the virus from the body, and if the response is strong enough, may prevent progression to severe illness or re-infection by the same virus. This process is often measured by the presence of antibodies in blood.

          WHO continues to review the evidence on antibody responses to SARS-CoV-2 infection.[2-17] Most of these studies show that people who have recovered from infection have antibodies to the virus. However, some of these people have very low levels of neutralizing antibodies in their blood,[4] suggesting that cellular immunity may also be critical for recovery. As of 24 April 2020, no study has evaluated whether the presence of antibodies to SARS-CoV-2 confers immunity to subsequent infection by this virus in humans.”

          I understand that “cellular immunity may also be critical for recovery” means that if that turns out to be true, a re-infected person may have symptoms again.

        • I read “cellular immunity may also be critical for recovery” as meaning that if you’re reinfected you might be *asymptomatic* but still infectious. That is, the virus is replicating, the T-cells are destroying it, so you don’t get sick, or only very mildly sick, but you’re still expressing the virus.

          It’s likely that if you do have a cellular immune response you’re not going to be one of the people in the hospital, and it may be that you don’t even feel sick enough to stay home; you might well be one of those people who feels basically fine but spreads the virus to 9 people at a restaurant while eating lunch.

    • Anoneuoid,

      Nice find! Thanks for sharing!

      A few thoughts after reading:
      … 1. “who were randomly selected to participate in the initiative” — ah, the trappings and evils of passive voice. Who did the selecting, and how? This is crucial. It is hard to fully evaluate the study without knowing this part of their methods — as the Stanford study has made clear. It *seems* like they followed a good strategy: randomly selecting participants by age, etc., and calling them with a request to participate (https://www.miamiherald.com/news/coronavirus/article241750556.html). If further details are available and I am missing them, disregard this comment and share them!

      … 2. Kudos to them for recognizing the importance of determining the true count of infections. Further kudos for recognizing and admitting that the current public testing sites are providing only limited information. Although I’m unsure why they don’t mention how this value is crucial to determining the fatality rate, too. But that is me being nit-picky more than anything else.

      … 3. “While 6% represents a substantial infection rate, the fact that this number held steady implies that our physical distancing is working and that together, we are “flattening the curve.” Not sure they really can make this claim, for two reasons: (1) they do not stratify or control to separate the effects of social distancing from other factors like weather, improved hygiene, the natural course of the disease, something we don’t know, etc., that could be causing the decrease in infections; (2) Is a 2-week sample really enough? And is 700 samples per week enough? I’d have to do the math, but intuition suggests it may not be (a quick sanity check below).
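
      For what it’s worth, here is the sampling-error part of that math, as a minimal sketch (binomial error only; it ignores test error and selection issues, and the n = 700 comes from the article):

      ```python
      # Sampling error alone for a weekly serosurvey: n = 700, observed 6%.
      import math

      n, p_hat = 700, 0.06
      se = math.sqrt(p_hat * (1 - p_hat) / n)        # binomial standard error
      lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
      print(f"approx 95% CI: {lo:.1%} to {hi:.1%}")  # roughly 4.2% to 7.8%

      # Week-to-week moves smaller than ~2 percentage points are indistinguishable
      # from noise, so "held steady at 6%" is weak evidence the true rate is flat.
      ```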

      Also, this article made me wonder two things:

      A. Could weather be a factor in lower infection rates? The climate in Miami vs. NYC differs, with Miami overall being a less welcoming environment for coronaviruses (which overall seem not to like consistent heat or sunlight). This is food for thought, more than anything else.

      B. It seems like the idea that epidemics have “fat” tails has, for some reason, become canon: the distribution of outbreak sizes decays slower than exponentially, so the tail can have as much effect as the bulk and also pose great risk. NN Taleb is all over Twitter popularizing this idea; he even has a paper forthcoming in Nature Physics on it. (https://arxiv.org/pdf/2004.08658.pdf)

      What data justify this conclusion? I have looked, but found only derivations and simulations, not much actual epidemiological data (if I am missing it, please share!). So is it really fair to assume pandemics have fat tails?

      Why? What if at some point the behavior of the virus changes, or some other change happens, and the case counts start to decay rapidly from their current value, e.g., y = b – e^x? This would mean that after some threshold, the number of cases, etc., drops quickly toward zero. We know biology is tricky; cells, bacteria, viruses — living organisms in general — are far more complex than we often realize. They are also apt to act in unexpected and surprising ways that ruin your assumptions and models. (As an engineer doing research in molecular/cancer biology, I can attest to this; sometimes I miss the relative certainty of mechanics, dynamics, etc.)

      Sure, one hedges risk by assuming the worst case — i.e., that pandemics DO have fat tails. Fair in the short term. But it is still a hypothesis and therefore needs testing, because getting it wrong can lead to poor choices and policy as well. (A toy illustration of what fat tails would actually imply is sketched below.)
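
      To make the stakes concrete, a minimal sketch (distributions and parameters chosen arbitrarily for illustration, not fitted to any pandemic data): under a fat-tailed outbreak-size distribution the single largest event carries a non-trivial share of the total; under a thin-tailed one it is negligible.

      ```python
      # Fat vs. thin tails: how much of the total comes from the single largest draw?
      import random
      random.seed(1)

      n = 100_000
      fat  = [random.paretovariate(1.2) for _ in range(n)]   # fat-tailed (Pareto)
      thin = [random.expovariate(1.0)   for _ in range(n)]   # thin-tailed (exponential)

      for name, xs in (("pareto alpha=1.2", fat), ("exponential", thin)):
          print(name, "largest draw / total:", round(max(xs) / sum(xs), 4))

      # Typically on the order of a few percent for the Pareto draws vs. ~0.01% for
      # the exponential: with fat tails, the rare giant event dominates the average.
      # Whether real pandemic data look like the former is exactly the question above.
      ```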

    • 400 participants the first week, 700 each week thereafter.
      6% positive in both later weeks means 6% of 700 = ~42 samples.
      165,000 residents presumed infected, with 287 deaths; now we’re looking at a 0.2% fatality rate, assuming deaths are not undercounted, delays cancel out, yada yada yada (the arithmetic is spelled out below). Deaths are still doubling weekly.
      Age prevalence is not given, another uncertainty that strongly affects the fatality rate.
      We do not know anything about the assay employed nor the test methodology. We do know that the test subjects were selected randomly.
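
      Making that arithmetic explicit, as a sketch using only the numbers quoted in this thread (no correction for test error, death undercounting, or reporting lags; the population figure is approximate):

      ```python
      # Implied infection fatality rate from the Miami-Dade numbers above.
      pop = 2_750_000        # Miami-Dade population (approximate)
      prevalence = 0.06      # reported antibody-positive fraction
      deaths = 287           # reported deaths at the time

      infected = pop * prevalence      # ~165,000 presumed infections
      ifr = deaths / infected
      print(f"infected ~ {infected:,.0f}, implied IFR ~ {ifr:.2%}")  # ~0.17%

      # With deaths still doubling weekly, this is a floor that keeps rising
      # until the death curve flattens.
      ```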

      The confirmed cases are still going up by 20% weekly, after doubling weekly 10 days ago.
      Why isn’t the antibody prevalence going up?
      Why is it staying at 6%?

      Florida data is at https://fdoh.maps.arcgis.com/home/search.html?t=content&q=tags%3A%22COVID%22&start=1&num=20&focus=layers-layerfiles

      • RE: “The confirmed cases are still going up by 20% weekly, doubling weekly 10 days ago.
        Why isn’t the antibody prevalence going up?
        Why is it staying at 6%?”

        I suspect every working statistician has had the experience of someone bringing in a dataset that has obvious (to the statistician) problems with data quality or with the measures/data sources being very poor.

        The statistician of course points this out first thing but the person with the data comes back again and again with specific questions about why this number seems too big or why the regression model has the sign “wrong” on several coefficients or various other things just not adding up in the results.

        All you can do in those cases is keep reminding them that the data they started with is, pardon the term, crap. And that if the results had actually made sense, it would have been because of some other error obscuring the data quality problems.

        That seems to be the situation for the vast majority of COVID-19 related analysis being published so far.

      • RE: “The confirmed cases are still going up by 20% weekly, doubling weekly 10 days ago.”

        By “confirmed cases” you mean the number of infections ascertained by an ever-varying number of tests rationed by an evolving set of pre-qualification criteria? That’s what “confirmed cases” means in most USA jurisdictions at the moment. Reifying the rate of change in the number of positive tests as equivalent to the rate of change in infections is a rookie mistake, albeit one being made daily by plenty of public health officials who ought to know better.

        • Deaths also go up.
          There’s debate about how much delay-to-death and delay-to-antibodies differ, but if they’re not too far apart, this indicates we should still be seeing antibody prevalence grow.

        • All of these studies are providing upper bounds. The selection bias tends to push estimates up in the Santa Clara and LA studies; selection seems to be less of an issue in the FL study. But in all of them the false positive rate is somewhere in the single digits percent, so when we see 6% in Miami, we should think basically “up to about 6%.” And deaths will increase for weeks after cases flatten: one of the biggest issues in controlling this virus is that infections can grow much faster than the indicators, because of significant time lags in the indicators compared to the fastest timescales the virus can spread at (~10 days to seeking care, ~20 days to death, but only a 2.5-day doubling time under the fastest conditions).
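
          The standard way to see the “upper bound” point is the Rogan–Gladen correction. A minimal sketch with made-up but plausible numbers (the sensitivity and specificity here are assumptions, not the validated figures from any of these studies):

          ```python
          # Rogan-Gladen correction: back out true prevalence from the raw
          # positive rate, given an assumed sensitivity and specificity.
          def true_prevalence(p_obs, sens, spec):
              est = (p_obs + spec - 1) / (sens + spec - 1)
              return max(0.0, est)   # a raw rate below the FPR is consistent with 0

          # Hypothetical: 6% raw positives, 90% sensitivity, specificity varied.
          for spec in (0.94, 0.96, 0.98, 0.99):
              print(spec, round(true_prevalence(0.06, 0.90, spec), 4))

          # The estimate swings from 0% to ~5.6% as specificity moves a few points,
          # which is exactly why a raw 6% should be read as "up to about 6%".
          ```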

        • I didn’t see selection bias in the LA County study? The researchers contacted the selected participants.

          Seroconversion occurs 5-15 days after onset of symptoms. Therefore, a serological assay does not really help with timeliness. And the uncertainty about how long after infection antibodies can be detected by these tests doesn’t help here.

          But the fact that it can be administered quickly and cheaply at little risk to the people doing the survey, and that it can detect infections over a larger window of time, would make it suitable to detect a case undercount, if its accuracy permitted it (but it doesn’t), and if we knew how many infected persons don’t develop antibodies (but we don’t).

        • In the LA study there was at least non-response bias (people who REALLY WANTED a test because they thought they might have had it were probably more likely to respond than people who are happy to stay home and avoid any risk), and some unknown selection bias induced by whatever selection method was used by the market research firm.

          There’s no question these tests were valuable. They just don’t quite mean what the authors wanted to push them out as meaning.

        • Daniel, “those unable to come to the testing sites were offered in-home testing.”
          I don’t see how a selection bias from the database would “tend to push up”, as you write.

          The absolute kicker is that they seem to have overestimated the specificity of their test by getting confused over the combination of the IgG and IgM subtests, omitting IgM specificity from the analysis; if they had included it, then, based on the manufacturer data, they might have had a mere 5 true positives in that sample (maybe as many as 15). You can forget about doing any weighting; this amounts to anecdotal evidence, and it deprives the study of its “50-85-fold” headline.
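
          To illustrate the IgG/IgM point with a sketch (the specificity figures below are placeholders in the range of manufacturer-style claims, not the actual validation data): if a sample counts as positive when either subtest fires, the combined specificity is roughly the product of the two, and the expected false positives can eat most of a small raw count.

          ```python
          # Combined specificity when a sample is "positive" if IgG OR IgM fires.
          spec_igg, spec_igm = 0.995, 0.992     # placeholder figures
          spec_combined = spec_igg * spec_igm   # assumes independent errors
          print(f"combined specificity ~ {spec_combined:.3f}")     # ~0.987

          n_tested, n_raw_positive = 3330, 50   # Santa Clara-sized sample
          expected_fp = n_tested * (1 - spec_combined)
          print(f"expected false positives ~ {expected_fp:.0f}")   # ~43
          print(f"implied true positives ~ {n_raw_positive - expected_fp:.0f}")  # ~7

          # Omitting the IgM term makes the test look far more specific than it is.
          ```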

        • What was the LA response rate? I still haven’t seen any details on that, is there a scientific description of the study yet? How many people who were “offered in home testing” actually let someone come into their home and test them?

          Our prior on the selection bias by the marketing firm is probably fairly symmetric.

          However, as you say, the specificity numbers tend to push the estimates towards 0. So with all that put together, the main takeaway is still “probably less than 5-6% in LA.”

  11. Jonathan –

    If you want to watch something about Sweden, interview with Swedish epidemiologist

    https://youtu.be/bfN2JWifLCY

    Basically he says everyone’s going to get it either way, so you may as well let that happen with less economic cost. Here are the biggest problems, as I see them, with applying this to the US (there are more, but these are the biggest):

    1) it assumes that even if you slow the spread down, you won’t develop treatments and/or a vaccine and/or a surveillance system effective enough to really prevent basically “herd level” prevalence.

    2) assumes you really can protect the vulnerable. Like you, I doubt that’s true in Sweden, let alone this country.

    3) ignores the disparate impact to higher risk communities, such as minorities. Assumes we have no responsibility to prevent disparate impact because of healthcare disparities.

    4) assumes facts about immunity which may be probable but still unproven. As such, basically avoids dealing with highest impact risk of fast total spread with no immunity.

    5) assumes what would work in Sweden will work in the US.

    6) assumes healthcare system won’t be overwhelmed by fast spread or that at least healthcare system being overwhelmed won’t result in higher fatality rate.

    7) assumes that we don’t have a responsibility to healthcare workers to prevent them from dying or getting sick from lack of resources due to faster spread.

    8) assumes certainty of a low fatality rate.

    • I approach Sweden using the lens of prior beliefs. If they were North Korea, my priors would be that they want to kill a bunch of their own people to make some kind of social-control point. My priors about Sweden are that they are one of the leading ‘caring’ countries in the world. That is, Sweden has regularly shown a higher degree of social concern than nearly all countries in the world, and certainly more than the US. They provide far more social welfare than we as a nation have ever done. So when a country like that makes a decision, I think it makes sense to ask why they would, and that requires trying to give them credit for their thinking. This blog constantly preaches priors.

      I’ve seen no evidence Sweden doesn’t grasp the concept of hospital overload. Or that they’re unaware of the other negatives. They do have a substantial minority population now, especially in cities like Malmo. So I can’t just listen to a video that sounds like the most uncaring approach and think ‘That’s Sweden’. Rather, because it’s Sweden, I assume they’ve got more mental and social thought behind this than a single person’s summary which we translate into American terms because we hear with our American minds what they are saying with their Swedish minds.

      So I’ve talked to some Swedes about this. Their feeling is that you have to know Sweden: they so regularly practice social distancing that at the top of the list of rude things is sitting next to someone on a bus before using up every other possibility, or standing near someone. They share pictures of Swedes arranged like pigeons on a roof, with equal spaces between them. They also believe Swedes have such a strong sense of communal responsibility that the less vulnerable will be careful around the vulnerable.

      By contrast, Israel locked down because a) they know they don’t socially distance and b) they have a strong moral issue regarding the elderly and the Holocaust. And Italy was forced to lock down because the Italians know they don’t listen. It has been amusing to see the UK: they didn’t lock down and then they went so far that people sitting by themselves outside have been harassed by the police. The British ‘respect’ for order at work; they don’t socially distance but they do queue.

      But in the US, we can’t imagine that NYC is the social norm. I sometimes wonder if people know what it’s like in less densely populated areas. For example, Boston in lockdown is busier on a car and pedestrian basis than most places in the US. I compare it now to Rochester, NY where you walk into a coffee bar and maybe there’s one person in line, and there’s room to sit. Rural and semi-rural America is more like Sweden than like the east or west coast. I go into places in NC, for example, and it’s me and the proprietor and we chat about the weather because there’s not much going on. When I’d visit family, my speech would physically slow. (Though now they mostly live in Raleigh, which is faster paced.)

      This is what’s driving many of the protests and decisions made in other parts of the country. I’m from Michigan: out-state is nothing like the Detroit metro area. Because I can think like them, I can hear in their voices that they feel the ‘city’ people are imposing city rules on them. That is a long-standing argument there: out-state and Detroit fight over who has the say, and lately out-state has won (taking over the city and righting it after it imploded), so now the out-state people are upset that the city people are telling them what to do. I’m not arguing for one or the other. I just think that if you can’t understand the other position, then you can’t accommodate each other’s needs very efficiently, and you devolve into name-calling and power struggles over who gets to impose their will.

      But without quick and accurate tests, we’re kidding ourselves. And reliable, to me, includes knowing better how early in an infection the tests become accurate: not mistaken false negatives, but false negatives from being tested too early in the infection. That again suggests people who have been exposed might need to be tested over a period of days. Ideally, you reach the point where every healthcare worker is tested every day until and unless they are effectively immune.

      • Jonathan –

        Great comment. Thanks.

        My first reaction in response is that my comment wasn’t so much meant as a critique of Sweden’s approach (and there are value judgments involved, as you allude to), but of the *application* of Sweden’s approach to the US.

        But as you point out, it’s pretty useless to consider that while implying that the US is some kind of monolithic block.

        But here’s what I’m thinking about this morning.

        Imagine the following scenario: (1) Immunity turns out not to be the best case – either it doesn’t happen, or it takes a long time to develop, or it doesn’t last very long. (2) In 2 months, someone develops an effective therapeutic that significantly reduces the fatality rate.

        Considering that, how should we bet on the differential economic benefit, and the health-outcome benefit from things like being less cooped up, of policies based on the idea of potentially letting the virus infect people more quickly and more widely, so as to keep a hobbled economy and national psyche limping along for those 2 months?

        Thoughts?

        • Jonathan –

          One other thought in response:

          > I’m from Michigan: out-state is nothing like the Detroit Metro Area. Because I can think like them, I can hear in their voices that they feel the ‘city’ people are imposing city rules on them.

        • Damn – I keep hitting some kind of keystroke sequence that leads to premature post-ation.

          To continue…

          I have little doubt that the thinking you describe is very much in play. And there is certainly legitimacy to people having such a reaction.

          On the other hand, while trying to not be condescending, I think it’s at least worth considering how much of that reaction you describe is rooted in a tribal hatred of libz as much as a response that’s really a direct reaction to the risk conditions on the ground.

          That doesn’t de-legitimize their reaction. But I still think it’s an important context.

  12. Mendel: Seems that I expressed myself poorly, as I did not intend to defend the reasoning, but wanted to point out where Ioannidis was coming from (IMO he was correct at the beginning in pointing to the missing evidence).

    My quick calculation covered the spread, but even the lowest estimates already show an overwhelming number of cases.
    Even with a fatality rate of zero, the number of people needing hospitalization and admission to the ICU would already be too large.

    Basically I’m with Joshua in wondering about Ioannidis’ numbers (more precisely, that they don’t make sense), and in addition I’m wondering why Ioannidis seems unable to recognize the problems with his approach/hypothesis. (Blind to his own bias?)

    Wittkowski is a bit different, but dead sure that any respiratory disease in the northern hemisphere spreads in roughly the same way (as mentioned, around 100 days to get to herd immunity) during the fall and winter months and nearly stops in spring and summer. His position seems to be “let it roll” till herd immunity, which he claims would provide the lowest death rate, but IMO he is also not considering the burden on the medical system from the huge number of cases in those 100 days.

    • Well, then we worked together, you providing the point and me providing the counterpoint!?

      The Stat article by Ioannidis was published March 17, 2020.
      I think, for all of the reasons mentioned in my reply, that it was wrong for Ioannidis to question the then existing knowledge and estimates, and that the then existing serious estimates, along with their uncertainties, will prove correct in the end.

      Obviously, with hindsight, I can drop bad estimates now. (But honestly, these smooth “resource use” graphs never looked like reliable predictions to me.)

      But “needs further study” is also stage 3 on Mick West’s 10-step scale of conspiracy theories. When there is established knowledge that you’d like to deny (and denial is absolutely natural in the face of death or other catastrophes), casting doubt on the evidence is one of the first steps. In the case of conspiracy theories, it’s usually accompanied by a refusal to investigate the evidence that does not support what you’d like it to say, and that’s not something I can accuse Ioannidis of, him having participated in studies of the pandemic. The conspiracy theorist says “we need to keep looking” when it is pointed out that their alternate world view lacks data to support it, and then publishes only inconclusive evidence or hides the evidence that disproves them. (Seriously, some Flat Earthers *measured* the rotation of the Earth, and expected that they’d publish it once they’d found an explanation for it.) They also feel superior because they are “critical” and the rest of the world just “believes”.

      When Ioannidis put this out there, suggesting that mid-March policy decisions were not justified by data, without reservations and without pointing at data that *could* be relied upon, it made him the poster child for every conspiracy crackpot under the sun and lent scientific legitimacy to their causes. “We need to keep looking” said “you don’t need to trust anyone, because nobody knows anything” to these people.

      Many countries and states are basing their public policy on the best knowledge we have. Calling that knowledge in doubt and substituting wishful thinking is not a better option, it’s worse.

      • Forgive me if this was posted, but these currently are some of the best data on test accuracy that we have.

        https://covidtestingproject.org

        You will want to match up the test and the study that used it.

        The Premier test is the one used in the Santa Clara and LA County studies. Note that the false positive rate detected here is worse than that claimed in the Stanford preprint. It is not clear what this relates to, but one reasonable possibility is the number of control samples that come from individuals with exposure to seasonal coronaviruses.

        The BioMedomics test is the one used in the Chelsea, MA study, I believe.

        I am not certain what the Miami test is. I have read that their test has a higher false positive rate than the Premier test (which is imported from a Chinese company). The manufacturer’s data and Stanford’s data with this test show a lower false positive rate than the survey posted above and a more limited UK study, where the Premier test is referred to by the name of the Chinese manufacturer. That study is on medrxiv; I do not have the link handy.

  13. Mendel –

    > In the case of conspiracy theories, it’s usually accompanied with a refusal to investigate the evidence that does not support what you’d like it to say, and that’s not something I can accuse Ioannidis off, him having participated in studies of the pandemic.

    Yes and no. I think that what is (in my opinion) Ioannidis’ one-sided treatment of the uncertainties in his public-facing interactions comes close to not investigating contradictory evidence.

    If, for example, Ioannidis had said he’s investigated the level of uncertainty in the “died with” vs. “died of” categorization problem, and he’s examined the “died outside of a hospital setting without being tested” uncertainty, and said that the first significantly outweighs the other, then I’d react quite differently to his policy advocacy.

    • This was what was meant…

      This might be an interesting comment, from Jason Brennan, Robert J. and Elizabeth Flanagan Professor of Strategy, Economics, Ethics and Public Policy at the McDonough School of Business at Georgetown University. “…as economists and other math savvy people look into epidemiology, it’s becoming clear that the models they use are quite poor, because they have difficulty with endogeneity and with variance.”
      link: https://bleedingheartlibertarians.com/2020/04/government-authority-incompetence-and-sars-cov-2/

      This is supported by a link to Robin Hanson’s blog, Overcoming Bias, a post named “Beware R0 Variance.” link: http://www.overcomingbias.com/2020/04/beware-r0-variance.html

      As a math savvy person particularly keen on overcoming bias, would you care to comment?

      • Steven:

        I don’t think this sort of comparison (economists good, epidemiologists bad) is so useful. It’s my impression that the variation within each field is much larger than the average difference in numeracy between them. In particular, there are prominent epidemiologists who have been doing sophisticated quantitative work, and there are prominent economists who’ve been doing crude work that is not so particularly math savvy.

        Relevant to this discussion is that there was a project that economist Tyler Cowen used as an example to demonstrate the incompetence of epidemiologists . . . and it turned out that it had been done by an economist! See here: http://observationalepidemiology.blogspot.com/2020/04/more-on-tyler-cowen-and-ihme.html

        • IMHO the variance among epidemiologists is huge: the epidemiology departments at three universities I have worked at were really crappy, but at one university where I worked briefly, stellar.

        • P.S. I followed the link and read Robin Hanson’s post, where he writes of “different relatively-isolated subpopulations, each with a different value of R0.” In the U.S. context, I guess that different relatively-isolated subpopulations would represent different geographic regions; at least they’re relatively isolated now that people aren’t flying. And maybe the argument is that places like NYC have higher R0 than places like Phoenix because we spend more time on the bus and the train? I’m not really sure what he means by a “distribution across subpopulations”: are these different cities? Or isolated communities?

          I think what Hanson is missing is the interaction between R0 and prevalence. In places where prevalence is low, it makes sense that R0 will be high, because people won’t be scared, so they’ll live their normal lives, shaking hands, breathing each others’ exhaled air, etc. In places where prevalence is high, it makes sense that R0 will be low because people are scared and will socially isolate. Maybe some people are too poor to socially isolate and they’ll have a higher R0, but then they won’t really be an isolated population, right? This has come up in our discussion of prisoners. Prevalence and R0 could be high in jail, but jails are not isolated; prisoners and staff keep coming in and out.
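
          A toy version of that feedback, as a sketch (the functional form and every parameter are invented for illustration; real behavioral responses are far messier):

          ```python
          # Toy model: effective R falls as prevalence rises (fear -> distancing),
          # so epidemics plateau instead of growing exponentially forever.
          r0 = 2.5       # reproduction number with unchanged behavior (assumed)
          fear = 50.0    # strength of the behavioral response (assumed)
          prev = 0.001   # initial fraction currently infectious (assumed)

          for gen in range(15):
              r_eff = r0 / (1 + fear * prev)   # assumed feedback form
              prev = min(1.0, prev * r_eff)    # next generation's prevalence
              print(f"gen {gen:2d}: prevalence {prev:.3f}, R_eff {r_eff:.2f}")

          # Prevalence stalls near (r0 - 1) / fear = 3% here, where R_eff crosses 1:
          # high-prevalence places self-limit while low-prevalence places keep
          # growing, so a fixed per-place R0 is the wrong mental model.
          ```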

          In any case, I don’t think this sort of theoretical model will be keeping epidemiologists out of business.

          More generally, I agree with Brennan that we shouldn’t blindly trust experts on this. Just as we don’t blindly trust epidemiologists to tell us what to do about epidemics, please let’s not blindly trust economists to tell us what to do about the economy.

      • You didn’t ask me, but may I have a shot?

        The bleedinghearts article is full of unsubstantiated assertions, like
        ” First, governments could have collected better data earlier, before they shut the world down. Second, few governments are trying to collect good data now.”
        This is not supported by the way I see the world: I know the government of my country does both, I think the Chinese did (and their shutdown was successful), and the WHO’s job is to collect those kinds of data and disseminate them.

        The one number they have is doubtful:
        “When you read that as of March 3, the WHO estimated the death rate of COVID-19 cases at 3.4%”
        This is misleading, as the actual quote that they link (incidentally, to a secondary source which does have a link to the primary source, so why the rigmarole) reads thus:
        “Globally, about 3.4% of reported COVID-19 cases have died.”
        It turns out 3.4% is not an estimate, it is a statement of fact.
        And it is a statement about the case fatality rate, not about the infection fatality rate!

        The WHO did publish an estimate of the infection fatality rate in February, well in time for bleedinghearts and the people who wrote the worldometers article to notice. I saw it in the February 19th situation report, in a section entitled “SUBJECT IN FOCUS: Advanced Analytics and Mathematical Modelling”. Why did the bleedingliberals blog not use that when critiquing the modeling?
        “Based on these available analyses, current IFR estimates range from 0.3% to 1%. Without population-based serologic studies, it is not yet possible to know what proportion of the population has been infected with COVID-19.”
        I figure they didn’t use it because they wanted an “estimate” that they could crucify, and they wanted to field a demand (“Better data!”) that they could depict no government as heeding: with the WHO evidently looking for better data back in February, claiming that nobody cared about data all along wouldn’t work so well.

        So there’s a lot of bias (prejudice?) and no maths in that article.

        Then there’s the R0 article, which gets the terminology wrong. R0 is the basic reproduction number, a measure of spread in a naive population. Once you start influencing it, you get just R, or maybe Reff.
        He basically makes the observation that a country’s reproduction rate R won’t go below 1 if a certain number of subgroups are still above 1, and proposes reducing variance. That’s overshooting the mark, because obviously you just need to identify these high-R subgroups and get them to move R down: you know, by prohibiting large gatherings, closing down schools, and mandating extra protection measures for hospitals and care homes. That is the smart, obvious way to get the variance down, right?
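
        For what it’s worth, the arithmetic behind both his point and mine fits in a few lines (all numbers invented for illustration): with heterogeneous subgroups, total growth is soon dominated entirely by the highest-R group, so pushing the median down does little, while targeting the high-R settings does nearly everything.

        ```python
        # Subgroups growing at different reproduction numbers (invented values).
        rs = [0.7, 0.8, 0.9, 1.0, 1.3]   # per-generation R by subgroup; median 0.9
        counts = [1000.0] * len(rs)      # initial infections per subgroup

        for gen in range(20):
            counts = [c * r for c, r in zip(counts, rs)]

        total = sum(counts)
        print(f"total {total:,.0f}, share from the R=1.3 group: {counts[-1] / total:.0%}")

        # After 20 generations the R=1.3 group is ~99% of all infections, even though
        # the median R is below 1. Cutting that one group's R below 1 ends the growth;
        # shaving the median does nothing.
        ```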

        It’s quite ironic that the liberalhearts people turn to an article that states “Assume also that these different R0 are distributed log-normally” when they want to criticise the simplifications inherent in epidemiological modelling. I bet it turns out their conclusion entirely depends on the underlying properties that they’ve chosen with no other justification than convenience, with no connection to the real world. I’d say there’s a lot of bias to overcome there.

        Now I’m waiting to see what the expert’s comment is going to be! :)

        • Agree R0 is misunderstood by Hanson. Hanson basically hypothesizes that R0 differs across subgroups, which he thinks means good studies have to examine the individual subgroups and weight the results, rather than randomize across a whole population. That interacting groups like “different nations, classes, cultures, professions, industries, etc.” have different R0 may be a falsifiable hypothesis, but until there’s a reason to think they are, first of all, isolated, Hanson’s critique strikes me as wrong-headed. Lastly, I suspect the “etc.” in that list is doing a lot of work. But Hanson doesn’t actually *say* race.

        • He starts without a model of transmission. “Imagine a population composed of different relatively-isolated subpopulations, each with a different value of R0.” That’s just divorced from reality, and he doesn’t support it in any way.

          He’s building a model where a bunch of numbers grow exponentially at various rates, uses a graph to make the simple point that the exponentials with the higher coefficient always dominate, and then says that epidemiologists don’t understand variance without ever having understood what epidemiologists do.

          “So, as long as there is substantial mixing in the world, or within a nation, it is far from enough to get the R0 for the median subpopulation below 1.” That’s a straw man, nobody is doing that, nobody is measuring success in public health by a median.

          “All of which is to say that suppressing a pandemic like this, with high R0 and many asymptomatic infected, after it has escaped its initial size and region, is very hard. Which is also to say, we probably won’t succeed.”
          None of this is connected in any way to reality.

          I think it’s denial, a natural reaction when we’re faced with a crisis. “It’s the other groups that ruin it for everyone, and if we just did it my way, it’d be over quickly.” That’s a denial of responsibility. It’s a personal fight with a personal crisis, couched as a layman’s opinion on public policy.

          The WHO’s job is to suppress infectious diseases that are hard to suppress. They have achieved remarkable successes, e.g. helped eradicate polio from several continents. You don’t do that by “reducing variance”. You do that by fighting the infection everywhere, with the best tools that you have. You don’t do it by turning groups against each other, and you don’t do it by giving up.

      • Steven:

        To follow up on this a bit more: I’m glad when various academic and public figures take a stance in favor of numeracy and quantitative reasoning, but sometimes it seems that it’s more symbolic than anything else. For example, Steven Pinker has said publicly that everyone should learn statistics (or something like that), but then he supports the promotion of innumerate work—not by him, but by people in his social and academic circles. He likes the idea of numeracy, but not if it stands in the way of other things that he values.

        • I was nodding my head in agreement with this, because I’m slow on the uptake. I must confess I had thought way too many of the graphs in Pinker’s Better Angels were innumerate. I am surprised to learn I’m wrong about that.

  14. On your 2. Sensitivities and specificities:

    Florida is conducting community-based SARS-CoV-2 antibody tests using kits by BioMedomics. https://www.miamiherald.com/news/coronavirus/article242260406.html (See link in 2nd paragraph).

    These tests, with a false positive rate of 9.4%-13%, through some mysterious alchemy find 6% community spread!

    BioMedomics’ website states under FAQs – How accurate is the COVID-19 Rapid Test? “Twelve of the blood samples from the 128 non-SARS-CoV-2 infection patients tested positive, generating a specificity of 90.63%.” That is a false positive rate of 9.4%. https://www.biomedomics.com/products/infectious-disease/covid-19-rt/

    The 9.4% false positives come from their own study: https://www.biomedomics.com/?fldl=3050

    The 13% false positive rate for BioMedomics comes from
    https://www.dropbox.com/s/cd1628cau09288a/SARS-CoV-2_Serology_Manuscript.pdf?dl=0 (See Table 2).

    Most troubling is that the UM researchers are releasing information on their tests without also publishing the details of their study and methodology that supposedly validate their conclusions. This should be a huge red flag to anyone.
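
    To put numbers on how little a 6% raw positive rate means with this test, here is a sketch (the 12/128 and the 6% come from the links above; the interval is a standard exact binomial calculation):

    ```python
    # Exact (Clopper-Pearson) 95% CI for the false positive rate implied by
    # 12 positives among 128 known-negative samples.
    from scipy.stats import beta

    fp, n = 12, 128
    lo = beta.ppf(0.025, fp, n - fp + 1)
    hi = beta.ppf(0.975, fp + 1, n - fp)
    print(f"FPR: {fp/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # ~9.4% (5% to 16%)

    # A raw positive rate of 6% sits below the FPR point estimate, so the
    # corrected prevalence estimate is zero: these data are compatible with
    # essentially no community spread at all.
    ```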
