Are we constantly chasing after these population-level effects of these non-pharmaceutical interventions that are hard to isolate when there are many good reasons to believe in their efficacy in the first instance?

A couple days ago we discussed issues of communicating uncertainty in a coronavirus mask experiment. That study itself is not so important, but I remain interested in the larger issues of inference and communication.

I sent the discussion to epidemiologist Jon Zelner, who wrote:

The struggle is real! I think this is a nice example of a time where some proof-of-principle lab studies can be more informative than population-based ones, like this one from Nancy Leung et al. I worry that we’re constantly chasing after these population-level effects of these non-pharmaceutical interventions that are hard to isolate when there are many good reasons to believe in their efficacy in the first instance.

Shameless plug on this end; we put up this preprint recently using a simple model that shows how easy it can be to get type M errors in intervention studies when you preferentially use data from large outbreaks.

Not quite the same topic, but I think it highlights the challenges of inferring these effects, and to me it sort of opens up the question of (a) whether there’s a world in which it is feasible to do so and (b) whether we should be putting our efforts in a different direction anyway.

I’m not sure what to think? I see Jon’s point; on the other hand, we’re also interested in population-level effects, so it makes sense to try to estimate them too, as long as we can be open about our uncertainties.

P.S. Here’s the title and abstract of the second paper that Zelner links to:

Preferential observation of large infectious disease outbreaks leads to consistent overestimation of intervention efficacy

Data from infectious disease outbreaks in congregate settings are often used to elicit clues about which types of interventions may be useful in other facilities. This is commonly done using before-and-after comparisons in which the infectiousness of pre-intervention cases is compared to that of post-intervention cases and the difference is attributed to intervention impact. In this manuscript, we show how a tendency to preferentially observe large outbreaks can lead to consistent overconfidence in how effective these interventions actually are. We show, in particular, that these inferences are highly susceptible to bias when the pathogen under consideration exhibits moderate-to-high amounts of heterogeneity in infectiousness. This includes important pathogens such as SARS-CoV-2, influenza, Noroviruses, HIV, Tuberculosis, and many others.

Seems like an important point.
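
To see the selection mechanism concretely, here is a minimal simulation sketch (my own toy version, not the model from Zelner’s preprint; the offspring distribution, dispersion, and size threshold are all invented for illustration). An “intervention” with zero true effect still looks effective once we only analyze outbreaks that grew large:

```python
import numpy as np

rng = np.random.default_rng(0)

R0 = 1.5         # true mean offspring count, same before and after "intervention"
k = 0.2          # dispersion: small k means high heterogeneity in infectiousness
n_index = 20     # index cases seeding each outbreak
n_gens = 3       # transmission generations before the "intervention"
threshold = 200  # only outbreaks at least this big get studied

def run_outbreak():
    """Branching process in which the intervention has NO real effect."""
    cases, total, pre = n_index, n_index, []
    for _ in range(n_gens):  # pre-intervention generations
        offspring = rng.negative_binomial(k, k / (k + R0), size=cases)
        pre.extend(offspring)
        cases = offspring.sum()
        total += cases
    # "post-intervention" offspring: drawn from the SAME distribution
    post = rng.negative_binomial(k, k / (k + R0), size=max(cases, 1))
    return total, np.mean(pre), np.mean(post)

runs = [run_outbreak() for _ in range(20000)]
large = [(pre, post) for total, pre, post in runs if total >= threshold]
pre_R = np.mean([p for p, _ in large])
post_R = np.mean([q for _, q in large])
print(f"true R = {R0}; among large outbreaks: pre = {pre_R:.2f}, "
      f"post = {post_R:.2f}, naive 'efficacy' = {1 - post_R / pre_R:.0%}")
```

Conditioning on total size inflates the pre-intervention offspring counts (they are what made the outbreak large), while the post-intervention draws regress to the true mean, so the before-and-after comparison credits the intervention with a reduction that is pure selection.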


  1. “we show how a tendency to preferentially observe large outbreaks can lead to consistent overconfidence in how effective these interventions actually are.”

    The simple, standard term for this is “Bias” — always a problem in any type of research.

    Also, “proof-of-principle lab studies” versus “population-based ones” is the standard “Development Testing” versus “Operational Testing” distinction in all fields of applied science/engineering.
    Bias is more likely in narrow laboratory-type testing with small teams of “dedicated” researchers, isolated from the many real-world variables that might significantly affect the supposed proof-of-principle.

  2. It’s obviously true that it is a more important task for research to be focused on, say, medical treatment of disease as opposed to trying to quantify the effectiveness of masks. But isn’t that always the case with science? The vast majority of it is effectively unimportant and useless, but you never know which small results will lead to some insight on a bigger problem. We can’t all be high caliber researchers working on the biggest most important problems. Science isn’t just about saving human lives and making our lives more safe and healthy. It is about extending human knowledge. Period. I’m going to work on my little problems that nobody cares about. I’ll make some connections that a small number of people will cite over the next few decades. In the grand scheme it’s not important, but it is a real piece of the puzzle of knowledge nonetheless.

    So, it would be really nice to have more, carefully designed studies about masks or whatever difficult-to-study interventions. The risk is not that it is displacing more important research. How many researchers are opting to study masks instead of COVID-19 virology, for example? I bet it is mostly mutually exclusive pools of researchers with mutually exclusive sets of abilities.

    The only reason we have a probably accurate intuitive understanding of mask efficacy is because of researchers doing a bunch of little things that fed into the overall body of knowledge: statisticians, mathematicians, biologists, physicists, etc. Personally, I’m thinking about people who studied stochastic processes like Brownian motion. The intuition about masks was sort of there 100 years ago, but was still really a big unknown. All the work on biology/math/stats since then has created such a stronger and more justified intuition.

  3. I think it makes sense to admit that there are places where we can’t use statistical analysis to tell us anything useful, or at least, to get accurate estimates – perhaps samples are too small, variance is too large, confounders are unknown, the people involved could be actively changing behavior in response to both external factors and the interventions themselves in ways that may depend on the populations, the data gathering process is poorly understood, the underlying dynamics might be incompletely understood, and so on.

    Population level effects of NPIs which are inconsistently applied and enforced irregularly during a pandemic where the transmission dynamics are only partly understood seem like a great example.

    And yes, preferentially observing large outbreaks can be an issue, but if that really were the issue, we could correct for it. In fact, there are tons of issues. The decision question for masks isn’t at issue – the evidence is sufficient to dictate that we should advocate their use – we’re lucky that we have fairly good evidence of the effectiveness of the intervention in the lab, and more than enough reason to think it works in populations as well based on observation. Better estimates of the exact effect size seem unlikely to resolve anything.

    • I agree that some people view EBM as equivalent to reliance on NHST, but there are a huge number of people in health economics, and a growing number in medicine itself, who take an explicitly Bayesian / decision theoretic stance about how to use evidence.

      • I think you will find there are many people at the highest levels of healthcare organizations (i.e., the WHO) who deny the usefulness of reasoning and generalization altogether when it comes to medicine. That is how you get them tweeting about “no evidence antibodies confer immunity” despite millions of papers about antibodies doing that.

        It is also why apparently vitamin deficiencies should be avoided/corrected in otherwise healthy people, but there is “no evidence” to do that if they also have covid.

  4. This issue has been bothering me for a while now.
    The only practical outcome that these population-level mask studies achieve is strengthening people’s priors.
    If the result matches your belief, then you say “See? I was right.”
    If the result does not match your belief, then you say “Confounding variables everywhere! Compliance! Correlation is not causation, etc.”
    A subtle and controversial, but important, point is that a lot of the researchers who are capable of clearly articulating and controlling for the limitations and uncertainties of these population-level studies also understand their futility and simply choose not to spend their time and energy on them.

    • Fred –

      I don’t know on what basis you make a statement as broad as that last sentence… I suspect you’re wrong to be anywhere near so broad.

      But as for the first three points you make, I say “Yup.”

  5. > I’m not sure what to think? I see Jon’s point; on the other hand, we’re also interested in population-level effects, so it makes sense to try to estimate them too, as long as we can be open about our uncertainties.

    My view is that yah, people just aren’t comfortable with the uncertainties. So they take valid approaches that are valid within a limited frame and try to project them into a larger frame.

    With COVID (as with climate change and quite a few other issues), the problem is exacerbated by ideological “motivations.” We are in the early stages of the pandemic, but people want answers.

    This is a problem of high damage function risk analysis that plays out over fairly long time horizons with a high degree of uncertainty. It requires that people make decisions without really knowing the answer – so as to hedge against potentially large downside risk. You have to decide before you know the answers, or you will be enacting policies too late to be effective.

    But people just aren’t comfortable with that situation, and conditional probability is hard, so they frame the evidence as an avoidance technique.

    Mike Ryan, the WHO’s emergencies chief, sums it up rather well, IMO:

    https://twitter.com/michaelmina_lab/status/1335377006578503686

    • +1 to difficulty of deciding before the answers are available, and of maintaining awareness that you’re acting conditionally. Perhaps what’s surprised me most about reactions to uncertainty under covid is that for many people, once they decide to adopt some action or level of precaution (e.g., distancing, stopping certain activities, mask wearing, etc.) they seem to become much more immune to new information. Sort of like a preferred narrative sets in that becomes very hard to dislodge, even among the very data literate who should be aware that early on evidence will be sparse. Belief updating is hard, I guess, and maybe awareness of that drives the seeming need for answers early on.

        • Or for some people the thought of personally having done and continuing to do something harmful is so nauseating to them that they choose to believe they are right instead of facing reality. Maybe the psychic cost of pretending the truth isn’t the truth is just less than the cost of admitting that it is. Just like some academics deny it when people point out a clear mistake they made; and some drunks argue they are good drivers. Because if that weren’t true they would be in the wrong.

          I mean is it really hard right now to believe that a lot of people will choose a fantasy in which they are “right” over a disappointing reality? I think an obvious candidate explanation for why people do that is that the fantasy makes you feel better about yourself than the reality. I know, deep analysis, right… but I don’t think it’s about avoiding over-commitment. It feels more like doubling-down on over-commitment to the original position that was probably chosen largely for non-empirical reasons (meaning preferences, moral weights, whatever – cases where the marginal differences that further empirical evidence could give us just wouldn’t matter… you know, it’s outside the vote margin).

          tldr; maybe a lot of people aren’t looking to update their beliefs here; maybe they are just looking to update their arguments for their beliefs.

        • “maybe a lot of people aren’t looking to update their beliefs here; maybe they are just looking to update their arguments for their beliefs.”

          +1

          I suspect many people have the thought (belief? mindset?), “I know it’s true, and I just need to figure out how to convince others that it’s true.”

      • Jessica –

        It is an interesting question – why belief updating is so difficult. I mean it seems obvious to me that it’s tied into the whole motivated reasoning/partisanship/tribalism phenomenon.

        But I’m wondering if you could expand on your thinking.

        Are you suggesting a kind of bi-directional flow, whereby maybe in realizing that in a certain context updating beliefs will be difficult (e.g., what if my original belief was associated with many deaths?), we lock harder in a kind of anticipatory fashion to beliefs sooner in ways that make them harder to update?

        When the stakes are this high, we’re reluctant to act, because somehow by not acting we can pretend that we’re not deciding (but as Ryan points out – not acting is a kind of decision). But when we act in the face of not actually knowing what the right answer is, we’re very vulnerable to attack for making the wrong decision or acting prematurely, and so then we necessarily defend ourselves against being open to critique.

        For me, early on I was shocked at how long it took to issue shelter-in-place orders, close schools, etc. If there’s any chance that you might have to take those actions eventually, you’re better off if you take them sooner rather than later. What were they waiting for? In hindsight, closing schools at least may well have not been the right decision and yeah, that’s been an idea that I’ve resisted accepting (I did start updating that fairly early on when I heard a podcast of modelers saying that there was evidence that closing schools had no benefit beyond a few weeks out, although they couldn’t answer why such a counter-intuitive outcome resulted. I mentioned that here at Andrew’s crib before):

        https://statmodeling.stat.columbia.edu/2020/04/30/updated-imperial-college-coronavirus-model-including-estimated-effects-on-transmissibility-of-lockdown-social-distancing-etc/#comment-1325702

        But I think that people tend to look at this maybe a little bit unidirectionally:

        Motivation (identity orientation) ===>>> motivated reasoning ===>>> biased beliefs that are resistant to updating.

        But maybe there’s another way to look at it:

        uncertainty ===>>> requires opinion formation before the answer is known ===>>> requires a defense against information that would prove the action wrong.

        Of course, that 2nd chain of causality wouldn’t explain why there’s such a strong signal of identity orientation in what people believe about COVID policies. But I’d guess that whatever the most likely mechanism really is, there’s some combination of mechanisms that mix in together.

  6. “I think this is a nice example of a time where some proof-of-principle lab studies can be more informative than population-based ones, like this one from Nancy Leung et al. I worry that we’re constantly chasing after these population-level effects of these non-pharmaceutical interventions that are hard to isolate when there are many good reasons to believe in their efficacy in the first instance.”

    I get what he means, particularly when the NPIs are followed in public but not private settings, so it’s hard to communicate how the science of a particular mitigation strategy works in “the wild”. But I also think someone would rightfully say that was hogwash if it were a vaccine or other medical treatment rather than something like face coverings.

    Can you imagine someone saying, “I think medical treatment X should be prescribed widely. It was effective in the lab but had no noticeable effect on population health.”

    • Jfa:

      It depends. You can have an effective treatment for a rare condition: imagine something super clean, like you take a pill and the condition goes away, there are no observed side effects, and based on our biological understanding there’s no reason to believe there will be side effects.

      My point is that if a treatment addresses a rare condition, then it’s basically impossible to estimate its effect on population health without some strong assumptions.

  7. Andrew writes: “I’m not sure what to think? I see Jon’s point; on the other hand, we’re also interested in population-level effects, so it makes sense to try to estimate them too, as long as we can be open about our uncertainties.”

    In one sense, I think this statement is non-controversial. But, the caveat “as long as we can be open about our uncertainties” is doing a lot of work. I think there is an ethical constraint here that means that certain research just shouldn’t get done. In the case of masks and social distancing, it should have been obvious that observational studies would be noisy. It would not be clear if the study was measuring the effects of mask wearing or the effects of suboptimal mask wearing or the effects of other interventions that swamped the effects of mask wearing. It should also have been obvious that such studies would be used by political groups to argue against mask wearing, and damage public health. So, I think that scientists have to ask the ethical question: will the contribution of this study to our knowledge of a subject outweigh the possible harm it will do (even from misrepresentation)? I think the pandemic has witnessed several instances of studies that were better off never having been done.

  8. It’s all well and good to say “embrace uncertainty, or variation, or whatevs” but the real problem here is a catastrophe of public relations, not science. We live in just about the lowest-trust time ever, so capital-S Science gets deployed to shut people up. Experts who admit what they don’t know are probably the best experts, but they’re also people with no credibility in the public square, unfortunately. Experts who say “You don’t have the scientific data to back up your claim” while saying things that are (a) probably prudent (wear masks) and (b) have little to no traditional scientific evidence to support that prudence have lost their ability to say “Trust Science.” Instead, they are saying “Trust the experts, and I’m one,” and that stand is simply no longer convincing, particularly when people’s livelihoods are on the line. This is particularly true when the thing with the weak evidence is not the thing that directly impacts the livelihood — it’s the *other* recommendations that do that.

    For better or worse (and it’s probably all worse) we live in an age when anything other than utter certainty is treated with suspicion. Since no one honest can assert utter certainty, we are left with either charlatans, motivated bias, or, at best, people who have to sound considerably more confident than they would in private. Masks surely have some modest benefit. But when you say the face mask wearing, social distancing and handwashing will fix things, you are so far over your skis that you shouldn’t be surprised when a winter surge leaves you buried under an avalanche of doubt.

    • Another big part of the problem is what Andrew has called the hero model of scientists. After all, there are lots of questions that actually *have* been settled by science. And plenty of other things, while not exactly *settled* by science, are at least put through a well-defined scientific process for public policy purposes, like drug approval.

      It is a remarkably small step for the self-styled hero of science to want to become hailed as the lifesaver who made people wear masks. Not having the data to back him or her up is just a minor nuisance.

      • Jonathan:

        This brings us back to the James Watson discussion. Watson as an accomplished scientist who happens to be a racist: not a huge deal, somewhat interesting to the extent that the racism fits in with his scientific interests. Watson as a science hero who happens to be a racist: This is more of a scandal.

    • > but the real problem.

      Is there really a “the” real problem?

      >> But when you say the face mask wearing, social distancing and handwashing will fix things, you are so far over your skis that you shouldn’t be surprised when a winter surge leaves you buried under an avalanche of doubt.

      How do you know this? Perhaps you’re overly-certain?

      Are you certain that the winter surge is attributable to the lack of robustness of mask-wearing, social-distancing, and hand-washing as a way to slow the spread, as opposed to the lack of compliance with those behaviors – largely attributable to people who want to blame scientists for bad PR?

      I can’t agree that the problem is even primarily one of scientists getting out over their skis. And I think at least part of the problem is people attributing the problem to scientists getting out over their skis.

      • The more I read this:

        > >> But when you say the face mask wearing, social distancing and handwashing will fix things, you are so far over your skis that you shouldn’t be surprised when a winter surge leaves you buried under an avalanche of doubt

        The more it bugs me. People hold “experts” accountable for the failure of things they never said. How many public health officials, or epidemiologists, said that those measures would “fix things”? What does “fix” things even mean?

        I’m pretty sure what most public health officials and epidemiologists said is that absent those measures, things would be worse, and that to the extent people practiced those behaviors, things would be better. The modeling indicated that many people would die even with those practices but that more people would die without them.

      • My confidence is unimportant, since I’m not asking anyone to trust me — I’m just blathering on a blog. And sure, it *could* be that all of this stuff would work if there were only compliance. But: (a) it’s not what I see; and (b) As I said on the previous thread, the April-May surge had very little mask-wearing, since it’s in the period when Fauci and the Surgeon General were telling us all not to wear masks because they don’t work. They changed their position without any (published) evidence at all. So whatever lack of compliance we have now, it’s orders of magnitude higher than it was when mask-wearing was actively discouraged.

        But sure. I fully grant that my opinions are informed only by low-powered studies, my own personal observations, and my priors on public health officials. But then, again, I’m not trying to tell anybody else what to do, or whether or not they should go bankrupt.

        • But it’s even worse than just blathering on a blog (which I have been known to do on occasion)…

          It’s a part of the problem. For example, you say maybe all that stuff would work. What does that mean? End the virus? Reduce the spread? The problem is when people treat what “experts” say in such a way that the experts could never be right.

          Yes, scientists are going to make mistakes and have to revise their opinions. What’s the alternative?

          Just feels like reckless venting to me. My reaction is because I see it a lot and I think it reflects a real problem.

        • I think these measures could reduce the spread somewhat. (My comment on “stopping” above is awaiting moderation, but I think you can find plenty of experts out over their skis.)

          That said, I agree there’s a problem here. People demand certainty, experts lean toward certainty, experts are (inevitably) imperfect, people distrust experts. On the other hand: People demand certainty, experts give them hedged statements, people ignore experts. Mistrust is not really any better than distrust. The root problem is the demand for certainty, particularly when hard choices are being made. I don’t think a properly hedged version of Fauci would have saved lives, if that’s what you’re saying. And I think that’s really unfortunate. But I don’t think he’s done anything to enhance the credibility of his recommendations.

        • Jonathan –

          > I don’t think a properly hedged version of Fauci would have saved lives, if that’s what you’re saying. And I think that’s really unfortunate. But I don’t think he’s done anything to enhance the credibility of his recommendations.

          I agree somewhat.

          But actually, I think his credibility has been enhanced with a lot of people – I’d say most people inclined to vote Dem. And he’s a focus of attack and discredited among a lot of people – among people inclined to vote Pub.

          And what he has and hasn’t said might be the explanation that people on both sides of the aisle offer to explain what shaped their view of his credibility. But I offer another perspective. People on each side of the aisle have a filter they apply when assessing what he says, and their view of his credibility isn’t so much a function of what he says as it is a function of who they are. I think it extends deeper than what he does or doesn’t say.

        • We’re getting so close to agreement now that I’m tempted to stop. But while I agree that Fauci’s reputation has risen among the Dem Twitterati, and maybe among Dems generally, that’s just as bad, since that’s entirely a function of their filter and not of anything he’s done or lives he has saved. If he had said the same things he said, but Trump praised those things, he would be despised by the same people praising him. (I can’t prove that, of course.) That’s an amazing political feat, but not much of a scientific one.

        • > But while I agree that Fauci’s reputation has risen among the Dem Twitterati, and maybe among Dems generally, that’s just as bad, since that’s entirely a function of their filter and not of anything he’s done or lives he has saved.

          So flip that over for pubz. No reason to expect the dynamic works differently with them.

          > If he had said the same things he said, but Trump praised those things, he would be despised by the same people praising him. (I can’t prove that, of course.)

          I don’t think that’s true, actually. Trump was pretty praise-worthy early on and Fauci was still mostly viewed very favorably by demz. Trump is obviously a big pivot point but he isn’t the entire sum of political identity markers.

          > That’s an amazing political feat, but not much of a scientific one.

          You’re still stuck on holding him responsible for dynamics that are much larger than he.

          Here’s the context I come to this from. If you ask rightwingers, who think that mitigating ACO2 emissions will cause the death of capitalism, whether “Climategate” ruined the credibility of climate scientists, they will say “yes.” If you ask lefties, who think that not mitigating ACO2 emissions will cause the death of our economy, whether “Climategate” ruined the credibility of climate scientists, they will say “no.” In fact, many of them will say it only enhanced their view of climate scientists.

        • “So flip that over for pubz.” Agreed.

          “Trump was pretty praise-worthy early on” Quotes? Not in my FB feed.

          We clearly disagree on Climategate. I think it hurt those scientists quite badly, and I think that the consensus is way out over its skis, even as I agree with the basic physics and the general point. I am in no way a climate warming skeptic, but I think the IPCC, the Mann lawsuit and Climategate reflect really, really badly on science. They are, to me, almost perfect exemplars of the same thing I think Fauci is doing: advocating for things they sincerely believe in and using the science as a crutch, not a magnifying glass. Let me be clear: the guys peeing on the IPCC work are no better, and probably considerably worse — their complaints tend to be half-educated and full of AHA! moments that aren’t.

          And finally, I don’t hold Fauci responsible for anything but making me think worse of Fauci. Well, maybe a bit more than that, but not much more.

        • > We clearly disagree on Climategate. I think it hurt those scientists quite badly, and I think that the consensus is way out over its skis, even as I agree with the basic physics and the general point.

          I didn’t express my opinion on the impact of “climategate” on the credibility of climate scientists. And your opinion on the impact isn’t actually relevant.

          The data on public opinion related to “climategate” are pretty unambiguous. You can pretty much predict how someone views the issue based on how they line up ideologically. It didn’t move the needle. Don’t make the mistake of extrapolating from individuals (and certainly not yourself).

        • Sorry… I misunderstood your point re climategate. I agree that the reactions are strongly reflective of the priors. But then again I don’t expect the public to understand science. And I’m not sure they even want to, though a substantial number probably do. But those that really want to understand aren’t helped by a press that reports it badly (sometimes on purpose, sometimes out of incompetence) or by the scientists themselves who understand only too well that the science explanation game is so different from the science game.

        • Substantial numbers don’t understand the science but they think that they do. Polling shows large numbers think they don’t need any more understanding to evaluate the science so as to be able to formulate an opinion. Largely that’s because they trust their experts – which is to say, the subset of experts they find ideologically palatable.

          What can you do as a communicator when you face such a situation? My personal bias, as is yours, is to forefront uncertainties. But I’m not sure that’s anything more than a personal preference.

          The mechanisms that govern how the public receives what scientists have to say about the evidence, when an issue becomes polarized, are pretty well locked in due to ideological biases. Sure, scientists could do things differently. Perhaps even “better.” What would the impact be?

          My guess is marginal, at best. For every time that a Fauci might forefront uncertainty, there’s an Atlas who seeks to weaponize that uncertainty. For every time a Fauci backgrounds uncertainty there’s an Atlas to focus on any discrepancy in outcomes from what was projected.

          Net effect is zero change in public perceptions, imo. It’s common sense to think that used car salesmanship turns people off or that people lose faith in scientists when their projections don’t work out. But, imo, evidence shows it doesn’t actually work that way as an explanatory mechanism.

        • Agreed. The effects are marginal. So then, as long as you aren’t going to have any public impact, why not just do it right? If they did that, at least they wouldn’t annoy *me.*

        • I think there’s value in being open / accurate about uncertainty regardless of whether it will actually help public perception, simply because it’s honest.

          And that goes both ways; don’t overstate uncertainty as a justification not to act, but also don’t understate it as a justification to do *this specific action* rather than some other one.

          And also being clear about *where* the uncertainty is… I would say there is little to no uncertainty about the basic physical mechanisms of greenhouse effect/climate change, but much more about e.g. specific economic effects (especially as those will be affected by technological change…).

          I think that also means being careful about what platforms one uses, e.g. maybe some complex issues simply just shouldn’t be discussed on Twitter.

        • Since July, the US has been doing far more testing. The one question that has remained unanswered is whether you can test positive for SARS2, if you are infected with one of the other 4 coronaviruses.

        • Sameera –

          > The one question that has remained unanswered is whether you can test positive for SARS2, if you are infected with one of the other 4 coronaviruses.

          Irrespective of the answer to that question, and irrespective of how much testing is going on, deaths as a % of positive tests have been pretty consistent for a while – around 1.5%.

          BTW – I thought you’d like this article:

          https://nymag.com/intelligencer/amp/2020/12/how-rapid-antigen-tests-could-end-the-pandemic-within-weeks.html?__twitter_impression=true

        • Thanks a bunch Joshua. How epidemiologists compute the Case Fatality and Infection Fatality Rates is an important query. My question is even more narrow. If the question is silly or dumb, someone should say so and explain why.

        • > (b) As I said on the previous thread, the April-May surge had very little mask-wearing, since it’s in the period when Fauci and the Surgeon General were telling us all not to wear masks because they don’t work

          Is it? This “Surgeon General, Dr. Jerome Adams, shares ways to create your own face covering in a few easy steps” video is dated April 3. https://www.youtube.com/watch?v=tPx1yqvJgf4

          (I agree that the initial opposition to masks everywhere outside of Asia and the subsequent pendulum swing are remarkable.)

        • The point was that April-May is not “in the period when Fauci and the Surgeon General were telling us all not to wear masks because they don’t work”. At least for the latter, the message had changed since February.

        • Sorry… I see I misinterpreted your comment. Though April 3 was really a little late to stop the problems in April. But I agree that they had all reversed course by at least mid-April. But then there’s the inevitable lag to actual adoption, so I don’t think it changes my point substantially.

    • The defect in credibility will be corrected — in real-time — by the stupid virus itself; whether the charlatans say this or that. I notice that even the charlatans in North Dakota must have changed their tune; at least temporarily; unless I misread the Johns Hopkins Graph.

      • Agreed. Something has cut back on infections in the Dakotas, and people behaving more sensibly seems like the leading candidate for an explanation. On masks, either the NY Times or the Washington Post had a nice figure recently of the Covid time series in counties in Kansas that did and did not adopt mask mandates. I can’t find the link, but the data seem to be in the Morbidity and Mortality Weekly Report for 11/27.

        • Well, South Dakota (with no mask mandates) has over 8% *actually tested positive*; given that positivity rates were over 20% for a while, could they have actually hit herd immunity?

          While herd immunity claims in Sweden clearly didn’t work out (seasonality maybe? the low case rates in July-August-early September there *are* strange) I’m not sure we can rule it out for some Great Plains places with much higher infection rates.

          Also, could herd immunity thresholds be lower in these very rural areas?
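
          For what it’s worth, there is a standard quantitative handle on that question. Here is a minimal sketch; the homogeneous formula is textbook, while the heterogeneity adjustment is one stylization (gamma-distributed susceptibility) that circulated in 2020 preprints, and the CV value is purely illustrative:

          ```python
          # Homogeneous-mixing herd immunity threshold: HIT = 1 - 1/R0.
          # One stylized adjustment for heterogeneous susceptibility
          # (gamma-distributed with coefficient of variation CV):
          #   HIT = 1 - (1/R0)**(1/(1 + CV**2))
          CV = 1.0  # illustrative only; the true value is very uncertain
          for R0 in (1.5, 2.5, 3.5):
              homog = 1 - 1 / R0
              heterog = 1 - (1 / R0) ** (1 / (1 + CV**2))
              print(f"R0 = {R0}: homogeneous HIT = {homog:.0%}, heterogeneous ≈ {heterog:.0%}")
          ```

          Heterogeneity pulls the threshold down because the most exposed and most susceptible people tend to be infected first, so it is at least arithmetically possible for a hard-hit rural area to be closer to the threshold than the headline 1 − 1/R0 number suggests.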

    • The technical name for the defect which needs to be filled is not “expertise”, it is not “public relations” — it is “leadership”; which stands anterior to and has primacy above all of these.

      “Leadership” is not a mere shell of something half-remembered from high-school civics; an empty phrase with which to pad a resume. It is a concrete entity, a visitation found to varying degrees in certain persons; no less so than pianism or marksmanship.

      Sitting on one’s hands for excruciating intervals of days and weeks, while case-counts accelerate, trying to decide “what the data say” and “how to craft the best message” may be all that the second and third-raters can come up with; and to cover over their lack of confidence, in their own authority, their diffidence in respect to what consensus they can or cannot elicit among the people, they take heart in the pablum of “educating the public”. However this is not leadership. It inspires nothing but cynicism; it is destructive of morale and of the will to act.

      The captain of a ship on a raging sea does not “educate” the crew, the captain *leads*. For better or for worse, all the crew “educate” themselves by facing together the storm; whether or not they survive follows from a combination of luck, talent, know-how, experience, and leadership.

  9. Population-level effects are risky measures of association; because we are often ignorant of the multifarious subdivisions of the population; under which the effects of interest disappear or are reversed (Simpson’s reversals in ratio estimates, regression lines that go upward for this grouping and downward for that grouping, etc.).
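
    A minimal numeric demonstration of such a reversal, using the classic kidney-stone numbers (any comparison with uneven group sizes can do this):

    ```python
    # Simpson's reversal with the classic kidney-stone data: treatment A
    # wins within every subgroup but loses in aggregate, because A was
    # used more often on the harder (large-stone) cases. Format: (successes, n).
    groups = {
        "small stones": {"A": (81, 87),   "B": (234, 270)},
        "large stones": {"A": (192, 263), "B": (55, 80)},
    }
    totals = {"A": [0, 0], "B": [0, 0]}
    for name, arms in groups.items():
        for arm, (s, n) in arms.items():
            totals[arm][0] += s
            totals[arm][1] += n
        a, b = arms["A"], arms["B"]
        print(f"{name}: A {a[0]/a[1]:.0%} vs B {b[0]/b[1]:.0%}")
    a, b = totals["A"], totals["B"]
    print(f"overall: A {a[0]/a[1]:.0%} vs B {b[0]/b[1]:.0%}")
    ```

    A wins each subgroup (93% vs 87%, 73% vs 69%) yet loses overall (78% vs 83%).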

  10. The emphasis given to different possible modes of transmission has shifted significantly since covid was identified, with much of the earlier work drawing very heavily on old studies of flu transmission – old enough to be doubtful even for flu. I wonder if it would be possible to use simulation and/or model fitting to fit a general model, so that next time round we could spend a couple of weeks studying virus lifetime in different environments and viral output from a sample of patients and then plug these in to be able to estimate the relative likelihood of transmission in different situations?
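
    Something like that template already exists for the airborne route: the Wells-Riley exposure model, into which measured emission and ventilation numbers could be plugged. A minimal sketch, with every parameter value below invented purely for illustration:

    ```python
    import math

    def infection_risk(q, t, Q, p=0.5):
        """Wells-Riley-style risk for one infector in a room:
        P = 1 - exp(-q*p*t/Q), with q = infectious quanta emitted per hour,
        p = breathing rate (m^3/h), t = hours of exposure, Q = ventilation (m^3/h)."""
        return 1 - math.exp(-q * p * t / Q)

    # All scenario parameters are placeholders, not measurements.
    scenarios = {
        "grocery trip":      dict(q=5,  t=0.5, Q=2000),
        "office day":        dict(q=5,  t=8.0, Q=500),
        "restaurant dinner": dict(q=20, t=1.5, Q=150),  # talking raises emission
    }
    for name, s in scenarios.items():
        print(f"{name}: infection risk ≈ {infection_risk(**s):.1%}")
    ```

    The hard part, of course, is measuring q and its variability across patients and variants, which is exactly where the proof-of-principle lab studies would feed in.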

    • Here is a population-level analysis which addresses an urgent question: can the relative risk of covid transmission for different activities or routes of transmission be discovered, and thereby provide a concrete basis for guidance?

      If all case-reports were collected in a common data-base, they could be divided (with enough sweat and hours) into two classes: those with historical detail possibly sufficient to infer the possible route(s) of exposure; and those without. The latter category is put aside, and the former sent along to the next tier of analysis. In the next tier we separate case reports into categories of presumptive route of exposure: home, work, bank, grocery, aeroplane and so on. The divisions are made to whatever extent the competition between swiftness, accuracy and the limited detail of the report allows.

      If such subdivision is made habitual and semi-automatic even, then as time progresses and the count of cases grows, so does the quantity of cases for which such analyses are profitable — not necessarily in the individual case; but in the aggregate! So here is a setting in which “population-wide” analyses can, I think, elicit information on which pragmatic guidance may stand: the relative risk for transmission of various activities, sites or circumstances. On top of that, the thing need not be static: the analysis can be done regularly; and the guidance becomes dynamic. As the numbers grow enormously, the analysis can be stratified — by county, by age group, by any relevant subdivision.

      Perhaps the outcome is still ambiguous? Perhaps in the end we’d still know no more than we did in April. But we cannot *know* that we’d know no more — not unless we do the detective work; in real time; and in large numbers. Unfortunately, the numbers are indeed large; but then there’s no excuse for not looking to see what patterns we can find!
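
      A minimal sketch of the tiered tabulation described above (the record format, routes, and exposure denominators are all invented for illustration):

      ```python
      from collections import Counter

      # Tier 1: set aside reports without enough detail. Tier 2: classify
      # the rest by presumptive route. Tier 3: divide by how much time
      # people spend in each setting to get crude relative risks.
      case_reports = [
          {"id": 1, "route": "household"},
          {"id": 2, "route": "workplace"},
          {"id": 3, "route": None},  # insufficient historical detail
          {"id": 4, "route": "restaurant"},
          {"id": 5, "route": "household"},
      ]
      counts = Counter(r["route"] for r in case_reports if r["route"] is not None)

      # person-hours of exposure per route, e.g. from mobility or survey data
      exposure = {"household": 10000, "workplace": 4000, "restaurant": 500}

      rates = {route: counts[route] / hours for route, hours in exposure.items()}
      baseline = rates["household"]
      for route, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
          print(f"{route}: relative risk vs household ≈ {rate / baseline:.1f}")
      ```

      Rerun regularly as the case count grows, the same tabulation can be stratified by county or age group, exactly as described.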

  11. “In this manuscript, we show how a tendency to preferentially observe large outbreaks can lead to consistent overconfidence in how effective these interventions actually are.”

    I don’t see it in the other comments, but doesn’t this argue that the measured effect should be biased towards greater effectiveness in an epidemic? Therefore, the lack of a population effect means it isn’t as useful as people think.

    I assume squaring the circle is that the intervention helps with certain subpopulations (healthcare workers, elderly, people around the elderly) but is swamped by the general pop variance. The mask mandate argument has the implied assumption that infection probability is homogeneous and masks are equally (and very) effective for everyone. A more nuanced discussion regarding uncertainty (it may not be necessary for everyone) and relatively low cost (but do it anyways) may have received more buy-in on the margin.

  12. I struggle with this issue every day: what is proof? Is the proof the model? We test the model with results, but are those correlations meaningful? What I read in your post was the question of restrictive comprehension: which universe of results matters, which do we exclude or otherwise restrict, so we can see the restrictions imposed externally meet the ones that we use internally to define whether and how one universe fits within other universes. I use universes on purpose because JvN’s method of constructibility applies: between universes is some form of modular transform or operation that compares in and out. This gets at a fundamental – and I think really fundamental – issue: we know the boundary of that is described by pi and that growth is defined by e, and we can put those together in the unit circle, which then underlies all x and y plots, but what is it that’s making the modularity that becomes the action or operation that generates the difference between the identity levels of the universes? This is hard when you idealize to one case, where there’s just outer and inner: it rapidly complicates to relative existences bound by this same process which determines whether this is part of that or this, and in what ways, and how.

    This highlights one of your righteous obsessions: if we need to restrict to see a result, then to what extent are we seeing a false result, one that we find because that is the one that rewards us, whether with internal cognitive validation or in prestige and money? And when does that become knowing? In law school, one of the questions I had to answer was ‘what is willfulness?’ What is willful versus negligent? Is it willful to do an act when you could not foresee the consequences? What if others could? Who? This gets to ‘what are the limits of consent’, and all these are versions of which universe extends how far, and thus what distinguishes this universe as it is constrained externally and as that meets internal generation.

    I saw that Newton’s notes about pyramids are up for sale. The general approach is that he was somehow foolish for investigating them, but this was the man who found that there is an invisible force which holds us to the ground and shapes the orbits of the planets and stars. The tides shifting on earth as the moon shifts the gravitational force. It was rational to assume the ancients metaphorically represented how they saw fundamental truth because Newton was literally the man who took the metaphors and tied them to mathematics. Everyone knew everything falls, but no one could say why, so the idea remained metaphorical. If you extend the metaphor of metaphorical, then you can describe the progress since then as reducing the metaphor to math in stages. Here’s my point: to go backwards, to induce backwards from now to then, you can end up with ‘Newton was a fool in many ways’ or ‘even the smartest guy believed stupid things’. Would you notice that you’re wrong? Would you care that you can’t fit your universe to his without adjustment?

    It matters to me because this same problem appears everywhere. An example is our non-friend Covid: we are finding the model for how the virus works as we are testing results. That is entirely normal. I like to use the example of WWII in the Pacific: they replaced the admirals when they decided to be strategically aggressive and the guys they had in place no longer fit the need. There is no shame in adjustment. But in the chaos of the moment, we see what we can see, and we can see what I call ‘pins’, meaning the places where ideas connect to specific events, so that pins your underlying model to reality or actuality of some kind. We can segment between pins, which creates complex rotations between measurements, and we can segment within pins, which creates complex rotations within the measurement itself, including but not limited to literally how we measure. We construct perspectives across pins, and those perspectives connect at any pin somewhere along its segment. This guarantees a process by which many threads become a larger thread that eventually defines the model that fits better, toward best, whether by multiple iteration, like in simulation, or through continual readjustment of perspective so the pins seem to remain aligned. We can’t help but try to align pins before shifting belief to another perspective.

    I love ‘can’t see it until you see it’ manifestations. To me, they’re real life versions of the ideas behind P and NP, that there are some problems which have a determined universe with either actual or imposable endpoints which pin that end so you can see the fit. And there are some problems where you can’t see it until you see it, because you impose an endpoint and thus a universe or process which resolves to or terminates at that as a 0, and then you have to try again because you don’t know where the actual pin on the other end is until you find it.

    With stuff like drug research, it seems entirely normal to focus on marginal drugs, and not only because multiple pathways to similar results can generate both understanding of the underlying condition, and can be evaluated based on issues like expense and side effects. But also because we learn when we fill up one level of understanding. The math behind that is invertible identity: as the identity completes, it doesn’t just reflect back at you, but also to the side. Like when you look at a painting; it has within it, the connections to its time, to the lives in it, who made it. The more you see in the painting, the more it conveys the meanings that you at first can’t see because you have not delved into what lies beyond the ‘mirror’ surface that reflects light at you. It is a mirror made in the past that carries over time to you. I again use a barbell image: past event connected over complexity to present event, both events themselves complex so you have the connection of two complex fields over a third.

    One place this goes is that it defines any result as being written in potential before it is written in fact, so every event is a partial realization, like the inking of a drawing that has been sketched in pencil, where the inking fixes the image but any fixing is a choice presented to you by the potential constrained by the drawing, constrained by the context, etc. I find it interesting with stuff like Covid: the threads generate, often chaotically, toward better fitting explanatory models, because they pin to the actual model that we are trying to uncover as it is physically embodied in sickness and death.

    I really love the issues you highlight. They spark a lot of thoughts in me.
