What can we learn from COVID burnout?

Burnout has become a central theme of the COVID-19 pandemic, impacting essentially everyone in different ways, from those who were, and continue to be, subject to endless, stressful ‘essential work,’ to those in the relatively privileged position of being trapped at home for more than a year. While there is a pretty clear, clinical-ish definition of burnout from an occupational or psychological perspective, it’s important to attend to the different potential types of burnout and their implications for what we should be doing with our lives and careers. In this post, I want to take the opportunity to think about what can and should be learned from the experience of being ‘burned out’ as a working quantitative epidemiologist.

For many of us working in public health – or I’ll at least speak for myself – this experience has been destabilizing because it has shaken our faith in the meaningfulness of what we do, even as that work has become a painfully relevant and routine part of everyone’s lives. This is a moment when epidemiological data are presented as frequently and casually as the weather. But it’s also a time in which epidemiological models have often been wrong, in which our tools and techniques have been subjected to intense and politicized scrutiny, and in which we as individuals have become characters in the never-ending culture wars.

While before the pandemic we might sometimes have been too confident in our ideas, models, and authority, many of us – again, I’ll speak for myself – are now working through a sort of crisis of confidence. Some of it just comes from the exhaustion of confronting an enormous crisis and being continually reminded of how difficult it is to make a meaningful impact on large-scale outcomes. This exhaustion will likely pass, and when it does, we need to focus on the very real needs – illuminated by this crisis – to change, innovate, and respond more effectively to the next one.

The following three issues have dogged – and continue to dog – me as I try to figure out what to do next. I don’t claim to have come up with all – if any – of these ideas, but I have found them to be among the more significant challenges I’m wrestling with as we stare into the murky post-crisis future:

  1. We are political actors whether or not we like it or want to be. This point is now so obvious and trite that it belongs alongside “all models are wrong” in the COVID-19 rhetorical hall of shame. But it is – I think – an existential question for those of us who work in this field. The limits of our ability to impact the covid crisis as individuals should turn our attention towards collective action in all its messy forms. We need to be able to think more broadly about ‘intervention’ than the action of the state or a quasi-governmental authority operating in a top-down fashion on populations. What this means pragmatically remains unclear to me, but I know that meaningful, sustained, and opinionated engagement with politics and social movements is the only path to long-term utility for the work we do. Otherwise, we can expect to see the same ‘shit rolls downhill’ pattern play out in the next crisis, as the benefits of the insights and innovations of public health and medicine are funnelled first to the wealthy and other privileged groups.
  2. Top-down approaches to interventions generally fail, and infectious disease epidemiology policy is not an exception. We can easily mistake the ability to impose a well thought-out intervention cleanly on a simulation model for a ground-truth base case from which deviations represent failures. The problem is that when the real world turns out to be messier and more dysfunctional than the idealized one we imagine, it is a short leap to blaming individuals and groups for not getting with the program as we imagined it. I sometimes think of this as analogous to a problem I have with my daughter in the winter: if she would get her boots and coat on quickly, we wouldn’t be late for school essentially every morning. I find her inability to move at the speed I want extremely frustrating, especially on a freezing Michigan morning when she needs to wear eight layers of clothes. But the tendency to dawdle is so central to her existence – and to being a four-year-old – that there is no point in comparing our outcomes against those of an alternative universe in which she is an AM speed demon. Rather than asking why she can’t move more quickly, I should probably ask why I don’t start the process of getting ready sooner and make it a bit more fun for her. (But hey, I don’t want to think about that…)
  3. There are sharp limits to the tools of quantitative analysis in public health. We epidemiologists may have long prided ourselves on being one of the hard health sciences, in much the same way that economists and other quantitative social scientists have long been held up as the rigorous thought-leaders of social inquiry as compared to their more-qualitative peers. But just as faith in economic models was at least partially responsible for our collision with the financial iceberg of 2007, we should ask what role quantitative epidemiology may have played in worsening or slowing progress in this crisis, in addition to all the good it has done. For example: What was the potential impact of overly optimistic transmission modeling projections earlier in the pandemic on the trajectory of infection and death? Is there a process for presenting modeling results and their uncertainty in a way that makes it harder to cherry-pick the most politically convenient ones? (One rough sketch of what that could look like follows this list.) Should we be making public forecasts at all?
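
As a purely illustrative sketch of that kind of presentation process: rather than publishing a single headline trajectory, a forecast could be released only as an ensemble summarized by prediction intervals. The code below is a minimal, hypothetical Python illustration – the crude SIR-style simulator, the assumed R0 range, and the choice of intervals are all my own assumptions, not a description of anyone’s actual forecasting pipeline.

```python
# Minimal, hypothetical sketch: summarize an ensemble of epidemic trajectories
# as prediction intervals instead of a single headline curve. The simulator and
# the R0 range are illustrative assumptions, not a real forecasting pipeline.
import numpy as np

rng = np.random.default_rng(1)

def daily_infections(r0, days=120, pop=1_000_000, i0=100, recovery_days=7.0):
    """Crude daily-step SIR run for one sampled R0; returns new infections per day."""
    gamma = 1.0 / recovery_days
    beta = r0 * gamma
    s, i = pop - i0, float(i0)
    out = []
    for _ in range(days):
        new_i = beta * s * i / pop     # new infections today
        s, i = s - new_i, i + new_i - gamma * i
        out.append(new_i)
    return np.array(out)

# Parameter uncertainty: R0 is unknown early on, so sample it from a wide range.
ensemble = np.array([daily_infections(r0) for r0 in rng.uniform(1.5, 3.0, size=500)])

# Report the forecast as a median plus 50% and 90% prediction bands --
# harder to cherry-pick than a single headline trajectory.
q = np.quantile(ensemble, [0.05, 0.25, 0.5, 0.75, 0.95], axis=0)
for day in (30, 60, 90):
    lo90, lo50, med, hi50, hi90 = q[:, day]
    print(f"day {day}: median {med:,.0f} new infections/day "
          f"(50% PI {lo50:,.0f}-{hi50:,.0f}, 90% PI {lo90:,.0f}-{hi90:,.0f})")
```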

To me, these existential questions are frustrating and somewhat confounding, but they also reflect what is – I hope – a once-in-a-career opportunity to reassess the alignment between what I do and the impact I hope to have on the problems I want to address. While my insights are obviously specific to the realm of infectious disease epidemiology, I think they might carry over into the broader world of people who want to do good things – broadly construed – with the tools of the quantitative sciences.

39 thoughts on “What can we learn from COVID burnout?”

  1. “Epidemiologists may have long prided ourselves on being one of the hard health sciences in much the same way that economists and other quantitative social scientists have long been held up as the rigorous thought-leaders of social inquiry as compared to their more-qualitative peers.”

    Good God! Do people who care about “social inquiry” really hold economists in high regard? I shudder. Is there anything, anything at all, that economists agree on, that forms the basis of their models/theories/explanations of how things work, such that it makes sense to call it a “science” at all? Is it just the math? You can do astrology with math.

    Is epidemiology really that weak? Are there no agreed-upon explanations/theories that back any predictions epidemiologists make? Do I not know what epidemiology is? Looking less likely every moment. Cheers!

      • I think probably the relevant comparison is quality of quantitative analysis full-stop, regardless of field. In general, anyone in any field who is proclaiming that their methodological prowess makes them inherently more authoritative than the ‘weaker’ among them maybe isn’t to be trusted?

        • I resonate with this concern expressed by Jon Zeiner:

          “we should ask what role quantitative epidemiology may have played in worsening or slowing progress in this crisis, in addition to all the good it has done.”

          At least on Twitter and YouTube, there have been heated conversations among academics about the earlier modeling of the virus. I can’t assess to what extent the conversations have led to improved decisions and practices. Biden has backed the manufacture and distribution of rapid antigen tests. So by December, we should see more test kits in circulation. It took one year of advocacy to move the needle on these rapid tests.

  2. Jon –

    Not to be critical – but after reading your post I’m not sure what you’re saying. At least for me, I would need more specifics to really respond. For example…

    You say:

    > For example: What was the potential impact of overly-optimistic transmission modeling projections earlier in the pandemic on the trajectory of infection and death?

    I need you to break down more specifically which projections you’re talking about, and made by whom. Some of the projections I saw were overly optimistic. Others turned out to be too pessimistic. Many were just wrong, as all models are wrong, but their errors were leveraged – without proper regard to the accompanying confidence intervals – by advocates seeking to exploit uncertainty to push an agenda. Thus their impact wasn’t so much a function of the modeling errors as of how those errors were exploited. As such, should epidemiologists be held responsible for that impact?

    Much of the impact of modeling errors was due to developments in a fast-moving context that couldn’t have been anticipated, such as variability in the implementation of and compliance with interventions, the unprecedented speed of vaccine development and the resistance to vaccination, the appearance of new variants, etc.

    Basically, I find your post too vague for me to understand what you’re really investigating. It seems that you have more specific questions and concepts in mind – maybe you could spell them out in more detail?

    Or maybe this will be clearer to other readers.

    • Hi Joshua –

      I agree some more detail would be good here; I’m hoping to flesh it out into a longer piece. I think your point about some projections being too optimistic and others too pessimistic is well-taken, but obviously some were promoted more loudly and picked up by politicians and the media for a host of reasons totally external to their quality. To be honest, I’m not sure I can get behind publishing model-based predictions as forecasts so early in the process of an infection emerging, when there’s so little data to use to validate the model. Obviously this is all armchair/Monday-morning quarterbacking b/c I a) have not been in the forecasting business and b) have all the benefit of hindsight we didn’t have at the beginning. To me that’s sort of the point of the post – we’ve done all this stuff while flying by the seat of our pants, and if we don’t take the chance to clearly, honestly, and openly reflect on what went wrong, I’m not sure we’ll be much better prepared the next time around.

      • Jon –

        > To be honest, I’m not sure I can get behind publishing model-based predictions as forecasts so early on in the process of an infection emerging, when there’s so little data to use to validate the model.

        I understand. But it seems to me there’s an inherent problem there of deciding at what point you cross the line into the zone of having sufficient data of sufficient quality.

        Certainly, at some point you go to epidemiological war with the data you have. And if probabilistic projections come with ranges of uncertainty, but people look at the bottom line out of context and say “he predicted 2 million would die,” do you then go back and say that such projections need to be made in secret? I’m not trying to be hyperbolic, just saying I’m not sure it’s possible to draw clear lines here.

        What I’ve seen with climate science is that there’s a tendency to have an unrealistic expectation of perfection, and to think that if projections don’t match reality then the scientists are either frauds (corrupt, perpetrating a “hoax”) or at best blinded by bias or hubristic in their confidence about modeling reality.

        None of that is to say that epidemiologists and public health researchers and officials shouldn’t do an “after action review,” or even that bias or corruption or hubris aren’t real problems. I just worry about adding to the momentum to tear down “expertise” as a general phenomenon, which actually isn’t necessarily 100% a bad idea, except that I think the alternatives available are actually worse.

      • I agree some of this seems too generalized (it was interesting to read tho so thanks for posting).

        > Top-down approaches to interventions generally fail, infectious disease epidemiology policy is not an exception.

        I don’t understand this in the context of the paragraph that follows (and as a standalone statement I don’t see where this is coming from either).

        > We can easily mistake the ability to impose a well thought-out intervention cleanly on a simulation model … The problem with this is that when the real world is more messy and dysfunctional than the idealized

        This sounds more like it’s hard to measure the causal effects of these covid interventions (or even to measure how the policy affects actual behaviors). And saying something is hard to measure is different from saying it fails.

  3. Just like “All models are wrong…”, I think “We are political actors…” is a lesson that is technically true but can also be overlearned. There is still something to be said for public health officials striving to project at least the appearance of impartiality in order to retain some authority with the broader population. Sure, there are those who would seek to demonize science, scientists, and their recommendations regardless. And they will succeed to a degree. But when quantitative public health researchers overreach their expertise, it might make it easier for their opponents to discredit them with some people who otherwise would have heeded their advice. For example, lots of epidemiologists pseudo-officiously used their implied authority to sanction the BLM protests as a net benefit for public health very shortly after condemning some lockdown protests as dangerous. (This was back before we knew that both were probably pretty safe because they were mostly outdoors.) The rationale was that the BLM protests were a once-in-a-generation opportunity to reduce racism, and racism has negative public health consequences, and this would offset the expected spread of covid (again, it wasn’t yet appreciated that outdoor events were safe). But how were epidemiologists qualified to predict the social impact of the protests to factor this into their risk-benefit analyses? They overreached, looked bad to many, and I would argue lost at least a little bit of influence as a result. (I say this as someone who fully supported the protests.)

    I also don’t think epidemiologists are qualified or have time to learn how to be effective political actors or messengers. This shouldn’t be part of the job description. Political organizations and government agencies should do better at digesting and communicating scientific consensus to both policy makers and the public. Some of these organizations can be activist in nature, but at least some large and well respected agencies should try to retain as much of an air of objectivity as possible.

    • I agree. As Andrew has said several times: “Transparency and honesty are not enough” to which I always add, ‘but they’re essential to get the truth off the ground in the public sphere.’ Transparency and honesty aren’t nearly as simple as they look, though, because they invite critique — your only defense is the method and the data, and the “other side” has made no promises to behave similarly, and it’s human to want to add more defenses. But as Z says here, those defenses are often illusory.

    • I think there’s a difference between the recognition that you’re making political – or at least politically relevant – statements with your research/practice and overwrought hashtag-tivism, although I do think some of the blowback from the latter came from a desire to preserve a scientific distance and remove that may be at variance with a values-driven field. I think the problem is that if the things you work on are best intervened on through some kind of political or collective action, we often default to this position of making ‘policy proposals,’ which still inherently falls on the side of power/govt but can be interpreted as politically neutral even though it isn’t. But if you’re recommending some kind of protest, labor action, etc., then that becomes political/non-neutral – though I’m not sure how much more biased that is than imagining yourself as being only in the business of constructing plans for local/state/national governments and NGOs.

      • I would say that as an epidemiologist you shouldn’t recommend policies. You should estimate their effects. To be clear, I think this somewhat fake facade of impartiality is especially important for public health. There are tons of economists who decide the minimum wage should be higher or taxes should be lower and essentially become activists for those ideas and I don’t think that does too much damage because there’s no reason the general public needs to trust economists. In public health, I think there should be a separation between lobbyists and evidence generators (like you and other epidemiologists) to allow for the possibility of widely trusted evidence interpreters who communicate what the evidence generators produce.

        • (It might have been a little strong to say there’s no reason the public needs to trust economists. If a candidate claims their policies would have certain beneficial effects and 99% of serious economists disagree, ideally that would make the candidate more likely to lose.)

    • I am not an epidemiologist but work in a lab site for a big company. At work we were given specific actions depending on our roles. Adaptations were made, adjusted and re-adjusted based on local risk.

      For the most part the public health sector failed at this. I agree that the George Floyd protests were a breaking point. Why was packing people into a small area acceptable for protestors but not for partiers or bikers? Nobody seemed able to communicate that a few cases in a city of 50,000 with one hospital might lead to crowded hospitals two weeks later – not a month earlier or later. We have higher death rates in smaller communities. We also have vast numbers of professionals leaving public health because of the personal attacks. On the flip side we have hysterical people who are still afraid to leave their house or go out to eat.

      Overall it was a failure of empathy, to realize that not everyone thinks in 2 x 2 crosstabs.

      • I think this was tricky b/c the advice was often focused on the idea of balancing the risk associated with attending a social justice protest against the potential long-run benefits for health disparities that could come out of that sort of collective action. I don’t think it’s problematic to be part of the conversation about that risk-balancing, or even to have an opinion about it vs. the risk/benefit of attending Sturgis. But I do think the lack of empathy is critical here – I think it’s possible to be political without being demonizing and reductive about people who aren’t quite on the same page.

        • > I think it’s possible to be political without being demonizing and reductive about people who aren’t quite on the same page.

          +1

  4. Appreciate the self-reflection and humility.

    Re: 1 and 2, if you haven’t already, you may want to look into work from Public Choice, Elinor and Vincent Ostrom, and Ronald Coase. The Ostroms in particular focused on researching “interventions” without top-down state action just like you mentioned. Just a thought, mahalos again for the post.

  5. I’ve criticized the coverage of covid-adjacent subjects on this blog multiple times for a number of reasons, so I feel like I should give my two cents here also:

    This is a thoughtful and well-written blog post and I appreciate seeing evidence of intellectual sophistication from epidemiologists. I think that I’ve sometimes wrongly assumed that some of the points you raise are (perhaps willfully) ignored by the whole field, but perhaps I was extrapolating to the whole field from those epidemiologists with the most exposure. Keep up the good work! It’s always humbling to discern the limits of a field, but as in economics, it is better to notice boundaries when they exist than to pretend they are not there.

  6. Z –

    > But how were epidemiologists qualified to predict the social impact of the protests to factor this into their risk benefit analyses? They overreached, looked bad to many, and I would argue lost at least a little bit of influence as a result. (I say this as someone who fully supported the protests.)

    This argument echoes one I’ve seen often with scientists about climate change – that climate scientists are straying outside their lane by advocating for policies related to their science, and that if they do so, they run the risk of looking bad and undermining the quality or objectivity of their scientific product.

    I don’t doubt this argument is relevant in the margins. Some % of the public may lose confidence in scientists if they see that the scientists are advocating for policies which have political implications they don’t support.

    On the other hand, (1) I think that more often the causality runs the other way. People view an issue largely based on their political overlay, and then selectively choose politically-associated advocacy on the part of scientists to dismiss (or accept) science that has implications that don’t fit (or do fit) their political preferences. With climate science, a very prominent pattern is that right-leaning folks retroactively reverse-engineer from something like “climategate” to explain why they lost confidence in the work of climate scientists. But looking at their political orientation suggests that something like “climategate” was actually just a rationalization for a view they would likely have had regardless. And (2) for others there could be an opposite reaction. In the case of climate science, people with a left-leaning orientation look at something like “climategate” as reinforcing their trust in the climate scientists – and if said climate scientists DIDN’T weigh in on related political issues they’d lose credibility within that other segment of the public.

    My guess is that the net effect is marginally small. The public’s acceptance or rejection of what scientists say about political issues adjacent to their science doesn’t explain as much about public viewpoints as the tribal orientation of the public itself. Saying that the scientists themselves (what they say or don’t say and the political impact of what they say) are a significant driver of the causal dynamic has a certain common-sense logic, but it doesn’t seem to me to add up in full context.

    Further, is it realistic anyway to expect that scientists would not weigh in on political issues they think are directly related to their science?

    • I agree tribal orientation is the bigger effect when present. But:

      1) Not everything gets tribalized. It was really bad that covid did, but lots of public health issues are not, and it would be good for the public health establishment to have some general credibility as the default starting point for future crises.

      2) While the immediate negative effects of scientist activism might be marginal, I believe so are the positive effects. The main political impact of scientists is to create scientific consensus where possible and communicate it so that people who care can use it to inform decisions/policies. The implications of climate change models are sufficiently alarming that scientists saying that they are personally alarmed doesn’t add much. If scientists explicitly embrace activism as a goal, however, I think long run there’s a risk that a non-marginal number of people will ignore science.

      • Looks like Z and Anonymous are one and the same?

        > 1) Not everything gets tribalized. It was really bad that covid did, but lots of public health issues are not, and it would be good for the public health establishment to have some general credibility as the default starting point for future crises.

        But public health has been under attack for political reasons, as have academic communities and “experts” more generally. Some issues don’t get politicized but I think there’s a tendency for it to be happening more and more across a wide spectrum of issues. I’m thinking, for example, of how the public reaction to the public health establishment on Ebola became politicized.
        How does the general public health community rise above that trend?

        > 2) While the immediate negative effects of scientist activism might be marginal, I believe so are the positive effects.

        I agree. I tried to convey that view.

        > The main political impact of scientists is to create scientific consensus where possible and communicate it so that people who care can use it to inform decisions/policies.

        But then there’s the problem where communicating about a consensus actually seems to have a very limited impact or a counterproductive effect – as seen with the whole question of the flaws of the “information deficit model” as a framework for science communication.

        > The implications of climate change models are sufficiently alarming that scientists saying that they are personally alarmed doesn’t add much. If scientists explicitly embrace activism as a goal, however, I think long run there’s a risk that a non-marginal number of people will ignore science.

        I accept that’s a risk. And again, I see the common-sense logic. I haven’t seen, however, much evidence that it takes place, even though it is often argued that it is a strong causal mechanism in play. Again, this question has been litigated a whole lot related to climate change for a long time. I’ve seen very little strong evidence for the theorized causal mechanism – and what we see instead is an overwhelmingly strong ideological signal in views on climate change.

  7. Every line of work has its stresses, and burn-out means being made less functional by those stresses. In my field we looked at burn-out among nurses some years ago. We found the lowest rates of job turnover among oncology nurses, who care for many doomed patients, and the highest among ICU nurses. It looked to us like the ICU is very technical, with the nurses spending a lot of time watching screens displaying ECG, blood pressure, oximetry, etc., while the oncology nurses spent time having human interactions. The humanitarian impulses, which are great in all nurses, were not met by studying the data of intubated folks who left when they didn’t need the ET tube.
    Abstraction plus stress results in burn-out.

    • “The humanitarian impulses which are great in all nurses were not met by studying the data of intubated folks who left when they didn’t need the ET tube.”

      Perhaps there’s another way to look at the info you have.

      The ICU is stressful because people’s lives or the quality of their future lives hinge critically on every decision. Nurses are using all that gear, combining it presumably with doctor instructions, their personal knowledge and other tests and data to make highly consequential decisions and doing it frequently. Making sure you’ve covered every detail from among these disparate data sources before making a decision is extremely stressful, particularly as more pressure is bearing down on medical mistakes. It’s not the gear itself or the screens or the data that’s the problem. It’s assembling that disparate information and making a critical decision from it that’s stressful.

      OTOH with doomed cancer patients, decisions don’t matter. It’s not that it appeals to their caring humanitarian impulses. It’s that no significant consequences come from their actions; they don’t have to worry – at least not nearly as much – about making mistakes.

  8. Epidemiology and Public Health are different fields, and sometimes “different hats” on the same person.
    For Public Health, I learned a lot about risk communication from Peter Sandman’s website; he has collected his Covid-related commentary there.

    I think it’s not fair to blame epidemiology for failures of public policy, because epidemiology is what it is, and public health should know that. The idea that some epidemiologic research/knowledge is so damaging that it should not be made public actually feels more like a framing problem to me: it’s more about communicating uncertainty than about being wrong.

    It’d also be nice to see more interdisciplinary research in epidemiology modeling. For example, a SIR model doesn’t really reflect the observed demographic differences in Covid incidence and death rates. As in economics, using models that assume an “ideal” population won’t reflect real people properly. (Cue the anecdote about the dawdling 4-year-old; see the toy sketch below for what I mean.)
    Covid has made this discrepancy very clear, and I’m hoping that many epidemiologists see this as an opportunity to expand the field and get better.
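
    As a toy illustration of what I mean (a sketch only – the group sizes, contact matrix, transmission probability, and infection fatality ratios below are invented numbers, not estimates for any real population), here is a minimal two-age-group SIR model in Python. The point is just that the same epidemic machinery produces very different outcomes once the population is no longer treated as homogeneous.

    ```python
    # Sketch only: a two-group (young/old) SIR model with an assumed contact
    # matrix and age-specific infection fatality ratios (IFRs). All numbers
    # are illustrative, not estimates for any real population.
    import numpy as np
    from scipy.integrate import solve_ivp

    N = np.array([800_000, 200_000])      # group sizes: young, old (assumed)
    C = np.array([[8.0, 2.0],             # assumed daily contacts: row = group of
                  [2.0, 3.0]])            #   the susceptible, column = contact's group
    p_inf = 0.03                          # assumed transmission prob. per contact
    gamma = 1.0 / 7.0                     # assumed 7-day infectious period
    ifr = np.array([0.0005, 0.05])        # assumed IFRs: 0.05% young, 5% old

    def deriv(t, y):
        s, i = y[:2], y[2:]
        foi = p_inf * C @ (i / N)         # force of infection on each group
        return np.concatenate([-foi * s, foi * s - gamma * i])

    y0 = np.concatenate([N - [100, 10], [100.0, 10.0]])
    sol = solve_ivp(deriv, (0, 365), y0, max_step=1.0)

    attack = 1 - sol.y[:2, -1] / N        # fraction ever infected, by group
    deaths = attack * N * ifr
    print("attack rate (young, old):", np.round(attack, 2))
    print("deaths (young, old):", np.round(deaths).astype(int), "total:", int(deaths.sum()))
    ```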

      • > The idea that some epidemiologic research/knowledge is so damaging that it should not be made public actually feels more like a framing problem to me: it’s more about communicating uncertainty than about being wrong

        I agree. But I think it’s also about how people deal with uncertainty from the other side as much as it is about how to communicate about uncertainty. The two go hand in glove, and you can’t communicate uncertainty perfectly when people are naturally inclined towards well-known cognitive biases when facing uncertainty.

        > For example, a SIR model doesn’t really reflect the observed demographic differences of Covid incidence and death rates.

        It’s been interesting to read about some of the attempts to model COVID while accounting for heterogeneity, which made sense to me conceptually but in the end came up with clearly falsified projections of extremely low herd immunity thresholds – projections that in turn helped to buttress much of the “let it rip” advocacy that has arguably led to many unnecessary deaths and avoidable illness. (A back-of-the-envelope version of that argument is sketched below.)
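
        For what it’s worth, here is my own back-of-the-envelope reconstruction of that style of argument (assuming gamma-distributed susceptibility, which is a modeling assumption in those papers, not an established fact about any real population). The striking thing is how strongly the implied threshold depends on a coefficient of variation that we can’t really measure.

        ```python
        # Rough reconstruction, for intuition only: how assumed individual variation
        # in susceptibility lowers the disease-induced herd immunity threshold (HIT).
        # With gamma-distributed susceptibility (coefficient of variation CV), the most
        # susceptible people are infected first, so transmission stalls earlier:
        #   HIT = 1 - (1/R0)^(1/(1 + CV^2)),  and CV = 0 recovers the classic 1 - 1/R0.
        def herd_immunity_threshold(r0: float, cv: float) -> float:
            return 1 - (1 / r0) ** (1 / (1 + cv**2))

        r0 = 3.0
        for cv in (0.0, 1.0, 2.0, 3.0):
            print(f"CV = {cv:.0f}: HIT ~ {herd_immunity_threshold(r0, cv):.0%}")
        # CV = 0 gives ~67%; large (and essentially unverifiable) CVs give the very
        # low thresholds that fed the 'let it rip' argument.
        ```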

      • Thank you, that’s exactly what I was going for! I especially like that you incorporated feedback from social and sociomedical collaborators.

        Your paper puts your first point from the above blog post in sharp relief: “We are political actors whether or not we like it or want to be.” The decision to incorporate socio-economic status variables into a model is political, as it determines whether the model is capable of supporting social change or not.

        It looks like you’re somewhat optimistic about getting a handle on socioeconomic factors such as the effects of housing and occupation on transmission, but in despair about the psychological and political factors driving the Covid response: for example, it may be hard to predict to what extent Russian trolls and media outlets will drive misinformation and conspiracy theories in the next pandemic.

        I’m hoping that it’ll be at least possible to look back, identify, and model these influences, so that they can be monitored and identified as sources of uncertainty. Inasmuch as misinformation is information warfare, epidemiological science can hardly be expected to predict the outcome of the war; but it’d be nice if we could extract some ongoing information on how the battle is going and how that’s going to affect the epidemic and the public health.

        Obviously these kinds of input have a political dimension as well:

        The effect of all of these communications failures is diffuse and uncertain. But we do know this much, according to new Cornell University research [“Covid Misinformation: …”, Evanega et al.]: The president of the United States was the loudest megaphone for Covid-19 misinformation during the first few months of the pandemic. [..]

        Of those million-plus articles with misinformation, about 38 percent of them featured Donald Trump and some specific kind of misleading claim of which the president is fond, or a general reference to his penchant for spreading false information.

        If you put these sorts of things in the models (and they’re arguably influences on the epidemic), that’ll be politically controversial at first, too.

        An easy way to burn out a subordinate is to make them responsible for a job, but to deprive them of the tools to actually do it.
        If you feel you lack the power to change your field, but also perceive that you can’t deliver the results we need without those changes, you’re in a similar position. The only antidote I know of is to band together with like-minded individuals and support each other.

  9. “Top-down approaches to interventions generally fail, infectious disease epidemiology policy is not an exception.”

    Can one conclude that without giving in to the (very top-down) PR machines that were quick to employ their old astroturf denialism playbooks during the onset of this pandemic?

  10. Jon,
    It is remarkable to me, as an old guy (80), that you and those commenting largely take the current political situation as normal. I don’t glorify the past (I remember McCarthyism and Reagan and the Gulf of Tonkin and weapons of mass destruction and all that), but Trump and Trumpism are different, and scary as Hell.

    That said, I generally agree with you. Biologically, people are social animals, and better at rationalizing than at being rational, and people doing public health need to take that into account. And few people understand enough about models to know how to think about them or their predictions. As an example, our local public health officer, who is a friend, knew perfectly well that the actual course of the epidemic would depend on how people behave, but he did not connect that to the uncertainty in model predictions until I pointed it out to him.

    • Definitely different and definitely scary, but to me it feels like the logical continuation of the path we’ve been on politically for most of my life, which started right as the Reaganomics train was leaving the station. I think the difference in outcomes, at least in terms of inequality, wouldn’t have been huge under Reagan or Clinton or Bush I/II, because it’s about inequities that are deeply baked into the politics of prevention in the U.S. and our comfort with the linkage between wealth, identity, and health. One only needs to look back to the early days of the HIV/AIDS crisis to see the exact same type of denial and disregard coming from the govt.

  11. I’m grateful for your self-reflection. Thank you.

    In my opinion, a 4th item should be added to the list (although in rank order, it should be #1), and that is the need for humility. I don’t think of epidemiologists as the worst offenders in this regard (indeed, they may as a group be on the positive side of the scale), but we’ve all consumed too many numbers during the pandemic that would have been better replaced with an honest “we don’t know” and/or “it will depend”.

    Humility goes beyond including confidence intervals or understanding the existence of variability. Humility requires seriously evaluating what we actually know and what we actually don’t. In this example, we did not know how many cases there were (especially early on), we did not know R0, we did not know how the public would respond on their own, we did not know how they would respond to various policies. On the other hand, we do know very well the mathematical characteristics of the models used. But to plug numbers into a model and generate an estimate, even with confidence intervals, is not to actually grapple with the real uncertainty.
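
    To make that concrete with a toy calculation (my numbers, purely for illustration): even in the textbook homogeneous SIR world, where the final attack rate z solves z = 1 − exp(−R0·z), the range of plausible early-pandemic guesses for R0 spans wildly different outcomes – and no confidence interval on the output changes the fact that the input was essentially unknown.

    ```python
    # Toy illustration of "garbage in, garbage out": the classic SIR final-size
    # relation z = 1 - exp(-R0 * z), solved for several plausible guesses of R0.
    # The spread in outcomes comes from not knowing R0, not from the model's math.
    import math

    def final_size(r0: float, tol: float = 1e-10) -> float:
        """Solve z = 1 - exp(-r0 * z) for the attack rate z by fixed-point iteration."""
        z = 0.5
        for _ in range(10_000):
            z_new = 1 - math.exp(-r0 * z)
            if abs(z_new - z) < tol:
                return z_new
            z = z_new
        return z

    for r0 in (1.5, 2.0, 2.5, 3.0):
        print(f"R0 = {r0}: final attack rate ~ {final_size(r0):.0%}")
    # R0 anywhere from 1.5 to 3 -- all 'reasonable' early guesses -- implies
    # roughly 60% to 94% of the population infected in an unmitigated epidemic.
    ```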

    tldr: “Garbage in; garbage out” or “It’s measurement all the way down”

    A real reckoning would require a deeper embrace of humility than I’ve seen from the public-facing scientific community.

    • In what ways do you feel science has failed?

      My sense is exactly opposite, but then I expect I live in a different country from you, so my experience was different.

      Science quickly identified the main mechanics of the spread and provided several ways to diagnose Covid, and it also identified age as the main risk factor, as well as others, all within the first two months. The development of a safe vaccine was greatly accelerated compared to how long this usually takes. Developing a cure is the hardest part, though even that has been remarkably successful compared to some other viral diseases.

      • Yeah, I tend to agree that from a science side it’s been pretty much a bang-up success even if there are lots of things we could/should have done better. To me the biggest success on the epidemiology side is in the rapid genotyping and phylogenetic/phylogeographic analysis that gave us a very good sense of how it was spreading even when the number of observed cases was relatively low. Obviously the science communication part has been botched, but a lot of it comes down to social and political failures that no amount of good science-ing could have prevented.
