Relative vs. absolute risk reduction . . . 500 doctors want to know!

Some stranger writes:

What are your thoughts on this paper? Especially the paragraph on page 6 “Similar to the critical appraisal ….. respectively”.
There are many of us MD’s who are quite foxed.
If you blog about it, please don’t mention my name and just say a doctor on a 500-member listserv asked you about this. And send me the link to that blog article please. There are at least 500 of us doctors who would love to be enlightened.

The link is to an article called, “Outcome Reporting Bias in COVID-19 mRNA Vaccine Clinical Trials,” which argues that when reporting results from coronavirus vaccine trials, they should be giving absolute risk rather than relative risk. These have the same numerator, different denominators. Let X be the number of cases that would occur under the treatment, Y be the number of cases that would occur under the control, and Z be the number of people in the population. The relative risk reduction (which is what we usually see) is (Y – X)/Y and the absolute risk reduction is (Y – X)/Z. So, for example, if X = 50, Y = 1000, and Z = 1 million, then the relative risk reduction is 95% but the absolute risk reduction is only 0.00095, or about a tenth of one percent. Here’s the wikipedia page.
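To make the arithmetic concrete, here is a minimal sketch of the two formulas using the X, Y, Z from the example above (the function names are mine, not from the paper):

```python
def relative_risk_reduction(x, y):
    """(Y - X) / Y: the share of would-be cases the treatment prevents."""
    return (y - x) / y

def absolute_risk_reduction(x, y, z):
    """(Y - X) / Z: prevented cases as a share of the whole population."""
    return (y - x) / z

# X = cases under treatment, Y = cases under control, Z = population size
x, y, z = 50, 1000, 1_000_000
print(relative_risk_reduction(x, y))     # 0.95, i.e. 95%
print(absolute_risk_reduction(x, y, z))  # 0.00095, about a tenth of one percent
```

Same numerator in both cases; only the denominator changes.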

What do I think about the above-linked article? It’s basically rehashing the wikipedia page. I’m not talking about Weggy-style plagiarism here, just that this is standard material. There’s nothing wrong with publishing standard material if people need to be reminded of some important message.

That said, I don’t think this article is particularly useful. I don’t think it conveys an important reason for parameterizing in terms of relative risk, which is that it can be much more stable than absolute risk. If the vaccine prevents 95% of infections, then that’s 95% of however many occur, which is a more relevant number than comparing to how many happened to occur in the control group in some particular study. Conversely, if that 95% is not a stable number, so that the vaccine prevents a greater proportion of infections in some settings and a lesser proportion in others, then we’d want to know that—this is an interaction—but, again, the absolute risk difference isn’t so relevant. Absolute risk does matter in some settings—for example, we wouldn’t be so interested in a drug that prevents 50% of cases in a disease that only affects 2 people in the world (unless knowing this would give us a clue of how to treat other diseases that have higher prevalence), but of course coronavirus is not a rare disease. Presumably the rate of infection was so low in those studies only because the participants were keeping pretty careful, but the purpose of the vaccines is to give it to everyone so we don’t have to go around keeping so careful.

So, unless there’s something I’m missing (that happens!), I disagree with the claim in the linked paper that failures to report absolute risk reduction “mislead and distort the public’s interpretation of COVID-19 mRNA vaccine efficacy and violate the ethical and legal obligations of informed consent.”

P.S. I’d never heard the term “foxed” before. From the above message, I’m guessing that it means “confused.” I googled the word, and according to the internet dictionary, it means “deceived; tricked,” but that doesn’t seem to fit the context.

122 thoughts on “Relative vs. absolute risk reduction . . . 500 doctors want to know!”

    • Jon:

      Thanks. The key difference between the coronavirus-vaccine example and the rare-disease example is that coronavirus is not a rare disease. It just looks rare in the data because of how the experiment was designed. Looking at it this way, the key flaw in the above-linked article is that it is focused on the data at hand rather than on the underlying quantities of interest. That happens a lot in statistics, that people get sucked into discussions of estimates and standard errors without stepping back to consider what questions to ask.

      • How rare is “rare?” (I get varying empirical answers to this question when I ask for steak at a restaurant, so you don’t need to give a precise answer here.) While something on the order of 30 million people have gotten the disease in the US, when they were testing the virus in the US it was around 40,000 cases per day; given a two-week virulence period that suggests about 600,000 cases or so, or only about 0.2% of the population.

        • … though you really shouldn’t type “virus” when you mean “vaccine” or nobody will understand what you’re talking about. Unfortunately, those errors (by me) are not rare. [Edit button!!!]

        • Jonathan:

          I guess the vaccine is supposed to work for a while, so I think the relevant rate is not the percentage of people who will get it in any particular two-week period, but rather the percentage of people who will get it ever.

        • Sure, but take the Pfizer vaccine initial announcement: https://www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-conclude-phase-3-study-covid-19-vaccine
          “The Phase 3 clinical trial of BNT162b2 began on July 27 and has enrolled 43,661 participants to date, 41,135 of whom have received a second dose of the vaccine candidate as of November 13, 2020.”

          So of the roughly 20,000 unvaccinated people (I assume the 41,135 must mean either second dose of vaccine or placebo), 162 got Covid over a 4-month period. That’s still only a rate of 0.8 percent. Obviously, if you extrapolate out in time then that rate rises, but that’s the data they used to approve the vaccine. (It’s a little more complicated than that because (a) many of the enrollees had less than 4 months of exposure and (b) the underlying rate by November was considerably higher than 0.2%. Those two effects work in opposite directions.)

        • Remember that the Pfizer study was not proactively testing for infection; it only counted symptomatic cases that were reported.

        • To date no vaccine has actually been approved. The premise of the article in discussion pertains to informed consent, so it seems important to not get sloppy with language around this point.

        • Whether the vaccines have been approved or not is unimportant. What is important is whether they prove to be effective or not.

        • Like every attempt at predicting or modeling this disease, getting from the relative risk reduction in a fast-tracked clinical trial to absolute risk reduction long-term in the population means assuming facts not in evidence. Specifically: (1) for how long will a vaccine reduce risk similarly to what was observed in the trial, and (2) to what degree does the vaccine reduce transmission? Add the absolute risk reduction to the list of COVID “facts” that can take on ‘most any value you like depending on the assumptions you use for the unknown quantities!

        • Hopefully the questions you raise are being pursued practically. It’s a hit and miss effort in some substantive sense.

          The problem is that relative risk reduction has translated into a marketing strategy that exaggerates risks and benefits, as Welch notes in the article I posted.

        • as far as I know, (for risk assessment) usually it’s like:

          – very common: P > 10%
          – common: 10% > P > 1%
          – not common: 1% > P > 0.1%
          – rare: 0.1% > P > 0.01%
          – very rare: P < 0.01%

          on the topic, besides the other links I find this https://academic.oup.com/ndt/article/32/suppl_2/ii13/3056571#64437158 quite neat; it also briefly touches the ‘number needed to treat’ (NNT, or NNV in this case) thing.
          Absolute risk is quite important to know: the ‘treatment’ comes with a risk/cost too (it might even reduce the disease fatality by 100%, but adds its own).
          My risk of dying (in a given time frame) is my expected context/situation risk of getting infected x my expected category IFR, versus the expected fatality risk of the vaccine (e.g. flu vax is some 3-4 ppm of severe/fatal outcome, < 1/100k overall of severe adverse effects). I’d want the latter well below the former (say, under 10% of it) to go for that jab.
          Suppose my overall chance of getting infected in say 1y is 10% and my expected IFR is some 0.05%; then my acceptable limit would be a risk for the jab < 10% x 10% x 0.05% = 5 ppm.
          Anyway my overall expected 1y-chance of dying of that disease is some 0.005%, and even a 10-20 fold reduction from that level wouldn’t look very impressive to me, regardless of the added risk.
          As others have pointed out already, these figures are very time-varying.
          Clearly, if (when) I were in the 5+% IFR category I’d likely rush for the jab.
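The commenter’s back-of-the-envelope threshold can be written out as follows; every input here is the commenter’s own assumption, not an established figure:

```python
# All values are the commenter's hypothetical assumptions.
p_infection = 0.10    # assumed 1-year chance of getting infected
ifr         = 0.0005  # assumed infection fatality rate for one's category (0.05%)
margin      = 0.10    # jab's own risk should be well below (say 10% of) the disease risk

disease_risk = p_infection * ifr           # 1-year chance of dying of the disease
acceptable_jab_risk = margin * disease_risk

print(disease_risk)         # about 5e-05, i.e. 0.005%
print(acceptable_jab_risk)  # about 5e-06, i.e. 5 ppm
```

The point being illustrated: with a low personal IFR, the acceptable added risk from the treatment itself becomes very small.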

      • Other factors to take into account include the number of infections over time: waning effectiveness within 6 months or less has resulted, in numerous highly vaxxed societies, in the vaccinated having infection rates equaling or even exceeding those of the unvaccinated. US stats are less reliable on this factor since asymmetrical reporting occurs here.

        • That’s only true if you don’t differentiate by age, throwing healthy young unvaccinated and sickly old vaccinated in the same pit (and of course that’s misleading).

    • From the FDA’s Communicating Risks and Benefits: An Evidence-Based User’s Guide 2011. Note absolute risk is mentioned 11 times. A few tidbits:

      ‘Another statistical choice is between reporting relative or absolute
      risks. Because there is no way to infer the latter from the former, absolute
      risks are always more informative. Doubling a risk means very different things
      if that entails going from 10% to 20% or from 0.001% to 0.002%. Even when
      they contain the same information, different summaries can highlight different
      perspectives, hence bias choices.’ From the conclusion:

      ‘What general practical advice can the science support?
      In this final section, we recommend ways to nudge individuals towards
      better comprehension and greater welfare. How to present information is an
      important choice for information providers that should be made with care
      using an evidence-based approach.
      2. Provide absolute risks, not just relative risks. Patients are unduly
      influenced when risk information is presented using a relative risk
      approach; this can result in suboptimal decisions. Thus, an absolute risk
      format should be used.’

      I guess we had a different FDA in 2011. There is quite a bit more to the report.
      https://www.fda.gov/files/about%20fda/published/Communicating-Risk-and-Benefits—An-Evidence-Based-User%27s-Guide-%28Printer-Friendly%29.pdf

      • With Covid, absolute risk is changing on a monthly (sometimes weekly) basis, and hence it is not a good basis for medium-term decisions.
        That guide dealt with more stable risk assessments.

        • Mendel:

          Yes, exactly. Another way of putting it is that absolute risk is a good predictive quantity to look at, but relative risk can make more sense as a generalizable parameter.

        • The absolute risk of interest is the net total risk at “infinite time” (or basically, when the virus becomes endemic and has been so for a while).

          The best estimate of that absolute risk reduction is the relative risk reduction * the asymptotic risk of exposure to an infectious dose (for an unvaccinated person), which is close enough to 100% that the relative risk reduction and absolute risk reduction for COVID are basically the same thing.
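A quick sketch of the approximation being claimed here (the function name is mine): as the long-run exposure risk approaches 100%, the ARR approaches the RRR.

```python
def long_run_arr(rrr, exposure_risk):
    # baseline risk ~ exposure_risk; vaccinated risk ~ exposure_risk * (1 - rrr)
    # so the difference is rrr * exposure_risk
    return rrr * exposure_risk

print(long_run_arr(0.95, 0.10))  # short-trial-like exposure: small ARR
print(long_run_arr(0.95, 1.00))  # near-universal exposure: ARR equals the RRR
```

Whether the asymptotic exposure risk really approaches 100% is the contested assumption in this thread.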

  1. From the article:

    > Such examples of outcome reporting bias mislead and distort the public’s interpretation of COVID-19 mRNA vaccine efficacy and violate the ethical and legal obligations of informed consent.

    What’s the non-biased baseline on which to determine bias and distortion? What is the baseline of ethical and legal obligations by which to determine what comprises a “violation”?

    Seems that the authors believe that their opinions on these subjective questions form those factual baselines.

  2. Efficacy of vaccines -as well as efficacy of any other treatment- has always been reported as relative risk reduction (RRR).
    Absolute risk reduction (ARR) is intended to measure how many people would be benefited from the vaccine (or treatment) out of those receiving it.

    The problem with ARR in covid-19 vaccines is that the follow-up had to be very short (just 2-3 months) in order to make decisions asap. That makes the infection rate very low.

    Another point with covid-19 is the lack of independence between cases. As in any transmissible disease, to prevent one case by vaccination is not just to prevent one case: it is also to prevent the whole chain of generation of cases transmitted by the primary case.

    • I am not clear as to why the FDA can’t include an ARR, as it need not come only from the clinical trial data. It can be calculated later as a function of the measured effectiveness of the vaccine. Maybe by June, as then you will have had time to monitor vaccine effectiveness.

      That article is a bit more technical. But I think Gilbert Welch has penned an article for patients and consumers of medical research:

      The Problem is Relative

      https://www.huffpost.com/entry/health-risk_b_1613912

      • Agreed – I think the problem in the framing is the either/or. I think there’s a good case for having both, as Andrew’s comments implicitly draw out by giving some conditionality. I was about to write about the dangers of how these things are publicised, but that huffpost link says it better than I can.

        BTW ‘foxed’ may be a geographic thing as I’m very familiar with it – my guess is the author might be British!

        • Brian,

          Hey hey, not to be outfoxed.

          My understanding is that the Consolidated Standards of Reporting Trials [CONSORT] recommends that both the relative and absolute effects be reported with their confidence intervals. Moreover, it recommends that event rates be reported along with their denominators, so that patients and physicians understand how the risks were calculated.

      • Rachel

        > Everyone I know who’s gotten Covid this month is vaccinated.

        Is the plural of anecdote data?

        At any rate, vaccines don’t “stop” transmission in the sense of 100%, but it seems that they reduce transmission among those not previously infected, which has a significant impact at the population level.

  3. Absolute vs. Relative has plagued statistics from the outset. Textbooks of years ago would often show real examples (such as the Pentagon budget of one year vs. the next) of histograms, with and without the origin pictured specifically, in order to illustrate how a seemingly large relative difference could be considered tiny in absolute terms. The choice of absolute or relative was psychological and/or situation dependent.
    With regard to the Covid crisis, there is even disagreement regarding the numbers used to calculate anything. A depressing number of people in the U.S. still even doubt the existence of the disease, let alone the number who have been affected by the disease and cured by bleach.

    • The late Kary Mullis, as many of you know, was the Nobel Prize winner in 1993 (in chemistry) for his invention of the polymerase chain reaction (PCR) test.
      I watched a video of him explaining that the number of cycles used in the test drastically affects its accuracy. In fact, if enough cycles are used, anyone can test positive for anything at all.

      In fact, Oxford Academic published a study which resulted in the following finding:
      “The verdict referenced a study published by Oxford Academic in September on the correlation between 3,790 positive PCR tests and 1,941 SARS-CoV-2 isolates. The researchers found that at a cycle threshold (ct) of 25, the test was 70 percent reliable, a figure that dropped to 20 percent at 30 cycles, and just three percent at 35 cycles. That meant 97 percent were false positives, yet that was used “in most laboratories in the USA and Europe.”

      Interestingly, [Mullis] said men like Fauci have an agenda which “is not what we would like them to have, being that we pay for them to take care of our health in some way. They have a personal kind of agenda. They make up their own rules as they go; they change them when they want to. And they (are smug)—like Tony Fauci does not mind going on television in front of the people who pay his salary and lie directly into the camera.’ ” *

      The above data, along with this brilliant scientist’s personal character assessment of the man fueling the Covid hysteria, is why so many Americans are questioning the popular narrative. We are suspicious of government’s & Big Pharma’s agenda. (As the old adage goes: how do you know when a politician is lying? His lips are moving.) And Fauci is more politician than doctor – always has been.

      For all of these reasons, we prefer to seek out the sources of their “scientific data”, their “statistics”, as well as detective 101 research…FOLLOW THE MONEY.

      But perhaps this will all fall on deaf ears…
      Perhaps, like our Dear Leader, pedoBiden, you see most people as peasants – “the ignorant masses”, etc. Strange, as we are the ones that put the time and effort into testing information touted by institutions that have a history of being manipulative.

      I hope that one or more of you follow up on these claims. You can find several of Mullis’ live comments on Rumble (YouTube hates facts). I think we should listen to him about the very test he won the Nobel Prize for inventing, though. And I hope this has helped.

      • S:

        Yeah, the Nobel Prize does lots of damage, providing a platform for all sorts of b.s. I guess this kind of political conspiracy theory that you’re spreading would be out there anyway, but I can’t help feeling that the attention-loving Nobel prizewinners are part of the problem, as is the scientist-as-hero mentality that goes along with the Nobel prize.

      • To clarify:
        – Kary Mullis did not live to see Covid.
        – Kary Mullis denied the AIDS virus exists.
        – PCR tests for SARS-nCov-2 are routinely validated by other tests (e.g. ELISA) and genome sequencing.

  4. We are fortunate to live in a time when the ARR of a smallpox vaccine is zero. Even one with an RRR of 100%. But that is largely because a high-RRR vaccine was administered globally for decades when any measure of the ARR was vanishingly close to zero.

    If the vaccines do work and are widely used, I might expect that repeating the study (except larger) next year would find the ARR about 100 times lower and RRR to be the same. So why fuss about ARR?

    I think the paper itself is very misleading to call the 1% ARR “very much lower,” inviting comparison of the two.

    • Thanks for the smallpox example. As an average punter with no background in stats, research or vaccination it is a very useful example. I was only dimly understanding the merits of RRR and ARR but you’ve just made it a whole lot easier to conceptualise.

    • That’s incorrect, assuming the smallpox vaccine works all the time (0% infected in the treatment group) and the disease infects 100% of the control group, then ARR = CER (100%) – EER (0%) = 100%. Hence, RRR = ARR/CER = 1/1 = 100% — which is great news for everyone!
      Right now, the vaccines in question have ARRs < 2%.

      • The point is that the ARR in your example depends only on how many subjects in the control group get infected — which has no logical connection to the vaccine at all!

        It also means that the ARR can never be higher than the infection rate, and thankfully that is low for Covid (and kept low through restrictions where appropriate) because otherwise the hospitals overflow.

  5. Medicine is full of similar examples of RRR vs. ARR. Processed meat consumption increases colon cancer chance by 5%. The catch is everybody’s baseline is 4%, regardless of diet. All of a sudden cold cuts don’t seem too bad. (I made up the figures, don’t remember exactly).

    I believe all numbers need to be presented to get a full picture. The famous 94% or 95% figures that get thrown around for the two mRNA vaccines are simply based on a limited sample of 40K or so participants (roughly 20K in each arm). The vast majority didn’t get infected, regardless of vaccination status. It is a 20-fold difference in those that did get covid between the two arms that produced the often-quoted numbers.

    Here are the figures for Pfizer 7 days after the second dose:

    Vaccine: (N=18,198 not infected, 8 infected)
    Placebo: (N=18,325 not infected, 162 infected)

    I mean, when you only compare the numbers of non-infected, the difference is really trivial. It’s the 8 vs. 162 part that makes the headlines.
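Redoing both measures from the Pfizer counts quoted above, as a sanity check:

```python
# Counts from the comment above (7 days after the second dose).
vax_cases,     vax_n     = 8,   18_198 + 8     # 18,206 in the vaccine arm
placebo_cases, placebo_n = 162, 18_325 + 162   # 18,487 in the placebo arm

risk_vax     = vax_cases / vax_n
risk_placebo = placebo_cases / placebo_n

rrr = 1 - risk_vax / risk_placebo  # the headline "efficacy" number
arr = risk_placebo - risk_vax      # absolute difference over the trial window

print(f"RRR = {rrr:.1%}")  # ~95.0%
print(f"ARR = {arr:.2%}")  # ~0.83% over the trial's short follow-up
```

Both numbers come from the same two attack rates; the 95% and the sub-1% figures are not in tension, they just answer different questions.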

    What really matters is the true protection among hundreds of millions already vaccinated (myself included).

  6. I think Javier gave the answer. For absolute risks (and AR reductions) to be useful one needs a meaningful time horizon, such as 5-year survival in cancer or a one-winter risk of seasonal flu. Covid is now endemic rather than epidemic, and fluctuating at that. A 3-month risk doesn’t mean anything when you live with the virus continually, especially as the current transmission level differs from that which applied when the trials were run. At least the RR has a simpler message (whatever your risk is, it’ll be reduced by 90%…)

    • Waning and new variants ensure the RR (for infection) will drop with time since vaccination. The trials also excluded the most at-risk population and ignored the increase in illness seen during the first week after first dose while white blood cell counts were low (which has a “culling” effect).

      The right way to interpret those numbers is as a maximum possible efficacy.

      • Another issue was the rather noticeable side effect profile after vaccination. The trials failed to do exit surveys, but it seems unlikely the blinding was very effective.

        • I’ve seen you talk about immune suppression in the first week after vaccination several times, but there is no evidence of this. The EUAs for J&J, Moderna, and Pfizer vaccines show no increase in COVID cases in the first week. Specifically, the cumulative incidence curves show no evidence of increased susceptibility to COVID between 0 and 7 days, or 0 and 14 days. If you were substantially more susceptible to COVID in that period of time, the vaccinated curve would exceed the placebo curve before then dipping below it after your immune system “woke up”. I’m also not aware of any immunological reason why temporary immune suppression would be expected. What evidence do you have that these vaccines suppress your immune system for a week?

        Related, you state that leukopenia (low WBC counts) occurred during the first week. All three vaccine makers performed hematology panels that would have detected leukopenia. They each performed them before vaccination, after approx one week, and after approx. two weeks. So, what evidence do you have of leukopenia during the first week of vaccination?

        To make it clear and easy to access the phase I/II articles and how they performed hematology, here are the links:
        Pfizer: https://www.nature.com/articles/s41586-020-2639-4.pdf (safety assessment section in methods)
        Moderna: https://www.nejm.org/doi/suppl/10.1056/NEJMoa2022483/suppl_file/nejmoa2022483_protocol.pdf (page 13, clearly states they perform WBC counts)
        J&J: https://www.nejm.org/doi/suppl/10.1056/NEJMoa2034201/suppl_file/nejmoa2034201_protocol.pdf (pages 20-37 timing charts; page 114 lays out the reaction grades associated with reduction in WBC count)

        It’s also not clear what you mean by “excluded the most at-risk population”. The J&J and Pfizer EUAs are clear that the vaccine was tested in hundreds of people over the age of 75. Moderna only gives the number of individuals over 65, but it seems likely they have approximately the same number as the other vaccines. At least 20% of the participants in each trial were 65 or older. With respect to comorbidities, thousands of individuals in these trials had co-morbidities, some mild and some serious. There are huge tables in each EUA showing the estimated vaccine efficacy for each comorbidity. So who, specifically, did they leave out of these trials that you think should have been included?

        To make it clear and easy to access the EUA documents I’m talking about, here are the links:
        Pfizer: https://www.fda.gov/media/144416/download
        Moderna: https://www.fda.gov/media/144673/download
        J&J: https://www.fda.gov/media/146338/download

        • When you have zero deaths in the control group of Pfizer over a 6 month period during the worst part of the pandemic, it tells you something about the study design. It’s called “strategic selection” aka cheating.

    • Exactly! I finally came upon this discussion after searching far and wide for some mention of time period in relation to absolute risk. The measure of absolute risk seems to me to be a junk statistic unless given in the context of a meaningful time frame. Your examples of 5-year survival rate or one-winter seasonal flu risk are perfect.

  7. There’s a phenomenon whose name I’ve forgotten: you think you’ve never seen a word before, then it is brought to your attention, and suddenly you start noticing it all the time.

    Foxed…it’s not a common synonym for ‘deceived’, except in the form ‘outfoxed’, but I’ve seen it before, and Andrew I’m very sure you have too. It’s the kind of thing I see maybe once every year or two on average, usually in an old book or historical novel, so, OK, uncommon. Still, I started reading such books when I was in my early twenties, and now I’m 55. For sure I’m well into double digits in seeing that word used that way.

    So, Andrew, you’re going to start noticing it.

  8. The way to understand the difference between absolute vs relative risk reduction is this:

    1) Relative risk reduction is approximately a constant property of the vaccine itself. Whatever the risk is of getting COVID, being vaccinated reduces it by some factor.

    2) Absolute risk reduction is a historical accident of the time and place in which the trial was run. Whatever prevalence and behavior obtained during the trial set the absolute risk of infection in the control group; with a large enough and randomized group, the same risk of exposure will obtain in the vaccinated group, reduced by the relative risk reduction to some other value.

    So the question is really this: Do we care about a property of the vaccine, or of the accidental facts of the time period when the trial was run?

    Relative risk is the main and most important fact about the vaccines.
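A toy illustration of this point, with a fixed 95% RRR and made-up background attack rates: the ARR tracks whatever the trial’s setting happened to be, while the RRR doesn’t move.

```python
rrr = 0.95  # held fixed: treated as a property of the vaccine itself

# Hypothetical background attack rates for three different trial settings.
arrs = {bg: rrr * bg for bg in (0.002, 0.01, 0.30)}

for bg, arr in arrs.items():
    # ARR = risk_control - risk_vaccinated = bg - bg*(1 - rrr) = rrr * bg
    print(f"background risk {bg:.1%} -> ARR {arr:.3%} (RRR stays at 95%)")
```

Under this framing, quoting the ARR alone mostly tells you about the trial’s time and place, not about the vaccine.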

    • Hi Daniel,

      The way you framed the difference between absolute and relative risk does not compute for me. Admittedly, I’m not a statistician either. I think the context is very critical to evaluate. I am not convinced that calculation of absolute risk needs to be restricted to the trial itself. It is a big jump to assert that because ARR is a historical accident temporally, the relative risk is therefore the most important fact. Strikes me as a hasty generalization.

      Gerd Gigerenzer has made a valiant effort to educate physicians and the public at large about statistics.

      https://www.healthnewsreview.org/2010/12/leading-risk-comm-guru-gigerenzer-argues-that-absolute-risk-communication-is-a-moral-issue/

      In cancer treatment studies, I think it is an imperative to report both ARR and RRR.

      • When a disease is in an approximately “steady state” such as say tuberculosis, or heart disease, or colon cancer, the ARR from one period of time and the ARR from another would be similar, so then the ARR has meaning as it would apply to “next year” at least approximately as well as it applied to this year. That’s not at all the case for an epidemic disease spreading in exponential growth and decay phases.

        • That’s right. For example suppose there’s a very rare genetic condition that makes eating avocado life threatening. Suppose there’s a pill you can take with an enzyme which digests the dangerous component reducing the chance of the effect by 99%. However, only 1/100,000 people has the genetic condition.

          a 99% relative risk reduction would sound like a lot, but since it only affects 1/100,000 what’s happening is the risk is dropping from 1/100,000 to 1/10,000,000

          But what’s worse is that people who’ve eaten avocado before know already that they aren’t at risk… so for them the pill is useless.

          In these kinds of conditions, relative risk reduction of a rare issue can be very misleading…. almost no-one needs to have their avocado risk reduced, but among those who do the pill is a big deal.
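Plugging in the numbers from the avocado thought experiment (all hypothetical, per the comment above):

```python
prevalence = 1 / 100_000  # share of people with the dangerous condition
rrr        = 0.99         # the pill's relative risk reduction

# Population-level risk before and after the pill.
risk_before = prevalence
risk_after  = prevalence * (1 - rrr)

print(risk_after)  # about 1e-07, i.e. roughly 1 in 10,000,000
```

A 99% RRR sounds dramatic, but at the population level the risk moves from one tiny number to an even tinier one.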

          That’s not like COVID at all… In the absence of vaccines, north of 65% of everyone on earth will likely eventually get COVID and be at risk for serious illness and death over the next few years. This means everyone needs the risk reduction. Therefore, the relative risk reduction is precisely the quantity of interest. Furthermore, this is just the individual risk reduction. In the presence of widespread vaccination the fraction of the population which will actually be infected drops to much lower than the raw 65%… Furthermore, the economic consequences of delaying vaccination and stabilization of the illness are very severe as well.

          In the end for COVID vaccines, the relative risk reduction actually **understates** the value because it’s just considering the individual risk issue not the societal risk issue.

          yes, perhaps boosters and variant versions of the vaccine will be needed at some point, but right now, the relevant information is that if you get Moderna or Pfizer’s vaccine your risk of hospitalization, severe disease, or death if infected drops by perhaps 99% or so, your risk of being infected in the first place drops by 80-90% and the risk of children, elderly, and others in contact with you drops by some large amount as well. The systemic hospital system stability risk drops by a lot. The economic risk drops by a lot… There’s no excuse to not vaccinate other than a known sensitivity to a vaccine component.

          I don’t buy Anoneuoid’s suspicion that the vaccine doesn’t prevent transmission. The evidence seems to be overwhelming (from multiple sources) that it does at least for some time, just on the basis that highly vaccinated populations are not spreading the disease (such as LTC facility patients).

        • “I don’t buy Anoneuoid’s suspicion that the vaccine doesn’t prevent transmission”

          You are too generous.

        • It should be noted, that at least with Delta and months after vaccination (with the possibility of waning of efficacy), the head of the CDC just said that vaccination doesn’t prevent transmission (I’m not clear if she meant doesn’t prevent as in 100%, or doesn’t prevent at all).

        • …and I assume she meant after someone vaccinated has been infected – which evidence suggests is still less likely among people who are vaccinated. Also, from what I’ve seen there are still experts who say there’s some uncertainty as to whether vaccination “prevents” transmission among vaccinated people who become infected (as in reduces the probability somewhat, not prevents 100%).

        • Well, now it’s Aug 6 and we have Delta variant. It seems pretty clear that the vaccine doesn’t prevent transmission of Delta variant. It’s not clear if the large drop in infections per day back in Feb-April was vaccine related, or just seasonal or due to the large changes in behavior after the huge winter spike. What seems clear now is that against Delta, you can still get infected, you will have anything from un-noticeable symptoms to very occasionally hospitalization, with typically most people having a “cold or flu like illness” that last a few days. The main mechanism to reduce spread might be that vaccinated people are infectious for shorter times.

          It **is** true that IM injection doesn’t tend to elicit IgA antibodies in the airways, instead mainly IgG in the bloodstream. This makes it more likely you can be infected, but protects you from serious illness.

        • Daniel –

          > Well, now it’s Aug 6 and we have Delta variant. It seems pretty clear that the vaccine doesn’t prevent transmission of Delta variant.

          Doesn’t prevent 100%, or doesn’t even lower the chances of one (vaccinated) person transmitting an infection to someone else? I’m still seeing some experts saying there’s some uncertainty (despite the CDC’s response to the Barnstable, MA report).

          I thought this recent article gave a reasonable overview:

          https://www.nature.com/articles/d41586-021-02158-6

        • I definitely think it lowers the transmission risk. There are several mechanisms… to get transmission you have to:

          1) get infected in the first place
          2) generate a significant viral load
          3) be in contact with others during the period where you have viral load and before you have neutralizing antibodies in the airways

          the vaxxes can reduce the probability of initial infection but definitely not eliminate it, maybe not even reduce it that much

          they can reduce the overall peak viral load that you’ll reach by having your immune system fight harder sooner

          they can make you produce neutralizing antibodies sooner thereby shutting down the time window.

          all of those can reduce transmission even if they don’t eliminate the initial infection.
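
          To see how these partial effects compound, here is a toy multiplicative sketch (every probability below is invented purely for illustration; the real quantities are unknown):

```python
# Toy model: if each step in the transmission chain is required, and a vaccine
# independently reduces each one somewhat, the reductions multiply.
# Every number below is hypothetical, purely for illustration.

def transmission_prob(p_infect, p_high_load, p_contact_window):
    """P(onward transmission) as a product of three steps: getting infected,
    developing a significant viral load, and being in contact with others
    during the infectious window."""
    return p_infect * p_high_load * p_contact_window

unvaxed = transmission_prob(0.30, 0.80, 0.50)  # assumed baseline
vaxed = transmission_prob(0.20, 0.50, 0.30)    # each step modestly reduced

print(f"unvaccinated: {unvaxed:.3f}, vaccinated: {vaxed:.3f}")
print(f"overall transmission reduction: {1 - vaxed / unvaxed:.0%}")  # 75%
```

          Even though no single step is blocked outright, the combined reduction can be large.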

        • “That’s not like COVID at all… In the absence of vaccines, north of 65% of everyone on earth will likely eventually get COVID and be at risk for serious illness and death over the next few years. This means everyone needs the risk reduction.”

          WAIT! Putting aside the first premise (which I also question whether we know or not), that second line is definitely something that we do not know, and in fact it seems we know the opposite — SO FAR. In the simplest form of what we know based on data since the beginning, tracked and published in ways unseen until this particular pandemic, the chance of getting Covid-19 AND becoming severely ill and/or dying from it is extremely low up until even 69 years of age. I don’t need to repeat those stats here, or the difference between CFR and IFR, etc. etc. The charts are everywhere, and that is without even looking at how some key treatments are showing great results in reducing severity and hospitalization (another discussion) and are sure to become more and more prevalent.

          Your argument that we must all get vaccinated is based on a false conclusion — that 65% of the population will likely get Covid AND be at risk for serious illness and death, making what seems to me to be a circular argument resting on an inaccurate premise which we know not to be true – so far.

          Though I am neither a medical doctor nor a statistician, I can easily understand the risks of Covid-19, to me personally and to larger populations, overall, as the event tracking of this disease emanates from so many sources, including the CDC. Absolute Risk Reduction does seem to matter, quite a bit even, if one takes into account, which one should, the risk of severe disease and death. To say ARR does not matter is to miss the entire point, in my view. ARR absolutely matters when compared to the risk of severe disease and/or death.

        • Marie –

          > In the most simplest form of what we know based on data since the beginning, and tracked and published in ways unseen until this particular pandemic, the chance of getting Covid-19 AND becoming severely ill and/or dying from it is extremely low up and until even 69 years of age.

          Aggregating age stratified risk across ages 0-69 obscures the significant risk from, say, ages 50-70. The CDC’s latest IFR estimate is 0.6% for 50-64, and it obviously would be considerably higher for 50-69. There are a lot of people at that age, so 0.6%+ of all of them getting infected and dying is obviously a lot of death. And obviously, the risk of severe disease is much higher than that, and likewise, aggregating the risk of severe disease with the risk of death obscures the significant level of risk for severe disease.

          And then focusing only on severe disease further narrows the window in a way that might be sub-optimal. Even a relatively mild infection may well have long term post-infection sequelae that can have considerable public health consequences for years going forward, including enormous costs to our healthcare system once you consider the impact of those sequelae at the population level.

          And then there’s the economic and other disadvantages of a slower trajectory of reducing the infection rate.

          > To say ARR does not matter is to miss the entire point, in my view. ARR absolutely matters when compared to the risk of severe disease and/or death.

          Personally, I wouldn’t say that “ARR doesn’t matter” – but I would say that it’s important to evaluate COVID within a context that includes the relative risk, at the population level, in public health, economic, and other societal outcomes. A lot of individuals choosing not to vaccinate based on the rationale that it makes little difference in their individual level of absolute risk will have significant cost to our society.

        • First off, when I said “at risk” I meant something like “we don’t know which ones will be seriously ill or die,” not that 65% of people will in fact get seriously ill and die. It’s like “children in houses with pools that have no fence or gate are at risk of drowning.” They won’t all drown, but many more of them will than outside this group.

          Second even what’s considered “mild” COVID is extremely unpleasant and has long term effects in largish groups. Focusing on death rates is very misleading. Easily well over 10-20% of people who get the illness wind up with an extremely unpleasant illness whose effects last weeks. Maybe another 10% wind up hospitalized but not dead.

        • Hi Daniel,

          Thanks for such a thorough explanation. Post-vaccination durability & transmission are still open questions among COVID experts from my observation of their Twitter interchanges. It is a work in progress.

          As for the value of calculating absolute risk, I gather that it is recommended for almost all trials/studies. Peter Attia, MD, has a Twitter thread on ARR as it pertains to the J&J vaccine.

          https://twitter.com/PeterAttiaMD/status/1382700230676783119

        • Joshua,
          You write ‘The CDC’s latest IFR estimate is 0.6% for 50-64, and it obviously would be considerably higher for 50-69. There are a lot of people at that age, so 0.6%+ of all of them getting infected and dying is obviously a lot of death.’

          That doesn’t seem right. An IFR of 0.6% in this age group does not mean that 0.6% of them will get infected and die. It means that 0.6% of infected people in this age group will die. It says nothing about what % of people will get infected to begin with. So the real world risk of dying is much much lower than 0.6% for this particular age group.
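
          The distinction is easy to check numerically. The attack rate below is hypothetical, since what fraction of the age group eventually gets infected is exactly the unknown quantity:

```python
# Unconditional risk of dying = P(getting infected) * IFR, not the IFR itself.
ifr = 0.006         # CDC estimate cited above for ages 50-64
attack_rate = 0.30  # hypothetical fraction of this age group ever infected

p_die = attack_rate * ifr
print(f"unconditional risk of dying: {p_die:.2%}")  # 0.18%, not 0.60%
```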

        • Jon:

          It’s a non-serious illness that’s killed 10 million people and disrupted the lives of hundreds of millions of others.

          One way to look at this is that people react strongly to certain sorts of avoidable risks, including many risks that are much lower than that of covid.

        • “There’s no excuse to not vaccinate other than a known sensitivity to a vaccine component.” No mention is made of the lack of long term safety testing, without which no one knows what the true risks of the ‘jab’ are. After all, this is an experimental gene therapy, not a traditional vaccine.

        • It’s not a “gene therapy”; those are things that permanently alter your DNA. This is the injection of a chemical (which happens to be a particular sequence of RNA), after which your body temporarily, for a couple of days, produces spike protein localized to a certain area. Note that you ingest large quantities of RNA all the time, for example.

        • Hi Daniel,

          Re: The evidence seems to be overwhelming (from multiple sources) that it does at least for some time, just on the basis that highly vaccinated populations are not spreading the disease (such as LTC facility patients).
          ————-

          I just find that there are so many different narratives in circulation that it is a struggle sometimes in trying to ascertain who to follow.

    • Relative risk reduction is approximately a constant property of the vaccine itself.

      This is true for primarily viremic viruses (polio, measles, etc) but not those that mostly stay out of the blood (influenza, the various cold viruses, covid).

      You can see the waning for Moderna *vs the original strain* in supplementary Figure 1 here (the best data is FRNT-mNG since that uses the actual virus):
      https://www.nejm.org/doi/full/10.1056/NEJMc2103916

      After 6 months the neutralizing activity has dropped to about 10-20% of the peak, and back to the levels seen after a single vaccine dose in ~30% of “healthy adult participants”. This waning seems to have accelerated between 3 and 6 months, which is kind of odd; most people (including me) expected it to level off.

      Based on other studies, if they had done the same assay against the variant viruses (which they would be crazy to have not done) that would lead to another 2-15x reduction in neutralizing activity.

    • The ARR is far more important. It is what should dictate our behavior. The RRR is also relative. It’s not stable; it changes over time and with circumstance. For example, the Pfizer vaccine had an RRR measured in a population that wasn’t representative of any normal population, since so few people got covid and so few people died. The effectiveness of the vaccine changes over time as well. We know nAbs go down over time, and titers in general go down over time and vary relative to the person. The RRR changes depending on the variant, and since the variants that are blocked by the vaccine don’t become predominant, there is selection for escape variants, which also affects the RRR dramatically. The ARR is more important because it gives you a larger perspective on how it impacts your life. If the ARR is 1% then an RRR of 95% vs. an RRR of 40% won’t matter much. If the ARR is 70% then there’s a large difference between an RRR of 95% and one of 40%.
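
      The last point can be made concrete with a rough identity: absolute risk reduction is approximately the baseline risk times the RRR, so the same RRR implies very different absolute benefits at different baseline risks. The baselines below are invented, echoing the 1% and 70% figures above:

```python
# ARR implied by a baseline risk and an RRR (approximate identity:
# ARR = baseline_risk * RRR). Baseline risks here are hypothetical.

def arr_from(baseline_risk, rrr):
    return baseline_risk * rrr

for baseline in (0.01, 0.70):
    for rrr in (0.95, 0.40):
        print(f"baseline {baseline:.0%}, RRR {rrr:.0%} "
              f"-> ARR {arr_from(baseline, rrr):.2%}")
```

      At a 1% baseline the two RRRs differ by about half a percentage point of absolute risk; at a 70% baseline they differ by nearly 40 points.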

  9. Personally, I think that focusing on the question of whether relative or absolute risk is more informative, and by how much, kind of misses the point.

    Like IFR, both of those measures of risk have fairly limited utility, because COVID risk interacts with so many important variables at the individual level. Do you have diabetes? How old are you? Have you had extended contact in close quarters with someone who has a high viral load? Etc. All of these variables interact with both relative and absolute risk in very meaningful ways.

    More important, IMO, is to think from within a socio-ecological model of health, not an individual model. It’s important to think at the population level.

    If you get vaccinated, you help reduce the spread, you make it less likely that you’ll get infected and be a burden to healthcare heroes. You’ll contribute to fewer people dying from COVID.

    Javier says it well above:

    > it is also to prevent the whole chain of generation of cases transmitted by the primary case.

    • > If you get vaccinated, you help reduce the spread,

      In contrast to sars-2 infection, vaccination (Pfizer) does not seem to trigger mucosal immunity:

      Altogether, our results suggest that vaccinees probably do not elicit an early humoral response detectable at mucosal surfaces. They strengthen the hypothesis that some vaccines may not protect against viral acquisition and infection of the oral–nasal region, but may prevent severe disease associated with viral dissemination in the lower respiratory tract. Our results are in line with those obtained in nonhuman primates, in which vaccinated and then challenged animals display detectable viral loads in nasal swabs but not in lower airways37.

      https://www.nature.com/articles/s41591-021-01318-5

      By what mechanism do you think it will reduce spread? That was how vaccination was possibly going to prevent infection of the upper airways.

      • I’m going on the many reports that differential comparisons of vaxed vs. non-vaxed show an advantage in that not only do fewer people have severe infections, but also fewer people get infected. While one study is certainly relevant, I’m leery of the “one study” problem, and anyway, I’d rather go with large scale stats in the real world.

        That said, even if it were true that just as many people still get infected, or there’s no differential benefit in that those who do get infected are less likely to infect others, I’m still going to go with the compounding benefits at the population level of fewer people winding up in the hospital, less stress on healthcare workers, etc. The compounding benefits of that alone, make navel gazing about relative versus absolute risk at the individual level, in my opinion, pretty meaningless and rather self-absorbed. Doesn’t mean that I don’t understand it. It’s a human characteristic to be self-absorbed and focus on relatively meaningless stuff. But even though I understand the tendency, I also have an opinion about the tendency.

        You should get vaccinated.

        • > I’m going on the many reports that differential comparisons of vaxed vs. non-vaxed show an advantage in that not only do fewer people have severe infections, but also fewer people get infected. While one study is certainly relevant, I’m leery of the “one study” problem, and anyway, I’d rather go with large scale stats in the real world.

          Unfortunately you didn’t link to any of these studies, so there’s not much I can say other than that “real world” data used to be referred to as “observational” data and considered less reliable. E.g., that argument has been used to not correct vitamin deficiencies in covid patients.

          I do know of this study:
          https://www.cdc.gov/mmwr/volumes/70/wr/mm7013e3.htm

          They compare positive test rates in healthcare workers from Dec 15 to Mar 13. When unvaccinated, that was during the winter peak of 200k cases/day, but by the time the “2 weeks after second dose” period started happening, the US number had already dropped to about half that.*

          Very strange study that compares two different timeframes with very different baseline risk of being positive.

          * And total cases generally have lagged HCW cases by a week or two.

        • Sure. I’ll go with “observational data,” along with the always important caveat that with observational data (particularly cross-sectional observational data or limited longitudinal data), causality can’t be proven. There could conceivably be other reasons why the rate of infections has dropped as it has in Israel, why those who do get infected are increasingly likely to be unvaccinated, etc. But sometimes, even when you don’t have enough tightly controlled studies to make an airtight conclusion of causality, you still have to make decisions about risk, and in particular decisions about even low probability but high damage risk. So you do your best to give a range of uncertainty.

          In this situation, I’m comfortable getting vaxed and telling other people to get vaxed, for reasons I consider more important than my own individual risk profile, be it relative or absolute risk (of course there might be individuals where their individual profile takes in a bigger weight).

          Maybe in 5 years I’ll prove to have been mistaken. I accept that. As my father used to tell me, no one ever promised me a rose garden.

        • BTW, while I think it’s important to stipulate caveats about drawing causality from observational studies, I think I’m going to walk back my agreement about observational data ’cause as I think about it more, I’d be hard pressed to think of any data that isn’t “observational.”

          I think “real world data” might be subject to a similar criticism (i.e., is non-real world data from another planet?), but I think there’s a meaningful distinction to be made between data collected out there in the wild and data collected in contrived, lab or clinical conditions – and I think that “real world data” works as a descriptor.

        • > There could conceivably be other reasons why the rate of infections has dropped as it has in Israel, why those who do get infected are increasingly likely to be unvaccinated, etc.

          What conceivable reason are you supposing it prevents infection without mucosal immunity?

          Perhaps that one study is flawed but that is all the evidence we have besides the supporting animal studies that found the same.

          And the same drop in infections has happened 2-3x in the last year without any vaccine involved:
          https://www.worldometers.info/coronavirus/country/israel/

          I don’t think you’ll have to wait 5 years to find out.

        • Anoneuoid –

          > And the same drop in infections has happened 2-3x in the last year without any vaccine involved:

          Really? The “same” drop?

          Can you give a link to the evidence where the previous drops showed such a wide differential in infections comparing vaxed to non-vaxed? ‘Cause I haven’t seen that from back in November 2020 😉

        • I’ll bet before Dec 31st, 2021 the daily number of cases on worldometers for Israel rises above 5k at some point again.*

          I am a stupid crazy crackpot who doesn’t understand science so you should give me at least 2:1 odds, right?

          * Even if they just stop testing, or don’t use tests that can detect new strains, or anything like that, I still lose the bet. If worldometers stops reporting data like covid19tracking did last month then no one wins.

          But yeah, don’t bother responding unless you are willing to bet at least $100.

        • Anoneuoid –

          >I am a stupid crazy crackpot who doesn’t understand science so you should give me at least 2:1 odds, right?

          At the risk of deepening my status as “one of those commenters,” and hoping that, given the nature of what I’m going to say, Andrew won’t think that I’m breaking my pledge above, I’ll comment not in response to our topic of previous discussion but in response to what seems to be your mistaken impression that I consider you to be a crazy crackpot who doesn’t understand science.

          That isn’t my view. I apologize for giving you that impression.

        • Looks like a good thing I didn’t take that bet – although I don’t know that Delta was included in the conditions of the proposed wager.

    • Hi Joshua,

      RE: If you get vaccinated, you help reduce the spread, you make it less likely that you’ll get infected and be a burden to healthcare heroes. You’ll contribute to fewer people dying from COVID
      —-

      The durability and transmissibility questions after receiving the COVID vaccine still loom in social media, Twitter especially. After all, we are wearing masks and advised to keep socially distanced. This is all a work in progress.

        • Definitely.

          I personally think that the concerns have been overstated (IE vaccine efficacy has been “underplayed”) which probably contributes to vaccine hesitancy among people who aren’t otherwise anti-vaccine (IE compared to a situation where the CDC etc. message was “two weeks after your second shot, you might as well act like COVID doesn’t exist”).

          But no one really knows.

          I personally think possible variant issues are “overstated” because in past pandemics we didn’t have the genetic technology to track mutations (so this “looks new and weird” when in fact it could be completely normal for the late stage of a pandemic, we couldn’t track mutations in 1890, 1918, 1957, 1968…)

  10. I think one case where relative risk reduction is more significant than absolute risk reduction is where the numbers come from measurements over a short period of a recurring risk. For example, suppose over a two week period being instructed to wear a mask reduced the risk of getting Covid from 2.5% to 2%. That is a relative risk reduction of 20%, but an absolute risk reduction of only 0.5%; however, the risk will be repeated until the individuals change their other behaviors, get vaccinated, or the pandemic essentially ceases. If the risks remain constant over 1 year’s time, that becomes an absolute risk reduction of about 7.3% for people who would otherwise have about a 50% chance of getting COVID.
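
    The compounding arithmetic here checks out, assuming 26 two-week periods per year and risks that are constant and independent across periods:

```python
# Cumulative one-year risk from a constant per-period risk:
# risk_year = 1 - (1 - risk_per_period) ** n_periods
p_no_mask, p_mask, periods = 0.025, 0.020, 26

risk_no_mask = 1 - (1 - p_no_mask) ** periods  # ~48%
risk_mask = 1 - (1 - p_mask) ** periods        # ~41%

print(f"one-year ARR: {risk_no_mask - risk_mask:.1%}")  # ~7%, as in the comment
```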

  11. I review the Pfizer data and keep coming to the conclusion that during the trial period those in either group had a less than one percent chance of becoming infected. It seems to me that studies should be done to better determine how and where the disease is transmitted, and then provide better fact-based recommendations as to how one should protect oneself. China has already proved the disease can be controlled quickly without vaccines and at a much lower cost to the population. Wuhan was reopened after less than 3 months, yet the battle continues in most of the world.
    During the Pfizer trial there were no reported deaths in either group from covid. The deaths reported were all in line with deaths routinely reported throughout any given year. How does one come to the conclusion the vaccine significantly reduces death? In the Moderna trial one person died from covid.
    I saw statistics today claiming there have been over 150,000 infections in the US military yet only 21 deaths. Given their average age is considerably lower than the general population, is a vaccine even warranted?
    I do not want to be grouped with the anti vaxxers; i just want to better understand the trials.

  12. Simple: report the absolute risk to the public and convey the important aspect of relative risk. That’s all. To those trained in consuming clinical trial reports, absolute risk is common knowledge; to the layperson, their denominator is the 24×7 media reporting of the killer pandemic and the constantly updated death ticker standing at well over 500K people (in the USA) at the time I am writing this. I am relatively sure the authors of the clinical trial report did not intend to muddy the waters; however, nobody who has even a smidgeon of the pulse of the general public would deny that the absolute risk reduction is not known, understood, or considered in their decision to get a vaccine. Everyone would agree people should be educated as much as possible before making decisions that may affect their health. Reporting the absolute risk reduction in addition to relative risk is additive, and promotes more understanding, not less. This is a no brainer.

    • > Reporting the absolute risk reduction in addition to relative risk is additive, and promotes more understanding, not less

      The thing is, absolute risk reduction isn’t a number we know. Calculations involving it are essentially wild ass guesses which could vary over several orders of magnitude. Relative risk however is a number we know to within a few percent.

      The “absolute risk” doesn’t add anything to the debate because it’s a meaningless number.

      • I don’t think absolute risk can be considered more of a guess than relative risk, considering the 95% efficacy rate was arrived at without a human challenge trial. Those in a placebo group would have roughly shown the same rate; there are no two ways around that. I think you can make the argument that absolute risk should not be given all that much weight in deciding the effectiveness of mass vaccination, but not about the superiority of the way the figure was arrived at. The aspects that affect calculation of absolute risk reduction can be relatively controlled for, given the volume of cases vs. population and the clear patterns repeating for different populations based on location, age, etc. Analyzing a subset would almost certainly not stray by an order of magnitude.

    • Where did you get the 500,000 deaths figure from? I haven’t seen or heard of any accurate accounting of deaths from SARS-2; dying with an infection is a lot different from dying from an infection.

  13. I prefer to see all the data. It seems to me that ARR and RRR are both contextualized and actually I am not convinced that enough control is applied in these studies at all. The decision should not be made only based on these numbers. The conversation has to go wider. The long term effect of the vaccine is not known, that risk is absent from the decision making process. The availability of treatment that avoids death and long term COVID is also absent. The evolutionary pressure on the virus (if you agree that it is prevalent) by the vaccines and other measures are also not factored into this. So the picture that emerges as if there is only one option and all should line up for this experiment is very worrisome. There is a clear agenda advanced by Pharma and Gov, but their bias and conflict of interest is unchecked.

  14. I believe one needs to distinguish risks and benefits (harms and efficacy) when viewing absolute or relative risk, toward the individual, not the population. Will the intervention “matter” to me vs. the status quo (or other personal mitigation or health improvement effort)?
    Speaking in terms of absolute or relative risk for a population is one thing. Determining, based on a single clinical trial still in Phase 3 motion, the personal impact +/- on “me” is wholly another. In either view, one cannot state “I” have a 95% lower chance of contracting Covid once I am properly vaccinated.
    Social shaming and guilt complexes need not apply. Ironically, vaccines do not require informed consent under EUA.
    The foregoing statements of a 65% likelihood of contracting Covid are a bit of a nothing burger: who exactly are we talking about? The infirm? The sickly? The elderly? 1 or 2 of 3 of these scenarios? CFR matters.
    Ironically, no vaccine has been powered to tell us: if I get Covid, will I die, and will this vaccine prevent that from happening to people like ME? Moreover, a reduction from 4 to 2 deaths (placebo to active arm) isn’t quite the thrilling level of ARR of death (over that time frame, quite important) one would hope for, and it cannot be extrapolated to ME or the world when vaccinated.
    Not much discussion, yet, on ARR or RRR of key safety events or sub-population analyses. Looking forward to the dialogue.
    If we ascribed as much “what if… long term… potential badness” to the vaccine as we do to the virus, we’d certainly be having a different dialogue.

  15. Some general comments. I find it troubling that a clinician would seek advice on this topic from a statistics blog, even one hosted by Columbia University. Also troubling that when searching for “Absolute Risk Reduction”, this page appears in the results ahead of the paper in question. Such is the state of things.

    For any other treatment or vaccine, there would be no controversy or concern regarding the admonition in the paper that ARR should be reported along with RRR. It is standard procedure and well understood by most clinicians, at least those who have had good training and keep current with literature.

    ARR is no more of a trivial measure than is RRR a magical one. Both are important for decision-making, hence the FDA guidelines mentioned in the paper. It should be noted that these EUA medical products are not without risks.

    For your own personal risk profile for SARS-CoV-2 / COVID-19 (age, co-morbidities, prior infection / immunity), one must determine how much risk is reduced by the intervention and balance that against how much risk you have without the intervention—PLUS you must factor in the risk(s) of the intervention itself.

    Good luck doing that in any realistic and clinically unbiased way without incorporating ARR into the process. Personally, I do not want medical advice from any clinician who doesn’t know the ARR of these medical products—and I am troubled that many are not only not aware of the ARRs but think that they are unimportant when advising patients. This is a major failing.

    • Md:

      He’s not seeking advice from “a statistics blog,” he’s seeking advice from a statistician! It’s a statistics question, so it seems very reasonable for this person to ask a professional statistician for thoughts on the topic. When I have a medical question, I’d ask a doctor.

      • Based on the responses so far, I don’t think he got what he wanted. All I see is a bunch of random posters brainstorming and speculating. No definitive “advice”.

  16. Can anyone point me to what the study’s participants were instructed to do after receiving the vaccine/placebo when they went on their respective ways regarding wearing masks, socially distancing, working from home, etc.? Also, did the study track if they were living with other people? Seems like these would all be relevant factors since relatively speaking the difference between the infection rates between the groups was not significant.

  17. Why do we say RRR is more stable than ARR? Is this just something observed?

    A change in one implies a change in the other. Just because one is normalized, it changes less, but what does this mean?
    Aren’t these calculated based on an observation of a disease (and treatment) over a small period of time?

    That is, aren’t the real values dependent on time, where the underlying thing to measure is a stochastic differential equation? If I look at it this way, then stability makes more sense to me.

    Thanks
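
    One way to see the stability claim is to imagine the same vaccine effect measured in a low-incidence and a high-incidence setting. The trial counts below are invented for illustration:

```python
# Same underlying vaccine effect, two very different background attack rates.

def rrr_and_arr(cases_vax, n_vax, cases_ctrl, n_ctrl):
    risk_vax, risk_ctrl = cases_vax / n_vax, cases_ctrl / n_ctrl
    return 1 - risk_vax / risk_ctrl, risk_ctrl - risk_vax

low = rrr_and_arr(cases_vax=8, n_vax=20_000, cases_ctrl=160, n_ctrl=20_000)
high = rrr_and_arr(cases_vax=100, n_vax=20_000, cases_ctrl=2_000, n_ctrl=20_000)

print(f"low incidence:  RRR {low[0]:.0%}, ARR {low[1]:.2%}")   # RRR 95%, ARR 0.76%
print(f"high incidence: RRR {high[0]:.0%}, ARR {high[1]:.2%}") # RRR 95%, ARR 9.50%
```

    The RRR is the same in both settings because it is scaled by the control-group risk; the ARR swings with however much virus happened to be circulating during the observation window.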

  18. Vaccine efficacy (VE) is determined by the relative risk reduction (RRR). It’s a ratio comparing the risk of infection in people who got vaccinated versus people who didn’t get vaccinated (the control group). The basic function of a vaccine is to keep people from catching a disease, so if a vaccine trial is successful, then this number will end up being a large number—regardless of those people’s chances of getting COVID.

    Another measurement is a calculation of the absolute risk reduction (ARR). This looks at the risk in a different way. The ARR is the difference in risk for someone in the treatment group versus someone in the control group.

    The RRR will be refined as more data are collected about the vaccine, but it should remain fairly stable over time. The ARR, however, will change quite a bit over time as more is learned. The data collected in a vaccine trial are from a fairly short period of time, and if you look at the ARR for a short period of time, the value of that metric is limited.

    In order to get a result with a higher ARR, a trial would need to continue until a large number of the placebo group has contracted the disease. This would be unethical and counterproductive for public health. The analysis would be delayed until too many people get infected. So, looking for an impressive ARR would take too long and essentially prevent vaccines from ever being approved in time to be useful.

    Even if you were to calculate approximate numbers based on the ARR of, say, the Pfizer vaccine study, which was .7%, you would come up with an NNV of over 100. (NNV is the number needed to vaccinate: how many people need to be vaccinated to prevent one case of the disease.) The NNV of the other COVID vaccines is slightly lower. If 190,000,000 people in the US have been vaccinated, then over 1.3 million people have benefited from the vaccines. With roughly 1% of those infected dying (the infection fatality rate, or IFR, is somewhere between .5% and 2.5%), that would mean on the order of 13,000 lives have already been saved in the US. With more people getting vaccinated, this total just keeps going up.

    For example: Hypothetically, say you have 1,000 people who don’t get a vaccine, and you find that 100 of them catch the disease. So the baseline risk of getting it is 10%. And suppose that 1,000 other people get the vaccine, and only 10 of these get sick. Their risk is 1%. The absolute risk reduction (ARR) is then just 9% (10% minus 1%), because the baseline risk was already pretty low. But the relative risk reduction (RRR) is 90%: that reduction of 9% divided by the baseline risk of 10%.
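    The arithmetic in this hypothetical can be checked in a few lines of Python (using only the made-up numbers above):

```python
def arr(risk_control, risk_treated):
    """Absolute risk reduction: the difference in risks."""
    return risk_control - risk_treated

def rrr(risk_control, risk_treated):
    """Relative risk reduction: the ARR as a fraction of the baseline risk."""
    return (risk_control - risk_treated) / risk_control

risk_control = 100 / 1000  # 10% baseline risk (100 cases among 1,000 unvaccinated)
risk_treated = 10 / 1000   # 1% risk (10 cases among 1,000 vaccinated)

print(f"ARR = {arr(risk_control, risk_treated):.0%}")  # 9%
print(f"RRR = {rrr(risk_control, risk_treated):.0%}")  # 90%
```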

    The key here, though, is that absolute risk reduction is a number that can vary. It changes according to how at-risk the groups of people were in the first place. It also changes depending on the timeframe of the trial. This pandemic has widely varying risks across populations, and those change over time. (For example, viral variants change how infectious COVID can be, and young people’s risk of severe illness and death has changed as social policies and infection rates have fluctuated. It’s complex.)

    There is some confusion about RRR vs. ARR, and the conflation of these two ideas is at the very heart of vaccine hesitance and resistance in some people. Public health experts have not been clear about the different types of risks and benefits for different vaccines and different people, and so doubt and dodgy personal interpretations have flourished.

    The absolute risk analysis helps to figure out how many lives will be saved. The inverse of the absolute risk reduction (1/ARR) is the number needed to vaccinate (NNV): how many people you need to vaccinate to prevent one case of Covid-19 (or one severe case, or one death, depending on the study’s endpoints).

    Clinical trials are relatively short term. Using an ARR for short-term trial data can make a vaccine seem less promising compared to the effect a vaccine would have on a person’s risk over a longer period of time. So, the ARR has limited value in this case.

    Researchers looked at vaccine efficacy once a specific number of COVID-19 cases had been confirmed in vaccine trials. With this approach, the ARR will always appear low, because it is tied to the number of cases confirmed in the study. From a public health perspective, we definitely do not want to wait for more control group infections and a higher ARR before sharing the study findings.

    Vaccine effectiveness (RRR) explains how well a vaccine works for the public in real-world conditions. Research about vaccine effectiveness is ongoing. More data will become available as more people around the world are vaccinated.
    If a vaccine prevents a greater proportion of infections in some settings and a lesser proportion in others, we do want to know that, so the absolute risk difference can be a valid number to consider. But as a measure of vaccine efficacy, the ARR isn’t so relevant. Absolute risk does matter in some settings. For example, we wouldn’t be so interested in a drug that prevents 50% of cases of a disease that affects only 2 people in the world. But coronavirus is not a rare disease.

    The different vaccine studies yield very different results. It takes approximately 75 people vaccinated with the Moderna two-dose regimen to prevent a single case, over 100 for the Pfizer two-dose vaccine, and 85 for the one-dose J&J shot. But this is not very useful information about the power of the vaccine, because the NNV also changes with the baseline risk of the population, which is affected by who they are and how prevalent infections are around them. (J&J conducted its trials, in part, in South Africa, where a more infectious variant of the virus was also spreading; that may have raised the risk for people there, and therefore lowered the NNV.)
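    That dependence on baseline risk is easy to make explicit. For a fixed RRR, the ARR is (baseline risk) × RRR, so NNV = 1/(baseline risk × RRR). The attack rates below are illustrative, not the actual trial figures:

```python
def nnv(baseline_risk, rrr):
    """Number needed to vaccinate: 1/ARR, where ARR = baseline_risk * rrr."""
    return 1 / (baseline_risk * rrr)

# Same 90% RRR, different baseline attack rates over the trial window:
for p0 in (0.005, 0.01, 0.05):
    print(f"baseline risk {p0:.1%} -> NNV ≈ {nnv(p0, 0.90):.0f}")
```

    The same vaccine looks ten times "better" by NNV in the highest-risk population, even though the RRR never changed.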

    COVID affects different groups of people in different ways. The elderly are at more risk of getting sick and dying than young people. The ARR and NNT are influenced by the baseline risk for infection and the amount of time at risk, while the RRR is not affected by these. The vaccine trials were relatively short (a few months of follow-up). An ARR calculated from the short timeframe of the trial data will understate the effect the vaccine would have on a person’s absolute risk over a longer duration of time. The NNT computed on the basis of the vaccine trial’s data is not an accurate estimate of the NNT in the real world, because people will continue to be at risk for months or longer, much longer than the trial participants were followed.
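    The time dependence works the same way. A sketch, assuming a constant (and entirely hypothetical) weekly attack rate and a fixed 90% RRR, shows the ARR growing, and the NNT shrinking, as the at-risk period lengthens:

```python
# All numbers hypothetical: 0.2% of the unvaccinated infected per week,
# and the vaccine cuts the weekly rate by 90% (RRR = 0.90).
weekly_rate = 0.002
rrr = 0.90

for weeks in (8, 26, 52):
    risk_unvax = 1 - (1 - weekly_rate) ** weeks
    risk_vax = 1 - (1 - weekly_rate * (1 - rrr)) ** weeks
    arr = risk_unvax - risk_vax
    print(f"{weeks:2d} weeks: ARR = {arr:.2%}, NNT ≈ {1 / arr:.0f}")
```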

    Even more significant is that vaccine trials typically use an “event-driven” analysis approach, meaning that efficacy analyses are performed after a prespecified number of observed events. The absolute risk reduction is therefore bounded at a low number.

    Consider the following hypothetical example of a vaccine trial that enrolls 40,000 participants (half receive the vaccine, half get the placebo). The trial is designed to perform a statistical analysis after 200 confirmed cases of disease. If this vaccine turns out to be perfect (100% effective), then all 200 cases would occur in the placebo control group. There will be 200 cases out of 20,000 patients in the placebo group (1%) vs. no cases out of 20,000 patients in the vaccinated group. The ARR in this trial is only 1%. This would be converted to an NNT of 100, meaning that 100 people must receive the vaccine to prevent one case of disease.

    Now consider a similar trial of 40,000 participants, designed to perform its analysis after 2,000 confirmed cases of disease rather than 200. Again, if the vaccine is perfectly effective, all cases will be observed in the placebo group. But this time, with 2,000 cases out of 20,000 participants, the absolute risk will be 10% for the placebo group. The ARR is now 10%, translating to an NNT of 10 (i.e., 10 people must receive the vaccine to prevent one case of disease).
    The vaccine effectiveness is the same — 100% effective in both trials — yet the ARR and NNT are quite different. This is because the second trial simply waited until a much larger number of cases were observed. With event-driven trial design, the ARR is “bounded”; it literally cannot be higher than the number of events at which the analyses are scheduled, divided by the number of patients in the placebo group.
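    The two hypothetical trials above take only a few lines to verify; the ceiling on the ARR is simply the prespecified event count divided by the placebo arm size:

```python
def event_driven_arr(total_events, placebo_arm_size):
    """ARR when a 100%-effective vaccine puts every event in the placebo arm."""
    placebo_risk = total_events / placebo_arm_size
    vaccine_risk = 0.0  # no cases among the vaccinated
    return placebo_risk - vaccine_risk

# 40,000 participants, 20,000 per arm; analysis after 200 vs. 2,000 events:
for events in (200, 2_000):
    a = event_driven_arr(events, 20_000)
    print(f"{events} events: ARR = {a:.0%}, NNT = {1 / a:.0f}, RRR = 100%")
```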

    This is a necessary feature of vaccine trial design for vaccines to be useful in the real world. If vaccine trials were required to demonstrate a large ARR, we wouldn’t analyze the results until a large percentage of the enrolled patients had contracted the disease. For a 100% effective vaccine to demonstrate a 20% ARR, the trial would have to wait until 20% of the placebo group had contracted the disease. This would be unethical and counterproductive for public health, because the incidence of disease in the placebo group of a vaccine trial will approximate the incidence in the population at large. The analysis would be delayed until a large share of the general population had already contracted the disease. Looking for an impressive ARR would essentially prevent vaccines from ever being approved in time to be useful.

    We do not interpret an ARR or NNV (number needed to vaccinate) from vaccine trials without also considering these key points about the trials: (1) the trials occurred over a few months, whereas the disease they are intended to prevent will probably (in the absence of vaccines) remain with us for longer than that; and (2) the trials were designed with event-driven analyses, which place a relatively low ceiling on the maximum ARR they can demonstrate; this feature of vaccine trials is necessary to produce results in time for the vaccine to be useful in reducing spread of the disease.

    • Hal –

      > With an average of over 1% of those infected who die (infection fatality rate or IFR is somewhere between .5% and 2.5%), then that would mean more than 30,000 lives have already been saved in the US. With more people getting vaccinated, this total just keeps going up.

      Maybe I got this wrong, but it looks like you were figuring that number based only on vaccines preventing infection (and early on), and on that basis preventing death (based on the IFR). It seems to me that the biggest benefit of the vaccines is preventing severe illness and death among those who have been infected, and I’m not sure how well that matches up with an IFR-based extrapolation from how many didn’t get infected (especially since it seems that the effect on preventing infection has waned, but the effect on preventing serious disease and death likely much less so).

      Plus, FWIW, your IFR seems pretty high. I think the most recent data (say, per the CDC) comes in more like around .6%. Of course, a generalized IFR for COVID isn’t particularly useful given the stratification in outcomes by age.

      • But shouldn’t we also worry about the long-term side effects? I mean, if we save 30,000 lives but in 10 years 100,000 die from blood clots, heart attacks, strokes, and cancers, then wouldn’t that be useful information to have before we make a decision to get vaccinated?

        • Landohowl:

          Yes, I agree that if a vaccine saves 30,000 lives now but leads to 100,000 deaths in the next ten years, that would be useful information to have. Unless you want to wait 10 years, you can’t literally have that information before making the decision to get vaccinated, but, yes, you’d want to make some estimate of future harms, for sure. I’m pretty sure that pharmaceutical companies and policymakers have looked into this and don’t expect the covid vaccines to kill lots of people from blood clots, heart attacks, strokes, and cancers. All things are possible, but it looks to me like you’re starting from a general anti-vaccine position and then just spewing out hypothetical objections. Nothing wrong with that: spewing out hypothetical objections can be a useful part of the brainstorming process; indeed, I hope that someone at the table in the policy discussion was spewing out hypothetical objections, which they could then look into, etc.

  19. Thanks for your response. Yes, the biggest benefit of the vaccines is preventing severe illness and death among those who have been infected. I’m not sure about the IFR since it varies from location to location. I picked a hypothetical 1% (somewhere between .5% and 2.5%).

  20. who comes up with this statistical bullshit?

    The pfizer study submitted to the FDA is a great example of the dumbest way to get the results you want. Out of 20,000 in each group, they found that 0.04% tested positive among those vaccinated, and 0.74% tested positive in the placebo group… so they determined an efficacy (i.e., RRR) of 95%. How did they get that? (0.74 − 0.04)/0.74, right?

    So… let’s keep it simple: 100 in each group, placebo and vac. And 1 person got infected in the vac group and 50 people in the placebo.
    So by your logic we would subtract to get a difference of 49% then divide that over the 50% in the placebo to get a whopping 98% efficacy (RRR). Well that’s ridiculous!

    If I substitute 10, 20, 30, and 90 infected for the placebo (instead of 50) in 4 other examples, I would get 90%, 95%, 96.7%, and 98.9% respective efficacy, even though the difference is quite obviously NOTICEABLE.

    Thanks for showing us how pharmaceutical companies manipulate numbers to sell more products to naive people whose self-righteousness entitles them to tell others what to do with their medical choices.

    • > So by your logic we would subtract to get a difference of 49% then divide that over the 50% in the placebo to get a whopping 98% efficacy (RRR). Well that’s ridiculous!

      That’s not ridiculous, that’s maths. If 50 people would have been infected and 49 of them aren’t, the vaccine reduces cases by 98%.

      The difference between 90% and 98% is also noticeable.
      If you’re using a cohort where 50 get infected if untreated, that’s the difference between 45 cases prevented and 49 prevented with treatment.

      There’s no manipulation here.
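      For what it’s worth, the substitution exercise from the comment above can be run directly; with 1 vaccinated case per 100 against 10, 20, 30, 50, or 90 placebo cases per 100, the RRRs come out as follows (a quick sketch):

```python
def rrr(placebo_cases, vaccine_cases, n_per_arm=100):
    """Relative risk reduction from case counts in two equal-sized arms."""
    risk_placebo = placebo_cases / n_per_arm
    risk_vaccine = vaccine_cases / n_per_arm
    return (risk_placebo - risk_vaccine) / risk_placebo

for placebo in (10, 20, 30, 50, 90):
    print(f"{placebo} placebo cases vs. 1 vaccinated: RRR = {rrr(placebo, 1):.1%}")
```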

  21. Relative Risk Reduction vs. Absolute Risk Reduction
    Pfizer reported that its vaccine shows a 95% efficacy. That sounds like it protects you 95% of the time, right? But that’s not actually what that number means. That 95% refers to the Relative Risk Reduction, but it doesn’t tell you how much your overall risk is reduced by vaccination. For that, we need Absolute Risk Reduction. In the Pfizer trial, 8 out of 18,198 people who were given the vaccine developed COVID-19. In the unvaccinated placebo group, 162 people got it, which means that even without the vaccine, the risk of contracting COVID-19 was extremely low at 0.88%, which the vaccine then reduced to 0.04%. So the net benefit, or the Absolute Risk Reduction, that you’re being offered with a Pfizer vaccine is 0.84%. That 95% number refers to the relative difference between 0.88% (unvaccinated) and 0.04% (vaccinated): (0.88 − 0.04)/0.88. That’s what they call 95% Relative Risk Reduction. Relative Risk Reduction is well known to be a misleading number, which is why the FDA recommends using Absolute Risk Reduction instead, which raises the question of how many people would have chosen to take the COVID-19 vaccines had they understood that they offered less than 1% benefit?
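    For reference, here is the same arithmetic with the trial counts quoted in this comment (the placebo denominator, 18,325, is not stated above; it is filled in from the published Pfizer trial report):

```python
vax_cases, vax_n = 8, 18_198
placebo_cases, placebo_n = 162, 18_325  # placebo N from the published report

risk_vax = vax_cases / vax_n              # ≈ 0.04%
risk_placebo = placebo_cases / placebo_n  # ≈ 0.88%

arr = risk_placebo - risk_vax  # absolute risk reduction, ≈ 0.84 points
rrr = arr / risk_placebo       # relative risk reduction, ≈ 95%

print(f"risk (placebo): {risk_placebo:.2%}")
print(f"risk (vaccine): {risk_vax:.2%}")
print(f"ARR = {arr:.2%}, RRR = {rrr:.0%}")
```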

    • Alan:

      Your comment seems to be a reaction to the title of the above post, but you don’t seem to have read or digested the body of the post, which explains the value of looking at relative risk reduction.

      One can also make a crude calculation as follows: a risk of 1% doesn’t sound like much to you. But 1% of the U.S. population is 3.3 million people. 3.3 million cases is a lot.

      And your implicit conclusion that people would be better off not taking the vaccine is ridiculous.
