Some cool interactive covid infographics from the British Medical Journal

I agree with Aleks that these are excellent. The above image is just a screenshot; the links below are all live and interactive:

Covid-19 test calculator: How to interpret test results

Current evidence for covid-19 prophylaxis: Visual summary of living systematic review and network meta-analysis

Visualising expert estimates of covid-19 transmission: What might be the best ways of protecting ourselves from covid-19?

Covid-19 lateral flow tests: Calculator for interpreting test results

Great stuff, and a model for risk communication going forward.

15 thoughts on “Some cool interactive covid infographics from the British Medical Journal”

  1. I like it as well. But one question, using the first interactive graphic (COVID test results) as an example: are the default settings based on current estimates of sensitivity and specificity? It is nice to explore how different values change the results, but it would be far more useful if the current state of knowledge were the default probability settings. Better yet, there could be ranges reflecting the current state of knowledge (e.g., confidence intervals for these probabilities). I didn’t see anything to indicate whether the default values were chosen to be realistic or not.

    • From the challenge study:

      Thirty-six healthy volunteers aged 18–29 years were enrolled according to protocol-defined inclusion and exclusion criteria (see the Clinical Protocol in Supplementary Information).
      […]
      Eighteen participants, 53% according to the per-protocol analysis (95% confidence interval (CI) (35, 70)), subsequently developed PCR-confirmed infection.
      […]
      Of the participants not meeting infection criteria and deemed uninfected, low-level non-consecutive viral detections were observed only by qPCR in the nose of three participants and in the throat of six participants (Extended Data Fig. 1a,b).

      https://www.nature.com/articles/s41591-022-01780-9

      So 36 people were quarantined and had the virus sprayed up their noses. About half (18/36) became infected. Of the 18 uninfected, 30-50% tested positive on PCR within 2 weeks (3 nose-positive and 6 throat-positive out of 18; it isn’t clear to me whether the nose- and throat-positive participants overlap, hence the range).

      Of the infected, we see in Figure 2 that it took ~2 days to test positive on PCR and ~3 days to detect culturable virus (a proxy for active infection). PCR positivity then lasted ~15 days versus ~10 days for culture positivity.

      So PCR positivity coincided with active infection on only ~7 of 13 days (~50% of the time), and PCR was negative on ~2 of 10 culture-positive days (20%) in people who did become infected.

      From this I get a pre-test probability of ~50% (the attack rate), sensitivity of ~80% (PCR missed ~20% of culture-positive days), and specificity of ~50% (as many as half of the uninfected had at least one positive detection). That is obviously specific to this population, exposure level, and testing protocol.

      Outside the lab, the pre-test probability will typically be much lower (1-10%?). I’d expect sensitivity to drop from less careful swabbing and sample handling, and specificity to drop from contamination and possible cross-reaction with other viruses or bacteria that have similar sequences.

      Essentially, pre-test probability looks like the dominant factor when interpreting a given PCR result. A single test is essentially worthless, but two consecutive positive tests 12 hours apart are much more informative (a rough sketch of the arithmetic below). There is still the problem of recent infections, though.
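      To make the arithmetic concrete, here is a minimal Bayes sketch in Python using the rough numbers above. The pre-test, sensitivity, and specificity values are my assumed readings of the study, not validated estimates, and treating repeat tests as independent is itself questionable, since errors likely correlate:

      ```python
      # Minimal sketch: posterior probability of infection after one or two
      # positive PCR results, via Bayes' rule. All parameter values are
      # assumptions taken from the rough estimates above.

      def posterior_positive(prior: float, sensitivity: float, specificity: float) -> float:
          """P(infected | positive test) by Bayes' rule."""
          p_pos_and_inf = sensitivity * prior
          p_pos_and_not = (1.0 - specificity) * (1.0 - prior)
          return p_pos_and_inf / (p_pos_and_inf + p_pos_and_not)

      sens, spec = 0.80, 0.50  # rough values read off the challenge study

      for prior in (0.50, 0.05):  # challenge-study exposure vs. community pre-test
          once = posterior_positive(prior, sens, spec)
          # naive second update, assuming the two tests err independently
          twice = posterior_positive(once, sens, spec)
          print(f"prior={prior:.2f}: one positive -> {once:.2f}, two -> {twice:.2f}")

      # prior=0.50: one positive -> 0.62, two -> 0.72
      # prior=0.05: one positive -> 0.08, two -> 0.12
      ```

      With ~50% specificity, a second positive barely moves the needle; with the ~98-99% specificity reported in the validation studies cited below, the same arithmetic would make even a single positive far more informative. Either way, the pre-test probability dominates.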

      • The challenge study wasn’t designed to evaluate the sensitivity and specificity of qPCR. Detection at or above the LLoQ (lower limit of quantification) may not mean anything clinically.

        There are plenty of studies where performance was assessed. Typical reported sensitivity and specificity for PCR are ~98-99%. See:

        https://assets.thermofisher.com/TFS-Assets/GSD/Reference-Materials/taqpath-ceivd-rt-pcr-kit-technical-bulletin.pdf

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8863925/

        If you want to spend time reading up on the clinical specificity/sensitivity performance of a variety of COVID PCR products, read the IFU (instructions for use) links for the products at the bottom of the FDA’s page:

        https://www.fda.gov/medical-devices/covid-19-emergency-use-authorizations-medical-devices/in-vitro-diagnostics-euas-molecular-diagnostic-tests-sars-cov-2

        • These are not validations because:

          Furthermore, this study assumed that the reference method is 100% sensitive and specific

          Here the reference method is another PCR kit; that is circular logic.

          The valid gold standard is the presence of culturable virus. In the challenge study this was also backed up by symptoms, serum antibodies, and longitudinal testing. With all of that, we can be confident someone actually did (or did not) have actively replicating virus in their respiratory tract.

          This is another thing I’ve been saying since the beginning. PCR is not a gold standard, and a validation needs to cover the entire sample-collection and processing pipeline. Subjecting the same samples to different PCR kits doesn’t validate anything.

        • Why is the presence of culturable virus the reference standard? I have cited FDA-accepted definitions: an agreement test against an orthogonally developed reference standard. There aren’t better tests out there.

          If you bothered to read any of the submissions at the FDA link, you would see that specimen collection and processing are part of the validation activities.

        • Because, as the challenge study shows, about 50% of people testing positive on PCR were not sick or capable of infecting others. This is a long-known issue.

          This appears to be from the researchers who developed the tool in the OP:

          These estimates of sensitivity and specificity are based on evaluating the LFD tests against a gold standard of PCR.[6] However, the PCR test has limitations as the gold standard test for diagnosing SARS-CoV-2 infection. Firstly, it is not 100% sensitive, meaning that some people will be missed when relying on PCR testing for diagnosis.[18] Secondly, a PCR test can detect very low levels of virus present in a sample, meaning a positive PCR test does not necessarily equate with people being infectious. The median time for which an individual will test positive with a PCR test is in the range 22-33 days,[22] longer than the typical infectious period. In a clinical setting this high sensitivity to a low concentration of the virus in the sample may be helpful, by facilitating a diagnosis even if a poor sample is taken or if viral levels are low in the person being tested. But if PCR tests are used to test large numbers of people without symptoms to identify those who are infectious and to prevent further spread, the PCR test will register positive for people who are highly infectious, but it will also register positive for people who recently had the infection but are no longer infectious.

          https://www.bmj.com/content/373/bmj.n1411

          Then there were the other ~50% of people in the challenge trial who never tested positive on culture, never showed symptoms, and never seroconverted. About half of them still tested positive on PCR at some point over the two weeks post-exposure.

        • The sensitivity of PCR is a feature, not a bug. It is considered the reference standard for detecting COVID. It is also the most specific method, meaning it will not detect the target viral nucleic acid if it is not present. That an individual does not feel sick, or is not capable of infecting others, does not mean they have no latent disease. This has been discussed many times on this blog.

          “Molecular tests are the most accurate tests for detecting the virus that causes COVID-19.”
          https://www.health.state.mn.us/diseases/coronavirus/testsites/types.html

  2. Is that the beginning? I don’t know what I’m supposed to read first, so I gave up after a few seconds of looking at two of them.

    “This graphic gives an overview of the evidence for covid-19 prophylaxis”. What does that mean in English?

      • That’s my question/comment/complaint. It is instructive to explore how the numbers change the results, but without any guidance about what numbers are realistic, this seems like an invitation to “invent your own reality.” In my opinion, we have a little too much of that these days. I realize that there is uncertainty about all of the inputs, but we do have some information about potential values. Absent that, isn’t this a case of completely uninformative priors?

        • Agreed, it would be nice if there were some actual priors here. For example, there could be a histogram for each input, and you could click one of the bars and it would insert a relevant value for you. Then you could ask things like “what if this one is at the low end of the credible range while the other is near the high end?” Without that, you have to come up with your own priors, and the typical reader will have far too broad an uncertainty compared with what we know about these tests (sketched below).
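          Something like the following, where the Beta parameters are made-up illustrations rather than published estimates: put priors on sensitivity and specificity and propagate them through Bayes’ rule, so the calculator could report a credible range for P(infected | positive) instead of a point value.

          ```python
          # Sketch: propagate (hypothetical) priors on sensitivity and
          # specificity through Bayes' rule to get a credible interval
          # for the post-test probability, rather than a single number.
          import numpy as np

          rng = np.random.default_rng(0)
          n = 100_000

          sens = rng.beta(80, 20, size=n)  # hypothetical prior centered near 0.80
          spec = rng.beta(95, 5, size=n)   # hypothetical prior centered near 0.95
          prior = 0.05                     # assumed community pre-test probability

          post = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
          lo, mid, hi = np.percentile(post, [2.5, 50, 97.5])
          print(f"P(infected | positive): median {mid:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
          ```

          Even with a tight prior on sensitivity, modest uncertainty in specificity fans out into a wide range of post-test probabilities at low prevalence, which is exactly the information a point-value slider hides.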

        • Indeed. Presumably the false-positive and false-negative rates can change over time, for example due to variants that are less detectable by rapid antigen tests, or due to people carrying lower or higher typical viral loads. But to make exploring that worthwhile, it really needs informed priors on the reasonable ranges for the parameters.

  3. The problem with the calculation is that it assumes sensitivity and specificity are independent of the pre-test probability.

    There are clearly going to be many more cases of contamination, “abortive infections”, and cross-reactions when covid-like (influenza-like illness, ILI) symptoms are more common. We can also expect sampling and testing to become more error-prone in general when large numbers of tests are being processed.

    More challenge studies are needed to see how strong this relationship is, but I’d guess sensitivity can vary from 50-80% and specificity from 50-99% depending on how much illness is going around (a toy illustration of why the coupling matters below).
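    As a toy illustration (every number here is an assumption in the spirit of the guesses above, including the made-up coupling function), compare the usual fixed-specificity calculation with one where specificity degrades as prevalence rises:

    ```python
    # Toy comparison: P(infected | positive) with specificity held fixed
    # versus specificity that (by assumption) degrades as prevalence rises.

    def posterior(prior: float, sens: float, spec: float) -> float:
        return sens * prior / (sens * prior + (1 - spec) * (1 - prior))

    for prevalence in (0.01, 0.05, 0.20):
        fixed = posterior(prevalence, 0.80, 0.99)  # textbook: spec pinned at 99%
        # made-up coupling: specificity slides from 99% toward 50% as illness spreads
        spec_coupled = max(0.50, 0.99 - 1.5 * prevalence)
        coupled = posterior(prevalence, 0.80, spec_coupled)
        print(f"prevalence={prevalence:.2f}: fixed spec -> {fixed:.2f}, "
              f"coupled spec ({spec_coupled:.2f}) -> {coupled:.2f}")

    # prevalence=0.01: fixed spec -> 0.45, coupled spec (0.97) -> 0.24
    # prevalence=0.05: fixed spec -> 0.81, coupled spec (0.92) -> 0.33
    # prevalence=0.20: fixed spec -> 0.95, coupled spec (0.69) -> 0.39
    ```

    If the coupling is real, the independent-parameter calculator overstates how informative a positive result is precisely when testing volume and illness are highest.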
