“So the real scandal is: Why did anyone ever listen to this guy?”

John Fund writes:

[Imperial College epidemiologist Neil] Ferguson was behind the disputed research that sparked the mass culling of eleven million sheep and cattle during the 2001 outbreak of foot-and-mouth disease. He also predicted that up to 150,000 people could die. There were fewer than 200 deaths. . . .

In 2002, Ferguson predicted that up to 50,000 people would likely die from exposure to BSE (mad cow disease) in beef. In the U.K., there were only 177 deaths from BSE.

In 2005, Ferguson predicted that up to 150 million people could be killed from bird flu. In the end, only 282 people died worldwide from the disease between 2003 and 2009.

In 2009, a government estimate, based on Ferguson’s advice, said a “reasonable worst-case scenario” was that the swine flu would lead to 65,000 British deaths. In the end, swine flu killed 457 people in the U.K.

Last March, Ferguson admitted that his Imperial College model of the COVID-19 disease was based on undocumented, 13-year-old computer code that was intended to be used for a feared influenza pandemic, rather than a coronavirus. Ferguson declined to release his original code so other scientists could check his results. He only released a heavily revised set of code last week, after a six-week delay.

So the real scandal is: Why did anyone ever listen to this guy?

I don’t know. It’s a good question. When Ferguson was in the news a few months ago, why wasn’t there more discussion of his atrocious track record? Or was his track record not so bad? A google search turned up this op-ed by Bob Ward referring to Ferguson’s conclusions as “evidence that Britain’s political-media complex finds too difficult to accept.” Regarding the foot-and-mouth-disease thing, Ward writes, “Ferguson received an OBE in recognition for his important role in the crisis, or that he was afterwards elected a fellow of the prestigious Academy of Medical Sciences.” Those sorts of awards don’t cut much ice with me—they remind me too much of the U.S. National Academy of Sciences—but maybe there’s more of the story I haven’t heard.

I guess I’d have to see the exact quotes that are being referred to in the paragraphs excerpted above. For example, what exactly did Ferguson say when he “predicted that up to 150,000 people could die” of foot-and-mouth disease? Did he say, “I expect it will be under 200 deaths if we cull the herds, but otherwise it could be up to 2000 or more, and worst case it could even be as high as 150,000”? Or did he flat out say, “150,000, baby! Buy your gravestone now while supplies last”? I wanna see the quotes.

But, if Ferguson really did have a series of previous errors, then, yeah, Why did anyone ever listen to this guy?

In the above-linked article, Fund seems to be asking the question rhetorically.

But it’s a good question, so let’s try to answer it. Here are a few possibilities:

1. Ferguson didn’t really make all those errors; if you look at his actual statements, he was sane and reasonable.

Could be. I can’t evaluate this one based on the information available to me right now, so let’s move on.

[Indeed, there seems to be some truth to this explanation; see P.S. below.]

2. Nobody realized Ferguson had made all those errors. That’s true of me—I’d never heard of the guy before all this coronavirus news.

We may be coming to a real explanation here. If a researcher has success, you can find evidence of it—you’ll see lots of citations, a prestigious position, etc. But if a researcher makes mistakes, it’s more of a secret. Google the name and you’ll find some criticism, but it’s hard to know what to make of it. Online criticism doesn’t seem like hard evidence. Even published papers criticizing published work typically don’t have the impact of the original publications.

3. Ferguson played a role in the system. He told people what they wanted to hear—or, at least, what some people wanted to hear. Maybe he played the role of professional doomsayer.

There must be something to this. You might say: Sure, but if they wanted a doomsayer, why not find someone who hadn’t made all those bad predictions? But that misses the point. If someone’s job is to play a role, to speak from the script no matter what the data say, then doing bad work is a kind of positive qualification, in that it demonstrates one’s willingness to play that role.

But this only takes us part of the way there. OK, so Ferguson played a role. But why would the government want him to play that role? If you buy the argument of Fund (the author of the above-quoted article), the shutdowns were a mistake, destructive economically and unnecessary from the standpoint of public health. For the government to follow such advice, someone must have been convinced of Ferguson’s argument from a policy perspective. So that brings us back to points 1 and 2 above.

4. A reputational incumbency effect. Once someone is considered an expert, they stay an expert, absent unusual circumstances. Consider Dr. Oz, who’s an expert because people consider him an expert.

5. Low standards. We’ve talked about this before. Lots of tenured and accoladed professors at top universities do bad work. I’m not just talking about scandals such as pizzagate or that ESP paper or epic embarrassments such as himmicanes; I’m talking more about everyday mediocrity: bestselling books or papers in top journals that are constructed out of weak evidence. See for example here, here, and here.

The point is, what it takes to be a celebrated academic is to have some successes. You’re defined by the best thing you did, not the worst.

And maybe that’s a good thing. After all, lots of people can do bad work: doing bad work doesn’t make you special. I proved a false theorem once! But doing good work, that’s something. Now, some of these celebrity academics have never done any wonderful work, at least as far as I can tell. But they’re benefiting from the general principle.

On the other hand, if the goal is policy advice, maybe it’s better to judge people by their worst. I’m not sure.

Not that we’re any better here in the U.S., where these academics have had influence in government.

Taking the long view, organizations continue to get staffed with knaves and fools. Eternal vigilance etc. Screaming at people in the press isn’t a full solution, but it’s a start.

P.S. There seems to be some truth to explanation 1 above, “Ferguson didn’t really make all those errors; if you look at his actual statements, he was sane and reasonable.” From Tom in comments:

Mad Cow paper:
https://www.ncbi.nlm.nih.gov/pubmed/11786878

From abstract:
“Extending the analysis to consider absolute risk, we estimate the 95% confidence interval for future vCJD mortality to be 50 to 50,000 human deaths considering exposure to bovine BSE alone, with the upper bound increasing to 150,000 once we include exposure from the worst-case ovine BSE scenario examined.”

Consistent with the “up to 50,000” quote, but the quote fails to mention the lower bound.

See also Vidur’s comment which discusses some of the other forecasts.

649 Comments

  1. James Annan says:

    Ferguson has made some errors (not calibrating his model is one of the largest) but most of this is bogus. Saying “if you don’t act, X will (or at least may) happen” is not a wrong prediction if people take action to avoid X, and then X fails to happen.

    • Carlos Ungil says:

      What do you mean by “calibrating his model”?

      • Andrew [not Gelman] says:

        I think he means that Ferguson et al assumed a doubling time of 5 days at a time when a fit to available data would have shown it to be less than 3 days. This meant that they may have under-estimated R0, which in turn may explain why the current UK deaths exceed Ferguson’s forecasts for the lockdown scenario.
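
        [To make the calibration point concrete: a doubling time of 5 days vs. one under 3 days implies very different growth. Below is a minimal sketch of the standard exponential-growth arithmetic; the 5-day and 3-day figures are the ones mentioned in this comment, and treating spread as a clean exponential is of course a simplification.]

```python
def daily_growth_factor(doubling_days):
    """Daily multiplicative growth factor implied by a given doubling time."""
    return 2 ** (1 / doubling_days)

# Assumed 5-day doubling vs. a roughly 3-day doubling fit to the data
for d in (5, 3):
    g = daily_growth_factor(d)
    print(f"doubling time {d} days -> {g:.3f}x per day, {g ** 7:.1f}x per week")
```

        [A 3-day doubling time compounds to roughly twice as much weekly growth as a 5-day one, which is why the assumed doubling time matters so much for the implied R0 and the forecast deaths.]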

        • Carlos Ungil says:

          Thanks. I’ve just noticed that I could have found out by following the link in his name.

        • dhogaza says:

          It looks like the team working on the C++ version are calibrating the model. I have no idea how it’s being used, though.

          “…. Right now we are in the emergency stage where model runs are repeated multiple times in the week with calibrations to the latest data as it comes.”

          https://github.com/mrc-ide/covid-sim/issues/194

        • Marilyn Shepherd says:

          Doing some other research, it didn’t take long to discover he based his “model” on Spanish flu, a different virus from over 100 years ago, and thought this one might be the same. Coronas have been around forever without a single nation ever shutting down.

          • V says:

            NB. MERS and SARS were both new corona viruses. This one is a new corona virus. Virologists are the best people to explain and identify viruses. Beware of misleading inaccurate sweeping generalisations.

        • Daniel Connelly says:

          You need to be careful about calibrating the model to doubling time, since early doubling time can be dominated by a subset of the population, while herd immunity may be determined by a broader component of the population. For example, those living in dense urban areas may dominate the early increase, but assuming this same R for herd immunity would lead to an overestimate of net infections. So if the calibration was conservative on R, that may have yielded a better final result.

          That said, I’ve not seen anything indicating the ultimate death numbers in the absence of interventions would be substantially (within a factor of 2) off from the study’s estimates. And a key conclusion of the paper is that relaxing interventions prematurely (before a vaccine, or aggressive test & trace capability) would result in a return of exponential growth.

    • Twain says:

      James Annan,

      > Saying “if you don’t act, X will (or at least may) happen” is not a wrong prediction if people take action to avoid X, and then X fails to happen.

      Are you claiming that “If we did X (shutdowns) to avoid Y (deaths from COVID-19) and then saw Y not occur, X must be the cause”?

      If yes, that is the Post Hoc Fallacy: “Since event Y followed event X, event Y must have been caused by event X.”

    • Linch Zhang says:

      No, he’s saying that if somebody makes a claim “If X, then Y” and neither X nor Y happened, this can’t be much evidence for the causal chain being wrong unless you have fairly sophisticated machinery.

      • Twain says:

        Linch,

        I don’t follow.

        • Carlos Ungil says:

          This was the prediction:

          “In the (unlikely) absence of any control measures or spontaneous changes in individual behaviour, we would expect a peak in mortality (daily deaths) to occur after approximately 3 months […] given an estimated R0 of 2.4, we predict 81% of the GB and US populations would be infected over the course of the epidemic. […] In total, in an unmitigated epidemic, we would predict approximately 510,000 deaths in GB and 2.2 million in the US, not accounting for the potential negative effects of health systems being overwhelmed on mortality.”

          We cannot say that the prediction was wrong just because, after a few weeks and with very strong mitigation measures in place, deaths have been only 31,000 and 77,000.

          • Joshua says:

            Carlos –

            Not wanting to be overly semantic – but I think the distinction between prediction and projection is important here.

            • Carlos Ungil says:

              I don’t know what that means but the choice of word is theirs, not mine. I just relayed the “prediction”.

              • Joshua says:

                In climate change, a result of a model often implies a projection – say that if emissions are at X level going forward, temps will rise within a range with Y being at the top of that range.

                People with an agenda will then say that “The model predicted that temps will reach Y, and temps didn’t reach Y, therefore the modelers were trying to scare us to destroy capitalism, and btw, modeling is bad.”

                They not only ignore that emissions were at X-n, they also ignore the lower bound of the projection.

                I think that a big part of the misunderstanding/misuse of the models lies in the distinction between a “prediction” and a (conditional) “projection.”

          • Lowell says:

            77,000 dead in the US from this plandemic?

            CDC reported 2018 influenza deaths at 79,000
            CDC reported 2018 pneumonia deaths at 40,000
            Now CDC says just less than 11,000 are attributable to COVID-19 alone.

            Comorbidities (https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/index.htm)
            Table 3 shows the types of health conditions and contributing causes mentioned in conjunction with deaths involving coronavirus disease 2019 (COVID-19). For 6% of the deaths, COVID-19 was the only cause mentioned. For deaths with conditions or causes in addition to COVID-19, on average, there were 2.6 additional conditions or causes per death. The number of deaths with each condition or cause is shown for all deaths and by age groups. For data on comorbidities,

            Some categorization issues are clouding the real data.

            Saying as many as NONE to 50,000 would make a really unpopular fortune teller, but profitable in other beliefs.

        • If you shoot yourself in the leg, you will bleed to death. Later… you don’t shoot yourself in the leg… and you do not bleed to death… headlines read “Twain does not die of exsanguination proving Daniel has no idea what he’s talking about”

          • Twain says:

            Daniel,

            > If you shoot yourself in the leg, you will bleed to death. Later… you don’t shoot yourself in the leg… and you do not bleed to death… headlines read “Twain does not die of exsanguination proving Daniel has no idea what he’s talking about”

            This is a strawman argument that does not represent the argument I am presenting.

            My argument is this: One cannot say “We implemented shutdowns (X) and cases/deaths from COVID-19 (Y) decreased. Therefore X is the sole cause of Y.”

            This ignores the numerous factors exclusive of the shutdowns — improved hygiene (before and during shutdowns), masks, voluntary distancing (prior to shutdown), immunity, weather, etc. — that could be causing Y in addition to X. Sure, X could have an effect — but saying it is the sole cause is a logical fallacy in the absence of decisive data, which we do not have yet.

            • You’re still missing the point… if I predict that shooting yourself in the leg will cause you to bleed to death… and later you don’t bleed to death… a sensationalist/politicized journalism article could come out saying “Daniel predicts Twain will bleed to death… but he doesn’t! so Daniel has no idea what he’s talking about”

              but I didn’t predict “Twain will bleed to death” I predicted “IF YOU SHOOT YOURSELF then you will bleed to death”. Since you DIDN’T SHOOT YOURSELF my prediction has yet to be tested.

              This seems to be a consistent thread here: frequently a modeler creates a *conditional prediction* — “under conditions X then Y will happen” — and it’s taken as an *unconditional* prediction of “Y will happen, panic!” Later, when Y doesn’t happen, it’s taken as evidence that the modeler has no idea what he’s talking about (usually for political reasons, to attempt to discredit them).

              The only way you can disprove the goodness of a conditional probabilistic prediction is to see the conditions X come true, and then see that actual outcome Y is fairly far outside the high probability range of Y predicted by the model.

              So for example in the prediction quoted by Carlos above: https://statmodeling.stat.columbia.edu/2020/05/08/so-the-real-scandal-is-why-did-anyone-ever-listen-to-this-guy/#comment-1331888

              Since we did not see “In the (unlikely) absence of any control measures or spontaneous changes in individual behaviour” happen, it is now impossible to determine whether that was a good prediction by looking at the observations. All we can do is look at whether the mechanisms used in the prediction, or the software code to calculate it or anything like that might have had bugs or bad assumptions. The actual outcomes are in a counterfactual world where the conditions simply didn’t hold and are therefore irrelevant to the accuracy of the prediction.

              • CROM says:

                You conveniently forget that the specific modeler overshot reality in several past instances too.
                “ In 2005, Ferguson said that up to 200 million people could be killed from bird flu. He told the Guardian that ‘around 40 million people died in 1918 Spanish flu outbreak… There are six times more people on the planet now so you could scale it up to around 200 million people probably.’ In the end, only 282 people died worldwide from the disease between 2003 and 2009.”

                He was responsible for the killing of some millions of animals for foot and mouth disease

                “ In 2001 the Imperial team produced modelling on foot and mouth disease that suggested that animals in neighbouring farms should be culled, even if there was no evidence of infection. This influenced government policy and led to the total culling of more than six million cattle, sheep and pigs – with a cost to the UK economy estimated at £10 billion.
                It has been claimed by experts such as Michael Thrusfield, professor of veterinary epidemiology at Edinburgh University, that Ferguson’s modelling on foot and mouth was ‘severely flawed’ and made a ‘serious error’ by ‘ignoring the species composition of farms,’ and the fact that the disease spread faster between different species.
                Does Ferguson acknowledge that his modelling in 2001 was flawed and if so, has he taken steps to avoid future mistakes?”

                , some ludicrous numbers from mad cows disease

                “ In 2002, Ferguson predicted that between 50 and 50,000 people would likely die from exposure to BSE (mad cow disease) in beef. He also predicted that number could rise to 150,000 if there was a sheep epidemic as well. In the UK, there have only been 177 deaths from BSE.”

                What more do you need to admit that models are incredibly susceptible to GIGO and modelers are incredibly unwise/ignorant to prevent GIGO?

            • Twain says:

              Daniel,

              Thank you for clarifying. Apologies, I misunderstood your point.

              I agree with everything you stated.

            • Dyke says:

              What you are trying to say is one of the basic rules in logic – only the consequent (the result) is certain; the antecedent (the cause) can be anything.

        • Twain says:

          Carlos,

          Thank you for clarifying.

          > We cannot say that the prediction was wrong just because, after a few weeks and with very strong mitigation measures in place, deaths have been only 31,000 and 77,000.

          Fair. We can’t say it is right, either, just yet. So it is TBD.

          I will add that Ferguson’s above prediction (if my memory is correct) assumes a uniform IFR applied to infected individuals — which, as many have discussed here at length, is not a reasonable assumption because SARS-CoV-2 has an IFR that skews heavily with age and comorbidities.

          • jim says:

            What we can say is that the forecast/prediction is framed responsibly with many important constraints and caveats, even giving the R0 used to make it.

            For my money, if the quote is accurate, then this is a reasonable and responsible forecast. One can quibble with the actual numbers or methods, but the context is clear and the forecast is framed responsibly. He should be credited for that.

            • Twain says:

              Jim,

              I disagree.

              By not making clear his calculations assume a uniform IFR to calculate deaths for a disease where IFR varies tremendously (0.005% to 20%…) depending on age, comorbidity, etc., Ferguson misleads readers.

              Not making key assumptions clear is not responsible.

              • I’m not clear actually on whether he uses a uniform IFR; can you provide a source for this assertion? But you should also remember that those predictions were made at a much earlier stage in the pandemic, when the information available was much more limited.

                I agree with jim. For the most part, everything here looks quite responsible. Even using a single IFR is perfectly fine when calculating results for an entire population. The key is just whether the population-averaged IFR is close to the chosen IFR.

              • Twain says:

                Daniel,

                I’m assuming he is using a uniform IFR because that is what the Imperial College London Model uses per their last report (if I’m wrong, please skewer me!).

                Can you explain how using a population-averaged IFR is reasonable?

                To me, it seems unreasonable because IFR varies by orders-of-magnitude and skews to a specific population — so is an “average” IFR even possible? If the variance was smaller and skewing was less, then I’d agree — but that isn’t the case.

                Is there something I am missing?

              • dhogaza says:

                Twain …

                “By not making clear his calculations assume a uniform IFR to calculate deaths for a disease where IFR varies tremendously (0.005% to 20%…) depending on age, comorbidity, etc., Ferguson misleads readers.”

                Are we being subjected to a repeat of the “mostly old people would die so just isolate them and let everyone else lead their lives normally” argument here?

                Also, please do remember that death isn’t the only consequence here.

              • dhogaza says:

                > Are we being subjected to a repeat of the “mostly old people would die so just isolate them and let everyone else lead their lives normally” argument here?

                No.

                I’m stating that Ferguson did not communicate a key underlying assumption of his model. There is a big difference, IMO, between using a uniform IFR versus an age-stratified IFR since it varies so much; the reader should know that information. He should have prefaced with “Assuming a uniform IFR for the entire population, I conditionally project…”

                > Also, please do remember that death isn’t the only consequence here.

                Not sure what you imply here.

              • Twain says:

                Ignore the above post; my mistake.

                > Are we being subjected to a repeat of the “mostly old people would die so just isolate them and let everyone else lead their lives normally” argument here?

                No.

                I’m stating that Ferguson did not communicate a key underlying assumption of his model. There is a big difference, IMO, between using a uniform IFR versus an age-stratified IFR since it varies so much; the reader should know that information. He should have prefaced with “Assuming a uniform IFR for the entire population, I conditionally project…”

                > Also, please do remember that death isn’t the only consequence here.

                Not sure what you imply here.

              • Thanks Carlos!

                So now once again we conclude…. totally responsible messaging from the researcher, appropriate assumptions, irresponsibly spun or distorted in media and popular culture for political purposes.

              • > Can you explain how using a population-averaged IFR is reasonable?

                So, as Carlos points out, *no*, they didn’t use an average IFR.

                But since IFR is a number between 0 and 1, the average over the actual population exists. If you are modeling the entire population, then you can use whatever that average is…

                Suppose you have k age groups… and IFR_i is the IFR for each age group, *and* you assume that all age groups are equally susceptible and at risk to infection. Suppose Nt is the total population. N_i is the number of people in group i, and P_i is the proportion of group i that gets infected. Then the total number of people who die is:

                dead = sum(IFR_i * P_i * N_i)

                dead/Nt = sum(IFR_i * P_i * N_i/Nt)

                Call P_i * N_i/Nt the probability that a random person is in group i and infected: pinf_i

                dead/Nt = sum(IFR_i * pinf_i)

                now multiply by Nt

                dead = sum(IFR_i * pinf_i) * Nt

                call sum(IFR_i*pinf_i) as IFR_avg.

                dead = IFR_avg * Nt

                so provided you calculate it appropriately, IFR_avg produces the same results as stratifying explicitly.
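
                [The identity above is easy to check numerically. A minimal sketch — the IFRs, infection probabilities, and group sizes here are purely illustrative, not taken from any report:]

```python
# Hypothetical strata: (IFR_i, P_i = fraction of group infected, N_i = group size)
strata = [
    (0.0005, 0.60, 40_000_000),  # younger
    (0.0050, 0.60, 40_000_000),  # middle-aged
    (0.0500, 0.55, 20_000_000),  # older
]

Nt = sum(n for _, _, n in strata)  # total population

# Explicit stratification: dead = sum(IFR_i * P_i * N_i)
dead_stratified = sum(ifr * p * n for ifr, p, n in strata)

# Averaged: IFR_avg = sum(IFR_i * pinf_i), with pinf_i = P_i * N_i / Nt
ifr_avg = sum(ifr * p * n / Nt for ifr, p, n in strata)
dead_avg = ifr_avg * Nt

# The two routes agree exactly, as the derivation says
assert abs(dead_stratified - dead_avg) < 1e-6
print(f"IFR_avg = {ifr_avg:.4%}, deaths = {dead_avg:,.0f}")
```

                [The caveat from the derivation still applies: this equivalence needs the pinf_i weights to be computed from the actual infection pattern; a flat average of the IFR_i alone would not match.]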

              • Twain says:

                Carlos,

                Yes, I’m going to read something before making a claim.

                They do assume and apply a uniform IFR; see pg 22 of Report 20.

                Specifically, IFR’ ~ IFR * N(1,0.1). They then apply this IFR’ to calculate deaths: d = IFR’ * summation(c * pi,t), where c is number of cases and pi is incubation period. PDF N(1,0.1) is to “incorporate the uncertainty inherent in this estimate we allow the IFR for every region to have additional noise around the mean.”

                The above, translated into practical terms, means “anyone infected with SARS-CoV-2 has a probability of dying with mean 0.9% and sigma = +/-0.1%.”

                So they are claiming that anyone in the US population aged 0-49 has that probability of dying. Yet the IFR for every subgroup in this age range, per their Report 9, does not exceed 0.15% (and does not exceed 0.05% for ages 0-29).

                That makes no physical sense based on the data. The majority of children, teenagers, and healthy adults (or those with well-managed conditions) who contract SARS-CoV-2 experience no or mild symptoms. To say they have almost a 1% chance of dying is absurd.
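
                [For concreteness, the noise term as quoted — IFR′ = IFR × N(1, 0.1) — can be simulated directly. Note that the 0.1 is a relative standard deviation on a multiplier, so with a mean IFR of 0.9% (the figure under discussion here, not one I am asserting) the absolute spread works out to about ±0.09 percentage points:]

```python
import random

random.seed(0)
ifr_mean = 0.009  # the ~0.9% population-level IFR under discussion

# IFR' = IFR * N(1, 0.1), per the Report 20 formula quoted above
samples = [ifr_mean * random.gauss(1.0, 0.1) for _ in range(100_000)]
mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
print(f"mean IFR' ~ {mean:.4%}, sd ~ {sd:.4%}")
```

                [So the N(1, 0.1) term only jitters the region-level IFR around its mean; it does not by itself say anything about age stratification one way or the other.]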

              • Twain says:

                Daniel,

                Thank you for the detailed explanation. I appreciate it and I was mistaken with my earlier comments, sorry about that.

                One minor question: IFR_avg depends directly on accurate IFR_i and P_i, and is therefore sensitive to those values? So IFR_avg in turn depends on testing and an accurate count of infections?

              • Carlos Ungil says:

                Twain, they have different models for different things. For calculating those 500,000 deaths under discussion they were not using a uniform IFR in the epidemiological model simulation. I think that’s pretty clear.

                Anyway, the fact that you can point to the place in the more recent report on Italian regions where they discuss their assumptions suggests they are not hiding them, right? This is a completely different problem, where they need to infer infection rates from observed deaths and a simpler model may be adequate for that purpose. That same average IFR can later be used for the predictive (or projective, whatever) part if the population infected going forward is similar to the population infected in the past. Even if you were using age-specific IFRs and estimates of infection rates you would still need to aggregate them in the end. Considering that the infected populations in the first and second waves are similar is probably not the wildest implicit or explicit assumption in the model, to be fair.

              • Twain says:

                Carlos,

                See my above comment in response to Daniel’s explanation/derivation.

              • Stephen M St Onge says:

                I think you are being unreasonable here. If I ask “How many Los Angeles residents will die in the case of a magnitude 8.0 earthquake with epicenter downtown?”, the answer I am looking for is a single number. In this scenario, I’m not asking how many people die in the city limits, how many in the various suburbs of LA, how the San Fernando Valley will differ from the region south of the Santa Monica Mountains, or how the age of buildings affects the death rates of their residents.

                If Ferguson broke down his answer into many different subgroups, it would just leave us with the annoying necessity of adding those numbers up ourselves.

              • Rob Manzoni says:

                Not revealing the calculations – or the code used in a prediction – is the ploy of a typical fake scientist.
                Ref: Michael Mann, Phil Jones and the cabal of liars who enable idiot Nobel Laureates like Al Gore and Barack Obama to scare children; and to dismantle real science and very necessary proper education…

      • Nick Keefer says:

        X’s and Y’s? The statistics from Sweden and other non-lockdown countries have proven unequivocally that he and his models were wrong. Once again. He has shown he could give a shit about the humans he supposedly works to protect. He sold out to big Pharma. Why so much support for this clown?

        • Basil says:

          Can you read? “In the (unlikely) absence of any control measures or spontaneous changes in individual behaviour […]”. In Sweden this absence is clearly not the case, as proven by their similarly troubled economy. If X is not true, not Y does not disprove X -> Y.

          • confused says:

            I think the original model suggested that the deaths in a scenario with some mitigation but not lockdowns would be half the totally-unmitigated scenario (1.1 million for the US, vs. 2.2 million totally-unmitigated).

            I agree the totally-unmitigated scenario was never going to happen.

            But 1.1 million deaths is ~0.35% of the US population. Places that didn’t lock down and did more mild mitigation don’t seem to be showing that.

            0.35% of the Swedish population is about 36,000 deaths, which Sweden doesn’t seem to be on a trajectory to get near (they are at about 1/10 of that now).

            South Dakota would expect about 3,000 deaths. They have 44 now, and their cases don’t seem to be exploding (they have essentially one large cluster, in Sioux Falls, associated with a meatpacking plant).

            Arkansas is similar. Etc.

            0.35% of the population dying in a limited, largely voluntary mitigation scenario seems pretty reasonable for dense places like New York City/New Jersey, Lombardy, Madrid, etc. given how badly they actually were affected given much stricter measures. But it doesn’t seem to translate well to other areas, including much of the US.
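
            [The back-of-the-envelope arithmetic here is easy to reproduce; the population figures below are rough approximations, and 1.1M/330M actually comes out closer to 0.33% than 0.35%:]

```python
# ~1.1 million US deaths in the mitigated-but-no-lockdown scenario discussed above
scenario_deaths = 1_100_000
us_pop = 330_000_000  # rough 2020 US population
rate = scenario_deaths / us_pop

for name, pop in [("Sweden", 10_300_000), ("South Dakota", 885_000)]:
    print(f"{name}: {rate:.2%} of {pop:,} ~ {rate * pop:,.0f} deaths")
```

            [Applying a single national rate to Sweden or South Dakota like this is exactly the kind of uniformity assumption being questioned in this thread, which is the point: the scenario number translates poorly to less dense places.]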

    • Aaron Hall says:

      Exactly so. If we have extremely successful epidemiology we won’t know it, and every epidemiologist will have a “Chicken Little” reputation.

      Now that may come at a very high economic price, and we want to minimize that as well.

      How do we know the tradeoff? “I saved 150k lives, that’s 7-8 million as per the UMB, so I saved us over a trillion dollars in human capital at cost of X…”

      But we won’t believe it unless you show us the code and it withstands third party scrutiny.

      • Twain says:

        Aaron,

        > If we have extremely successful epidemiology we won’t know it, and every epidemiologist will have a “Chicken Little” reputation.

        Not necessarily. What if said epidemiologists produce rigorous data, both modeled and empirical, supporting their decisions? True, we would never know for certain if their recommended actions/policies were necessary (we cannot measure the counterfactual) — but rigorous data would stop many from crying “Chicken Little”.

        The problem is that some epidemiologists, like Ferguson, fail to produce the information necessary to determine if their data is rigorous (code; underlying assumptions; etc.) — which inclines some people to suspicion.

        • Joshua says:

          Twain –

          > What if said epidemiologists produce rigorous data, both modeled and empirical, supporting their decisions?

          The problem there is that the judgement you’re implying is necessarily subjective, at least to some degree – even in an expert or technically sophisticated community, let alone among the general public.

        • Twain says:

          Joshua,

          “Rigor” will always be subjective; perhaps “extensive” or “thorough” are better adjectives, but I’m not sure.

          To a degree, every criterion is “subjective”. But I think there is a (relatively) clear distinction between non-rigorous, haphazard data and rigorous, quality data.

        • Carlos Ungil says:

          Have you tried to read their reports or is your suspicion based just on what you see in the press?

      • yyw says:

        Extremely successful epidemiology would provide projections for a number of policy scenarios, so that there is a semblance of a validation feedback loop.

        • Joseph Candelora says:

          Presumably 8 policy scenarios, with a subset of them each broken into 12 subpolicies based on mitigation on/off triggers, was enough for you.

          https://spiral.imperial.ac.uk:8443/bitstream/10044/1/77482/14/2020-03-16-COVID19-Report-9.pdf

          Or did you want more?

          • yyw says:

            Thanks for the link. Why complain about the “chicken little” reputation in that case? Just point to the models with the relevant policy scenarios and compare the predictions with reality.

            • Joseph Candelora says:

              Why criticize a paper you haven’t read?

              • yyw says:

                I was replying to Aaron’s comment about “extremely successful epidemiologist”. Where in my comment above did I criticize Ferguson’s paper specifically?

                Not that I don’t have criticism now that I’ve read the paper.

            • dhogaza says:

              “Just point to the models with the relevant policy scenarios and compare the predictions with reality.”

              Do you think the people who are screaming chicken little will actually care about this? Are you aware that after the paper with its scenarios was published, and after the government of the UK decided to take firm action after all, Ferguson pointed out that their projection was then for 20K deaths, and was promptly accused of having modified his model? People pointed out the various scenarios until they were blue in the face, but the message most of the attentive public saw was that Ferguson’s original 500K estimate had been changed by him to 20K. Not that the scenario projections had all been made and presented at the same time, and that the model had not been changed, blah blah.

    • Rob A says:

      It seems that Ferguson made some pretty bad predictions about countries that didn’t lock down, e.g., Sweden and Japan.

      https://www.iedm.org/the-flawed-covid-19-model-that-locked-down-canada/

    • Tim says:

      Saying “if you don’t act, X will (or at least may) happen” is not a wrong prediction if people take action to avoid X, and then X fails to happen.

      That’s a specious argument, as it rests on the assumption that the first claim is an absolute truth; if it isn’t, then the rest falls (and the use of “may happen” is a total get-out clause).

      In the case of coronavirus, one only has to look at Sweden. If we follow Professor Ferguson’s assertion for the UK that if we don’t lockdown, up to 500,000 people could die, then a similar cataclysmic outcome should have affected Sweden, where they didn’t go into lockdown. It didn’t happen, and indeed thus far they have done better than we have in per capita terms. The virus did get into care homes in Sweden, as it did in many countries, but that is the only failure really.

      • Martha (Smith) says:

        Tim said,
        “If we follow Professor Ferguson’s assertion for the UK that if we don’t lockdown, up to 500,000 people could die, then a similar cataclysmic outcome should have affected Sweden, where they didn’t go into lockdown.”

        This reasoning is not valid unless other relevant conditions were the same for Sweden then as for the UK now. Do you have any evidence that these other relevant conditions were indeed the same for Sweden then as for the UK now? (Also, note the difference between “could die” and “should have affected Sweden” — the first describes a possibility, the second a certainty.)

    • This is a typical con trick.
      It was never a bad model from the start, only something that needs adjusting.
      Health will always need adjusting, because our lives are guided by trends, government budgets, and the labour market, all of which are always changing. Food trends driven by marketing increased sugar intake, and with it the chronic disease diabetes; increased computer use over sport lowered vitamin D absorption. Government austerity brought malnutrition and homelessness, and cuts to free education mean fewer medics. Labour markets driven by industrial and robotic shifts produce unemployment leading to poverty.
      Even brainwashed by the Behavioural Insights Team, the Covid dreams and hair loss tell me people will always be individuals and will not blindly accept being told what they have to do. They have banded together against them, and more are waking up every day and joining.

  2. Tom says:

    Mad Cow paper:
    https://www.ncbi.nlm.nih.gov/pubmed/11786878

    From abstract:
    “Extending the analysis to consider absolute risk, we estimate the 95% confidence interval for future vCJD mortality to be 50 to 50,000 human deaths considering exposure to bovine BSE alone, with the upper bound increasing to 150,000 once we include exposure from the worst-case ovine BSE scenario examined.”

    Consistent with the “up to 50,000” quote but the quote fails to mention the lower bound.

    • Dale Lehman says:

      Thank you for saving me a bunch of time trying to track down this information. I will now paraphrase Andrew’s question: how should we judge the media? I am willing to ignore this journalist’s work entirely now – as evidenced by his worst work. He has clearly misstated Ferguson’s BSE work, and that is sufficient for me to ignore the rest of his (Fund’s) work. Everyone can make mistakes, including journalists, but this one looks dishonest and my patience for dishonesty is rapidly disappearing.

      • dhogaza says:

        “I am willing to ignore this journalist’s work entirely now – as evidenced by his worst work.”

        If he’s the John Fund I found through Google, though described as a journalist, he’s paid to write editorial pieces for two conservative American outlets, The National Review and The American Spectator. No journalistic rigor is asked for or expected. He writes to support the political ideology those outlets were created to support, come hell or high water.

        • ira says:

          Exactly. If you know who John Fund is there’s really no reason to read any more.

          • Kaya3 says:

            Hilarious watching people dismiss something because it doesn’t comply with their views. The whole of mainstream media is corrupted to its core. If the content is wrong then show it.
            Ferguson is a clown. An idiot who has been wrong almost every time he opened his mouth and has caused immeasurable damage to the whole planet and the deaths of tens of thousands who will die as a result of the actions taken over this event.
            Yes he left out the lower parameter, so what? Any scientist who gives themselves a range of 50 to 150,000 should never be listened to in the first place. He’s a moron, and now he’s resigned because he couldn’t even follow the orders he was imposing on the rest of the population to shag his married lover. Go figure.

        • Carl says:

          You can say the same thing of the “journalists” who write for liberal and leftist outlets.

    • Joshua says:

      Tom –

      > Consistent with the “up to 50,000” quote but the quote fails to mention the lower bound

      Bingo.

    • Andrew says:

      Tom:

      Thanks for looking this up. That’s helpful. I should’ve thought of googling *Neil Ferguson BSE sheep* or something like that before pressing Send on my post!

    • Adede says:

      50 to 50,000 is such a huge range. Almost anything would fall into it.

      • jd says:

        Came across this blog post that I thought was pretty good: https://maybury.ca/the-reformed-physicist/2020/04/09/no-better-than-a-fermi-estimate/
        Especially the last paragraph, which I quote here:
        “During World War II, [Nobel laureate, Ken] Arrow was assigned to a team of statisticians to produce long-range weather forecasts. After a time, Arrow and his team determined that their forecasts were not much better than pulling predictions out of a hat. They wrote their superiors, asking to be relieved of the duty. They received the following reply, and I quote, “The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.”

      • Phil says:

        Presumably the ‘central’ estimate was around 1,600 (the geometric midpoint of 50 and 50,000), with a factor of about 30 uncertainty either way. I agree it’s a huge range.

        But (1) if you really are that uncertain, you should say so. That’s hard for people to do, so I see this as a feature, not a bug, unless there was some reason to think he should have narrowed the bounds.

        (2) it’s tempting to say there’s nearly no information in an estimate with bounds this wide, but there is. Contrast it with an estimate of 50-5000. If you give an estimate like that you’re saying even the worst case isn’t that bad. Put the upper limit at 50K and you’re saying Jeez, we have no idea. The degree of ignorance is the point, if you give bounds that wide, and that’s a kind of information.
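For a multiplicative interval like 50 to 50,000, the natural “central” estimate is the geometric midpoint. A minimal sketch (my framing, not anything from Ferguson’s paper):

```python
import math

lo, hi = 50, 50_000
mid = math.sqrt(lo * hi)      # geometric midpoint, ~1,581
factor = math.sqrt(hi / lo)   # ~31.6; mid / factor = lo, and mid * factor = hi

print(f"midpoint is about {mid:,.0f}, times-or-divided-by {factor:.1f}")
```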

        • Bill Spight says:

          Yes, you should say how uncertain you are, if you can. That may be hard for people in general to admit, but it’s a sine qua non for a scientist to say, especially one with even a smidgeon of statistical training.

          Consider the Arizona epidemiologist who simply said that unless Arizonans continued to shelter until the end of May, he did not see how to avoid exponential growth in Arizona cases. His statement was relevant, important, and appropriately vague.

      • Zhou Fang says:

        The death toll of Covid 19 does not…

      • More Anonymous says:

        Adede: Wide ranges show up like this because, at the start, there often isn’t enough evidence to tell if an epidemic will take off and cause many deaths or die out and cause a small number.

        Until more precise estimates of quantities like R0 can be obtained, the only honest answer is a wide range of deaths. People want smaller ranges and refusing to give them is a principled position.

        When people then cherry pick the side of the range that didn’t happen, I think they are being misleading and dishonest (not you, John Fund).

      • Linch Zhang says:

        I don’t think it’s that large of a range tbh. 3 orders of magnitude honestly is pretty impressive when you’re considering that Ferguson

        1) Was trying to predict different numbers based on whether an outbreak will happen or not, at the beginning of an outbreak.
        2) Was giving 95% CIs, so if he’s well-calibrated, only 5% of the time will numbers fall outside of his range.
        3) Was trying to predict numbers of people who will die from a novel infectious disease *before 2080*.

        If you think you can do better, you try to make a prediction about deaths from an arbitrary novel infectious disease before 2080.

        Heck, I’ll throw you a lowball and give you a disease that we already know a lot about. How about Ebola?

        Global deaths from Ebola before 2080. What’s your 95% CI?
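Calibration in Linch’s sense is checkable: over many 95% intervals, about 95% of outcomes should fall inside. A toy simulation of a well-calibrated forecaster (entirely synthetic, just to illustrate the definition):

```python
import math
import random

random.seed(0)

hits, trials = 0, 10_000
for _ in range(trials):
    # The forecaster's belief: deaths are lognormal with these parameters.
    mu = random.uniform(0, 10)
    sigma = random.uniform(0.5, 2.0)
    # A 95% interval under that belief ...
    lo, hi = math.exp(mu - 1.96 * sigma), math.exp(mu + 1.96 * sigma)
    # ... and an outcome drawn from the same distribution (the calibrated case).
    outcome = random.lognormvariate(mu, sigma)
    hits += lo <= outcome <= hi

print(hits / trials)  # close to 0.95
```

Note how wide those honest intervals can be: with sigma = 2, hi/lo = exp(2 × 1.96 × 2), roughly 2,500, i.e. more than three orders of magnitude, and the forecaster is still perfectly calibrated.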

    • Robert Rees says:

      You raise an interesting point which is also pertinent to the current debate. I have noticed that the projections from these models often have extremely wide error bars. I agree that to write ‘up to 50,000’ is misleading. But to project a range of 50 to 150,000 is effectively unfalsifiable and hence unscientific. The whole point of a prediction is that it can be proved or disproved; ranges this wide allow neither.
      It also suggests to me that the model is pretty chaotic if small changes in assumptions can lead to massive changes in outputs, as has been seen in the Covid models.

  3. BenK says:

    You include a good panel of options. Just in January, I was with a group of people assembled to forecast biological disasters and around the coffee bar they were panning Tony Fauci for having had career success on the back of influenza predictions that were worse than the outcomes of the pandemics. These putative experts were saying that COVID-19 would be another SARS, limited to East Asia, no big deal.
    They lumped him in with people like Ferguson and Marc Lipsitch, who were apparently always wrong. According to them.

  4. Ben Bolker says:

    This article seems like a hatchet job. While anyone can turn out to be a charlatan, Ferguson has a very good reputation in epidemiological circles as a thoughtful, sensible guy. In support Fund quotes:

    * Johann Giesecke, a respected epidemiologist but one who disagrees strongly with Ferguson (and the majority of epidemiologists) on the wisdom of lockdown-based strategies
    * Elon Musk (!!)
    * Jay Schnitzer, “an expert in vascular biology and a former scientific director of the Sidney Kimmel Cancer Center in San Diego”
    * Charlotte Reid, “a farmer’s neighbor” (on the foot and mouth disease virus)

    Everyone (including Ferguson) agrees that Ferguson behaved foolishly. It’s not crazy (although I think it’s incorrect) to say that Ferguson’s model was wrong, and that lockdown strategies are misguided. But calling him a charlatan is just unfair.

    Also, I think your title is a little clickbaity.

    • Andrew says:

      Ben:

      Yeah, I noticed the flakiness of the sources too. Regarding clickbait, my title is taken from the original article and I kept it in quotes, but maybe that is not clear to the reader.

    • Ben Bolker says:

      FWIW here’s the source of the “150 million dead from avian influenza” quote https://www.theguardian.com/world/2005/sep/30/birdflu.jamessturcke

      Someone else (David Nabarro, WHO) gave a range of 5-150 million dead in an avian influenza outbreak. Ferguson said: “Around 40 million people died in 1918 Spanish flu outbreak … There are six times more people on the planet now so you could scale it up to around 200 million people probably.”

      A worst-case back-of-the-envelope calculation, but not at the level of a misguided prediction (IMO).
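The scaling Ferguson is quoted doing is a single multiplication (figures as given in the Guardian piece):

```python
deaths_1918 = 40_000_000   # "Around 40 million people died in 1918 Spanish flu outbreak"
population_ratio = 6       # "six times more people on the planet now"

# 240,000,000, which he evidently rounded down to "around 200 million"
print(deaths_1918 * population_ratio)
```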

      • yyw says:

        From the Guardian: Last month Neil Ferguson, a professor of mathematical biology at Imperial College London, told Guardian Unlimited that up to 200 million people could be killed. “Around 40 million people died in 1918 Spanish flu outbreak,” said Prof Ferguson. “There are six times more people on the planet now so you could scale it up to around 200 million people probably.”

        Unless the reporter misquoted or misinterpreted him, he deserves a lot of flak for this kind of stupid speculation.

    • Anonymous says:

      I don’t know anything about Ferguson, but that Elon Musk attack “Elon Musk calls Ferguson an ‘utter tool” who does “absurdly fake science.”’ makes me want to root for Neil.

      Some amusing Musk quotes in the same vein:

      http://www.elonmusk.today

      • Andrew says:

        Anon:

        These quotes are hilarious. We can await the next article by John Fund: a debunking of Elon Musk, ending with the statement, “So the real scandal is: Why did anyone ever listen to this guy?”

        On the plus side, Musk didn’t call Ferguson a pedo guy.

        • dhogaza says: