## Coronavirus: the cathedral or the bazaar, or the cathedral and the bazaar?

Raghu Parthasarathy writes:

I’ve been frustrated by Covid-19 pandemic models, for the opposite reason that I’m usually frustrated by models in science—they seem too simple, when the usual problem with models is over-complexity. Instead of doing more useful things, I wrote this up here.

In his post, Parthasarathy writes:

Perhaps the models we’re seeing are not the models we need. . . . What models do I mean? It pains me as a physicist to write this, but: detailed ones. For disease spreads, we can imagine various modeling approaches. There are: (1) Simple differential equation based models, for example SIR models. These are elegant and interesting, describing in a generic sense the rise and fall of diseased populations as a function of just a few parameters. (2) Models that include network connectivity. (3) Models that include the realistic connectivity networks of the world we live in. Why? Because the network matters! The general approach of (2) above already tells us that geometry matters, not for the existence of infectious spreading but for its characteristics. It strikes me as analogous to percolation theory in physics — asking what fraction of sites on a lattice I’ll have to randomly occupy with a stepping stone before I can cross from one side to the other. That fraction varies a lot on different lattices, though it exists for all of them.

More importantly, our goal in analyzing an ongoing pandemic is to understand that pandemic, not the general case. . . . It’s only minimally helpful to apply the same naive model to all, even as a curve-fitting exercise, because while we can always fit effective parameters, the regions’ curves themselves may be profoundly different.

One possibility is that we do, but these models are not widely publicized. I would like to believe this, but I am doubtful. . . . Another possibility is that it’s much easier to make poorer models. Certainly it seems like everyone these days is doing it. . . .

I [Parthasarathy] would like to imagine that like the Manhattan Project, there is a sequestered band of scientists somewhere doing the difficult work of slogging through demographic, transport, and city planning data to construct realistic pandemic models . . .
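Raghu’s category (1) is easy to make concrete: a basic SIR model is just three coupled differential equations, and its behavior is governed essentially by the single ratio beta/gamma. Here is a minimal sketch (forward-Euler integration; the parameter values are illustrative, not fitted to Covid-19 or any real epidemic):

```python
# Minimal SIR model in fractions of the population:
#   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
# Parameters below are illustrative only, not estimates for any real disease.

def sir(beta=0.3, gamma=0.1, i0=1e-3, days=200, dt=0.1):
    """Forward-Euler integration of the SIR equations."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)  # track the height of the epidemic curve
    return s, i, r, peak

s, i, r, peak = sir()
print(f"final susceptible fraction: {s:.3f}, peak infected fraction: {peak:.3f}")
```

The point of the exercise is exactly Raghu’s: the curve’s shape here depends only on a couple of effective parameters, which is elegant but tells you nothing about the geometry of who actually contacts whom.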

I think these more elaborate models make sense, and the way they happen is that people build up from simpler models. I was just talking with someone today about the flaws of individualistic social science, and we see it in part in the horrible b.s. about “case fatality rate” etc. as if it’s a Platonic number just sitting there to be estimated and argued about.

Regarding the “Manhattan project” idea: no, I’m pretty sure that’s not happening. Instead we’re following the model of the “war on cancer,” which is to have a large number of research groups, all competing for attention and funds. I suspect that, during the time I spend blogging, various medical researchers are on the phone getting million-dollar donations…

Raghu’s response:

Well before Covid-19, I’ve been thinking that our present structure of academic science, mainly made up of lots of small research groups, is highly flawed—it’s what you’d create if you didn’t really care about the answers to questions, but rather cared about keeping people busy. That’s an exaggeration, of course—one needs some number of small, random groups (like mine!) hopefully finding new and unusual directions—but I think a smaller number of groups that are larger, more stable, and publishing less would be an improvement. It worked for the Manhattan Project. It also works for High Energy Physics, painful though it is to write that. I would be miserable as one cog in a CERN project, and I find their questions scientifically very boring, but they are undeniably successful at actually answering the questions they ask.

My further thoughts:

I’m not sure what to think here. On one hand, I like my freedom. It’s a lot more fun, and lucrative, to be a college professor and statistical consultant than I imagine it would be to be a full-time government employee. On the other hand, if the government really got a few thousand of us together to work on this stuff full time, we could probably make more progress than under the current ultra-decentralized system. On the third hand, if the government really did convene a panel of experts, who knows who they’d get? Maybe they’d get a bunch of politically-connected incompetents. Yes, the Manhattan Project in World War 2 was successful, but (a) that was just one project, and (b) the world of physics was pretty small back then, so maybe it was easier for them to find top people. Also, there were not such easy-to-find lucrative and comfortable non-governmental options the way there are now. But that in turn is partly the government’s choice. They choose to fund NSF, NIH, etc., rather than just hiring a bunch of us directly.

Maybe a Manhattan Project or the equivalent “cathedral” could not be constructed today. But we do see problems with the “bazaar,” especially in the way that the news media and social media create frenzies based on whatever bits of news come up on any given day. The best that can be said about these frenzies, I think, is that it’s not clear what the alternative would be. The news media have major problems, but I wouldn’t want the New England Journal of Medicine or Lancet or PNAS being our gatekeeper either. (Sorry, Lancet and PNAS, but we’re not going to forget Andrew Wakefield, himmicanes, and all the rest. You’re not evil; you’re just imperfect, like any other human institution.)

All that said, given that the bazaar is what we have now, we should think about how to make best use of it. Siloing data seems like a problem, releasing results with no report seems like a problem—it’s easy to list problems, not so easy to figure out what to do next, beyond each of us reacting to information as it comes in, and attacking problems from many directions (as here).

So far I’ve been talking about the actions of the scientific community, but the same issues arise for everyday decisions. I wrote about this a few weeks ago: the problem is that the necessary actions are at the societal level, but for years we’ve been socialized to think of everything in terms of individual decisions. Prepping might be a good idea for some people, but in any case it’s not scalable.

To get back to this cathedral/bazaar thing: that’s just a shorthand. In the famous article with that slogan, the “bazaar” was open source, but the current scientific “bazaar” is some mix of data sharing and data hiding.

Ideally we could have a best of both worlds, with a large government-organized effort that also includes some data and communication experts that would publicly release data, code, and preliminary findings as they came in. The cathedral would facilitate the bazaar, as it were.

In World War 2, we were fighting human enemies, and so the Manhattan Project was kept secret (not that it stopped the Soviets from infiltrating). Now we’re fighting a virus, so no need for secrets.

1. jim says:

If you have a goal – like creating a nuclear weapon or a vaccine or a pandemic model – a top-down directed project is the best way to go.

But basic research, and therefore academic science, isn’t goal-oriented. It’s about discovery, not creation. It’s about generating knowledge, not generating a product. It does a great job of the former. But it’s not surprising that it does a poor job of the latter.

So it isn’t surprising that academic science isn’t finding pandemic solutions with standard operating procedures. What is surprising, however, is that they don’t seem to recognize that. They could self-organize into larger, directed groups to accomplish this task, but they don’t.

• Joshua says:

> So it isn’t surprising that academic science isn’t finding pandemic solutions with standard operating procedures. What is surprising, however, is that they don’t seem to recognize that.

First, how do you know that “they” don’t recognize the limitations with the standard operating procedures? Second, assuming “they” don’t, why would you find that surprising?

2. Joseph says:

I think the approach we are taking, trying 70 or so vaccine projects, is best done decentralized so that politics can’t close off promising alternative lines of development (some of which might, admittedly, be short of funding). The other issue with a centralized project is that there is a huge push for success. People have speculated that part of the use of the Atom Bomb was the need for the Manhattan project to show a success. Biomedical research also has similar issues when you need to find a success at any cost. There is also the issue that it took years for the big project to succeed and we’d like to succeed fast.

But it is a tough problem.

• Anoneuoid says:

70 vaccine projects and how many are testing in aged animals/volunteers with comorbidities?

I haven’t seen a single one not using young healthy animals or volunteers. This vaccine is going to be a disaster far worse than the virus if safety is only checked in the not-at-risk group.

• Joshua says:

Anoneuoid –

> This vaccine is going to be a disaster far worse than the virus if the safety is only checked in the not at risk group.

Don’t existing vaccine development protocols include such testing? Do you think there’s a likelihood that there will be widespread distribution of a vaccine without such testing being done?

• Anoneuoid says:

Yes, everything I’ve seen points to all the testing being done in young healthy animals or volunteers. If you know of an example otherwise I’d love to see it.

• Josh says:

Anoneuoid –

No, I don’t know of any.

I would be gobsmacked, however, if such a step weren’t a necessary part of vaccine development protocols, and a step necessarily taken before widespread dissemination of a vaccine for COVID-19 (especially given the higher fatality among those with comorbidities). That said, it wouldn’t particularly surprise me if it happened on a smaller scale in the testing of some vaccines before widespread dissemination.

But maybe I’m naive.

• Anoneuoid says:

Everything is being rushed, I wouldn’t assume anything. Can anyone find a single clinical trial registered planning to use aged/comorbid subjects?

• Phil says:

I know nothing about the plans for these tests but I think it makes sense to start with young human volunteers. If it doesn’t cause severe illness in them, then you move to older human volunteers, and on up the ladder. Flipping it around, it seems like a really bad idea to start with old people, just way riskier.

• Anoneuoid says:

Sure, but the animal trials are using young, healthy animals too. There is no reason for that besides that they are cheaper. Pretty sure Harlan has aged rats/mice for sale at least. But if you want hACE2 aged mice, obviously that takes time. I don’t know what the situation is with macaques.

• Mendel says:

I would think the first thing to find out is what kind of antibody response the vaccination provokes. If you only get a weak immune response, it’s not worth pursuing further.
But to find that out, you need test subjects with a healthy immune system. Otherwise, a weak response could just be because the subject is old, and then you’ve learned nothing.

• David J. Littleboy says:

+1

An old joke in AI (1970 generation neural network object recognition) was that if you can’t recognize something without noise, you won’t be able to recognize it with noise*. This seems to be a more general principle than I thought…

*: Perceptron systems were failing DoD tests. This was from Minsky and Papert’s AI seminar, fall 1972; much of their criticism of NN object recognition remains correct today. But that’s another rant…

3. https://www.medrxiv.org/content/10.1101/2020.04.14.20063750v1

In that paper a group uses a detailed multistage agent-based model with a realistic network to model the pandemic and then looks at what happens when they try different strategies. The night before my wife found that and sent it to me, I had literally written down essentially these same compartments/states.

So what I think is difficult with this kind of model is doing inference on it. I suspect in this case they tuned parameters until they got something that more or less worked, and then did parametric studies of management strategies on top of that tuned model. But if you wanted to do Bayesian inference to look at a full posterior, it’d be a LOT of computation. So we don’t know how sensitive their strategy effectiveness is to the parameter tuning that they did ahead of time.
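Even a toy version of such a model shows where the cost comes from: one forward simulation is cheap, but a full Bayesian fit needs many thousands of them. A minimal network-SIR sketch (the random graph, rates, and sizes here are illustrative assumptions, not the paper’s model):

```python
import random

def network_sir(n=500, k=8, p_transmit=0.05, p_recover=0.1, steps=200, seed=0):
    """Discrete-time SIR on a random graph with average degree ~k.
    Each infected node infects each susceptible neighbor with probability
    p_transmit per step, and recovers with probability p_recover per step."""
    rng = random.Random(seed)
    # Build an Erdos-Renyi-style random graph.
    neighbors = [set() for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            if rng.random() < k / (n - 1):
                neighbors[a].add(b)
                neighbors[b].add(a)
    state = ["S"] * n
    state[0] = "I"  # one seed infection
    for _ in range(steps):
        infected = [v for v in range(n) if state[v] == "I"]
        if not infected:
            break  # epidemic over
        for v in infected:
            for u in neighbors[v]:
                if state[u] == "S" and rng.random() < p_transmit:
                    state[u] = "I"
            if rng.random() < p_recover:
                state[v] = "R"
    return state.count("R") / n  # final attack rate

print("final attack rate:", network_sir())
```

Each call is one Monte Carlo draw; a posterior over even a handful of parameters multiplies this by the number of MCMC or ABC iterations, which is why tuning by hand and then doing forward parametric studies is the path of least resistance.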

4. Also a physicist says:

Am I reading this right? A physicist is complaining about spherical cows? Is nothing sacred?

On a more serious note. Forcing a Manhattan Project on everything is not going to be robust for the same reason that Communism failed — that way leads to monoculture and stifled innovation and empty grocery shelves (wait..) What works for me is most unlikely to work for thee, so perhaps you should do your thing without interference from me. Furthermore, it is not like directed research is not being done; it is, via targeted NASA, NSF, and NIH programs.

• Andrew says:

Also:

Just to be clear, I wasn’t talking about the Manhattan Project for everything, just for some things like coronavirus research. To put it another way, I doubt they would’ve successfully designed and built the atomic bomb by funding 50 competing research groups, with career development being the proximate goal for each group.

• Brent Hutto says:

It seems to me a significant difference is that at most there are only resources for one or maybe two crash atom-bomb programs even if you devote a sizable chunk of national resources to the effort. So you can’t have fifty or even five Manhattan Projects with enough resources for a non-zero chance of success.

A vaccine for one particular virus can be pursued by dozens of competing (or complementary) initiatives and each can in theory have enough resources to possibly succeed.

Put another way, if you can only afford to roll the dice once then it makes sense to put everything a nation-state can muster riding on that one try. A vaccine is not like that, much less a “model” (although frankly I can’t get my head around putting “model” on the same level of abstraction as “vaccine” or “atom bomb”).

• jim says:

“It seems to me a significant difference is that at most there are only resources for one or maybe two crash atom-bomb programs”

IMO the key resource on large projects is likely managerial talent. You need to be sharp on all technical issues; be able to work top-down and bottom-up; build an effective strategy; manage $$; lead well and manage human talent too.

• Martha (Smith) says:

Good point re managerial talent. But in scientific matters, that seems to be especially rare, because managing a group of scientists is like herding cats.

[Hint to Andrew: Good opportunity for a cat picture! (Anyone got a candidate?)]

• Joshua says:

Brent –

> It seems to me a significant difference is that at most there are only resources for one or maybe two crash atom-bomb programs even if you devote a sizable chunk of national resources to the effort. So you can’t have fifty or even five Manhattan Projects with enough resources for a non-zero chance of success.

I agree with the logic of this in principle, but translating that principle into a reality runs through something of a subjective filter.

For example, look at the amount of funding which in theory goes into a coherent effort – national defense – which comprises a huge number of discrete components, many of which are individual initiatives that consume an enormous amount of resources.

Or, I suppose, some might argue that entities like Medicaid are another instructive analogy in that regard.

Again, the ways that people adjust their reasoning to accommodate their ideological influences, to draw conclusions about the benefits and costs of centralization vs. decentralization when comparing defense spending and Medicaid, is as relevant, imo, as some abstracted outline of the relative merits of those two framings.

• Carlos Ungil says:

Actually the Manhattan Project is a textbook example of running parallel projects when you don’t know what might work. Maybe there were not 50 groups, but there were competing groups trying to obtain different fissionable materials using different techniques and designing different bombs. The end product was not *the* atomic bomb, but two of them: Fat Man was an implosion-type plutonium bomb and Little Boy was a gun-type uranium bomb.

• dhogaza says:

And the project developed and built uranium enrichment facilities in parallel based on three different approaches, each of which by 1944 was producing weapons-grade uranium. Slowly. Only one Little Boy was built, and it was dropped on Hiroshima with no prior testing – a combination of faith in the simple design and the scarcity and expense of sufficiently enriched uranium.

Then we had the reactors at Hanford creating plutonium for the Fat Man design.

So, yes, lots of parallelism, and as you say two atomic bombs, totally different in design (and fissionable material), fed by upstream parallel projects.

• Josh says:

Andrew –

> To put it another way, I doubt they would’ve successfully designed and built the atomic bomb by funding 50 competing research groups, with career development being the proximate goal for each group.

At least in the US, I think the combined trends towards lack of trust in government and a somewhat less evident but also significant trend towards loss of trust in scientific institutions (at least among some people in particular sectors of the ideological spectrum) are an underlying factor here. As such, I’m not sure how relevant the comparison to the Manhattan Project, from a very different time, really is.

• Anonymous says:

“trends towards lack of trust in government…As such, I’m not sure how relevant is the comparison to the Manhattan Project”.

The comparison is relevant as an example of how to achieve a goal that many people want to reach.

In terms of “trust in government”, the Manhattan project is a bad example all the way around. It was conducted in total secrecy, so the idea that its success reflects some sort of trust in government isn’t correct. There was a large pacifist movement in the US prior to the war, so a lot of people wouldn’t have simply “trusted the government” to create such a horrific weapon. Many of the scientists involved in the project probably wouldn’t have participated either, except for their fear that the Nazis might obtain it first.

I doubt there is any significant new trend to people not trusting government. The United States is here today because colonists didn’t trust the Crown to treat them fairly.

• Joshua says:

Anonymous –

> The comparison is relevant as an example of how to achieve a goal that many people want to reach.

Not possible. I’m immune from that pattern of behavior. :-)

> In terms of “trust in government”, the Manhattan project is a bad example all the way around. It was conducted in total secrecy, so the idea that its success reflects some sort of trust in government isn’t correct.

Fair enough.

But my larger point is that trying to extrapolate from the Manhattan project is problematic, and I guess you’d agree with that part at least?

> There was a large pacifist movement in the US prior to the war, so a lot of people wouldn’t have simply “trusted the government” to create such a horrific weapon.

Well, by pointing to relative trends I didn’t mean to imply anything absolute.

> Many of the scientists involved in the project probably wouldn’t have participated either, except for their fear that the Nazis might obtain it first.

Yes, I think that the immediacy of the risks at hand play a huge role in how people think these issues through. That immediacy of risk is what can prompt huge shifts along the centralized vs. decentralized divide. A great example is the rapidity with which the government recently approved “socialist”-seeming policies that would have been considered non-starters just a few months ago

> I doubt there is any significant new trend to people not trusting government.

Well, I’ve seen data showing a small overall trend (Google Gauchat, in particular). But that small overall trend hides a more significant shift where “conservatives” have gone from having more “trust in science” to having less “trust in science” as compared to “liberals.” In particular, that shift among “conservatives” reflects a more dramatic movement from a significant chunk – largely those aligned with the religious right – along with the identification of the scientific establishment as a collection of latte-drinking, limp-wristed, out-of-touch academics who don’t have the common sense they were born with, who have “no skin in the game” and are only out to scam the public to line their pockets with federal funding. The recent shift in the mainstream cohort of the Republican Party on climate change policy, from supportive of such policy to more or less uniformly aligned against it, might also reflect that trend.

> The United States is here today because colonists didn’t trust the Crown to treat them fairly.

Lol! I wasn’t thinking that far back. I was more thinking in the context of WW II to now.

• Martha (Smith) says:

Joshua said, “But my larger point is that trying to extrapolate from the Manhattan project is problematic, and I guess you’d agree with that part at least?”

Trying to extrapolate from n = 1 (or even n = 3, as dhogaza commented) is always problematic.

• Joshua says:

Maryha –

> Trying to extrapolate from n = 1 (or even n = 3, as dhogaza commented) is always problematic.

If I ever learned how to be concise, that’s how I might put it.

• Martha (Smith) says:

“Maryha –”

I’ve often been called Mary, Margaret, or other women’s names starting with M, but never before “Maryha”!

• Joshua says:

My creative keyboarding pushes to new horizons daily.

• “Am I reading this right? A physicist is complaining about spherical cows? Is nothing sacred?”

If you’re referring to my post: Yes, it was painful to write, for exactly this reason! I’d worry that they’ll change the locks to keep me out of the Physics Department, but since we’re working remotely I’m safe!

• “Forcing a Manhattan Project on everything … leads to monoculture and stifled innovation”

To be clear: I’m not suggesting this is a good approach for everything, or even most science. The main point of my post is to suggest that for the type of pandemic modeling that (I think) should be useful, a coordinated, large-scale implementation that dives into the details is warranted. Continuing with the (imperfect) Manhattan Project analogy: for discovering the basic science of nuclear physics, one doesn’t want a Manhattan Project — one wants Rutherford, Curie, Einstein, etc. But if one wants an atom bomb, one makes a Manhattan Project.

5. Mendel says:

Germany, March 3:

In response to an initiative by Charité which is aimed at tackling the current pandemic crisis, the Federal Ministry of Education and Research (BMBF) has allocated €150 million for a new academic research network. The network, which aims to pool all relevant expertise and support COVID-19-related research from across Germany, will be coordinated by Charité.

The novel coronavirus (SARS-CoV-2) represents Germany’s biggest post-war challenge. The task of curbing the spread of the virus requires a nationwide, unified approach, as does the task of ensuring optimal medical care. Increasing the knowledge base, and doing so quickly, is of the utmost importance – and requires an effective support structure.

This will be provided by the new research network, an alliance which aims to include all of Germany’s university hospitals. In response to an initiative by Prof. Dr. Heyo K. Kroemer (Chief Executive Officer of Charité – Universitätsmedizin Berlin) and Prof. Dr. Christian Drosten (Director of the Institute of Virology on Campus Charité Mitte), the BMBF has agreed to provide €150 million in funding. For the first time, the nation will respond to a crisis by systematically collating and consolidating both the response plans and the diagnostic and treatment strategies of its various university hospitals and other stakeholders from within the health care system.

Prof. Kroemer explains: “This is the first time that the Federal Government has explicitly appealed to the whole of German academic medicine with an initiative aimed at tackling a situation with major health and social ramifications. In contrast to the usual competitive funding process which awards funding to individual universities, this approach is an attempt to manage the current crisis by utilizing the nation’s potential in a coordinated manner. We have never seen anything like this in Germany before.”

Turn the bazaar into a cathedral?

• Phil says:

The Manhattan Project cost about $2 billion in 1945 dollars, equivalent to about $20 billion today (according to Wikipedia). So this German effort is about 1% the scale of the Manhattan Project. Of course, it’s early days, and if the effort gets additional funding and goes on for a few years it could reach, I dunno, 5% of Manhattan Project scale.

I’m not saying it’s not a good use of money or that it’s not enough, I have no idea, I’m just saying it really isn’t comparable to the Manhattan Project.

• Manhattan project took 4 years. The NIH budget is about $40B annually. So the NIH runs about 8 Manhattan projects a year worth of resources.
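The arithmetic behind that comparison, using only the round figures quoted in this thread (a back-of-envelope sketch, not an audit of actual budgets):

```python
# Round figures as quoted in the thread above (inflation-adjusted).
manhattan_total = 20e9        # ~$20 billion in today's dollars
manhattan_years = 4
nih_annual = 40e9             # ~$40 billion NIH budget per year

manhattan_per_year = manhattan_total / manhattan_years  # $5 billion/year
print(nih_annual / manhattan_per_year)  # -> 8.0
```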

• Phil says:

Yeah, but the Manhattan Project was just one part of the war effort…and a small one, at that. On the one hand, it was the second-most-expensive _single_ project of the US war effort, so, yeah, pretty big. But the U.S. war effort as a whole spent more every ten days (Wikipedia again). That’s kind of astonishing to me. One of the things I had marveled at was the leadership’s willingness to spend huge amounts of money, put tens of thousands of people to work on it, create Oak Ridge and Hanford and Los Alamos, all just because some scientists were pretty sure but not positive that they would be able to solve all the problems. But in the context of the war as a whole, it’s a lot easier to make sense of: if you’re spending that much every ten days maybe you think “well, it only has a slim chance of success, but why not take a chance?” Maybe the bigger cost than the economic one was taking all those bright people off other parts of the war effort.

I have diverged somewhat from my point, which is: NIH may be spending multiple Manhattan Projects every year, but not to achieve just one thing. If we are taking this very questionable analogy seriously, NIH is more like fighting a war on health problems, whereas a single all-out coronavirus-specific effort would be like the Manhattan Project. Lots of reasons not to take that analogy too seriously.

• Mendel says:

My point was that there is planned coordination and cooperation, not just competition.
This is set up so that specifically the university hospitals who are about to run all of these clinical trials (and also advise the other hospitals around them) make their work count.

6. Anoneuoid says:

Well before Covid-19, I’ve been thinking that our present structure of academic science, mainly made up of lots of small research groups, is highly flawed—it’s what you’d create if you didn’t really care about the answers to questions, but rather cared about keeping people busy.

Everything the government funds eventually becomes a jobs program that may have incidental benefits. That is just what they do.

7. Bill Spight says:

Let’s not forget the DARPA model. Don’t fund research directly. Offer a huge prize for a successful result and let the people who want to get it spend their own money.

8. The Vole says:

>this cathedral/bazaar thing: that’s just a shorthand.
>
I thought this article was going to be about the Italian bishops and Pakistani mullahs marshalling for a religious exemption to lockdown/social distancing.

9. Josh says:

It seems that the rough outline of this debate is reflected in the changes between the Trump and Obama administrations’ attitudes towards the input of scientists in decision-making more generally, and with the approach towards integrating scientific advice in addressing pandemics more specifically (see the kerfuffle about what happened with the The Global Health Security and Biodefense Unit).

I think that for scientists this is mostly a scientific or logistical debate, but that debate is, imo, dwarfed by the associated ideological identity warfare. To resolve the scientific debate in the larger world, the ideological antipathy will have to be addressed.

10. Michael Nelson says:

A key characteristic of the bazaar that’s often overlooked is that it’s a market–we talk about its autonomy and variety, and its redundancy and selfishness, but these all arise from and are perpetuated by a market economy. So, for example, (many) researchers compete for scarce funds by signaling that they have a flashy new theory or a silver bullet intervention because it pays (in terms of media praise, advancement, funding). Expecting researchers to not do this, merely because it wastes resources and deters progress and is misleading, is like expecting advertisers not to come up with flashy ads with exaggerated claims for the same reasons.

Scientists hoard data because scarcity increases value, and because the critiques and corrections that come from making their data public can be a real cost–a tax. Expecting scientists to risk their “payment” just because it will advance human knowledge, in the current environment, is like expecting people to pay taxes just because the world needs roads and safety nets and armies. Some will, many won’t, and the latter will keep more of their money.

Why maintain this system? First, by default: No one designed it explicitly, but now that we have it, redesigning it seems like a massive and contentious effort. Second, we prefer to think, like capitalists, that it precludes some central authority from dictating which questions are important. In reality, though, we have many decentralized authorities dictating: government funds selectively, journals are biased toward tradition, fields have their gatekeepers and tenure committees, and the media chases trends.

So we have no good reasons, other than that of the defensive democrat: it’s the worst system in the world, except for all the others.

• Brent Hutto says:

To paraphrase Mark Twain…

Our system of federally funded research assures American citizens of getting the science they deserve, and gives it to them good and hard.

• These are good points, but they show how the market structure is broken… We’re paying people for… flashy ads!

Pay people to put together and curate public datasets. This is really the reason why we have govt funding, because information is a public good. And “tax” people for making flashy claims of excess certainty that don’t stand up to robust scrutiny: randomized audits and either penalties to further funding or reclaim grant money associated with research that doesn’t meet acceptable standards.

• Carlos Ungil says:

How happy are you so far with the epidemiological data put together by the people paid by the federal government to do so?

• Andrew says:

Carlos,

Sure, but that’s part of the point. If the government went all Manhattan Project and drafted me and a bunch of other statisticians, I think they’d get better statistics.

On the other hand, what if the government drafted the wrong people, and had their statistics program headed up by renowned Ivy League professors whose statistical judgment I don’t trust. Then I think they’d be in big trouble!

• Carlos Ungil says:

The government has several thousands of people working in something called Center for Disease Control and Prevention. Would the Manhattan Project have happened as it did if there had been already a Center for Atomic Bombs and Nuclear Weapons at the time?

• Brent Hutto says:

As Stephen Jay Gould was fond of pointing out regarding natural selection, it’s easy to fall into the trap of confusing a contingent fact of history with the inevitable outcome of a plan. The federal government succeeded in making the United States the first nation to build and deploy two atom bombs. That is a contingent fact of history. It does not mean the Manhattan Project was inevitably going to succeed.

The federal government has now, and has had for years, all sorts of plans for preventing and dealing with infectious disease outbreaks. All those plans accomplished exactly nothing when faced with COVID-19 (what was the Mike Tyson quote about everybody having a plan up until he punched them in the face?).

Scaling up the agencies tasked with that planning into something that consumes a significant fraction of the country’s output for years on end will not inevitably solve the problem any more than the Manhattan Project was going to inevitably beat the Germans and Russians to the Bomb.

• David J. Littleboy says:

“the trap of confusing a contingent fact of history with the inevitable outcome of a plan.”

True in general, but atomic bombs are way easier to make than most people think. No one who has tried to make such a bomb has ever failed on their first attempt. (According to Teller in a New Yorker article years ago, although that was before North Korea started trying.)

• Renzo Alves says:

Susan (Fiske) may have ideas that you don’t like and preferences that you (and I) don’t necessarily share, but I don’t think she has faked data. Correct me if I’m wrong.

• Andrew says:

Renzo:

I agree. I have no reason to think that Susan Fiske has faked any data, and would not want to imply otherwise. I can see the confusion because in the above comment, I’d written, “Marc Hauser, Brian Wansink, and Susan Fiske,” and Hauser and Wansink have been accused of faking data. To clarify, I rewrote the comment, removing the names, because that’s not relevant to my point.

I haven’t been super happy, though I freely admit that I’m probably ignorant of what they’ve done. But this is part of what makes me rather unhappy. How come the best place to get easy-to-analyze data is covidtracking.com, which was created by The Atlantic, a popular magazine?

If you drill into the CDC web site a bit it does seem that they put together some useful information. So to that extent I’m pretty happy.

I’m definitely NOT happy that they don’t have an off-the-shelf system with some rather sophisticated compartment models that can be fit to their data on a rather largish (at least a few-thousand-core) compute cluster *every day* to provide essentially real-time tracking with uncertainty. I mean, Fauci specifically has been worried about a respiratory pandemic for decades, right? So the CDC should basically have the same attitude. I think over the last 20 years they should have been working on such generic monitoring and statistical-fitting technology.
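For a sense of scale, here is what the core of an “off-the-shelf” compartment model looks like: a minimal SIR sketch in pure Python. The parameter values (beta = 0.3, gamma = 0.1, i.e. R0 = 3) are illustrative assumptions, not CDC estimates, and a real system would fit such parameters to surveillance data with uncertainty.

```python
# Minimal SIR compartment model integrated with fixed-step Euler updates.
# All parameter values are illustrative assumptions, not fitted estimates.

def sir_step(s, i, r, beta, gamma, dt):
    """Advance the susceptible/infected/recovered fractions by one step."""
    new_inf = beta * s * i * dt  # fraction newly infected this step
    new_rec = gamma * i * dt     # fraction newly recovered this step
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate_sir(beta=0.3, gamma=0.1, i0=1e-4, days=200, steps_per_day=10):
    """Return one (S, I, R) sample per day over the epidemic."""
    s, i, r = 1.0 - i0, i0, 0.0
    dt = 1.0 / steps_per_day
    traj = [(s, i, r)]
    for _ in range(days * steps_per_day):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        traj.append((s, i, r))
    return traj[::steps_per_day]  # one sample per day

traj = simulate_sir()
peak_day = max(range(len(traj)), key=lambda d: traj[d][1])
print("peak infected fraction occurs on day", peak_day)
```

The point of the comment is not this toy dynamics but the surrounding infrastructure: refitting such a model daily against incoming data, at scale, with honest uncertainty intervals.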

I’m definitely not happy that they bumbled creating a test. I’m definitely not happy that they aren’t doing randomized door-to-door sampling in cooperation with the Census Bureau to do PCR and antibody testing. I know the antibody testing has needed some time to come online, but hey, we have a pandemic costing us TRILLIONS of dollars, and so far New York and Dade County seem to have more initiative than the CDC.

So, it’s a mixed bag in my opinion. CDC gets like a C on an A-F scale.

• Joshua says:

We don’t even have a good system for counting the dead (which doesn’t prevent people, even famous epidemiologists, from making confident estimates of the mortality rate):

> In the early weeks of the coronavirus epidemic, the United States recorded an estimated 15,400 excess deaths, nearly two times as many as were publicly attributed to covid-19 at the time, according to an analysis of federal data conducted for The Washington Post by a research team led by the Yale School of Public Health.

https://www.washingtonpost.com/investigations/2020/04/27/covid-19-death-toll-undercounted/?arc404=true

That said, I think it’s inevitable that evaluating this pandemic can only be done effectively after an extended period of time.

So what is the reasonable expectation of where we should be now, especially since public health services have been starved of funding for a decade, and we have a federal government and a high percentage of legislators with a systematic antipathy toward scientific institutions in general and federal institutions of public health in particular?

• Mendel says:

The problem is that the reliable death count has never needed to be *fast*.

• Joshua says:

The operation was a success, the patient died?

• Andrew says:

Michael:

Your point is interesting, about the bazaar being a marketplace, which results in inefficiencies relating to the commercial motive to hide or even lie about information. (Consider a real bazaar, which indeed is not open source and where you have to put in a lot of effort to haggle, to obscure your own motives and resources, and maybe even to gather business intelligence.) This all seems to imply a bit of an internal contradiction in the whole bazaar-as-open-source idea. The Cathedral and the Bazaar essay has been discussed so much, I suspect this point has been raised before?

My point here is not “Gotcha! The bazaar ain’t perfect,” but rather a specific question about whether “the bazaar” is the best analogy or conceptual model for open-source sharing. Consider that much of open-source sharing is done by students and faculty who aren’t selling any products.

• Ben says:

It seems like a mistake that by this point we haven’t talked about private industry’s role in this.

If I Google “Gilead Research Budget” (https://statmodeling.stat.columbia.edu/2020/04/26/controversy-regarding-the-effectiveness-of-remdesivir/), it says \$9.1 billion.

I Googled NSF and the CDC, and they’re both individually around \$10 billion. I guess all the institutions here do more than Covid19, so I think it’s fair to put them on the same order of magnitude. And that’s one healthcare company. I guess NIH is around \$30 billion, and a lot of medical research is DoD now too, I think. My point is that some members of the bazaar are not small — they might as well be cathedrals on their own.

New York apparently hired McKinsey to handle the “science” in reopening the economy: https://www.cnbc.com/2020/04/16/new-york-taps-mckinsey-to-develop-a-trump-proof-economic-reopening-plan.html

Asking the bazaar vs. cathedral question assumes that it has been decided that decision making in an epidemic would be led by public research institutions at all. I really think it should be handled that way, personally, but that may not be the reality.

Like, this is probably a gross simplification of the process, but what if a company finds and patents a vaccine? Sure, it seems obvious everyone should get it, but suppose that won’t be possible for some reason; then presumably we’ll just end up debating trolley problems or something.

• Dale Lehman says:

The news release is deeply troubling to me. McKinsey was hired to come up with a Trump-proof plan? If they knew the answer, why waste the money hiring a high-priced consultant to tell them? Whatever happened to basing the plan on sound analysis? I’m not naive, the world has worked this way for a while – I’m just disgusted to see it openly pronounced.

• Brent Hutto says:

I may be off by a day or two, but I don’t think so. To my recollection, two days after New York hired McKinsey to “Trump-proof” their “science,” they added about 3,000-4,000 “probable” deaths that had not previously been counted.

Of course temporal sequence does not prove causation by any means.

• Carlos Ungil says:

I think you are confusing New York State (who hired McKinsey) with New York City (who reports probable deaths in addition to lab-confirmed deaths). Or maybe the state is also providing probable deaths? I’d be interested in a reference in that case.

• Martha (Smith) says:

Ben said,
“Asking the bazaar vs. cathedral question assumes that it has been decided that decision making in an epidemic would be led by public research institutions at all.”

Or maybe it’s not “bazaar vs cathedral ” — maybe it’s another analogy, or maybe something like a bazaar of cathedrals? (After all, religions have their faithful, but also proselytize the faithful of other religions.)

• Joshua says:

> maybe it’s another analogy, or maybe something like a bazaar of cathedrals?

I like that.

Or maybe a market of bizarre cathedrals?

• jim says:

“New York apparently hired McKinsey to handle the “science” in reopening the economy”

It’ll be interesting to see how that goes. Their report on the homeless situation in Seattle is a laugher. I guess some people get jobs because of their connections rather than their knowledge.

• Michael Nelson says:

It seems to me that the metaphor of the bazaar applies well to both the open source (and hacker) community, and to the scientific community. The open-source market has a very low cost of entry, the investment of time and effort is self-regulated, and the compensation is not monetary–people are motivated by opportunities to join communities, earn the admiration of peers, enrich themselves and be part of something larger than themselves.

Formal science culture is a market with a high cost of entry, regulates minimum time and effort (excepting tenured and emeritus) and how those are to be measured, and the compensation is actual money–either directly through salaries, or via the fact that money is required to do science, and popular attention usually comes with more money.

These are two different kinds of markets–a swap meet and a mall, respectively. By this model, if we can lower the cost of entry to science–through the Internet, edX, free analytic software, etc.–people will be able to afford to provide work that is intrinsically motivated, less structured, more diffuse, and ultimately more productive for answering important questions. You might think that such informality would blunt efforts against COVID-19, but watch next time someone finds a vulnerability in a popular program, as millions of people snap into action.

Ironically, this is how Western science started out: the cost of entry was low, the rewards social and intrinsic, because most scientists were rich white guys with self-funded labs, excitedly exchanging letters. We may now be moving back to a democratic version of science that is less corruptible not because the scientists are rich but because the science is inexpensive.

• Thanks for this more in depth analysis. I agree with most of this. The *type* of market we have today for science is almost a “market for lemons”. The asymmetry of information between the “seller” (who peddles articles without accompanying datasets for example) and the “buyers” (who are forced to treat most science as of unknown quality) is significant. Those who want to produce “high quality” science are driven out because grants are gotten for “quick, low quality, flashy” stuff.

The type of market we NEED for science is a combination of pay people to collect data, and then an intrinsically motivated group that combines and analyzes data in myriad ways. Can’t happen fast enough in my opinion, and it’s starting to be an important part of biology already (with a lot of “push back” from people calling the analyzers “data leeches” or some such crap). It would be good to provide SOME financial benefits from pursuing the combining/analyzing type of science though. I think we should offer low-level long-term grants to people who can make a decent argument that they are pursuing public goods by publishing analyses of public information.

• Joshua says:

Michael –

Your descriptions look like idealized ones to me – targeting the same viewpoint even if the descriptions go in different directions on each side.

> It seems to me that the metaphor of the bazaar applies well to both the open source (and hacker) community, and to the scientific community. The open-source market has a very low cost of entry, the investment of time and effort is self-regulated, and the compensation is not monetary–people are motivated by opportunities to join communities, earn the admiration of peers, enrich themselves and be part of something larger than themselves.

> Formal science culture is a market with a high cost of entry, regulates minimum time and effort (excepting tenured and emeritus) and how those are to be measured, and the compensation is actual money–either directly through salaries, or via the fact that money is required to do science, and popular attention usually comes with more money.

> These are two different kinds of markets–a swap meet and a mall, respectively. By this model, if we can lower the cost of entry to science–through the Internet, edX, free analytic software, etc.–people will be able to afford to provide work that is intrinsically motivated, less structured, more diffuse, and ultimately more productive for answering important questions. You might think that such informality would blunt efforts against COVID-19, but watch next time someone finds a vulnerability in a popular program, as millions of people snap into action.

> Ironically, this is how Western science started out: the cost of entry was low, the rewards social and intrinsic, because most scientists were rich white guys with self-funded labs, excitedly exchanging letters. We may now be moving back to a democratic version of science that is less corruptible not because the scientists are rich but because the science is inexpensive.

• Joshua says:

Sure wish I knew why I keep posting inadvertently on this website:

> The open-source market has a very low cost of entry, the investment of time and effort is self-regulated, and the compensation is not monetary–people are motivated by opportunities to join communities, earn the admiration of peers, enrich themselves and be part of something larger than themselves.

I think that there is a variety of motivations among open-source participants. I think that some are pursuing any of a variety of ideological or personal goals that lie outside the motivations you list.

> Formal science culture is a market with a high cost of entry, regulates minimum time and effort (excepting tenured and emeritus) and how those are to be measured, and the compensation is actual money–either directly through salaries, or via the fact that money is required to do science, and popular attention usually comes with more money.

The “cost” of entry can be somewhat subjective. For example, it might be easier for me to pursue a well-worn career pathway than to forge an independent trajectory, particularly if my study is subsidized. And I think your description of “compensation” is too tightly constricted, as it excludes those more admirable goals that you assigned to open-source participants. Certainly there are people in the formal science culture who are motivated to join a community, earn the admiration of peers, or just develop expertise for the sake of developing expertise.

> You might think that such informality would blunt efforts against COVID-19, but watch next time someone finds a vulnerability in a popular program, as millions of people snap into action.

I think I can point to a lot of downsides to much of the informal entry into the efforts against COVID-19 (in other words, the political impact of armchair epidemiologists).

I’m not saying that I think they should be banished, or even that we’d be better off without them. Just that things ain’t quite as binary as you seem to describe.

11. Funko says:

I think in the UK there was an effort to get realistic contact data, see eg here:

https://www.sciencedirect.com/science/article/pii/S1755436518300306

(There are probably better sources, this was the first I found)

• Thanks — I hadn’t seen this. This goes together with the 2018 Vespignani et al. paper I cite to imply that decent real-world data collection is clearly feasible, and could therefore inform models and forecasts.

12. Dalton says:

He’s got a point. In particular, the IHME model (https://covid19.healthdata.org/united-states-of-america) that the White House seems to put a lot of stock in seems pretty simplistic. They’re basically just fitting daily deaths to a Gaussian curve with random effects for daily departures from the curve. (Which, weirdly, reminds me of something that is done in the fisheries world: https://www.nrcresearchpress.com/doi/abs/10.1139/cjfas-2015-0318). The crazy thing to me about this model is that the future prediction is entirely driven by the shape of the Normal curve. Which maybe makes sense for China, where they were able to do a truly complete lockdown, but when you look at their model for Italy (https://covid19.healthdata.org/italy) it sure seems like the model is way overoptimistic about a steep decline in deaths, and that optimism is entirely driven by their choice of curve. (Please, please, somebody correct me if I’m wrong). Why not a Weibull curve or a Gamma? They say in their paper that they tried other curves (without saying what they were), but I think they chose their model when most countries were still on the ascending limb. If you only have the left side of the distribution, pretty much every curve could be Gaussian…
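Dalton’s description can be made concrete with a toy version of the fit: model cumulative deaths as a scaled Gaussian CDF (an “ERF curve”) and estimate its three parameters from data covering only the ascending limb. The data and grid below are synthetic and illustrative; this is a sketch of the idea, not IHME’s actual model or data.

```python
import math

# Toy version of an ERF-style curve fit: cumulative deaths modeled as a
# scaled Gaussian CDF, fit by least-squares grid search. The "observed"
# series is synthetic, not real death counts.

def erf_curve(t, total, peak, spread):
    """Cumulative deaths at day t if daily deaths follow a Gaussian curve."""
    return total * 0.5 * (1.0 + math.erf((t - peak) / (spread * math.sqrt(2))))

# Synthetic observations covering only the ascending limb (days 0-24).
true_total, true_peak, true_spread = 1000.0, 30.0, 8.0
obs = [erf_curve(t, true_total, true_peak, true_spread) for t in range(25)]

def fit(obs):
    """Least-squares grid search over (total deaths, peak day, spread)."""
    best, best_err = None, float("inf")
    for total in range(500, 3001, 100):
        for peak in range(20, 61, 2):
            for spread in range(4, 17, 2):
                err = sum((erf_curve(t, total, peak, spread) - y) ** 2
                          for t, y in enumerate(obs))
                if err < best_err:
                    best, best_err = (total, peak, spread), err
    return best

print(fit(obs))
```

On this clean synthetic series the search recovers the generating parameters, but with noisy data covering only the left limb, many (total, peak, spread) combinations fit nearly equally well, which is exactly the identification worry Dalton raises: the forecast of the descent is driven almost entirely by the assumed curve shape.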

• Andrew says:

Dalton:

Recently we discussed a different curve-fitting model. The IHME model seems a bit too authority-driven and not so much exploratory or science-driving.

• Joshua says:

Andrew –

I’m guessing you’re too busy to listen to a podcast, but I found this discussion about modeling (by modelers) pretty interesting. Not terribly profound, especially for you as compared to me… but one of the more interesting parts is when they compare the IHME approach to the Imperial College approach (toward the beginning of the podcast, at around 6:30 in).

Daniel Kaufman, Eric Winsberg, and John Symons discuss the complexities of modeling the coronavirus’s spread.

https://meaningoflife.tv/videos/42714?in=47:31&out=57:31

• Joshua says:

Sameera –

Did you look at how they calculate the infection rates in Spain?

• Joshua says:

He says that if we “extrapolate data,” 39% of New Yorkers are infected… meaning that 7.5 million are infected in NY. And he’s extrapolating infection rates based on the number (some 649k) of people who tested positive – not for *antibodies,* but for *the virus.* That’s how he suggests that we “follow the science.”

Talks about Spain… 204,178 positive tests out of 930,000 total tests: 22% of all who were tested were positive in Spain. Spain has 47 million people, so “that equates to about 10 million cases in Spain”… And then, because 21,282 died out of 47 million, he calculates a 0.05% chance of dying from COVID if you’re a citizen of Spain.

It’s an embarrassment. Either that, or it’s deliberate disinformation.
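For concreteness, the Spain arithmetic described above can be reproduced in a few lines, with comments marking where it goes wrong; all inputs are the figures quoted in the comment.

```python
# Reproducing the back-of-the-envelope Spain arithmetic described above,
# with comments marking where it goes wrong. Inputs are the quoted figures.

positives = 204_178      # positive tests
tests = 930_000          # total tests
population = 47_000_000  # population of Spain
deaths = 21_282          # reported COVID deaths at the time

# Step 1: test positivity among those who were tested.
positivity = positives / tests                    # about 22%

# Step 2 (the flawed leap): treat the tested as a random sample of Spain
# and extrapolate positivity to the whole population.
implied_infected = positivity * population        # about 10.3 million

# Step 3 (mismatched denominator): divide deaths by the whole population
# and present it as an individual's "chance of dying."
naive_death_rate = deaths / population            # about 0.045%

# The problems: people who get tested are far likelier to be infected than
# the average resident, so Step 2 overstates infections; and Step 3 is a
# population death rate, not an infection fatality rate.
ifr_under_that_extrapolation = deaths / implied_infected  # about 0.21%

print(round(positivity, 3), round(naive_death_rate, 5))
```

Even taking the extrapolation at face value, deaths over *implied infections* (about 0.21%) is a different quantity from deaths over the whole population (about 0.045%), which is precisely the numerator/denominator confusion the thread points to.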

• Joshua says:

Sorry – he got that 39% based on 256,272 positive tests out of 649,325 tests in NY State.

And from that he “extrapolated” to 7.5 million infected in NY State. I didn’t describe what he did correctly, but describing it correctly doesn’t make it look any better. He actually says 39% of the state’s residents were infected by the time he taped the video.

• Yes, the numerator and denominator calculations have been astounding to me. Thanks for responding.

Catch up with you tomorrow probably. Cheerios

• Joshua says:

Unreal –

These guys are on Laura Ingraham as I type.

I’m actually stunned.

I’m barely math literate and I know nothing about the medical science, and it was immediately obvious to me how bad that video was.

• jrkrideau says:

video removed!

• Joshua says:

In case anyone’s interested in seeing how bad it was, here’s a partial transcript. It actually skips over the worst part which is in the beginning. I think the first part was deliberately omitted – as the person who posted the transcript is an economist and in theory math literate.

https://www.aier.org/article/open-up-society-now-say-dr-dan-erickson-and-dr-artin-massihi/

• Joshua says:

And now they’re going to claim censorship because it was removed! It’s a self-sealing process. Promote dangerous misinformation and then claim censorship when it’s removed.

13. jim says:

Ha, missed the prepper reference earlier.

Working in the mining industry you get the interesting experience of meeting the Gold Bugs, the financial version of preppers. When the world goes to hell, gold will be the currency of the day, right?

Funny that the article claims they’ve been vindicated. I doubt any of them have enough Purell in their bunkers to survive the bioapocalypse.

14. Carlos Ungil says:

“The Secret Group of Scientists and Billionaires Pushing a Manhattan Project for Covid-19”

“These scientists and their backers describe their work as a lockdown-era Manhattan Project, a nod to the World War II group of scientists who helped develop the atomic bomb. This time around, the scientists are marshaling brains and money to distill unorthodox ideas gleaned from around the globe.

“They call themselves Scientists to Stop Covid-19, and they include chemical biologists, an immunobiologist, a neurobiologist, a chronobiologist, an oncologist, a gastroenterologist, an epidemiologist and a nuclear scientist. Of the scientists at the center of the project, biologist Michael Rosbash, a 2017 Nobel Prize winner, said, “There’s no question that I’m the least qualified.”

https://www.wsj.com/articles/the-secret-group-of-scientists-and-billionaires-pushing-trump-on-a-covid-19-plan-11587998993

https://s.wsj.net/public/resources/documents/Scientists_to_Stop_COVID19_2020_04_23_FINAL.pdf

15. Wide collaborations are taking place e.g. https://covid.cd2h.org/ (which includes the OHDSI Covid19 group) and in my country, Canada’s COVID-19 immunity task force and I am sure many more.

But everyone pretty much has to use and put up with current resources and skills – habits are hard to break, and most are in the habit of advancing their own career and interests.

The one big break is that people working from home have more flexibility.

16. bbis says:

The framing of the discussion did set up a fairly well-defined set of responses, although there have been some interesting comments, particularly about how a bazaar can tend toward a market in lemons, and about how so many cathedrals were built over many years, often with significant changes in plan as people changed. Watching how people are adapting to social distancing, it seems that many are struggling to understand and learn new rules for social interaction and behaviour. The social lessons learned while growing up are not useful at this point, and without clearly defined social rules being modeled consistently around them, people are struggling to create the rules for themselves.

To get back to the point of the post: managing the coordination of small groups and individuals in times of crisis would be easier if there were already individuals in the community who played such a role. They would need to be connected, respected, and willing (and able) to provide recognition and support for people or groups that had put aside their own interests to contribute to a major project. One reason there do not seem to be enough people in such a role is that the way science expanded with the growth of universities after WW2 and the baby boom left a shortage of role models for people to follow. The wave of new, young people pushing science in new directions displaced the prior generation and essentially eliminated such a role. In the absence of enough people to provide the coordination, new people need to step in, learn the role, and create the expectation that others should be willing to contribute and to include at least some of this public-service work as part of a normal career. So perhaps the metaphor of the bazaar vs. the cathedral is not so apt. A sports metaphor (or simile, really) may be more useful: it should be more like a soccer team, where everyone is expected to contribute in multiple roles as needed.

It does seem the issue is the need for a change in how people define their roles in different segments of society, with a strong emphasis on increasing flexibility, being willing to contribute more broadly with reduced expectations of direct reward, and a willingness to engage more widely across groups.