“Have we been thinking about the pandemic wrong? The effect of population structure on transmission”

Philippe Lemoine writes:

I [Lemoine] just published a blog post in which I explore what impact population structure might have on the transmission of an infectious disease such as COVID-19, which I thought might be of interest to you and your readers. It’s admittedly speculative, but I like to think it’s the kind of speculation that might be fruitful. Perhaps of particular interest to you is my discussion of how, if the population has the sort of structure my simulations assume, it would bias the estimates of causal effects of interventions. This illustrates a point I made before, such as in my discussion of Chernozhukov et al. (2021), namely that any study that purports to estimate the causal effect of interventions must — implicitly or explicitly — assume a model of the transmission process, which makes this tricky because I don’t think we understand it very well. My hope is that it will encourage more discussion of the effect population structure might have on transmission, a topic which I think has been under-explored, although other people have mentioned the sort of possibility I explore in my post before. I’m copying the summary of the post below.

– Standard epidemiological models predict that, in the absence of behavioral changes, the epidemic should continue to grow until herd immunity has been reached and the dynamic of the epidemic is determined by people’s behavior.
– However, during the COVID-19 pandemic, there have been plenty of cases where the effective reproduction number of the pandemic underwent large fluctuations that, as far as we can tell, can’t be explained by behavioral changes.
– While everybody admits that other factors, such as meteorological variables, can also affect transmission, it doesn’t look as though they can explain the large fluctuations of the effective reproduction number that often took place in the absence of any behavioral changes.
– I argue that, while standard epidemiological models, which assume a homogeneous or quasi-homogeneous mixing population, can’t make sense of those fluctuations, they can be explained by population structure.
– I show with simulations that, if the population can be divided into networks of quasi-homogeneous mixing populations that are internally well-connected but only loosely connected to each other, the effective reproduction number can undergo large fluctuations even in the absence of behavioral changes.
– I argue that, while there is no evidence that can bear directly on this hypothesis, it could explain several phenomena beyond the cyclical nature of the pandemic and the disconnect between transmission and behavior (why the transmission advantage of variants is so variable, why waves are correlated across regions, why even places with a high prevalence of immunity can experience large waves) that are difficult to explain within the traditional modeling framework.
– If the population has that kind of structure, then some of the quantities we have been obsessing over during the pandemic, such as the effective reproduction number and the herd immunity threshold, are essentially meaningless at the aggregate level.
– Moreover, in the presence of complex population structure, the methods that have been used to estimate the impact of non-pharmaceutical interventions are totally unreliable. Thus, even if this hypothesis turned out to be false, we should regard many widespread claims about the pandemic with the utmost suspicion since we have good reasons to think it might be true.
– I conclude that we should try to find data about the characteristics of the networks on which the virus is spreading and make sure that we have such data when the next pandemic hits so that modeling can properly take population structure into account.

I agree with Lemoine that we don’t understand well what is going on with covid, or with epidemics more generally. And, as many people have recognized, there are several difficulties here, including data problems (most notably, not knowing who has covid, or even the rates of exposure among different groups); gaps in our scientific understanding regarding modes of transmission, mutations, etc.; and, as Trisha Greenhalgh has discussed, a lack of integration of data analysis with substantive theory.

All these are concerns, even without getting to the problems of overconfident public health authorities, turf-protecting academic or quasi-academic organizations, ignorant-but-well-connected pundits, idiotic government officials, covid deniers, and trolls. It’s easy to focus on all the bad guys out there, but even in a world where people are acting with intelligence, common sense, and good faith, we’d have big gaps in our understanding.

Lemoine makes the point that the spread of coronavirus along the social network represents another important area of uncertainty in our understanding. That makes sense, and I like that he approaches this problem using simulation. The one thing I don’t really buy—but maybe it doesn’t matter for his simulation—is Lemoine’s statement that fluctuations in the epidemic’s spread “as far as we can tell, can’t be explained by behavioral changes.” I mean, sure, we can’t tell, but behaviors change a lot, and it seems clear that even small changes in behavior can have big effects on transmission. The reason this might not matter so much in the modeling is that it can be hard to distinguish between a person changing his or her behavior over time and a correlation of different people’s behaviors with their positions in the transmission network. Either way, you have variation in behavior and susceptibility that is interacting with the spread of the disease.

In his post, Lemoine gives several examples of countries and states where the recorded number of infections went up for no apparent reason, or where you might expect it to have increased exponentially but it didn’t. One way to think about this is to suppose the epidemic is moving through different parts of the network and reaching pockets where it will travel faster or slower. As noted above, this could be explained by some mixture of variation across people and variation over time (that is, changing behaviors). It makes sense that we shouldn’t try to explain this behavior using the crude categories of exponential growth and herd immunity. I’m not sure where this leads us going forward, but in any case I like this approach of looking carefully at data, not just to fit models but to uncover anomalies that aren’t explained by existing models.
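Just to make the idea concrete, here is a minimal sketch of the kind of simulation Lemoine describes. This is my own toy version, not his code, and all the parameter values are made up: a stochastic SIR process on a bunch of clusters that mix freely internally but leak only a little transmission to each other. Behavior is held fixed throughout (constant contact rates), yet the aggregate effective reproduction number drifts around as the epidemic burns through one cluster and stumbles into the next.

```python
import numpy as np

rng = np.random.default_rng(1)

n_clusters, cluster_size = 20, 5000
beta_within, beta_between = 0.25, 0.0005   # per-day transmission rates (made-up values)
gamma = 0.10                               # recovery rate
T = 400                                    # days

S = np.full(n_clusters, cluster_size, dtype=float)
I = np.zeros(n_clusters); I[0] = 10.0; S[0] -= 10
R = np.zeros(n_clusters)

new_cases = []
for t in range(T):
    I_total = I.sum()
    # force of infection: mostly within-cluster, with a small leak from everywhere else
    foi = beta_within * I / cluster_size \
        + beta_between * (I_total - I) / (n_clusters * cluster_size)
    infections = rng.binomial(S.astype(int), 1 - np.exp(-foi))
    recoveries = rng.binomial(I.astype(int), 1 - np.exp(-gamma))
    S -= infections
    I += infections - recoveries
    R += recoveries
    new_cases.append(infections.sum())

new_cases = np.array(new_cases)
# crude empirical R_t: cases now vs. cases one generation (about 1/gamma days) ago
gen = int(1 / gamma)
rt = new_cases[gen:] / np.maximum(new_cases[:-gen], 1)
print("aggregate R_t over time (sampled every 30 days):", np.round(rt[::30], 2))
```

Nothing about behavior changes in this toy world, but the aggregate curve still shows waves, lulls, and restarts, which is the basic point.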

Scabies!

When talking about junk science, or bad research, or fraud, or mixtures of these things (recall Clarke’s Law), we often talk about the role of scientific journals in promoting bad work (with Psychological Science and PNAS being notorious examples), being defensive and slow to admit problems (Lancet, more than once), playing the business-as-usual game (lots of examples), and flat-out refusing to issue corrections even when problems are pointed out to them (lots more examples).

Another problem is propaganda journals. I’m not talking now about medical and public health journals which occasionally play a propagandist role by not looking hard at papers that push a liberal political agenda, nor am I talking about traditional propaganda such as CIA-funded journals or commercial propaganda such as the apparently all-too-common practice of research claims being dictated by pharmaceutical companies, etc. Nor am I talking about so-called predatory journals that exist not to push an agenda, scientific or otherwise, but just to make money by conning authors into paying for publication and conning promotion committees into counting those publications. Rather, here I’m talking about entire journals created to push some pseudoscience, with today’s example being vaccine denial.

My first thought when seeing an entire journal devoted to a fake science was annoyance, but that initial reaction is really missing the point. Yes, it’s annoying that there are people out there pushing ESP, ghosts, climate change denial, vaccine denial, evolution denial, etc.—and that’s not even getting into the various noxious forms of historical denial—but I guess the real problem here is not the existence of the journals so much as that there are enough people who are confused—passionately confused—that they go to the trouble of putting together these journals in the first place. Conditional on such people existing, yeah, sure, they should definitely set up journals. It’s a free country! Also, apparently misguided theories sometimes do contain truths, so maybe these journals play a potentially valuable role as safe spaces where true believers can share their theories and maybe turn up something useful for the rest of us, if only by accident.

Anyway, my main point here is not whether these journals should exist, or how many such journals there should be, or whether a journal on a fake science like ESP is better or worse than a journal on some popular but unverifiable religious belief, or whether I’m violating the spirit of St. Feyerabend and being “patronizing” and “punching down” by even suggesting that there are people out there who have M.D.’s or Ph.D.’s after their name but don’t know what they’re doing . . . whatever.

No, my main point is that often in our discussions of published research incompetence or misconduct (again, recall Clarke’s Law), we hope or demand or expect or wish that the journal that published the bad thing will remedy the problem. But when it’s junk science published in a junk journal, there’s no hope! Pretty much the entire reason for these journals is to push an agenda and to provide a place for people who push that agenda to publish their papers, so of course that’s what they do. To expect a journal of fake science to retract a paper because it does poor science would be like . . . oh, I dunno, it would be like the House of Lords expelling some Lord Thistlethwaite type for being too snobby.

How much does this bother me? It depends on the field. Arguably, even the junk science on astrology or ghosts is doing some damage, at least to the extent that it degrades the reputation of science more generally and takes resources away from more worthy projects such as Game of Thrones. Junk science such as the critical positivity ratio or himmicanes is a bit worse, as these are shiny objects that attract not just feature stories but also can fool respected science writers. I’d give a break to cold fusion and speculative cancer cures, at least at first, because they fall in the “big if true” category.

Then there’s vaccine denial, which seems much worse to me, as it’s killed hundreds of thousands of people already. I can’t quite say that the researchers who publish vaccine denial papers are immoral, exactly, as many of them might be sincere in their beliefs—they can’t all be political hacks or irresponsible media hounds, and statistics is hard. But, as we all know, bad deeds can be done by people who don’t understand what they’re doing.

I thought about all this after reading this Retraction Watch article about a university lecturer in New Zealand who published a fatally-flawed paper claiming a negative effect of vaccines in a journal published by a vaccine denial group (who, for better or worse, don’t have access to the same high-quality web design as the Hoover-adjacent Panda organization). What was interesting here is that the lecturer’s employer got involved:

Robert Scragg, the head of the School of Population Health at the University of Auckland, where Thornley is employed, took the unusual step of demanding the retraction of the Thornley and Brock paper.

In an email to the institution, which was posted on Twitter, Scragg wrote that the article — in a “low ranking non-indexed journal” — includes a “major error” and called on them to:

immediately publicly retract their article because of the anxiety it is creating for expectant parents and those planning to have a child.

The authors took the hint and retracted their paper.

It’s hard to know what to think about this. On one hand, I don’t like the idea of research being policed by one’s employer. On the other hand, the author teaches in the epidemiology department, and it’s pretty ridiculous to have an epidemiologist pushing anti-vaccine propaganda (or pushing incompetent anti-vaccine work). Everybody makes mistakes, but mistakes that amount to pseudoscience talking points that are killing people, that’s really bad. Then again, academic freedom. Then again, the head of the school has academic freedom too . . .

Here’s the researcher’s posted self-description:

Avid reader, cyclist, teaches applied statistics, uses R . . . hey, he’s practically talking about me! On the differences side, I’m not much of a photographer and I eat tons of carbs and sugar.

Googling this guy led to this news article from 31 August 2020 where he was quoted as saying:

“Looking at the science, I believe an effective vaccine is a very remote possibility for COVID-19,” said Dr Thornley.

“We know that the world record in terms of vaccine development is four years – that’s with mumps, from the Merck company. We know that most of them take 10 years – they need to be carefully evaluated. These early vaccines that are coming out of Russia I’m very sceptical they’ve been really well tested in long-term studies.”

He said discussions with vaccinologists he knows have led him to be sceptical.

“Hanging out for a vaccine is not an option… a fantasy, in my view.”

I guess the next step after saying the vaccine won’t happen is to deny the vaccine’s effectiveness and to make up stories about its hazards. Kind of funny that he coordinates a course on Evidence Based Practice. Maybe the Hoover Institution could hire him to head up a new biostatistics department?

Kelloggs in the house!

We got twin Kelloggs for you today.

First is Greg, who came up on the blog the other day. Greg’s a biologist, and he posted a video debunking some bogus covid science claims that are floating around on the internet. Actually, Greg has a whole Youtube channel with bioinformatics explainers. So that’s cool. Dude can sing too.

And this got me wondering what David’s been up to. A quick google turned up this article with Cary Moskovitz, “Inquiry-Based Writing in the Laboratory Course.” Hey! I don’t teach laboratory courses, exactly, but I think statistical methods is close enough. Moskovitz and Kellogg share some insights:

The inadequacies of the traditional lab, in which students go through the motions of laboratory work in a series of “cookbook” activities, have been widely recognized. . . . However, educational reform has yet to overcome the inertia of the traditional school “lab report.” Even in inquiry-based settings, such lab reports remain largely inauthentic and make-work affairs, involving little actual communication beyond the implicit argument for a good grade. Real scientific writing, on the other hand, involves a variety of rhetorical functions including persuading skeptical audiences, constructing interpretive frameworks, refuting the work of others, and so forth.

To that list, I’d add “refuting one’s own work” and “exploring the conditions for one’s work to be refuted,” but I agree with the general point.

They also emphasize the importance of defining the audience for a writing task. I agree with that too.

From the other direction we can consider the mess that is scientific writing in the real world. I think Moskovitz and Kellogg are talking about undergraduate students, but things are just as bad, although in different ways, among Ph.D. students and postdoctoral trainees, where there can be huge pressure to publish in so-called top journals, a pressure that can overwhelm all other goals of communication and learning. We need to work on this at all levels, from the empty “five-paragraph essay” on King Lear in high school to empty journal articles produced by working academics, and everything in between.

So thanks, Kelloggs, for giving us lots to think about today. And happy Thanksgiving!

Just another day at the sausage factory . . . It’s just funny how regression discontinuity analyses routinely produce these ridiculous graphs and the authors and journals don’t even seem to notice.

Ethan Steinberg sends in these delightful regression discontinuity graphs:

He reports that the graphs come from a study from the University of Pennsylvania trying to estimate the effect of a care management intervention for high risk patients.

I’m skeptical for the usual reasons. They also make the classic error of comparing the statistical significance of different comparisons.

Nothing special here, just another day at the sausage factory. People are taught this is what to do, it spits out publishable results, everybody’s happy.

P.S. Just to explain some more: Sometimes people frame this as a problem of trying to figure out the correct specification for the running variable: should it be linear, or polynomial, or locally linear, whatever? But I don’t think this is the right way to think of things. I’d say that the original sin of the “regression discontinuity” framing is the idea that there’s some sort of purity of the natural experiment so that the analysis should be performed only conditioning on the running variable. Actually, these are observational studies, and there can be all sorts of differences between exposed and unexposed cases. It’s poor statistical practice to take the existence of a discontinuity and use this to not adjust for other pre-treatment predictors. Once you frame the problem as an observational study, it should be clear that the running variable is just one of many potential adjustment factors. Yes, it can be an important factor because of the lack of overlap between exposed and control groups in that variable. But it’s not the only factor, and it’s a weird circumstance of the way that certain statistical methods have been developed and taught that researchers so often seem to act as if it is.
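Here’s a small simulation sketch of what I mean, using fake data (nothing to do with the Penn study). The target is the same jump at the cutoff in both fits, but the observational-study framing, which adjusts for the running variable along with another pre-treatment predictor rather than fitting separate flexible curves in the running variable alone, gives a much less noisy estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_once(n=500, true_effect=0.2):
    # fake data: running variable x with cutoff at 0, plus another pre-treatment predictor z
    x = rng.uniform(-1, 1, n)
    z = rng.normal(size=n)
    treat = (x >= 0).astype(float)
    y = 0.5 * x + 0.8 * z + true_effect * treat + rng.normal(size=n)

    def jump_estimate(X):
        return np.linalg.lstsq(X, y, rcond=None)[0][1]   # coefficient on treat

    # (a) running variable only, separate quadratics on each side of the cutoff
    Xa = np.column_stack([np.ones(n), treat, x, x**2, treat * x, treat * x**2])
    # (b) observational-study framing: the jump, the running variable, and the other predictor
    Xb = np.column_stack([np.ones(n), treat, x, z])
    return jump_estimate(Xa), jump_estimate(Xb)

ests = np.array([simulate_once() for _ in range(2000)])
print("sd of estimated jump, quadratics on each side:", ests[:, 0].std().round(3))
print("sd of estimated jump, adjusting for x and z:  ", ests[:, 1].std().round(3))
```

Both estimates are centered on the true effect in this toy setup; the point is just that the running-variable-only analysis with flexible curves throws away precision that the other pre-treatment information could supply, which is how you end up mining noise.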

P.P.S. A student asked for more detail regarding my concerns with certain regression discontinuity analyses. If you want more of my thinking on the topic, you can google *statmodeling “regression discontinuity”* to see some posts, for example here and here, and you can look at my articles with Imbens and Zelizer. We also discuss the topic in a less critical and more constructive, how-to-do-it perspective in section 21.3 of Regression and Other Stories.

P.P.P.S. Commenter Sam looked at the above-linked article more carefully and reports that their main analysis adjusts for other pre-treatment predictors and also includes alternative specifications. I still think it’s nuts for them to use this quadratic model, and even more nuts to include different curves on the two sides of the boundary—this just seems like noise mining to me—but they did also do analyses that adjusted for other pre-treatment variables. I don’t find those analyses convincing either, but that’s another story. Maybe the general point here is that it takes a lot for this sort of statistical analysis to be convincing, especially when no pattern is apparent in the raw data.

Not so easy to estimate the effects of school mask requirements . . . how to think about this?

Ambarish Chandra writes:

I’m writing regarding the CDC study on school mask mandates from last month, which shows that counties with school mask requirements had lower pediatric covid cases than those without them.

As any social scientist knows, the study cannot possibly reveal causal effects due to the endogenous decision to adopt mask mandates by particular communities, and at specific times. Yet, the results of this study are very influential: they are repeatedly cited by the CDC director, posted prominently on the CDC website as “evidence” for school mask requirements and even repeated verbatim in other countries (e.g. Canada, Australia) to justify school mask mandates in those places. The CDC is not including any caveats about the study being correlational and not causal, and the media cannot tell the difference.

I attempted to replicate that CDC study, using the restricted CDC case data and matching it to data on school district mask mandates. It turns out that the CDC study was restricted to schools that opened by August 14, 2021, which are disproportionately in southern US states that were in the middle of the Delta wave when schools opened. I extended the study to use more recent data (up to the Oct 25 release) which also allows an examination of schools across the country. In short, the study does not hold up when extended. There is absolutely no difference between counties with school mask mandates and those without, in terms of either pediatric or adult cases. I’ve written all this up in this document, with figures included.

The document also has a short explanation of the pitfalls of using correlational studies. In the context of the pandemic, we should expect that private behaviour will be positively correlated with public health measures (e.g. places that impose mask mandates, school closures, or vaccine mandates are also likely to have citizens voluntarily reducing their social contacts, eating out less, improving ventilation etc), both cross-sectionally and temporally.

So, a simple correlation will likely overstate the true effect of PH measures. That’s why it is not surprising that the CDC study found a negative association between masks and pediatric cases—we would expect that even if the true causal effect is zero.

Regardless of how anyone feels about masks (and I have been on both sides of this fight over the past 18 months), we should not allow bad research to influence public policy. This study has been very influential and continues to shape policy. I am keen to publicize the fact that the CDC result is fragile, and completely disappears when extended to a larger sample.

If you have any thoughts on how I might get the word out about this critique, I would be grateful. I took a long shot and emailed David Leonhardt at the NYT, but that hasn’t worked out.

My reply:

1. It’s great to see people reacting to published studies and doing their own analyses. I haven’t looked carefully at this particular analysis so I can’t really comment on the details, but I’m supportive of the general idea of looking at the numbers. In particular, it’s hard to untangle the effects of policies that are done at the same time, especially when behaviors are changing too. We discussed this regarding covid responses here and here. I appreciate that the CDC report is explicit that it’s correlational (“this was an ecologic study, and causation cannot be inferred”), but then it’s funny that at the end they say, “School mask requirements, in combination with other prevention strategies, including COVID-19 vaccination, are critical to reduce the spread of COVID-19 in schools.” That seems a bit like what Palko calls a “cigarettes and cocaine” argument.

2. I don’t know if I buy Chandra’s argument that “a simple correlation will likely overstate the true effect.” I agree that mask mandates are likely to occur in concert with private masking behavior and general carefulness that could’ve happened anyway, but I can also picture some things going in the other direction: for example, a mask mandate can be applied at a time when there’s a concern or expectation of an increase in risk. I think it’s fine to say that the correlational estimate has issues; I’m not so comfortable with using it as a unidirectional bound. (A toy simulation at the end of this post illustrates the point.)

3. At the policy level, I see the appeal of a mask requirement in part because it addresses many parents’ and teachers’ concerns about health risks. Kids are required to go to school, so it’s important to do what it takes to make them and their parents and their teachers feel safe. All within reason, of course, but given what we know about the spread of covid, mask requirements seem to me to be within reason.

4. Regarding the last paragraph in Chandra’s note above: Yeah, I too have found it difficult to get the attention of reporters! My experience is that journalists are really busy people: they’ll contact me when they have questions or if they’re on a deadline, but otherwise they typically don’t seem to have the time to respond to email. That’s just the way it is. I guess if they spent too much time responding to messages, they wouldn’t have time for anything else. That’s true for me too, but I’ll respond to most things anyway because I appreciate the distraction.
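Regarding point 2, here is that toy simulation, with entirely made-up numbers. If mandates go together with unobserved voluntary caution, the naive mandate-versus-no-mandate comparison overstates the effect; if mandates tend to be adopted where risk is rising anyway, it understates the effect or even flips its sign:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000             # hypothetical counties
true_effect = -0.10  # assumed true effect of a mandate on case growth

def naive_estimate(confounder_sign):
    # unobserved factor u: voluntary caution (sign -1) or anticipated rising risk (sign +1)
    u = rng.normal(size=n)
    # counties with higher u are more likely to adopt a mandate
    mandate = (u + rng.normal(size=n) > 0).astype(float)
    growth = true_effect * mandate + confounder_sign * 0.2 * u + rng.normal(scale=0.1, size=n)
    return growth[mandate == 1].mean() - growth[mandate == 0].mean()

print("true effect:", true_effect)
print("naive estimate when mandates track voluntary caution:", round(naive_estimate(-1), 3))
print("naive estimate when mandates track rising risk:      ", round(naive_estimate(+1), 3))
```

The direction of the bias depends entirely on which unobserved story dominates, which is why I wouldn’t treat the correlational estimate as a bound in either direction.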

Smoking and Covid

Paul Kedrosky wrote:

This paper is getting passed around today, with its claim that there not only isn’t a causal relationship between smoking and COVID, but possibly a protective role. This sort of thing drives me crazy about pre-prints. If your data suggests a conclusion that runs counter to decades of prior work with better data — in this case, that there is a strong relationship between smoking and lower respiratory infections — you should always consider the possibility that your new data is crap, not that instead of x → y you have somehow proven y → x.

I replied: Sure, but it’s fine for them to post the research, right? People might be overreacting to it, but the article itself seems clear enough.

Kedrosky responded:

It’s fine to post the research, I guess, but I don’t see what we learn from it, other than that the underlying data is flawed. I would rather see a paper saying, “You want to know how bad China’s COVID data is? I can use it to prove statistically that cigarette smoking protects against lower respiratory infections”. That’s a better paper.

That makes sense, but you have to be open to the possibility that smokers are less susceptible to the disease, right?

P.S. I put this under the 6-month delay because, yeah, coronavirus is topical, but this smoking-and-coronavirus thing is not such a big deal. And in 6 months maybe people will want to read some retro coronavirus content. So here it is!

Nooooooooooooo (another hopeless sex ratio study)

Two people pointed me to this one:

In Malta, during the study period, there were 8914 live births (M/F 0.5139; 95% CI 0.514–0.524). There was a significant dip (χ2 = 5.1, p = 0.02) of the M/F ratio 4 months after DCG’s assassination to 0.4529. This translates to an estimated loss of 21 male live births.

In Ireland, during the study period, there were 102,368 live births (M/F = 0.5162; 95% CI 0.513–0.519). There was a significant dip (χ2 = 4.5, p = 0.03) of the M/F ratio 4 months after VG’s assassination to 0.5. This translates to an estimated loss of 72 male live births.

I have no problem with people reporting raw data and performing their statistical analyses, but I have a big big problem with this conclusion of the article:

Assassinations of investigative journalists may have important population mental health implications.

The trouble is that actual differences in Pr(girl) are very small, and observed differences are very noisy compared to actual differences.

Here’s a baseline. A 0.5 percentage point change in Pr(girl) is about the biggest you’ll ever get, outside of sex selection or major famines. The larger of the two analyses in this paper takes 102,000 births over two years and analyzes the monthly data. That’s 102000/24 = 4250 births per month. So the sampling standard deviation of Pr(girl) in a month will be sqrt((1/2)*(1/2)/4250) = 0.008, which is bigger than any real change you could expect to see.
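Here’s that arithmetic in one place, with the same round numbers:

```python
import numpy as np

births_per_month = 102_000 / 24                         # about 4250
sd_monthly = np.sqrt(0.5 * 0.5 / births_per_month)      # sampling sd of a monthly proportion
print(round(births_per_month), round(sd_monthly, 4))    # 4250, roughly 0.0077

# compare to about the largest real shift you'd ever expect, 0.5 percentage points
print("monthly noise relative to a 0.005 signal:", round(sd_monthly / 0.005, 2))
```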

Now at this point you might say: Why not put together a larger dataset over more years with data from more countries to get a big enough sample size? And my response would be: Sure, do it if you want, but don’t expect anything, as I think any real effects would be on the order of 0.1 or 0.01 percentage points, something like that. It’s just not plausible that this one piece of news would have such a big effect on behavior. Also, there’s other news happening at the same time: the performance of the national economy, releases of influential movies and pop songs, wars breaking out all over the world, refugee crises, economic shocks, etc., all sorts of local and national news. The whole thing makes no sense.

But . . . how did they get the statistical significance, the p-values of 0.02 and 0.03? That’s easy. They looked at 24 months and found the biggest change. Forking paths.
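A quick simulation makes the forking-paths point concrete. I’m using round numbers like those above and guessing at exactly how the chi-squared test was run (one month’s count against the overall rate), but the details don’t matter much: generate 24 months of births with nothing going on, pick the month with the biggest dip in the proportion of boys, and test that month. You get a “significant” dip a large fraction of the time.

```python
import numpy as np

rng = np.random.default_rng(3)
p_boy = 0.515      # roughly the long-run proportion of male births
n_month = 4250     # about 102,000 births over 24 months
n_sims = 5000

def chi2_vs_expected(k, n, p):
    # 1-df chi-squared statistic for one month's boy count against the overall rate
    exp_boys, exp_girls = n * p, n * (1 - p)
    return (k - exp_boys) ** 2 / exp_boys + ((n - k) - exp_girls) ** 2 / exp_girls

hits = 0
for _ in range(n_sims):
    boys = rng.binomial(n_month, p_boy, size=24)  # 24 months, constant probability
    worst = boys.min()                            # pick the biggest "dip", post hoc
    hits += chi2_vs_expected(worst, n_month, p_boy) > 3.84   # 5% critical value, 1 df
print("share of null datasets with a 'significant' dip somewhere:", hits / n_sims)
```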

Let me emphasize that “forking paths” is not my main criticism of this study. My main criticism is that it’s hopeless, they’re just using too noisy a measurement. The reason to bring up forking paths or p-hacking is to close the loop, to explain why what is offered as evidence is not good evidence at all.

I agree with the author of the above-linked article that the murder of investigative journalists is a horrible thing. I just don’t think anything is gained by trying to study the topic by linking it to a dataset of what are essentially random numbers.

Again, the trouble with using the sex ratio as an outcome is that (a) it has close to zero variation, and (b) you’re looking at random binary outcomes so the data are really noisy.

P.S. I think that, beyond basic statistical misunderstandings, there are a few more things going on here:

1. Social scientists are always looking for something to publish about, and journalists are always looking for material. So all sorts of crap gets out there, some of which makes its way all the way up the NPR/Gladwell/Ted ladder.

2. There’s an attitude—something that might be called a statistical ideology—that causal identification plus statistical significance equals discovery. This ideology is so strong that it can overwhelm people’s critical faculties. Remember Daniel Kahneman’s notorious statement, “You have no choice but to accept that the major conclusions of these studies are true.” Kahneman has since retracted that statement, and good for him—my point here is that the ideology was so strong that it was leading him into the inferential equivalent of stepping off a cliff because he believed that he had wings. I mention Kahneman here not because he had the worst case of it! It’s the opposite: he’d published influential research on researchers’ misunderstanding of statistics; if he can be blinded by ideology or received wisdom in this way, anyone can.

3. Sex-ratio stories in particular seem to have an appeal to people, I guess having to do with very basic gender essentialism, what I’ve called schoolyard evolutionary biology. The differences between boys and girls are so evident that it just seems to make sense to many people to suppose that sex ratios are correlated with so many things (as in the notorious claim promoted by Freakonomics that “good-looking parents are 36% more likely to have a baby daughter as their first child than a baby son”).

“Fast state, slow state: what’s the matter with Connecticut?”

A year ago, Janet Rosenbaum wrote:

I’m writing a paper that could appropriately be titled, “Fast state, slow state: what’s the matter with Connecticut?”

I don’t think the journal will go for that, and they will insist on a boring title, but I wanted to let you know.

The article finally appeared! It’s called “Timeliness of provisional United States mortality data releases during the COVID-19 pandemic: delays associated with electronic death registration system and weekly mortality.” So, yeah, no kidding about the boring title. The content’s good, though; that’s what’s important.

A quick fix in science communication: Switch from the present to the past tense. Here’s an example from New Zealand:

Somebody named Andrew (but not me) writes:

Just curious to see if you might have any reaction to this study, “Māori and Pacific people in New Zealand have a higher risk of hospitalisation for COVID-19,” published in the New Zealand Medical Journal.

Some of the numbers are being widely quoted in NZ as proof of differential health outcomes by ethnicity, but a sample of 120 hospitalisations seems kind of small to me, especially with the number of parameters they are looking at.

Re the “widely quoted” numbers: the 20-year age gap between Māori and European risk of hospitalisation seems to be generally standard now – e.g. in this from a generally excellent local journalist:

A study released in July by Te Pūnaha Matatini found that, even after accounting for the greater prevalence of underlying health conditions among Māori and Pasifika communities, the average Māori person has the same risk of hospitalisation from Covid-19 as a white person 20 years older. For Pasifika, the gap was 25 years.

It might be true, but it seems a bit of a stretch to be so confident based on a logistic regression using such a small sample. It’s an important topic here, and unfortunately quite politicised, so was interested in an impartial take.

I see my correspondent’s point regarding small sample size; on the other hand, you gotta report the data you see, and of course it’s good news that the number of cases was so low.

My quick solution to the science communication problem here is to change from the present tense to the past tense. So, instead of “Māori and Pacific people in New Zealand have a higher risk of hospitalisation,” the title of the article could be, “Māori and Pacific people in New Zealand had a higher risk of hospitalisation.”

There’s a common pattern in science writing to use the present tense to imply that you’ve discovered a universal truth. For example, “Beautiful parents have more daughters” or “Women are more likely to wear red or pink at peak fertility.” OK, those particular papers had other problems, but my point here is that at best these represented findings about some point in time and some place in the past.

Using the past tense in the titles of scientific reports won’t solve all our problems or even most of our problems or even many of our problems, but maybe it will be a useful start, in reminding authors as well as readers of the scope of their findings.

What can we learn from COVID burnout?

Burnout has become a central theme of the COVID-19 pandemic, impacting essentially everyone in different ways, from those who were, and continue to be, subject to endless, stressful “essential work,” to those in the relatively privileged position of being trapped at home for more than a year. While there is a pretty clear, clinical-ish definition of burnout from an occupational or psychological perspective, it’s important to attend to the different potential types of burnout and their implications for what we should be doing with our lives and careers. In this post, I want to take the opportunity to think about what can and should be learned from the experience of being ‘burned out’ as a working quantitative epidemiologist.

For many of us working in public health – or I’ll at least speak for myself – this experience has been destabilizing because it has shaken our faith in the meaningfulness of what we do, even as it has also become a painfully relevant and routine part of everyone’s lives. This is a moment where epidemiological data are presented as frequently and casually as the weather. But it’s also a time in which epidemiological models have often been wrong, where our tools and techniques have been drawn into intense and politicized scrutiny, and as individuals we have become characters in the never-ending culture wars.

While before the pandemic we might have been sometimes too confident in our ideas, models, and authority, many of us – again, I’ll speak for myself – are now working through a sort of crisis of confidence. Some of it just comes from the exhaustion of confronting an enormous crisis and being continually reminded of how difficult it is to make a meaningful impact on large-scale outcomes. This exhaustion will likely pass, and when it does, we need to focus on the very real needs this crisis has illuminated: to change, innovate, and respond more effectively to future ones.

The following three issues have dogged me – and continue to dog me – as I try to figure out what to do next. I don’t claim to have come up with all – if any – of these ideas, but I have found them to be among the more significant challenges I’m wrestling with as we stare into the murky post-crisis future:

  1. We are political actors whether or not we like it or want to be. This point is now so obvious and trite that it belongs alongside “all models are wrong” in the COVID-19 rhetorical hall of shame. But it is – I think – an existential question for those of us who work in this field. The limits of our ability to impact the covid crisis as individuals should turn our attention towards collective action in all its messy forms. We need to be able to think more broadly about ‘intervention’ than the action of the state or a quasi-governmental authority operating in a top-down fashion on populations. What this means pragmatically remains unclear to me, but I know that meaningful, sustained, and opinionated engagement with politics and social movements is the only path to long-term utility for the work we do. Otherwise, we can expect to see the same ‘shit rolls downhill’ pattern play out in the next crisis, as the benefits of the insights and innovations of public health and medicine are funnelled first to the wealthy and other privileged groups.
  2. Top-down approaches to interventions generally fail, and infectious disease epidemiology policy is not an exception. We can easily mistake the ability to impose a well thought-out intervention cleanly on a simulation model for a ground truth, a base case from which deviations represent failures. The problem with this is that when the real world is messier and more dysfunctional than the idealized one we imagine, it is a short leap to blaming individuals and groups for not getting with the program as we imagined it. I sometimes think of this as analogous to a problem I have with my daughter in the winter: If she would get her boots and coat on quickly, we wouldn’t be late for school essentially every morning. I find her inability to move at the speed I want extremely frustrating, especially on a freezing Michigan morning when she needs to wear eight layers of clothes. But the tendency to dawdle is so central to her existence – and to being a four-year-old – that there is no point in comparing our outcomes against those of an alternative universe in which she is an AM speed demon. And rather than asking why she can’t move more quickly, I should probably ask why I don’t start the process of getting ready sooner and make it a bit more fun for her. (But hey, I don’t want to think about that…)
  3. There are sharp limits to the tools of quantitative analysis in public health. We epidemiologists may have long prided ourselves on being one of the hard health sciences, in much the same way that economists and other quantitative social scientists have long been held up as the rigorous thought-leaders of social inquiry as compared to their more-qualitative peers. But just as the faith in economic models was at least partially responsible for our collision with the financial iceberg of 2007, we should ask what role quantitative epidemiology may have played in worsening or slowing progress in this crisis, in addition to all the good it has done. For example: What was the potential impact of overly optimistic transmission modeling projections earlier in the pandemic on the trajectory of infection and death? Is there a process for presenting modeling results and their uncertainty in a way that makes it harder to cherry-pick the most politically convenient ones? Should we be making public forecasts at all?

To me, these existential questions are frustrating and somewhat confounding, but they also reflect what is – I hope – a once-in-a-career opportunity to reassess the alignment between what I do and the impact I hope to have on the problems I want to address. While my insights are obviously specific to the realm of infectious disease epidemiology, I think they might carry over into the broader world of people who want to do good things – broadly construed – with the tools of the quantitative sciences.

0/0 = . . . 0? That’s Australian math, mate!

Tom Davies writes:

I looked down on stats when I was at university, and now it’s the only area of maths which is of any use to me.

And he points us to this amusing example:

What’s great about this story is that it is happening in a “faraway land” (as the Gremlins researcher might say), and so I have no idea who the good guys and bad guys are supposed to be. No need to be happy that the bad guys blew it one more time, or to be frustrated that the good guys dropped the ball. I’ve never heard of the Burnet Institute (or, for that matter, the @BurnetInstitute) or the article in question. The 0/0 thing does look fishy, though, so good to see people getting called out on this sort of thing.

P.S. The above title is a joke. American math is just as bad! (And that last link came directly from the U.S. government.)

How did the international public health establishment fail us on covid? By “explicitly privileging the bricks of RCT evidence over the odd-shaped dry stones of mechanistic evidence”

Peter Dorman points us to this brilliant article, “Miasmas, mental models and preventive public health: some philosophical reflections on science in the COVID-19 pandemic,” by health research scholar Trisha Greenhalgh, explaining what went wrong in the response to the coronavirus by British and American public health authorities.

Greenhalgh starts with the familiar (and true) statement that science proceeds through the interplay of theory and empirical evidence. Theory can’t stand alone, and empirical evidence in the human sciences is rarely enough on its own either. Indeed, if you combine experimental data with the standard rules of evidence (that is, acting as if statistically-significant comparisons represent real and persistent effects and as if non-statistically-significant comparisons represent zero effects), you can be worse off than had you never done your damn randomized trial in the first place.
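Here’s a little simulation of that last claim, with generic made-up numbers: a small trial of an intervention that has a real but modest effect, analyzed by the standard rules (report the estimate if it’s statistically significant, call it zero otherwise). Most of the time the trial reports “no effect,” and when it does reach significance, the estimate exaggerates the true effect by a factor of two or more:

```python
import numpy as np

rng = np.random.default_rng(11)
true_effect = 0.2    # modest real effect, in sd units (assumed)
n_per_arm = 50       # small trial
n_sims = 10_000

se = np.sqrt(2 / n_per_arm)                       # se of a difference in means, sd = 1
est = true_effect + rng.normal(scale=se, size=n_sims)
signif = np.abs(est / se) > 1.96

print("power:", signif.mean().round(2))
print("average estimate among significant results:", est[signif].mean().round(2),
      "(true effect is", true_effect, ")")
print("share of trials reported as 'no effect':", (1 - signif.mean()).round(2))
```

Act on those outputs at face value and you can indeed be worse off than if you had never run the trial.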

Greenhalgh writes that some of our key covid policy disasters were characterized by “ideological movements in the West [that] drew—eclectically—on statements made by scientists, especially the confident rejection by some members of the EBM movement of the hypothesis that facemasks reduce transmission.”

Her story with regard to covid and masks has four parts. First, the establishment happened to start with “an exclusively contact-and-droplet model” of transmission. That’s unfortunate, but mental models are unavoidable, and you have to start somewhere. The real problem came in the second step, which was to take a lack of relevant randomized studies on mask efficacy as implicit support to continue to downplay the threat of aerosol transmission. This was an avoidable error. (Not that I noticed it at the time! I was trusting the experts, just like so many other people were.) The error was compounded in the third step, which was to take the non-statistically-significant result from a single study, the Danmask trial (which according to Greenhalgh was “too small by an order of magnitude to test its main hypothesis” and also had various measurement problems), as evidence that masks do not work. Fourth, this (purportedly) evidence-based masks-don’t-work conclusion was buttressed by evidence-free speculation about reasons why masks might make things worse.
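To get a feel for the “too small by an order of magnitude” point, here’s a rough power calculation. The inputs are illustrative assumptions on my part, not the trial’s reported figures: a baseline infection risk of about 2 percent over the study period and a 20 percent relative reduction from wearing a mask.

```python
import numpy as np
from scipy.stats import norm

def power_two_proportions(p0, p1, n_per_arm, alpha=0.05):
    # normal-approximation power for a two-sided test comparing two proportions
    p_bar = (p0 + p1) / 2
    se_null = np.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = np.sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p0 - p1) - z_crit * se_null) / se_alt)

p0, p1 = 0.02, 0.016   # illustrative: 2% baseline risk, 20% relative reduction
for n in (2500, 25000, 100000):
    print(f"{n:>6} per arm -> power {power_two_proportions(p0, p1, n):.2f}")
```

With a few thousand participants per arm, a trial like this has power well under 50 percent to detect a modest real benefit, so a non-significant result tells you very little either way.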

Greenhalgh’s message is not that we need theory without evidence, or evidence without theory. Her message, informed by what seems to me is a very reasonable reading of the history and philosophy of science, is that theories (“mental models”) are in most cases necessary, and we should recognize them as such. We should use evidence where it is available, without acting as if our evidence, positive or negative, is stronger than it is.

All this sounds unobjectionable, but when you look at what happened—and is still happening—in the covid discourse of the past year and a half, you’ll see lots of contravention of these reasonable principles, with the errors coming not just from Hoover Institution hacks but also from the Centers for Disease Control and other respected government agencies. It might sound silly to say that people are making major decisions based on binary summaries of statistical significance from seriously flawed randomized studies, but that seems to be what’s happening. But, as Greenhalgh emphasizes, the problem is not just with the misunderstanding of what to do with statistical evidence; it’s also with the flawed mental model of droplet transmission that these people really really didn’t want to let go of.

And check out her killer conclusion:

While I [Greenhalgh] disagree with the scientists who reject the airborne theory of SARS-CoV-2 transmission and the evidence for the efficacy of facemasks, they should not be dismissed as ideologically motivated cranks. On the contrary, I believe their views are—for the most part—sincerely held and based on adherence to a particular set of principles and quality standards which make sense within a narrow but by no means discredited scientific paradigm. That acknowledged, scientists of all creeds and tribes should beware, in these fast-moving and troubled times, of the intellectual vices that tempt us to elide ideology with scientific hypothesis.

Well put. Remember how we said that honesty and transparency are not enuf? Bad statistical methods are a problem in part because they can empower frauds and cheaters, but they also can degrade the work of researchers who would like to do well. Slaves to some long-defunct etc etc. And it’s not just a concern for this particular example; my colleagues and I have argued that these problems arise with so-called evidence-based practice more generally. As I put it a few years ago, evidence-based medicine eats itself.

P.S. The problems with the public health establishment should not be taken to imply that we should trust anti-establishment sources. For all its flaws, the public health establishment is subject to democratic control and has the motivation to improve public health. They make mistakes and we can try to help them do better. There’s some anti-establishment stuff that’s apparently well funded and just horrible.

Found poetry 2021

The poem is called “COVID-19 and Flu Vaccination | Walgreens Immunization Services.pdf,” and here it is:

What happened was we went online to Walgreens to schedule our booster shots. When we printed out the receipt, lots of extra pages spewed out, and we were using these as scratch paper. And then one day I noticed the above string of code words that didn’t seem to go together. Or maybe they do go together in some way. I guess that’s the point of found poetry, to reveal connections that we hadn’t thought about.

Just to break the spell a bit, I’ll try to analyze what makes this poem seem so striking and absurd. What first jumps out is “Modern Slavery and Human Trafficking,” which is so horrible and doesn’t seem to fit in with anything else here, except I guess “Affiliate Program”? And then something about how “Do Not Sell My / Personal / Information” is broken into three lines, which deconstructs the message, in the same way that if you look at a word very carefully, letter by letter, it starts to fall apart in your mind. Finally, the juxtaposition of all this corporate-speak seems to speak poetically about the contradictions of the modern world, or of any world.

“Causal Impact of Masks, Policies, Behavior on Early Covid-19 Pandemic in the U.S”: Chernozhukov et al. respond to Lemoine’s critique

Victor Chernozhukov writes:

Two months ago your blog featured Philip Lemoine’s critique “Lockdowns, econometrics and the art of putting lipstick on a pig” of our paper “Causal Impact of Masks, Policies, Behavior on Early Covid-19 Pandemic in the US” (ArXiv, June 2020, published in Journal of Econometrics). The paper found mitigating effects of masks and personal behavior and could not rule out significant mitigation effects of various “shut down” policies. Over the last two months, we studied Lemoine’s critique, and we prepared a detailed response.

Although Lemoine’s critique appears ideologically driven and overly emotional, some of the key points are excellent and worth addressing. In particular, the sensitivity of our estimation results to (i) including “masks in public spaces” and (ii) updating the data seemed like important critiques and, therefore, we decided to analyze the updated data ourselves.

After analyzing the updated data, we find evidence that reinforces the conclusions reached in the original study. In particular, focusing on the first three points to keep this note short:

(1) Lemoine showed that replacing “masks for employees” (business mask mandates) by “masks in public spaces” (public mask mandates) changes the effect estimate from negative to slightly positive. This critique is an obvious mistake because dropping the “masks for employees” variable introduces a confounder bias in estimating the effect of “masks in public spaces.” When we include both “masks for employees only” and “masks in public spaces” in the regression, the estimated coefficients of both variables are substantially negative in the original data. Lemoine’s argument seems to be an obvious but honest mistake.

(2) The second main point of Lemoine’s critique is non-robustness of results with respect to the data update. However, Lemoine has not validated the new data. We find that the timing of the first mask mandate for Hawaii (and another state) is mis-coded in the updated data. After correcting this data mistake, the estimated coefficients of “masks for employees only” and “masks in public spaces” continue to be substantially negative. This critique is also an honest (though not obvious) mistake.

(3) Lemoine analyzed the updated data that kept the original sample period from March 7 to June 3, 2020. The negative effects of masks on case growth continue to hold when we extend the endpoint of the sample period to July 1, August 1, or September 1 (before the start of school season). With the extended data, the estimated coefficients of “masks in public spaces” range from −0.097 to −0.124 with standard errors of 0.02 ∼ 0.03 in Tables 5-7, and are roughly twice as large as those of “masks for employees only.” A preprint version of our paper was available in ArXiv in late May of 2020 and was submitted for publication shortly after, which is why we did not analyze either the updated data or the extended data in our original paper.

Response to other points raised and supporting details on (1)-(3) are given in the ArXiv paper.

It’s great that outsiders such as Lemoine can contribute to the discussion, and I think it’s great that Chernozhukov et al. replied. I’ll leave the details to all of you.
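Just to illustrate the general issue in point (1) with a toy example (this is generic omitted-variable bias, nothing to do with their actual data): when two policy variables are highly correlated, dropping one of them can drastically change the estimated coefficient on the other, so swapping one variable out for the other is not a clean robustness check. In the made-up example below, the second policy does nothing at all, yet it picks up a large apparent effect once its correlated sibling is dropped; in real data the bias could go in either direction, depending on the correlation structure.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# two highly correlated "policy" indicators (made-up data, made-up effects)
latent = rng.normal(size=n)
policy_a = (latent + rng.normal(scale=0.5, size=n) > 0).astype(float)
policy_b = (latent + rng.normal(scale=0.5, size=n) > 0).astype(float)

# in this toy world, policy A works and policy B does nothing at all
y = -0.30 * policy_a + 0.0 * policy_b + rng.normal(size=n)

def coef_on_b(X):
    return np.linalg.lstsq(X, y, rcond=None)[0][-1]   # coefficient on policy_b

with_a = np.column_stack([np.ones(n), policy_a, policy_b])
without_a = np.column_stack([np.ones(n), policy_b])
print("coefficient on B, adjusting for A:", round(coef_on_b(with_a), 3))     # near zero
print("coefficient on B, dropping A:     ", round(coef_on_b(without_a), 3))  # substantially negative
```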

Learning by confronting the contradictions in our statements/actions/beliefs (and how come the notorious Stanford covid contrarians can’t do it?)

The fundamental principle of von Neumann/Morgenstern decision theory is coherence. You can start with utilities and probabilities and deduce decision recommendations, or you can start with decisions and use these to deduce utilities and probabilities. More realistically, you can move back and forth: start with utilities and probabilities and deduce decision recommendations, then look at places where these recommendations conflict with your actual decisions or inclinations and explore your incoherence, ultimately leading to a change in assessed utilities/probabilities or a change in decision plans.

That is, decision theory is two things. In the immediate sense, it’s a tool for allowing you to make decisions under uncertainty, in tangled settings where human intuition can fail. At the next level, it’s a tool for identifying incoherence—contradictions between our beliefs and actions that might not be immediately clear but are implicit if we work out their implications.

How does this play out in practice? We typically don’t write down or otherwise specify utilities and probabilities—indeed, it would generally be impossible to do so, given the complexity of all that might happen in the world—but in any setting where we make a series of decisions or statements, we can try to derive what we can of our implicit utilities, and from there find incoherence in our attitudes and behaviors.
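A trivial but concrete example of starting with decisions and deducing probabilities: from the stakes someone accepts on a bet, you can back out the probability they must implicitly be assigning to the event, and then check that number against their other statements and actions. (The stakes below match the $34-versus-$100 covid wager discussed elsewhere on this blog, but the arithmetic is generic.)

```python
def implied_probability(win, lose):
    # break-even probability for someone who collects `win` if the event
    # happens and pays `lose` if it doesn't: p*win - (1-p)*lose = 0
    return lose / (win + lose)

# Example: collect $34 if weekly cases come in under 500,000, pay $100 otherwise.
p_min = implied_probability(win=34, lose=100)
print(f"accepting that side of the bet implies Pr(event) >= {p_min:.3f}")   # about 0.746

# Coherence check: if the same person elsewhere talks about the event as a coin
# flip, the stated belief and the accepted bet can't both reflect their views.
```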

Finding incoherence is not the ultimate goal here—we’re animals, not machines, and so of course we’ll be incoherent in all sorts of ways. Rather, the point in identifying incoherence is to be able to go back and find problems with our underlying assumptions.

We’ve discussed this many times over the years: in terms of conceptual errors (the freshman fallacy, the hot hand fallacy fallacy, the “what does not destroy my statistical significance makes me stronger” fallacy, and all the others), in terms of particular statistical models where people find errors in our work, which enables us to do better, and in terms of public forecasting errors such as overconfident election forecasts and sports betting odds (see here and here, among others).

As the saying goes, “a foolish consistency is the hobgoblin of little minds.” Again, consistency is not a goal, it’s a tool that allows us to investigate our assumptions.

And that brings us to the notorious Stanford covid contrarians. Mallory Harris tells the story in the school newspaper:

As a Stanford student, I [Harris] have appreciated the University’s unambiguous commitment to universal, basic COVID-19 precautions. Throughout this pandemic, Stanford has relied on evidence-based advice from faculty experts to prevent large outbreaks on campus and keep us safe from COVID. . . . As a student, it has been discouraging to witness multiple Stanford affiliates repeatedly leveraging the University’s name through national media appearances, policy advising and expert-witness testimony to attack the same measures being implemented on our campus. . . .

Stanford was one of the first universities in the country to require vaccines for students, faculty, and staff. For this practice, Professor Jay Bhattacharya singled out Stanford in an op-ed and characterized policies like ours as “ill-advised” and “unethical.” He also advised against vaccination for anyone who has had COVID, claiming (despite evidence to the contrary) that “it simply adds a risk, however small, without any benefit.” . . . At the same time, Bhattacharya recently cited Stanford’s vaccine requirement as a reason he feels comfortable returning to in-person instruction this fall.

The evening after the FDA granted full approval to the Pfizer vaccine, Bhattacharya appeared on Fox News to assert that the approval was too fast and that we lack sufficient data on safety and efficacy of the vaccine. These statements are debunked by Stanford Healthcare and the Stanford Center for Health Education’s Digital Medic Initiative, which affirm the COVID vaccines are safe, effective and well-researched. Indeed, Stanford Medicine helped lead Phase-3 trials for the Johnson and Johnson vaccine and pediatric trials of the Pfizer vaccine. Disturbingly, the latter trial was attacked on Fox News by [Hoover Institution fellow and former Stanford medical school professor Scott] Atlas, who baselessly accused researchers of violating medical ethics and characterized a clinical trial participant as “brainwashed” and “psychologically damaged.” . . .

Bhattacharya appeared on Fox News to discuss [a study that was later retracted] as evidence of dangers to children, likening masks to “child abuse.” These comments were never revisited after the paper’s retraction weeks later . . .

My point here is not that Bhattacharya, Atlas, etc., got some things wrong. We all get things wrong. It’s not even that they made mistakes that were consequential or potentially so. They’re in the arena, working in an important and controversial area, and in that case there’s always the risk of screwing up. Even if some of their errors were avoidable . . . we all make avoidable errors too, sometimes.

No, what I want to focus on here is that they keep missing the opportunity to learn from their mistakes. If you first advise against vaccination, then you turn around and cite a vaccine requirement as a reason to feel comfortable, then confront that incoherence. If you first say the vaccine approval was too fast, then you turn around and characterize the vaccine as “a miraculous development”; if you cite a study and it is then retracted; . . . these are opportunities to learn!

Pointing out these contradictions is not a game of Gotcha. It’s a chance to figure out what went wrong. It’s the scientific method. And when people don’t use their incoherences to learn what went wrong with their assumptions, then . . . well, I wouldn’t quite say that they’re not doing science—I guess that “doing science” is whatever scientists do, and real-world scientists often spend a lot of time and effort avoiding coming to terms with the contradictions in their worldviews—but I will say that they’re not being the best scientists they could be. This has nothing to do with Fox News or anything like that, it’s about a frustrating (to me) practice of missing a valuable opportunity to learn.

When I come across a reversal or contradiction in my own work, I treasure it. I look at it carefully and treat it as a valuable opportunity for discovery. Anomalies are how we learn; they’re the core of science, as discussed for example here and here. And that’s more special than any number of Ted talks and TV appearances.

The COVID wager: results are in

This post is by Phil Price, not Andrew.

Frequent readers of this blog will already know about the wager between me and a commenter called Anoneuoid. Would the number of new COVID cases in the U.S. in the seven days ending 10/5 be lower than 500,000? If yes, I pay him $34. If no, he pays me $100. The number of new cases in those seven days was around 590K, so he owes me $100.
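For readers curious about the arithmetic behind the asymmetric stakes, here is a minimal sketch of the break-even probabilities each side implicitly needed. The $34 and $100 stakes and the 500,000-case threshold come from the wager as described above; reading them as break-even probabilities is just one way to frame the bet, not something either party stated.

```python
# Minimal sketch: implied break-even probabilities for an asymmetric wager.
# Stakes ($34 vs. $100) are from the post; the break-even framing is illustrative.

def break_even_probability(stake_if_wrong, payout_if_right):
    """Probability of being right at which the bet has zero expected value."""
    return stake_if_wrong / (stake_if_wrong + payout_if_right)

# Anoneuoid risks $100 to win $34, so the bet only pays off in expectation if he
# believes P(cases < 500K) exceeds 100 / (100 + 34), roughly 0.75.
p_anon = break_even_probability(100, 34)

# Phil risks $34 to win $100, so he only needs P(cases >= 500K) above
# 34 / (34 + 100), roughly 0.25.
p_phil = break_even_probability(34, 100)

print(f"Anoneuoid's break-even P(cases < 500K): {p_anon:.3f}")
print(f"Phil's break-even P(cases >= 500K):     {p_phil:.3f}")
```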

Anoneuoid, please send the money to Andrew (in small, unmarked bills); Andrew, please post your preferred address in the comments. Please use the money to buy donuts for the Stan team or for some other Stan-supporting activity that isn’t too much hassle for you.

Pundits continue to push the white-men-are-dying story, even though the real change is occurring among women.

From the New York Times book review:

Over the last century, Americans’ life expectancy at birth has risen from 49 to 77. Yet in recent years, that rise has faltered. Among white people ages 45-54 — a time many view as the prime of life — deaths have risen. Especially vulnerable are white men without a four-year bachelor’s degree. Curiously, midlife deaths have not climbed in other rich countries, nor, for the most part, have they risen for American Hispanics or blacks.

And, now, the data:

See here for a link to lots more graphs.

If you want to talk about trends in death rates for middle-aged whites, it’s women, not men, whose death rates have been going up. But pundits just loooove talking about the problems of white men.

That said, the absolute death rate remains higher among men than women:

And I haven’t looked at the data from the past few years. So the story is complicated. Still, I think the whole blue-collar blues thing is overplayed.

From the New Yorker:

As Case and Deaton recount in their new book, “Deaths of Despair and the Future of Capitalism” (Princeton), they dug deeper into national vital statistics and compared rates of suicide with those of other causes of mortality. “To our astonishment, it was not only suicide that was rising among middle-aged whites; it was all deaths,” they write. . . . working-age white men and women without college degrees were dying from suicide, drug overdoses, and alcohol-related liver disease at such rates that, for three consecutive years, life expectancy for the U.S. population as a whole had fallen. . . .

Case and Deaton argue that the problem arises from the cumulative effect of a long economic stagnation and the way we as a nation have dealt with it. For the first few decades after the Second World War, per-capita U.S. economic growth averaged between two and three per cent a year. In the nineties, however, it dipped below two per cent. In the early two-thousands, it was less than one per cent. This past decade, it remained below 1.5 per cent.

Though their earnings still lag behind those of the white working class, life for this generation of people of color is better than it was for the last. Not so for whites without a college education. Among the men, median wages have not only flattened; they have declined since 1979. . . .

Again with the men! But, again, at least at the time when Case and Deaton published their influential paper, the death rates for middle-aged white men were declining in this country. It was only the women whose death rates were still increasing.

But this doesn’t fit the decline-of-the-white-male story, so nobody talks about it.

CDC as bad as Harvard? . . . no, but they could still do better

Commenter Kevin writes:

Consider this article on the CDC website:

“New CDC Study: Vaccination Offers Higher Protection than Previous COVID-19 Infection”

From that headline, you would think that those who have been vaccinated (but have never had Covid) have better immunity than those who have recovered from Covid (but have never been vaccinated). After all, it says right there that vaccination offers higher protection. And in fact this is how the media, government officials, corporate policy makers, university administrations, etc. have interpreted it. It’s the reason why vaccination mandates don’t have an exclusion for those who have previously recovered from Covid.

Yet if one reads the body of the article carefully, the study in question does not support the headline. BOTH groups of subjects in the study were people who had previously been infected and recovered from Covid. The study merely indicates that vaccination can provide ADDITIONAL short-term protection above and beyond the immunity acquired by recovering from Covid. It does NOT compare those who have been vaccinated (only) to those who have recovered from Covid (only).

Given that there are various studies showing that previous infection (alone) provides as good or better protection than vaccination (alone), and better in the case of the Delta variant, this article seems to be a deliberate attempt to mislead the public.

I have no idea if this is a deliberate attempt to mislead—people make lots of errors by accident—but . . . the press release is dated August 6! I’d think that someone during the past two months would’ve noticed the problem and informed the CDC press office.

It says, “Page last reviewed: August 6, 2021,” and maybe they have a policy of not correcting old press releases, and I can kind of understand why such a policy would generally make sense, but in this case I hope they can fix the headline.

On the plus side, the body of the press release seems just fine, and the study in question (with the accurate title, “Reduced Risk of Reinfection . . .”) appears to support the CDC’s recommendation, “If you have had COVID-19 before, please still get vaccinated.” So I don’t know that the misleading headline has policy implications, but, yeah, don’t write misleading headlines. And correct them when you learn about the error.

P.S. OK, I guess the CDC isn’t quite so bad as Harvard‘s Jesus story, where the untruths were not just in the headline but in the entire webpage, which was rotten to the core. So, in the spirit of the above, I added “? . . . no, but they could still do better” to the above headline. Had I not made that addition, I’d be as bad as the CDC!

Webinar: Towards responsible patient-level causal inference: taking uncertainty seriously

This post is by Eric.

We are resuming our Webinar series this Thursday with Uri Shalit from Technion. You can register here.

Abstract

A plethora of new methods for estimating patient-level causal effects have been proposed recently, focusing on what is technically known as (high-dimensional) conditional average treatment effects (CATE). The intended use of many of these methods is to inform human decision-makers about the probable outcomes of possible actions, for example, clinicians choosing among different medications for a patient. For such high-stakes decisions, it is crucial for any algorithm to responsibly convey a measure of uncertainty about its output, in order to enable informed decision making on the part of the human and to avoid catastrophic errors.

We will discuss recent work where we present new methods for conveying uncertainty in CATE estimation stemming from several distinct sources: (i) finite data, (ii) covariate shift, (iii) violations of the overlap assumption, and (iv) violations of the no-hidden-confounders assumption. We show how these measures of uncertainty can be used to responsibly decide when to defer decisions to experts and avoid unwarranted errors.
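To make the general idea concrete, here is a rough sketch of one simple way to attach uncertainty to CATE estimates and defer when it is large: a bootstrap ensemble of T-learners whose disagreement serves as a crude finite-data uncertainty measure, plus a propensity-based overlap check. This is not the speakers’ method; the toy data, model choices, and deferral thresholds below are all illustrative assumptions.

```python
# Illustrative sketch only: ensemble-based uncertainty for CATE estimates,
# with a simple "defer to expert" rule. Not the method from the talk;
# data, models, and thresholds are arbitrary assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy observational data: X covariates, t binary treatment, y outcome.
n, d = 2000, 5
X = rng.normal(size=(n, d))
propensity_true = 1 / (1 + np.exp(-X[:, 0]))   # treatment assignment depends on X[:, 0]
t = rng.binomial(1, propensity_true)
tau_true = 1.0 + X[:, 1]                        # heterogeneous treatment effect
y = X[:, 0] + tau_true * t + rng.normal(scale=1.0, size=n)

def fit_t_learner(X, t, y, seed):
    """Fit separate outcome models for treated and control on a bootstrap sample."""
    idx = rng.choice(len(y), size=len(y), replace=True)
    Xb, tb, yb = X[idx], t[idx], y[idx]
    m1 = GradientBoostingRegressor(random_state=seed).fit(Xb[tb == 1], yb[tb == 1])
    m0 = GradientBoostingRegressor(random_state=seed).fit(Xb[tb == 0], yb[tb == 0])
    return m0, m1

# Bootstrap ensemble: the spread of CATE predictions is a crude uncertainty
# measure (it captures finite-data uncertainty, not hidden confounding).
ensemble = [fit_t_learner(X, t, y, seed=s) for s in range(20)]
cate_draws = np.stack([m1.predict(X) - m0.predict(X) for m0, m1 in ensemble])
cate_mean, cate_sd = cate_draws.mean(axis=0), cate_draws.std(axis=0)

# Overlap check via an estimated propensity score: values near 0 or 1 mean
# the overlap assumption is shaky for that patient.
e_hat = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
poor_overlap = (e_hat < 0.05) | (e_hat > 0.95)

# Defer to a human expert when uncertainty is high or overlap is poor.
defer = (cate_sd > np.quantile(cate_sd, 0.9)) | poor_overlap
print(f"Mean estimated CATE: {cate_mean.mean():.2f}")
print(f"Deferring {defer.mean():.1%} of cases to an expert")
```

The deferral rule here is deliberately naive (a quantile cutoff on ensemble spread plus a propensity clip); the point is only that per-patient uncertainty, however estimated, can drive a decision about when the algorithm should abstain.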

This is joint work with Andrew Jesson, Sören Mindermann, and Yarin Gal of Oxford University.

About the speaker

Uri Shalit is an Assistant Professor in the Faculty of Industrial Engineering and Management at the Technion – Israel Institute of Technology. He received his Ph.D. in Machine Learning and Neural Computation from the Hebrew University in 2015. Prior to joining the Technion, Uri was a postdoctoral researcher at NYU working with Prof. David Sontag.

Uri’s research is currently focused on three subjects. The first is applying machine learning to the field of healthcare, especially in terms of providing physicians with decision support tools based on big health data. The second subject Uri is interested in is the intersection of machine learning and causal inference, especially the problem of learning individual-level effects. Finally, Uri is working on bringing ideas from causal inference into the field of machine learning, focusing on problems in robust learning, transfer learning, and interpretability.

The video is available here.