When speculating about causes of trends in mortality rates: (a) make sure that what you’re trying to explain has actually been happening, and (b) be clear where your data end and your speculations begin.

A reporter writes:

I’d be very interested in getting your take on this recent paper. I am immensely skeptical of it. That’s not to say many Trump supporters aren’t racist! But we’re now going to claim that this entire rise in all-cause mortality can be attributed to the false sense of lost status? So so so so so skeptical.

You’re cited, and the headline takeaway is about perceived racialized threat to social status. But threat to social status isn’t measured — % of GOP voteshare is taken as a straightforward proxy of this. But doesn’t voteshare % jump around for a million reasons, often in reaction to the most recent election?

I took a look. I don’t see how they can say “For these reasons (and for the sake of parsimony), like Case and Deaton (2017), our starting premise is to examine as a singular phenomenon; the rise in national mortality rates of working-age white men and women.” Just look at figure 2C here. They cite this paper but they don’t seem to get the point that the rate among middle-aged men was going down, not up, from 2005-2015. This is important because much of the decline-of-status discussion centers on men.


Also, see here (which links to an unpublished report with tons more graphs). Some lines go up and some lines go down. “For the sake of parsimony” just doesn’t cut it here. Later in the paper they write that the rise in white mortality “is more accentuated in women than in men.” But “more accentuated” seems wrong. According to the statistics, the mortality rate among 45-54-year-old non-Hispanic white men was declining from 2005-2015.

This is a big problem in social science: lots of effort expended to explain some phenomenon, without it being clear exactly what is being explained. So you have to be careful about statements such as, “A valid causal story must explain something that is occurring widely among whites and also explain why it is not occurring among blacks.” I don’t think that kind of monocausal thinking is helpful.

The comparisons by education group are tricky because average education levels have been increasing over time. That’s not to say the authors should not break things down by education group, just that it’s tricky.
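The compositional issue is easy to demonstrate with a toy calculation (all numbers here are hypothetical, chosen only for illustration): as education expands, the people remaining in a “no college” group become more selected, so that group’s average mortality rate can rise even when no individual’s risk rises.

```python
# Toy illustration of a compositional effect (all numbers hypothetical).
# Two latent risk types; over time, lower-risk people move out of the
# "no college" group as education expands, so the group's average
# mortality can rise even though no individual's risk changes.

def group_rate(share_high_risk, rate_high, rate_low):
    """Average mortality rate of a group that mixes two risk types."""
    return share_high_risk * rate_high + (1 - share_high_risk) * rate_low

# Period 1: 40% of the no-college group is high risk (deaths per 100k).
rate_1 = group_rate(0.40, rate_high=600, rate_low=200)

# Period 2: education expands; low-risk people disproportionately leave
# for the college group, so the remainder is 60% high risk.
# Individual risks (600 and 200) are unchanged.
rate_2 = group_rate(0.60, rate_high=600, rate_low=200)

print(rate_1, rate_2)  # group average rises: 360.0 -> 440.0
```

So a rising rate in the low-education group is consistent with everyone’s risk staying flat, which is exactly why breakdowns by education need care.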

Regarding their county-level analysis: it seems that what they find is that Republican vote share in 2016 is predictive of trends in white mortality rates. This is similar to other correlations that we’ve been seeing: in short, Trump did well (and Clinton poorly) among white voters in certain rural and low-income places in the country. I don’t see that this gives any direct evidence regarding status threat. Also I don’t think the following statement makes sense: “In the absence of an instrumental variable, or of a natural experiment, our study provides a conservative estimate of the effect of the Republican vote share by controlling for a host of economic and social factors.” First, “conservative” is a kind of weasel word that allows people to imply without evidence that true effects are higher than what they found; second, “effect of the Republican vote share” doesn’t make sense. A vote share doesn’t kill people. It doesn’t make sense to say that person X died because a greater percentage of people in person X’s county voted for Trump.
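The worry about calling this an “effect” can be made concrete with a minimal simulation (all variable names and numbers are hypothetical, not taken from the paper): if a shared county-level factor drives both vote share and mortality trends, a regression of one on the other finds a strong association even though, by construction, neither affects the other.

```python
import random

random.seed(0)
n = 2000

# Hypothetical counties: one latent factor (call it local economic
# decline) raises both Republican vote share and the mortality trend.
# Neither observed variable has any effect on the other.
decline = [random.gauss(0, 1) for _ in range(n)]
vote_share = [0.5 + 0.10 * d + random.gauss(0, 0.05) for d in decline]
mort_trend = [2.0 * d + random.gauss(0, 1.0) for d in decline]

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    vu = sum((a - mu) ** 2 for a in u)
    vv = sum((c - mv) ** 2 for c in v)
    return cov / (vu * vv) ** 0.5

# Vote share "predicts" mortality trends strongly even though it has no
# effect on them: the shared cause does all the work.
print(round(corr(vote_share, mort_trend), 2))
```

Controlling for “a host of economic and social factors” helps only to the extent that the relevant shared causes are among the controls, which is exactly what is in dispute.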

Finally, they put this in italics: “For perhaps the first time, we are suggesting that a major population health phenomenon – a widespread one – cannot be explained by actual social or economic status disadvantage but instead is driven by perceived threat to status.” But I don’t see the evidence for it. They don’t supply any data on “perceived threat to status.” At least, I didn’t see anything in the data. So, sure, they can suggest what they want, but I don’t find it convincing.

All that said, I have general positive feelings about the linked paper, in the sense that they’re studying something worth looking into. Social scientists including myself spend lots of time on fun topics like golf putting and sumo wrestling, and this can be a great way to develop and understand research methods; but it’s also good for people to take a shot at more important problems, even if the data aren’t really there to address the questions we’d like to ask.

There should be a way for researchers to study these issues without feeling the need to exaggerate what they’ve found (as in this press release, on “a striking reversal [in mortality rate trends] among working-age whites, which seems to be driven principally by anxiety among whites about losing social status to Blacks”—without mentioning that (a) the trends go in opposite directions for men and women and (b) their research offers no evidence that anything is being driven, principally or otherwise, by anxiety or social status).

P.S. I can understand my correspondent’s desire for anonymity here. A couple years ago I got blasted on twitter by a leading public health researcher for my response to Case and Deaton. He wrote that I had “scoffed at the Case/Deaton finding about U.S. life expectancy . . . Has he ever admitted he was wrong about that?” I sent him an email saying, “Whenever I am wrong in public, I always announce my error in public too. I’ve corrected four of my published papers and have corrected many errors or unclear points in my other writings. But I can only issue a correction if I know where I was wrong. Can you please explain where I was wrong regarding the work of Case and Deaton? I am not aware of any errors that I made in that regard. Thank you.” We did a few emails back and forth and at no time did he give any examples of where I’d “scoffed” or where I’d been wrong. He wrote that I spent most of my time “carping about compositional effects” and that my efforts “helped spread the idea that Case and Deaton were wrong, that there was nothing to see here, that it was all liberal whining about inequality, etc., etc.” When the facts get in the way of the story, shoot the messenger.


    • Philip:

      I show 45-54 because that is what Case and Deaton looked at. If you follow my link we have graphs of all sorts of different age groups. But even from the 45-54 graph, we can see the problem with any general claim about “the rise in national mortality rates of working-age white men and women.” I feel like the overreaching in the authors’ causal analysis leaked backward and led to oversimplification in the description of the phenomenon to be explained.

  1. Andrew, these are my favorite two lines in their entire paper, wherein the authors pretend to justify their choice to ignore your paper: “Auerbach and Gelman (2017) also suggested that the mortality increase among working-age whites was limited to women. Case and Deaton (2017) found that although white women experienced a greater increase in mortality rates, these rates also increased among white men.” I love this passage because both statements are COMPLETELY TRUE! You did refute C&D’s results (line 1), and C&D did find the thing you refuted (line 2). All the authors did was state two undisputed facts in reverse chronological order, which together amount to saying nothing more than “The original results, which have since been criticized, were reported to be correct prior to that criticism.” Yet it is human nature for the reader to unconsciously insert a “but” or “however” between the sentences, or to assume that the findings in the second line were provided in response to the criticism in the first. Social scientists are devious!

    Almost as good but not nearly as original: C&D “found” their conclusions but you only “suggested” yours.

  2. Yeah, even leaving aside the men-and-women problem, the journalist’s skepticism is warranted. To copy and paste my comment from elsewhere:

    It appears that these researchers did not relate measures of anxiety to mortality data at all. Rather, they discuss other people’s work which showed significant links between survey measures of anxiety and the Republican vote share. In their own analyses, they then show significant links between the Republican vote share and mortality data.

    I would call that pretty indirect.
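That indirectness can be made precise with a toy construction (variable names purely illustrative): two strong pairwise correlations need not compose into a third. Here the middle variable correlates about 0.7 with each endpoint while the endpoints are independent.

```python
import math
import random

random.seed(1)
n = 5000

# Two independent latent variables: x stands in for an anxiety measure,
# y for a mortality trend. The middle variable b ("vote share") is an
# equal mix of both, so it correlates strongly with each of them.
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]
b = [(xi + yi) / math.sqrt(2) for xi, yi in zip(x, y)]

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    vu = sum((a - mu) ** 2 for a in u)
    vv = sum((c - mv) ** 2 for c in v)
    return cov / (vu * vv) ** 0.5

# x-b and y-b are each correlated around 0.7, yet x and y are essentially
# uncorrelated: significant links on each leg do not chain into a third.
print(round(corr(x, b), 2), round(corr(y, b), 2), round(corr(x, y), 2))
```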

  3. It’s interesting to me that two of the coauthors are Darrick Hamilton and Sandy Darity, very big names in the study of racism and inequality. This is high profile research.

    I didn’t look at the empirical part of the paper, only the discussion of hypotheses, and two things bother me. First, two alternative hypotheses are given (competitors to their favored hypothesis of status threat), but no mention is made of the intuitive explanation most widely invoked for the rise of mortality: “deaths of despair”. Despair, of course, is about the future, and a common view is that people who see a bleak future are more likely to abuse drugs (especially opioids) and take their lives. It would be interesting to test this “folk explanation” empirically, bearing in mind that the future economic prospects of individuals are not the same as the prospects of the communities they currently live in — although contagion effects may occur at the community level. Proxies for expectations, if they can’t be measured directly (but they should!), might include various types of investment: schooling, savings, etc.

    Second, the paper surveys a set of authors who argue that status resentment was the primary cause of Trump’s election. The first problem is that no attention is given to critiques of this view; my sense is that this is not a resolved issue, especially because of the complexity and heterogeneity of “explaining” voting patterns. The second is that, at best, voting for Trump is a partial proxy for status threat, and there is no discussion, at least at that point in the paper, of how tight or loose the correlation might be. If you care about the size of an effect and not just whether it exists (“is significantly different from zero”), this ought to be a central concern.

    Unfortunately, issues like these are common, almost ubiquitous, in economic research. (Can’t speak about sociology.) Potential competing hypotheses go unmentioned; it is enough to say that the data “are consistent with” the favored hypothesis of the authors. And proxies are employed with little investigation of their quantitative relationship to the factors of actual interest. The bar is very low in these respects, and there is little incentive for authors to concern themselves with them.

    • “and a common view is that people who see a bleak future are more likely to abuse drugs (especially opioids) and take their lives.”

      I think the correct counter to this common view is that it doesn’t reflect the actual development of the opioid epidemic. From the late 1990s to 2015 or so, the vast majority of the growth in opioid addiction was due to prescription opioids. These aren’t people in despair turning to drugs, these are people with good health insurance and a bad doctor*. We had already learned this from Vietnam veterans: roughly 7% of people exposed to opioids become addicted. We shouldn’t have had to relearn that.

      Also, gun ownership was increasing over this period, and gun ownership correlates with suicides.

      FWIW, I don’t think it’s “status threat”, I think it’s that for voters who wanted to be dog-whistled, Trump put on great shows during the primaries and the election itself and didn’t pull an Etch-a-Sketch like Romney did. (I dug into some rural county voting data, and found places with (a) falling populations, (b) Dem votes falling at about the same rate, and (c) Republican votes increasing.)

      *: I wonder about the “we didn’t think they were addictive” excuse. Thinking back, the Dr. House TV show understood very well (by the 2010 series, at least) that prescription opioids were addictive.

  4. There are a couple of problems here. The authors are liars, simple as that. It’s pointed out above that two of the authors are big longstanding names in the field, so do you think they are unaware of the methodological flaws that are pointed out?

    This ‘research’ is despicable. It is intended as a counterpoint to anyone who disputes the notion of ‘white privilege’ by pointing out the life outcomes of poor white people. How can this notion be correct if poor white people have it just as bad as poor black people? Well, we could update the notion to better reflect reality, or engage in lies and distortion to protect the narrative.

    They are lying for political reasons, they knew it would be picked up by sympathetic outlets and will become fact in the minds of those who want to believe it.

    What’s more interesting to me is the throat clearing your correspondent felt the need to perform, “that’s not to say many Trump supporters aren’t racist!”. Why the need for this clarification? Why the need to point out that he has the ‘correct’ opinion, he’s just a tad sceptical and doesn’t want to be lumped in with any terrible people? Even though he’s anonymous and nobody can shoot the messenger in this case.

    The ‘research’ is flawed and politically motivated. It is quite correct that it is called out as such; the political leanings of the person doing it don’t matter.
