“Presidential Election Outcomes Directly Influence Suicide Rates”

This came in the spam the other day:

College Station, TX–August 16, 2010–Change and hope were central themes of the November 2008 U.S. presidential election. A new longitudinal study published in the September issue of Social Science Quarterly analyzes suicide rates at the state level from 1981 to 2005 and determines that presidential election outcomes directly influence suicide rates among voters.

In states where the majority of voters supported the national election winner, suicide rates decreased. However, counter-intuitively, suicide rates decreased even more dramatically in states where the majority of voters supported the election loser (4.6 percent lower for males and 5.3 percent lower for females). This article is the first in its field to focus on candidate- and state-specific outcomes in relation to suicide rates. Prior research on this topic focused on whether the election process itself influenced suicide rates, and found that suicide rates fell during the election season.

Richard A. Dunn, Ph.D., lead author of the study, credits the power of social cohesion: “Sure, supporting the loser stinks, but if everyone around you supported the loser, it isn’t as bad because you feel connected to those around you. In other words, it is more comforting to be a Democrat in Massachusetts or Rhode Island when George W. Bush was re-elected than to be the lonely Democrat in Idaho or Oklahoma.”

Researchers have commonly thought that people who are less connected to other members of society are more likely to commit suicide. The authors of the study first became interested in this concept when studying the effect of job loss and unemployment on suicide risk, since unemployment theoretically causes people to feel less connected to society. The authors realized that while previous work had explored whether events that brought people together and reaffirmed a shared heritage, such as elections, wars, and religious and secular holidays, lowered suicide rates, researchers had generally ignored how the outcomes of these events could also influence suicide risk.

The study holds implications for public health researchers studying the determinants of suicide risk, sociologists studying the role of social cohesion and political scientists studying the rhetoric of political campaigns.

I want to laugh at this sort of thing . . . but, hey, I have an article (with Lane Kenworthy) scheduled to appear in Social Science Quarterly. I just hope that when they send out mass emails about it, they link to the article itself rather than, as above, generically to the journal.

More seriously, I don’t want to mock these researchers at all. In most of my social science research, I’m a wimp, reporting descriptive results and usually making causal claims in a very cagey way. (There are rare exceptions, such as our estimates of the effect of incumbency and the effects of redistricting. But in these examples we had overwhelming data on our side. Usually, as in Red State, Blue State, I’m content to just report the data and limit my exposure to more general claims.) In contrast, the authors of the above article just go for it. As Jennifer says, causal inference is what people really want–and what they should want–and so my timidity in this regard should be no sort of model for social science researchers.

With regard to the substance of their findings, I don’t buy it. The story seems too convoluted, and the analysis seems to have too many potential loopholes, for me to have any confidence at all in the claims presented in the article. Sure, they found an intriguing pattern in their data, but the paper does not look to me to be a thorough examination of the questions that they’re studying.
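
For concreteness, here's the kind of state-year regression the press release seems to be describing, written out as a minimal sketch in Python on simulated data. To be clear, this is my own guess at the setup, not Classen and Dunn's actual model; every variable name and number below is made up for illustration.

```python
# A hypothetical reconstruction of a state-level longitudinal analysis:
# state suicide rates, 1981-2005, regressed on an indicator for whether
# the state's majority backed the most recent presidential winner, with
# state and year fixed effects. All data here is simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for i in range(50):                        # 50 hypothetical states
    state = f"s{i:02d}"
    base = rng.normal(12.0, 2.0)           # state baseline rate per 100,000
    # one coin flip per election: did this state's majority back the winner?
    backed = {e: int(rng.random() < 0.5) for e in range(1980, 2005, 4)}
    for year in range(1981, 2006):
        # crudely assign each year to the most recent election, counting
        # election years as already governed by their own result
        last_election = year - ((year - 1980) % 4)
        rate = base + 0.05 * (year - 1981) + rng.normal(0.0, 0.6)
        rows.append((state, year, backed[last_election], rate))

df = pd.DataFrame(rows, columns=["state", "year", "backed_winner", "rate"])

# Two-way fixed effects via dummy variables; standard errors clustered by state.
fit = smf.ols("rate ~ backed_winner + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print(fit.params["backed_winner"], fit.bse["backed_winner"])
```

Even in a toy version like this, the researcher degrees of freedom pile up fast: how post-election years are assigned, whether winner/loser status interacts with margin, what gets clustered. That's the kind of loophole I have in mind.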

P.S. to those who think I’m being too critical here:

Hey, this is just a blog and I’m talking about a peer-reviewed publication in a respectable journal. I’m not saying that you, the reader, should disbelieve Classen and Dunn’s claims, just because I’m not convinced.

I’m a busy person (aren’t we all) and don’t have the time or inclination right now to go into the depths of the article and find out where their mistakes are (or, alternatively, to look at their article closely enough to be convinced by it). So you can take my criticisms as seriously as they deserve to be taken.

Given that I haven’t put in the work, and Classen and Dunn have, I think it’s perfectly reasonable for you to believe what they wrote. And it would be completely reasonable for them, if they happen to run across this blog, to respond with annoyance to my free-floating skepticism. I’m just calling this one as I see it, while recognizing that I have not put in the effort to look into it in detail. Those readers who are interested in the subject can feel free to study the matter further.

6 thoughts on ““Presidential Election Outcomes Directly Influence Suicide Rates””

  1. Andrew,

    You write, "In most of my social science research, I'm a wimp, reporting descriptive results and usually making causal claims in a very cagey way…and so my timidity in this regard should be no sort of model for social science researchers."

    I would change the word "cagey" to "circumspect" and declare that your way is a model for social science researchers. I am always suspicious of "headline grabbing" research publicized with press releases.

    Back in the early 90s, I spent a year investigating every social science "new research shows" story that made the evening news. Every one was either misreported (the story made claims that the research did not) or, in the cases where the research was reported accurately, flat-out wrong.

    Good research should be carefully caveated. (Is that a word?)

    Bruce

  2. All social science "research" suffers from the fact that it can't be believed. It may be right, or it may be wrong, but without independent study by someone who has considerably more training and abilities than social scientists, it simply can't be believed (or disbelieved, for that matter). One obvious test that might provide support or opposition to the authors' claims is to start correlating _everything_ with suicide rates (the state real estate market, the league of the World Series winner compared to the league represented in the state, etc.). Most likely multiple correlations would come out significant (see the sketch after the comments). Also, breaking down by congressional district (precinct might even be better, if one can get the data) would be interesting. In other words, controls.

  3. Here is a different perspective.

    If one does not think a paper is worth spending 20 minutes on to get a quick sense of its credibility, then it is not worth spending time spreading one's prejudices against its findings.

    I too wonder how one adequately studies this question, and I think most people would be skeptical, but as scientists we should either look at the evidence or remain silent.

    If the paper is as unconvincing as presumed, then one is not conveying information by just stating the obvious.

    If it is convincing, then one is being incredibly unfair to the authors and the scientific method.

    Offering an opinion on a paper one has not read is hence a tie-or-lose situation.

  4. Anonimo:

    1. I wrote "I'm not saying that you, the reader, should disbelieve Classen and Dunn's claims, just because I'm not convinced. I think it's perfectly reasonable for you to believe what they wrote. . . ." If you think that's "incredibly unfair," I think you have incredible standards of fairness!

    2. More to the point, they're the ones who spammed their study into my email. If the publishers of the journal didn't want some uninformed blogger to go commenting on their study, they should've thought of that before sending their report to me. It's not like I'm going out of my way looking for studies to mock; this one fell into my lap.

  5. I'm reminded of CD volume (only a narrow band of loudness is usable, because everyone tries to be louder than everyone else) and car sizes (in an accident the heavier car is generally safer, so cars keep getting heavier; but because they're all heavier we aren't any safer than before, just burning more gas).

    Causal inference by the researchers is useful. They presumably looked very closely at the data and read all the literature: I'd like to know what they think is going on. But without a good vocabulary for signaling the strength of the evidence for your interpretation, you can't really tell apart "we saw a somewhat weak correlation and are just guessing this is the cause" from "we controlled extensively for confounding factors, ran very well-designed experiments, and X is just about certainly causing Y." So you're left assuming it's probably the former.

    Thus any strong claim is rationally dismissed, because it's usually this game of needing to report a causal inference. But if you really do have a stronger case, it's much harder to signal that ("X causes Y, and this study really means it"?).
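
The quick test proposed in comment 2 is easy to run. Here's a minimal sketch, again on simulated data and with the numbers (100 junk predictors, a 25-year series) chosen arbitrarily by me: at the conventional 5 percent level, roughly five of a hundred unrelated series will correlate "significantly" with the outcome by chance alone.

```python
# Sketch of the test from comment 2: correlate many unrelated random
# series with a simulated suicide-rate series and count how many clear
# p < 0.05. With no real relationships at all, about 5% will anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years = 25                              # e.g., 1981-2005
suicide_rate = rng.normal(12.0, 1.0, size=n_years)

n_predictors = 100                        # real estate, pennant winners, ...
hits = 0
for _ in range(n_predictors):
    junk = rng.normal(size=n_years)       # predictor with no true relation
    _, p = stats.pearsonr(junk, suicide_rate)
    hits += int(p < 0.05)

print(f"{hits} of {n_predictors} junk predictors 'significant' at p < 0.05")
```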
