
“Did Jon Stewart elect Donald Trump?”

I wrote this post a couple weeks ago and scheduled it for October, but then I learned from a reporter that the research article under discussion was retracted, so it seemed to make sense to post this right away while it was still newsworthy.

My original post is below, followed by a postscript regarding the retraction.

Matthew Heston writes:

First time, long time. I don’t know if anyone has sent over this recent paper [“Did Jon Stewart elect Donald Trump? Evidence from television ratings data,” by Ethan Porter and Thomas Wood] which claims that Jon Stewart leaving The Daily Show “spurred a 1.1% increase in Trump’s county-level vote share.”

I’m not a political scientist, and not well versed in the methods they say they’re using, but I’m skeptical of this claim. One line that stood out to me was: “To put the effect size in context, consider the results from the demographic controls. Unsurprisingly, several had significant results on voting. Yet the effects of The Daily Show’s ratings decline loomed larger than several controls, such as those related to education and ethnicity, that have been more commonly discussed in analyses of the 2016 election.” This seems odd to me, as I wouldn’t expect a TV show host change to have a larger effect than these other variables.

They also mention that they’re using “a standard difference-in-difference approach.” As I mentioned, I’m not too familiar with this approach. But my understanding is that they would be comparing pre- and post- treatment differences in a control and treatment group. Since the treatment in this case is a change in The Daily Show host, I’m unsure of who the control group would be. But maybe I’m missing something here.

Heston points to our earlier posts on the Fox news effect.

Anyway, what do I think of this new claim? The answer is that I don’t really know.

Let’s work through what we can.

In reporting any particular effect there’s some selection bias, so let’s start by assuming an Edlin factor of 1/2, so now the estimated effect of Jon Stewart goes from 1.1% to 0.55% in Trump’s county-level vote share. Call it 0.6%. Vote share is approximately 50%, so a 0.6% change is approximately a 0.3 percentage point in the vote. Would this have swung the election? I’m not sure, maybe not quite.
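The back-of-envelope calculation above can be sketched out explicitly. This is just the arithmetic from the paragraph, with the Edlin factor of 1/2 applied to the published estimate:

```python
# Back-of-envelope adjustment of the published Jon Stewart effect.
published_effect = 1.1   # reported increase in Trump's county-level vote share, in %
edlin_factor = 0.5       # discount for selection bias in reported effects

adjusted = published_effect * edlin_factor   # 0.55%; call it roughly 0.6%

# Vote share is approximately 50%, so a proportional change of ~0.6%
# in the share translates to about 0.3 percentage points of the vote.
vote_share = 0.5
swing_points = adjusted * vote_share

print(f"adjusted effect: {adjusted:.2f}%")
print(f"swing: about {swing_points:.2f} percentage points")
```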

Let’s assume the effect is real. How to think about it? It’s one of many such effects, along with other media outlets, campaign tactics, news items, etc.

A few years ago, Noah Kaplan, David Park, and I wrote an article attempting to distinguish between what we called the random walk and mean-reversion models of campaigning. The random walk model posits that the voters are where they are, and campaigning (or events more generally) moves them around. In this model, campaign effects are additive: +0.3 here, -0.4 there, and so forth. In contrast, the mean-reversion model starts at the end, positing that the election outcome is largely determined by the fundamentals, with earlier fluctuations in opinion mostly being a matter of the voters coming to where they were going to be. After looking at what evidence we could find, we concluded that the mean-reversion model made more sense and was more consistent with the data. This is not to say that the Jon Stewart show would have no effect, just that it’s one of many interventions during the campaign, and I can’t picture each of them having an independent effect and these effects all adding up.

P.S. After the retraction

The article discussed above was retracted because the analysis had a coding error.

What to say given this new information?

First, I guess Heston’s skepticism is validated. When you see a claim that seems too big to be true (as here or here), maybe it’s just mistaken in some way.

Second, I too have had to correct a paper whose empirical claims were invalidated by a coding error. It happens—and not just to Excel users!

Third, maybe the original reaction to that study was a bit too strong. See the above post: Even had the data shown what was originally claimed, the effect they found was not as consequential as it might've seemed at first. Setting aside all questions of data errors and statistical errors, there's a limit to what can be learned about a dynamic process—an election campaign—from an isolated study.

I am concerned that all our focus on causal identification, important as it is, can lead researchers, journalists, and members of the general public to overconfidence in theories as a result of isolated studies, without the recognition that real life is more complicated. I had a similar feeling a few years ago regarding the publicity surrounding the college-football-and-voting study. The particular claims regarding football and voting have since been disputed, but even if you accept the original study as is, its implications aren't as strong as had been claimed in the press. Whatever these causal effects are, they vary by person and scenario, and they're not occurring in isolation.

12 Comments

  1. Terry says:

    There’s also a sly semantic/logical error here. In a close election, there are many factors that “elected Trump”, so it is misleading to say, in isolation, that one of them “elected Trump”. It makes the one factor sound uniquely important.

    This is clearest when you think of an election decided by one vote. Then, there are (probably literally) millions of factors that “elected Trump”. Each is critical in some sense because each was important at the margin in a mathematical sense, but it is more accurate to say that ALL those millions of factors together “elected Trump”.

    • Z says:

      I don’t think it’s an “error” to point to an individual event or circumstance and say, “had this event or circumstance been otherwise, the election result would have been different”. The fact that there can be many such causes does not make it false to identify any one, and to me the word “elected” doesn’t implicitly claim unique importance for the particular factor being discussed. But maybe that’s just because I’m so used to thinking in the counterfactual framework of causal inference that the colloquial meaning of words has faded from my brain.

      I do agree that the usefulness or meaningfulness of identifying such factors declines as the election result gets closer and as the number of identifiable independent causes of the result increases. It often seems that the implicit motive behind pointing out a cause of the outcome is to assign blame (though not, I think, in the case of the Jon Stewart paper), and I think this is what feels silly when there are so many causes.

      • Terry says:

        A valid point.

        I wondered whether it was an error, or just misleading, and whether it was a logical flaw or just a semantic flaw.

        At the end of the day, “misleading” is probably the best descriptor. When you say, in isolation, that “Jon Stewart elected Trump” the average listener will get a misleading impression of how important Jon Stewart was. It is fundamentally misleading because it gives primacy to Jon Stewart over the other factors.

        It is somewhat analogous to "the straw that broke the camel's back." All the straws broke the camel's back. However, it differs from the camel analogy in that the camel analogy has a time dimension, so the "last straw" has a claim to primacy over the other straws.

  2. jd says:

    In this comment https://statmodeling.stat.columbia.edu/2019/04/16/abandoning-statistical-significance-is-both-sensible-and-practical/#comment-1020009 , I thought maybe statistical significance gets in the way of common sense at times, but maybe it's just too strong an emphasis on statistical methods and data in general. Sometimes scientific discoveries are surprising, but shouldn't there be some element of skepticism about results that seem implausible at face value? Many of the examples on this blog (himmicanes, ESP, etc.) seem rather unlikely, regardless of data or analysis.

    Is it heretical (for a stat blog) to wonder if there is too much emphasis on data and analysis, and not enough on thinking it through? Maybe it just seems this way because I’m not an expert in weather, psychology, or political science…

    • Brent Hutto says:

      An easy "thinking it through" face-value test for the supposed effect being discussed here might be this…

      What is the overlap between regular watchers of Jon Stewart’s program when it was on the air and people who would ever even consider voting for Trump?

      The overlap has to be tiny, surely. Probably of the same magnitude as Public Radio contributors who are climate change deniers.

      • jd says:

        Good point!

        Maybe a starting point for this type of check is – if I didn’t have this wonderful machine called data and statistical analysis, what would I think of the title of my paper? – not an argument for theory over ‘evidence’, just a pause to consider if I pointed my machine in the right direction.

      • Andrew says:

        Brent:

        I agree that I would not expect any effect to be large, but I think you’re overstating the case here. First, I expect that lots of regular watchers of Stewart’s show considered voting for Trump or not voting at all. Second, the show could affect occasional watchers too. Third, the show could have indirect effects; among other things, people can talk about what they saw on the show and take those conversations into other settings.

        • Brent Hutto says:

          You and I may have different priors when it comes to conversation (or any meaningful exchange of ideas) between members of different groups.

          But I do agree with your point that the “not voting at all” option is a potentially crucial effect that must be accounted for. I am often guilty of treating elections as a dichotomous choice when they are anything but!

        • jd says:

          Approx 2.5 million viewers nightly at the time, according to wiki. Most of these were young, so maybe 50% (optimistically) voted? Surely, many of these were the same viewers over and over (regular viewers, and the ones most influenced). So, of the 130+ million who voted, maybe very roughly around 1% of the people who voted watched the show regularly? Of those, only a certain % would consider voting for Trump at all. Of those, only a certain % would allow a comedy show to influence their vote choice. As for occasional watchers, how many occasional watchers of a comedy show have their voting influenced by it? Or, what about some other stories—how many occasional watchers found they hated the show, and it made them more determined to vote for Trump? Maybe satirical comedy news made them angry at liberal comedians, and now they really want to vote for Trump. The stories seem endless, and it doesn't make a lot of sense. The second and third points that you mention seem like rather minuscule effects.
          I’m hardly an expert on political science (really I don’t know much at all), but I don’t see that Brent’s point is very far off the mark. Any effect would seem to be so small as to be undetectable. It just seems rather implausible at face value.
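          jd's chain of fractions can be written out as a quick calculation. All the inputs are the commenter's own rough guesses, not measured quantities:

```python
# jd's back-of-envelope estimate of regular Daily Show viewers among 2016 voters.
# Every number below is a rough guess from the comment, not data.
nightly_viewers = 2.5e6   # approximate nightly viewership (per Wikipedia)
turnout_rate = 0.5        # optimistic turnout for a young audience
total_voters = 130e6      # roughly 130+ million 2016 voters

# Fraction of the electorate that watched regularly (treating the nightly
# audience as the same repeat viewers, as the comment assumes).
share_of_electorate = nightly_viewers * turnout_rate / total_voters
print(f"{share_of_electorate:.1%}")  # on the order of 1% of voters
```

Further discounts (share who would consider Trump at all, share swayable by a comedy show) would shrink this starting point of roughly 1% even more, which is the comment's point.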

      • Dzhaughn says:

        “What is the overlap between regular watchers of Jon Stewart’s program when it was on the air and people who would ever even consider voting for Trump?”

        I think you’d be surprised on that point. At least consider that the number of people “who would ever consider voting for Trump” grew extremely rapidly in 2015.

        Not that I would buy anything like the 1.1% figure. Anyway, the counterfactual is meaningless. The world, alas, is real; there is no world where Jon Stewart does not get tired of his job.

  3. Bob says:

    There's an argument that Stewart got Trump elected by making him a bigger deal than he was because there was more comedy value to him. Or at least he contributed hugely to the Trump campaign building up steam.

  4. Jeff says:

    The paper doesn’t pass the common-sense test. Everyone knows that Jimmy Fallon elected Trump.
