Oooh, I hate it when people call me “disingenuous”:

This has happened before.

I hate it when someone describes me as “disingenuous,” which according to the dictionary, means “not candid or sincere, typically by pretending that one knows less about something than one really does.” I feel like responding, truly, that I was being candid and sincere! But of course once someone accuses you of being insincere, it won’t work to respond in that way. So I can’t really do anything with that one.

Anyway, this came up recently in an article by Casey Mulligan, “3 Reasons Election Forecasts Made False Projections Favoring Joe Biden.” Mulligan makes some interesting points in his article, but I don’t agree with this bit:

The Economist forecaster Andrew Gelman, not an economist but an eminent Bayesian statistician, is now rather disingenuously shifting all the blame onto pollsters for assembling skewed samples. Arguably most of his forecast error came instead from his seemingly arbitrary choice of which questions to use from the polls. Gelman has claimed that own-vote questions are better forecasters than expectation questions, which is a respectable conclusion but no reason to completely ignore the expectation questions instead of assigning them somewhat less weight.

There are a few things wrong here.

First, the “disingenuous” thing (or “rather disingenuously,” which sounds like the name of one of the characters in a hilarious Michael Keaton movie from the 1980s)—that’s just bullshit. Everything I’ve written on the topic of polling and elections is 100% sincere. The idea that I would pretend I know less than I really do about something . . . let’s just say that’s never been my style! So one minus point for Mulligan for failed mind-reading.

Second, Mulligan points to my post entitled, “Don’t kid yourself. The polls messed up—and that would be the case even if we’d forecasted Biden losing Florida and only barely winning the electoral college,” as evidence that I “shifted all the blame onto pollsters.” Funny that he should say this because here’s what I wrote in the post:

Saying that the polls messed up does not excuse in any way the fact that our model messed up. A key job of the model is to account for potential problems in the polls!

It’s ridiculous to say that I’m “shifting all the blame onto pollsters.” Indeed, I’m tempted to say that Mulligan was being disingenuous by so badly mischaracterizing my post, but my guess is that he wasn’t being disingenuous, he just had a pre-conceived storyline and then linked to my post without really reading what I wrote.

Third, he writes about my “seemingly arbitrary choice of which questions to use from the polls.” The choice of poll questions had nothing to do with me! We at the Economist were using the same poll summaries that were reported in the newspaper, Real Clear Politics, Fivethirtyeight, and everywhere else.

In his article, Mulligan has some reasonable points and some not-so-reasonable points. He suggests that in forecasting elections we use other information besides horse-race polls and fundamentals; he thinks we should also use information such as survey responses on who people think will win the election. I agree that this would’ve helped in 2020. In other elections such as 2016, such information would not have been so helpful, but I take Mulligan’s point that more information is out there, and it could be a bad idea to ignore such information even though it’s not always clear exactly how to use it.

One thing Mulligan says that I don’t believe is that Trump’s underperformance in the polls is due to “social desirability . . . Trump is the Bad Orange Man to many. . . . This suggests that some fraction of Trump supporters would not acknowledge their support for him — the “shy Trump voter” — especially in Democratic communities.” I don’t buy this, partly because the poll gap in 2016 was largest not in Democratic states such as New York and California but in strong Republican states such as West Virginia and North Dakota; see figure 2 of this paper. The places where Trump’s vote was understated by the polls were in Republican states, not Democratic states. So Mulligan’s doing a “This suggests” argument that is not supported by the data.

Mulligan also says that “renegade pollsters Democracy Institute and Trafalgar . . . can be proud of the accuracy of their much-maligned forecasts of the 2020 election.” I haven’t looked into Democracy Institute, but we did check out Trafalgar’s forecast, and it wasn’t so great! They forecast Biden to win 235 electoral votes. Biden actually won 306. Our Economist model gave a final prediction of 356. 356 isn’t 306. We were off by 50 electoral votes, and that was kind of embarrassing. We discussed what went wrong, and the NYT ran an article on “why political polling missed the mark.” Fine. We were off by 50 electoral votes (and approximately 2.5 percentage points on the popular vote, as we predicted Biden with 54.4% of the two-party vote and he received about 52%). We take our lumps, and we try to do better next time. But . . . Trafalgar’s forecast was off by 71 electoral votes! So I can’t see why Mulligan thinks we were so bad but they “can be proud of” their accuracy. If they can be proud of being off by 71, then we should be even more proud to have only been off by 50, no?
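The electoral-vote comparison above is simple arithmetic, but it can be spelled out explicitly. This is just an illustrative sketch using the numbers quoted in the post, with “error” defined as absolute distance from Biden’s actual 306 electoral votes:

```python
# Forecast errors from the post: Biden actually won 306 electoral
# votes; the Economist model's final prediction was 356, and
# Trafalgar's was 235. "Error" = absolute distance from 306.
ACTUAL_BIDEN_EV = 306

forecasts = {
    "Economist model": 356,
    "Trafalgar": 235,
}

for name, ev in forecasts.items():
    error = abs(ev - ACTUAL_BIDEN_EV)
    print(f"{name}: forecast {ev} EV, off by {error}")
```

By this crude measure the Economist model’s miss (50 electoral votes) is indeed smaller than Trafalgar’s (71).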

Finally, Mulligan makes another good point when he talks about voter turnout. Turnout is indeed hard to forecast from polls, as my colleague Bob Shapiro discusses in this post. But the fact that turnout modeling is difficult does not mean that we should just throw up our hands and leave it to the pollsters! I accept Mulligan’s point that we didn’t try hard enough to model this potential source of nonsampling error.

In summary, Mulligan makes some good points and some bad points in his article. I’m annoyed because he called me insincere when I wasn’t, and he mischaracterized my post as “shifting all the blame” onto others when I explicitly wrote that this “does not excuse in any way the fact that our model messed up.” That sort of thing is irritating. It’s work enough to correct people’s factual errors without also having to deal with stupid mind-reading tricks.

P.S. If I really wanted to be disingenuous, I’d say: Hmm, Mulligan’s post is at thefederalist.com. I wonder what else is there . . . Hmm, let’s take a look . . . We got “Pork-Stuffed Bill About To Pass Senate Enables Splicing Aborted Babies With Animals” and “Disney’s ‘Cruella’ Tells Girls To Prioritize Vengeance Over Love”:

Films aren’t here to tell us how to live, as art is not mere propaganda. But given Disney’s insistence on crafting stories specifically to indoctrinate little girls into prioritizing a woke, corporatist agenda, such as “Frozen,” it’s hard not to look at “Cruella” outside of that frame.

A “woke, corporatist agenda,” huh? I guess I missed that one when we went to Disneyworld and did the Frozen Sing-Along Celebration and rode the Frozen Ever After ride 8 times in a row. (Pro tip: go on a really rainy day and get there right when the park opens.) I guess it’s true what they say about horseshoe theory: on the far left or the far right, everything’s political. I’d love to hear thefederalist.com’s take on Snow White: “Whistle while you work” is either a fascist suppression of worker rights or a communist glorification of corrupt labor practices. Take your pick.

What is it that’s so unsettling about the “Pork-Stuffed Bill” headline? I think it’s that it evokes an image of an aborted baby being stuffed with pork. Putting “pork-stuffed” and “splicing with animals” in the same headline . . . maybe that was a mistake. Kind of like how it’s a mistake to call me disingenuous when I’m not, misrepresent what I write, and slam our errors while saying that “renegade” pollsters who were off by more should be “proud of their accuracy.” Pork stuffed spliced aborted animal babies indeed.

So I could say, What’s up with that thefederalist.com site? Seems pretty nutty to me. But I wouldn’t say such a thing, as that would be disingenuous.

P.P.S. A colleague writes:

To be even more disingenuous, one might copy a paragraph and bold a sentence from the bottom of the article . . .

Casey B. Mulligan, Professor of Economics at the University of Chicago, served as Chief Economist of the White House Council of Economic Advisers from 2018 to 2019. His new book “You’re Hired! Untold Successes and Failures of a Populist President” explains how President Trump deals with politics and policy.

Oooh! Untold successes I can believe, because the news media hates the Orange Man. But untold failures?? It stuns me that Trump could have failures that were untold. Maybe someone can read the book for us and share with us what were Trump’s untold failures. I’d say that I’m genuinely curious, but that would be disingenuous.

11 thoughts on “Oooh, I hate it when people call me “disingenuous”:”

  1. > Economists Rothschild and Wolfers (2013) had argued that the best way to predict the election is not to ask people whom they will vote for, but rather ask whom they think will win. Their claim was that when you ask people whom they think will win, survey respondents will be informally tallying their social networks, hence their responses will contain valuable information for forecasting.

    I’m not sure what to think about expectation questions.

    My gut instinct is that they don’t belong in a model of who is going to win, because if 99% of people believe candidate A is going to win, but candidate B has more votes in the relevant states, then candidate B is going to win.

    On the other hand, maybe they help compensate for survey nonresponse bias. The vast majority of people (~93%) won’t respond to pollsters, so the expectation question is a way of measuring what they think.

    On the other other hand, if people just repeat what they hear on the news about who is going to be elected, then this is just a measurement of which candidate the news organizations think is going to win.

  2. > I don’t buy this, partly because the poll gap in 2016 was largest not in Democratic states such as New York and California but in strong Republican states such as West Virginia and North Dakota; see figure 2 of this paper. The places where Trump’s vote was understated by the polls were in Republican states, not Democratic states. So Mulligan’s doing a “This suggests” argument that is not supported by the data.

    The whole “shy Trump voter” theory is so often taken as fact by right-wingers, without subjecting it to due diligence – which is exactly what you’d expect as it’s a politically convenient narrative (that poor Trump voters are victimized by mean and intimidating pollsters).

    In addition to the point that you raise…

    As I recall Clinton did better than expected in areas where you’d think that Trump voters would have been less intimidated by those tsk-tsking pollsters…AND she did worse than expected in areas where Demz dominate. Both trends would work against the shy Trump voter theory.

    Further, why wouldn’t there be a counterbalancing “shy voter” effect on Clinton voters who live in Republican-dominated areas that would balance out any shy Trump voter effect? In the rural area where I live, where I STILL see pickup trucks with Trump signs and American flags proudly displayed, I’d say that Clinton voters would be more likely to be the shy ones.

    Finally, I’ve often read rightwingers saying that they intentionally answer polls incorrectly because they want to tip the polls in a desired direction. If so, I’d think that Trump supporters would want to inflate his polling results. I have often seen Trump supporters pointing to those polls that had more favorable results for him.

    I get that there’s a certain common sense logic to the “shy Trump voter” theory. But you’d think that people who are serious about taking on this issue would subject theories that overlap with their own political predisposition to a very high level of scrutiny. Maybe that has happened, but I haven’t seen it done.

    Most of what I see about the “shy Trump voter” rhetoric looks to me like suspiciously convenient confirmation bias. If someone knows of an actually carefully conducted analysis that supports the shy Trump voter theory I’d appreciate a link.

    As to the conclusion that a failure to subject the shy Trump voter theory to due diligence is a sign of disingenuousness…you need to pass a high bar of evidence to conclude that you know what someone truly believes – which is what the label of disingenuous requires. It’s like someone calling someone else a “liar” on the Internet, or saying that someone “ignored” an inconvenient fact. Assigning that label, IMO, usually tells more about the labeler than it does the labelee.

  3. I think they called you disingenuous because they were being disingenuous. This seems to be a common tactic in journalism these days. I wonder if there’s a term for it. Maybe “gaslighting”, but there should be something more specific.

  4. And here I looked up the definition of ‘obtuse’, which I had typically inferred to mean abstract or unclear. Rather, it means dull, annoyingly insensitive or slow to understand, blunt.

    In reviewing the GRE vocabulary lists & definitions, there are some words that we use incorrectly.

  5. Disingenuous?

    Your code is all out there, and it’s pretty clear following your work that you were open and honest about your inputs.

    Pollsters did a crap job. Were you supposed to have magicked up some super data? Who is debating this?

  6. I agree about the mind-reading. It seems common for pundits and others to attribute motives and intentions to others, when they cannot possibly know. The mere attempt at mind-reading reveals some fallacious reasoning.
