What went wrong with the polls in 2020? Another example.

Shortly before the election the New York Times ran this article, “The One Pollster in America Who Is Sure Trump Is Going to Win,” featuring Robert Cahaly, who on election day forecast Biden to win 235 electoral votes. As you may have heard, Biden actually won 306. Our Economist model gave a final prediction of 356.

356 isn’t 306. We were off by 50 electoral votes, and that was kind of embarrassing. We discussed what went wrong, and the NYT ran an article on “why political polling missed the mark.”

Fine. We were off by 50 electoral votes (and approximately 2.5 percentage points on the popular vote, as we predicted Biden with 54.4% of the two-party vote and he received about 52%). We take our lumps, and we try to do better next time. But . . . they were off by 71 electoral votes! So I think they should assess what went wrong with their polls, even more so.

The Times article ends with this quote from Cahaly:

“I think we’ve developed something that’s very different from what other people do, and I really am not interested in telling people how we do it,” he said. “Just judge us by whether we get it right.”

Fair enough: you run a business, and it’s your call whether to make your methods public. The Trafalgar Group keeps its polling methods secret, as does FiveThirtyEight with its poll-aggregation procedure. As long as things go well, it’s kinda fun to maintain that air of mystery.

But “judge us by whether we get it right” is tricky. Shift 1% of the vote from the Democrats to the Republicans, and Biden still wins the popular vote but he loses the electoral college. Shift 1% of the vote from the Republicans to the Democrats, and Biden wins one more state and the Democrats grab another seat in the Senate.
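
Here’s a quick sketch of that uniform-swing arithmetic, just to make the fragility concrete. The state shares and electoral-vote counts below are made-up placeholders, not the actual 2020 results; the point is only that a one-point shift can move a big pile of electoral votes:

    # Uniform-swing toy example: shift every state's two-party Democratic
    # share by a fixed amount and recount electoral votes. The shares and
    # electoral-vote counts are illustrative placeholders, not 2020 results.
    STATES = {
        "A": (0.525, 20), "B": (0.506, 16), "C": (0.497, 11),
        "D": (0.488, 29), "E": (0.541, 55), "F": (0.471, 38),
    }

    def dem_electoral_votes(shift):
        """Electoral votes won by the Democrat after a uniform shift in share."""
        return sum(ev for share, ev in STATES.values() if share + shift > 0.5)

    total_ev = sum(ev for _, ev in STATES.values())
    for shift in (-0.01, 0.0, 0.01):
        print(f"shift {shift:+.1%}: Dem wins {dem_electoral_votes(shift)}/{total_ev} EV")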

From the news articles about Cahaly’s polling, it seems that a key aspect of their method is to measure intensity of preferences, and it seems that Republicans won the voter turnout battle this year. So, looking forward, it seems that there could be some benefit to using some of these ideas—but without getting carried away and declaring victory after your forecast was off by 71 electoral votes. Remember item 3 on our list.

72 thoughts on “What went wrong with the polls in 2020? Another example.”

  1. This seems as good a place as any to put in this thought/question (rather than burying it in one of the prior posts over the last 2 weeks). My theory for the nonresponse discrepancy is that many Trump voters will refuse to talk to any pollster they associate with those liberal/elitist/media types. So, if they hear it is a Pew survey, they refuse to talk. On the other hand, never-Trumpers have felt so powerless (speaking from personal experience here) that they were more than willing to talk to pollsters in order to release some of their frustrations (though I still refuse to answer polls, regardless of the circumstances). My question is: why wouldn’t the adjustments for nonresponse bias have picked this up? If I am right about the differential nonresponse, I would have expected the adjustments to have compensated for that – but it appears that the adjustments were not sufficient. Of course, my assumption may be wrong, but if it isn’t, why was the adjustment for nonresponders inadequate?

    • I’m very doubtful that most people are “inside baseball” enough to distinguish between individual pollsters. Most likely the non-response population wasn’t picking anything up from a robodial. I have family members of all political stripes who simply let everything go to voicemail that isn’t already a contact in their phone.

      You can adjust for non-response in various ways but the larger the non-responsive portion of your electorate the more error those adjustments will be subject to.
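
      Here’s a rough simulation of that point, with my own toy numbers rather than anyone’s actual adjustment procedure: weighting on an observed variable (education, here) fixes the nonresponse it can see, but not the part that is related to vote choice within the weighting cells.

        import numpy as np

        # Toy nonresponse simulation; all numbers are made up.
        rng = np.random.default_rng(0)
        N = 1_000_000
        dem = rng.random(N) < 0.52                        # true Dem share 52%
        college = rng.random(N) < np.where(dem, 0.45, 0.30)

        # College grads respond more; within each education group,
        # Republicans respond a bit less (the part weighting can't see).
        p_resp = np.where(college, 0.08, 0.04) * np.where(dem, 1.0, 0.8)
        resp = rng.random(N) < p_resp

        raw = dem[resp].mean()
        # Reweight respondents to the population's education distribution.
        w = np.where(college[resp],
                     college.mean() / college[resp].mean(),
                     (1 - college.mean()) / (1 - college[resp].mean()))
        weighted = np.average(dem[resp], weights=w)
        print(f"truth {dem.mean():.3f}  raw {raw:.3f}  weighted {weighted:.3f}")

      The education weighting removes part of the gap; the within-cell differential response is the part that no reweighting on observed variables can fix.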

      • Here’s evidence, sample size n=1. I saw an interview with someone at a Trump rally and when they heard the interviewer was from CNN, they said “I’m not talking to you” and walked away. I’m not sure most people need to be “inside baseball” to act this way. On your other point – do we know if the non-response rates were different in the polls this year than in 2016? That would seem to be fairly relevant information and probably not hard to get, though I don’t know where to look.

        • Honestly, that sounds like a lie to me.

          CNN’s pollster is SSRS. I don’t think they say they are from CNN until the very end; at the beginning they just say ‘SSRS’. Also, Fox News (which has its own outside pollster) had some of Trump’s worst polls during this cycle. If Trump supporters were hanging up on hearing CNN (which, as I pointed out, is not mentioned until the end), and staying for Fox, the results don’t reflect it.

          Also, Selzer, who had the most accurate Iowa poll, was sponsored by the Des Moines Register, which has endorsed Democrats throughout.

        • I don’t think it’s a lie – I think it’s a different thing: interview vs pollster. And I agree that most people don’t know enough about pollsters to be selective about which they’d respond to.

        • What makes you think they don’t identify until the end? At the beginning of the comment you said you “don’t think” but later you say it with certainty (“as I pointed out is not mentioned until the end”). I once got called for an NYTimes poll and they identified themselves right away. I think if the logic is that people respond to surveys whose sponsors they trust, a local newspaper sponsor might be more trustworthy.

    • Sorry if this got mentioned in a previous thread, but I wonder how much is pandemic-related. The population of people either able or willing to work from home (and maybe thus easier to reach by pollsters or at least talk to them) differs in a lot of ways from those who weren’t able or willing to do so. If the voting preferences of the two groups differ (in ways that don’t correlate with how they corrected for 2016), we might expect the polls to be off.

      Although I agree with your point. I just wonder why the adjustment wasn’t adequate.

      • I’ve seen this argument — Democrats and D-leaning people are more likely to take the virus seriously, therefore more likely to be bored this year, therefore more likely to pick up the phone.

        • “more likely to take the virus seriously”

          or more likely to work for gigantic corporations that can allow them to work from home, while Repubs are more likely to own their own businesses.

        • Work-from-home is also likely to be nonrandomly distributed (though I don’t know if it would actually be more Democrats), but the speculation I’ve seen was more about differential levels of boredom thus willingness to answer polls, not physically being at home so much.

  2. Probably a naive question about polling but here goes. Suppose you are doing a 2020 Trump/Biden poll. You are making sure you have enough of this kind of voter, that kind of voter, trying to correct the mistakes of 2016 as it were, but you still would like to get some measure of bias in your sample if possible. A seemingly simple thing you could do is ask respondents who they voted for in 2016. If your sample underrepresents Trump share in 2016, retrospectively, it is probably underrepresenting Trump share now.

    Obviously there are lots of caveats to this approach, but my question is just: do pollsters do this?
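
    To make the idea concrete, here’s a toy version of the adjustment (made-up numbers, not any pollster’s actual method): reweight the sample so its recalled 2016 vote matches the known 2016 two-party result, then recompute the 2020 share.

      # Toy past-vote adjustment with made-up numbers.
      actual_2016 = {"clinton": 0.511, "trump": 0.489}   # known 2016 two-party result
      recalled    = {"clinton": 0.55,  "trump": 0.45}    # recalled 2016 vote in the sample
      biden_2020  = {"clinton": 0.93,  "trump": 0.07}    # 2020 intention within each group

      weights = {k: actual_2016[k] / recalled[k] for k in actual_2016}
      unadj = sum(recalled[k] * biden_2020[k] for k in recalled)
      adj = sum(recalled[k] * weights[k] * biden_2020[k] for k in recalled)
      print(f"unadjusted Biden share {unadj:.3f}, past-vote-adjusted {adj:.3f}")

    With these made-up numbers the adjustment pulls the Biden share down from about 0.543 to about 0.509.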

    • Ben:

      YouGov does this, I believe, and our model does allow for a bias among polls that don’t adjust by party identification or previous vote. But, as we’ve seen, these adjustments aren’t perfect either. Also, they don’t model turnout.

      • Hi Andrew, are you sure YouGov does? I don’t see it anywhere in for instance their last pre-election Florida poll: https://drive.google.com/file/d/12YvPqiTHtWdC4CnV0zAtNtvj7_QO-mr9/view

        I don’t mean to suggest that this question would be “one adjustment to rule them all”; I’m just somewhat surprised it isn’t in *an* adjustment. Or maybe a sort of calibration tool.

        I guess this would be basically combining an exit poll that has a 4-year lag with a prospective poll. Maybe people have tried this and it doesn’t work? I could imagine that some people don’t like to answer about how they voted in the past?

        • YouGov has a panel that was asked in the past (possibly even just after the 2016 election) about their vote choice. So there is no 4 year lag. Of course, there are new voters that could still be sampled incorrectly.

    • Trump got almost 20% more votes in 2020 than he did in 2016. He increased his vote total by eleven million.

      Compare that to Barack Obama who lost 4 million on his total vote count between 2008 and 2012.

      That makes me somewhat skeptical past vote adjustment would account for the huge turnout increase.

      • Dalton:

        Both sides had a big turnout increase, also the lack of serious third-party challenges this time increased the vote counts for each major party. Saying that Trump received 11 million more votes in 2020 than he did in 2016 doesn’t tell us much. Biden received 16 million more votes in 2020 than Clinton did in 2016.

        Adjusting for past vote can help a lot with certain sorts of differential nonresponse that’s been a big issue in previous elections: big swings in the polls arising from more Democrats or Republicans responding; see here. You’re right that this adjustment doesn’t fix everything, but survey adjustment is a form of engineering, involving many small steps.

      • Right, it can’t account for what’s different in the new election; but it could still be important information. If you know you missed Trump voters in 2016, you can check if you’d still be missing them given 2016 conditions.

      • It’s an oft-observed fact that “scientists” and “researchers” have a quaint belief that by simply calling people on the phone or placing a survey in front of them they can find out whatever it is they want to know. Never has been remotely true but countless dollars and untold years of effort have been spent collecting such responses and performing statistical magic to transmute them into actual data.

      • People have definitely done research on this and there is lots of error (and I can think of 10 different reasons why people would misreport). More important, more people report that they voted than actually voted, and that’s why the NES is used for more academic research.

  3. Wouldn’t exit polls help here? Has anyone processed the 2020 results?

    In any given precinct, there are 2016 and 2020 poll results, vote results, and exit poll results. Four of those data sets involve nonresponders, but you can check them against the vote totals. Seems like a very useful result could emerge from comparing 2016 to 2020, but I have not seen anyone talking about it.

    I presume the limitation here is that exit polls have an entire suite of issues all their own…

    • Exit polls in 2020 are close to being useless, since they are conducted only among the people who voted on election day, and they are emphatically not a representative sample of the voters.

      • This isn’t quite true, right? I read that Edison changed their methodology to include a hybrid of traditional “exiting the polls” type exit polling with phone surveys of mail voters. I mean, obviously that’s still problematic since we know the phone polls didn’t do a great job this year in the days leading up to the election, but it’s not the case that they didn’t capture mail-in voters at all.

  4. Somewhat off topic, but what would your forecast have been if polls were accurate (showing small Biden edge) but we still didn’t know polls were accurate? Is there a way to get a rough approximation? I am curious for a somewhat close election outcome like 2020, what distribution of win probabilities we might expect if we assume random polling errors may under or overestimate Biden’s actual support. I guess 95% is on the high end of what you might reasonably forecast, but what would be the low end and what would be the median forecast?

    • N:

      Our model did allow for the polls to be inaccurate. The polls showed Biden with about 54-55% of the two-party vote and that, along with the fundamentals-based estimate, drove our inferences. If the polls had been 2 points lower for Biden, then our estimates would’ve shifted down a bit but still with a broad uncertainty. The election outcome was well within our 95% interval, but, as we discussed in our earlier post, we’d still like to understand the gap that we saw.
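
      Here’s a back-of-the-envelope version of what that allowance does. This is a toy calculation with round numbers (poll average, error scales, and the popular-vote share needed for the electoral college are all rough assumptions), not our actual model:

        import numpy as np

        # Toy forecast: poll average plus a systematic polling-error term
        # plus residual uncertainty; all numbers are illustrative.
        rng = np.random.default_rng(1)
        n_sim = 100_000
        poll_avg = 0.545       # Biden two-party share in the poll average
        bias_sd = 0.02         # possible systematic polling error
        other_sd = 0.01        # turnout, late swing, etc.
        ec_threshold = 0.52    # share roughly needed to win the electoral college

        share = (poll_avg + rng.normal(0, bias_sd, n_sim)
                          + rng.normal(0, other_sd, n_sim))
        print(f"P(EC win): {(share > ec_threshold).mean():.2f}")

        # Same thing with the polls 2 points lower for Biden.
        share_low = share - 0.02
        print(f"with polls 2 points lower: {(share_low > ec_threshold).mean():.2f}")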

      • Andrew:

        I was wondering how your win probability estimate would have changed if state polls had been accurate. For example, if polls had shown Biden +2 instead of +5 in Pennsylvania or -3 instead of +3 in Florida. I imagine your forecast would have been closer to 60-70% probability of winning with accurate polling. However, if error had gone the other way, then maybe you would have forecast <50% probability of Biden winning. Do you have any sense of what your range of forecasts might be assuming we could repeat 2020 and polls might have different random errors?

    • N: There’s a nice blog post from Boaz Barak that you might want to look at. For FiveThirtyEight and Economist forecasts, he shows plots of the Biden win probability conditional on the popular vote margin being X.

      https://windowsontheory.org/2020/10/30/digging-into-election-models/

      Biden’s two-party popular vote margin is currently about 3.7 percentage points. Looking quickly at the plots in the blog post, the Economist model gave Biden a roughly 50-60% win probability conditional on a popular vote margin of 3.7 percentage points.

      That number does not directly address your questions, but it seems like you are more generally wondering about how sensitive the forecasts are to polling bias of the scale that actually occurred. So maybe it is useful to you.

      Keep in mind that Biden’s popular vote share could change as more votes are counted, however.
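
      If you want to redo that conditional calculation yourself from published simulation draws, it only takes a few lines. The file name and column names below are hypothetical stand-ins for whatever the released draws actually contain:

        import pandas as pd

        # Conditional win probability from simulation draws, along the lines
        # of the linked post; file and column names are hypothetical.
        draws = pd.read_csv("economist_election_draws.csv")
        margin = 2 * draws["dem_two_party_share"] - 1    # two-party margin
        ec_win = draws["dem_electoral_votes"] >= 270

        # P(Biden wins the EC | popular-vote margin within 0.5 pts of 3.7)
        near = margin.between(0.037 - 0.005, 0.037 + 0.005)
        print(f"P(EC win | margin ~ 3.7 pts) = {ec_win[near].mean():.2f}")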

  5. Implicit in your argument is that the best way to evaluate forecast error is through the absolute value (or square) of the expected value minus the true value. Is that always the case? For instance, what if the “true” distribution is skewed or multi-modal?

  6. Shift 1% of the vote from the Democrats to the Republicans, and Biden still wins the popular vote but he loses the electoral college. Shift 1% of the vote from the Republicans to the Democrats, and Biden wins one more state and the Democrats grab another seat in the Senate.

    I think that the public and news would have fewer negative responses to the election forecasts if these kinds of numbers were reported by the Economist and FiveThirtyEight ahead of time.

    Even though your forecast gave Biden a 97% overall chance of winning, Trump could have won by switching 0.5% or fewer voters in 44% of your election simulations. Also, Trump could have won by switching 0.06% of voters (100k voters) in 15% of your election simulations.*

    To most people, it’s very unintuitive that both (a) Biden could have a 97% chance of winning and (b) the election could hinge on such a small number of voters. Yet, your election forecast shows that (a) and (b) are entirely consistent with each other.

    If people knew this, they might react less negatively when the results feel very close on election day.

    Hindsight is 20/20 of course, but maybe this is a useful suggestion for the future.

    * These numbers are from analyses I ran on the 80,000 draws from the posterior that the Economist published on election day.
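
    For what it’s worth, the flavor of that calculation is roughly as follows. The file name, column names, and turnout figure are stand-ins, and the cheapest-states-first search is a heuristic rather than an exact minimum, so treat the output as approximate:

      import pandas as pd

      # Approximate "how many switched voters would flip the outcome"
      # calculation from per-state simulation draws. Column names ("sim",
      # "dem_share", "votes", "ev") and the file name are hypothetical.
      sims = pd.read_csv("economist_state_draws.csv")
      total_votes = 155_000_000  # rough national turnout assumption

      def votes_to_flip(sim):
          """Approximate voters Trump needs switched to reach 270 EVs."""
          trump_ev = sim.loc[sim.dem_share < 0.5, "ev"].sum()
          need = 270 - trump_ev
          if need <= 0:
              return 0.0
          close = sim[sim.dem_share >= 0.5].copy()
          # Switching one voter moves the margin by two votes, so flipping
          # a state costs (dem_share - 0.5) * votes switched voters.
          close["cost"] = (close.dem_share - 0.5) * close.votes
          close = close.sort_values("cost")
          switched, gained = 0.0, 0
          for _, row in close.iterrows():
              switched += row.cost
              gained += row.ev
              if gained >= need:
                  return switched
          return float("inf")

      costs = sims.groupby("sim").apply(votes_to_flip)
      print("flippable by <= 0.5% of voters:",
            (costs <= 0.005 * total_votes).mean())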

    • Fogpine:

      We did say all this. Our model gave an uncertainty range for Biden’s popular vote share, and we also wrote about the popular vote share that Biden would need to win the electoral college. He was projected to need about 52% of the popular vote to win the electoral college, and a 53% share for Biden was well within our uncertainty interval. We did not put all these calculations in one place, though.

      • Hm, I think maybe I’m being unclear. I’m trying to say that people see a 97% win probability and they expect a safe-feeling win. Then it turns out that the election winner could have been Trump if less than 50,000 voters* in various states (less than 0.05% of voters) had decided to vote for him instead of Biden. People see that and they think the forecast was bogus.

        But actually, there can be both a 97% Biden win probability and a high chance that the election will hinge on few votes. It could be so helpful to say that to readers! Something like, “Yes we give Biden a 97% win probability, but beware — it’s very possible that the winner will depend on a tiny number of votes, just like in 2016 and 2000. In fact, in X% of our simulations, the winner could be changed if less than 1 in 1,000 voters switched who they voted for.”

        Maybe that kind of statement was made somewhere, but I couldn’t find it with the main forecasts or articles I read. To me, it seems distinct from saying Biden can only be confident in a win if he gets more than 52% of the popular vote, but maybe I misunderstand you.

        * These numbers may be wrong-ish, they are quick estimates.

        • fogpine, the forecast was bogus! In the sense that you mean, at least. Systematic polling error was quite large in some states that happen to be very important states (like the midwest). Pollsters thought they had this fixed from 2016; they didn’t. Andrew has been engaging in what I think is a very constructive manner with how the modelling could have been done differently/better, but at least from my relatively ignorant perspective it seems like this is something that has to be fixed at a level closer to the ground than the modelling. That is, I’m just not sure what a modeler is to do when polls (with adjustment) give an 8% margin and the true margin is about half a percent.

          I think pollsters may have to start doing something radically different, like constructing long-term panels of many thousands of people in key states. Expensive!

        • Thanks for your comment Anonymous. Polling error was bad in 2020, but not unprecedentedly bad. Near the election, the main purpose of forecast models is to account for the uncertainty added by possibilities of polling errors that are above and beyond what is expected from the sampling design and statistical adjustments that the polls already use. So when you write,

          “I’m just not sure what a modeler is to do when polls (with adjustment) give an 8% margin and the true margin is about half a percent.”

          from my perspective, this is exactly what the modelers are responsible for! It is their work that is meant to account for possibilities like this, and factor them into the final win probability they assign each candidate. This also means that polling errors make (good) election forecast models more useful and important, not less relevant.

          “I think pollsters may have to start doing something radically different, like constructing long-term panels of many thousands of people in key states”

          Maybe! It’s a good thought.

        • Pew used long term panels. Also, the USC panel was long term (including 2016 participants). Their national polls were around Biden + 10, so missed by around 5-6.

  7. The fragility of the predictions on electoral college count is in a similar way explained by how a small sample size and a small effect magnitude (≈50%) can give rise to both a significant p-value and a large type-S error.
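
    A quick simulation of that point, in the spirit of Gelman and Carlin’s design-analysis work (the numbers here are just illustrative): with a small true effect and a noisy estimate, the estimates that clear the significance bar are exaggerated and sometimes have the wrong sign.

      import numpy as np

      # Type-S / type-M simulation with a small true effect and low power.
      rng = np.random.default_rng(2)
      true_effect, se, n_sim = 0.5, 2.0, 200_000

      est = rng.normal(true_effect, se, n_sim)
      sig = np.abs(est) > 1.96 * se                    # "significant" estimates
      type_s = (est[sig] * true_effect < 0).mean()     # wrong sign, given significance
      type_m = np.abs(est[sig]).mean() / true_effect   # exaggeration ratio
      print(f"power {sig.mean():.2f}  type-S {type_s:.2f}  type-M {type_m:.1f}")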

    • It was pretty obvious they gave up on Florida in the last week, and the Democratic commentators were talking about losing the Hispanic vote there for several weeks in the run-up. Yes, a lot of campaigns have private polling.

  8. Asked before but got no answer. I’ll try again. What could a better methodology be for estimating non-response bias?

    I know some pollsters get a group of people that they poll continuously – so people picking up the phone wouldn’t be the biasing exclusion criteria – but I would imagine there would be self-selection bias in putting such a group together.

    Why not do incentivized “focus groups” with a wide range of demographically organized cohorts and then weight them to be nationally (or by state) representative? Would that be better? Maybe even do that state by state. Would be expensive and require a lot of resources, but with so much focus on polling… That way you’re not assuming your sample is representative (when it isn’t) just because it’s random.

    Also, people are guessing that “Republicans” are less likely to respond to polling (it’s politically convenient for Republicans to argue that they’re intimidated by those mean politically correct libz), but I wonder if there isn’t a confounding non-response bias. For example, maybe people stretch the truth about their income level or education level – which seem to me to be more in line with typical social desirability biasing in self-report surveying – which could look a bit like rural white Republicans not responding.

    • >>if there isn’t a confounding non-response bias

      One speculation I’ve seen is: Democrats/Democrat-leaning take COVID more seriously, therefore are more likely to be sitting at home bored this year because they’re limiting their activities, thus more likely to pick up the phone.

      Thus response bias favors Democrats/Democrat-leaning in a way that wasn’t predicted from analysis of the 2016 polling miss, since it’s 2020/COVID specific.

  9. Just a thought. Could pollsters try to trade bias for variance? The usual thing in statistics, as everybody knows. For example, vary questions on purpose to probe voters of different persuasions. For example, start some questionnaires with some “Trump friendly” questions like “do you approve of Trump’s job on the economy?” or “do you approve of Trump’s efforts to bring peace to the Middle East?” Something that puts Trump supporters in an easier state of mind where they can tell pollsters something good about their candidate first. And start another group of surveys with unfriendly/contentious questions (coronavirus, impeachment). Is it done already? Would it work?

  10. I’ve always been skeptical about a Bradley effect, but people have not really discussed how many Evangelicals in particular do not want to vote for Catholics (and all the “Biden goes to church” messaging highlights that identity) and how many others did not want to vote for a not-White VP, especially one who is female and has a Jewish spouse. I’m not talking about a lot, but in some of the close states a small percentage scraped off to the Trump side or to not voting, combined with social acceptability bias, can make a difference.

  11. Can anyone comment on the affidavit filed by the mathematician from Williams College? I’m thinking that if they are only surveying Republican voters, then a survey of Democrat voters is in order and we would probably find similar error rates there too, thus making the whole analysis moot. Or are there some shenanigans in the affidavit or methodology?

    https://justthenews.com/politics-policy/elections/mathematics-prof-says-sworn-statement-many-56000-gop-ballots-pa-may-be

      • I’m actually proud of the fact that I need to look up these people for background information – that’s how out of the media loop I am. So, the affidavit/analysis from the Williams guy is pretty awful, and it does continue to shock me what people are willing to do despite knowing better. So, I went to the source where the affidavit was reported – “Just the news.” I never heard of it (again, a source of pride) – but the editor, John Solomon, has quite a resume (never heard of him either, another source of pride). With his background and a media source that claims to just present facts and eschews bias, it started to look like a legitimate report. Then, after searching John Solomon a bit, I discover that he has quite a reputation. He has a legacy of supporting Trump, a Fox commentator who has somewhat recently been let loose from Fox, etc etc.

        Here is my point: we have reached a point where nothing can be trusted. I am not naive – in fact, I was raised to be skeptical of virtually everything. And I have some of the skills needed to see through poor analysis and reporting. But the media environment is so inundated by intentional (and accidental) deception that we truly live in a world of alternate realities. For that matter, the research environment is not any better. We have tens of millions of voters receiving their “information” in this environment. It’s hard to find much about democracy to like in this environment – of course, until you consider the alternatives.

        • It’s funny to me that I kind of had quite a different reaction as a function of knowing about the source. I was already familiar with the quality of “Just The News,” having seen Looney Tunes stuff from them before.

          (I also took note of the use of “Democrat” rather than “Democratic” in JS’s post, which could sometimes indicate there’s more to the – I’m just asking questions about an absurd analysis – frame for the comment.)

          At any rate, I was going to post something about the source but (1) there’s a type of fallacy in dismissing a report just based on its source and (2) I would think that the vast majority of times someone posts an item from that source they must know about the quality of that source and so they aren’t really interested in meaningful engagement, so it would be a waste of time to point out the dubious nature of the source.

        • My purpose in posting was to get some more expert opinion than my own. I suspected the analysis in the affidavit was flawed in some way, but I don’t want my own bias to lead me to that conclusion. I want to be sure I am justified in my conclusions. I mention “Democrat” to mean that it seems they only considered ballots for registered Republicans in their analysis (unless I missed something). My impression is that they want to imply these would have been votes for Trump had they been properly handled and accounted for. So my suspicion is that a similar issue would arise if they were to analyse registered Democrats too. In fact, it might be the case that many more Democrats requested mail in ballots and therefore experienced errors with them being lost or not counted. So the overall compounding of errors might have actually favored Trump. He did in fact do better in PA in 2020 than in 2016, is what I read.

          I like to spend time looking into things that my intuition tells me are nonsense just to make sure I’m not overlooking something sensible due to my own bias. If there was significant fraud and I believe otherwise, I’d like to know. This affidavit is the first participation in this theatre by someone who might actually have some real expertise and that uses actual quantification, so I wanted to get some eyes on it to know what the problems with it are.

        • Fair enough.

          FWIW – substituting “Democrat” where typically in the past one would likely have used “Democratic” (as in Democrat voters for Democratic voters) has become very much a rightwing trope.

    • You suggest that “errors” would cancel out but it isn’t necessarily so.

      Assuming the estimates are correct (there are many reasons why they could be biased) there were 40k mail ballots requested by someone else and not counted. Of course the problem would be with counted fraudulent votes, not with the uncounted ones.

      If we knew how many counted mail ballots had been requested by someone else using the name of registered republicans and registered democrats and registered independents we would still have no idea of who had benefited from the fraud.

      • But without a reason to suspect who benefited from the errors and having only estimated the errors for registered Republicans, we should probably expect similar error rates for ballots registered as Democrat or independent. With your point, they should have done that to overall increase the total number of ballot errors. We would need a reason to suspect those ballots deviated from the overall vote distribution significantly enough to change the result. I’m honestly still not sure what they actually found and how unusual it is.

        • If it is true (I don’t say that it is, I have no idea) that there were tens of thousands of ballots cast under the name of people [1] without their knowledge, why should we assume that those ballots didn’t deviate from the overall vote distribution? Those wouldn’t be random errors, it would be outright fraud.

          [1] dead or alive, registered as a Republican, a Democrat or an independent

        • From what I can gather, they surveyed registered Republicans who they know didn’t turn in their mail ballots, and found that some say they never requested a ballot. That definitely is a problem if it is what happened. I can imagine many sources of error unaccounted for in their analysis. Sampling error, and the respondent remembering incorrectly would be two big sources. I still think it would be important to do the same analysis for voters registered as other than Republican. If this phenomenon is not uniform across all registrants, then it would warrant additional scrutiny. Even if it is uniform, it would be worth investigating how this occurred, but would be less worrisome (since it would seem like a generic error in the system and not biased one way or the other). Given that 165k R mail ballots weren’t turned in, there were probably 4x that many D ballots sent out but not turned in. We’d probably find 30% of a sample of 2,000 of them don’t remember requesting a ballot either. I tried to check the PA vote dashboard to get those numbers but the dashboard appears to no longer be available.

        • If mail in ballots were requested and voted, that would show up when people went to vote. The rule is that

          “If the pollbook shows the voter has timely returned and voted their absentee or mail-in ballot, they are not eligible to vote by regular ballot at the polling place.
          These voters are not eligible to vote on the voting equipment but may vote provisionally if they believe they have not already voted and are eligible to vote.
          Voters who have requested an absentee ballot or mail-in ballot and are not shown on the district register as having voted the ballot and who appear on Election Day to vote can only vote provisionally at the polling place, unless they surrender their ballot and outer return envelope to be spoiled and sign the required declaration before the judge of elections.”

          There are no reports of that, but there is an interesting issue in PA. The state put a check box on the mail-in ballot request for the primary election that automatically generated a general-election mail-in ballot being sent, and it led to great confusion:

          https://www.post-gazette.com/news/politics-state/2020/10/16/pennsylvania-rejected-mail-ballot-applications-duplicates-voters/stories/202010160153.

          So when Steven Miller claims that

          “Almost surely, the number of ballots requested by someone other than the registered Republican is between 37,001 and 58,914”

          he is on shaky ground, especially considering that election turnout was 71% in PA. If those ballots had been returned, it would have shown up at the polls when those registered Republicans went to vote.
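
          For reference, the kind of naive extrapolation that produces an interval like that looks roughly like this (the survey numbers below are made up, and the interval only reflects sampling error, not recall error or the check-box issue):

            from math import sqrt

            # Naive extrapolation: survey some of the ~165k registered
            # Republicans whose mail ballots were sent but not returned,
            # take the fraction who say they never requested one, and
            # attach a binomial confidence interval. Survey numbers are
            # hypothetical.
            population = 165_000
            n_surveyed = 2_000
            n_never_requested = 560

            p = n_never_requested / n_surveyed
            se = sqrt(p * (1 - p) / n_surveyed)
            lo, hi = p - 1.96 * se, p + 1.96 * se
            print(f"{p * population:,.0f} ballots "
                  f"(95% CI {lo * population:,.0f} to {hi * population:,.0f}), "
                  "assuming every 'never requested' answer is accurate")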

        • > The state put a check box on the mail in ballot request for the primary election that automatically generated a general election mail in ballot being sent and it led to great confusion

          Interesting. That would explain why some people were not aware that they had requested mail-in ballots (and why they were not returned by some impersonator).

        • Yes, this indeed probably explains the phenomenon mostly. They didn’t technically request a mail ballot, but it was automatically sent out due to their checking that box.

          Honestly, I’m still just shocked that an actual successful PhD mathematician would sign a legal oath on this document. Can we get a formal letter together and have thousands of statisticians sign it, rebuking this guy for his terrible work? This is damaging both to the profession and national security.

        • A little more background on the PA mail in check box – the description of checking that box on the primary ballot is as follows:

          “What is an annual mail-in ballot request? If you indicate you would like to be added to the annual mail-in ballot request list, you will receive an application to renew your request for mail-in ballot each year. Once your application is approved, you will automatically receive ballots for the remainder of the year and you do not need to submit an application for each election. If you update your voter registration due to relocation out of county after you submit an annual mail-in request, please ensure your annual status is transferred when updating your address.

          WARNING: If you receive a mail-in ballot and return your voted ballot by the deadline, you may not vote at your polling place on election day. If you are unable to return your voted mail-in ballot by the deadline, you may only vote a provisional ballot at your polling place on election day, unless you surrender your mail-in ballot and envelope to the judge of elections to be voided to vote by regular ballot.”

          While the instructions are clear if you read it carefully, I can easily see confusion in people’s minds if they checked the box in the primary. If you read it carelessly, it seems as though you can get the mail-in ballot and then ignore it if you wish (or if you forget) – that’s not what it says, but I can see people possibly reading it that way. Of course, both Republicans and Democrats might misunderstand the instructions, or the check box might have been differentially used by the two, but in any case it seems a relevant point for interpreting the supposed survey results that Miller was analyzing.

    • I just realized the initial numbers in the first paragraph don’t even add up. They say “almost 20,000” (which indicates to me slightly less than 20,000) yet the other numbers add up to well over that.

      He swears the math in the affidavit is true though!

    • So Miller has essentially disavowed his own testimony: https://www.berkshireeagle.com/news/local/williams-prof-disavows-own-finding-of-mishandled-gop-ballots/article_9cfd4228-2e03-11eb-b2ac-bb9c8b2bfa7f.html

      “But, Miller’s affidavit was met with sharp criticism from his peers, who agreed with his later statement that it was wrong to separate the basic mathematical analysis from questions about the validity of the data.

      Statisticians, mathematicians and other academics who work with large data sets told The Eagle they were deeply concerned that the data collection was flawed and therefore the results meaningless.”

      The original affidavit was an early leak and not the final version: https://williamsrecord.com/wp-content/uploads/2020/11/Attachment-A.pdf

      • I’m glad to see this, but there is a bit more to the story. I’ve exchanged several emails with Miller and was relieved (not happy) to see that his final version was more palatable than the initial version. Given all the caveats, and the sophomoric analysis, it is still embarrassing that he took this work on. But his politics and mine differ, so perhaps he thought it was for a good cause. What still irks me is who he has aligned himself with. I don’t know that the data he was working with is wrong – neither does he. But I think the credibility of his sources has proven itself not worthy. There is the other affidavit story listed today on this blog (from a different author). But there is also the fact that his clients released a draft of his testimony that does not match what he finally filed. He had signed the draft – and told me it was a draft, even though it was marked “final.” He might just be inexperienced and/or naive, but I doubt it. But even if he is, he is apparently willing to trust the data enough even though it came from the same people that are willing to release a copy that he supposedly did not agree to. I think the evidence suggests that his sources cannot be trusted. But he chose to ignore all this and do the work anyway. That is an ethical breach that bothers me, regardless of his politics.
