
A political reporter asks some questions about polling

John Kruzel from PolitiFact writes in with some questions:

Readers have asked us to look into Trump’s claim that his support among African Americans doubled as a result of praise from rapper Kanye West. (Trump: “Kanye West must have some power because you probably saw, I doubled my African-American poll numbers. We went from 11 to 22 in one week. Thank you, Kanye, thank you.”)

Trump appears to have been referencing a Reuters weekly poll that showed his approval rating among black men go from 11% for the week ending April 22, to 22% for the week ending April 29.

I’d be grateful if you’d help me explain to readers how reliable an indication this is that Trump is enjoying a favorable shift in public opinion among black men.

(1) The two weekly polls were based on surveys of, respectively, 118 and 171 respondents. According to Reuters’ own polling editor, the “credibility interval was more than +/- 9 percentage points for each measurement, which leaves open the possibility that his approval also could have dropped in this time frame.” In layman’s terms, why is having such a small sample size problematic? How might it have affected the reliability of this particular poll?

(2) CNN’s director of polling said the Reuters survey “was conducted using a non-probability online sample, meaning that those who participated signed up to take the poll rather than being randomly selected.” In plain English, what does this mean, and how might it have affected the results?

(3) Given the aforementioned caveats with respect to the poll Trump cited, how accurate is Trump’s statement that his support among African American (men) doubled in a week?

My reply:

(1) The classical margin of error is 2 standard errors, that is, 2*sqrt(p*(1-p)/n). Setting p=0.16 (the midpoint of the two estimates), we get margins of error of 0.07 and 0.06 for the two polls. The classical margin of error for the difference is sqrt(sigma_1^2 + sigma_2^2) = 0.09.
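The arithmetic above is easy to check. Here’s a short Python sketch, using p = 0.16 (the midpoint of the two estimates) and the two sample sizes from the question:

```python
import math

def margin_of_error(p, n):
    """Classical 95% margin of error: 2 * sqrt(p*(1-p)/n)."""
    return 2 * math.sqrt(p * (1 - p) / n)

p = 0.16                           # midpoint of the 11% and 22% estimates
moe1 = margin_of_error(p, 118)     # week ending April 22 -> about 0.07
moe2 = margin_of_error(p, 171)     # week ending April 29 -> about 0.06

# Margin of error for the difference of the two independent estimates
moe_diff = math.sqrt(moe1**2 + moe2**2)   # about 0.09
```

So the observed 11-point swing is about the same size as the margin of error for the difference, which is why the data can’t distinguish a doubling from a much smaller change.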

So I guess when the Reuters polling editor said “credibility interval,” he meant 2 times the classical standard error.

(2) I don’t understand the question: the CNN polling director’s statement looks like plain English to me! All I would add is that every opinion poll is a non-probability sample. With nonresponse rates exceeding 90%, it doesn’t matter much if the respondents are randomly selected. I mean, sure, why not, but random selection does not give you a probability sample.

(3) I doubt Trump’s support among African American men doubled in a week. Such a statement is indeed consistent with the data, but the data are also consistent with much smaller changes.

P.S. Kruzel’s article is here. It has an error. Kruzel writes:

So while Trump claimed his approval rating among black men for the week ending April 22 was 11 percent, realistically it could have been anywhere from 2 percent to 20 percent.

I don’t see how Trump’s approval rate could realistically have been 2%. I mean, sure, 2% is possible—anything’s possible—but given that his support was 11% among the survey respondents, it seems pretty unrealistic to claim that the underlying rate could’ve been 2%.
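One way to see this: under a uniform prior, the posterior for the underlying rate given roughly 13 “approve” answers out of 118 (my back-calculation from the reported 11%; the exact count is an assumption) is a Beta distribution, and essentially none of its mass sits anywhere near 2%. A quick simulation sketch:

```python
import random

random.seed(0)
approve, n = 13, 118   # assumed: 11% approval among ~118 respondents

# Posterior for the underlying rate under a uniform prior: Beta(1+13, 1+105)
draws = sorted(random.betavariate(1 + approve, 1 + n - approve)
               for _ in range(100_000))
lo = draws[int(0.025 * len(draws))]    # 2.5% posterior quantile
hi = draws[int(0.975 * len(draws))]    # 97.5% posterior quantile
share_below_2pct = sum(d <= 0.02 for d in draws) / len(draws)
```

The 95% posterior interval comes out to roughly 7% to 18%, and the posterior probability of a rate at or below 2% is essentially zero, which is the sense in which 2% is “possible” but not realistic.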

I guess I’ll rate Kruzel’s post as Half True.


  1. Ted says:

    I give Kruzel a 0% Truth. Trump only claimed his poll numbers doubled! He didn’t make any inferences about error rates or the population. Kruzel is a hack looking to score easy “strokes” — an old sociology term for ego-grooming (a really old one).

    • Andrew says:


      Kruzel had some truths. First, Trump claimed he doubled his African-American poll numbers. Actually he only doubled his poll numbers for African American men, not African Americans overall. Second, of course Trump was making a claim about the population. Poll numbers are only interesting to the extent they represent the population, and the margin of error is relevant to interpreting poll numbers.

      But Kruzel also had some errors, as noted in my post above. So I think Half True is about right.

      Whether this was worth reporting is another story. But, for that, you have to blame Reuters, who did the poll in the first place, and Trump, who tweeted about it, along with Kruzel, who decided to write this story. You can also blame me, I guess—I just figured since I bothered to field the reporter’s question, I might as well share the answer with all of you.

  2. James Anthony (pseudonym) says:

    I’m not by any means a political scientist, but something immediately jumped out at me (besides the possible non-response bias, which was addressed in Kruzel’s article):

    Kanye West’s support was only made public recently, right? At least it seems to me that way, as someone who lives in Europe. Anyway, if that is the case, then is it possible that this may only be a temporary surge in support? Some may be swept up by the drama of it all, which may affect their opinions. Once the news dies down, maybe their feelings will revert back to “normal” again.

    Like how you might see a surge of support after a political rally, that will then probably diminish again over time. Is there a term for that sort of effect in political science/statistics? This seems like it might be somewhat similar.

  3. Carlos Ungil says:

    Dr. Hibbert: Homer, I’m afraid you’ll have to undergo a coronary bypass operation.

    Homer: Say it in English, Doc.

    Dr. Hibbert: You’re going to need open heart surgery.

    Homer: Spare me your medical mumbo jumbo.

    Dr. Hibbert: We’re going to cut you open, and tinker with your ticker.

    Homer: Could you dumb it down a shade?

    • James Anthony (pseudonym) says:

      Milhouse: Sorry Bart, I’m deeply immersed in the Teapot Dome scandal.

      Bart: Huh?

      M: However, it might be feasible in a fortnight.

      B: Wah?

      M: I can play in two weeks

      B: Juh?

  4. Jeff says:

    Kruzel set out to explain some statistical concepts to a general audience but would have done well to point out how many polls there are in this area, how noisy the signal is, and how ridiculous it is to declare a trend based on a single number. This is the message that is perpetually ignored in the reporting of poll results. A picture like the 538 Trump tracker is the black-box evidence that you shouldn’t take the Reuters poll (or any one poll) at face value; the math of sample sizes and non-probability sampling provide some explanation of why this is so.

  5. jrg says:


    Seems fair to point out that non-response is a big issue even with probability samples, but do we really believe that “a self-selecting online sample” is no worse than a careful study with lots of non-response? Seems like an easy thing to study. Has anyone done it?

    On Jeff’s point, I looked at the linked graph of ups and downs in the Reuters poll (,SEX:1/dates/20180101-20180501/type/week). It’s nice to see the ups and downs from week to week and get a sense of how much bigger (it does seem bigger) the swing was in the week being discussed.

  6. Zad Chow says:

    Why did Kruzel ask this series of questions if he didn’t bother waiting for your opinion? Perhaps he went with a shotgun approach (asked as many data experts as possible) and was on a deadline. Regardless, that particular sentence is enough to mislead readers significantly.

    • Andrew says:


      Yes, I think he sent the question to many people at once, I assume in the hope of getting free material for his website. I thought it would be amusing to turn it around and use his question as free material for our website here.

  7. I used to think Andrew was joking when he made replies like that! Now I know better.

    I’d have tried to reason by example. Let’s say we’re trying to estimate the average female height in the United States. Let’s suppose further that we do this by taking one random woman from the population and using her height. That’ll get us in the ballpark, but unless we’re very lucky, we’ll probably be off by a few inches or more. So let’s say we take four women and average their height. Better, but we still don’t expect to get a good number. What if we take sixteen women, or sixty four, or 256? Our estimate will get better and better. Stats will tell you how much better—the expected error goes down as the square root of the number of women drawn—if you take a sample size of four, the estimate will have half the expected error from measuring just one.
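    The square-root scaling in that example can be checked by simulation. This sketch assumes a made-up height distribution (mean 64 inches, sd 2.5, roughly plausible for US women but invented here) just to illustrate the scaling:

    ```python
    import random
    import statistics

    random.seed(1)
    TRUE_MEAN, SD = 64.0, 2.5   # assumed population mean/sd, inches

    def expected_error(n, reps=20_000):
        """Average absolute error of the mean of a sample of size n."""
        errs = [abs(statistics.fmean(random.gauss(TRUE_MEAN, SD)
                                     for _ in range(n)) - TRUE_MEAN)
                for _ in range(reps)]
        return statistics.fmean(errs)

    e1, e4, e16 = expected_error(1), expected_error(4), expected_error(16)
    # Quadrupling the sample size roughly halves the expected error:
    # e1/e4 and e4/e16 both come out near 2.
    ```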

    Next up, non-probability samples. Let’s say I go online to my volleyball club and start asking women how tall they are. I’m likely to get a biased sample in the sense that no matter how many women I choose from the volleyball club, I’m not going to get an estimate of the population height, just the volleyball club height. I have the same problem if I go online and poll Xbox users—they’re not representative of the general population. For example, if you polled Xbox users, you might think the population was largely made up of 12 to 30 year old males with Republican political leanings (for the stats geeks, see Andrew et al.’s Xbox paper that adjusted for this sample bias with multilevel regression and poststratification). As Andrew says, it’s not like landline respondents are representative of the population, either.
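    The volleyball-club and Xbox problems are what poststratification tries to fix: reweight each demographic cell from its share of the sample back to its share of the population. Here’s a toy sketch with invented numbers and just two cells (the actual Xbox paper used multilevel regression and poststratification, which is considerably more involved than this):

    ```python
    # Hypothetical toy numbers: suppose young men are 20% of the
    # population but 70% of an Xbox-style convenience sample.
    sample = {                     # cell -> (respondents, approval in sample)
        "young_men":     (700, 0.30),
        "everyone_else": (300, 0.10),
    }
    population_share = {"young_men": 0.20, "everyone_else": 0.80}

    # Raw (unweighted) estimate just averages over respondents:
    n_total = sum(n for n, _ in sample.values())
    raw = sum(n * rate for n, rate in sample.values()) / n_total        # 0.24

    # Poststratified estimate reweights each cell to its population share:
    adjusted = sum(population_share[cell] * rate
                   for cell, (_, rate) in sample.items())               # 0.14
    ```

    The raw estimate (24%) is dragged toward the over-sampled cell; the reweighted estimate (14%) matches what you’d get if the sample composition matched the population.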
