John Cook writes:

Some physicists say that you should always have an order-of-magnitude idea of what a result will be before you calculate it. This implies a belief that such estimates are usually possible, and that they provide a sanity check for calculations. And that’s true in physics, at least in mechanics. In probability, however, it is quite common for even an expert’s intuition to be way off.

I agree with Cook’s general message but I’d say it slightly differently. I’d say that many experts have **no intuition at all** when it comes to probability, which leads them to miss huge conceptual errors in their calculations.

The example that comes to mind is the largely atrocious literature in political science and economics on the probability of a decisive vote. A paper was published in a political science journal giving the probability of a tied vote in a presidential election as something like 10^-90. Talk about innumeracy! The calculation, of course (I say “of course” because if you are a statistician you will likely know what is coming) was based on the binomial distribution with known p. For example, Obama got something like 52% of the vote, so if you take n=130 million and p=0.52 and figure out the probability of an exact tie, you can work out the formula etc etc.
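To make the error concrete, here is a minimal sketch of that mindlessly applied formula, computed in log space so it doesn’t underflow. The inputs n = 130 million and p = 0.52 are just the round numbers from the post; note that with these particular inputs the binomial formula gives something far smaller even than 10^-90, which only underscores how detached from reality the calculation is:

```python
import math

def log10_tie_prob(n, p):
    """log10 of P(exact tie) under Binomial(n, p) with n even:
    C(n, n/2) * p^(n/2) * (1-p)^(n/2), via log-gamma to avoid underflow."""
    k = n // 2
    log_comb = math.lgamma(n + 1) - 2 * math.lgamma(k + 1)
    log_prob = log_comb + k * math.log(p) + k * math.log(1 - p)
    return log_prob / math.log(10)

# Sanity check on a tiny case: Binomial(2, 0.5) gives P(tie) = 1/2.
print(10 ** log10_tie_prob(2, 0.5))       # ≈ 0.5

# The "known p" calculation the paper relied on:
print(log10_tie_prob(130_000_000, 0.52))  # about -45,000
```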

On empirical grounds, that 10^-90 figure is ludicrous. You can easily get an order-of-magnitude estimate by looking at the empirical probability, based on recent elections, that the vote margin will be within 2 million votes (say), and then dividing by 2 million to get the probability of it being a tie or one vote from a tie.
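The arithmetic of that empirical estimate is trivial; the only input is the historical frequency of close elections, which the sketch below stands in for with a made-up round number (the 0.4 is purely illustrative, not an estimate from actual election data):

```python
# Order-of-magnitude empirical estimate of a tied popular vote.
p_margin_within_2m = 0.4    # ASSUMED: P(vote margin within 2 million votes)
window = 2_000_000          # spread that probability over ~2 million margin values

p_tie = p_margin_within_2m / window
print(f"{p_tie:.0e}")       # 2e-07
```

Whatever plausible number you plug in for the close-election frequency, the answer lands dozens of orders of magnitude above 10^-90.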

The funny thing—and I think this is a case for various bad numbers that get out there—is that this 10^-90 has no intuition behind it, it’s just the product of a mindlessly applied formula (because everyone “knows” that you use the binomial distribution to calculate the probability of k heads in n coin flips). But it’s bad intuition that allows people to accept that number without screaming. A serious political science journal wouldn’t accept a claim that there were 10^90 people in some obscure country, or that some person was 10^90 feet tall. But intuitions about probabilities are weak, even among the sort of quantitatively-trained researchers who know about the binomial distribution.

**P.S.** The point of this post is not to bang on the people who made this particular mistake but rather to use this as an example to illustrate the widespread lack of intuition about orders of magnitudes of probability, which is relevant to John Cook’s point regarding statistical thinking and communication.

Another example is business-school prof Reid Hastie’s apparent belief that “the probability that a massive flood will occur sometime in the next year and drown more than 1,000 Americans” is more than 20%. 20% sounds like a low number, low enough that Hastie didn’t consider that such floods have been extremely rare in American history. (Even Katrina drowned only 387 people, according to this source which I found by googling Katrina drownings.) This is not to disparage the importance of preparing for floods; even if the probability is only 1%, it still makes sense to do what we can to mitigate the risks. My point here is just that probabilities are hard to think about. It’s Gigerenzer’s point.

To continue with the Gigerenzer idea, one way to get a grip on the probability of a tied election is to ask a question like, what is the probability that an election is determined by less than 100,000 votes in a decisive state. That’s happened at least once. (In 2000, Gore won Florida by only 20-30,000 votes.) The probability of an exact tie is of the order of magnitude of 10^(-5) times the probability of an election being decided by less than 100,000 votes.
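The same arithmetic, Gigerenzer-style (again with a made-up, illustrative frequency for the anchoring event; only the orders of magnitude matter):

```python
# Anchor on an event we've actually seen: a decisive state settled by
# fewer than 100,000 votes (Florida 2000). Call its per-election
# probability 10% -- an assumed, illustrative number.
p_close_state = 0.10
p_tie_given_close = 1e-5   # an exact tie is ~1 of ~100,000 possible margins

p_tie = p_close_state * p_tie_given_close
print(f"{p_tie:.0e}")      # 1e-06
```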

See, that wasn’t so hard! Gigerenzering wins again.

Is Andrew being very subtle, or was it just a typo (Freudian slip?) to claim “In 2000, Gore won Florida by only 20-30,000 votes”?

Via Cook’s blog I found this pertinent TED talk by Rosling and son:

http://www.ted.com/talks/hans_and_ola_rosling_how_not_to_be_ignorant_about_the_world

The essence of the talk is that our prior judgements can be wildly off, and therefore we need to be properly informed.

Paul:

I’m not an expert on this one but it’s my understanding that Gore won Florida by about 20-30,000 votes, had all ballots been counted and recorded (for example, counting as Gore votes anyone who punched Gore on the ballot and also wrote in Gore, etc). It’s my impression that 20-30,000 was the best estimate out there.

I just did a quick Google search and found this, from political scientist Walter Mebane in the journal Perspectives on Politics, from 2004:

I don’t know why I said “20-30,000 votes” rather than simply “20,000 votes.” Maybe there’s another source out there estimating 20,000.

In any case, the point stands: I think there’s no serious debate that (a) tens of thousands more people voted for Gore than for Bush in Florida in 2000, but (b) even a gap of 30,000 is pretty close, percentage-wise.

Clearly, more people went to the polls in Florida intending to vote for Gore than for Bush. The Democrats, however, have traditionally had a larger problem than Republicans getting their would-be voters to properly execute the act of voting, a problem that was particularly large among the Democrats’ black base. Jesse Jackson rightly complained back in the 1980s that his supporters were more likely to be stumped by the complications of voting than the supporters of his rivals for the Democratic nomination.

Here’s a 2000 NYT article on how numerous Democrats in Jacksonville got confused at the polls and voted for more than one individual for President:

http://www.nytimes.com/2000/11/17/politics/17DUVA.html?ex=1133845200&en=8296464416bd4b79&ei=5070

Here’s the best part of the NYT article of 11/17/2000:

Nearly 9,000 of the votes were thrown out in the predominantly African-American communities around Jacksonville, where Mr. Gore scored 10-to-1 ratios of victory, according to an analysis of the vote by The New York Times.

The percentage of invalidated votes here was far higher than that recorded in Palm Beach County, which has become the focus of national attention and where Democrats have argued that so many people were disenfranchised it may be necessary to let them vote again. Neither Democrats nor Republicans have demanded a hand recount or new election in Duval County.

Local election officials attributed the outcome to a ballot that had the name of presidential candidates on two pages, which they said many voters found confusing. Many voters, they said, voted once on each page. The election officials said they would not use such a ballot in the future.

Rodney G. Gregory, a lawyer for the Democrats in Duval County, said the party shared the blame for the confusion. Mr. Gregory said Democratic Party workers instructed voters, many persuaded to go to the polls for the first time, to cast ballots in every race and “be sure to punch a hole on every page.”

“The get-out-the-vote folks messed it up,” Mr. Gregory said ruefully.

If Mr. Gregory’s assessment is correct, and thousands of Gore supporters were inadvertently misled into invalidating their ballots, this county alone would have been enough to give Mr. Gore the electoral votes of Florida, and thus the White House.

The voters turned out by Democrats, Mr. Gregory said, took the instructions to vote in every race to mean: “I’ve got to vote for Gore. I’ve got to be sure Bush doesn’t get elected. I’ve got to vote on every page.”

Democratic officials, Mr. Gregory said, should have told voters they were bringing to the polls: Vote for Gore, then skip the next page.

“In hindsight,” he said, “we didn’t fully understand the problem.”

The Duval County ballot listed Mr. Gore on the first page, along with Mr. Bush, Ralph Nader and two other candidates. Then on the second page were the names of five other presidential candidates. After voting for Mr. Gore, many Democratic voters turned the page and voted for one of the remaining names, Mr. Gregory said.

The double-marked ballots substantially affected Mr. Gore’s showing, a Times analysis of voting data suggests. More than 20 percent of the votes cast in predominantly African-American precincts were tossed out, nearly triple the rate in majority-white precincts. In two largely African-American precincts, nearly one-third of the ballots were invalidated.

I wonder if the page-splitting was just an innocent error by the ballot designer…

I see these differently. The idea of 10^90 is stupid; it’s greater than estimates of the total number of atoms in the universe. But people’s odd responses to weird questions, like the one about a flood drowning 1,000 people, don’t necessarily imply endorsement of the probability as “low” or some other version of low like “vanishingly low.” The point seemed more to be that people construct a weird narrative, like maybe God is smiting us, which works against rational evaluation, so people don’t say “really?”

Jonathan:

I agree that the numbers are different. 10^90 is pure innumeracy, whereas that 20% number on floods merely reflects some ignorance regarding natural disasters. The point about that 20% example is not that Hastie was misinformed regarding flood risks—after all, none of us can be an expert on every topic—but rather that he exhibited overconfidence in characterizing 20% as a “low” estimate, in an article in which he was criticizing others for their irrational thinking!

I think you may be misinterpreting Hastie. He wrote:

One group of respondents was asked, “What is the probability that a massive flood will occur sometime in the next year and drown more than 1,000 Americans?” The typical estimate was low (less than 20 percent).

I don’t think that when he says “low” he means “low compared to a reasonable answer.” You could argue that that’s the only thing that “low” should mean in such a sentence, and I might even agree with you, but I’m not sure that if you asked Hastie “what is the chance of a flood in the next year that kills more than 1000 Americans” he would give a number over 20%.

Here’s another likely example, although we won’t know for sure until 2025:

http://www.autismdailynewscast.com/warning-half-of-all-children-will-have-autism-by-2025/12873/laurel-joss/

Dr. Stephanie Seneff, a scientist at MIT: “At today’s rate, by 2025, one in two children will be autistic.”

BTW, I thought Bush won Florida on a 5-4 vote.

http://en.wikipedia.org/wiki/Bush_v._Gore

…and we even know whose vote decided the case…it was Sandra Day O’Connor’s vote (and I understand that she has since stated on the record that she made a mistake)

In what sense will we not know until 2025 whether, at today’s rate [of increase, I presume], by 2025 one in two children will be autistic? This seems like a simple mathematical exercise: tell me how many kids today are autistic, and the rate at which this is increasing, and I will calculate what fraction of children will be autistic in 2025.

I’m pretty sure the point is that “at today’s rate of increase” extrapolated to 2025 is a pointless mathematical exercise that has absolutely nothing to do with reality.

To anyone who believes that half of all children will be autistic in 2025, I have a bridge to sell you, built on swampland in Florida… it’s quite cheap.

It’s even worse than it seems: by 2100, two out of every one child will be autistic!

+1

Andrew, this is slightly off topic but, I think, of interest to you. I was wondering whether you are familiar with this paper by Christopher Achen, “Garbage Can Regressions,” whether his description of current political science research rings true to you, and what your take is on his statistical critique and proposed solutions.

Here’s a link to the paper:

http://qssi.psu.edu/files/achen_garbagecan.pdf

Developing intuition takes experience and a broad viewpoint. If you just publish a shotgun paper after barely dipping your toes in the water, it’s hard to have any useful intuition about the metrics of a field.

Another problem is not being a generalist: e.g., you don’t know your simulation is yielding crap because you’ve never seen the real equipment at work, so you have no developed intuition for its natural dynamics.

It happened five or six times in 2000 alone, depending on how you think about electoral-college ties.

http://en.wikipedia.org/wiki/United_States_presidential_election,_2000#Votes_by_state

You haven’t defined “tie.” If it means exactly the same number of actual (popular) votes for each of two major candidates in a national election, that probability may in fact be extremely low, but it is also practically meaningless: measurement error (ambiguous ballots and other inherent counting and tabulating errors) makes it impossible to pin down the actual vote counts. If you mean a tie in the electoral college, that is a completely different thing. A more meaningful definition of “tie” might be “nominal tally difference lower than the margin of error,” but of course that runs into disagreement over margin-of-error estimates, plus the aforementioned tally problems. So, please define your terms before criticizing.

Jeff:

This point has come up before.

Yes, what we’re really concerned about is the probability that a vote is decisive. For this calculation, it turns out that issues of recounts etc have essentially no effect on the probability.

For an explanation with details, see the appendix (page 674) of this paper with Katz and Bafumi from 2004.

You can define tie in the obvious way as two candidates with equal numbers of official votes. This is hardly “meaningless.” The probability that two candidates have the same number of official votes is basically the same as the probability that they have the same number of votes without measurement error.

Huh? The involvement of “officials” would only seem to introduce even more error. How are you defining “official votes”?

It doesn’t matter. You can start with the Platonic ideal of vote counts and impose however many layers of error you want, the vote differential distribution will not significantly change.

Looking further, I agree that the underlying distribution model is completely wrong. So some apologies for the previous comment. On the other hand, any basis like “recent national elections” is going to be a hopelessly small sample for making any reasonably meaningful estimate at all, no? (My first thought as to a model would be more along the lines of tug-of-war — presume equally-divided initial preference with competing influencers of various strengths operating over campaign time.)

Jeff:

Recent national elections are too small a sample to get a precise estimate, but they’re enough to make it clear that an estimate such as 10^-90 is hopelessly innumerate.

I’m not especially familiar with the relevant data, but my guess is that there has never been a tie, in the sense of exactly the same number of votes for the two leading candidates. The UMVUE is the sample proportion, which is 0. As you might argue in some of your other posts, it is absurd to think that a conceivable event like a tie has probability zero; this stems from insufficient regularization. We can do better by introducing some bias, by making some assumptions about the statistical process in play, e.g., that votes are IID Bernoulli draws, so the total is binomial. This gives us a non-zero result, but one that is still extremely low. This is not the best model, to be sure, but if a better model produced something quite a bit different from 0, shouldn’t we be suspicious? Isn’t this analogous to weakly informative priors, where we use some prior information to sand off the edges but deliberately allow the data to do most of the talking? And given a sample proportion of zero, with a large sample size, don’t we expect any mostly data-driven method to give some very low probability?

“The probability of an exact tie is of the order of magnitude of 10^(-5) times the probability of an election being decided by less than 100,000 votes.”

I wonder how this changes as “100000” becomes other numbers, both larger and smaller.


The Wikipedia article on Hurricane Katrina says the total death toll was well into four digits:

“The confirmed death toll is 1,836, with one fatality in Kentucky, two each in Alabama, Georgia, and Ohio, 14 in Florida, 238 in Mississippi, and 1,577 in Louisiana.[34][35] However, 135 people remain categorized as missing in Louisiana,[35] and many of the deaths are indirect, but it is almost impossible to determine the exact cause of some of the fatalities.[1]”

http://en.wikipedia.org/wiki/Hurricane_Katrina
