Some people pointed me to this official statement signed by Michael Link, president of the American Association for Public Opinion Research (AAPOR). My colleague David Rothschild and I wrote a measured response to Link’s statement which I posted on the sister blog. But then I made the mistake of actually reading what Link wrote, and it really upset me in that it reminded me of various anti-innovation attitudes in statistics I’ve encountered over the past few decades.
If you want to oppose innovation, fine: there are a lot of reasons why it can make sense to go with old methods and to play it slow. Better the devil you know etc. And on the other side there are reasons to go with the new. Open discussion and debate can be helpful in establishing the zones of application where different methods are more useful.
What I really don’t like, though, is when someone takes a position and then just makes things up to support it, as if this is some kind of war of soundbites and it doesn’t matter what you say as long as it sounds good. That’s what Link did in his statement. He just made stuff up. AAPOR is a serious professional organization and this statement was a serious mistake on its part.
After reading Link’s article, I wrote a long sarcastic post blasting it. But then I deleted my post: really, what was the point? Instead, I’ll say things as directly as possible.
In his article, Link criticizes the recent decision of the New York Times to work with polling company YouGov to conduct an opt-in internet survey. Link states that “these methods have little grounding in theory and the results can vary widely based on the particular method used.”
But he’s just talking out his ass. Traditional surveys nowadays can have response rates in the 10% range. There’s no “grounding in theory” that allows you to make statements about those missing 90% of respondents. Or, to put it another way, the “grounding in theory” that allows you to make claims about the nonrespondents in a traditional survey also allows you to make claims about the people not reached in an internet survey. What you do is make assumptions and go from there. You gather as much data as possible so that the assumptions you make can be as weak as possible. The general principles here are not new, but a lot of research is done on the specifics. Regular readers of this blog will know about Mister P (multilevel regression and poststratification), which is my preferred framework for attacking such problems. The basic ideas of Mister P come from the work of Rod Little in the early 1990s, but it’s a pretty open-ended framework and I and others have been working hard on it for a while.
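To see the shape of the Mister P recipe, here’s a toy sketch with invented numbers. It uses simple shrinkage toward the grand mean as a crude stand-in for the multilevel-regression half of MRP; a real analysis fits a regression across many demographic-geographic cells and poststratifies on census counts.

```python
# Toy sketch of the poststratification idea behind Mister P, with
# invented numbers. Within each demographic cell we estimate support,
# shrinking small cells toward the overall mean (a crude stand-in for
# the multilevel-regression half of MRP), then reweight each cell by
# its known share of the population.

# (cell, respondents n, respondents saying "yes", population share)
cells = [
    ("young",  40, 26, 0.30),  # say, overrepresented in an opt-in panel
    ("middle", 50, 20, 0.45),
    ("old",    10,  3, 0.25),  # small cell: gets pooled the most
]

total_n = sum(n for _, n, _, _ in cells)
grand_mean = sum(yes for _, _, yes, _ in cells) / total_n

prior_n = 20.0  # pooling strength; a real multilevel model estimates this

poststratified = 0.0
for name, n, yes, share in cells:
    # partial pooling: small cells move toward the grand mean
    pooled = (yes + prior_n * grand_mean) / (n + prior_n)
    poststratified += share * pooled  # weight by population, not sample

raw_estimate = sum(yes for _, _, yes, _ in cells) / total_n
print(f"unadjusted sample mean:  {raw_estimate:.3f}")
print(f"poststratified estimate: {poststratified:.3f}")
```

The point of the pooling step is exactly the tradeoff described above: with more data per cell you can lean on the data; with sparse cells you lean on the model, and the assumptions do more work.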
Whether your data come from random-digit dialing, address-based sampling, the internet, or plain old knocking on doors, you’ll have to do some adjustment to correct for known differences between sample and population. This was true in 1995 and is even more true today, as response rates regularly go below 10%.
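The simplest version of that adjustment is classical cell weighting: upweight respondents from groups that are underrepresented relative to the population. A minimal sketch, with made-up numbers (the population shares would come from something like the census):

```python
# Minimal cell-weighting sketch: each respondent gets weight
# (population share of their cell) / (sample share of their cell).
# The same correction applies whether the sample came from phones,
# addresses, or an opt-in web panel. All numbers are invented.

sample = [  # (cell, n respondents, n saying "yes")
    ("urban", 70, 42),
    ("rural", 30,  9),
]
pop_share = {"urban": 0.55, "rural": 0.45}  # e.g., from census figures

n_total = sum(n for _, n, _ in sample)
unweighted = sum(yes for _, _, yes in sample) / n_total

weighted_yes = 0.0
for cell, n, yes in sample:
    w = pop_share[cell] / (n / n_total)  # cell weight
    weighted_yes += w * yes

weighted = weighted_yes / n_total
print(f"unweighted estimate: {unweighted:.3f}")
print(f"weighted estimate:   {weighted:.3f}")
```

Pure weighting blows up the variance when a cell has only a handful of respondents, which is one reason to prefer model-based pooling of the sort MRP does.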
Link talks a bit about transparency, which is a bit of a joke considering all the mysteries involved in conventional polling. How was it, again, that Gallup’s numbers were so far off in 2012?
This kind of aggressive methodological conservatism just makes me want to barf. Just to be clear: methodological conservatism doesn’t bother me. Indeed, I’d completely respect it if Link were to write something like, “Survey researchers have nearly a century of experience with probability sampling using random digit dialing, address-based sampling, and so on. For the purpose of public policy, it makes sense to trust and refine the methods that have worked in the past. New methods such as multilevel regression and poststratification might be great, but really we should be careful. And new methods should be held to the same high standards as we require for classical methods.”
That’s what Link could have said. And I’d have no problem with it. I mean, sure, I’d still disagree, I’d still prefer that this not be released as an official statement of the nation’s leading professional society of pollsters, I’d still point out the problem with 90% nonresponse rates, and I’d still argue that traditional survey methods have big problems. But I’d respect his argument.
What I don’t respect is the B.S. in the official AAPOR statement. There’s the bogus, bogus, bogus bit about “grounding in theory” and then there’s this:
Standards need to be in place at all times precisely to avoid the “we know it when we see it (or worse yet, ‘prefer it’)” approach, which often gives expediency and flash far greater weight than confidence and veracity.
Expediency and flash, huh? How bout you do your job and I do mine. My job when doing survey research is to get an estimate that’s as close to the population value as possible. If you think that’s “expediency and flash,” that’s your problem, not mine.
The statement continues: “AAPOR is committed to exerting leadership in maintaining data quality regardless of the methodologies being employed. To this end, we strongly encourage all polling outlets to proceed cautiously in the use of new approaches to measuring public opinion, such as those using non-probability sample methods, particularly when these data are used in news or election reporting or public policymaking.”
Sorry, but this AAPOR statement is not an exercise in leadership. It’s an exercise in rhetoric, and it makes me want to barf. “Little grounding in theory . . . expediency and flash . . .”: Give me a break.