More bad news for the buggy-whip manufacturers

In a news article regarding difficulties in using panel surveys to measure the unemployment rate, David Leonhardt writes:

The main factor is technology. It’s a major cause of today’s response-rate problems – but it’s also the solution.

For decades, survey research has revolved around the telephone, and it’s worked very well. But Americans’ relationship with their phones has radically changed. It’s no surprise that survey research will have to as well. . . .

In the future, we are unlikely to live in a country in which information is scant. We are certain to live in one in which information is collected in different ways. The transition is under way, and the federal government is among those institutions that will need to adapt.

Let’s hope that the American Association for Public Opinion Research can adapt too.

24 thoughts on “More bad news for the buggy-whip manufacturers”

  1. The David Leonhardt article is motivated mainly by an academic article suggesting that rising nonresponse in labor force surveys has biased estimates of the US unemployment rate downward. The response rate in these surveys has gone down from 96% to 89%.

    Elsewhere in the article Leonhardt notes that response rates in telephone surveys have gone down from a “healthy” 36% to 9%. (As a side point, how do we know what response rates are in typical surveys?) Am I the only one who thinks that the bar shifted somewhere in the middle of the article?

    It seems to me that many polls with single-digit response rates would qualify as “gold-standard polls” as discussed recently on 538:

    http://fivethirtyeight.com/features/are-bad-pollsters-copying-good-pollsters/

    There seems to be a determination out there to believe that some polling conforms to a simple textbook story of how surveys are supposed to work.

    Here is, in my opinion, a much more forward-looking and sophisticated discussion of the future of polling:

    http://downloads.bbc.co.uk/podcasts/radio4/moreorless/moreorless_20140912-1700a.mp3

  2. Naive question: as cellphone penetration increases, why can’t we transition from conventional land-line surveys to cellphone surveys?

    That’s still the old methods of analysis, just a different medium, right? Or will we never be able to get anywhere close to the old response rates with cellphones?

      • I thought it was some legal issue about cold-calling cell phones. If so, can’t we just legislate that out?

        Also, in the past I remember people worrying that cellphone users skewed toward a young, elite demographic, but with current high cellphone penetration levels I doubt that’s going to be a big issue. Cellphones are as representative as it gets.

        • It is a legal issue: according to the article linked below, you can’t auto-dial cell phones; they have to be hand-dialed. There may also be restrictions on calling numbers that pay per minute to receive calls, but I can’t remember.

          I suggest we create a “reverse toll call” area code and make all polls use it. The question is how to price it. I suggest GDP per capita divided by 1000 per call, which these days is about $50 and would cover most people’s monthly cell phone bill in one call. I think that’s about the right level of deterrent.

          Also I would totally set up my VOIP PBX to take every one of those calls. I can’t promise I’d have it connect to a human though ;-)

        • I agree that the restriction on auto-dialing cell phones is a (legal) problem for using mobile phones for surveys/polls. As for cell phone users who pay by the minute, even to receive calls: that would cause terrible response rates, even if it were legal!

          I still think the primary problem with doing polls by cell phone is the loss of location data, which is very important and is typically keyed to ZIP code. Land lines can be reliably linked to ZIP codes; mobile phones can’t.

    • Here (http://bits.blogs.nytimes.com/2012/11/12/how-cellphones-complicate-polling/?_php=true&_type=blogs&_r=0) is a link from a few years ago that says cellphone polling is more expensive than landline calling. It also says response rates dropped from 36% to 9% and that if they fall much further, this might be a problem for the sample being representative of the population! Wow, maybe I am mis-skimming it, but the author thinks 91% missing isn’t a concern?

      • I was at a workshop at the UK census office 20 years ago, and there were the same statements about how people were screening calls with their answering machines and how this would be the downfall of surveys unless things changed radically.

        Interestingly, at that time we had a couple of papers in POQ, AAPOR’s main journal, and needed to argue why our non-probability sample was okay.

  3. A known probability sample is sufficient for valid population inference but not necessary. What is necessary is that selection be (conditionally) independent of the outcome. If so, the sample is (asymptotically) informative for the population no matter how twisted and narrow the selection (so long as it is large enough). If not, parametric assumptions are needed.

    I discuss this in the context of external validity and a structural approach here.

    Poststratification is a heuristic way of doing this (in the sense that the assumptions in the causal model, and the logic for why it works, are left implicit).
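
    To make the claim concrete, here is a minimal simulation sketch (the setup, variable names, and numbers are illustrative assumptions, not anything from the comment): selection depends on a covariate, age group, that also shifts the outcome, so the raw sample mean is biased, but poststratifying the cell means to known population shares recovers the population mean.

    ```python
    # Minimal sketch: selection is independent of the outcome only
    # conditionally on age, so the raw mean is biased but the
    # poststratified mean is not. All numbers are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000

    age = rng.integers(0, 2, size=N)                   # 0 = young, 1 = old
    y = rng.normal(np.where(age == 1, 1.0, 0.0), 1.0)  # outcome shifts with age

    p_respond = np.where(age == 1, 0.25, 0.05)         # old people respond 5x as often
    r = rng.random(N) < p_respond
    ys, ages = y[r], age[r]

    print("population mean:    ", y.mean())            # ~0.50
    print("raw sample mean:    ", ys.mean())           # ~0.83, biased

    pop_share = {0: 0.5, 1: 0.5}                       # known population shares
    ps = sum(pop_share[g] * ys[ages == g].mean() for g in (0, 1))
    print("poststratified mean:", ps)                  # ~0.50 again
    ```

    Note that the correction uses only the responders within each cell, which is where the “so long as it is large enough” caveat bites: sparse cells make the poststratified estimate noisy.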

  4. Pingback: A representatividade das pesquisas de opinião | Blog Pra falar de coisas

  5. The linked Leonhardt NYT article had this quote:

    The declining response rate to surveys of almost all kinds is among the biggest problems in the social sciences.

    Is this really true? “One of the biggest problems” sounds pretty strong. Are surveys really that crucial to the social sciences? Other than the unemployment rate, which other measurements have been shown to be seriously affected by the decline in phone response rates?

    Of all the surveys conducted in the social sciences, what percent are traditional phone surveys?

    • I would agree. Surveys are crucial to the social sciences: insofar as these are empirical sciences, most empirical work is done via surveys, and there is seldom any other possible way. Of course there are some experiments and some non-survey sampling, but where anything concerning the broader society is at stake, there is almost no way around survey sampling. The only notable exceptions I can think of are psychology (though it has the college-freshman sampling problem) and educational research (which has a lot of relatively high-quality data from school evaluations, for example). I might be biased, coming from sociology, though.

      • But do these declining response rates always, or even mostly, have a drastic impact on quality? That is, what if the non-responses are mostly random?

        Alternatively, even when response rates were happily high, were we so sure that we were doing a good job of picking targets in an unbiased, representative manner?

        • I agree that a 36% response rate and a 9% one both pose the same basic problem (potentially highly non-representative samples). It’s not clear what you could do to improve the state of affairs through models (as opposed to changing data-collection methods) unless you have done some pretty careful modeling and data collection to predict what causes non-response and how it relates to opinions.

        • The bad thing is: we actually don’t know. If the non-response is mostly random, there won’t be a big problem, but I don’t think there is much hope of that being true.
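
          As a toy illustration (the numbers here are invented, not from any survey): at the same 9% response rate, purely random non-response leaves the estimate essentially unbiased, while response that depends on the answer itself badly biases it.

          ```python
          # Toy contrast: same 9% response rate, very different bias,
          # depending on whether non-response is random or related to y.
          import numpy as np

          rng = np.random.default_rng(1)
          N = 1_000_000
          y = rng.binomial(1, 0.5, size=N)      # true proportion = 0.5

          # Non-response completely at random: everyone responds w.p. 0.09.
          mcar = rng.random(N) < 0.09
          print("random non-response:   ", y[mcar].mean())    # ~0.50, fine

          # Non-response tied to the outcome: y=1 responds at 12%,
          # y=0 at 6%; the overall response rate is still ~9%.
          mnar = rng.random(N) < np.where(y == 1, 0.12, 0.06)
          print("selective non-response:", y[mnar].mean())    # ~0.67, biased
          ```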

        • The thing that concerns me is that I see a bunch of articles, posts, etc. that first mention the low response rate and then immediately jump to labeling it a mega-crisis.

          I feel that’s a bit premature. Shouldn’t they also try to measure whether the low response rate altered the survey results in a substantive way? Has that been done often? Or is it just too hard to do, so people avoid it?

        • Rahul:

          Just to be clear, I brought up the low response rate in response to that AAPOR guy who was arguing that random-sample telephone surveys were cool but opt-in surveys couldn’t be trusted because blah blah blah. My comment was that with nonresponse rates around 90%, random-sample telephone surveys essentially are opt-in surveys. I’m not slamming phone surveys; I’m just saying that, like other opt-in surveys, they should be matched to the population as well as we can manage. And, indeed, organizations that run telephone surveys do a lot of corrections, both in design and analysis.
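
          To give a concrete sense of one such analysis-stage correction, here is a bare-bones sketch of raking (iterative proportional fitting), which reweights respondents until the weighted sample matches known population margins. The variables, margins, and numbers are invented for illustration; this is not any particular organization’s actual procedure.

          ```python
          # Bare-bones raking: adjust unit weights until weighted sample
          # margins match known population margins on sex and age band.
          # All targets and data are invented for illustration.
          import numpy as np

          rng = np.random.default_rng(2)
          n = 2000
          sex = rng.integers(0, 2, n)               # sample, possibly skewed
          age = rng.integers(0, 3, n)

          sex_target = np.array([0.49, 0.51])       # known population margins
          age_target = np.array([0.30, 0.40, 0.30])

          w = np.ones(n)
          for _ in range(50):                       # alternate until converged
              total = w.sum()
              for g in (0, 1):                      # rake on sex
                  m = sex == g
                  w[m] *= sex_target[g] * total / w[m].sum()
              total = w.sum()
              for g in (0, 1, 2):                   # rake on age
                  m = age == g
                  w[m] *= age_target[g] * total / w[m].sum()

          # Weighted margins now reproduce the targets.
          print([round(w[sex == g].sum() / w.sum(), 3) for g in (0, 1)])
          print([round(w[age == g].sum() / w.sum(), 3) for g in (0, 1, 2)])
          ```

          Raking only fixes the margins you rake on, of course; it does nothing about non-response driven by variables you haven’t measured.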

        • Andrew:

          No, not you. But Leonhardt seems to clearly make the jump that “Low Response Rate = Big Crisis in Social Sciences”.

          That’s possible but I’m not sure I see convincing evidence yet.

        • > “…not sure I see convincing evidence yet.”

          This response-rate stuff is a routine topic here and, IMO, is clearly a major (and embarrassing) issue for professional pollsters and the survey-research establishment.

          Lots of talk about transforming very-low-response-rate samples into post facto scientifically valid samples, but it mostly sounds like advice on how to make a peanut-butter sandwich without peanut butter.

          Big-picture perspective seems absent… rather amazing that this issue cannot be definitively resolved by the professionals after decades of ‘uncertainty’.

  6. Sites like YouGov seem to give panelists money for answering surveys. Is there any work on whether such payments affect the quality of results?

    Response rates are one thing; incentives are another.

    • Rahul, yes: now THAT is a very interesting question! YouGov definitely pays panelists/survey participants. I would be curious whether those payments affect the quality of the results, although I wonder if such a determination is feasible.

      By “feasible”, I mean two things. The first pertains to statistical design given the available data. I could go into a lot of detail about that, and I’m sure that Professor Gelman’s blog readers could too, so I’ll leave it at that. The other aspect of “feasible” concerns YouGov itself. YouGov is a growing UK company with a brisk pace of mergers and acquisitions. If they keep up that pace, they’ll be the primary source of opinion-poll data, which will make it even more difficult to do any accuracy check.
