Can you trust international surveys? A follow-up:

Michael Robbins writes:

A few years ago you covered a significant controversy in the survey methods literature about data fabrication in international survey research. Noble Kuriakose and I put out a proposed test for data quality.

At the time, many questions were raised about the validity of this test. As such, I thought you might find a new article [How to Get Better Survey Data More Efficiently, by Mollie Cohen and Zach Warner] in Political Analysis of significant interest. It provides pretty strong external validation of our proposal and also offers a very helpful guide to the effectiveness of different strategies for detecting fabrication and low-quality data in international survey research.

I’ve not read this new paper—it’s hard to find the time, what with writing 400 blog posts a year etc!—but I like the general idea of developing statistical methods to check data quality. Data collection and measurement are not covered enough in our textbooks or our research articles—think about all those papers that never tell you the wording of their survey questions! And remember that notorious Lancet Iraq study, or that North-Korea-is-a-democracy study.

2 thoughts on “Can you trust international surveys? A follow-up:”

  1. I quickly checked the paper: the number of quality controls (30, to be exact) is fairly impressive, and to my eye these are sensible metrics as well. But I’d be more interested in reading more generally about practices like these:

    “Survey teams in each country used trained auditors to listen to audio recordings captured during each interview. Next, auditors employed by third-party firms or in LAPOP’s central office ran spot checks such as reviewing interview logs and verifying the field team’s auditing. Finally, a staff member at LAPOP’s central office ran weekly (and sometimes daily) checks of interview metadata. While auditors were able to review survey and metadata, they were not able to edit survey responses, which were uploaded to a remote server when mobile telephone service was available. Additionally, LAPOP conducted extensive enumerator training ahead of fieldwork to both reduce enumerator cheating and increase the overall quality of data collected.”

    Do we have rigorous trace data, logs, or at least documentation of similar practices in the big-money surveys? For Europe, I am thinking of the Eurobarometer, the ESS, and the like.

    I mean, from an IT perspective the “auditing” described above feels quite ad hoc…

  2. Super interesting article, and I’m very happy to see it shared here. I don’t, however, agree that it provides “strong external validation” of the PercentMatch methodology. I say this because I suspect that LAPOP used PercentMatch to help make decisions about which surveys to cancel.

    Cohen and Warner address this concern (“Scholars may be concerned that because LAPOP makes cancellation decisions using the quality control checks we study, there may be a deterministic link between these variables and the outcome of interest—necessarily yielding high predictive performance”), but I don’t find their response (that different people make decisions at different levels, and that no one person has all the relevant information from the quality checks) sufficient to establish strong external validation.

    That said, I use PercentMatch and think it can be a very useful tool. However, the Pew rebuttal convinced me that 85% isn’t always an appropriate cut-off. (A rough sketch of the calculation is below, for readers who haven’t used it.)

    Questions about external validation aside, I think the real focus of the article is on making the most of limited resources by not duplicating checks that provide the same information, and that part is well done.
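    As promised above, here is the PercentMatch idea as I understand it: each interview is scored by the highest share of identical answers it has with any other interview in the same survey, and interviews above a cut-off (often 85%) are flagged as likely duplicates or near-duplicates. Below is a minimal sketch of that idea in Python; the function name, the file name, and the choice to count missing values as non-matches are my own assumptions, not code from Robbins and Kuriakose or from Cohen and Warner.

        import numpy as np
        import pandas as pd

        def percent_match(responses: pd.DataFrame) -> pd.Series:
            """Maximum share of identical answers each interview (row) shares
            with any other interview; columns are the substantive questions."""
            X = responses.to_numpy()
            n = len(X)
            best = np.zeros(n)
            for i in range(n):
                # Question-by-question comparison of interview i with every interview;
                # NaN never equals NaN, so missing values count as non-matches.
                matches = (X == X[i]).mean(axis=1)
                matches[i] = 0.0  # ignore the self-comparison
                best[i] = matches.max()
            return pd.Series(best, index=responses.index, name="percent_match")

        # Hypothetical usage with the commonly cited (but, per the Pew critique, not universal) 0.85 cut-off:
        # df = pd.read_csv("survey_responses.csv")   # one row per interview
        # pm = percent_match(df)
        # flagged = pm[pm > 0.85]

    The brute-force pairwise loop is O(n²) in the number of interviews, which is fine for a typical national sample but worth blocking or vectorizing for large pooled files.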
