Wacky surveys where they don’t tell you the questions they asked

Maria Wolters writes:

The parenting club Bounty, which distributes their packs through midwives, hospitals, and large UK supermarket and pharmacy chains, commissioned a fun little survey for Halloween from the company OnePoll. Theme: Mothers as tricksters – tricking men into fathering their babies. You can find a full smackdown courtesy of UK-based sex educator and University College London psychologist Petra Boynton here.

(One does wonder how a parenting club with such close links to the UK National Health Service thought a survey on this topic was at all appropriate, but that’s another rant.)

So far, so awful, but what I [Wolters] thought might grab your attention was the excuse OnePoll offered for their work in their email to Petra. (Petra is very well known in the UK, and so was able to get a statement from the polling company.) Here it is in its full glory, taken from Petra’s post:

As the agency which commissioned this research and distributed the resulting news story, I would like to respond. OnePoll polled 3,000 mothers on behalf of Bounty, looking into the subject of pregnancy. The stats emerged that a small percentage of women admitted to tricking their partner into getting pregnant. I’d like to say that the resulting story in no way glorifies or condones this, in fact Bounty support the very opposite in their quotes. As market research specialists and providers of national news, we would always present the stats, as they are, however controversial. I would like to apologise to anyone who was offended by this piece of research.

So far, OnePoll have refused to release the original raw data, descriptive statistics for each of the survey questions, the survey questions themselves, a characterisation of the demographics of their sample, a proper operationalisation of “tricked”, or a description of their recruitment strategy. It could be that the survey was well designed and well phrased, and that the sampling was well balanced. But somehow, I’m not too optimistic.

What all this leaves us with are the “stats” which “emerged”, dressed up in a nausea-inducing little story for the tabloids, without sufficient context for their interpretation. To add insult to injury, the “stats” are presented “as they are”, without acknowledging that numbers are meaningless without a story.

What makes me furious is the way in which these nonsensical numbers are presented as a measure of an objective fact about the world (or, in this case, the microcosm of Great Britain and Northern Ireland), when in all likelihood they are as fabricated as James Frey’s or even JT LeRoy’s memoirs, but at the same time completely devoid of the deeper truths that even fake memoirs can reveal.

Wolters asks for my comments, given my experience in working with survey data. My first comment is that it’s often difficult to get details on survey methods. In 1995, Voss, King, and I published an article with details of methodology from nine national polling organizations. I like this paper a lot–and the only way we were able to get this information is that Voss had experience as a news reporter and knew how to get people to tell him things. The reluctance of survey organizations to give up information is understandable–it’s somewhat of a trade secret, and also it can take effort to write it all up–but, still, I find it frustrating.

I agree with Wolters’s general point that there’s no reason we have to believe something just because a pollster or news organization says it’s true. Recall the so-called survey of the political attitudes of the super-rich, which appears to have been based on only six respondents.
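To see just how little six respondents can tell you, here is a purely hypothetical sketch in Python (the actual responses and question wording were never published, so the numbers below are assumptions for illustration): an exact Clopper-Pearson 95% interval for a proportion, supposing 4 of the 6 answered yes to some question.

```python
from scipy.stats import beta

# Hypothetical numbers: say 4 of the 6 respondents gave a particular answer.
n, k = 6, 4

# Exact (Clopper-Pearson) 95% interval for the underlying proportion
lower = beta.ppf(0.025, k, n - k + 1)
upper = beta.ppf(0.975, k + 1, n - k)

print(f"point estimate: {k/n:.0%}, 95% CI: [{lower:.0%}, {upper:.0%}]")
# roughly [22%, 96%]: with n = 6, the data are consistent with almost anything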

P.S. Hey–this Zombies category is coming in handy once again. And just in time for Halloween!

1 thought on “Wacky surveys where they don’t tell you the questions they asked”

  1. Something I always worry about when it comes to polls: I know from long experience running psychology studies (many of which are surveys, just in a different context) that a good proportion of respondents will say yes to *anything*. That is, if you ask, "Are you currently dead?", you can easily get 5% of people saying "yes."

    In my field, we assume that some percentage of people aren't paying attention, misheard, etc., and we use methods of eliminating those participants (a quick simulation after this comment sketches the problem and one standard fix). I don't get the sense that surveys in other fields do the same thing. Do they?

    This comes up on my language blog periodically. Here's one example.
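As a rough illustration of the commenter's point, here is a minimal simulation in Python with assumed numbers (a 5% careless-responder share and a true rate of zero, both hypothetical) showing how random "yes"-saying alone can generate a small but headline-ready percentage, and how an attention-check filter helps:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000          # same sample size as the OnePoll survey
true_rate = 0.0   # hypothetical: assume nobody actually did the thing asked about
careless = 0.05   # assumed share of respondents who answer at random

# Careless respondents tick "yes" half the time; attentive respondents
# answer truthfully (always "no" here, since true_rate is zero).
is_careless = rng.random(n) < careless
answers = np.where(is_careless, rng.random(n) < 0.5, rng.random(n) < true_rate)
print(f"observed 'yes' rate: {answers.mean():.1%}")  # about 2.5%, pure noise

# One standard remedy: an attention check such as "please answer 'no' here",
# dropping anyone who fails. A random responder passes a single check half
# the time, so this thins the noise rather than removing it entirely.
passed_check = ~is_careless | (rng.random(n) < 0.5)
print(f"'yes' rate after filtering: {answers[passed_check].mean():.1%}")
```

With those assumed numbers, a "1 in 40 mothers" headline could come entirely from people who weren't reading the question, which is why studies typically use several such checks rather than one.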
