“The ‘Will & Grace’ Conjecture That Won’t Die” and other stories from the blogroll

From sociologist Jay Livingston:

The “Will & Grace” Conjecture That Won’t Die

From sociologist David Weakliem:

Why does Trump try to implement the unpopular ideas he’s proposed, and not the popular ideas?

History professor who wrote award-winning book about 1970-era crime, is misinformed about the history of 1970s-era crime

“West Virginia, which was a lock for Trump, got twice as many mentions as Maine, which was competitive”

17 thoughts on ““The ‘Will & Grace’ Conjecture That Won’t Die” and other stories from the blogroll”

  1. My field is communication, which has roots in both sociology and social psychology, though for many of us it’s more the latter than the former. But we’re a varied field. The “Will & Grace conjecture” is perfect for us given that it’s really a sort of cross-level question. But the data presented don’t tell me anything unless the question is whether Will & Grace made everyone, including people who never watched the show, more tolerant. That question isn’t entirely implausible given the potential for social influence, but it’s a pretty bold hypothesis.

    What I would want to know is how the attitudes of people who watched Will & Grace changed compared to similar people who didn’t watch Will & Grace. The lack of an aggregate change doesn’t mean that the show wasn’t influential, since the world is complex and there were certainly countervailing forces pushing against the increased tolerance towards gay people. So while the data presented are consistent with a “Will & Grace didn’t change attitudes” conclusion, those data are fairly ill-equipped to answer the most plausible questions with much confidence. I’ve seen a few studies in my field about Will & Grace or Ellen, but none with particularly strong designs (e.g., one-off surveys of college students).

    • To make sure I understand:

      There are basically two predictors in the discussion, with two values each: {watched Will and Grace: YES, NO} X {attitude toward homosexuality before Will & Grace came on the air: WRONG, NOT WRONG}. To get a useful analysis, you’d need to include the other major predictors of attitudes toward homosexuality: race, education, income, geographic location, and religion. The outcome is the attitude after the show’s run {WRONG, NOT WRONG}. (Ideally we’d have more than a binary choice, but there you go. Maybe we should include the three degrees of wrong in the original GSS questions.) There’s a rough sketch of what such an analysis might look like at the end of this comment.

      I share your skepticism about the “bold hypothesis” that “Will & Grace made everyone, including people who never watched the show, more tolerant”. The more plausible hypothesis is that the show did change the attitudes of many people who did watch it, but didn’t do much for anyone else. What would be your guess at the number of people who watched Will & Grace and started out disapproving of homosexuality, as a percentage of all Will & Grace watchers? Of those people, roughly what percentage do you think changed their minds by the end of the show’s run? Do you think that any Will & Grace watchers (read: a non-negligible number of them) changed their minds in the opposite direction? How do you think those things interacted with other predictors?

      I’m not a statistician, and Will & Grace was a bit before my time, so if there are already answers to some of those questions, I’d be happy to see them.
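
      For concreteness, here is a rough sketch (in Python, using statsmodels) of the kind of comparison described at the start of this comment. Everything in it is a stand-in: the data are simulated, and the column names (watched, wrong_before, college, south, attends_church) are hypothetical labels, not actual GSS variable names.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated, purely illustrative data so the snippet runs end to end.
        rng = np.random.default_rng(1)
        n = 2000
        df = pd.DataFrame({
            "watched": rng.integers(0, 2, n),       # watched Will & Grace? (hypothetical)
            "wrong_before": rng.integers(0, 2, n),  # said "wrong" before the show's run
            "college": rng.integers(0, 2, n),
            "south": rng.integers(0, 2, n),
            "attends_church": rng.integers(0, 2, n),
        })
        logit = -0.5 + 1.5 * df.wrong_before - 0.8 * df.watched + 0.6 * df.attends_church
        df["wrong_after"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        # Outcome: still says "wrong" after the show's run, adjusting for the prior
        # attitude and a few of the other predictors listed above.
        fit = smf.logit(
            "wrong_after ~ watched + wrong_before + college + south + attends_church",
            data=df,
        ).fit(disp=0)
        print(fit.params)

      The coefficient on watched would be the quantity of interest, though comparing watchers with “similar people who didn’t watch”, as the comment above puts it, is exactly where selection problems would creep in.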

      • I think to do a good job of this you need even further elaboration. The truth is, at best any TV show or event or celebrity appearance or whatever is a driving force moving people toward some attitude. Furthermore, the attitudes of one’s friends and acquaintances are also driving forces moving people toward an attitude (and there may be countervailing forces).

        The proper model then is something like an ODE for each person. Those who watch the TV show directly get pushed towards some attitude, and those who don’t watch the show get downstream effects pushing them towards the attitude by associating with people who do watch the show, or who are N degrees separated from someone watching the show. In essence, we have a diffusion of an attitude on a social lattice or something.

        Now, if you’d like to homogenize that whole social lattice, you can potentially talk about the net effect of the TV show on the overall average attitude… but it will need to be an ODE in which the effect of the TV show itself persists long after the TV show is actually being watched, as the attitude diffuses through the underlying lattice that you are homogenizing over.

        So, like cigarette smoking, or color televisions, or the smartphone, we expect an adoption curve. And for something where there is significant resistance, the adoption curve should be long and slow. Now, we already see this adoption curve starting in about 1991 or so in the GSS… whereas W&G comes on the scene in 1998. The effect of W&G could be to hasten the adoption rate, but it simply isn’t possible for it to have initiated changes that began around 1991–92, because that would be retro-causal.
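
        A toy version of that diffusion story, purely as an illustration (nothing here is fitted to the GSS; the network, the rate constants, and the 20% viewership figure are all made-up assumptions): each person’s attitude is nudged directly if they watch the show and indirectly toward the average attitude of their friends, integrated with a simple forward-Euler step.

          import numpy as np

          rng = np.random.default_rng(0)
          n = 500                                   # number of people (made up)
          watches = rng.random(n) < 0.2             # ~20% watch the show (made up)
          A = (rng.random((n, n)) < 0.01).astype(float)  # random "friendship" ties
          np.fill_diagonal(A, 0)

          k_tv, k_soc = 0.8, 0.05                   # exposure and social-influence rates (made up)
          a = np.full(n, 0.3)                       # attitude: 0 = "wrong", 1 = "not wrong"
          dt, steps = 0.1, 400

          means = []
          for _ in range(steps):
              deg = A.sum(axis=1)
              friends_mean = np.where(deg > 0, A @ a / np.maximum(deg, 1), a)  # isolated people feel no pull
              da = k_tv * watches * (1 - a) + k_soc * (friends_mean - a)
              a = np.clip(a + dt * da, 0, 1)        # forward-Euler step of the ODE
              means.append(a.mean())

          # The average keeps drifting upward long after the direct-exposure effect has
          # saturated, because the shift continues to spread through friendship ties.
          print(f"mean attitude: start {means[0]:.2f} -> end {means[-1]:.2f}")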

        • That’s an interesting idea. What is the difference between this scenario and, say, a priming study, where you have a group A exposed to some kind of priming stimulus and a group B that receives no such stimulus but later mingles with group A? Would you want to do an ODE model for that? Or is there some difference I’m missing? (Assuming that the hypothesis, that group A influences group B, was even worth testing.)

        • The effect of the priming had better persist long enough for the later mingling to potentially transmit the effect. Also, the effect had better be “transmissible”; that is, there must be a plausible mechanism by which the primed group A could apply a priming to B.

          But, I think this happens literally *all the time*. Person A is exposed to advertisement for good G and buys it, enjoys it, tells friends about it, suddenly person B wants it… This is more or less exactly how smartphones and color TV propagated their viral particles.

          So, yes, I think the right model *has* to be an ODE unless you’re looking over long enough time scales that the whole transmission process has not yet started at the beginning and is complete by the later time. Then you could just asymptotically consider a step function or something.

  2. A slightly tangential comment on something from Livingston’s blog.

    He said that respondents were asked:

    “What about sexual relations between two adults of the same Sex?
    1 Always Wrong
    2 Almost Always Wrong
    3 Sometimes Wrong
    4 Not Wrong at All”

    He also said that for his graph, he “collapsed the first three responses into a single category – “Wrong.” Besides, “Almost Wrong” and “Sometimes Wrong” combine for only about 10-15% of the total.”

    If I had been taking the poll, I would have chosen 3, “Sometimes Wrong” — because I think it’s wrong if it’s not by mutual consent; I would have chosen the same response if the question were about sexual relations between two adults of opposite sex.

    So this is an example of an inadequately thought out survey question.

  3. @Martha Smith
    Your response would have been wrong. By assuming a limiting condition that isn’t mentioned anywhere in the question, you implicitly decide that the authors wanted only people who consider sexual relations without mutual consent ‘not wrong’ to pick the fourth option. So, in your view, the fourth option is about a very tiny segment of the population that doesn’t care about consent, and yet the survey authors never indicate that this is who they’re after with that option (if they were not after them, they could just remove the fourth answer). In your view, the authors care enough about that subset to offer them a response option yet curiously never bother to point it out, and so risk other, less ‘woke’ people choosing that option because they fail to consider the limiting condition.
    I’d say that makes no sense.
    In addition, one could construct the same argument even if the authors had included ‘mutual consent’ in the question, by bringing up cases where ‘consent’ is not clear cut (e.g. the Anna Stubblefield case) and where some will feel that merely satisfying some specific form of mutual consent is not enough and the sexual relationship could still be wrong. Or what about mutually consenting adults with a massive power imbalance, e.g. a student-teacher relationship? Or the relationship R. Kelly is said to have with some young women, which some outsiders consider abusive even though apparently everyone involved says consent is mutual.
    By your reading, the authors’ failure to list all those circumstances (and many more I fail to think of just now) and to either explicitly include or exclude them means they’re valid limiting conditions, and only people who don’t care about any of this at all are expected to choose option four.
    Or alternatively, the question is about one’s attitude to homosexual sexual relations in general, without special limiting conditions.

    FWIW, here’s the part of the questionnaire preceding and following that question:
    – There’s been a lot of discussion about the way morals and attitudes about sex are changing in this country. If a man and woman have sex relations before marriage, do you think it is always wrong, almost always wrong, wrong only sometimes, or not wrong at all?
    – What if they are in their early teens, say 14 to 16 years old? In that case, do you think sex relations before marriage are always wrong, almost always wrong, wrong only sometimes, or not wrong at all?
    – What is your opinion about a married person having sexual relations with someone other than the marriage partner–is it always wrong, almost always wrong, wrong only sometimes, or not wrong at all?
    – What about sexual relations between two adults of the same sex–do you think it is always wrong, almost always wrong, wrong only sometimes, or not wrong at all?
    – Do you agree or disagree? Homosexual couples should have the right to marry one another.

    So, your theory is that the authors of the survey are interested in people who don’t care about consent in homosexual relationships but care about consent when it comes to premarital sex (or any other combination)? Otherwise why bother asking about all these specific relations, when it’s really the limiting condition that’s of interest? And why not ask about consent directly, or at the very least ask about heterosexual adults as a baseline?

    • Interesting point. When I take surveys I’m often conflicted between answering certain questions based on what I think is the intended meaning, or based on how they are actually written. So in cases like this where you have always/sometimes/never, I almost never want to choose “always” or “never”, because I can almost always think of some exception – even though I think that surveys probably don’t want or expect such strict absolutes. And clearly in this case most respondents aren’t using such strict definitions, since only 10-15% are selecting “Almost Always” or “Sometimes”. But I’m sure almost all the people saying “Not Wrong At All” would agree that it is wrong if there are consent issues, and similarly you could get most of the “Always Wrong” crowd to agree it might not be wrong in some bizarre, contrived situation, like a madman who will set off a nuclear bomb unless this same-sex couple has sexual relations.

      Seeing the preceding questions actually makes me more likely to select “Sometimes Wrong”. They start with the general question of whether sex before marriage is wrong, but then ask about the specific subset of early teens. So since I think that early teens having sex relations before marriage is almost always wrong, I must also think that sex relations in general are at least sometimes wrong. So when I see the same-sex question, I’m primed to think about what subset of it (one that I will find wrong) they will ask about next.

      In conclusion, I’m bad at taking surveys.

    • Markus: your position is what exactly? That it’s some kind of deep responsibility of the answerer to analyze the entire survey and make every effort to understand what the intent of the survey question writer is and give as accurate and correct answers as is humanly possible?

      Surveys are for the most part donated time on the part of the answerer. And answerers are frequently harassed with multiple repeat phone calls etc. EVERY responsibility rests on the Surveyor. All of it. If the surveyor doesn’t like the way people are interpreting the question, then it’s up to the surveyor to model the responses appropriately in their analysis, or re-do the survey.

      There is literally *no* responsibility that the answerer has to do *anything*, not even *answer the phone*, not even *tell the truth*.

      Martha is merely pointing out that in the presence of this question, some people will interpret it in a certain way, and that the analysis model should take that into account. The phrase “Your response would have been wrong” is literally a *meaningless* statement. There simply is not a *wrong* answer. Even an intentional lie is not “wrong” in any meaningful sense. There is no duty or responsibility or deep necessity for a person to give any particular kind of answer to any question in a telephone or similar survey. It’s simply data streaming out of a measurement instrument in the same way that an ohm meter works. If an ohm meter tells you something that doesn’t make sense, it’s *your* responsibility to get another meter and check the result.

      • Markus:

        1. In response to your question, “So, your theory is that the authors of the survey are interested in people who don’t care about consent in homosexual relationships but care about consent when it comes to premarital sex (or any other combination)?”: No, I don’t have a theory of what they are interested in.

        2. I agree with the points Daniel has made in his response above.

        3. For more discussion of (and references on) the topic of wording survey questions, see http://www.ma.utexas.edu/users/mks/statmistakes/wordingquestions.html and the links and references there.

      • Something that I thought I had made clear in my original comments (that I was referring mostly to Jay Livingston’s post, not the original GSS survey) apparently did not come across as clearly as I thought it would. The question as worded was not suitable for the use Livingston put it to. Still, the question was subject to different interpretations, so I think a criticism of the wording in the GSS surveys still stands.

  4. Daniel, your answer is quite strident; however, it’s just one view.
    A different one is that the agreement to take part in a survey includes a responsibility on the part of the answerer to make a good-faith effort to answer questions as they are intended. That’s really the same as in any other day-to-day interaction: choosing to answer a stranger is optional, but if you do, you implicitly agree to try to do a good job, not to lie, etc. By the same social-contract frame of survey taking, the surveyor is expected to phrase his or her questions as well as possible and to ask only about stuff he or she actually cares about, so as not to waste the responder’s time.
    [Tangentially: You’re saying the moral argument against lying becomes moot if one is taking a survey? Seriously? It’s not relevant to the issue at hand, but I’m curious whether you think lying is ok generally or whether you think there’s something special about surveys and what moral theory you use to base that on. Also, remind me to never ask you for directions. ;-)]

    That latter part about wasting time is why it’s common not to exhaustively list all ‘madmen with atomic bombs’ scenarios before each question, or, in the case of sexual relations, all sorts of consent and power issues that might influence the answer. You’d end up with a wall of text or speech, which for most people wastes their time unnecessarily. (And, time being limited, it likely means you can ask fewer questions.)

    I’ve taken courses in survey design, I’ve designed surveys, I’ve counselled others on survey design, and my experience is that question phrasing and answer formats are always a tradeoff between clarity, brevity, precision, and other concerns. And, generally, because of that experience, for quality surveys I tend to assume the authors thought long and hard about the phrasing and that what came out is their best-faith effort, their least-worst solution, even if I personally would have made a different decision. I also learned that my intuitions are sometimes wrong, and a pilot study sometimes reveals that some phrasing I disagreed with is understood perfectly fine (i.e. as intended by the surveyors) by all participants in the pilot.

    Back to the case at hand, two points: (1) I agree the phrasing is not good enough. For me, one either agrees homosexual sex is fine, or one doesn’t. But again, following the ‘principle of charity’, I can imagine that the authors might have wanted to keep the format of the preceding questions, and when I think about it for a little, I can see that the same conditions, i.e. degree of commitment, which might lead participants to answer ‘sometimes wrong’ to the premarital-sex question might apply equally here.
    (2) Martha’s response is ‘wrong’ in the sense of ‘she misunderstands the question’. I totally agree that people misunderstanding the question are part of the phenomenon under study and as such part of the world science is trying to learn about. Analytically, this is most often modelled as participants responding to a different question than the one the others were answering.
    Martha is certainly free to interpret the question the way she does, but when she then says ‘So this is an example of an inadequately thought out survey question,’ I believe we need to ask whether her interpretation is a reasonable one.
    And on that latter point, I say it isn’t. As I argued above, her interpretation is a piece of half-assed sophistry. It goes far enough to come up with some limiting condition, but fails to then think through what the assumption that this limiting condition is valid means for the rest of the response options. In this case, it means that only people who approve of rape should choose option 4.

    @ Martha
    1) I think you do. What is your introduction of ‘mutual consent’, if not a theory about what was meant by the question?
    2) see above
    3) A good, if basic, introduction.

    • Markus:

      Martha’s perspective is perfectly reasonable. Most surveys would phrase the question with such a qualifier, because without one it is impossible to determine from the data at hand what number or percent of the people who answered ‘Sometimes Wrong’ did so because the qualifier was not there. You can argue that the surveyors thought this through, but, also having worked on a great many survey projects, I can unequivocally say that thinking through all of the possible interpretations of a question is not nearly as common as you claim.

    • I think you’re being unfair to Martha. The survey designers themselves introduced a limiting condition by first asking the general premarital-sex question and then asking about a subset of premarital sex that most people would find more objectionable. That invites the assumption that you should consider special cases. I’m not saying they need to list every ‘madman with an atomic bomb’ scenario, but adding some sort of “all else being equal” or “in most situations” qualifier would be sufficient. If even then someone was bringing in scenarios from outside the question, then you could accuse them of “half-assed sophistry”.

    • My impression is that survey designers are pretty clueless about the effect their industry can have on others. If a person gets multiple calls a day, every day, for weeks from survey organizations – which is not at all out of the question during certain times, such as election season – they might reasonably interpret all this as an attempt to extort something valuable out of them (which of course it is). In such a situation, lying to get them to stop calling is more or less in the same category as punching a mugger: a form of self-defense.

      In order for any contract theory to be involved, both parties need to get something of value out of the exchange. In this framing, those who care about the survey might get value out of later reading the results, so they shouldn’t lie. But those who don’t care about the survey issue don’t get anything of value. They are not bound by any theory of mutual exchange; they are bound by the theory of how to deal with harassment. If their method is to lie and hope the harassment stops, that is to be expected.

      One might argue that government surveys are not in the same category, as professional survey organizations are, after all, actually getting money to do this; but government surveys are for the purpose of collecting information that is, or should be, relevant to the efficient and fair operation of the government, a public good that all benefit from to some extent. In such a scenario I think you can argue for a social contract and for a moral imperative not to lie.

      • Daniel:

        To expand slightly on your response to Markus about lying by survey respondents: the issue of whether or not someone should be truthful on a survey is beside the point, as the only relevant issue for the surveyor is whether the data accurately reflect the intended construct. Arguments about social contracts or value exchange are not very compelling models of survey response, and methods that assume them are not likely to produce very good models of the data and will result in biased inferences. If the data do not accurately reflect the construct, and it is important that they should, the methods should be improved.
