Why I’m still not persuaded by the claim that subliminal smiley-faces can have big effects on political attitudes

We had a discussion last month on the sister blog regarding the effects of subliminal messages on political attitudes.  It started with a Larry Bartels post entitled “Here’s how a cartoon smiley face punched a big hole in democratic theory,” with the subtitle, “Fleeting exposure to ‘irrelevant stimuli’ powerfully shapes our assessments of policy arguments,” discussing the results of an experiment conducted a few years ago and recently published by Cengiz Erisen, Milton Lodge and Charles Taber. Larry wrote:

What were these powerful “irrelevant stimuli” that were outweighing the impact of subjects’ prior policy views? Before seeing each policy statement, each subject was subliminally exposed (for 39 milliseconds — well below the threshold of conscious awareness) to one of three images: a smiling cartoon face, a frowning cartoon face, or a neutral cartoon face. . . . the subliminal cartoon faces substantially altered their assessments of the policy statements . . .

I followed up with a post expressing some skepticism:

It’s clear that when the students [the participants in the experiment] were exposed to positive priming, they expressed more positive thoughts . . . But I don’t see how they make the leap to their next statement, that these cartoon faces “significantly and consistently altered [students’] thoughts and considerations on a political issue.” I don’t see a change in the number of positive and negative expressions as equivalent to a change in political attitudes or considerations.

I wrote:

Unfortunately they don’t give the data or any clear summary of the data from experiment No. 2, so I can’t evaluate it. I respect Larry Bartels, and I see that he characterized the results as the “subliminal cartoon faces substantially altered their assessments of the policy statements — and the resulting negative and positive thoughts produced substantial changes in policy attitudes.” But based on the evidence given in the paper, I can’t evaluate this claim.  I’m not saying it’s wrong. I’m just saying that I can’t express judgment on it, given the information provided.

Larry then followed up with a post saying that further information was in chapter 3 of Erisen’s Ph.D. dissertation, available online here.

And Erisen sent along a note which I said I would post. Erisen’s note is here:

As a close follower of the Monkey Cage, it is a pleasure to see some interest in affect, unconscious stimuli, and perceived (or registered) but unappreciated influences. Accordingly, I thought it was the right time for me to contribute to the discussion.

First, I would like to begin by clarifying some conceptual issues with respect to affective priming. Affective priming is not subliminal advertising, nor is it a subliminal message. Subliminal ads (or messages) were used back in the 1970s with questionable methods, and current priming studies rarely refer to those approaches.

Second, it is quite normal to be skeptical because no earlier research has attempted to address these kinds of issues in political science. When they first hear about affective influences, people may naturally consider the consequences for measuring political attitudes and political preferences. These conclusions may be especially meaningful for democratic theory, as mentioned by Larry Bartels in an earlier post.

But, fear not, this is not a stand-alone research study. Rather, it is part of an overall research program (Lodge and Taber, 2013) and there are various studies on unconscious stimuli and contextual effects. We name these “perceived but unappreciated effects” in our paper. In addition, we cite some other work on contextual cues (Berger et al., 2008), facial attractiveness (Todorov and Uleman, 2004), the “RATS” ad (Weinberger and Westen, 2008), the Willie Horton ad (Mendelberg, 2001), upbeat music or threatening images in political ads (Brader, 2006), which all provide examples of priming. There is a great deal of research in social psychology that offers other relevant examples of the social or political effects of affective primes.

Third, with respect to the outcomes, I would like to refer the reader to our path analyses (provided in the paper and in The Rationalizing Voter) that show the effects of affect-triggered thoughts on policy preferences (see below). What can be inferred from these results? We can say that, controlling for prior attitudes, affective primes not only directly affected policy preferences but also indirectly affected preferences through affect-evoked thoughts. The effects on political attitudes and preferences are significant, as we discuss in greater detail in the paper.

[Figure: path diagram from Erisen, Lodge, and Taber]

Fourth, these results were consistent across the six experiments that I conducted for my dissertation. The priming procedure was about the same in all of those studies, and the patterns across the different dependent variables were quite similar.

Finally, we do not argue that voters cannot make decisions based on “enlightened preferences.” As we repeatedly state in the paper, affective cues color attitudes and preferences, but this does not mean that voters’ decisions are necessarily irrational.

Both Bartels and Erisen posted path diagrams in support of their argument, so perhaps I should clarify that I’ve never understood these path diagrams. If an intervention has an effect on political attitudes, I’d like to see a comparison of the political attitudes with and without the intervention. No amount of path diagrams will convince me until I see the direct comparison. You could argue with some justification that my ignorance in this area is unfortunate, but you should also realize that there are a lot of people like me who don’t understand those graphs—and I suspect that many of the people who do like and understand path diagrams would also like to see the direct comparisons. So, purely from the perspective of communication, I think it makes sense to connect the dots and not just show a big model without the intermediate steps. Otherwise you’re essentially asking the reader to take your claims on faith.

Again, I’m not saying that Erisen is wrong in his claims, just that the evidence he’s shown me is too abstract to convince me. I realize that he knows a lot more about his experiment and his data than I do, and I’m pretty sure that he is much better informed on this literature than I am, so I respect that he feels he can draw certain strong conclusions from his data. But, for me, I have to go on the information that is available to me.

P.S. In his post, Larry also refers to the study by Andrew Healy, Neil Malhotra, and Cecilia Hyunjung Mo on college football games and election outcomes. That was an interesting study but, as I wrote when it came out a couple of years ago, I think its implications were much smaller than was claimed at the time in media reports. Yes, people can be affected by irrelevant stimuli related to mood, but the magnitudes of such effects matter.

37 Comments

  1. A minor but important point: Just because something was presented briefly does not mean that it was subliminal. This claim is just wrong: “for 39 milliseconds — well below the threshold of conscious awareness.” People can consciously perceive quite a bit in a 39 ms presentation, especially if it isn’t immediately masked (e.g., they can tell what type of scene photograph they’re viewing).

    Demonstrating that something is subliminal, below the threshold for conscious awareness, requires actually measuring that threshold (ideally, using signal detection procedures). Few of these sorts of studies have done anything close to that. Most just assume that brief presentations are necessarily outside of awareness. Or, they just ask people after the study if they had noticed anything, a procedure that is wholly inadequate for measuring subliminal effects.
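    To make the point about measuring the threshold concrete, here is a minimal sketch of the standard signal-detection check this comment describes: run a separate prime-detection block and compute d′ from hit and false-alarm rates. The correction used and all the numbers below are illustrative assumptions, not anything from the study under discussion.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a yes/no prime-detection block.

    Applies a log-linear correction (add 0.5 to each cell) so that
    perfect or zero rates don't send the z-transform to infinity.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical data: 60 prime-present and 60 prime-absent trials.
# A d' near 0 indicates chance-level detection, the usual criterion
# for calling a prime "subliminal."
print(d_prime(hits=33, misses=27, false_alarms=29, correct_rejections=31))
```

    A post-experiment “did you notice anything?” question gives nothing like this; it cannot distinguish a failure to perceive from a failure to report.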

    Claims about unconscious perception have a long history, with decades of method and measurement improvements. Claims of big effects from subtle manipulations (e.g., subliminal persuasion or effects on strongly held beliefs/attitudes) tend to fly in the face of that history. Within the subliminal perception world, even proponents of these sorts of effects agree that the effects tend to be small, difficult to measure reliably, and short lived.

    • 39 ms is basically one frame of a video shown at 24 frames per second. If you show me a video of people walking around in the snow and then have one frame of that video be totally black, I guarantee you I *will* notice that frame. So, I fully agree with Daniel Simons that the idea that this 39 ms presentation is below any threshold of perception is highly suspect.

      Now, if it had been 39 microseconds, I probably wouldn’t have blinked an eye ;-)

      • I used to show movies at the student union when I was an undergrad. We actually rented 16mm films and showed them, which requires the projectionist (that’s me) to watch for a special mark in the upper right corner of the frame to know when to start the next film (we’d have two projectors, so we could switch from one roll of film to the other “seamlessly”). These are called “cue marks” (you can read about them on Wikipedia).

        That blinking mark appears in 4 frames of 16mm film (24 frames per second), so it is on screen for about four times as long as the 39 ms prime, yet once you know it’s there, you can’t miss it; it’s completely and totally obvious. Although 1 frame is probably harder to see, I don’t doubt that I would see it there as well if I were watching carefully and didn’t happen to be blinking when it showed up.

    • Patrick K says:

      I just wanted to note that Erisen et al. used masked primes in the experiment. Below is the section describing the priming procedure in their “Affective Contagion” paper (p.193):

      “On each trial, a grid was presented on the screen for a full second to focus the participant’s attention on the exact location of the upcoming prime. Then a forward mask was presented for 13 ms to ensure that all participants start with no meaningful image in visual sensory memory. The prime (smiley, frowny, or neutral cartoon face) was then presented for 39 ms, which is well below the threshold for subjective conscious awareness but above the objective threshold for sensory and brain reaction. A backward mask followed for 13 ms to wipe clean visual sensory memory and ensure that the image of the prime remained below the threshold of subjective conscious awareness. After the priming procedure, the issue prompt appeared in the same screen location with a thought-listing box below.”

  2. Tom M says:

    I haven’t read the paper, but am just commenting on the path model (Fig 4). Assuming this is a linear model and you can just multiply path coefficients, there is a direct effect of -1.1 (in whatever units the outcome is measured in), and an indirect effect (mediated through both negative and positive thoughts) of +1.1, for a total effect of approximately zero. That is, the model is consistent with the claim that the affective prime influences positive and negative thoughts, and that pos and neg thoughts are associated with support for the target statement, but NOT that the prime affects support for the statement, because there is some other presumed mechanism by which the prime has an equal and opposite effect on the outcome. Am I reading this correctly?

    Also, I’d second Daniel Simons’ comment that unless the experimenters used some sort of masking, a 39 msec visual presentation is very likely to be conscious (it depends on brightness, contrast, visual angle, etc., but 39 msec is pretty long). Just asking participants “did you see it” doesn’t cut it. As I say, I haven’t read the paper and maybe they did use backward masking and psychophysical measurements. Or maybe it’s not that important whether the stimuli were conscious, but it does sort of change how you think about the study.

    • David J. Harris says:

      That’s how I read the diagram as well, although it’s possible that I’m misunderstanding it. The positive, indirect effects should be about 3.76*.18 + 2.68*.17, which is just enough to cancel out the direct negative effect.
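      For readers following along, the arithmetic in this subthread can be checked directly from the coefficients quoted above (direct effect −1.1; prime-to-thoughts paths 3.76 and 2.68; thoughts-to-support paths .18 and .17):

```python
# Path coefficients as quoted from Figure 4 in the comments above.
direct = -1.1                           # prime -> support, direct path
indirect = 3.76 * 0.18 + 2.68 * 0.17    # prime -> thoughts -> support

total = direct + indirect
print(f"indirect = {indirect:.2f}, total = {total:.2f}")  # indirect = 1.13, total = 0.03
```

      With these numbers the indirect effect (+1.13) almost exactly cancels the direct effect (−1.1), leaving a total effect near zero, which is the point being made here.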

      Even if we’re misunderstanding it, I’d like to see the authors explain the existence of the direct negative effect, which seems not to be consistent with their proposed explanation.

      • David J. Harris says:

        Found their discussion of it in the “Affective Contagion” paper. They seem to acknowledge that the effect of their priming on people’s actual policy evaluations is nil. They attribute this to some kind of convoluted mood effect–even though they find no evidence for it–and still claim that their results will somehow affect policy decisions overall. Amazing.

        >There is an interesting anomaly, however, in the strong and significant negative direct effect of prime on posterior support for anti-illegal immigration policies, which is nearly exactly the same size but in the opposite direction of the indirect effect. That is, we find that subliminal exposure to smiley (frowny) faces while generating policy thoughts significantly reduces (increases) subsequent support for building walls or deporting illegal immigrants, measured 30-45 minutes later. Given what is known in cognitive psychology about the fleeting duration of concept activation (Barsalou, 1992; Eysenck & Keane, 2010; Neely, 1977), this result may be interpreted as a mood effect. Participants in the positive condition are put in a positive frame of mind by subliminal exposure to 42 smiling faces, while those in the negative condition feel more diffuse negativity, and this occurs in much the same way as the sunny day effect on reported life satisfaction. The positive mood makes subjects less likely to support “nasty” immigration policies, while those in a negative mood are more inclined to punish illegal immigrants. Unfortunately, we cannot put this interpretation to direct test in these experiments since we do not have any measures of posttreatment mood. We (Erisen, 2009; Erisen, Lodge, & Taber, 2009) found small but significant effects on measures of posttreatment mood in other experiments that used smiling and frowning cartoon faces as subliminal primes, suggesting that our primes probably did influence participant mood. Interestingly, we find this mood effect only for participants below the median in political knowledge, with more sophisticated participants showing no significant direct effect of prime on evaluations of anti-immigration policies.

        >We do not find a mood effect for pro-immigration policy evaluations, even among less sophisticated respondents. Figure 5 shows exactly the same overall pattern of results that we found for anti-immigration policies. Negative thoughts about a policy reduce support for that policy; positive thoughts about a policy increase support. Since these thoughts are driven by prior attitudes and even more strongly by the incidental primes, we find significant and sizable indirect effects of prior and prime on posterior policy evaluations. The indirect effect of prior attitudes on posterior policy evaluations is .25, which combines with the direct causal path for a total effect of 1.06. The indirect effect of prime on posterior policy evaluations is .77. Taken together, the analyses reported in Figures 4 and 5 strongly support our affective mediation hypothesis.

        • Fernando says:

          All this goes back to ESP. If the claim is a priori unlikely, you’d better make doubly sure your experimental procedure is flawless, including pre-registration of the theory, hypotheses, test statistics, etc.

          I did not read the article, but the quote above suggests ex post theorizing.

          PS I like DAGs for non-parametric identification. I don’t like linear path diagrams. Results and QOIs are better shown in a separate graph contrasting both conditions.

  3. Dean Eckles says:

    On the diagrams: I think it is important to distinguish between these kinds of diagrams and their associated estimation methods based on linear Gaussian models (long popular in the social sciences), on the one hand, and the more general utility of drawing and reasoning with causal DAGs, on the other.

    Many statisticians who would advocate (at least sometimes) using DAGs in causal inference would also be happy to criticize the continued practice of the “multiple regression with pictures” (as David Rogosa would call it http://web.stanford.edu/~rag/stat209/exs2014.html) to supposedly settle difficult causal questions.

    It is important to remember that these kinds of mediation analyses generally require very strong sequential ignorability assumptions to be informative. It is sad that Baron & Kenny (1986) and other simple mediation analysis methods have had so much influence on social psychology and related areas.

    It is too bad that experiment 2 in the paper doesn’t involve a simple difference in means (or a difference in means adjusted for prior attitudes). It seems from the dissertation that most of these are not statistically significant at conventional standards. The justification provided is that MacKinnon (2008) and Shrout & Bolger (2002) say that you don’t need this to do mediation analysis. (This might in some specific senses be true, but it is also worrying if you want to make claims about these stimuli doing important things.)
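    The “difference in means adjusted for prior attitudes” mentioned here is just a one-coefficient regression. A minimal sketch on simulated data (all numbers invented for illustration; nothing here comes from the actual experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
prime = rng.integers(0, 2, size=n)      # 0 = neutral prime, 1 = positive prime
prior = rng.normal(0.0, 1.0, size=n)    # prior attitude (standardized)
# Simulated outcome: prior attitude matters a lot, prime adds a small 0.2 shift.
support = 0.8 * prior + 0.2 * prime + rng.normal(0.0, 1.0, size=n)

# Unadjusted difference in means between prime conditions.
unadjusted = support[prime == 1].mean() - support[prime == 0].mean()

# Adjusted difference: regress support on an intercept, prime, and prior;
# the coefficient on prime is the mean difference holding prior fixed.
X = np.column_stack([np.ones(n), prime, prior])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print(f"unadjusted: {unadjusted:.2f}, adjusted: {beta[1]:.2f}")
```

    Reporting this one number (with its uncertainty) alongside the path model would answer the direct-comparison question immediately.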

    • Keith O'Rourke says:

      Tyler Vanderweele gave some nice talks on mediation analyses at the last two JSMs and has some resources for anyone interested.

      http://www.hsph.harvard.edu/tyler-vanderweele/tools-and-tutorials/

    • JSB says:

      I am not sure about your distinction, as neither DAGs nor path analysis provides a magic bullet for determining causation. DAGs and path analysis both attempt to use “pictures” to represent the underlying equations. Both also require an explicit statement of assumptions to be truly useful, and presentation of the data in a form other than the DAG or path model. DAGs may address some technical issues associated with the technique, but as discussions here with Pearl have shown, DAGs have their own issues on the road to causal bliss. Would a DAG really have saved the authors here? I tend to think not.

      • judea pearl says:

        All,
        When Galileo brought the telescope to Italy, the church forbade its use. Why? Because the devil can play tricks on you and make you see all kinds of phenomena (e.g., sun spots) which do not exist. Andrew’s rejection of diagrams is more subtle. He does not fear the devil behind the telescope, but he finds the telescope useless because it does not square the circle. In his words: “I’ve never understood these path diagrams. If an intervention has an effect on political attitudes, I’d like to see a comparison of the political attitudes with and without the intervention. No amount of path diagrams will convince me until I see the direct comparison.”
        Translated: no causal inference is valid unless it is confirmed by an INTERVENTIONAL experiment. Thus, nothing in causal inference is worth studying with the exception of Fisher and his randomized experiment (1935), and all the works of Cochran and Rubin and Rosenbaum which extend causal analysis to observational studies should be scrapped offhand as useless. Why? Because these works do not rely on comparisons with and without interventions. Instead, they rely on scientific causal assumptions. Deriving causal conclusions without those causal assumptions is a mathematical impossibility, much like squaring the circle.
        What Andrew does not seem to appreciate is that diagrams are not circle-squarers; they were devised to do the job of a telescope, namely, to communicate your causal assumptions clearly and unambiguously and, more importantly, to derive the logical consequences of those assumptions swiftly and reliably.

        Andrew is absolutely right in stating that “there are a lot of people like me who don’t understand those graphs.” These people belong to two camps: (1) those who would not touch a telescope because the devil is behind it, and (2) those who find a telescope useless because it does not square the circle.

        I was glad to see that several discussants in this forum do understand that telescopes do have some use, that causal inference has a place in observational studies, and that the works of Cochran, Rubin, and Rosenbaum can be amplified tenfold with the use of telescopes (read: diagrams).
        For a glimpse, see http://bayes.cs.ucla.edu/csl_papers.html

        • Andrew says:

          Judea:

          To continue your analogy, you’re on the wrong planet here! In this particular example, Erisen did perform an intervention, he did perform an experiment, and I just want to assess the claim that the intervention had an effect on political attitudes. In the case of a controlled experiment, it’s hardly “squaring the circle” to ask for a direct comparison of treated and control outcomes.

          • judea pearl says:

            Andrew,
            Great!
            So your two statements, (1) “I have never understood these path diagrams” and (2) “there are a lot of people like me who don’t understand those graphs,” only pertain to this specific example, where there were interventions. But in general, we can take it that (1) you do understand these path diagrams and that (2) there are not a lot of people who don’t understand those graphs.
            This is a relief.
            BTW, diagrams are also indispensable in interventional studies, for example, to articulate assumptions about direct and indirect effects. But, since you endorse it, there is no point elaborating on how and why.

            • Andrew says:

              Judea:

              I appreciate that you comment here but I wish you would just take my words literally. My statements (1) and (2) are an accurate description of my understanding and you can feel free to be bothered by them. But this is probably not the best example for you to fight over. I never said the authors of this paper should not use path analysis, I just said that (a) I’m not gonna buy their claim until I see the direct comparison, and (b) I’m not the only person with this attitude. Indeed, according to some of the commenters here who do seem to understand path analysis, the direct comparison in the data is close to zero, which suggests that the claim of an effect is false. I have never said that people shouldn’t do path analysis, I’m just suggesting that authors of this sort of work use multiple channels of communication.

              I’d say the same thing regarding other methods that are not so well understood. For example, when someone makes a claim based on a fitted regression model, I recommend displaying the direct comparison. Regression models are complicated and can obscure patterns in the data. I run a lot of regressions but I also like to see the direct pattern in the data (and, when there’s selection, I like to demonstrate how that pattern arose). I feel the same way about path analysis (actually even more so, given that, as I stated, I don’t really understand it). It’s neither an endorsement nor an anti-endorsement of a method to want to connect it to simpler comparisons in the data.

              • Dean Eckles says:

                Yes, this doesn’t seem like the best example for Pearl’s argument. The first quantity of interest is the average effect of these primes on political attitudes. Then maybe decomposing that into effects via different mechanisms could be interesting (perhaps using graphs, though the required assumptions will be very strong).

              • judea pearl says:

                Andrew,
                Glad we are in agreement.
                I just wanted to make sure no reader gets the impression that
                you think a forward-looking statistician can do causal inference in the 21st century without understanding graphical models. It was important for me to eliminate such potential misunderstanding and, now, that it is eliminated, we can go back to science.
                Judea

              • Andrew says:

                Judea:

                Huh? I never said that. Your comments here are just getting goofy. I don’t think you’re trolling, exactly, but you do seem to be misreading what I am saying.

                What I wrote is that I have never said that people shouldn’t do path analysis. I offer neither an endorsement nor an anti-endorsement of a method that I do not understand. Contrary to what you seem to be saying, I do think it is possible for a forward-looking statistician to do causal inference in the 21st century without understanding graphical models. I also accept that these models are helpful to some forward-looking statisticians. There are many roads to Rome and I will just about never say that any particular method is necessary.

            • Anonymous says:

              Wow, your comment reads like a classic internet troll. Did you really mean to put words in Andrew’s mouth and then dismiss his valid point?

              I take Andrew’s point to be:
              1) We have an intervention, yet we have no clear simple communication of what the data shows about that intervention and its ultimate effect on the measurement of interest. In fact, people with more understanding of how these diagrams work have in these comments pointed out that it appears from the diagram that there is NO EFFECT on the outcome of interest (see David J Harris above).

              2) If it’s true that the article states: “Fleeting exposure to ‘irrelevant stimuli’ powerfully shapes our assessments of policy arguments” and yet the diagram implies no such thing, I think something has gone wrong in communication. I don’t take any of Andrew’s points to be complaints about the use of the diagram in their model, but rather complaints about how the research results are presented to the reader, and those complaints appear to be valid in this case.

              • judea pearl says:

                Andrew,
                Alas, my joy at having reached an agreement was short lived.

                So, you really think “it is possible for a forward-looking statistician to do causal inference in the 21st century without understanding graphical models.”
                This is precisely the impression that I hoped readers would not get from your comments. And what I hoped to tell readers is that the correct analogy is not “many roads to Rome” but rather doing astronomy “with or without a telescope.”
                Here we are, back to Galileo.
                I just wish to tell enlightened readers to be very, very wary of the “many roads to Rome” attitude, because the “no-telescope-needed” astronomers of Galileo’s time were not exactly “forward looking.”

                I can elaborate more on the validity of the telescope analogy, with logic, theorems, and examples. But, at this point, it suffices to keep in mind that the “many roads to Rome” attitude is not as inclusive as it sounds; it is like telling high-school kids: “You don’t need to learn multiplication; some people use it and some don’t.”

    • Andrew says:

      Dean:

      Yes, and remember the original subhead, “Fleeting exposure to ‘irrelevant stimuli’ powerfully shapes our assessments of policy arguments,” which does not seem to be supported by the data.

  4. Cengiz Erisen says:

    Hi,

    I hope this comment will be informative on the paper:
    1. If you really don’t believe in experiments, I am not sure how I can explain anything. But let me try.
    2. If you are interested in this topic and Andrew’s comments, I first recommend reading the article to get a better sense of the experimental procedure and the statistical analysis.
    3. The priming procedure is clearly explained in the paper. If anyone can list five publications indicating that 39 ms primes are not subliminal, I will then reply with more evidence.
    4. With respect to the “significant effects canceling each other out,” I would just like to ask whether those who wrote those comments really know how the variable “Affective Prime” and the DV are coded. If this is clear, then I would be happy to talk about the direct and indirect effects.
    5. If anyone has a genuine interest in any part of the paper, I would be more than happy to provide detailed explanations or statistical results, if requested.

    Well, after all I might be on the wrong planet! Nonetheless, shouldn’t we still be able to understand science and causal association?

    • Andrew says:

      Cengiz:

      Thanks for your offer. And don’t worry, I strongly doubt there are any readers of this blog who “really don’t believe in experiments.” We just like to see direct comparisons where possible.

    • Rahul says:

      Can you post a link to the pdf? I’m finding it hard to get an ungated copy. Thanks!

    • David J. Harris says:

      Okay, I’ll bite. I believe in experiments and have read the paper. I’m not sure what you’re trying to get at with how the variables are coded, however.

      I think the crux of the issue is this:

      The other commenters in this thread and I interpret your path diagram as suggesting that “Support for Anti-illegal immigration sentiments” cannot be affected reliably by your priming procedure. The two paragraphs I quoted above seem to confirm this interpretation.

      Are we wrong?

      • Cengiz Erisen says:

        I think it depends on what you mean by the priming procedure. If you mean the procedure in which an affective prime changes the nature of the thoughts listed, then this would be only the initial effect of the prime. But if you mean the direct effect of the prime on policy preferences, I think you will need to take into account the prime-evoked thoughts, because we did not directly prime policy preferences. We do not test or hypothesize a direct effect of the prime on policy preferences or political attitudes. That is not what the paper is about. We test the contagious effect of the prime on thoughts, which in turn change preferences/attitudes. When you take into account the prime-evoked thoughts, you can see that they do change one’s leaning on a policy. If your only evidence is that, out of a number of models, one shows an anomaly on a non-hypothesized direct effect of the prime, then I guess that is what you will be able to rely on.

        It is unnecessary to list all of the studies that show and repeatedly test the direct effect of affective primes on a number of political and non-political subjects.

        • Phil says:

          I like experiments. I am convinced that priming can affect people’s views. I have skimmed the paper but not read it in detail.

          In presenting the results of the first experiment in the paper, you show a table: Number of positive and negative thoughts about illegal immigrants, for supporters and opponents of illegal immigration, by priming category. (And similarly for energy policy). So when it comes to _thoughts_ you are doing the sort of direct comparison Andrew is asking for. But for Experiment 2, which looks at _policies_, you don’t do that. There’s no table that shows that X% of people who received a pro-immigration prime supported such-and-such a policy, whereas Z% of people who received an anti-immigration prime supported that policy. Is that because no such table is necessary because it’s possible to derive those percentages from the results that you do present?

          If I understand Andrew’s sentiment correctly, he’s saying that if you’re going to claim that priming changes the policies that people support, the most convincing way to do that — or at least _a_ convincing way to do that — would be to show that people who received one type of prime are supportive of different policies than people who received a different type of prime. If, as you say, “the effects on political attitudes and preferences are significant” then presumably they are easy to see in a direct comparison like that.

          Perhaps this discussion can move away from whether a direct comparison is strictly necessary in this case — Andrew says yes, Erisen and Pearl say no — to whether it is desirable. It seems to me that a quantitative comparison could be done in a small table, or in perhaps three lines of text. So why not do it?

          • Cengiz Erisen says:

            Sure, below is Table 2, which we had to exclude due to space limits for the article. It was included in all the previous versions of the paper presented on different occasions.

            Sorry for the format; I couldn’t paste it as a table or as a picture.

            Table 2: Positive and Negative Thoughts in Response to Pro- and Anti-Illegal Immigration Policies in Study 2, by Priming Condition and Prior Attitude

            Pro-Illegal Immigration Policy Prompts

            Supporters of Illegal Immigration
                                      Positive Thoughts   Negative Thoughts
                                      M      SD           M      SD
            Positive Primes (N=24)    7.83   3.32         1.88   1.54
            Neutral Primes  (N=21)    3.95   2.46         2.86   1.93
            Negative Primes (N=19)    3.53   2.20         5.11   2.28

            Opponents of Illegal Immigration
                                      Positive Thoughts   Negative Thoughts
                                      M      SD           M      SD
            Positive Primes (N=13)    5.31   4.50         3.77   2.17
            Neutral Primes  (N=15)    3.27   2.71         3.60   2.72
            Negative Primes (N=15)    2.47   2.29         5.60   3.20

            Anti-Illegal Immigration Policy Prompts

            Supporters of Illegal Immigration
                                      Positive Thoughts   Negative Thoughts
                                      M      SD           M      SD
            Positive Primes (N=24)    4.96   2.94         4.75   2.83
            Neutral Primes  (N=21)    2.38   1.72         5.62   3.19
            Negative Primes (N=19)    1.26   1.15         8.89   3.65

            Opponents of Illegal Immigration
                                      Positive Thoughts   Negative Thoughts
                                      M      SD           M      SD
            Positive Primes (N=13)    6.31   2.53         4.23   3.19
            Neutral Primes  (N=15)    3.40   1.84         3.67   2.53
            Negative Primes (N=15)    2.40   1.30         5.27   2.81
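            For a concrete sense of these gaps, the kind of direct contrast Phil asks about can be computed from the summary statistics above. A minimal Python sketch (standard library only), using the first panel of the table — positive vs. negative primes, positive thoughts among supporters:

```python
import math

# Summary statistics from the first panel of Table 2 above:
# positive thoughts among supporters of illegal immigration,
# pro-illegal-immigration policy prompts.
m_pos, s_pos, n_pos = 7.83, 3.32, 24   # positive primes
m_neg, s_neg, n_neg = 3.53, 2.20, 19   # negative primes

# Welch's t statistic for the positive-vs-negative prime contrast
# (unequal variances, computed from summary statistics alone).
se = math.sqrt(s_pos**2 / n_pos + s_neg**2 / n_neg)
t = (m_pos - m_neg) / se
print(round(t, 2))  # ≈ 5.09
```

            Note that this is a contrast on thoughts, not on policy attitudes, so by itself it bears on the Experiment 1 kind of claim rather than on the question about attitudes.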

            • Andrew says:

              Cengiz:

              No, this doesn’t answer Phil’s question. He (and I) were asking about evidence of effects on issue attitudes. Your table shows effects on positive and negative thoughts.

              • Dean Eckles says:

                The answer is that those effects are not significant at conventional levels in Experiment 2. From ch. 3 (pages 89-91) of Cengiz Erisen’s dissertation (from https://dspace.sunyconnect.suny.edu/handle/1951/52338) we have:

                Illegal Immigration: “In the first step of the mediation model a simple regression shows the effect of affective prime on the attitude (beta=.34; p < .07). Although not hypothesized, this confirms the direct influence of the affective prime on the illegal immigration attitude.”

                Energy Security: “As before, the first step of the mediation model ought to present the effect of the prime on one’s attitude. In this mediation model, however, the affective prime does not change energy security attitude directly (beta=-.10; p > .10). Yet, as discussed before, the first step of mediation analysis is not required to establish the model (Shrout & Bolger 2002; MacKinnon 2008).”

                I don’t think Judea Pearl would want to defend Erisen et al.’s use of “direct effect” anyway. What we have here is no substantial evidence of a total effect of the primes on policy attitudes (i.e., the intent-to-treat effect). Nonetheless, the machinery of Baron & Kenny-style mediation analysis is still applied.

                I believe in randomized experiments, but also that one of the primary analyses of an experiment should be simple contrasts between conditions or levels of factors, perhaps with some adjustment for pre-treatment covariates expected to increase precision.
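                The “simple contrasts between conditions” analysis amounts to a few lines of code. A sketch below, where the data are simulated purely for illustration — the sample size, scale, and effect size are invented and are not from the paper:

```python
import math
import random
import statistics

random.seed(0)

# Simulated experiment (illustrative only): 60 subjects randomized to
# a negative (0) or positive (1) prime; a 1-7 attitude scale with a
# built-in treatment effect of 0.5 -- all of these numbers are made up.
n = 60
prime = [random.randint(0, 1) for _ in range(n)]
attitude = [4 + 0.5 * p + random.gauss(0, 1.5) for p in prime]

treated = [a for a, p in zip(attitude, prime) if p == 1]
control = [a for a, p in zip(attitude, prime) if p == 0]

# The total (intent-to-treat) effect is just the difference in mean
# attitude between the randomized conditions -- no mediation machinery.
diff = statistics.mean(treated) - statistics.mean(control)

# Welch-style standard error of that difference (unequal variances).
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))

print(f"ITT estimate: {diff:.2f} (SE {se:.2f})")
```

                The point of leading with this contrast is that it is the quantity randomization actually identifies; mediation models can then be layered on top, not substituted for it.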

    • Fernando says:

      I believe in experiments. I also believe that experiments are no panacea.

      As a consumer of research I can read an experimental paper showing some effect and conclude “I don’t believe it.”

      This is a completely legitimate and scientific reaction, assuming you grant that Bayesian updating is scientific.

      I elaborate on these points here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2496670

      PS This is not a comment about your paper, which I have not read, but about the implicit idea in your comment that just because something is an experimental result I must believe it. That is wrong.

      • Fernando says:

        PPS What matters is not so much that it is an experiment as the supporting evidence about the procedures used. If I have to choose between a theoretically motivated, pre-registered observational study, and an experiment with no procedural controls whatsoever (e.g., controls that limit researcher degrees of freedom), where the theory may have been formulated ex post, then I’ll choose the observational study as the better evidence.

        Randomization removes confounding from a common cause, something an observational study cannot do, but there are a lot of other things that can go wrong with experiments. Indeed, the number of implicit exclusion restrictions being imposed is in fact quite large.

        Once again, this is not a comment on your specific experiment which I have not read.

  5. D.O. says:

    Look, if irrelevant subliminal primes change people’s thoughts without changing their attitudes, that’s a great result! People do change their thoughts for very fleeting reasons, but do not allow those thoughts to interfere with their better judgment. Thomas Jefferson would have loved us.

  6. […] still stifling research in the 2nd decade of our century, as expressed by Andrew Gelman last month (link): “I do think it is possible for a forward-looking statistician to do causal inference in the […]
