The blogger known as Neuroskeptic writes:
Can the thought of money make people more conservative?
The idea that mere reminders of money can influence people’s attitudes and behaviors is a major claim within the field of social priming – the study of how our behavior is unconsciously influenced by seemingly innocuous stimuli. However, social priming has lately become controversial, with many high-profile failures to replicate the reported effects.
Now, psychologists Doug Rohrer, Hal Pashler, and Christine Harris have joined the skeptical fray, in a paper soon to be published in the Journal of Experimental Psychology: General (JEPG).
Rohrer et al. report zero evidence for money-priming effects across four large experiments. They conclude that “Although replication failures should be interpreted with caution, the sheer number of so many high-powered replication failures cast doubt on the money priming effects reported . . .”
Each of the four experiments was a replication of one of the experiments in a previous study, Caruso et al. (2013). . . . However, Rohrer et al. report that they couldn’t replicate any of the four effects they looked for.
The above graph summarizes the original study and the unsuccessful replication:
In the original studies (red bars), the ‘money’ condition (bright) produced increases in the various behaviors compared to the control condition (dark). In Rohrer et al.’s replications (blue), there were no differences between conditions. The error bars are smaller for the blue bars too, reflecting the replications’ larger sample sizes.
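The point about the error bars can be checked with a quick simulation: the standard error of a mean shrinks as 1/sqrt(n), so quadrupling the sample size halves the error bar. A minimal sketch (the sample sizes here are made up, not the ones from the studies under discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# The standard error of a mean shrinks as 1/sqrt(n): quadrupling the
# sample size halves the error bar. (These n values are hypothetical.)
for n in (50, 200, 800):
    # 10,000 simulated studies of size n, all drawn from the same distribution
    means = rng.normal(loc=0.0, scale=1.0, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:3d}  empirical SE = {means.std():.4f}  theory = {1 / np.sqrt(n):.4f}")
```

The empirical spread of the simulated study means tracks the 1/sqrt(n) theory closely, which is all the smaller blue error bars are telling us.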
And there were several other failed replications. (See the linked post for details.)
So it all seems pretty clear. I have no reason to believe in this effect. And, to the extent it is happening, the effect could vary: positive in some scenarios, negative in others, large in some places, small in others, etc. There is no evidence for any sort of universal effect; the explanations all devolve into contextual stories, which tell us nothing more than what we already knew: that lots of factors influence individual behavior and attitudes.
Just one thing, though . . .
Rohrer et al. say that they don’t have any explanation for the positive findings in Caruso et al. The results are unlikely to be due to publication bias, they say: the effects are too strong, and highly unlikely to have occurred by chance, even taking into account that there were unpublished null results too.
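That argument against publication bias can be sanity-checked with a back-of-the-envelope calculation: under the null hypothesis p-values are uniform, so even a large file drawer of unpublished null studies is unlikely to produce a very small p-value. A rough sketch (the file-drawer sizes are hypothetical):

```python
# Under the null, p-values are uniform on [0, 1], so the chance that at
# least one of N unpublished null studies yields p < 0.001 is
# 1 - (1 - 0.001)**N. (The N values below are hypothetical.)
for n_studies in (10, 100, 1000):
    p_tiny = 1 - (1 - 0.001) ** n_studies
    print(f"{n_studies:5d} null studies -> P(at least one p < .001) = {p_tiny:.3f}")
```

Explaining a p-value of 0.001 by the file drawer alone would require on the order of a thousand hidden null studies, which is the sense in which "too strong to be publication bias" is usually meant.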
Rohrer et al. also reject the idea that methodological differences could account for the failures to replicate. But this leaves us with the question of what is going on here. Hmm.
Can’t it just be the garden of forking paths? Lots of choices in data processing and coding, multiple outcomes, various ways that an analysis can lead to a “p less than 0.05” comparison.
As I’ve said before, I worry that concerns about the “file-drawer effect” (unpublished studies) and “fishing” or “p-hacking” (intentional searching for statistical significance) miss the elephant in the room, which is the garden of forking paths—that is, data-processing and analysis choices that are contingent on data, hence making the statement “p less than 0.05” essentially meaningless.
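The forking-paths mechanism is easy to demonstrate by simulation: even with no deliberate fishing within any single analysis, letting the data suggest which of a few reasonable-looking comparisons to report inflates the false-positive rate well past the nominal 5%. A minimal sketch, with made-up outcomes and a made-up subgroup split (none of this is the design of the actual studies):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sims, crit = 50, 10_000, 1.96   # per-group n; ~two-sided 5% critical value

def z(a, b):
    # Two-sample z statistic (large-sample approximation to the t-test)
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

hits = 0
for _ in range(sims):
    # Null world: "money" and control groups drawn from the same
    # distribution, with two outcomes and a covariate split (all hypothetical)
    money   = rng.normal(size=(n, 2))
    control = rng.normal(size=(n, 2))
    split   = rng.integers(0, 2, size=n).astype(bool)

    # Forking paths: outcome 1, outcome 2, their average, or outcome 1
    # within a data-suggested subgroup -- report whichever "works"
    zs = [
        z(money[:, 0], control[:, 0]),
        z(money[:, 1], control[:, 1]),
        z(money.mean(axis=1), control.mean(axis=1)),
        z(money[split, 0], control[split, 0]),
    ]
    hits += max(abs(zv) for zv in zs) > crit

print(f"false-positive rate with forking paths: {hits / sims:.3f}")  # well above the nominal .05
```

The point is that each individual comparison looks like an honest 5% test; it is the data-contingent choice among them that makes the reported "p less than 0.05" uninterpretable.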
This is not at all to say that all or even most studies reporting significant effects are mistaken. Rather, what I’m saying is that if a study reports statistical significance, and if that study contradicts theory and the rest of the literature (in this case, unsuccessful replications), then it’s not such a mystery that statistical significance was attained.
No need to think of this as a loose end that needs to be followed up.
Rather, it’s standard operating procedure: if there’s nothing going on (or, more precisely, a highly variable and context-dependent effect) that’s being studied by researchers using standard statistical methods with a strong motivation to find statistically significant p-values, then it’s no surprise at all that such p-values were found. It tells us pretty much nothing at all, especially in the context of a bunch of unsuccessful replications. So no puzzle, no “Hmm” required, I think.