So I open the email one day and see this:
hi, Andrew – FYI, here’s another paper from the Annals of Small-N Correlational Studies, also known as Psychological Science:
hope all is well!
The research paper he was referring to is called “The Ergonomics of Dishonesty: The Effect of Incidental Posture on Stealing, Cheating, and Traffic Violations,” and it’s by Andy Yap, Abbie Wazlawek, Brian Lucas, Amy Cuddy, and Dana Carney.
Regular readers will know I’ve been ragging on Psych Science recently, and I just looove the name “Andy Yap” (I don’t know the person, I just like the name because I’m named Andy and I like to yap; I guess in the same way that someone named Andrew who loves jello might find my name amusing), and the paper itself seems eminently mockworthy, with goofy images like this:
My correspondent continued:
Please don’t quote me on this, but in study 4, they report p < .001 (ooh!)...then, oh yeah, they figure out that a bigger seat = bigger car (gee, didn’t they think of that until *after* they reported p < .001), so then they control for car size...and the seat effect has an un-sexy p=.087 (yes, they report their super-exact p to 3 decimal places). I need a drink!
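My correspondent’s point about controls is worth spelling out: if seat size and car size are correlated, and car size is what actually matters, then a regression on seat size alone will show a “seat effect” that shrinks once you control for car size. Here’s a toy simulation with made-up numbers (not the paper’s data) showing that pattern:

```python
# Toy confounding simulation, not the paper's data: only car size
# drives the outcome, but seat size is correlated with car size, so a
# regression on seat size alone picks up a spurious "seat effect."
import numpy as np

rng = np.random.default_rng(0)
n = 500
car_size = rng.normal(0, 1, n)              # the confounder
seat_size = car_size + rng.normal(0, 1, n)  # correlated with car size
y = 0.5 * car_size + rng.normal(0, 1, n)    # outcome depends on car size only

def ols_coefs(X, y):
    """Least-squares coefficients, with an intercept column added."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_raw = ols_coefs(seat_size, y)[1]  # seat size alone: looks like an effect
b_ctrl = ols_coefs(np.column_stack([seat_size, car_size]), y)[1]  # controlled

print(f"seat coefficient, no control:   {b_raw:.3f}")
print(f"seat coefficient, with control: {b_ctrl:.3f}")
```

The uncontrolled coefficient comes out well away from zero while the controlled one collapses toward zero, which is the same qualitative story as the p < .001 turning into p = .087.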
I took a quick look at the paper and here’s what I thought:
Study #1 looks pretty clean, although it’s possible that the experimenters unwittingly manipulated the money-passing condition, as they knew which participants had open stances and which had closed stances. Other than that possible problem, that experiment seems clean, no?
With experiment #2, I wonder whether it was just easier to cheat in the expansive condition. But, again, I don’t know if that’s a legitimate criticism or me just searching for a possible problem with the study.
But experiment #3 seems ok, no?
I agree with my correspondent on study #4, that it is supportive of the general claims in the paper but nothing more than that. I wouldn’t really take study #4 as any kind of independent evidence.
Overall, though, this all looks much stronger than the arm-size and political attitudes study, or the ovulation and political attitudes study.
So: this whole “embodied cognition” thing is pretty controversial, and I’m not a fan of the style in which the paper is written (lots of presentation of positive results and not so much on the warnings; for example, in study #4 it is explicitly stated that the measurement is taken by “hypothesis-blind research assistants,” but in the other studies there is no mention of this, which I take to mean that the measurements were not hypothesis-blind and the researchers didn’t want to emphasize that point). But, at least from a statistical standpoint, most of the results seem clean, without a lot of mucking around looking for subsets and interactions and controls.
It was easier for me to get a handle on the political studies (for example, not trusting at all the huge effects claimed in the ovulating-and-voting analysis), but I don’t know enough about plain old psychology to have a sense of how to think about these embodied cognition experiments. I wish the study was crappy and I could just comfortably sit here and mock.
Maybe some of you can help me out on this?