
Snappy Titles: Deterministic claims increase the probability of getting a paper published in a psychology journal

A junior psychology researcher who would like to remain anonymous writes:

I wanted to pass along something I found to be of interest today as a proponent of pre-registration.

Here is a recent article from Social Psychological and Personality Science. I was interested by the pre-registered study. Here is the pre-registration for Study 1.

The pre-registration lists eight (directional) hypotheses. Yet, in Study 1 of the paper, only 2-3 of these hypotheses are reported. Footnote 1 states that the study included some “exploratory” measures that were inconsistent and so not reported in the main text.

In one sense, this is pre-registration working well in that I can now identify these inconsistencies. In another sense, I think it’s a misuse of pre-registration when literally the first hypothesis listed (and three of the first four hypotheses) is now labeled as exploratory in the published version. It’s the first time I’ve seen this and I wonder how common it will become in the future, knowing that many people don’t look closely at pre-registrations.

My reply: I have no idea, but I was struck by how the paper in question follows the social-psychology-article template, with features such as a double-barreled title with catchy phrase followed by a deterministic claim (“Poisoned Praise: Discounted Praise Backfires and Undermines Subordinate Impressions in the Minds of the Powerful”) and a lead-off celebrity quote (“Flattery and knavery are blood relations. — Abraham Lincoln”). This does not mean the results are wrong—I guess if you’re interested in that question, you’ll have to wait for the preregistered replication—it’s just interesting to see the research pattern being essentially unchanged. Just speaking generally, without any knowledge of this particular topic, I’m skeptical about the possibility of learning about complex interactions (for example, this tongue-twister from the abstract: “high-power people’s tendency to discount feedback only produced negative partner perceptions when positive feedback, but not neutral feedback, was discounted”) from this sort of noisy between-person experiment.

I agree with my correspondent that preregistration is a good first step, but ultimately I think it will be necessary to use more efficient experimental designs and a stronger integration of data collection with theory, if the goal is to make progress in our understanding of cognition and behavior.

P.S. My correspondent asked for anonymity, and that is a sign of something wrong in academia: it can be difficult for people to criticize published papers. I don’t think this is a problem unique to psychology. In a better world, the person who sent me the above criticism would not be afraid to publish it openly and directly.


  1. Anonymous says:

    Another great example of why pre-registration (of course) needs to be available to the reader. Here is a similar example:

    The fact that psychology seems to be in the process of messing up this simple and primary function of pre-registration is both pretty hilarious and sad. See comment-section here for example:

    I guess what’s next is peer-review only “open data” and “open materials”. They make just about as much sense as peer-review only “pre-registration”, and provide another way of slapping the tax-payers in the face and/or screwing up the most basic scientific methods, values, and principles.

  2. Simine Vazire says:

    As a fan of post publication peer review, I want to thank you and the anonymous correspondent for this feedback. I don’t speak for SPPS, but I personally appreciate criticism that can help me do better in the future. I think that our field is still figuring out how to handle/review pre-registrations, and there’s lots of room for improvement. The points raised here are good ones and will help me in my future reviewing and editorial work.

  3. Eric says:

I’m not at all surprised that the correspondent asked for anonymity. It’s the safest thing to do if you don’t have tenure or a secured academic future. The lack of anonymity is one of the things that may have hampered the use of PubMed Commons and led to its closure:

  4. AA (Also Anonymous) says:

What a coincidence: The New York Times just ran an article in which a psychologist justified the use of his snappy title. It gave me whiplash to see his next sentence defending what he does as science. (I think the journalist was poking some fun at him.) “The project’s title may be bold (‘To us, it was just catchy,’ he said) and the team may not have been clear they would be happy to discover no universals exist, but he is not worrying about criticisms unless he can learn from them. ‘We’re doing science,’ he said.” Not worried about criticisms, but loving the attention. The Times seems to like this guy: They gave him an Op-Ed to “clarify” his having hyped a doomed-from-the-start study that “found” no effect of music training. (Here’s the paper:

  5. Terry says:

Is Social Psychological and Personality Science a “respectable” journal?

    When reading blog entries here that critique a specific article, I often wonder if we are just picking on a trashy article in a trashy journal that no one takes seriously.

    Wikipedia says this journal is ranked 14 out of 62 journals in the category “Psychology, Social”. That doesn’t sound very impressive. I can’t imagine reading 14 Social Psychology journals on a regular basis.

    • Andrew says:


      Yes, I think SPPS is a respectable journal. You write, “I can’t imagine reading 14 Social Psychology journals on a regular basis,” and I kinda know what you mean, but there are a lot of researchers out there, and they have to publish their work somewhere. I’ve published in a lot of obscure journals myself, but I don’t think that just cos a journal is obscure to outsiders, that this makes it trashy or not to be taken seriously.

    • JD says:

First, journal prestige is associated with lower reproducibility. The less respectable journals actually produce better science. Part of this is that top journals obsess over novelty, not quality. See here for more:

      Second, it is worth mentioning that preregistration can be done on datasets that have already been collected. I know people who do this. Their rationalization is that they haven’t looked at the data yet and only do after pre-registering their hypotheses. This could be the case when people use data from the World Values Survey, the International Social Survey Program, or the American National Election Survey, among others. Since this is allowed, it makes pre-registration rather useless. As a reader, I still have no way of knowing whether you already collected the data, analyzed it, then “pre-registered” your hypotheses. It’s really just a smokescreen and the system can still be cheated just like the old one could.

      • Andrew says:


        Indeed, my colleagues and I performed a preregistered study on data that had already been collected. The paper is here. I disagree with you that the preregistration was “rather useless.” It was useful to me.

        The reason for the preregistration was not a concern that I would “cheat”; the concern was that it’s easy for data collection and analysis to be so fluid that we can mislead ourselves. Recall the 50 shades of gray paper: Nosek, Spies, and Motyl replicated their own study not because they thought they might cheat but because they wanted some control over their inferences.

  6. Dan Simpson says:

    I like the snappy titles. I understand that this isn’t really why they’re doing it, but I do think we could all stand to have a bit more fun with our academic writing. But then again, I once used the hook of a Madonna song as a title, so I probably should have less fun.

  7. Alex says:

    Hasn’t there been some discussion (maybe on this blog?) on the disconnect between registered medical trials and the publications that eventually come from them? This is a problem, but I don’t think a new or unique one.
