An ethnographic study of the “open evidential culture” of research psychology

Claude Fischer points me to this paper by David Peterson, “The Baby Factory: Difficult Research Objects, Disciplinary Standards, and the Production of Statistical Significance,” which begins:

Science studies scholars have shown that the management of natural complexity in lab settings is accomplished through a mixture of technological standardization and tacit knowledge by lab workers. Yet these strategies are not available to researchers who study difficult research objects. Using 16 months of ethnographic data from three laboratories that conduct experiments on infants and toddlers, the author shows how psychologists produce statistically significant results under challenging circumstances by using strategies that enable them to bridge the distance between an uncontrollable research object and a professional culture that prizes methodological rigor. This research raises important questions regarding the value of restrictive evidential cultures in challenging research environments.

And it concludes:

Open evidential cultures may be defensible under certain conditions. When problems are pressing and progress needs to be made quickly, creativity may be prized over ascetic rigor. Certain areas of medical or environmental science may meet this criterion. Developmental psychology does not. However, it may meet a second criterion. When research findings are not tightly coupled with some piece of material or social technology—that is, when the “consumers” of such science do not significantly depend on the veracity of individual articles—then local culture can function as an internal mechanism for evaluation in the field. Similar to the way oncologists use a “web of trials” rather than relying on a single, authoritative study or how weather forecasters use multiple streams of evidence and personal experience to craft a prediction, knowledge in such fields may develop positively even in a literature that contains more false positives than would be expected by chance alone.

It’s an interesting article, because usually discussions of research practices are all about what is correct, what should be done or not done, what do the data really tell us, etc. But here we get an amusing anthropological take on things, treating scientists’ belief in their research findings with the same respect that we treat tribal religious beliefs. This paper is not normative, it’s descriptive. And description is important. As I often say, if we want to understand the world, it helps to know what’s actually happening out there!

I like the term “open evidential culture”: it’s descriptive without being either condescending, on one hand, or apologetic, on the other.
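
To give a rough sense of why an open evidential culture would be expected to produce “more false positives than would be expected by chance alone,” here is a minimal simulation sketch of researcher degrees of freedom. It is not from Peterson’s paper; the three outcome measures, the “drop the fussy participants” exclusion rule, and all the numbers are made-up illustrations. The point is just that when the null hypothesis is true and the analyst gets to try a few reasonable-looking analyses and report whichever one reaches p < .05, the realized false-positive rate climbs well above the nominal 5%.

```python
# Illustrative sketch (not from the paper): how a handful of researcher
# degrees of freedom inflates the false-positive rate under a true null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 5_000, 20
false_positives = 0

for _ in range(n_sims):
    # Two groups with NO true difference, measured on three correlated
    # outcomes (e.g., looking time scored three slightly different ways).
    base_a = rng.normal(size=(n_per_group, 1))
    base_b = rng.normal(size=(n_per_group, 1))
    outcomes_a = base_a + 0.5 * rng.normal(size=(n_per_group, 3))
    outcomes_b = base_b + 0.5 * rng.normal(size=(n_per_group, 3))

    significant = False
    for k in range(3):
        a, b = outcomes_a[:, k], outcomes_b[:, k]
        # Analysis variant 1: keep every participant.
        if stats.ttest_ind(a, b).pvalue < 0.05:
            significant = True
        # Analysis variant 2: drop each group's lowest and highest scorer
        # (a made-up "fussy participant" exclusion rule).
        a_trim, b_trim = np.sort(a)[1:-1], np.sort(b)[1:-1]
        if stats.ttest_ind(a_trim, b_trim).pvalue < 0.05:
            significant = True
    false_positives += significant

print(f"nominal alpha: 0.05, "
      f"realized false-positive rate: {false_positives / n_sims:.3f}")
```

None of the individual tests is unreasonable on its own, yet the printed rate comes out noticeably above 0.05; that gap is the quantitative content behind a literature with “more false positives than would be expected by chance alone.”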

6 thoughts on “An ethnographic study of the “open evidential culture” of research psychology”

  1. >”When problems are pressing and progress needs to be made quickly, creativity may be prized over ascetic rigor.”

    Reminds me of this:
    https://en.wikipedia.org/wiki/Politician%27s_syllogism

    Also, I wouldn’t take oncology as an example of a successful field. They are quite possibly further astray than any other (yes, even psychology).

    http://www.nature.com/nature/journal/v483/n7391/full/483531a.html
    http://www.nature.com/news/cancer-reproducibility-project-scales-back-ambitions-1.18938
    http://www.sciencemag.org/news/2015/06/feature-cancer-reproducibility-effort-faces-backlash

    • When a group I was with studied the adoption of treatments within a field of oncology (the primary investigator expected adoption to happen long after the evidence was clear), we found that instead it happened well before the evidence was clear, sometimes just from presentations of abstracts at meetings (where the subsequent publications withdrew earlier claims).

      That field of oncology faced high short-term mortality with little to no effective treatment, so we postulated that “problems are pressing and progress needs to be made quickly, creativity may be prized over ascetic rigor [hopeful treatments adopted before proven].”

      Follow-up studies were planned to test that claim, but I don’t think they were ever funded or done.

      • >”well before the evidence was clear”

        Doesn’t this phrase presuppose it is somewhat common for the evidence to become clear on the topic? From what I’ve read, I don’t think the evidence is clear about anything regarding cancer beyond the most trivial observations (like “cells are dividing” or “A is correlated with cancer”).

  2. We often go around saying that researcher degrees of freedom can be exercised without malicious intent or even awareness. When I think about how that actually works in the real world, often the first things I think of are internal psychological processes like confirmation bias, self-deception, and so on. Maybe that’s my predilection as a psychologist, but I suspect a lot of people think of it in those terms.

    So one thing I liked about this paper is that it pointed to the interpersonal side of things as well. Specifically, how problematic practices can become normalized through social processes and become part of how a small organization (a lab) operates, to the point where they’re openly discussed because everyone holds shared beliefs and assumptions about them. I wrote a little more about that here: https://hardsci.wordpress.com/2016/02/12/reading-the-baby-factory-in-context/
