“Maybe the better analogy is that these people are museum curators and we’re telling them that their precious collection of Leonardos, which they have been augmenting at a rate of about one per month, includes some fakes.”

Someone sent me a link to a recently published research paper and wrote:

As far as any possible coverage on your blog goes, this one didn’t come from me, please. It just looks… baffling in a lot of different ways.

OK, so it didn’t come from that person. I read the paper and replied:

Oh, yes, the paper is ridiculous. For a paper like that to be published by a scientific society . . . you could pretty much call it corruption. Or scientism. Or numerology. Or reification. Or something like that. I also made the mistake of doing a google search and finding a credulous news report on it.

Remember that thing I said a few years ago: In journals, it’s all about the wedding, never about the marriage.

For the authors and the journal and the journal editor and the twitter crowd, it’s all just happy news. The paper got published! The good guys won! Publication makes it true.

And, after more reflection:

I keep thinking about the coauthors on the project and the journal editors and the reviewers . . . didn’t anyone want to call Emperor’s New Clothes on it? But then I think that I’ve seen some crappy PhD theses, really bad stuff where everyone on the committee is under pressure to pass the person, just to get the damn thing over with. And of course if you give the thesis a passing grade, you’re a hero. Indeed, the worse the thesis, the more grateful the student and the other people on the committee will be! [Just to be clear, most of the Ph.D. theses I’ve seen have been excellent. But, yes, there are some crappy ones too. That’s just the way it is! It’s not just a problem with students. I’ve taught some crappy classes too. — ed.]

So in this case I guess it goes like this: A couple of researchers have a clever, interesting, and potentially important idea. I’ll grant them that. Then they think about how to study it. It’s hard to study social science processes, where so much is hidden! So you need to find some proxy. They come up with some ideas that might be a little offbeat, but maybe they’ll work. . . . Then they get the million data points, they do lots of hard work, they get a couple more coauthors and write a flashy paper–that’s not easy either!–maybe it gets rejected by a couple of journals and gets sent to this journal.

Once it gets there, OK, there are a couple of possibilities here. One possibility is that one of the authors has a personal or professional connection to someone on the editorial board and so it gets published. I’m not saying it’s straight baksheesh here: they’re friends, they like the same sort of research, they recognize the difficulty of doing this sort of work, and even if it’s not perfect it’s a step forward, etc. The other possibility is that they send the paper in cold and they just get lucky: they get an editor and reviewers who like this sort of high-tech social science stuff–actually it all seems a bit 2010-era to me, but, hey, if that’s what floats their boat, whatever.

Then, once the paper’s accepted, it’s happy time! How wonderful for the authors’ careers! How good for justice! How wonderful of the journal, how great for science, etc.

It’s like, ummm, I dunno, let’s say we’re all kinda sad that there have been over 50 Super Bowls and the Detroit Lions have never won it. They’ve never even been in the Super Bowl. But if they were, if they had some Cinderella story of an inspiring young QB and some exciting receivers, a defense that never gives up, a quirky kicker, a tough-but-lovable head coach, and an owner who wasn’t too evil, then, hey, wouldn’t that be a great story! Well, if you’re a journal editor, you not only get to tell the story, you get to write it too! So I guess maybe the NBA would be a better analogy, given that they say it’s scripted . . .

My anonymous correspondent replied:

I just have no idea where to start with this stuff. I find it to be profoundly confused, conceptually. For one thing, the idea that we should take seriously [the particular model posited in the article] is deeply essentialist. I can imagine situations in which it is the case, but I can also imagine situations in which it isn’t the case because of interacting factors from people’s life history. . . . That’s how social processes work! But people do this weird move where they assume any discrepant outcome like that must be the result of one particular stage in the process rather than entrenched structures, which, to my mind, really misses the point of how this stuff works.

So I’m just so skeptical of that idea in the first place. And to then claim to have found evidence for it just because of these very indirect analyses?

I responded:

I’m actually less interested in the scientific claims of this paper than in the “sociology” of how it gets accepted etc. One thing that I was thinking of is that, to much of the scientific establishment, the fundamental unit of science is the career. And a paper in a solid journal written by a young scholar . . . what a great way to start a career. The establishment people [yes, I’m “establishment” too, just a different establishment — ed.] can’t imagine why someone like you or me would criticize a published scientific paper—it’s so destructive! Not destructive toward the research hypothesis. Destructive to the career. For us to criticize, this could only be from envy or racism or because we’re losers or whatever. Of course, they don’t seem to recognize the zero-sumness of all this: someone else’s career never gets going because they don’t get the publication, etc.

Anyway, that’s my take on it. To the Susan Fiskes of the world, what we are doing is plain and simple vandalism, terrorism even. A career is a precious vase, taking years to build, and then we just smash it. From that perspective, you can see that criticisms are particularly annoying when they are scientifically valid. After all, a weak criticism can be brushed aside. But a serious criticism . . . that could break the damn vase.

Maybe the better analogy is that these people are museum curators and we’re telling them that their precious collection of Leonardos, which they have been augmenting at a rate of about one per month, includes some fakes. Or, maybe one or two of the Leonardos might be of somewhat questionable authenticity. But, don’t worry, the vast majority of their hundreds of Leonardos are just fine. Nothing to see here, move along. Anyway, such a curator could well be more annoyed, the more careful and serious the criticism is.

P.S. The story’s also interesting because the problems with this research have nothing to do with p-hacking, forking paths, etc. Really no “questionable research practices” at all—unless you want to count the following: creating a measurement that has just about nothing to do with what you’re claiming to measure, setting up a social science model that makes no sense, and making broad claims from weak evidence. Just the usual stuff. I don’t think anyone was doing anything wrong on purpose. More than anyone else, I blame the people who encourage, reward, and promote this sort of work. I mean, don’t get me wrong, speculation is fine. Here’s a paper of mine that’s an unstable combination of simple math and fevered speculation. The problem is when the speculation is taken as empirical science. Gresham, baby, Gresham.

19 Comments

  1. John Williams says:

    The attitude you describe is harmful enough in academic science, but it is even worse in applied science. Several of my papers attacked a bogus model that was supposed to assess how the habitat value of a stream varies with flow, which people used to determine how much water should be released from dams, or what would be the environmental effect of a dam, etc. One reviewer gave a thumbs down on one paper with the comment that it was not fair to take away a tool that people needed to do their jobs without giving them another one to use instead.

  2. Erez says:

    No fair. We can’t judge the paper ourselves.

  3. Corwin Schlump says:

    How do you think we can change the incentive system from the current one, which promotes quantity of publications, impact factors, and citations instead of high-quality research? Even if your article isn’t cited it can still influence other research, or you might have a single very high-impact article while your other work isn’t that influential. How should we evaluate professors in departments? I like the h-index, but it still relies on impact factors/citations, which incentivizes people to publish more.

    • Anoneuoid says:

      “Experimentalists” should be rewarded for making reproducible and precise measurements, i.e., generating interesting data.

      Theorists should be rewarded for making precise and accurate predictions consistent with the data generated above.

      There is feedback between the two because the data generated is interesting if it helps distinguish between different theories. Theories are interesting if they simply explain data that is surprising under competing theories.

  4. “… I’ve seen some crappy PhD theses, really bad stuff where everyone on the committee is under pressure to pass the person, just to get the damn thing over with. And of course if you give the thesis a passing grade, you’re a hero. … [Just to be clear, most of the Ph.D. theses I’ve seen have been excellent.”

    I feel like I’ve seen more of these in the past few years (crappy PhD theses, or the PhD as a participation prize for staying in grad school for 7 years). It’s difficult, though important, to fight. One rarely acknowledged harm it causes is frustration & unhappiness in the students who *are* doing good, challenging work — these people don’t have a voice on the crappy-thesis committees.

    • Kyle C says:

      Interesting. In general I’m sympathetic to critiques of academia, but this comment reminds me there is a common persona one sees on Twitter, the young academic who rues how bad their “diss” was and bemoans their lack of success in the job market.

    • Martha (Smith) says:

      I’ve had a variety of experiences on Ph.D. committees. A couple that come to mind:

      In one I criticized something the student had done, but her advisor said something to the effect that he had told her to do that, and it was all right, because she needed to finish up.

      In another, I backed up the student who disagreed with something her advisor told her to do. Another committee member agreed with me and the student, and the three of us persuaded the advisor that our perspective was correct — she finally understood and was very embarrassed by her lack of understanding.

    • anon e mouse says:

      In my department there was an unofficial but widely known and followed rule that your advisor was absolutely not supposed to let you schedule your defense if they weren’t sure your work was in good enough shape to pass. I think this is a good system, although students were very dependent on their advisors gauging this correctly and acting in their best interests.

    • Rahul says:

      The generalization of this is degree dilution.

      I am often interviewing candidates with an engineering master’s whom I would rate at barely high-school level.

  5. Renzo Alves says:

    The solution is utterly simple and devastatingly obvious: Academics respond to incentives. Therefore, reward them for good work, not bad work. You might once have been able to punish them for poor work, via shame or some such, but not anymore. No one has any sense of shame now (I don’t want to overgeneralize here). I’m surprised that no one seems to have implemented this basic principle of psychology. On the other hand, I might not have a Ph.D. from one of America’s top universities if my idea had been practiced back in the day.

  6. David Marcus says:

    I will suggest another possibility that may sometimes (often?) apply: The people do not realize that the paper is wrong/ridiculous. If you start with this assumption, then the behavior is more reasonable.

    Long ago I was in a meeting with a bunch of technical people: engineers, scientists, mathematicians. The presenter was showing various images and the fractal dimension of each: gray scale images, so the fractal dimension of the surface. (This was back when fractals were hot. It turned out to be a ridiculous project, but that’s a different story.) Some of the images had a fractal dimension less than 2. This prompted a long discussion as to what was special about these images that they had such a fractal dimension. The two mathematicians in the room tried to point out that all this meant was that the calculation of the fractal dimension was erroneous. Not only did the non-mathematicians fail to realize the numbers were wrong, but when this possibility was pointed out, they failed to integrate it into their discussion. They seemed to treat it as another possible explanation: Two people are suggesting the numbers are wrong, but lots of others are suggesting why the images might be special. (Of course, the numbers were wrong; we later determined that this was due to fitting a line incorrectly.)
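    [The original project’s code isn’t described, but the pitfall in this anecdote is easy to sketch: a box-counting dimension is the slope of a line fit to log N(s) versus log(1/s), and if you fit that line over scales where the box counts have saturated, you get an impossibly low dimension. Below is a minimal illustrative sketch, not a reconstruction of the actual analysis; the image, sizes, and point counts are all made up for the demo. — ed.]

    ```python
    import numpy as np

    def box_count_dimension(img, sizes):
        """Estimate the box-counting dimension of the nonzero pixels of a
        2-D array: count occupied boxes N(s) for each box size s, then fit
        a line to log N(s) versus log(1/s). The slope is the estimate."""
        counts = []
        h, w = img.shape
        for s in sizes:
            occupied = 0
            for i in range(0, h, s):
                for j in range(0, w, s):
                    if img[i:i + s, j:j + s].any():
                        occupied += 1
            counts.append(occupied)
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    # A dense cloud of random pixels: over coarse scales it fills the plane,
    # so its box-counting slope should come out near 2.
    rng = np.random.default_rng(0)
    img = np.zeros((256, 256))
    img[rng.integers(0, 256, 5000), rng.integers(0, 256, 5000)] = 1

    # Fit only over coarse scales, where the power law actually holds.
    good = box_count_dimension(img, sizes=[8, 16, 32])

    # Also include fine scales, where N(s) saturates at the number of
    # points: the fitted slope drops well below 2. Same data, same code,
    # "dimension less than 2" -- purely an artifact of fitting the line
    # over the wrong range.
    bad = box_count_dimension(img, sizes=[1, 2, 4, 8, 16, 32])
    ```

    The point of the sketch: nothing about the images needs to be “special” to produce a dimension below 2; an incorrectly fitted line is sufficient.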

  7. Oliver C. Schultheiss says:

    “Really no “questionable research practices” at all—unless you want to count the following: creating a measurement that has just about nothing to do with what you’re claiming to measure, setting up a social science model that makes no sense, and making broad claims from weak evidence.”

    Just curious — was that a paper on human motivation?
