“How the Internet Scooped Science (and What Science Still Has to Offer)”

Brian Silver pointed me to this post from Andrew Lindner:

This week, my manuscript, co-authored by Melissa Lindquist and Julie Arnold, “Million Dollar Maybe? The Effect of Female Presence in Movies on Box Office Returns” was published online by Sociological Inquiry. It will appear in print later this year.

So far, no surprises. A researcher promotes his work online. I do this all the time, myself.

The topic of the paper in question is the Bechdel test for women in movies, which has come up on this blog a couple times.

When we last discussed the Bechdel test, several years ago, we got this comment from Paul:

The Bechdel test isn’t a tool for evaluating individual movies. In my experience film quality is almost an orthogonal dimension. But it’s informative in aggregate.

Essentially, [the] point about giving the audience what they want is spot on. When the Bechdel test is consistently failing, that means people are being drawn to movies with poor characterization of female characters, and that raises the question of “why?”. This is also a push/pull phenomenon: since we have finite choices in movies, and directors have imperfect knowledge of the public’s desires, it’s almost certain that some of this can be explained by the consumer, but some of it is also being pushed by the producers.

Lindner was thinking along similar lines:

I wondered whether the underrepresentation of women in film was due to audiences disliking movies featuring women or Hollywood under-funding Bechdel movies. I cooked up the idea to link a content analysis of whether movies pass the Bechdel Test with data on the movies’ box office performance, production budget, and critical appraisal. That fall and winter, two wonderful students, Melissa Lindquist and Julie Arnold, and I collected the data. In short, we found that Bechdel movies earn less at the box office, but it’s because [sic] they have smaller production budgets, not because [sic] audiences reject them. A simple study, but, I think, an interesting one.

We’ll just let those “becauses” stand for now, as causal inference is not today’s topic of conversation.
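Setting the causal language aside, the comparison itself is easy to sketch. Below is a minimal illustration in Python, using simulated data rather than the paper’s actual dataset or variables: regress log box-office gross on a Bechdel indicator, with and without a log-budget control. Under the budget story, the Bechdel coefficient is negative in the first regression and shrinks toward zero in the second.

    # Minimal sketch, not the paper's analysis; all data below are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy data built to mimic the reported pattern: Bechdel-passing movies
    # get smaller budgets, and budget (not the indicator) drives gross.
    rng = np.random.default_rng(0)
    n = 500
    bechdel = rng.binomial(1, 0.5, size=n)
    log_budget = 17.0 - 0.5 * bechdel + rng.normal(0, 1, size=n)
    log_gross = 1.0 * log_budget + rng.normal(0, 1, size=n)
    movies = pd.DataFrame({"bechdel": bechdel,
                           "log_budget": log_budget,
                           "log_gross": log_gross})

    # Without the budget control, the Bechdel coefficient is negative ...
    print(smf.ols("log_gross ~ bechdel", data=movies).fit().params)
    # ... with the control, it shrinks toward zero.
    print(smf.ols("log_gross ~ bechdel + log_budget", data=movies).fit().params)

In this simulation the uncontrolled coefficient lands near -0.5 (the budget gap times the budget effect) and the controlled one near zero, which is the shape of the result Lindner and his coauthors report.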

Anyway, Lindner gives the submission history of his article:

[Image: screenshot of the article’s submission and rejection history]

I assume that “Top Two Journal” is the American Journal of Sociology? I say this because ASR and AJS are considered the top two journals in sociology, but I think the American Sociological Review is the top journal, so if he’d submitted it there, he would’ve just said “Top Journal.” Actually, I don’t know why he didn’t just give the names of the journals—it’s not like there’s any requirement to keep it secret—but I guess those details don’t really matter.

As Lindner writes, “there is nothing abnormal about this story.” Indeed, a couple years ago I published my own story of when the American Sociological Review published an article with serious statistical flaws and then followed up by refusing to publish my letter pointing out those flaws.

As Lindner might say, nothing sinister here, just standard operating procedure.

It’s a struggle to get a paper into a journal—but when it is accepted, it’s inside the castle walls and is protected by a mighty army.

11 thoughts on ““How the Internet Scooped Science (and What Science Still Has to Offer)””

  1. What exactly are we criticizing here? A journal must act as a curator, must it not? Not every technically correct article deserves publication in a given journal.

    Or are we criticizing the specific curation decision in this particular case? I.e., saying that this particular paper deserved to be published based on the specifics of its topic and the quality of its analysis.

    (I mean in Lindner’s case. Not publishing a follow-up about errors in a previous article, once they’ve been pointed out to a journal’s editors, is clearly ugh.)

    • Rahul:

      I think that’s Lindner’s point. The journals in his case did nothing wrong; nonetheless there was a lot of wasted effort involved, compared to what would’ve happened had the paper been posted online and open to comment there.

      • Andrew:

        But are those options mutually exclusive? He can still post a draft online & submit to journals in parallel, right?

        I’m not sure about the wasted-effort argument: with any kind of curation hierarchy, a rejection cycle is a necessary evil, right? Artists have to submit to multiple galleries before they get an exhibition. Writers submit manuscripts to one publishing house after another. Movie scriptwriters face rejection. It seems a natural process.

        In any case, no one’s forcing him to submit to a journal. He’s doing that for reasons of readership or citations or whatever else motivates him.

        • Rahul:

          “He can still post a draft online & submit to journals in parallel, right?”

          With many journals in the social sciences, you can’t. You’re often not allowed to submit if a similar version or draft of your paper is already viewable online.

        • I’d heard of that rule, but I thought it wasn’t around any more. Do you know the rationale behind it? Sounds stupid. How common is it?

          Submitting to multiple journals in parallel is obviously verboten. But what’s wrong with posting a draft online? Doesn’t arXiv survive mostly on such parallel postings? Also NBER working papers, etc.

        • The rift in practice between the social sciences and math/physics/statistics about posting things online is a sad one for the social sciences. But this difference in practice did not appear just with the internet. Before then, “preprints” served the role of conveying results before they were officially published, or even considered for publication. In math, at least, it was common to send out preprints to others who worked in one’s area when you sent a paper to a journal for consideration. (The invention of Xerox machines made this easier, but we did it even before then, with mimeographed copies.) There were also journals that specialized in “research announcements,” published before the actual papers were. So someone not on the original mailing list, or who had heard by word of mouth or at a conference, could ask for a preprint. When the physicists developed the arXiv, mathematicians jumped on board, seeing an easier way to distribute preprints.
          I don’t think social scientists ever had the practice of distributing preprints.

        • I still don’t get what’s in it for the journal to block preprints. If anything, it should make a reviewer’s life easier.

  2. One point Lindner mentions is the “unwieldy format” of academic journal articles.

    I think this is an interesting point, but one not discussed as often as the flaws of peer review, replication, fishing, etc.

    • Rahul:

      I think the three issues—unwieldy format, awkward review process, and problems with replication and criticism—go together. The review and publicity process motivates a style of writing that hypes the trivial and hides the difficulties.

  3. I wonder if someone has bothered to calculate the Bechdel index of professors? Maybe on the basis of the probability of female co-authors on their articles.

    For extra credit, one might make a male co-author with a higher Bechdel count count for more. Perhaps a Google-like eigenvalue problem; a sketch follows below.

    “The effect of female co-author presence in articles on their citation counts”

    “Correlation between h-index and Bechdel index”
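    A minimal sketch of that eigenvalue idea, with a made-up co-authorship matrix (every number here is hypothetical): score each author by the scores of their co-authors, PageRank-style, which amounts to taking the leading eigenvector.

        import numpy as np

        # A[i, j] = number of joint papers by authors i and j (made-up numbers).
        A = np.array([[0., 3., 1., 0.],
                      [3., 0., 2., 1.],
                      [1., 2., 0., 0.],
                      [0., 1., 0., 0.]])

        # Power iteration for the leading eigenvector: each author's score is
        # proportional to the co-authorship-weighted sum of the others' scores.
        score = np.ones(A.shape[0])
        for _ in range(100):
            score = A @ score
            score /= np.linalg.norm(score)

        print(score)  # better-connected authors get higher index values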

  4. ASR and AJS are generally considered co-#1s in sociology. The outcry against ASR, I suspect, has more to do with the fact that ASR is owned by the ASA and is therefore (theoretically) responsive to member concerns.
