The real lesson learned from those academic hoaxes: a key part of getting a paper published in a scholarly journal is to be able to follow the conventions of the journal. And some people happen to be good at that, irrespective of the content of the papers being submitted.

I wrote this email to a colleague:

Someone pointed me to this paper. It’s really bad. It was published by The Review of Environmental Economics and Policy, “the official journal of the Association of Environmental and Resource Economists and the European Association of Environmental and Resource Economists.” Is this a real organization? The whole thing seems like a mess to me. I understand that journal editors can find it difficult to get good submissions, but couldn’t they just publish fewer papers? They must be pretty desperate for articles if they’re publishing this sort of thing. Any insight on this would be appreciated.

My colleague responded:

No idea why it got published – REEP is a policy journal and my guess is the editor felt inclined to give some space to “the other side.”

I was thinking about this, and it seems like a key part of getting a paper published in a scholarly journal is to be able to follow the conventions of the journal. I guess that the author of the above-linked article is really good at that.

Writing a paper in the style of a particular academic field—that’s a skill in itself, nearly independent of the content of what’s being written.

And that got me thinking about all sorts of things that get published. I’m not thinking of bad papers such as the one discussed above, or frauds (you can’t expect reviewers to do the sleuthing to find those), or possibly-frauds-possibly-just-big-sloppy-messes such as those Pizzagate articles, or big claims backed by weak evidence, or empirical papers with statistics errors, or run-of-the-mill low-quality work that fits the preconceptions of the journal editors.

No, here I’m talking about those hoax articles, the ones where a team of authors constructs a paper with zero scientific or scholarly content but which is carefully constructed to be written in the style of a particular journal. The journal is the target, and the goal is publication—and then the later publicity to be had by revealing that the journal got hoaxed.

For some people, this is their entire career, or at least much of it. Not hoaxing, but writing papers with very little content, papers whose main characteristic is that they’re written in the style that’s acceptable to certain journal editors.

It’s a Turing test kind of thing. Or another analogy would be old-style hack writers who could get their crappy books published. Although that’s a bit different because these books had actual readers. Then again, Richard Tol (author of the “gremlins” papers discussed above) must have thousands of readers too, as his papers have been cited tens of thousands of times.

Anyway, here’s my point. We talk a lot about what appears in scientific journals and what scientific papers have media exposure and policy influence. But a key thing seems to be this orthogonal factor, which is the ability of some authors to craft just about anything into a publishable paper in field X.

I have this skill in the field of applied statistics. But I think I use the skill in a beneficial way, to publish papers that are interesting and useful. Other people can use this skill to push propaganda, or just to promote their own careers. And some people are so good at this that they overdo it, as with Bruno “Arrow’s other theorem” Frey.

The point

OK, here’s the deal. The key lesson from the hoaxes of Sokal etc is not that the academic humanities is crap, or that postmodernism is a scam, or whatever, but rather that a large part about getting published, in almost any venue, is about form, not content. If Alan Sokal had never been born, we could see it from the papers of Richard Tol. Being able to write a paper that will get published is a skill, quite separate from the content of the paper. It’s cool that Sokal etc. have that skill, but I feel like we’ve all been missing the point on this. Until now. So we have Tol to thank for something.

Comments

  1. Let’s call it what it is. Game playing.

    If you’re good enough at playing the game, you will succeed totally regardless of the merit of your work. If you’re no good at all at playing the game, you’ll never even get started. Even if you have worthwhile work to publish.

    Most professionals fall somewhere in between. They are just about good enough at playing the game that they have a chance for their work to stand on its merit (by the standards of their field, not in the judgment of this blog) but they can’t necessarily get totally spurious crap published.

    I suspect that being good at the game of getting published and being good at the game of attracting notoriety for your work tend to overlap in some individuals. That’s probably the type of professional whose work gets enough attention to be crucified here.

  2. a large part about getting published, in almost any venue, is about form, not content…. Being able to write a paper that will get published is a skill

    The problem with this explanation is that the skill is pretty widespread and that many papers by people with that skill do not get published. So “the skill” is a necessary but not sufficient condition.

    Tol gets published by editors who like the result.

    Sokal’s paper was published by an editor who liked the result. Postmodernism is (as best I can tell) a completely results-oriented field. Any set of inputs can be transformed by postmodernism into the desired result. Sokal’s paper is indistinguishable from other postmodern papers because it has the same amount of logical coherence: zero. It just lines up various statements and then jumps to the desired result. Presto. Anything “proves” what you want in postmodernism. Foucault does this at endless length.

    • Terry:

      The skill of being able to write a publishable paper when you have good material, that’s widespread (although people vary a lot in that skill, and the skill seems to be highly dependent on subfield, in that someone can have the skill to write a publishable paper in one field and not another).

      But the skill of being able to get a paper published in a good journal when you have no good material: that’s not so widespread. It may seem easy, but it’s not. Just as, in the old days when hacks made money writing pulp fiction, some hacks could make a living by writing to the market, but others tried and couldn’t do it.

      I agree that it helps if the journal editors like what you’re saying in your article (as in various notorious examples published in Lancet). But getting a contentless, or very weak, paper published is not just about conclusions, I think it’s mostly about presentation. Maybe Tol’s editors liked his results, maybe not, but Tol’s illustrious-on-paper career (currently 37000 citations on google scholar, publications in respectable journals) is testimony to his unusual ability to write to the market, an ability which seems to me to be essentially orthogonal to the content of the articles he’s publishing.

      • Andrew wrote:
        I agree that it helps if the journal editors like what you’re saying in your article (as in various notorious examples published in Lancet).

        We have striking examples of fraudulent works that are passed off as solid scholarship and accepted. It seems to me to be likely that a significant factor in the acceptance of bad work is the degree to which the editors find the results satisfying or politically desirable. I think that explains some of Wansink’s success—people enjoyed his conclusions.

        Similarly, Bellesiles’s Arming America was received with rave reviews by the historians even though it had profound flaws—many of which were relatively easy to identify. See https://en.wikipedia.org/wiki/Arming_America.

        I think the problem of scientists and purported scientists playing to their audiences is a larger problem than Andrew’s comment would indicate.

        Bob

    • Didn’t the editors claim, after the fact, that they did *not* like Sokal’s paper but that they wanted to include it, at least partly, because having a physicist submit a paper to their journal was so special? I can understand why one would not believe that, but what is the claim “Sokal’s paper was published by an editor who liked the result” based on? And weren’t there several editors?

      • According to the Wikipedia page, they did, and Sokal responded that that was part of the problem:

        https://en.wikipedia.org/wiki/Sokal_affair#Follow-up_between_Sokal_and_the_editors

        Andrew—how hard do you think it’d be for you to make up data and publish a scam paper in political science? Would it be a helpful exercise to call out the gullibility of editors in social science? Would your colleagues ever forgive you? I think it’d be pretty easy to pull off a scam paper in machine learning. As you say, it’s largely about getting the tone right, having the right academic bona fides, and having a hook (physicist publishing in humanities, Gelman’s name on the paper, a robot that can hit home runs better than Aaron Judge, etc.)

        • Maybe my English skills are lacking, but the Wikipedia link goes to a circular that reads:

          “From the first, we considered Sokal’s unsolicited article to be a little hokey. It is not every day that we receive a dense philosophical tract from a professional physicist. Not knowing the author or his work, we engaged in some speculation about his intentions, and concluded that this article was the earnest attempt of a professional scientist to seek some kind of affirmation from postmodern philosophy for developments in his field. His adventures in PostmodernLand were not really our cup of tea. Like other journals of our vintage that try to keep abreast of cultural studies, it has been many years since Social Text published contributions to the debate about postmodern theory, and Sokal’s article would have been regarded as sophomoric and/or outdated (and therefore unnacceptable to the editors) if it had come from a humanist or social scientist. As the work of a natural scientist it was unusual, and, we thought, plausibly symptomatic of how someone like Sokal might approach the field of postmodern epistemology i.e. awkwardly, assertively, and somewhat aimlessly, with a veritable armada of footnotes to ease his sense of vulnerability. In other words, we read it more as an act of good faith of the sort that might be worth encouraging than as an exercise of the intellect whose scholarly worth had to be judged.”

          In this context, I don’t think the Wikipedia article says they did, especially given what it (the Wikipedia article) also says. If “liking it” is supposed to mean “they had some reason to publish it,” then yes, they did. But that meaning would be identical to what the post here is asking about, so it cannot serve as an explanation.

          I admit, though, that I am not really sure whether the circular was written by the editors or by somebody writing about them; the author seems to change perspective a couple of times…

      • Yes, they had qualms about the paper’s presentation and poor logic.

        But, my assertion is that they liked the results, i.e., the paper claimed that postmodern tropes could be applied to physics, thus expanding postmodernism’s territory. If Sokal had written a more careful paper with the opposite results, it would have had zero chance of publication.

        Another way to put it is that the editors admit it was a bad paper but still published it. This strongly suggests it was the results they liked.

  3. Here is another datum for consideration. It is an article about how the author successfully published social-constructionist BS once he figured out the tricks. https://quillette.com/2019/09/17/i-basically-just-made-it-up-confessions-of-a-social-constructionist/.

    The key is not so much form-over-function. The key is to exploit the lax logical and evidentiary standards in academic humanities. Any set of inputs can be used to produce a publishable product.

    Footnote: The article strikes a false note to my ear. It sounds too perfect. While it is possible that the author had an epiphany and saw the light, I think it more likely that he always saw social-constructionism as mostly a hoax. … Or maybe the article is a hoax from the left.

    • This is somewhat off topic, but as somebody with an affinity for social constructionism (or at least for how I think it should be understood) I read the piece with much interest. I actually think (and that’s in line with what Dummitt writes) that a lot of what he wrote at the time is not “made up” and actually quite worthwhile. Nothing of what he writes implies that thinking about the social construction of gender roles is “wrong” or not worthwhile. He just says that he drew strong conclusions without much evidence and scrutiny because he was under the influence of an ideology; he was in some kind of social cocoon, which somehow became mightily influential. He learned how to behave in the right way and became successful, apparently not because he was cheating but because this was how his surroundings had taught him to behave. Which itself is a pretty good illustration of what social construction is about. The major intellectual mistake that he made in my view (and with which he was unfortunately in strong company) is to think that if something is socially constructed this implies that “it doesn’t really exist”. Social construction happens in communication, which has quite some power, but not the power to make biology go away.

      And then there’s the absolutist (and therefore quite anti-constructivist) trap of stating that “it’s all about power” when one could quite convincingly argue that power is very important without claiming that it’s 100% of the story. I don’t really know his work first hand, so I don’t know how much less embarrassing to himself it would’ve been without these two howlers. Anyway, the popularity of this kind of “social constructionism” looks a bit like the story of p-values, in the sense that some kind of Mickey Mouse version of a very valuable idea becomes popular among people who like to use something easy and formulaic rather than think for themselves.

      “I think it more likely that he always saw social-constructionism as mostly a hoax.” Now that is just made up by you.

      • “that a lot of what he wrote at the time is not “made up” and actually quite worthwhile” – I wanted to write “probably quite worthwhile”. I don’t know him enough to assess that.

      • I think you captured well the problem with social constructionism when misused as a debating tactic. People who do this constantly slide from “some” to “all” or from “some” to “none”. While some of x may be about “power”, it’s often not all about power, and sometimes power is only a small part of the explanation. There are often fundamental reasons why something is the way it is. Similarly, when “some” of x is due to y, they often try to obfuscate this by arguing that since not “all” of x is due to y, then none of it is. (This has to be done slyly since it is so obviously dishonest.)

        Footnote: my guessing that Dummitt may have always seen constructionism as a hoax is nothing but a guess on my part, so we pretty much agree about that.

        Footnote 2: The university Dummitt works at is remarkably horrible architecturally. It’s a concrete nightmare. https://beta.images.theglobeandmail.com/f45/life/home-and-garden/architecture/article38223754.ece/BINARY/w620/rv-tbozikovic-trent-champlain0302rv04.JPG

  4. File this under “things sociolinguists know.” In order to be taken seriously by a group, you have to speak like they do. That goes for being a thug in a street gang to being a cop or from being an anthropologist to being a zoologist or from being a fan of football to a fan of baseball. Every subgroup develops their own specialized language.

    What happens is that I introduce some jargon like “Gaussian” to mean a particular density that could be written out less efficiently in full mathematical detail. I base that on jargon I’ve already introduced, like “probability density function”, and then write “pdf” instead of “non-negative continuous function that integrates to one”. The effect of a nested sequence of these definitions over time leads to what looks like jargon-laden gibberish to outsiders. (There’s a tiny illustration of this at the end of this comment.)

    The goal is not to create an exclusive club—it’s to make speaking within the clique more efficient. Almost every club wants more members. I’d love more people speaking Bayesian stats and Dungeons and Dragons and baseball—please learn my languages and talk to me!

    When I first met Andrew 20+ years ago, I kept trying to ask him stats questions. I came from a machine learning perspective in natural language processing. He couldn’t understand my questions and I couldn’t understand his answers. And I’d try for dozens of minutes at a time with pencil and paper over repeated lunches—this wasn’t just in passing. No way he could’ve published in my journals or I in his journals no matter what we had to say (at least without a fluent co-author). That’s because nobody would’ve understood it, not because we were trying to keep out intruders in our fields. Reviewers only have limited attention for learning whole new subfields.
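
    To make the jargon point concrete, here’s a tiny illustration (a toy in Python/scipy, not anything from the discussion above): the same Gaussian density, first written out in full for outsiders, then referred to by name once “Gaussian” and “pdf” are shared vocabulary.

    ```python
    import numpy as np
    from scipy.stats import norm

    mu, sigma, x = 0.0, 1.0, 0.5

    # "Non-negative continuous function that integrates to one," written out in full:
    longhand = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    # The same thing once "Gaussian" and "pdf" are shared jargon:
    shorthand = norm.pdf(x, loc=mu, scale=sigma)

    print(longhand, shorthand)  # identical up to floating-point error
    ```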

      • Baseball then D&D paved my pre-college road to statistics and simulation.

        This shouldn’t come as a surprise to some readers—I’ve blogged about both D&D and baseball before and written a Stan case study on baseball.

        I’m still proud of Little Professor Baseball, which I developed almost 20 years ago [as is evident from the HTML] around the time I was reading Jim Albert’s book Curve Ball. It’s a simplified Strat-o-Matic style “matchup model” but with percentile dice and an elegant rolling mechanic (if I do say so myself). And I generated cards for my favorite childhood matchup—the Reds vs. the Orioles. I was super chuffed when I wrote to Jim Albert about it and he wrote back before I was a card-carrying member of the comp stats club.

        The first large-scale model Daniel Lee and I fit with Stan was an IRT model for batter-pitcher matchups over the entire history of baseball (as far as Daniel could easily extract it)—which didn’t lead to what we expected, namely increasing player ability over time; I still need to find some time to look into that. It was something like 20K parameters with 10M observations, and it took a few days to run. (There’s a toy sketch of this kind of matchup model at the end of this comment.)

        So yes, I like baseball. So much so I’m spending my Friday night tonight watching game 5 of the ALCS!
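
        For anyone wondering what a batter-pitcher matchup IRT model looks like, here’s a toy sketch of the generative side in Python/numpy. The sizes, scales, and intercept here are made up for illustration—this is not the model Daniel and I actually fit—but the basic idea is the same: each batter and each pitcher gets a latent ability, and the success probability for a plate appearance depends on who is facing whom.

        ```python
        import numpy as np

        rng = np.random.default_rng(42)

        n_batters, n_pitchers, n_pa = 500, 300, 100_000  # toy sizes, not the real 20K/10M

        # IRT-style latent abilities on the log-odds scale
        batter_skill = rng.normal(0.0, 0.3, n_batters)
        pitcher_skill = rng.normal(0.0, 0.3, n_pitchers)
        intercept = -1.1  # baseline success rate of roughly 25%, purely illustrative

        # Random matchups; success probability depends on the specific batter-pitcher pair
        b = rng.integers(0, n_batters, n_pa)
        p = rng.integers(0, n_pitchers, n_pa)
        prob = 1 / (1 + np.exp(-(intercept + batter_skill[b] - pitcher_skill[p])))
        hit = rng.binomial(1, prob)

        print("simulated success rate:", hit.mean())
        ```

        Fitting turns this around: treat the batter and pitcher abilities as parameters with hierarchical priors and estimate them from the observed matchup outcomes.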

        • Strat-o-Matic baseball and D&D; man does that bring back memories. Luckily for me I’d finished calculus early in high school and went off to hometown State U. to learn Fortran before I discovered either. When I was coding my first game at State U. I asked my professor where I might find code for a random number generator. He said “there’s no such thing unless you pretend to believe otherwise.” So next I took a probability and statistics course and none of it made any sense other than Laplace, because it (admittedly an introductory course) went straight to “blah, blah, blah, so if p less than magic number then true.” So I focused on predictions (and drafting anyone available from the Black Sox, the Gashouse Gang, etc.) rather than causes. Then I became a lawyer and nowadays I live in a kind of Hell reading, e.g., the only opinion to cite the ASA p-value statement, which concluded therefrom: “p-values measure[] the probability that the observed association is the result of random chance rather than a true association.” Ancient Gold Dragons were easier for my wizard to kill than p-value misinterpretations among lawyers.

    • Bob:

      You write, “That goes for being a thug in a street gang to being a cop or from being an anthropologist . . .”

      That reminds me . . . we used to have a professor in the sociology department here who became famous for writing a book about being a thug in a street gang. But then he started to act like a thug in academia! I remember a conversation with him where he talked at me like he was Tony Soprano. It didn’t come off so well. I guess that’s a risk if you move between worlds: behavior in one environment doesn’t always work so well in another. This is related to the John Yoo line, beyond which an academic ceases to be a researcher and becomes a full-time advocate.

  5. I started looking at the linked Tol paper and got bogged down asking myself what he *should* have done. It’s an interesting problem. Suppose we have 10 climate change studies, in years 1, 2, …, 10, each estimating y (economic benefit, so negative values are harm) for some x (temperature increase). We normalize at (0,0) as a true value. The truth is y = beta*x + u + e, where u and e are normally distributed, with one twist. The twist is that in year t, the value of u is the same as in year t-1 with probability theta, and independent of it otherwise. We unfortunately do not observe the year of the study. How do we estimate beta? Suppose we have a bunch of observations where y is bigger with x and just a few where it is smaller, so the OLS line for beta is negative. Should we estimate that the true relationship is less negative? (A small simulation sketch of this setup is below.)

    This represents 10 different studies, some of which use the same methodology (and hence have the same error), but we don’t know which ones use the same methodology. All we know is that there are clusters of bias.

    Any ideas?
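
    To make the setup concrete, here is a minimal simulation sketch in Python (beta, theta, and the noise scales are arbitrary illustration values, not estimates of anything). It generates the ten studies with the copy-the-previous-year error structure, fits OLS to the pooled points, and repeats, so you can see how far a single OLS slope can land from the true beta when the u errors cluster.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    beta_true, theta = -1.0, 0.7          # illustration values only
    sigma_u, sigma_e, n_studies = 1.0, 0.3, 10

    def one_replication():
        x = np.linspace(0.5, 3.0, n_studies)   # temperature increases for studies in years 1..10
        u = np.empty(n_studies)
        u[0] = rng.normal(0, sigma_u)
        for t in range(1, n_studies):
            # with probability theta the study-level error is copied from the previous year
            u[t] = u[t - 1] if rng.random() < theta else rng.normal(0, sigma_u)
        e = rng.normal(0, sigma_e, n_studies)
        y = beta_true * x + u + e
        return np.polyfit(x, y, 1)[0]          # OLS slope from the pooled points

    slopes = np.array([one_replication() for _ in range(2000)])
    print("true beta:", beta_true)
    print("OLS slope across replications: mean", slopes.mean().round(2), "sd", slopes.std().round(2))
    ```

    In this sketch x is fixed and independent of u, so the OLS slope isn’t biased on average, but individual replications can land far from the true beta, and naive i.i.d.-style standard errors would tend to understate that spread—which seems close to the “clusters of bias” worry.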

    • Eric:

      I think there’s no good way to do this without unpacking the separate studies and treating each of them as producing a function, not just a point. And then the next step is to recognize that these 10 separate studies are not 10 pieces of data; rather, they represent 10 different models.

      • I agree with the other comments here that what I suggested isn’t the right thing to do to figure out the effect of global warming. It’s the typical meta-analysis problem of pretending the studies are comparable and equally good, and I’d prefer an old-fashioned non-mechanical review article that explained which were the good studies and which should be ignored, and maybe bases the entire conclusion on just the one best study’s best specification. In my description, I say to assume we don’t see “the year”, but of course Tol does observe it and should use that information.

        That said, I think what I described is still kind of an interesting problem.

    • I don’t know exactly what you’re up to here, but I think it’d be fine to write down your generating process in Stan and fit it to the observed data. It looks somewhat problematic because you have unobserved binomial outcomes (whether or not a given year is “the same as last year”), but you could probably marginalize them away.

      • I agree with Andrew above about using this model. I mean, if you want to see what your model fits like, you could probably fit it in Stan as mentioned in my previous post, but if you want to understand the cost of global warming I’m not sure your model of Tol’s data helps in any way.

      • I’m an economist, and a 60-year-old one, so I don’t think in that style, but it seems reasonable to just try to fit it. I was thinking of “cheating” and starting with an example where I know the real parameters, to build intuition in seeing where OLS goes wrong, and I guess that’s similar.

        The problem is probably better posed (and slightly simpler) by saying that we’re just trying to measure one parameter and “OLS” means the mean, but we have measurement error.

    • As somebody who took Tol’s results seriously (I have been “had”), I was wondering this, too. And in a way, this is what annoys me most of all: that something like a review of all those models and what they said over the years would be a worthwhile undertaking, and that part of why it isn’t being done is that this sort of nonsense gets published every couple of years and saturates the slots available for such a topic.

  6. > The key lesson from the hoaxes of Sokal etc is not that the academic humanities is crap, or that postmodernism is a scam, or whatever, but rather that a large part about getting published, in almost any venue, is about form, not content.

    I disagree with this conclusion. Sokal, the authors of the “Grievance studies affair”, etc. clearly had the form right, but they definitely would not have been able to publish any other type of content just because they had the right form. So, you have to have content approved by the orthodoxy _and_ you have to use the right form, and the primary lesson we draw from those hoaxes was that the orthodoxy was not interested in finding truth.

    Thus, there are two different problems: 1) where form is key and you can publish any noise as long as you have the right form, and 2) where content matters and you will be published as long as you reiterate it. Sokal falls in #2, not #1.

    • This is a sort of unusual perspective, given the many, many examples of shoddy research that are talked about on this site. Bem’s ESP study, himmicanes, fat arms and voting, endless soup bowls … All of these were legitimately just modelling noise (or were just FULL of errors), and the conclusions are surely false, but they were published anyway. For that matter, look at Bem’s meta-analysis on psi—it even has a Bayesian analysis!

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4706048/

      Really, skim that meta-analysis. The conclusions are surely false, right? But it has the right FORM. It sounds like science, it musters data, crunches numbers, appropriately hedges statements, and so on. But the conclusions are impossible, because there is no mechanism.

      The presence of data (which can be poorly analyzed, rife with hidden errors, or falsified) can get you access to publication in science, so long as you have the right form too, IMO. Heck, if you simulated data in R I bet you could falsify a publication on just about any topic if it had the right form.

      Writing an article in bad faith, with ideas that you do not actually believe in the humanities (ok, let’s say gender studies / postmodernism, which is really what this is about) is basically like falsifying data. Because the ideas and logic take the place of quantitative data. No peer review process in any discipline does a very good job detecting falsified data.

      (And I say all this as someone who honestly dislikes postmodernism as a school of thought quite a bit. It’s just that these hoaxes could very, very easily be pulled off in any social science or medical journal with a few lines of data simulation in R, and writing something ridiculous up with the right form).

    • Koray:

      I disagree that “the orthodoxy was not interested in finding truth” in the case of the journals that published Sokal et al., just as I disagree that “the orthodoxy was not interested in finding truth” in the case of the journals that published Tol, the journals that published Bem, the journals that published Kanazawa, etc. All these authors were people who knew how to write in the approved style, even though their content was zero (or, in the case of Tol, arguably negative). The journal editors are trying their best, but the system is set up to reward things written in the proper style.

      Perhaps a good analogy is the famous Salon of art in Paris in the 1800s (that thing that the Impressionists rebelled against, at least that’s how I remember reading about it). I assume the Salon organizers wanted good art, but (a) it had to be on their terms, and (b) they were such slaves to form that they thought all sorts of empty stuff was good art, just cos it followed the template.
