Hoaxers submit article with fake data, demonstrate that some higher education journal you’d never heard of is as bad as PNAS, Psychological Science, Lancet, etc etc etc. And an ice cream analogy:

Alan Sokal writes:

I’ve just had my attention drawn to an article, “Donor money and the academy: Perceptions of undue donor pressure in political science, economics, and philosophy” by Sage Owens and Kal Avers-Lynde III, that apparently was intended as some kind of homage to me; see here and here.

But looking at it, the basic idea doesn’t seem outlandish. Standard deviations of zero (e.g. Table 5) are obviously ridiculous but can perhaps be explained by over-rounding. (At least one more decimal place should have been given.) Also, an effect size of 0.0 with SD = 0.1 obviously can’t be significant at the 5% level. And an 83% response rate does seem implausible. But is it true that, as the author told the Chronicle, “the regression model is all wrong”?

If this is a hoax (as the authors now admit), it seems to be a very subtle one. It would be interesting to hear what professional statisticians make of it, and how they rate the journal’s peer review.
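To make the zero-effect-size point concrete, here is a minimal sketch in Python. The 0.0 and 0.1 are just the rounded values quoted above; the sample size is arbitrary, and the conclusion does not depend on it.

```python
# A minimal check: a reported effect of exactly 0.0 cannot be statistically
# significant, whether the 0.1 is read as the standard deviation of the responses
# (as assumed below) or as the standard error of the estimate itself.
from math import sqrt
from scipy import stats

effect, sd, n = 0.0, 0.1, 100          # rounded values quoted above; n is arbitrary

se = sd / sqrt(n)                      # standard error of the mean effect
t = effect / se                        # a point estimate of exactly 0 gives t = 0
p = 2 * stats.t.sf(abs(t), df=n - 1)   # two-sided p-value

print(f"t = {t:.2f}, p = {p:.2f}")     # t = 0.00, p = 1.00: nowhere near the 5% level
```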

This seems related to the well-known phenomenon that some things are beyond parody. For example, I wrote this post as parody but some people took it straight. I elaborated in the comments:

One way to focus this question is to imagine that you are reviewing this paper for a journal such as PNAS or APSR. What to do? If a journal sent me the paper to review, I’d probably let myself off the hook by saying that I don’t find the paper convincing so they should get another reviewer. But what if I had no choice and had to review it? There’s a norm that says that if a paper makes important and novel claims, and you can’t find a specific problem with the work, you should recommend acceptance. This seems fair, as the alternative is to allow rejection for no good reason at all! But the result is that journals get filled up with papers that are written professionally, have all the right formatting and buzzwords, but which make no sense. Recall ESP, himmicanes, ages ending in 9, beauty and sex ratio, and all the rest.

It’s a Cantor’s diagonal problem. We know how to find the flaws in all the old bad papers, but new papers keep coming with new flaws. And the challenge is that not all the new papers have flaws! This one might really be on to something.

Ultimately there’s no substitute for external replication. In the meantime, I feel under no obligation to believe a claim derived from a bank-shot statistical analysis, even if the paper in question follows enough of the rules that I don’t want to put in the effort to figure out exactly what went wrong (or, less likely but possible, to convince myself that their analysis is really doing what it says it’s doing).

So, yeah, the journal review system is set up to fail in this way. Over and over, journals publish papers with big claims that are not supported by the data but are supported by professional-style writing, rhetoric, and strategically-placed numbers. I think that most of the time the authors of these papers are fooling themselves. Sometimes you have academic scoundrels who perform active deception, and then you can have people like the author or authors of this “Sokal 3” paper who publish hoax papers. It shouldn’t be so difficult to get a hoax paper published: if you’re not bound by the laws of nature and the caprices of real data, you can make up whatever you want. Take that Cornell professor who published that ESP paper back in 2011. My impression is he didn’t make up his data, he just exercised a lot of selection in what data to analyze and present. But if he’d been willing to just make stuff up . . . then, sure, publish away! The only reason not to is that you’ll burn a bunch of bridges. Lots of annoyed editors won’t want to deal with you again.

From the report in the news article, it appears that this latest hoax was a political stunt and that the author or authors were able to con some journal reviewers but weren’t able to con a political activist group. It seems to me pretty obnoxious that, after they sent false material to the activist group UnKoch My Campus and the group did due diligence and didn’t get conned, the author then wrote, “Some of the UnKoch people will claim this proves the Kochs are after them.” What the hell? First they try to scam the group, then when the group doesn’t fall for it, they insult them? That’s just lowlife. No wonder the author wants to remain anonymous.

The thing that bugs me is that they connect it to Sokal. His hoax was much more thoughtful.

Sokal followed up:

I guess the question is: If one *assumes* that the data are legit, are there *still* any gross errors that peer review should have caught? Obviously there are some gross incongruencies in Table 5. Are there any other serious errors? Is it true that, as the author told the Chronicle, “the regression model is all wrong”?

The article in question has been taken down so I can’t be sure, but based on when I looked at it before, I’d say that, yeah, peer review should’ve caught the problems with this paper. Peer review should also have caught the problems with the beauty-and-sex-ratio paper, and the ovulation-and-voting paper, and the ages-ending-in-9 paper, and the ESP paper, and the Bible Code paper, and the collected works of the pizzagate dude, etc etc etc. The common theme is that these papers made bold claims and threw in enough numbers that it took some effort to track down exactly what was happening. Meanwhile, reviewers (a) are busy, and (b) often will default to publishing big claims, partly from fomo and partly from not wanting to play the role of censor. So, yeah, the publication of this article told us something we already knew, which is that you can get fake quantitative social science research published in real journals. Real gardens with imaginary data. Real tacky, though, how they insulted the activist group that was too savvy to get scammed. I think an apology is in order.

As for Higher Education Quarterly: yeah, too bad this one slipped through. Still, they behaved better than Lancet did after it turned out Lancet had published a paper based on fake data. When the fakeness of this one was revealed, Higher Education Quarterly retracted right away; they didn’t go into defensive mode.

The analogy you’ve been waiting for

OK, so it’s like this . . . somebody buys an old ice-cream truck and paints it to look like new, with MISTER SOFTIE in big letters and pictures of various ice cream treats on the side, then drives the truck to a moderately busy street corner playing that irritating ice cream truck music, then sells a bunch of ice creams to the customers who show up. But . . . here’s the trick . . . the ice creams actually contain Ex-Lax. Ha ha ha, all the suckers who bought ice cream get the runs! And they really are fools, because the label on the truck was MISTER SOFTIE, not MISTER SOFTEE. You can add a political twist by putting some “Donald Trump is a Loser” bumper sticker on the truck. Those big-city liberals turn off their brains. Trump Derangement Syndrome ha ha ha. Of course some people are annoyed that they paid $3 for ice cream filled with Ex-Lax, so the perpetrator stays anonymous.

OK, the analogy isn’t perfect. Real journals do publish papers with fake data, but real ice cream trucks don’t serve laxative-infused treats—at least, not that I’ve heard of! The point is that it’s really not so hard to con people, if you’re willing to misrepresent yourself.

76 thoughts on “Hoaxers submit article with fake data, demonstrate that some higher education journal you’d never heard of is as bad as PNAS, Psychological Science, Lancet, etc etc etc. And an ice cream analogy”

    • In some cases, fake papers with fake data include real researchers from real places – the real researchers don’t know they have been used. So your idea, Keith, is a good step, but some fakery still gets through.

    • IMO, journals should hire people more akin to fact checkers in journalism. One thing that is relevant in checking an author’s affiliations is to deduce potential conflicts of interest (and whether they were properly divulged in the manuscript). Checking the validity of the stats presented and verifying additional details about the source data seem like reasonable parts of that role as well.

      As an unpaid blinded peer reviewer I don’t have the ability (or time, or inclination) to do that much work. I can only give gross feedback, as Gelman discusses.

  1. Thank you for your thoughtful analysis. I am annoyed by the credulous coverage many outlets gave Sokal-squared, and also this “Sokal 3” hoax. What’s interesting about Sokal-squared is that most of the manuscripts submitted were rejected, and of those accepted, the reviewers asked the authors to tone down or revise the most outrageous parts. So, these sorts of hoaxes demonstrate nothing more than the fact that a crappy paper can eventually get published if you shop it around to enough journals. Which is not exactly breaking news, except to those ignorant of the publishing process.

    One thing I recently learned is that the journal Sokal submitted his original paper to wasn’t even peer-reviewed. The journal was relying on Sokal as an expert in physics to give his take, and he seized the opportunity to give them a nonsense article. The whole episode is a grim warning about trusting treacherous STEM professors, but not really an indictment of peer-reviewed humanities articles, or humanities subjects in general.

    I am increasingly weary of these bad-faith attempts to discredit entire fields, especially since the “hoaxes” themselves are based on very shoddy reasoning. They haven’t quite stooped to the level of Project Veritas, but they aren’t far off.

    • Adede:

      Also selection bias. If you think this latest stunt was at best silly and at worst dirty politics, you might not write about it at all: it’s just one more example of an anonymous person on the internet trying to stir up trouble. But if you do think it’s a big deal, you’re more likely to write about it. This is similar to junk science like that beauty-and-sex-ratio paper: it gets hyped by the suckers, while the more sensible reporters don’t fall for it, so they don’t write about it at all (except to use it as an example of how things go wrong).

      I have more sympathy for the original Sokal paper. I don’t think the point of that hoax was that the editors of that journal could be conned; rather, I think the point is that they publish a lot of nonsense, and Sokal’s paper was a dramatic way to illustrate this point. The Sokal 3 paper demonstrates that journals will publish bad quantitative papers that use fake data. We knew that; we read all about it every day in Retraction Watch! So there’s less of a point. Also, the Sokal 3 story has this unpleasant angle where it feels like an attempted case of political manipulation.

      • I think that unless you were on the editorial board of Social Text, it’s hard to know why they published it. I believe it was a special edition, so it didn’t go through regular peer review, and they themselves said afterwards they thought parts of Sokal’s argument were strange but they let some things slide because they thought it was a good-faith attempt by a physicist who was trying to make connections between his subject matter and theories he didn’t entirely understand. Of course, one could say that they *would* say that as an attempt to save face.

        I never found Sokal’s critiques all that interesting, for the obvious reason that few people are going to spend years studying something they think is nonsense. So the critics usually simplify or outright misunderstand something and proclaim it nonsense. And what they are critiquing *is* nonsense: the problem is that the people being criticized never actually said it in the first place.

        My larger objection to Sokal is that he really wanted to make certain questions appear ridiculous, and he did it for explicitly political reasons. He believed that politics relied upon a notion of objective reality, and wanted to ridicule any approach that suggested otherwise.

        • I think the point of Sokal’s original hoax was not that Social Text would publish nonsense, but that Social Text (and by extension some part of the academic left) was willing to publish an article on quantum mechanics, a subject on which they had no expertise, simply because it used various trendy academic buzzwords. He called Zermelo–Fraenkel set theory hegemonic. Did the editors know what that meant? Did they care? I think that he exposed a certain kind of unseriousness in the academic left. That doesn’t mean that all post-modernism or hermeneutics is unserious, any more than it makes all social scientists unserious when Gelman points out the problems with the power pose. But it is still important to call out unserious, unrigorous academic work.

        • To be honest, I got bored with the science wars in graduate school, so I’m mostly relying on memory. But to my recollection, the journal edition it was published in was a special edition where people from different fields were invited to reflect on how ideas in their speciality related to the concerns of the journal. The editors certainly didn’t know physics, but they assumed Sokal was contributing in good faith, and they relied upon his reputation for quality control. The idea was to invite people from outside the field of the journal to offer what may have been speculative thoughts about possible connections, in this case between physics and the concerns of social theory. I myself thought it strange at the time that they published it, but their defense (which, again, may have been created afterwards to save face) was that, in the spirit of inviting people from outside the field to participate, they didn’t hold a professor of physics to the same standard they would hold someone in the social sciences. I never decided if I bought that argument, but I don’t necessarily think it is implausible: this was before blog posts and all the venues we now have to publish, and I don’t see it as a fault if journals published speculative thoughts from time to time.

          And I do think Sokal was pretty explicit about not just calling out problematic articles but an entire philosophical position, and he wanted to do so as an unabashed Old Leftist who believed in objective truth about external reality. People use the hoax to say that postmodernism itself is nonsense, because a “postmodern” journal published a nonsense article (which is a bit strange, because Social Text wasn’t a postmodern journal).

        • The one thing I always questioned was why the editors of Social Text didn’t just reply by saying “Exactly, Sokal has just demonstrated the point. This is how knowledge is generated. People with credentials and the trappings of rigor put out narratives that are accepted by the community without question.”

          I think that there is something self-referentially incoherent about some forms of post-modernism.

        • I don’t want to belabor the point, but “accepted by the community without question” is something people assume rather than demonstrate. An editor may choose to publish an article for a variety of reasons, and Sokal announced it was a hoax before anyone could cite it. So we have no idea what the “community” would have said about it.

          As I’ve been saying, we don’t know why the editors published it, but I think it is a stretch to say that a decision to publish (especially in a special edition) is akin to giving it an editorial seal of approval. Couldn’t they have thought, “this doesn’t make sense to us, but maybe someone who knows both fields could make sense of what he’s trying to say, so we’ll give it the benefit of the doubt”?

        • Joe:

          I think a lot of papers get published because the editors of a journal don’t want to be censors. For example, I doubt the editors of JPSP thought in 2011 that ESP was real, but they published the Bem paper because they wanted to be fair. Also fomo. Had the editors understood the methodological problems in that paper, I can only assume they would’ve rejected it, but back in 2011 people weren’t so aware of forking paths; they just saw a bunch of statistically significant p-values and thought, Yeah, there must be something here!

          Anyway, the point is that, yeah, journal editors publish things they disagree with. That happens all the time.

          A complicating factor is that journals will publish crap if it seems newsworthy, but they’ll also sometimes publish crap if it fits their ideological biases, so you’ll see Lancet etc. publish very weak work that pushes certain politically liberal (in the U.S.) views.

        • “they assumed Sokal was contributing in good faith, and they relied upon his reputation for quality control. ”

          For the life of me I can’t see how this consideration exonerates the editors of anything. This is exactly **the** problem. Why should any paper be published on “good faith” or reputation? What’s the point of peer review if in the end “good faith” and reputation are all that matters? Papers are rejected all the time when author(s) have clearly made a “good faith” effort. What separates these papers from a hoax?

          Your argument is worse than unconvincing: it’s exactly the mentality that leads to problems in the first place.

  2. I’m mostly infuriated that I can’t find the paper to check whether this “obviously wrong” regression model really is obviously wrong. As far as I can tell, no one – including its conservative boosters (whose core critique seems to be “donor money can’t make research right wing because academics are left wing”) – appears to have pointed out what this model was and what’s wrong with it.

    I wanna see, dammit!

  3. My favorite article in this vein is Leonard Leibovici’s famous

    “Effects of remote, retroactive intercessory prayer on outcomes in patients with bloodstream infection: randomised controlled trial”

    “Remote intercessory prayer said for a group of patients is associated with a shorter hospital stay and shorter duration of fever in patients with a bloodstream infection, even when the intervention is performed 4–10 years after the infection.”

    https://www.bmj.com/content/bmj/323/7327/1450.full.pdf

    On second thought, Leibovici’s spoof is different because his data are real, although the very notion of retroactive intercessory prayer is absurd (though, judging by the responses to his article, not to everybody). Unlike the more famous Sokal hoax, Leibovici did not target other disciplines, and his piece appeared in the BMJ’s well-known humor issue.

    • It is very difficult for scientific journals to deal with scientifically, ahem, “problematic” ideas that are nevertheless widely believed by the general public, isn’t it? I mean, if you believe an omnipotent God is answering your prayers, what’s a little causality violation?

  4. Am I correct in my understanding that the authors submitted fabricated data for publication? If so, that’s a horrible thing to do. It just wastes valuable time and resources on the part of reviewers. All in the name of making a statement. That’s dumb.

  5. I can’t help but note you are referring readers to your parody post on coronavirus misinformation on the day that this makes the news:

    NPR looked at deaths per 100,000 people in roughly 3,000 counties across the U.S. from May 2021, the point at which vaccinations became widely available. People living in counties that went 60% or higher for Trump in November 2020 had 2.7 times the death rates of those that went for Biden. Counties with an even higher share of the vote for Trump saw higher COVID-19 mortality rates.

    On the ‘hoax’ paper, this reminds me a lot of the discussions in the auditing of financial statements regarding the “expectations gap”. Investors tend to think that an audit means the financial statements are free of fraud. Really it means that the auditors have followed a set of procedures laid out by auditing standards. For the most part, this ‘hoax’ shows that what peer review actually sets out to do differs from what misinformed outsiders think it does. But unlike good-faith studies, it doesn’t merely show the gap–it exacerbates it by leading readers to think that good peer review catches intentional fraud, instead of being an articulated process with a range of goals (none of which are typically to detect data fabrication).

    • Rjb:

      Good point. Given what’s been happening with vaccine denial, it seems I was a bit too quick to dismiss that claim of behavior being linked to misinformation in news and social media. I guess this deserves its own post.

    • It is just that southern states are more likely to be both “red” and hotter during the summer, so they have had more cases so far. Currently the South has the lowest rate per capita. Do it at the end of the winter and this will go away.

      And all-cause mortality is far from clear either. E.g., the current count is 21 + 17 Pfizer/Moderna deaths vs 16 + 17 placebo deaths during the RCTs. So that is, if anything, a 15% increased mortality rate in the vaccinated. This is much different than the message coming from observational studies and needs to be reconciled.

      • For Moderna, see table S19. It shows 17 vaccinated vs 16 placebo deaths (cutoff at 3/26/2021):
        https://www.nejm.org/doi/10.1056/NEJMoa2113017

        For Pfizer, see table S4. It shows 15 vaccinated vs 14 placebo deaths (cutoff at 3/13/2021):
        https://www.nejm.org/doi/full/10.1056/NEJMoa2110345

        However, the FDA has since updated the Pfizer data (NB, this has the same cutoff date as above):

        From Dose 1 through the March 13, 2021 data cutoff date, there were a total of 38 deaths, 21 in the COMIRNATY group and 17 in the placebo group.

        https://www.fda.gov/media/151733/download

        This requires explanation, since according to the observational data these vaccines are 90%+ protective against covid deaths.

        • I think the explanation is that you are comparing different estimands.

          The observational studies are trying to estimate the effect of the vaccine on deaths due to Covid.

          The death numbers from the RCTs that you are quoting are all deaths that occurred.

          I would strongly expect the VE of Covid vaccines against Covid death to be substantially greater than its VE against all deaths.

        • > I would strongly expect the VE of Covid vaccines against Covid death to be substantially greater than its VE against all deaths.

          Yep, so what would you expect? The vaccinated group should have 10% fewer total deaths?

        • I’d have to run through the numbers to work out the assumed proportion of total deaths due to Covid. And it would obviously be highly dependent on the amount of Covid in the population. Maybe 10% is reasonable? It seems a little high to me without thinking too hard about it.

          But anyway, the numbers of deaths in the trials are small; if you use them to construct a frequentist 95% CI for the effect on all-cause mortality, you get a CI that includes:
          – implausibly large reductions in all deaths with vax
          – implausibly large increases in all deaths with vax
          – a 10% reduction in total deaths

          Basically I don’t think the trial data alone are going to shift any reasonable person’s prior belief as it’s compatible with any sensible value.
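          (For what it’s worth, here is a rough sketch of that CI in Python, pooling the Pfizer and Moderna death counts quoted upthread and assuming roughly equal person-time in the two arms, which is only approximately true for these 1:1 randomized trials; it illustrates the point above, it is not a formal analysis.)

```python
# Rough 95% CI for the all-cause death rate ratio, using the combined counts quoted
# upthread (21 + 17 vaccine deaths vs. 16 + 17 placebo deaths) and assuming equal
# person-time per arm. A sketch of the commenter's point, not a formal analysis.
from math import exp, log, sqrt

vax_deaths, placebo_deaths = 21 + 17, 16 + 17

rr = vax_deaths / placebo_deaths                       # point estimate of the rate ratio
se_log_rr = sqrt(1 / vax_deaths + 1 / placebo_deaths)  # SE of log(rate ratio)
lo, hi = (exp(log(rr) + z * se_log_rr) for z in (-1.96, 1.96))

print(f"rate ratio {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# ~1.15 (0.72, 1.84): compatible with a large reduction, a large increase, or a ~10% drop
```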

        • That’s uncharacteristically misleading, even for you. Your quote is taken from the vaccine safety discussion; these deaths are potential “side effect” deaths (“None of the deaths were considered related to vaccination”), and Covid deaths are excluded here (you couldn’t detect side effects otherwise). Covid deaths would be discussed in the efficacy section. Table 8a lists only 1 “Severe COVID-19 Occurrence” in the treatment arm, which would include death, as per the footnote.

        • From your first nejm link:

          A total of 32 deaths had occurred by completion of the blinded phase, with 16 deaths each (0.1%) in the placebo and mRNA-1273 groups; no deaths were considered to be related to injections of placebo or vaccine, and 4 were attributed to Covid-19 (3 in the placebo group and 1 in the mRNA-1273 group) (Tables S19 and S26). The Covid-19 death in the mRNA-1273 group occurred in a participant who had received only one dose; Covid-19 was diagnosed 119 days after the first dose, and the participant died of complications 56 days after diagnosis.

        • Zhou –

          > California isn’t cold.

          I’ve noticed that Anoneuoid OFTEN leaves out any data that don’t fit his simplistic reverse engineering of complex phenomena with high levels of uncertainty.

          California, Arizona, New Mexico, Arkansas – just don’t fit the narrative. So we can just hand wave them away.

        • > California, Arizona, New Mexico, Arkansas – just don’t fit the narrative. So we can just hand wave them away.

          Can you explain how these states “just don’t fit the narrative”? I don’t need a bunch of paragraphs, just really quickly point me to what data you are looking at.

          They don’t fit a simplistic narrative because (1) it’s facile to treat states as if they’re monolithic blocks. Policies vary by localities. Demographics vary by localities. Meteorological aspects vary within states. None of this makes sense anyway without considering variance in testing rates and factors associated with the different variants. And (2) in obvious ways, different states have shown trends in different relationships to “seasonality.” For example, your theory about AC as the explanatory factor in Florida doesn’t hold up for states like Arizona. And further, we can see different patterns in association with “seasonality” across different time periods from 2020 to 2021.

          No one says seasonality is irrelevant. But your simplistic statements like

          > It is just that southern states are more likely to be both “red” and hotter during the summer . . .

          (emphasis added) actually require some sophisticated analysis.

          If you want to reduce a complex causality, with many potentially influential variables and much uncertainty, to some simplistic hand-waved causal mechanism, that’s your choice. It would be better, imo, if you took a more sophisticated approach, as we can see here:

          https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2021GH000413

          You’d get less pushback from me, at least.

          I think it’s facile to explain covid outcomes through some kind of simplistic mechanism, whether that be “seasonality” or political affiliation or vaccination rates, etc. That doesn’t mean we can’t see signals of those factors (and others) in outcomes across localities. Of course we can. But refuting one simplistic explanation by positing another simplistic explanation seems quite unscientific to me – and it is likely an indication of motivated reasoning, imo.

        • Depends a LOT on where in CA you are. Tahoe or Mammoth have snow on the ground. San Diego is a different story. Also, even here in the LA area, the pre-70s construction is very poorly insulated; winter overnight temps can be low enough that if you aren’t running your heater, you can see your breath **indoors**.

        • Daniel –

          > Depends a LOT on where in CA you are.

          Sure. But that’s part of the point.

          That helps show why just comparing across states, as if each state is monolithic across all potentially relevant variables, to argue that only one variable stratified by state borders explains all covid outcomes, is just poor methodology.

        • Cases are still low in California. So California is not cold right now and cases are not rising there (yet).

          I don’t see how that is even supposed to be an exception.

        • They rise when people are driven to close their windows and breathe recycled air (use heat/AC). Temperature is just a proxy for this behavior.

          There have really been too many silly responses triggered here. So I am going to stop responding in this thread to avoid spamming.

      • Anoneuoid –

        > It is just that southern states are more likely to be both “red” and hotter during the summer . . .

        And with the wave of a hand, you dismiss any uncertainties or complexities.

        It’s funny how your hand-waving always leans in the direction of vaccines being ineffective.

        Prolly just a coincidence.

        Anyway…

        https://acasignups.net/21/10/06/rumors-age-being-major-confounding-factor-county-level-death-rates-have-been-greatly

        • Raghu –

          Hmmm. I don’t think “every” is a fair characterization.

          And I’ve never quite understood why people can’t just not read comments they aren’t interested in.

          But I don’t want to annoy. What do you suggest as a guiding principle? Basically no comments on covid if covid isn’t directly related to the topic of the OP?

        • 1: “Basically no comments on covid if covid isn’t directly related to the topic of the OP?” Yes! Is this so hard to imagine?

          2: “And I’ve never quite understood why people can’t just not read comments they aren’t interested in.” Sorry to tell you this, but I automatically ignore almost every comment you and Anoneuoid write. Nonetheless it is *still* annoying to have to wade through screens full of you two *and* to have the side header of recent comments be worthless. If you have so much free time and so much to say, write your own blog.

          Sorry to be rather harsh, but Andrew’s is one of my favorite blogs, and the signal-to-noise ratio of the comments is really going downhill.

        • Raghu –

          > Sorry to be rather harsh.

          Nothing to worry about. Harsh away. In fact I mostly ignore your comments as well and wouldn’t expect you to take that personally.

          I also don’t think you should comment less so I don’t have to scroll past your comments, I mean it’s a trivial inconvenience, but like I said I don’t want to annoy.

          > Yes! Is this so hard to imagine?

          Obviously it’s not so hard to imagine since I just suggested it!

          Anyway, henceforth, I won’t comment on covid unless it’s directly relevant to the OP.

  6. Sokal’s Hoax was not so obviously a hoax either. His quotes were real. His physics was vague, but not wrong. His metaphors were questionable, but not really much different from other physics metaphors. The absurdities were in statements like: “And as feminist thinkers have repeatedly pointed out, in the present culture this contamination is overwhelmingly capitalist, patriarchal and militaristic”. But this is not qualitatively much different from what a lot of academics do seriously say.

    • Well, I am too lazy to look it up—but I clearly recall that Sokal had a passage or footnote in which he stated that G. H. Hardy’s research into number theory, a topic that is vitally important in modern secure communications, was funded by the military. Of course, most or all of that research took place before WWII—computers and modern digital communications were yet to be invented. Moreover, the statement “Here’s to pure mathematics, may it never be of use to anyone” is often attributed to Hardy.

      That was a broad hint that he was pulling the reader’s leg.

      Bob76

      • Bob:

        This reminds me of when in a blog post, I wrote: “It’s a bit like those magazine articles that George Orwell wrote, back in 1946 or ’47 when he was cashing in on the success of Animal Farm, that series he did for Punch, on Scotland’s finest golf courses. I’m sure it sounded like a great idea at the time, to send an old-time socialist to write about this upper-class sport, but it just didn’t work.”

        I just thought this was a hilarious idea, but nobody picked up on it in comments, either by getting fooled (“Really? I had no idea that Orwell wrote about golf!”), catching the hoax (“You idiot! Orwell would never have written for Punch.”), or being amused (“Ha ha, nice one.”). I thought it was a clever idea. Orwell did live in Scotland near the end of his life, and he did a lot of journalism, so it’s just-almost-possible, while at the same time being absolutely ridiculous. But nobody noticed or cared! Such is the plight of the satirist.

      • Bob76, here is that obscure reference:

        [footnote 73] In the history of mathematics there has been a long-standing dialectic between the development of its “pure” and “applied” branches (Struik 1987). Of course, the “applications” traditionally privileged in this context have been those profitable to capitalists or useful to their military forces: for example, number theory has been developed largely for its applications in cryptography (Loxton 1990). See also Hardy (1967, 120-121, 131-132).
        https://physics.nyu.edu/sokal/transgress_v2/transgress_v2_singlefile.html

        The first part of that is completely true, and supported by the cite to Hardy. The last phrase is overstated, but not so ridiculous as to expose the hoax.

        It is about like saying that statistics has been developed largely for its applications to p-values. Yes, there are a few curmudgeon statisticians who refuse to use p-values, but to an outsider, what else is there?

        • Well, I don’t know the history of mathematics that well, but I’m pretty sure that number theory was not developed “largely for its applications in cryptography.”

          I doubt if number theory got any significant support for cryptographic applications prior to about 1950.

          Eratosthenes and Gauss did their work without any funding from NSF or DoD.

          That said, I would not be surprised if a significant fraction of the funding for number theory research today comes from DoD.

          Bob76

        • “That said, I would not be surprised if a significant fraction of the funding for number theory research today comes from DoD. ”

          It is certainly true that many (American) number theorists, employed by universities, receive support from the NSA (perhaps also the DoD; I’m just madly and over-confidently extrapolating from cases I know), often in the form of summer money (and/or sabbatical support at, or “at”, the NSA). It does not follow (and I believe it to be false) that most of the number-theoretical research which most of those NSA-supported mathematicians do (except, maybe, during such sabbaticals) is either directly useful, or very strongly believed to be indirectly useful, to the NSA (now or in the foreseeable future).

          (I am a topologist, not at all a number theorist, but 40-odd years ago I did contribute a useful algorithm to a joint paper with a friend who later got some NSA support.)

  7. The problem is that review procedures are neither consistent nor transparent. Peer review effectively gives every published paper equal weight because “passed peer review” is a binary classification. It would be ideal if every paper got published with a brief, paper-specific qualifier as to the particular strengths of the paper that warrant its publication, to serve as a basis for comparison. Just give the reviewers a rubric with characteristics of good papers: transparent, pre-registered, nonbinary outcomes (p-values), clear and plausible theory, replication, novel, impactful, etc. (or whatever criteria they want, so long as they disclose them). If you want to protect people’s feelings, you could even just list the greatest strengths of the paper without directly noting weaknesses.

    This would benefit readers by providing a basis for comparing apples to apples, and journals because, if defrauded, they can point out that “transparent” wasn’t one of the paper’s strengths. Authors would lose this blanket endorsement, but, c’mon, if anybody should support quantification of the review process, it should be quantitative researchers! Scientist, measure thyself.
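    Just to make the suggestion concrete, here is a purely illustrative sketch in Python of what such a paper-specific strengths qualifier might look like as structured data. The criteria list, paper ID, and reviewer notes below are all made up.

```python
# Purely illustrative: a paper-specific "strengths qualifier" attached to a
# published paper, along the lines proposed above. All names here are hypothetical.
from dataclasses import dataclass, field

CRITERIA = ["transparent", "pre-registered", "replication included",
            "clear and plausible theory", "novel", "impactful"]

@dataclass
class ReviewSummary:
    paper_id: str
    strengths: dict = field(default_factory=dict)   # criterion -> short reviewer note

    def badge(self) -> str:
        """One-line, paper-specific qualifier to publish alongside the paper."""
        met = [c for c in CRITERIA if c in self.strengths]
        return f"{self.paper_id}: accepted on the strength of " + ", ".join(met)

summary = ReviewSummary("paper-123",
                        {"novel": "new survey question", "impactful": "policy-relevant"})
print(summary.badge())
# "paper-123: accepted on the strength of novel, impactful"
# Note that "transparent" is not among the listed strengths, which is the
# defense the comment above says a defrauded journal could point to.
```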

    • Hm, what if we started publishing our articles in a Stack-Exchange-like format, each with ratings on several enumerated strengths, as above? Ratings start out at the max; then reviewers, certified in that subject, go in and make specific critiques, which results in the ratings going up or down. (Starting at the max would encourage experts to read and critique papers by unknown authors or on unorthodox subjects so they’d stop showing up at the top of rankings, whereas starting at 0 would probably mean they’d languish.)

      Or does this already exist?

    • We should accept that science doesn’t provide instant answers except rarely or in a trivial sense, and as far as published papers go, their value is largely defined with hindsight. So the sort of “value” that you are looking for is probably best defined by a paper’s impact – e.g. the number of times it’s been cited (papers, in my experience, get the attention in terms of citations and impact that they deserve) – although unfortunately one generally needs to wait some time to assess this. But that’s science for you, and it’s really how it should be!

      I don’t believe for a moment that “peer review gives every published paper equal weight” – it’s not obvious what that means. After all, in any issue of e.g. Nature (or the dreaded – apparently(!!) – PNAS!) there are papers on all sorts of subjects – what does “equal weight” even mean? In fact, as a first pass one can pretty easily triage published papers based on where they’re published – that vast underbelly of third-rate new pay-to-publish journals is eminently ignorable.

      Science, in terms of finding things out about our world, is generally a slowish process. Unfortunately the “now” in which we live – e.g. a period of maybe two to three years – is extremely fuzzy in terms of the reliability of apparent knowledge accrued (e.g. in published papers), but that’s where we expect the most from science to address problems nowadays, and this can become the habitat of charlatans, chancers, bloggers and self-promoters (e.g. those who insist that ivermectin or bleach are good for treating Covid, or that global warming can’t be true since the atmosphere isn’t a greenhouse and here’s a snowball I’ve brought into the assembly).

      We really need to chill and try to be clear about what we want science to do for us.

  8. The part of the paper I found most strange is the section on control groups:

    Our response rate was high. 1000 faculty (in political science, economics, and philosophy) and 1000 administrators were contacted. Of these, 832 faculty (83.2%) and 672 (67.2%) administrative staff fully participated. 1500 were selected for a treatment group (750 faculty and 750 administrators), while 500 were selected for control groups (250 faculty and administrators)

    What does “treatment group” mean in this context? Does this imply that they donated a few million dollars from left/right wing affiliated groups to universities, to test how much that affected perceived political pressure?

    A paragraph later, they describe the control group like this:

    Of these faculty and staff, 309 (188 faculty— 75%, 121 staff— 48%) formed a control group working at colleges and universities that are not known to receive donations from the donor groups we examined. Response rates were similar. We do not know what accounts for differential response rates from our targeted control group respondents. Using a control survey group allows us to see whether perceptions of undue influence are higher at places which in fact receive money; if similar results occurred among the control group participants, this would indicate perceptions of undue influence are more likely to be illusory.

    So the control group is formed of researchers working at universities which have not received donations from the donor groups. It seems like the author mixed up the idea of a “control group” with “controlling” for something in a model.

    PS: I found a copy of the paper. Let me know if you’d like a copy.

  9. I think this hoax is actually very useful. There are plenty of people in academia who do believe peer review ensures that methods and analyses have some basic level of merit. Of course there are plenty of examples that contradict that, but they are usually apparent self-hoaxes, and the authors will defend their analyses and conclusions.

    In this case we have a paper (which can still be downloaded as a pdf from the journal website) where the methods and results are full of statements that are weird, implausible, incomprehensible and/or impossible. The whole thing doesn’t make any sense, the authors themselves say so, and it passed peer review in a “good journal”. It’s a very clear example.

    • Hans:

      Sure, but then the hoax is not needed, as we have better examples that are non-hoaxes, such as the papers on Bible code, beauty and sex ratio, ovulation and voting, ESP, air rage, himmicanes, ages ending in 9, etc., all of which were published in respectable journals and didn’t even use fake data! Once you’re willing to use fake data, it’s just too easy, it’s more of a con (or attempted con) than a hoax.

      • I don’t know. At my institution at least I see serious questions about the validity of statistical analyses (e.g. in a doctoral candidate’s work) routinely dismissed if the work has passed peer review and been published in a “good journal”. Most people aren’t following the various methodological crises and the “statistics wars” closely.

        Of course it would normally be pretty easy to find something absurd in a “good journal”, but to make your point you would have to hold people’s attention long enough to explain why there are obvious mistakes that should have been caught in peer review. This is easier said than done.

        It’s much simpler if you have a paper where the authors themselves say that the paper is obviously nonsense and that the methods and results were deliberately written in such a way that the problems should have been caught.

        • > It’s much simpler if you have a paper where the authors themselves say that the paper is obviously nonsense and that the methods and results were deliberately written in such a way that the problems should have been caught.

          It is also much simpler for people to dismiss the case as relevant for their own work. It is simple to look at this and say, “well yeah, if you’re willing to do anything, you can get away with anything. But I wouldn’t do that because I don’t make up my data, I only use techniques that are well-established in the field, etc. etc.”

          If you have a case of poorly-done science that follows all the rules you yourself follow, that is a much more powerful tool for instigating some self-reflection.

        • I think it would have been a more powerful demonstration with real data. That would have kept the focus on the gobbledygook methods and results which (unlike outright data fraud) should have been flagged by the reviewers.

        • Hans:

          To say that gobbledygook methods and results “should” have been flagged by the reviewers. . . . I don’t know about that, considering that gobbledygook methods and results are pretty standard in the social sciences. Consider that the Proceedings of the National Academy of Sciences is edited by actual members of the National Academy of Sciences, and you’d think that they’d know what they’re doing when it comes to research—but many of them don’t! Some great work is published in peer-reviewed journals, but I think we have to recognize that lots of bad work will be published too. Not just mediocre work—I’ve published mediocre work too!—but flat-out bad work, ridiculous stuff, obviously wrong, etc. Unfortunately, really bad work is part of the mainstream of social science so there’s no way to stop it from getting into the journals.

        • You don’t need to convince me but, in my part of the world at least, the prevailing opinion among social scientists remains that peer review catches most serious statistical errors and abuses and that results published in reputable peer reviewed journals can safely be assumed to be sound.

          To be clear, people certainly have their own opinions about the interpretation of findings and may disagree with methodology but statistical analyses in the peer reviewed literature are generally assumed to be solid.

    • I think this argument is rather weak – as far as I can tell, literally none of the discussion of this paper has been about the methods described in the paper. If the failure here is primarily about methodological weaknesses, you’d think there would be more of a focus on the methodology, instead of general statements sourced directly from the authors.

  10. It wasn’t my purpose, in sending my query to Andrew about this latest hoax, to revisit the issue of my Social Text parody article, about which many people (including myself) are understandably bored. But since some of the commenters have brought it up, I would like to correct some misunderstandings about what I was, and was _not_, claiming. So let me cite a brief passage from my article “What the Social Text Affair Does and Does Not Prove”, published a year after the original parody:
    https://physics.nyu.edu/faculty/sokal/noretta.html

    I’d now like to consider what (if anything) the “Social Text affair” proves — and also what it does _not_ prove, because some of my over-enthusiastic supporters have claimed too much. In this analysis, it’s crucial to distinguish between what can be deduced from the _fact of publication_ and what can be deduced from the _content of the article_.

    From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of _one_ rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from a “conveniently credentialed ally” (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors’ ideological preconceptions, and attacked their “enemies”.[13]

    To which, one might justifiably respond: So what?[14]

    The answer comes from examining the _content_ of the parody. …

    And then I go on to address this latter issue.

    • Okay, you proved that the editors of an academic journal on culture, not science, failed to consult an expert on quantum gravity. But why would they? And what difference would it make?

      Quantum gravity is a failed research enterprise. Researchers publish all sorts of speculative theories that have no testable connection to the real world. It is all nonsense. You wrote a paper extending some metaphors from the nonsense of quantum gravity to the nonsense of postmodern science. Did you really think that the editors should have it checked for mathematical correctness? It is all just opinion that passes for scholarly work. An expert referee might well have said that it is an amusing opinion essay suitable for the journal.

    • Thanks for the link, indeed it clears up a lot of misrepresentations about you and your argument.

      The militaristic orientation of American science has quite simply no bearing whatsoever on the ontological question, and only under a wildly implausible scenario could it have any bearing on the epistemological question. (E.g. if the worldwide community of solid-state physicists, following what they believe to be the conventional standards of scientific evidence, were to hastily accept an erroneous theory of semiconductor behavior because of their enthusiasm for the breakthrough in military technology that this theory would make possible.)

      Unfortunately, I daresay this scenario is not as implausible as you presume–it happens all the time in fields adjacent to the frothy venture capital scene.

      It strikes me that your claims are a lot more nuanced and your methodology, insofar as you can call it that, much more sound than that of your subsequent imitators. You very easily published a word salad in a moderately respectable journal, and therefore claimed that much commentary on science is confused thought with no guidance from experts. The second imitators tried to publish word salad in journals, at great effort, over many attempts, and eventually settled for mostly unheard-of publications, then proclaimed boldly that the entire fields involved are bunk. The third imitators appear to have written a coherent paper and argument where the empirical data are made up. It’s a shame to see that a genuinely thought-provoking and harmless stunt has been so corrupted and weaponized as we recurse down the chain of simulacra.

      • Re: “You very easily published a word salad in a moderately respectable journal, and therefore claimed that much commentary on science is confused thought with no guidance from experts. ”

        The first half of this sentence is correct; but the “therefore” is absolutely *not* part of my claim. I thought I made that very clear, in my article “What the Social Text Affair Does and Does Not Prove”, by distinguishing between what can be deduced from the _fact of publication_ of my parody (namely, not much) and what can be deduced from _separate arguments_.

        In the remainder of that article I *do* argue that much (but by no means all or even most) commentary on science from the Social Studies of Science and Cultural Studies communities — including big names like Latour, Barnes, Bloor and Harding — is indeed confused thought. But I do that by detailed analysis and criticism of those writings. (Further details can be found in my book Fashionable Nonsense, co-authored with Jean Bricmont.) People who disagree with that analysis are of course most welcome to say so and state their reasons.

  11. The most interesting thing about this episode is that the journal appears to have completely removed the article. Even LaCour’s fake-data article is still available at Science.

    • The author has admitted that the article is fake, and includes deliberately falsified statements. That is a good reason for the journal to retract the article.
