Hotel room aliases of the statisticians

Barry Petchesky writes:

Below you’ll find a room list found before Game 1 at the Four Seasons in Houston (right across from the arena), where the Thunder were staying for their first-round series against the Rockets. We didn’t run it then because we didn’t want Rockets fans pulling the fire alarm or making late-night calls to the rooms . . .

This is just great, and it makes me think we need the same thing at statistics conferences:

LAPLACE, P . . . Christian Robert
EINSTEIN, A . . . Brad Efron
CICCONE, M . . . Grace Wahba
SPRINGSTEEN, B . . . Brad Carlin
NICKS, S . . . Jennifer Hill
THATCHER, M . . . Deb Nolan
KEILLOR, G . . . Jim Berger
BARRIS, C . . . Rob Tibshirani
SOPRANO, T . . . that would be Don Rubin.

OK, you get the idea.

I’ll grab ULAM, S, if it hasn’t been taken already. Otherwise please assign me to BUNNY, B. And JORDAN, M can just use his own name; nobody would guess his real identity!


  1. Jordan Anaya says:

    Speaking of the Rockets, do you have a take on Ryan Anderson’s shooting splits? Because r/NBA has turned into PPNAS:

    • Andrew says:


      I followed your link, and all I can say is that the stuff there is better than some basketball analytics that we’ve seen in the New York Times!

      • Jordan Anaya says:

        Actually, the scientific process on r/NBA is far better than in academia. People quickly point out flaws in analyses. Look at this follow-up post:

        • Andrew says:


          The New York Times is not academia, but the news media have a similar problem of not being well set up to acknowledge mistakes; for some examples, search this blog for *David Brooks* or *Gregg Easterbrook*. News media not acknowledging error is a problem, whether these errors are honest mistakes, misrepresentations made in bad faith, or simple willingness to make strong claims without good evidence.

          • Jordan Anaya says:

            I was making more of a general statement. As you know, statistical errors can go undetected in academia for years, and even once they are found journals often will refuse to correct or acknowledge them. In contrast, on Reddit someone can immediately post a rebuttal to someone’s analysis, and then people can decide for themselves which analysis to believe.

            Somewhat related, I assume you read this recent article about Daryl Bem:

              • Anonymous says:

                It can be argued that (social) psychology in its entirety is one giant hoax:


                “Finally, a skeptic might counter that the JPSP authors could have conducted the studies, found results, dismissed inconsistent data, and then written the paper as if those were the results that they had anticipated all along. However, orchestrating such a large-scale hoax would require the coordination and involvement of thousands of researchers, reviewers, and editors. Researchers would have to selectively report those that “worked.” Reviewers and editors would have to selectively accept positive, confirmatory results and reject any norm-violating researchers that submitted negative results. The possibility that an entire field could be perpetrating such a scam is so counterintuitive that only a social psychologist could predict it if it were actually true.”

                I still wonder if Bem published his ESP paper as a way to show the flaws in the way psychologists perform their research. I wonder the same about his paper on how to write an empirical psychology paper:


                Either way, I am thankful for Bem.

              • Andrew says:


                We discussed this hypothesis, that Bem’s study was all a big hoax, a few years ago here, where I wrote:

                I don’t think Bem himself fully believes his ESP effects are real. Why do I say this? Because he seemed oddly content to publish results that were not quite conclusive. He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the experiments on a bunch more students, gathering enough data so that nobody could doubt his claims. But Bem didn’t do that. Instead, once he felt he’d reached the statistical significance plateau, he stopped and submitted to the journal. This behavior is consistent with the idea that he did not want to push his claims further, instead wanting to get into print before any new data could reveal problems with his study.

              • Corey says:

                Jordan, that paper was bounced from a journal founded in 2006 with an impact factor of zero, and was then submitted to and accepted by an open access journal started in 2015 (thus too new to have an impact factor) and published for the low low price of $625. This isn’t exactly the indictment of the entire field that Coyne would like it to be.

              • Andrew says:


                I think Corey is right. See here. I think the blogger you linked to got faked out by the professional-looking typesetting of this particular scam journal.

                To put it another way, yes there’s a hoax, but the people being hoaxed are not the editors of that fake-o journal from “Cogent OA”; rather, the hoaxees are the people who think this is a real journal that represents real social science.

              • ZC says:

                The garden of forking paths is pretty evident here! They got rejected from one low-ranked journal, so they went down the ladder until they got a hit.

              • Jordan Anaya says:

                There seems to be a lot of backlash to this hoax which I don’t really understand. The hoax seems to be compared unfavorably to the Sokal hoax, but the journal that got hoaxed in that case didn’t even have peer review, so I’m not sure how this one is worse.

                I also don’t agree that this is a predatory journal, per se. If we are going to call Taylor and Francis a legitimate publisher then why is one of their journals not legitimate? They literally advertise the journal on their front page:

                Now whether Taylor and Francis is a legitimate publisher is an entirely different question. I looked through their list of 226 biology journals and didn’t recognize a single one. Here’s an example: “Journal of Essential Oil Bearing Plants”

                Obviously I agree that one paper doesn’t show a problem with an entire field, or even with a single journal. Garbage sneaks into every single journal and yet we all still take them seriously. But I’m happy to include this hoax as another data point for how useless peer review is.

                I never understand why people complain that preprints will allow junk to get published, or worry about journalists thinking preprints are peer reviewed, when anyone willing to pay can literally get anything published, and peer reviewed.

              • Corey says:

                Jordan, the aspect of the hoax that is generating backlash is precisely the original authors’ claim (and the claim of the popularizers like Coyne) that the publication of the hoax paper does demonstrate problems with the entire field of gender studies. I’m sure you’re sympathetic to the backlash perspective since, as you say, one paper doesn’t show a problem with an entire field — even if that paper were not published in an open access journal that functions as a vanity press for papers that can’t even get published in zero impact factor journals.

                (For the record, Sokal’s submission to Social Text was subject to editorial review rather than peer review per se. According to those editors, Sokal strongly rejected their informal attempts to improve his submission — which doesn’t excuse them, but does suggest that the problem the Sokal hoax uncovered wasn’t merely that any sufficiently obscurantist pomo bafflegab could potentially be published.)

              • Andrew says:


                PPNAS describes itself as publishing “only the highest quality scientific research.”

                Cornell University is in the respected “Ivy League”: when two of its most famous professors publish bad work, it’s notable. Cornell has a reputation to protect.

                This “Cogent OA” journal is more of a scam. Sure, it’s a scam being run with the collaboration of respected publisher Taylor and Francis, and this episode is a blot on their reputation—but lots of respectable organizations are connected with scams around the edges. This journal makes Taylor and Francis look bad—and it should!—but I don’t think it makes social science look bad: social science is just an area rich enough to attract scammers.

                What the hoaxers illustrated with their hoax is that there are journals out there that will take your money and publish just about anything. This indeed is a scandal—as it involves the implicit collusion of hiring committees, promotion committees, funders, etc., to count such publications as real—but I don’t think it’s quite the scandal that the hoaxers were advertising.

              • Jordan Anaya says:

                Andrew, Corey:

                I’ve thought about this some more. I guess one thing about these hoaxes that is bothersome is that in science we have to trust each other. We can’t stand on the shoulders of giants if we don’t trust any of the work. We would have to spend all of our time checking that the last couple hundred years of research are actually correct before we embarked on advancing science. So when people abuse this trust by publishing hoaxes, I can see why some people get upset.

                With that said, if we are going to use pre-publication peer review as a stamp of approval that allows people to parade their research as if it is correct, is it so much to ask that peer review catch bogus articles? We all know pre-publication peer review is useless. I could have made up all the data in OncoLnc and the reviewers wouldn’t have noticed (but the users of OncoLnc would have noticed). Post-publication peer review will always be better than pre-publication peer review, and these hoaxes help to illustrate that, so I’m supportive of them.

                I’d like to say one more thing about post-publication peer review. The end user will always be the ultimate test for a product, whether that product is an iPhone, a video game, or a scientific article. Sure, iPhones undergo thorough testing, as do video games, but bugs are always found once they are released, and if the problems are large enough there is a lot of public outrage. Apple doesn’t have to wait for an anonymous, slow peer review to release a product. They release it whenever they feel it is ready, and if it isn’t, their reputation takes a hit.

                I think we should be using the same model in research. Articles theoretically undergo checks by all of the coauthors, and sometimes by other colleagues, before submission. If authors post research that hasn’t undergone sufficient checks, their reputation will take a hit, as we are seeing with Wansink right now. You can argue Wansink isn’t really a success story of post-publication peer review, since it took 20 years to find him, and we didn’t really find him; he outed himself in a blog post. But if people didn’t assume that peer-reviewed articles are flawless, then maybe someone would have noticed that the numbers don’t add up a lot sooner.

                Now you might say “Who has time for this?” At least for me, when my work relies on someone else’s work, the first thing I do is reproduce their key findings before utilizing their data set or extending their work. So people are naturally doing this, or at least should be. The problem gets back to people who are in the Goldilocks position, like Wansink: no one cares about the work enough to reproduce it or even check it, yet the media eats it up and the work influences public policy.

                So what’s the solution? I think that whenever people find work to be reproducible there should be a way to acknowledge this in some way, other than citing the paper. Similarly, if work is found to be wrong there should also be an easy way to express this. For work that doesn’t have any reanalyses, either positive or negative, then before that person is given grants, book deals, etc. for that work, there needs to be a review of the work by the granting institution.

              • Andrew says:


                I agree that pre-publication peer review should catch bogus articles. It’s just that this particular family of journals seems to have a different business model.

                I think of the standard business model of a scholarly journal as this: leverage personal and institutional connections and academic skills to attract high-quality submissions and publish high-quality work, thus establishing or sustaining a strong reputation; then use that reputation to continue publishing good work, supporting the journal through subscriptions, advertising, and donations, all of which flow from that reputation.

                But this family of journals seems to be coming from the opposite direction. They’re leveraging the good name of Taylor and Francis to attract submissions of irrelevant quality but whose authors are willing to pay for publication. They’re motivated to accept poor articles because they’re getting paid for each one. Sure, they’re spending down their reputation, but until that runs out, they’re making money. It’s a classic bust-out operation, as they say in Goodfellas.

              • Andrew says:


                I agree with Emily Willingham (the author of that Retraction Watch article) that “the hoax says more about the pitfalls of the publishing industry than the field of gender studies.” So the hoax did a valuable service, even if it wasn’t quite what the hoaxers were intending.

              • Cliff AB says:

                Are the Cogent X series of journals considered predatory?

                I recently submitted a paper to a journal, which got rejected. One of the reviewers wrote (paraphrasing): “We don’t believe this paper is appropriate for journal X. However, we strongly encourage resubmitting to Cogent Mathematics, as we believe this would be a great fit for this paper.” I never made contact with Cogent Mathematics, yet after my rejection from the other journal I started to receive lots of solicitations from it.

                The whole thing felt somewhat unethical. Blind review is not supposed to be an advertising + spam list opportunity.

            • Alex Gamma says:

              >>“Credit to Daryl Bem himself,” Leif Nelson told me. “He’s such a smart, interesting man. … In that paper, he actively encouraged replication in a way that no one ever does. He said, ‘This is an extraordinary claim, so we need to be open with our procedures.’ … It was a prompt for skepticism and action.”<<

  2. paul alper says:

    As long as we are in fantasy-land analysis today, this is from the Guardian:

    Trump diehards dismiss Russia scandal: ‘Show me the proof – or get off his case’

    “The latest controversy to hit the White House has rocked Washington – but Trump voters in this once-Democratic stronghold are standing by their man”

    and contains an illustration of how people reason in the U.S.:

    “Larry Hallett, Joan’s husband, used to teach prophecy [!] classes, in addition to many other pursuits. ‘I’d like to see it work out to where he [Trump] had some support from both sides, and everybody begin to come together,’ Hallett said. ‘If that happened, Biblically, that would hold back antichrist for awhile. It’s as simple as that.’”

  3. Cody L Custis says:

    What’s fascinating is the diversity of references. Without the list and context, there’s no reason I would expect C(rash) Bandicoot, L(ane) Kiffin, and (George) S (Patton) to have anything in common.
