Harvard dude calls us “online trolls”

Story here.

Background here (“How post-hoc power calculation is like a shit sandwich”) and here (“Post-Hoc Power PubPeer Dumpster Fire”).

OK, to be fair, “shit sandwich” could be considered kind of a trollish thing for me to have said. But the potty language in this context was not gratuitous; it furthered the larger point I was making. There’s a tradeoff: use clean language and that will help with certain readers; on the other hand, vivid language and good analogies make the writing more readable and can make the underlying argument more accessible. So no easy answers on that one.

In any case, the linked Pubpeer thread had no trolling at all, by me or anybody else.

16 thoughts on “Harvard dude calls us “online trolls””

  1. Even then, I wouldn’t have considered the title to be trolling or trollish. Trolling is usually posting inflammatory content for the sake of causing outrage. In this case, we’ve been incredibly respectful with the authors, have made arguments about why their arguments are flawed, and have put a ridiculous amount of effort into it. To call critics “trolls” is an easy way to dismiss arguments nowadays, and unfortunately most people (including the senior author) don’t seem to understand what the term means.

    Merriam Webster:

    a: to antagonize (others) online by deliberately posting inflammatory, irrelevant, or offensive comments or other disruptive content
    “… trolls engage in the most outrageous and offensive behaviors possible—all the better to troll you with.”
    — Whitney Phillips

    • Also, “online trolls” typically take advantage of the anonymity and distance provided by online forums to say things more outrageous than they would in person. In contrast, the authors’ critics are happily staking their scientific reputations on these critiques, which are no more outrageous than they would be willing to make face-to-face as a discussant or audience member for a conference panel on the subject. Given that epistolary criticism of published work is a centuries-old practice in the sciences, which Chang surely knows, perhaps he is trolling his critics?

  2. One underlying cause of this problem of misrepresenting statistical theory is the tendency of many people (including myself at one point) to try to guess what the theory might be. Do I need to log-transform my reading time data? Let me do some simulations on my very specific data-set with its unique peculiarities, and let me then publish a paper that makes very general “recommendations” that should be adopted by the rest of the world for every future data-set. Another underlying cause is the tendency of professors to fall into the delusional state that, just because they are experts in their particular area, they must be experts in every other area, including statistics. I rarely see any sense of humility, or the admission that they’re shooting in the dark when one could, in theory, turn the lights on.

    I have mentioned this before on this blog, but when I was a grad student in linguistics, a professor proudly told us that in our department we taught stats over four weeks, whereas that psych dept over there gives students years of training in statistics (implying that shorter is better).

  3. Sometimes one runs out of patience and calls a thing what it is, which is probably what one should do all the time. When this happens the scoundrels and charlatans defend themselves by focusing attention exclusively on the quality of the rhetoric used to criticize them. Of course one should always take the moral and strategic high ground and avoid this sort of inflammatory language that often only enables the miscreants, but one has limited patience with some things. When the cheaters are powerful people at powerful institutions and have ignored prior good faith, disinterested criticism of their lousy work, it is time to call them what they are and their work what it is.

  4. To be fair, in the lexicon of Internet commentary the original posting is closer to “shit stirring” than “trolling”.

    But inflammatory in any case. Having a “Harvard dude” respond by further fanning the flames is somewhat of a dog bites man story IMHO.

    • Andrew G’s post referring to post hoc power as a “shit sandwich” could plausibly be called trolling, or shit-stirring, or whatever term you choose for an exchange that does not meet the arbitrary politeness standards required to pass as proper academic conversation.

      But, the PubPeer comments contain no trolling, just explanations of why the work is flawed with links to some of the prior papers that have (pre-emptively! before this work was ever done!) explained why their recent string of papers about post-hoc power are nonsensical. Also, as Zad points out above, nearly all of the PubPeer comments are signed; people are putting their names to the criticism, not just hiding behind online anonymity.

      I find it a little humorous that the author has made no attempt to defend his work on mathematical or statistical grounds (because it’s not defensible), simply dismissing this as “difference of opinion” from “online trolls” that do not understand the “practical context” of the work.

      (Note: I worked as a statistician in surgical research for several years and I’m a Statistical Editor for a major surgical journal as well as a cardiology journal; I’m quite well versed in the “context” to which the author refers)
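      For readers who haven’t followed the prior posts, here is a minimal sketch (my own illustration of the standard argument, not code from the thread) of why “post-hoc power” is uninformative: when you plug the observed effect back in as the true effect, the resulting “power” is just a deterministic transformation of the p-value. The function names are mine.

      ```python
      # Sketch: "observed power" for a two-sided z-test, computed by treating
      # the observed z-statistic as if it were the true effect size.
      from math import erf, sqrt

      def phi(x):
          """Standard normal CDF."""
          return 0.5 * (1.0 + erf(x / sqrt(2.0)))

      def observed_power(z_obs, z_crit=1.959963984540054):
          """Post-hoc power: probability of rejecting at alpha = 0.05 (two-sided),
          assuming the true standardized effect equals the observed z_obs."""
          return (1.0 - phi(z_crit - z_obs)) + phi(-z_crit - z_obs)

      # A result sitting exactly at p = 0.05 (z = 1.96) always yields
      # "post-hoc power" of about 50%, no matter what the study was about,
      # so the calculation adds nothing beyond the p-value itself.
      print(round(observed_power(1.959963984540054), 3))  # ~0.5
      ```

      Because the mapping from p-value to observed power is one-to-one, reporting post-hoc power tells the reader nothing the p-value didn’t already say, which is the core of the PubPeer criticism.
      
      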

    • Brent:

      I don’t actually think the Harvard dude was reacting to my post at all. I think his reaction was to the Pubpeer thread and to the repeated suggestions (not made by me) that his paper be retracted from the journal. I don’t think the Pubpeer comments are trolling at all, and I don’t think the suggestions for retraction are trolling either.

  5. Of course, I think the topic of whether Andrew is a troll is important, but can we talk about the other thing this guy actually said? He is quoted as saying, “If we had completely fabricated our data, that would be the only justifiable reason for retracting the study.” Holy cow! So studies that are completely wrong, used the wrong methods, had major errors in the data, or had only partially fabricated data should all stay in the literature, continuing to mislead the public and the scientific community. What possible justification is there for that standard, other than wanting to protect your own career? Maybe somebody should troll this guy.

    • I think this might depend on the field. In medicine, I can see your point of view: outright mistakes and results that didn’t turn out to be replicable, or that turned out to be based on an erroneous analysis, might need to be retracted. Other areas, like basic research on cognition, could have lots of mistakes in them (a lot of the stuff in psycholinguistics, for example, is based on incorrect or p-hacked analyses or Type M error lucky shots that can never be replicated), but these studies can stay as part of the historical record without being retracted. Put differently, if we were to take this criterion seriously, every n-th study in psycholinguistics, where n is a low number, would probably end up retracted. What continues to annoy me about psycholinguistics is the steadfast refusal of many people to release data and code. With data available, the next researcher can show in a future publication what was wrong in the original analysis or data. If data are never released, we just have to trust that the authors got it right or didn’t do any hanky panky.

      • To be fair, I am not saying every mistaken study should be retracted. I am only criticizing the Harvard guy’s standard that only outright fraud merits retraction. A retraction is a journal’s way of saying we should never have published this study because it never met our standards for acceptance. If outright fraud is the standard for retraction, then the standard for acceptance should be the same: accept everything that is not fraud. And since fraud is nearly impossible to detect at review, the standard in practice becomes accept everything. That may not be a bad thing, but then you have eliminated the whole point of peer review.

        • Agreed, I find this belief a bit problematic: that fraud or misconduct is the only ground for retraction. I think you put it well here, Steve: “A retraction is a journal’s way of saying we should never have published this study because it never met our standards for acceptance.”

          That includes fraud or misconduct, but it’s certainly not limited to that. As you’ve alluded, under this belief, studies that have major errors should remain published unless it can be proven that their data was outright fabricated. An additional bit of humor: several of us that have attempted to contact the authors directly to ask for their data (in the hopes that we could use it to help explain why their proposal is nonsense) have been rebuffed by them telling us that there is no data or code, but that they would be happy to have a Skype call to explain what they did. So they didn’t even save the data or syntax they were using to generate these results!

          On the broader subject of retraction, I was peripherally involved in a very different retraction earlier this year, described here: https://www.ncbi.nlm.nih.gov/pubmed/30783662. Unlike the paper in question today, that was a relatively easy “open and shut” case – the authors had included studies in their meta-analysis which did not meet the definition they had laid out for inclusion (e.g. different patient populations or treatment than what they were supposedly meta-analyzing) and an astute reader, Ricky Turgeon, posted about this on Twitter.

          Getting a paper like the recent post-hoc power debacle retracted is much harder, because it looks (on the surface) like it’s a difference of opinion (it’s not, but to the untrained reader, it feels that way); then it requires the journal to admit that they made a mistake; then the journal has to decide the mistake is sufficiently egregious that it’s more problematic or embarrassing to leave the paper published than it is to retract it (don’t underestimate this – many journals would rather leave a bad paper published and wait for the furor from a few cranks on Twitter to die down than deal with the public embarrassment of a retraction); then they have to weigh potential legal liability, because apparently in the past some authors have sued journals over retractions (I was surprised to learn this); and once ALL of that is dealt with, maybe they’ll retract the paper.

          Given that the contents of this paper require at least a modest degree of technical knowledge, it’s easy to get bogged down in the weeds and create the appearance that there are two valid competing viewpoints here, and the journal is tempted to say “We’ll leave this published, but we invite you to submit and publish your concerns as a letter to the editor” – they prefer this because it keeps their hands cleaner, and they look like they’re embracing vigorous debate between scientists. Except, in this case, one side is espousing a nonsensical viewpoint that has already been thoroughly refuted, but the typical reader likely does not know enough to see that.

        • The vague criteria being mentioned around here show that what one needs is a more carefully crafted system for deciding when a study should be retracted. There are fairly clear guidelines on what counts as co-authorship. My students prepared one, based on official statements: here.

          The official guidelines are widely ignored at least in Germany, but at least they exist. There should be established criteria for what counts as enough problems to merit retraction. Maybe these do exist and I just don’t know about it.
