When can we challenge authority with authority?

Michael Nelson writes:

I want to thank you for posting your last decade of publications in a single space and organized by topic. But I also wanted to share a critique of your argument style as exemplified in your Annals of Surgery correspondence [here and here]. While I think it’s important and valuable that you got the correct reasoning against post hoc power analysis on the record, I don’t think there was ever much of a chance that a correct argument was going to change the authors’ convictions significantly. Their belief was not a result of a logical mistake and so could not be undone by logic; they believed it because it was what they were originally taught and/or picked up from mentors and colleagues. I suggest that the most effective way to get scientists to change their practices, or at least to withdraw their faulty arguments, is to challenge authority with authority.

What if, after you present your rational argument, you then say something like: “I know this is what you were taught, as were many of my own very accomplished colleagues, but a lot of things are taught incorrectly in statistics (citations). However, without exception, every single one of the current, most-respected authorities in statistics and methodology (several recognizable names) agree that post hoc power analysis (or whatever) does not work, for precisely the reasons I have given. More importantly, their arguments and demonstrations to this effect have been published in the most authoritative statistical journals (citations) and have received no notable challenges from their fellow experts. Respectfully, if you are confident that your argument is indeed valid, then you have outwitted the best of my field. You are compelled by professional ethics to publicize your breakthrough proof in these same journals, at conferences of quantitative methodologists (e.g., SREE) and any other venue that may reach the top statistical minds in the social sciences. If correct, you will be well-rewarded: you’ll instantly become famous (at least among statisticians) for overturning points that have long been thought mathematically and empirically proven.” In short, put up or shut up.

My reply:

That’s an interesting idea. It won’t work in all cases, as often it’s a well-respected authority who is making the mistake: either the authority figure is making the error himself, or a high-status researcher is making the error based on respected literature. In those cases the appeal to authority won’t work, as these people are the authorities in their fields. Similarly, we can’t easily appeal to authority to talk people out of naive and way-wrong interpretations of significance tests and p-values, as these mistakes are all over the place in textbooks. But on the occasions where someone is coming out of the blue with a bad idea, yeah, then it could make sense to bring in consensus as one of our arguments.

Of course, in some way whenever I make an argument under my own name, I’m challenging authority with authority, in that I bring to the table credibility based on my successful research and textbooks. I don’t usually make this argument explicitly, as there are many sources of statistical authority (see section 26.2 of this paper), but I guess it’s always there in the background.

21 Comments

  1. Matt Skaggs says:

    “…their arguments and demonstrations to this effect have been published in the most authoritative statistical journals (citations) and have received no notable challenges from their fellow experts”

    I was with you right up to “no notable challenges.” Has this ever happened in statistics?

    In other contexts, it has been claimed on this blog that no statistical formulation can be recommended a priori for any field, because every study is unique. No one wants to make recommendations, but everyone wants to criticize.

    • Andrew says:

      Matt:

      1. I don’t think there have been any arguments from respected statisticians supporting post-hoc power analysis as criticized above. The error that is discussed in those above links is basically universally recognized as an error, and the only people who are promoting the idea are outsiders who don’t understand statistics, or who want statistics to give them an answer that statistics can’t give. I agree that there are many many controversial topics that have respected statisticians on both sides of an issue—but not this one!

      2. I’m not familiar with the claim that you refer to in your second paragraph. Every study is unique, but there are default methods that can work well for many problems—especially when the method includes some checks for where it’s not working. Also, I disagree with your statement, that “No one wants to make recommendations.” I make recommendations in my textbooks all the time, and other textbook writers do too. Indeed, statistics is sometimes called the science of defaults.

      • Rahul says:

        Isn’t your #1 a classic No True Scotsman argument?

        • Andrew says:

          Rahul:

          Not in this case.

          Let me give an even clearer example. Many practitioners believe that the p-value is the probability that the null hypothesis (of zero effect) is true. This is even stated in many statistics textbooks. Not the best textbooks, but many textbooks that look authoritative enough, are published by real publishers, and have authors with credible academic affiliations. Nonetheless, this statement is universally recognized as an error by anyone serious, and when it makes its way into textbooks it is as an oversight.
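          A small simulation can make this concrete. The following is an editor's sketch, not code from the thread; the 50/50 split between null and non-null hypotheses and the non-null effect size of 2.5 standard errors are illustrative assumptions.

```python
# Sketch: the p-value is not the probability that the null hypothesis is true.
# Illustrative assumptions: half of all tested hypotheses are truly null, and
# non-null effects are 2.5 standard errors in size.
import random
from statistics import NormalDist

rng = random.Random(1)
norm = NormalDist()

m = 200_000
null_count = 0      # truly-null results with p just under 0.05
total = 0           # all results with p just under 0.05

for _ in range(m):
    is_null = rng.random() < 0.5          # half the hypotheses are truly null
    effect = 0.0 if is_null else 2.5      # true effect on the z scale
    z_obs = rng.gauss(effect, 1.0)        # observed z statistic
    p = 2 * (1 - norm.cdf(abs(z_obs)))    # two-sided p-value
    if 0.04 < p < 0.05:
        total += 1
        null_count += is_null

frac_null = null_count / total
print(f"fraction truly null among p ~ 0.05: {frac_null:.2f}")  # far from 0.05
```

          Among results with p just under 0.05, the fraction of truly null hypotheses depends entirely on the assumed mix of hypotheses and effect sizes, which is exactly why the p-value cannot itself be that probability.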

          That’s what the post-hoc power thing is like. When writing my paper with Carlin that was eventually published in 2014, I spent some time reading the statistics literature on post-hoc power analysis, and all I could see were papers by statisticians saying not to do it. It’s understood among statisticians to be a mistake, or at least understood as a mistake by any statisticians who write papers on the topic.
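          The noisiness of post hoc power is easy to demonstrate by simulation. This is an editor's sketch, not the Gelman and Carlin analysis; the true effect size, per-group sample size, and two-sided z-test are illustrative assumptions. Each replication plugs its own noisy estimate into the power formula, even though the true power never changes.

```python
# Sketch: "post hoc power" computed from the observed estimate is a very
# noisy transform of that estimate. Illustrative assumptions: a two-sided
# z-test comparing two group means, true standardized effect 0.4, n = 50.
import math
import random
from statistics import NormalDist

rng = random.Random(0)
norm = NormalDist()

true_effect = 0.4                 # assumed true standardized mean difference
n = 50                            # assumed per-group sample size
se = math.sqrt(2 / n)             # standard error of the difference in means
z_crit = norm.inv_cdf(0.975)      # two-sided critical value at alpha = 0.05

def power(effect):
    """Power of the two-sided z-test if the true effect is `effect`."""
    return (1 - norm.cdf(z_crit - effect / se)) + norm.cdf(-z_crit - effect / se)

true_power = power(true_effect)

# Replicate the experiment many times; each time, plug the noisy observed
# estimate (rather than the unknown true effect) into the power formula.
posthoc = sorted(power(rng.gauss(true_effect, se)) for _ in range(10_000))
p10, p90 = posthoc[1000], posthoc[9000]

print(f"true power: {true_power:.2f}")
print(f"post hoc power, 10th-90th percentile: [{p10:.2f}, {p90:.2f}]")
```

          Even though every replication has the same true power (about 0.5 under these assumptions), the plug-in estimate ranges over most of the unit interval, which is why it tells us almost nothing.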

          • name withheld by request says:

            I’ve never seen post-hoc power analysis recommended by anyone I know to be a professional statistician. But I’ve occasionally had to choose between either doing it or fighting hard to justify NOT doing it to (anonymous) reviewers of papers on which I’m a coauthor. My colleagues and I almost always choose the “fighting hard” option, but I reckon many people who know the procedure to be garbage just do it if it’ll make the reviewer happy and result in a decision to publish the paper.

  2. Bob76 says:

    Often this approach probably won’t work—as you recognize. But I fail to see how it is likely to hurt.

    Eight hundred years ago St. Thomas Aquinas wrote that “argument from authority is the weakest form of argument.” Weakest is not the same as useless.

    Einstein supposedly said, after being told about the book “One Hundred Authors against Einstein”—“Why one hundred? If I’m wrong one is enough.” https://en.wikipedia.org/wiki/Criticism_of_the_theory_of_relativity#A_Hundred_Authors_Against_Einstein

    But in the case of relativity it might be easy to show that it is wrong. If relativity predicted that the sun would go dark every Tuesday, it would be easy to refute.

    It seems to me that one problem with these methodological errors is that pointing out the error does not directly contradict the claim. Failed replications (if done to exactly duplicate every bit of the original work) would contradict the claim.

    Methodological criticisms are criticisms of inputs and thus only indirect criticisms of the outputs. I think that level of indirectness makes it harder for a researcher to accept the criticism. If one has been trained to think that the methods one learned decades ago are correct and lived those decades in a community that also believed that they were correct, one might regard criticisms of studies based on flaws in their methods to be “methodological terrorism.”

    Bob76

  3. Peter Dorman says:

    This is an interesting topic for me as an economist. I regard mainstream economics as often wrong on important questions, both foundational and applied. This means I’m butting up against authority. On the other hand, crazy ideas about economics (especially about money and the Fed) are widespread among the lay masses, and it’s helpful to be able to appeal to authority to counter them.

    The approach I take is that going up against authority imposes a burden, like Michael Nelson says in the OP. If the big names all say X and you say Y, you have a responsibility to explain why you think they’re wrong. The default assumption should be that the bigwigs know what they’re talking about. But once you get into the substance of the arguments, authority drops out.

    • Anoneuoid says:

      If the big names all say X and you say Y, you have a responsibility to explain why you think they’re wrong.

      Something similar is going on regarding vitamins C/D and covid. For 100 years it has been accepted that vitamin deficiencies are bad and should be treated with the corresponding vitamin. It has also been reported over and over that most severe covid patients are deficient in those vitamins. So you would think the default position would be to correct the vitamin deficiencies.

      But instead of treating vitamin deficiencies with vitamins, as was always accepted without controversy by everyone before, the authorities are now resisting this safe and cheap intervention. Instead they say more evidence is needed… because covid… and run studies that are doomed from the start to yield equivocal results (small sample size, do not make sure a deficiency is actually being corrected, etc).

  4. John Richters says:

    Andrew:

    I disagree with your reply to Nelson’s advice and think his strategy is brilliant for 2 unrelated reasons.

    1: First, Nelson’s recommendation is not, as he characterizes it and your response interprets it, “to challenge authority with authority”. He’s really recommending that you challenge authority with rhetorical flair. The distinction is an important one and stood out in sharp relief to me as I read your 2 Annals of Surgery pieces through the eyes of a psychologist (which I am), rather than those of a statistician (which I decidedly am not). The April paper spells out the reasoning behind your complaint that post hoc power analysis is a “bad idea” and a problem “well known in the statistical and medical literatures.” And then, after registering your agreement “with their goals and general recommendations”, you return to the post hoc power analysis strategy, this time characterizing it as “just a problem”. In your September response to their reply you drive home your argument a bit more forcefully by pointing out that their strategy will give “inaccurate answers” because their method “has poor frequency properties”, resulting in “a very noisy estimate of the power” that “tells us almost nothing” and “is an invitation to overconfidence.”

    Fellow statisticians reading these papers will recognize the admirable and distinctly Gelmanian traits of intellectual modesty, conciliatory tone, and understatement. Many statisticians, in fact, will find your arguments all the more compelling because of these traits. To the untutored intellects of non-statistician readers like me, though, modesty and understatement are more likely to be misinterpreted as evidence that your complaint amounts to union shop talk, statistical nitpicking over issues that are beyond the grasp of non-statistician mortals, insider issues over which reasonable people disagree, rather than a discredited strategy, the deficiencies of which are well documented and uncontested in the statistical literature. If, as Nelson’s recommendation suggests, there is a near-universal consensus among statisticians against deployment of the post hoc power analysis strategy, what better way to drive home this point than by adopting his rhetorical strategy? It would offer the combined advantages of (1) disabusing readers of any illusion that the jury is still out on the viability of this strategy, (2) shifting the burden of proof back to the shoulders of advocates—where it belongs, and (3) throwing down the gauntlet in a public way that would be difficult to ignore.

    One caveat: The “I know this is what you were taught” part of Nelson’s otherwise brilliant strategy is unnecessarily personalized and in-your-face for my taste, would likely detract from its rhetorical power, and can easily be fixed by reframing it along the lines of “this is one of the many things scientists are taught ….”

    2: The second argument for pursuing Nelson’s strategy was made by Francis Bacon in his Novum Organum:

    “It would be an unsound fancy and self-contradictory to expect that things which have never yet been done can be done except by means which have never yet been tried.”

    • Martha (Smith) says:

      John Richters said,
      “To the untutored intellects of non-statistician readers like me, though, modesty and understatement are more likely to be misinterpreted as evidence that your complaint amounts to union shop talk, statistical nitpicking over issues that are beyond the grasp of non-statistician mortals, insider issues over which reasonable people disagree, rather than a discredited strategy, the deficiencies of which are well documented and uncontested in the statistical literature.”

      To me, this brings up the broader issue of “intellectual honesty”, something that I consider important in doing sound science (or social science). Admittedly, it is difficult to describe just what I mean by “intellectual honesty,” but to make a stab at a brief description: It includes the following (and probably more):

      - Being aware that we can often fool ourselves into believing something, by not paying careful attention to whether or not the reasoning and evidence we use to back up our conclusions is really sound.
      - Trying to give honest critiques of our reasoning or evidence, and seriously considering the critiques of others.
      - Accepting uncertainty as an inherent part of the real world.

      • Rahul says:

        Good point. But we won’t get intellectual honesty till we incentivize it.

        Right now the system is biased strongly in favor of grants and citations so no wonder intellectual honesty goes for a toss.

        When was the last time you heard a grant or selection committee trying to discuss the relative intellectual honesty of candidates?

  5. This is also an interesting topic for me b/c I am not an authority on anything. LOL

  6. As much as I hate appeals to authority, I like this idea a lot. Sort of like, “Here, let’s pause this discussion for a moment while you imagine having to go head-to-head with some big shots on why you’re right … Ok, now that your confidence is recalibrated, perhaps we can talk.”

    • Rahul says:

      So I feel appeals to authority are inevitable in any scaled up system that functions. It’s a feature not a bug.

      A hierarchical system of imputed credibility is indispensable, since judging each conclusion purely on its merits is impossible. A priori, ground-up reasoning every time would be impossible.

      I think people love to criticize appeals to authority but don’t really ever present a scalable alternative.

      • Dale Lehman says:

        Just like democracy – a terrible system until you consider the alternatives. However, once we accept that a hierarchical system is necessary, there needs to be a corresponding sense of responsibility among those at the top. And we know that power corrupts. So, we need systems in place that provide incentives for responsible behavior by those in authority. The only system that does not work is one that permits those in authority to define for themselves what responsibilities they bear. When those in authority exhibit elitism, they are holding themselves to be the judge and jury of truth. What they should exhibit is humility and recognition that they need to earn their position of authority. Too rare, in my mind.

    • I can’t imagine experts arguing against their gatekeeper roles. And if anything, they will protect their turf at all costs. Nevertheless, this gatekeeper role has been under scrutiny for the last 20 years at least. I think of Philip Tetlock’s Good Judgment Project, supported by Jason Matheny [former Director of the Intelligence Advanced Research Projects Activity (IARPA)].

      It was discovered that novices/hobbyists/non-experts could make more accurate forecasts and analyze information better than seasoned experts.

      That was tested through much of the last 12 years. Irving Janis, author of Groupthink, was Philip Tetlock’s thesis advisor, which surprised me. Groupthink has had many great insights into the sociology of expertise and the obstacles faced by someone who dissents from the majority view. I had the great opportunity to hear Janis speak when I was a teen, at Yale. He mentioned in his keynote that magnanimity is a valuable trait for decision-making. Counterintuitive. I do think that makes a lot of sense b/c your mind is freer and more fluid then. This is just a hypothesis.

      Magnanimity is not pervasive, obviously. We see this in relations among academics, surely. This makes challenges to any discipline or endeavor a very unpleasant dynamic.

  7. Bill Harris says:

    When you say it like that, Jessica, it sounds reminiscent of Geoffrey Moore’s /Crossing the Chasm/ on how you sell high-tech products. I forget the specifics, but, in general, early adopters are more likely to buy on the concept, while laggards are more likely to buy when they look around and see everyone using the new product. Substitute idea for product, and I think you have Michael’s idea.

    • Dale Lehman says:

      I don’t think that is a good comparison. It paints the authority figure as the innovator, with the rest of us as imitators. More often, the authorities have incentives to protect the status quo, thereby holding off innovation. Rather than crossing the chasm, they are more likely to pull up the bridges.

  8. David J. Littleboy says:

    “I suggest that the most effective way to get scientists to change their practices, or at least to withdraw their faulty arguments, is to challenge authority with authority.”

    I tried that. It didn’t go well.

    Some number of years ago, there was a Canadian health blog run by a couple of grad students. It was pretty good, I learned some stuff there. They put up a post based on some of the Wansink work that had been excoriated here, so I, being a nice guy, tried to warn them that there was a problem, pointing them here. I got flamed something fierce for attacking a “widely respected researcher”.

    Like they say, no good deed goes unpunished.

    Or of more relevance to this post, you can lead a horse to water, but you better make darn sure you’ve prepared it to be interested in what you have to show it before you let on what the bad news is that you are trying to tell it.

