On the term “self-appointed” . . .

I was reflecting on what bugs me so much about people using the term “self-appointed” (for example, when disparaging “self-appointed data police” or “self-appointed chess historians”).

The obvious question when someone talks about “self-appointed” whatever is, Who self-appointed you to decide who is illegitimately self-appointed?

But my larger concern is with the idea that being a self-appointed whatever is a bad thing. Consider the alternative, which is to be appointed by some king or queen or governmental body or whatever. That wouldn’t do much to foster a culture of openness, would it? First, the kind of people who are appointed would be those who don’t offend the king/queen/government/etc, or else they’d need to hide their true colors until getting that appointment. Second, by restricting yourself to criticism coming from people with official appointments, you’re shutting out the vast majority of potential sources of valuable criticism.

Let’s consider the two examples above.

1. “Self-appointed data police.” To paraphrase Thomas Basboll, there are no data police. In any case, data should be available to all (except in cases of trade secrets, national security, confidentiality, etc.), and anyone should be able to “appoint themselves” the right to criticize data analyses.

2. “Self-appointed chess historians.” This one’s even funnier in that I don’t think there are any official chess historians. Here’s a list, but it includes one of the people criticized in the above quote as being “self-appointed” so that won’t really work.

So, next time you hear someone complain about “self-appointed” bla bla, consider the alternative . . . Should criticism only be allowed from those who have been officially appointed? That’s a recipe for disaster.

And, regarding questions about the personal motivations of critics (calling them “terrorists,” etc.), recall the Javert paradox.

32 thoughts on “On the term “self-appointed” . . .”

  1. I see the opposite of “self-appointed” as “elected” or “recognized by consensus”, not “appointed by king”. So a self-appointed expert is a nobody who decides they are the king and appoints themselves, riding roughshod over the conventions and knowledge of the field.

    • Consensus may elect, but it does not imbue the positions of the elected with accuracy no matter how many people agree with them. Only critical reasoning and logical argument can accomplish that. I have always been surprised by those who claim to think critically and to support scientific thinking who then dismiss contrary arguments with an appeal to authority.

  2. Yeah, it is a weird one, since you can use it in pretty much any disagreement on either side. “I don’t like these eggs.” “I do.” “Oh, the self-appointed egg expert thinks these eggs are good.”

    And weird how you never hear anyone say “I agree with all the points that person made, even though they’re clearly a self-appointed expert on the topic.”

  3. I am working together with others to get a fraudulent study on the breeding biology of the Basra Reed Warbler retracted from the journal Zoology in the Middle East (MEZ); background at https://osf.io/5pnk7/

    This post by Andrew reminded me of statements in a “Positional Statement” from Alan Lee, the Editor-in-Chief of Ostrich (https://www.tandfonline.com/loi/tost20), dated 31 May 2019.

    This “Positional Statement” runs to 11 pages. It was received some time after the manuscript TOST-2019-0026 was rejected by Ostrich. That manuscript is mainly based on the findings of two reports, which are available at https://osf.io/j69ue/ and at https://osf.io/ajsvw/

    Alan Lee wrote in this “Positional Statement”:

    “It should be noted that at the time of writing of this statement, the MEZ article has yet to be formally judged as fraudulent by any publishing committee and as such should be interpreted as ‘allegedly’ fraudulent”.

    In this “Positional Statement”, Alan Lee does not define a “publishing committee”, nor does he explain why only a “publishing committee” can draw the conclusion that Al-Sheikhly et al. (2013, 2015) contain fabricated and/or falsified data.

    Alan Lee also wrote in this “Positional Statement” that he had no opinion about any of the findings in the reports at https://osf.io/j69ue/ and at https://osf.io/ajsvw/:

    (1) because he had never visited Iraq or Iran;
    (2) because he had, to the best of his knowledge, never observed a Basra Reed Warbler;
    (3) because he did not know anyone connected to the journal Zoology in the Middle East (“MEZ”);
    (4) because he did not know anyone associated with the article Al-Sheikhly et al. (2013).

    Do others here have more or less similar experiences when communicating with editors about this kind of manuscript?

  4. I don’t know about historians, but scientists have a pretty broad consensus that each of us has a responsibility to critically and publicly consider research we are qualified to evaluate. That’s the step in the scientific method right after “publish.” Some seem to believe that there’s a right way to do this, or at least a respectful way–private communications with the authors, letters to journals, competing studies, etc., as opposed to blogs and social media. There’s no consensus on that, though, or if there is, nobody told all the journals that refuse to publish critical letters or contrary studies, and nobody told all the authors who won’t concede clear errors or who insist on adding corrections that passive-aggressively dismiss an error when a retraction is warranted. Real science is falsifiable, and real scientists falsify. May as well say, “Who appointed you to be a scientist?”

    • It’s as if the “qualified to evaluate” criterion does not actually produce reviewers with the critical thinking skills and scientific thinking necessary to accomplish the task.

      • I’m willing to define this term broadly–if you took a stats class, you have some level of qualification for evaluating use of statistics, for example. The key is the public exchange, as those with less-informed opinions can be informed by those with more-informed opinions. The review process is not public, traditionally, so it’s set up to come to idiosyncratic conclusions. Published authors can then say, “I trust the peer review process, so if it got through, it must be good.” The open-review/pre-print approach may be supplanting that, however.

        • I wrote a dissertation and made sure it was very high quality because I knew I was getting out of academia after that. It was a lot of well-meaning people who had no idea what they were doing, and it was just awful to witness.

          I could link to that and you would see that despite the original project being designed around NHST (I didn’t know any better), I did everything I recommend (no NHST, explore alternative explanations, detailed methods section, come up with theoretical models derived from some premises that make numerical predictions, etc.). I wouldn’t really call it complete though. I could have done more if I had more time, but I had already spent enough time attempting to salvage it halfway through. I needed to teach myself to code, relearn calculus, etc. in my spare time to even accomplish what I did in the face of people who in the end just asked “so is it significant?”

          However, I prefer to remain anon (I’m sure someone dedicated enough could figure it out though, but it isn’t like you would have heard of me or anything like that).

        • And yes, I was told by the people doing NHST on this topic that coming up with a mathematical model, etc. was impossible because the topic was “so complex”. That was BS. In fact I found in one case it had already been done in the 1930s, in another in the 1910s. It is 100% mistraining and/or laziness.

        • Can you link to an example of this producing a useful finding? I don’t think it’s impossible, it’s just not a useful approach in 99% of social science. Gelman himself has said there are no good examples of structural estimation (i.e. modelling and estimating a structural dgp) in social science.

          I’ll wait.

        • And to do that you come up with models that make surprising predictions and test them. Ideally this is done by deriving the prediction from a small set of basic premises, but you can brute-force it using techniques like linear regression (or its more advanced cousins, like those found in machine learning) if needed.

          It isn’t hard to come up with models that explain things post hoc.

        • I think I’m hung up on the phrasing “surprising predictions.” If someone has devised a theory from which a specific prediction logically follows — why would it be surprising?

        • That is more of a cargo cult phenomenon. It is people trying to mimic the “make surprising predictions” aspect of science but not really “getting it”.

        • I think the idea is that the prediction would be surprising if the theory weren’t true.

          If you have a theory such that MyTheory Implies ThingHappens and NotMyTheory Implies ProbablyNot ThingHappens, then when you find out whether ThingHappens or not, you find out whether MyTheory is at least approximately true or not.

          On the other hand, if you have MyTheory Implies ThingHappens, and SomeOtherTheory implies ThingHappens, and Granny’s Intuition Implies ThingHappens and DefinitelyNotMyTheory implies ThingHappens…. then finding out that ThingHappens isn’t very informative… on the other hand finding out NotThingHappens might be quite informative…
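
          A minimal numeric sketch of that point, in Python (hypothetical numbers, chosen only to illustrate the logic):

          # Case 1: MyTheory says ThingHappens is very likely; NotMyTheory says probably not.
          prior = 0.5                      # start agnostic about MyTheory
          p_thing_if_mytheory = 0.95       # p(ThingHappens | MyTheory)
          p_thing_if_not_mytheory = 0.10   # p(ThingHappens | NotMyTheory)
          post = prior * p_thing_if_mytheory / (
              prior * p_thing_if_mytheory + (1 - prior) * p_thing_if_not_mytheory)
          print(round(post, 2))  # ~0.9: observing ThingHappens is quite informative

          # Case 2: every rival theory also predicts ThingHappens.
          p_thing_if_not_mytheory = 0.95
          post = prior * p_thing_if_mytheory / (
              prior * p_thing_if_mytheory + (1 - prior) * p_thing_if_not_mytheory)
          print(round(post, 2))  # 0.5: observing ThingHappens tells you essentially nothing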

        • I usually get annoyed at the generic vagueness of “If we want result A then procedure B and variation C should do the trick.” Now I see it could be worse. Also I almost missed the point for thinking of Dr. Seuss.

        • “I usually get annoyed at the generic vagueness of ‘If we want result A then procedure B and variation C should do the trick.’ Now I see it could be worse. Also I almost missed the point for thinking of Dr. Seuss.”

          The only place I’ve seen something like that is some Java code. Java devs also use very tiny font, I guess so it can all fit on the screen.

        • It comes from Bayes rule. If we measure something and check how much it supports H[0], we also need to take into account how well other explanations work (H[1], H[2], …, H[n]):

          p(H[0]|D) = p(H[0])p(D|H[0])/sum( p(H[0:n])p(D|H[0:n]) )

          I don’t think he ever mentions Bayes rule, but that was pretty much the crux of Imre Lakatos’s philosophy of science:

          To sum up: [The hallmark of empirical progress is not trivial verifications: Popper is right that there are millions of them. It is no success for Newtonian theory that stones, when dropped, fall towards the earth, no matter how often this is repeated. But, ] so-called ‘refutations’ are not the hallmark of empirical failure, as Popper has preached, since all programmes grow in a permanent ocean of anomalies. What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.

          http://www.lse.ac.uk/philosophy/science-and-pseudoscience-overview-and-transcript/

          Also see: https://en.wikipedia.org/wiki/Experimentum_crucis

          Basically, it amounts to requiring that you at least predict something precise instead of vague. E.g., “the correlation/effect will be positive (negative)” is not a helpful prediction.
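
          A small Python sketch of that normalization (all numbers invented, just to illustrate the sum above): a hypothesis that makes a precise prediction concentrates its likelihood, so when the precise prediction comes true it gains more posterior weight than rivals that only predicted “something positive.”

          # Hypothetical example of the sum in Bayes rule above (numbers invented).
          # Suppose the observed effect is d = 0.48. H[0] predicted d in a narrow band
          # around 0.5; H[1] and H[2] only predicted "d > 0".
          priors = [1/3, 1/3, 1/3]        # p(H[i]), equal a priori
          likelihoods = [2.0, 0.5, 0.5]   # p(D|H[i]): density of d = 0.48 under each hypothesis
          # p(H[i]|D) = p(H[i]) p(D|H[i]) / sum_j p(H[j]) p(D|H[j])
          evidence = sum(p * l for p, l in zip(priors, likelihoods))
          posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
          print([round(x, 2) for x in posteriors])  # [0.67, 0.17, 0.17]: the precise prediction wins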

        • The same idea is also behind Meehl’s concept of a “spielraum”:

          In social science, everything is somewhat correlated with everything (“crud factor”), so whether H0 is refuted depends solely on statistical power. In psychology, the directional counternull of interest, H*, is not equivalent to the substantive theory T, there being many plausible alternative explanations of a mere directional trend (weak use of significance tests). Testing against a predicted point value (the strong use of significance tests) can discorroborate T by refuting H*. If used thus to abandon T forthwith, it is too strong, not allowing for theoretical verisimilitude as distinguished from truth. Defense and amendment of an apparently falsified T are appropriate strategies only when T has accumulated a good track record (“money in the bank”) by making successful or near-miss predictions of low prior probability (Salmon’s “damn strange coincidences”). Two rough indexes are proposed for numerifying the track record, by considering jointly how intolerant (risky) and how close (accurate) are its predictions.

          Paul Meehl (1990). Appraising and Amending Theories: The Strategy of Lakatosian Defense and Two Principles That Warrant It. Psychological Inquiry, 1(2), 108–141. http://meehl.umn.edu/sites/meehl.dl.umn.edu/files/147appraisingamending.pdf
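
          A rough Python sketch in the same spirit as the two indexes Meehl describes (his exact formulas are in the linked paper; the quantities and numbers below are assumptions for illustration): a prediction scores as riskier the smaller the slice of the Spielraum it tolerates, and as more accurate the closer the observed value lands to the predicted one.

          # Illustrative Meehl-style "risky and accurate" indexes (numbers invented;
          # see the linked Meehl 1990 paper for his exact definitions).
          spielraum = 100.0   # range of outcomes considered possible a priori
          tolerance = 10.0    # width of the interval the theory tolerates around its prediction
          deviation = 3.0     # distance of the observed value from the predicted value

          intolerance = 1 - tolerance / spielraum   # "risky": the theory tolerates only a narrow slice
          closeness = 1 - deviation / spielraum     # "accurate": the observation landed near the prediction
          corroboration = intolerance * closeness   # high only when the prediction was risky AND close
          print(round(intolerance, 2), round(closeness, 2), round(corroboration, 2))  # 0.9 0.97 0.87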

      • I meant something like, a minimum condition that must be satisfied before a study, result, theory or model can be considered authentically scientific is that we can describe plausible conditions under which we would reject it. Some researchers seem to believe that p<.05 (however obtained) + publication in a peer-reviewed journal means that their work can no longer be questioned, much less still be found to be in error. If that's the case, the researchers' findings are no longer scientific, they're now a metaphysical claim. Yet those researchers want to have their cake and eat it, too: they still want their work to have the weight and influence of scientific findings, despite the fact that we only give such weight to scientific findings because they emerge from and remain in a universe of public dispute, the very dispute to which the researchers believe their work should be immune.

        • ” . . . researchers want to have their cake and eat it, too . . . ”

          Most of the time when I have cake, I do want to also eat it at some point. But I still remember what happened the last time my partner caught me with a cheesecake in the midst of the nightly dark… songs about that are still being sung almost on a daily basis!

          Sorry for this interruption in your regularly scheduled programming.

  5. “because they emerge from and remain in a universe of public dispute,”

    Ooo! I like that! Every author should be forced to state, on the last line of every paper, and chant for two minutes in an associated podcast:

    “we recognize that the work and findings reported herein are a contribution to, and shall perpetually remain in, a universe of public dispute”

    Nice, Michael Nelson!

  6. A possible devil’s-advocate version of the argument would be that there’s a sort of chaotic war-of-all-against-all without any “police” having any actual responsibility. If you compare them to the literal police, they self-select for the job but are also under the authority of people ultimately responding to elected officials. Of course, some people regard those literal police as inferior to a system relying less on the Weberian state’s monopoly on force, with Robin Hanson’s proposal to rely more on bounty-hunters explicitly pitched at undermining the “blue wall of silence”. But even under his system, there would still be government courts adjudicating claims brought forth by those bounty-hunters.
