To better enable others to avoid being misled when trying to learn from observations, I promise not to be transparent, open, sincere nor honest?

This post is by Keith O’Rourke and, as with all posts and comments on this blog, is just a deliberation on dealing with uncertainties in scientific inquiry and should not be attributed to any entity other than the author. As with any critically thinking inquirer, the views behind these deliberations are always subject to rethinking and revision at any time.

I recently read a paper by Stephen John with the title “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty”. On a superficial level, John’s paper can be re-stated as: honesty (transparency, openness and sincerity) is not always the best policy. For instance, “publicising the inner workings of sausage factories does not necessarily promote sausage sales”. The paper got me thinking about how, as statistical consultants or collaborators, we need to discern how others think if we hope to enable them to actually do better science – at least when the science is challenging. I used to think about that a lot more early in my statistical career, sometimes referring to it as exploratory cognitive analysis.

Now I was fortunate at the time to be with a group where mentoring and enabling better research was job one. Most of those I worked with were clinical research fellows and their clinical mentors. They had fairly secure funding, so they were not too worried about getting grants. Usually we would work together for a few years, and publications did seem to be mostly about sharing knowledge rather than building careers. For instance, if something went wrong in a study, a paper often was written along the lines of identifying what went wrong and how others could avoid the same problems. Perhaps in such a context, learning how they thought – so I could better engage with them – made more sense. This may be rare in academia. Additionally, for much of the research, available or well recognised statistical methods were lacking at the time. Or maybe I just dropped the ball after leaving?

Before I left that group, I did provide some training on exploratory cognitive analysis to my replacement. It involved debriefing with them after meetings with clinical researchers: discussing what the clinical researcher seemed to grasp, what they would likely accept as suggestions, what they did not seem to grasp, and how to further engage with them. In some of the meetings, I arranged to purposely show up late or leave early so my replacement got a few solo runs at this. I was also reminded of this activity by a JSM presentation by Christine H. Fox, The Role of Analysis in Supporting Strategic Decisions. Afterwards I mentioned this to my replacement, and they said it had worked well for them. However, there does not seem to be much written on the need to learn how others think when engaging in statistical consultation or collaboration. Or on how to avoid providing “insultation” instead of consultation – as Xiao-Li Meng put it here. Now, is it likely that “not being transparent, open, sincere nor honest” will help here?

John’s assumptions and goals are interesting. “All I assume is that the world is a better place if non-experts’ [non-statisticians’] beliefs are informed by true experts’ [statisticians’] claims”. Additionally, accepting the goal of “ensur[ing] that non-experts’ [non-statisticians’] beliefs mirror experts’ [statisticians’] claims in many ways”, the ethics of science communication boils down to identifying what is permissible in doing this. This makes sense – if non-statisticians were to think like statisticians, that would enable better research – right? The challenge arises because of how most non-statisticians may learn from statisticians, and the preponderance of a “folk philosophy of science [statistics]” among them. This allows “agnotology” (the deliberate creation and maintenance of ignorance) to flourish – whether by careerist statisticians or by those who get something from providing statistical “help” (e.g. co-authorship). Consider, for instance, the folk philosophy of statistics’ belief that there are correct and incorrect methods in statistics. Given that belief, once you have involved a statistician or statistically able consultant/collaborator, you can be fully confident the methods used in your study are correct: the uncertainty has been fully laundered. An example of this I ran into often at one university was: “After our discussion about analysing my observational study, I talked to Dr. X (a statistician with a more senior appointment in the university) and they assured me that step-wise regression is a correct way to analyse my observational study. They also said your concerns about confounding just overcomplicate things and make for a lot of unnecessary delay and work. In fact, I was able to do the analysis on my own, we have already submitted the paper and it looks like it will be accepted.”

Below I discuss John’s model of how non-statisticians become informed by statisticians (competent, incompetent and agnotological ones alike), try to identify current deficiencies in the statistical discipline’s ability to deliver if John’s model is applicable, and discern areas where I argue such a model is, or is not, too wrong to be helpful.

John offers a two-step model of how non-experts learn from experts. It involves a sociological premise – experts’ institutional structures are such that claims meet scientific “epistemic standards” for proper acceptance – and an epistemological premise: if some claim meets scientific epistemic standards for proper acceptance, then I should accept that claim as well. John’s requisite expectations here being: “The institutional structures of epistemic groups typically aim to ensure that its members assert and accept claims only when those claims do, in fact, meet relevant standards. … To think that the institutional structures of an epistemic group are working well, then, is to think that when members of the community assert or agree on some claim, the best explanation for them agreeing on a claim with this specific empirical content is that the claim meets relevant epistemic standards”. Or perhaps more pointedly, “trust in individual scientists [statisticians] involves an assumption that institutions ensure that some “social type” – the accredited scientist [statistician] – is trustworthy”. In this model, what non-experts are taken to learn is not any content but rather the apparent consensus of experts, along with an assurance that certain groups of experts’ “epistemic standards” should be taken as convincing. A nice counter-example John gives involves astrological claims that do meet astrological epistemic standards but which few of us would find convincing.

In regard to statistics, I think both premises are questionable. For instance, the methods statisticians choose or recommend are largely idiosyncratic. On the other hand, when there is a wide statistical consensus, such as on the foolishness of gambling, many or most people ignore it. The first premise – that statisticians’ institutional structures are such that claims meet statistical “epistemic standards” – is certainly rather weak to non-existent. There are now some accreditation processes, but they are fairly new and unproven (personally, I have been wary of them). Generally accepted standards for analysing studies along the lines of “just do these two t-tests and call me if the headache from the journal reviewers still persists after re-submission” don’t seem promising – at least for challenging research. So there simply isn’t any assessment that a given statistician is competent (will be aware of a consensus) or obligated to adhere to any such consensus. That is, there is currently no widely accepted “system of conventions that normally enable individuals to recognize valid science [statistics] despite their inability to understand it”, to borrow a phrase from Dan Kahan.

The ASA statement on p-values could be seen as a first step towards strengthening the first premise. It is not at all clear that the first premise can be adequately strengthened to meet John’s required expectations for it to work; time may tell. Though I don’t see assurances for this being the case – “the best explanation for them agreeing on a claim with this specific empirical content is that the claim meets relevant epistemic standards” – rather, just a symposium with invited speakers, and journal review and publication of invited and open submissions. What could possibly go wrong? Least publishable units exaggerating distinctions and individual contributions? There are perhaps worse possibilities: “statistical methods [could] be subject to regulatory approval”.

Now, the exploratory cognitive analysis discussed earlier in this post is primarily focused on discovering whether a researcher is ready/capable, along with our being able to discern how to get across to them some amount of the actual content of statistical reasoning. On the other side of the interaction, for the statistical reasoning to be sensible, we have to be ready/capable, along with the researcher being able to discern how to get across to us some amount of the actual content of domain reasoning. That of course always needs to be checked – did I get it, did they get it, and finally did we get it “together”? That is, exploratory cognitive analysis is all about our getting it “together”. This is very different from John’s two-step process of simply discerning whether there is a statistical consensus on an approach and whether such consensus should convince one.

When statistical consulting or collaborating involves open-ended research, it is a process, rather than a set of findings, that researchers need to be informed of and involved in. John’s two-step model seems inadequate here. The researchers need to grasp the essential content of the statistical reasoning and how it applies, rather than just being informed that there is a consensus that these statistical techniques are appropriate for their research and that most accredited statisticians would/should recommend them. It definitely involves getting partially into the design and choice of the workings of the sausage [science] factories. Science is a process that forces and accelerates getting less wrong about things – one that occasionally comes to a rest, but never ends.

The two-step model seems more appropriate for static claims made after a research consensus has been reached and the researchers want to convey those claims – in statistics, perhaps applications that are fairly straightforward: statistical defaults. Here, with strengthened first and second premises in statistics, exploratory cognitive analysis could easily be skipped. Not all applications of statistics are “rocket science”, and there simply aren’t enough competent statisticians for even a small percentage of applications; nor do the economics support it. For many applications, as Hadley Wickham argues, safe statistics would arguably be better than statistical abstinence.

5 thoughts on “To better enable others to avoid being misled when trying to learn from observations, I promise not to be transparent, open, sincere nor honest?”

  1. I like how the one “fundamental ethical reason” to be honest and transparent that he mentions in the (free) first page is “generat[ing] epistemic deference”. Followed by “Maybe there are other fundamental reasons” but he “argues against this possibility.” Yeah sure… because if we can’t force people to believe us by just saying stuff, then why bother doing science? Let’s just become priests; they already start from the premise that you need faith to believe anything they say.

    Good grief. I know I should read this, but it all just seems so misguided – some combination of “the public is stupid”, “there are bad people who will try to use science for bad aims”, and something about equating the “flow of true belief throughout society” with “selling sausage”. It just seems like a lot to try to slog through.

    • Agreed.

Keith’s discussion was worthwhile, but John’s first paragraphs didn’t give any indication that he lives in the same world as I do (or perhaps his ethics are just not compatible with mine? The sausage factory analogy really got me shaking my head).

      • But to ramble (muse?) a little: I do think that, in general, understanding how a learner thinks and learns is important in helping them learn. But I am extremely skeptical that a devised “model” such as the one John proposes can do the job — more generally, I don’t think that any theory or model of how people learn in a particular situation is reasonable — since my experience (based on teaching) is that different people learn differently. So the teacher/expert needs to listen to the learners (plural; in particular, sometimes listening to two different learners interacting with each other can be a really good way to get insight into how different people learn).

        • Agreed. Writing is always writing for an audience, whether you are writing an academic paper, a lecture, a novel or a poem. And without some reflection about who you are trying to convince and where they are coming from and what arguments will resonate with them, you are bound to fail at communicating your intended information or perspective. But the idea that people all learn the same way is like the idea that everyone responds the same to standing in awkward yoga positions or, for that matter, to taking a Tylenol. So I too am dubious of the model.

Models of human learning are useful. I don’t want to judge this model as a model without reading it carefully and thinking it through, but I also don’t see how a model that sees scientists as striving for “epistemic deference” as a goal in itself is particularly useful. Scientists earned that deference from the public (and philosophers) by making strides in learning that generated useful results for people (electrical grids, the internet, vaccines against diseases that killed tens of millions of people), and they did that by, for the most part, championing open discourse and honest reporting. If anything threatens that epistemic deference among the public, it is poorly conducted research that is dishonestly peddled as Science, and the key ingredient there is usually a hostility to openness in intellectual debate and/or data sharing.

          I should probably stop commenting on the paper itself unless I actually read the thing. But I just don’t see how a model of an idealized human learner can do the work of justifying pseudo-scientific bullshitting…. because someone might twist my honest words to further a political agenda? Really? I mean, nothing bad could come from lying to the public…right?

  2. Equality does not, I think, mean that we’re all the same height or have the same net worth. It means, rather, that we’re all, or all should be, free to call BS on BS. And to do so we each must have access to the same claims, the same arguments and the same empirical results upon which such arguments rest. This quest is not about sausages or sausage sales but rather about how we each think such things ought be priced (weighed). Denigrating the non-credentialed serves only the bootstrap purpose; by which you struggle mightily to accomplish nothing; repeatedly; forever.
