A Bayesian bestseller? Maybe not. Leading to a brief discussion of ethically-restricted inference

David Johnstone writes:

It occurs to me that in a bestseller-type book someone could show lay people how to think Bayesianly in everyday yet sometimes vital situations. I have an example that arose lately. I had a painting contractor about to start painting my elderly mother’s house, which is old. There were signs of mould on the walls and he said he would include some solution in the paint to prevent mould from growing again. I arrived well into the job and then had a horrible thought that he had probably forgotten the treatment. I blurted out “did you remember the anti-mould solution?”, and he looked a bit fazed for a moment and then reached for his toolbag and said “yes I did, see here’s the bottle”. This was a small bottle of the right kind but was empty, which he presented as proof that the solution was in the paint.

So as I was driving off I wondered what the likelihoods were, given just the simplified part of the evidence that he had presented me (i.e. an empty bottle). Without going too much into it, it seems important to know whether one treatment takes a whole bottle or just, say, a capful, as an empty bottle would be more of a worry if only a tiny capful is needed. But of course an empty bottle has probability one under the possible alternative hypothesis that the painter arrived on the job with an empty bottle lying in his kit from some previous job.
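The structure of this update can be sketched in a few lines of code. All the numbers below are hypothetical, chosen only to illustrate the point about whole-bottle versus capful treatments; the correspondent gives no actual probabilities.

```python
def posterior(prior_used, lik_used, lik_not_used):
    """P(treatment was used | empty bottle) via Bayes' rule.

    prior_used:   prior probability the painter remembered the treatment
    lik_used:     P(empty bottle | treatment used)
    lik_not_used: P(empty bottle | not used) -- taken as 1, since a
                  leftover empty bottle from an old job explains the
                  evidence perfectly under the alternative hypothesis
    """
    num = prior_used * lik_used
    den = num + (1 - prior_used) * lik_not_used
    return num / den

prior = 0.7  # hypothetical prior that he remembered

# If one job empties a whole bottle, an empty bottle is likely evidence
# of use; if a capful suffices, an empty bottle is weak evidence.
whole_bottle = posterior(prior, 0.9, 1.0)
capful = posterior(prior, 0.2, 1.0)
print(round(whole_bottle, 3))  # 0.677
print(round(capful, 3))        # 0.318
```

The sketch confirms the intuition in the passage: with the same prior, the empty bottle leaves you less reassured when only a capful is needed per job.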

There are lots of everyday situations where a Bayesian structure would give people clarity of thought and aim their attention at the right unknowns. This one is revealing I think.

My friend applied for a job in a university, and there was great sensitivity about whether he or some in-house candidate, with friends on the inside and a meddling nature, should get it. After the process was complete apart from a signature by the Chancellor, he spoke to someone inside who knew the selection outcome and who, he truly believed, was sympathetic to him. This person said to him, “look, it’s better for you that you don’t know” (meaning, it seems, not until all was signed and sealed and it was too late for any internal rearguard action).

His Bayesian view was that this statement made very good sense if in fact he was the nominated candidate waiting for approval at the top, but was much less sensible and hence relatively improbable under the hypothesis that he had not been nominated. Most especially, a leak and loose lips could undo the whole process. So he took it as a very positive sign, which in fact was borne out when after a short wait he got the job. He later thought how clever his colleague was to convey such a strong signal while staying squeaky clean.
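The same reasoning can be put in the odds form of Bayes’ rule, where the evidence enters only through a likelihood ratio. Again, the numbers are invented purely to show the mechanics.

```python
def posterior_prob(prior_odds, likelihood_ratio):
    """Posterior probability from prior odds and a likelihood ratio,
    via the odds form of Bayes' rule: posterior odds = prior odds * LR."""
    odds = prior_odds * likelihood_ratio
    return odds / (1 + odds)

# Hypothetical: the "better you don't know" remark is quite likely if
# he was the nominated candidate (0.8) and unlikely otherwise (0.2),
# giving a likelihood ratio of 4. Start from even prior odds.
prob = posterior_prob(1.0, 0.8 / 0.2)
print(prob)  # 0.8
```

Under these made-up inputs a 50/50 prior moves to 80%, which is the qualitative point: even a bureaucratically proper remark can carry a strong signal if it is much more probable under one hypothesis than the other.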

Do you agree with this qualitative Bayesian interpretation of the statement that was made to him? If so, isn’t it interesting that the insider’s words would in fact be regarded as bureaucratically quite proper, rather than as practically giving the game away to a good Bayesian?

My reply:

First off, unless you get James Patterson involved, I don’t see much scope for a bestseller here.

Beyond this, I don’t have much to say about your particular examples, except that your second story, involving the transfer of information, reminds me of some ideas in disclosure limitation or restricted inference, in which for security or ethical reasons you’re not allowed to fully use the data you have.

Examples include:

– Education (where, for example, it wouldn’t be appropriate to include factors such as age, sex, ethnicity, or pre-test score in determining a final course grade, even if these variables improve predictive power beyond what could ethically be used in the final grade)

– Sports (where the championship goes to the winner, not to the player or team with the highest estimated ability)

– Public surveys (where the information has to be kept coarse enough so you can’t identify individuals, either precisely or with high probability)

– Google, Yahoo, etc. (ditto)

– Military or trade secrets (of course)


  1. With my limited and neophyte knowledge of the Bayesian approach: isn't it just a better way of thinking about actual probability calculation/estimation? If so, we wouldn't need to teach people to think in a Bayesian way, because our brains already work that way (using previous information to create the distribution against which new information is compared, e.g. assimilation and accommodation).

  2. LemmusLemmus says:

    Not sure it's exactly the kind of thing your correspondent was thinking of, but I was reminded of Gerd Gigerenzer's Calculated Risks.

  3. FH says:

    Not exactly a best seller, but perhaps something like The Invisible Heart by Russell Roberts.

  4. Mike Maltz says:

    Concerning the remark by Lemmus^2, my favorite quote from Gigerenzer et al (Simple Heuristics That Make Us Smart) is: "One philosopher was struggling to decide whether to stay at Columbia University or to accept a job offer from a rival university. The other advised him: 'Just maximize your expected utility—you always write about doing this.' Exasperated, the first philosopher responded: 'Come on, this is serious.'"

  5. Jeremy says:

    While I agree that grades in education and sports competition outcomes are decisions that constrain the set of ethically usable data, it doesn't make sense that either of these determinations would be trying to estimate anything in the first place. Education grades and sports scores are both strictly descriptive — they're not estimates of how well the student or team will do next time, but records of how well they did this time.
