Algorithmic bias and social bias

The “algorithmic bias” that concerns me is not so much a bias in an algorithm, but rather a social bias resulting from the demand for, and expectation of, certainty.

17 thoughts on “Algorithmic bias and social bias”

  1. Quote from above: “The “algorithmic bias” that concerns me is not so much a bias in an algorithm, but rather a social bias resulting from the demand for, and expectation of, certainty.”

    The bias that concerns me is not so much a bias resulting from the demand for, and expectation of, certainty, but a possible demand for, and expectation of, interventions and/or actions based on (bad, and/or uncertain?) scientific findings.

    To further illustrate my point: imagine that scientists are suddenly totally on board with uncertainty, and are very careful in their conclusions, etc. This view can even be strengthened by repeatedly using sentences like “science is a process” and “science is about getting things less wrong over time”, etc.

    Although these kinds of sentences can be seen as scientifically valid, they (to me at least) do not necessarily imply that the world is one giant experiment, where the people of the world are all mere participants, and where scientists can just propose lots of (ever-changing?) interventions based on (uncertain?) findings and papers that governments will then implement because it’s “science” and that’s “the best thing we have”, or something like that.

    Also see “Primum non nocere” (“first, do no harm”) in this regard: https://en.wikipedia.org/wiki/Primum_non_nocere

    Thus, if uncertainty is clearly acknowledged and communicated, BUT scientists (or others) still demand and/or expect interventions and actions based on these (uncertain) findings and papers, I reason that this would still not be a “good” situation. Hence, my addition to the one-sentence blog post above is a second sentence, which I started this comment with and will repeat here to end it with as well:

    The bias that concerns me is not so much a bias resulting from the demand for, and expectation of, certainty, but a possible demand for, and expectation of, interventions and/or actions based on (bad, and/or uncertain?) scientific findings.

    • I think interventions need to be based on high quality estimates of both good and harm — and in most cases, need to be accompanied with qualifications describing the estimated harms, ideally including circumstances under which possible harm outweighs possible benefit from the intervention.

      To some extent, warning labels on drugs include this — listing possible side effects, with “black box” warnings for side effects that can be especially dangerous.

      Of course, what I just described is ideal; there is always the possibility that the research, recommendations, etc., may not be well done. But doing them well isn’t precluded by uncertain findings. If the probabilities of possible outcomes/consequences are well estimated, then we still have uncertainty, but the probabilities give a way of outlining when risks outweigh the benefits or vice versa. This is the whole idea of risk analysis.
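
      As a minimal sketch of that idea, here is the expected-value comparison at the heart of risk analysis; the probabilities and outcome magnitudes below are invented purely for illustration.

      ```python
      # Hypothetical risk analysis for a proposed intervention: weigh the
      # estimated probability of each outcome against a rough magnitude of
      # its good or harm. All numbers are invented for illustration.
      outcomes = [
          {"name": "intended benefit",   "p": 0.60, "value": +10.0},
          {"name": "no effect",          "p": 0.30, "value":   0.0},
          {"name": "mild side effect",   "p": 0.08, "value":  -3.0},
          {"name": "severe side effect", "p": 0.02, "value": -50.0},  # the "black box" case
      ]

      expected_net = sum(o["p"] * o["value"] for o in outcomes)
      print(f"Expected net benefit: {expected_net:+.2f}")  # +4.76 with these numbers
      # On average the benefits outweigh the harms here, but the severe-side-effect
      # term shows exactly which circumstances would flip that conclusion.
      ```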

      • Quote from above: “I think interventions need to be based on high quality estimates of both good and harm — and in most cases, need to be accompanied with qualifications describing the estimated harms, ideally including circumstances under which possible harm outweighs possible benefit from the intervention.”

        This seems like a possibly good thing, but I think it may still assume some of the things I wanted to make clear, and challenge, with my comment. I will ask three questions that try to make my point in a different way. I am not saying I have a definitive point of view or answer concerning all of this, but I am trying to articulate something I have been wondering about more and more:

        1) Should scientists (or other people) propose interventions (possibly based on “uncertain” findings or conclusions) whose long-term good or harm has not yet been investigated (let alone established)?

        2) Should interventions be accompanied by qualifications that the estimated good and/or harm may not be accurately gauged?

        3) Should proposing (or not proposing) interventions be (partly) based on things other than “high quality estimates of both good and harm”?

        (Side note: also see a recent critical blog post about a preprint that mentions the possibility of changing personality traits via interventions: https://replicationindex.com/2019/06/06/should-governments-shape-citizens-personality/)

        • My responses to Anon’s questions:

          “1) Should scientists (or other people) propose interventions (possibly based on “uncertain” findings or conclusions) whose long-term good or harm has not yet been investigated (let alone established)?”

          Only with caveats pointing out the uncertainty of the findings or conclusions and the lack of investigation of long-term good or harm. Also, the proposer should make clear what assumptions, beliefs, and values the proposals are based on.

          “2) Should interventions be accompanied by qualifications that the estimated good and/or harm may not be accurately gauged?”

          Definitely yes!

          “3) Should proposing (or not proposing) interventions be (partly) based on things other than “high quality estimates of both good and harm”?”

          Sure — but these things need to be clearly pointed out and accompanied by a careful discussion of their uncertainty and (possible lack of) quality.

          In summary — Standards should include:
          Full disclosure
          Intellectual honesty
          Discussion of possible consequences
          Honest, careful comparing and contrasting with other possibilities.
          Probably other things that don’t come to mind at the moment.
          But definitely restraint from any kind of “sell job”.

        • Thank you for the comment!

          I think I like and agree with your list (it could of course be written down more extensively and more clearly, but that was not necessary here, at least for now, and I think I get the points).

          Concerning my point 2): perhaps that can be seen as an “uncertainty qualification about an uncertainty quantification” :)

          Assuming “intellectual honesty” covers more than (the gist of) my point 2), I would like to add that point explicitly to your list, which then reads as follows:

          Full disclosure
          Intellectual honesty
          Discussion of possible consequences
          An uncertainty qualification about the uncertainty quantification
          Honest, careful comparing and contrasting with other possibilities.
          Probably other things that don’t come to mind at the moment.
          But definitely restraint from any kind of “sell job”.

          (I wanted to add a cat picture to this comment, so I searched Google for “cat bias” but couldn’t find an appropriate picture. However, I did come across this paper, which I thought might be even funnier and/or more interesting to link to here: https://www.ncbi.nlm.nih.gov/pubmed/31033416. From the abstract:

          “There is anecdotal and empirical evidence for black cat bias, the phenomenon where cats (Felis silvestris catus) with black coats are viewed more negatively, adopted less often, and euthanized more often than lighter colored cats. Despite the anecdotal claims, there is scarce empirical evidence for black cat bias. Using evaluations of cat photos, the researchers examined differences in people’s attitudes toward black and non-black cats of various colorations on measures of perceived aggression, perceived friendliness, and willingness to adopt. The researchers also explored whether participants’ levels of religiosity, superstitious beliefs, and prejudicial racial attitudes were related to black cat bias. Finally, the researchers explored whether black cat bias was related to difficulties people had in reading the emotions of black cats compared to non-black cats. This study provided evidence of black cat bias in the sample. People exhibiting higher degrees of black cat bias had higher levels of superstition, but not religiosity or racial prejudice. Additionally, people who had difficulty reading the emotions of black cats tended to exhibit a stronger bias against adopting black cats.”)

  2. From the Wall Street Journal: “‘I tell all the families,’ said the doctor, ‘this is what we know now, but it is going to change.’ DNA testing, which is rapidly advancing, offers answers without certainty.”

    People: “We demand certainty.”

    Nature: “Sorry, that’s not how this game is played.”

    • “Andrew – are you absolutely sure ;-)”

      yeah — what is “certainty”?

      what are you ABSOLUTELY certain about in your life & personal knowledge?

      are you certain the sun will rise tomorrow?
      are you certain that your perceived reality is not some illusion?

  3. Interesting. The bias that most concerns me is claiming that algorithms are somehow objective, and that therefore their outputs can be interpreted as unassailable, fair truth rather than messy outputs that reflect the assumptions and data that go in. Using “algorithms” to silence the demand for critical thought is probably their most dangerous effect in the short term.
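
    To make that concrete, here is a minimal, entirely hypothetical sketch: a trivial rule-fitting procedure is run on invented historical decisions for two groups, and its “objective” output simply reproduces the skew that was in the labels it was given.

    ```python
    # Hypothetical illustration: a trivial "algorithm" that learns the score
    # cutoff best matching historical yes/no decisions. If those decisions
    # held group B to a stricter standard, the learned cutoffs reproduce it.
    def fit_threshold(scores, labels):
        """Pick the cutoff that agrees with the most historical labels."""
        return max(sorted(set(scores)),
                   key=lambda t: sum((s >= t) == y for s, y in zip(scores, labels)))

    scores = [50, 60, 70, 80]
    past_decisions = {
        "A": [False, True, True, True],    # group A approved from score 60 up
        "B": [False, False, False, True],  # group B approved only from 80 up
    }

    for group, labels in past_decisions.items():
        print(group, "learned cutoff:", fit_threshold(scores, labels))
    # A learned cutoff: 60
    # B learned cutoff: 80
    # The procedure is perfectly "objective", yet its output is just the
    # bias baked into the data it was handed.
    ```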

  4. Dietvorst, Simmons, and Massey’s paper, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err”, is an interesting foray into judgement and decision-making around the common, naive C-suite view that algorithms and predictive modeling are “magic bullets”, and the 180-degree flip in that opinion when the predictions fail, as they inevitably will.

    http://opim.wharton.upenn.edu/risk/library/WPAF201410-AlgorthimAversion-Dietvorst-Simmons-Massey.pdf

  5. What concerns me is the hijacking of the term algorithm.

    What most people study as algorithms in the fundamentals of computer science, from sorting numbers to distributed consensus, are actually proofs. This is known as the Curry–Howard correspondence. It is what gives algorithms their prestige.
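
    As a toy illustration of that proofs-as-programs reading (a minimal Lean sketch, not anything from the sources discussed here):

    ```lean
    -- Under Curry–Howard, the proposition p → q is read as a function type,
    -- and a proof of it is a program: proving modus ponens is literally
    -- function application.
    theorem modusPonens (p q : Prop) (h : p → q) (hp : p) : q := h hp
    ```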

    Some write rules by hand, such as “if age of customer > 25, increase insurance premium by 10%”, or produce something similar via “AI”, and then want to call it “an algorithm” to shield it from protest or to overstate its sophistication (“you wouldn’t understand how it works; trust us, it’s doing the right thing because it’s the _computer_”).
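
    To spell out how little is behind such a rule, here is the quoted example written out in full, as a hypothetical pricing function:

    ```python
    # The entire "algorithm" behind the quoted rule is one conditional.
    def adjusted_premium(base_premium: float, customer_age: int) -> float:
        """Hypothetical rule: if age of customer > 25, increase premium by 10%."""
        if customer_age > 25:
            return round(base_premium * 1.10, 2)
        return base_premium

    print(adjusted_premium(1000.0, 30))  # 1100.0
    print(adjusted_premium(1000.0, 22))  # 1000.0
    ```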

    • In the good old days, we called what are now called “algorithms” “heuristics”. (The whole point was that we knew they were shortcuts that were generally incorrect but worked some of the time. It was a serious theory of intelligence, or at least a part of one.) We used them to build expert systems, found that they didn’t work, and called it quits*.

      Now they’re using the same heuristics, calling them algorithms, and telling the user that the user’s wrong.

      *: OK, some would say “crashed and burned”. Whatever.
