Automation and judgment, from the rational animal to the irrational machine

Virgil Kurkjian writes:

I was recently going through some of your recent blog posts and came across Using numbers to replace judgment.

I recently wrote something about legible signaling which I think helps shed some light on exactly what causes the bureaucratization of science, and maybe what we can do about it. In short, I agree that we do and should use our qualitative judgment, and that attempts to add “objectivity” are not objective and lead to bureaucratic capture.

I don’t quite understand what Kurkjian was saying in his post but I thought it might interest you, so you can follow the link and judge for yourself.

From the rational animal to the irrational machine

I do see some connections to my idea that people used to think of humans as being special for their rationality, but now it’s our irrationality that is considered a virtue. In the past we compared ourselves to animals, hence the human was “the rational animal.” Now we compare ourselves to computers, hence the human is “the irrational machine.”

As a rational animal myself, I’m not so thrilled with this change in attitude.

26 Comments

  1. walter says:

    subjective “value judgements” dominate human thinking and action — what you eat, where you live, what you do all day, what you own, who you communicate with, what you will fight for or ignore, etc, etc.
    All humans act in their own self-interest as they subjectively perceive it.

    Facts and rationality strongly influence human judgements, but there is huge variation among humans in such influence.
    Personal values and aspirations also vary widely and are often non-rational from a scientific perspective.

  2. Steve says:

    I followed the links. The whole “legible signalling” thing doesn’t make a lot of sense to me. First, it turns on Goodhart’s law, which I think is a law only in the same sense that “Don’t wear white after Labor Day” is a law. If your “target” is to lower your cholesterol or build a car that breaks the sound barrier, do measures of cholesterol or velocity lose their meaning? Clearly not. Situations in which Goodhart’s law actually holds reveal something more fundamental, namely, that the quantity being measured isn’t a quantity at all, or alternatively that it is several quantities being measured at once. (This issue reminds me of your recent post on the problem of conjoint measurement.) However, if there really is a single quantity being measured, then how could it be manipulated without actually faking the measurements? If aptitude tests are as easy to manipulate as Kurkjian suggests, then the problem isn’t signalling. The problem is that the test never measured anything (or any one thing). And if that is the case, the test never had value as a signal. Maybe Kurkjian means something else by the term “signal.” But the process he describes is: Step 1, there is a signal that has value to help us discriminate between people who actually have the skills we need and those who don’t. Step 2, there is manipulation of the signal. Step 3, we are stuck with signals that don’t signal. (I don’t understand this last point either. Why can’t we just get rid of the bad signals?) All of this depends on the signal having value to begin with, in which case it shouldn’t be so easy to manipulate without faking the measurement.

    • I think the idea of Goodhart’s law is almost entirely related to economic or social measures, ones that rational actors can alter “on purpose.” It comes from the fact that most quantities of interest for the law can be manipulated in multiple ways, only some of which result in desirable outcomes. For example, if your quantity of interest is the number of women with breast cancer in your county, you can set it to zero by murdering all the women in your county. It’s about the opposite of the intended goal, but it accomplishes the numerical goal.

      • Clyde Schechter says:

        Or, to give a less extreme example, if the goal is to improve student performance in some academic discipline and the measure is based on standardized test scores, adopt policies that discourage (or prohibit) less successful students from taking the tests. This is not only less extreme, but has actually been done!

        • Right, lots of times there are multiple ways to manipulate measurable things. Substitute women with breast cancer for endangered species, and you might well find that people have gone around murdering animals to avoid having their areas classified as special habitat, thereby losing control over land use or jobs or whatever.

          • Anoneuoid says:

            Or trying to optimize for life expectancy and infant mortality. One way is to preferentially abort ill, unwanted, and/or poor babies.

            • Anoneuoid says:

              I just discovered this got spun into a political talking point earlier this year:

              This shouldn’t come as a surprise, since it’s a relationship that’s been known for years, but the states with the harshest restrictions on abortions also have the worst infant mortality rates.

              The correspondence is unmistakable, and not hard to explain: Those states’ governments also show the least concern for maternal and infant health in general, as represented by public policies.

              https://www.latimes.com/business/hiltzik/la-fi-hiltzik-anti-abortion-infant-mortality-20190515-story.html

              Or maybe the infants that would have been aborted also tended to be more likely to die early from other causes? Seems pretty obvious to me.

              • jim says:

                “The correspondence is unmistakable, and not hard to explain:”

                Which doesn’t mean the explanation is true either.

                It’s not hard to just “explain” anything. The hard part is getting the right explanation.

              • Andrew says:

                Jim:

                I wouldn’t say “the right explanations.” There are lots of explanations, each telling part of the story.

              • jim says:

                I stand corrected. True.

              • Better are logically good explanations – those which, “through subjection to the test of experiment, lead to the avoidance of all surprise and to the establishment of a habit of positive expectation that shall not be disappointed.” CS Peirce

                In other words, those that can easily and clearly be investigated. Perhaps here, match medically motivated (as opposed to necessary) abortions in states with different policies and see which group has more medical interventions for serious matters within, say, 30 days. (OK, discuss with clinical experts first.)

              • Anoneuoid says:

                If you just consider the reasons why someone would get an abortion, it is logical to expect those infants would have a harder time than the average. Whether it be illness, neglect, or financial issues… it is difficult to see why someone would expect otherwise.

                I think the stat we really want is life expectancy from conception.

              • Anoneuoid says:

                Here is an attempt to get at life expectancy from conception:

                Erik Meidl (2009) Effect of Abortion, In Vitro Fertilization, and Other Causes of Prenatal Death on Life Expectancy in the United States from 1925 to 2005, The Linacre Quarterly, 76:4, 374-389, DOI: 10.1179/002436309803889043

                Figure 4 shows life expectancy rising from ~30 years to ~40 years from 1925-1970 then dropping back to the mid-30s by 1980. After that it has been flat. Seems like a pretty difficult thing to estimate but I’m glad someone tried.

          • jim says:

            “people have gone around murdering animals to avoid having their areas classified as special habitat,”

            Or in the case of the barred owl, USFWS workers murder it because it’s displacing the endangered spotted owl.

  3. Steve says:

    I understand that Goodhart’s law is applied to economic and social measures. My point about Goodhart’s law is that to the extent it applies, the problem is that the target is not a true measure of the variable of interest. But Goodhart’s Law gets things backwards. It makes us think that the problem is incentives: if we make the “measure” the target, then we incentivize manipulation of the measure and reduce its value as a measure. To me this seems wrong. The problem is with the measure, not the incentives. I can’t manipulate something that is not subject to manipulation, and if it is subject to manipulation, then even if agents are not consciously trying to manipulate the measure, the measure will be biased by all sorts of actions that agents are taking unconsciously. Take your breast cancer example: the problem is with looking at the number of breast cancer deaths alone. It doesn’t have anything to do with incentives. Yes, you could murder the female population. But, perhaps more plausibly, a new disease could come along and “reduce” breast cancer deaths by killing women who would have previously died from breast cancer. The problem was that we didn’t look at all-cause mortality. I think incentives enter in at the stage of choosing the “measure.” We have an incentive to choose measures that we can manipulate, i.e., targets that don’t actually measure.
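    Steve’s breast-cancer point can be made concrete with a toy calculation. All the counts and the disease scenario below are purely hypothetical, just a sketch of how a narrow target can “improve” while the broader measure (all-cause mortality) worsens:

```python
# Hypothetical annual death counts among women in some county.
before = {"breast_cancer": 120, "other_causes": 880}

# A new disease arrives. It kills 40 women who would otherwise have
# died of breast cancer, plus 10 who would not have died at all.
after = {"breast_cancer": 120 - 40, "other_causes": 880 + 40 + 10}

def all_cause(deaths):
    """The measure the comment says we should have looked at."""
    return sum(deaths.values())

# The narrow target "improves" even though nothing got better...
assert after["breast_cancer"] < before["breast_cancer"]
# ...while all-cause mortality, the broader measure, gets worse.
assert all_cause(after) > all_cause(before)
```

    Nothing here involves anyone responding to an incentive; the bias comes entirely from the choice of a measure that ignores the other dimensions of mortality.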

    • I think it’s not that a given measure isn’t somehow “real” or whatever, it’s that a single measure *NEVER* represents anything like a true “utility” function. In reality, what we care about is always a combination of many multidimensional things, with LOTS of dimensions.

      So when people start manipulating the world to reduce or increase the univariate thing you’re measuring, they find ways to do it that compromise other dimensions that aren’t being measured. Like my extreme example you can reduce the cancer deaths by just shooting everyone so they can’t die of cancer, or you can secretly shoot the endangered species and avoid getting your land usage rights taken away, which is what most people with land actually care about, and then there are no endangered species around to “save” and people eventually stop trying.

      Basically in the real world, no-one with power makes measures that combine a comprehensive enough set of dimensions to avoid excessive gaming.

      We know a lot about how to make good measures: for example, summing over all people a concave utility that combines ratios of money, ratios of longevity relative to a fixed non-gameable baseline (a fixed historical average), and quality-of-life measures. Including EVERYONE means you can’t game the system by censoring who you include; using money brings in a dimension specifically designed to balance desirability; the concave utility incorporates the known phenomenon of diminishing marginal value of everything; and keeping all the metrics dimensionless prevents you from doing something stupid like using nominal dollars, while the measure stays anchored to quantities of inherent interest such as longevity and quality of life.

      If you build such a metric, it is much harder to game than the kind of univariate ones Goodhart’s Law is about.
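      A minimal sketch of this contrast, with made-up names and numbers (the baseline, the quality-of-life scores, and the tiny population are all assumptions for illustration): an action that improves the univariate target is immediately penalized by a population-wide concave utility, because everyone’s longevity is counted no matter what.

```python
import math

# Fixed historical average: a non-gameable baseline, as suggested above.
BASELINE_LONGEVITY = 75.0

def composite_utility(population):
    """Sum over ALL people of a concave (log) utility of the
    dimensionless longevity ratio, plus a quality-of-life score."""
    total = 0.0
    for person in population:
        ratio = person["longevity"] / BASELINE_LONGEVITY
        total += math.log(ratio) + person["quality_of_life"]
    return total

def cancer_deaths(population):
    """The univariate, gameable target."""
    return sum(1 for p in population if p["died_of_cancer"])

# A tiny hypothetical population; one person would die of cancer.
population = [
    {"longevity": 80, "quality_of_life": 0.9, "died_of_cancer": False},
    {"longevity": 62, "quality_of_life": 0.7, "died_of_cancer": True},
    {"longevity": 78, "quality_of_life": 0.8, "died_of_cancer": False},
]

# "Gaming" the univariate target with the extreme move discussed
# upthread: the at-risk person is killed, so no cancer death occurs.
gamed = [dict(p) for p in population]
gamed[1]["longevity"] = 1        # life cut short
gamed[1]["died_of_cancer"] = False

# The univariate target improves...
assert cancer_deaths(gamed) < cancer_deaths(population)
# ...but the composite utility, which includes everyone, plummets,
# because the lost longevity shows up as a large negative log ratio.
assert composite_utility(gamed) < composite_utility(population)
```

      The log keeps the utility concave and the ratios keep it dimensionless, so censoring or sacrificing anyone always shows up in the total.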

  4. Terry says:

    Goodhart’s Law sounds way too general, and insofar as it is actually true, it is well known.

    Of course people will try to game the signal. That is part of the game. Therefore, the signals that endure are those where the cost of falsely signaling that you are high quality is high enough to deter false signals. All this is fundamental to signaling theory and well known. It is an empirical question when a signal is credible. Moreover, corruption of other sorts often completely undermines the whole process.

    Goodhart’s Law also sounds naive to me. It bewails the fact that the world is not a perfect place where everything is frictionlessly rational. I take the opposite view. I’m amazed the world functions at all given the myriads of imperfections.

    • Steve says:

      Agreed. Underlying Goodhart’s Law (I think it actually originates with Lucas) is a model in which we have perfect measures, and then, when we attach some incentive to the performance of a measure, the measure gets corrupted. But, as you point out, this process should weed out bad measures (or signals). If a signal had real value to begin with, agents should be able to offer better measures that prevent manipulation. To the extent that we believe there are areas where the measures never seem to get better, that should be evidence either that we cannot measure the variable of interest or that there was never any social value in the measure to begin with. If you believe that PhDs don’t actually signal real expertise, then the conclusion shouldn’t be that people in academia have somehow gamed the system to lower the signalling value of the PhD. Your conclusion should be that maybe these degrees never really played the role of a signal measuring expertise to begin with, or maybe it isn’t possible to measure such a thing.

  5. jim says:

    AG: “As a rational animal myself, I’m not so thrilled with this change in attitude.”

    Aye, but your blogging shows that people – even people in the profession of rationality – are not always rational, at least in the framework that I espouse. But perhaps, in behavioral economics terms, they really are rational but their incentives are misaligned (they operate in a different framework of “rational”).

    • Andrew says:

      Jim:

      I think of “rationality” in the weak sense of having reasons for things, and evaluating competing options using logic. I’m not thinking of “rationality” in the strong sense of optimization.

      I have no reason to think of my blogging as maximizing any utility function (except in the tautological sense that every action we ever do is maximizing utility because we’re doing it). But I have reasons for much of what I do, including this!

      • jim says:

        “But I have reasons for much of what I do, including this!”

        Definitely. Good ones! I would rather you didn’t – that your mission was accomplished – but so it is. The change in attitude is manifest. Nothing left to do but try to stem the tide.

        • Ethan Bolker says:

          I don’t think Andrew has a single mission here. Even if he did and it was accomplished he’d probably not stop blogging, because he enjoys it and so do we.

          • jim says:

            I’d say Andrew’s blogging revolves predominantly around a few closely related themes: discussing and guiding the appropriate use of statistics and statistical inference in science; improving the openness of science and scientists in dealing with errors and sharing data, and perhaps ensuring the quality of experiments and results where work is centered around the use of statistics.

            No doubt, he touches on many other things but I think it’s fair to summarize these as his main interests or his “mission”.

            I share his interests in these topics. I’m also interested in his occasional posts on data viz and other aspects of analyzing and presenting data. But as interesting as his fiction analysis is, if he were just sharing his favorite novel critiques, I probably wouldn’t be here. I never read fiction.
