Goals and plans in decision making

For years, Dave Krantz has been telling me about his goal-based model of decision analysis. It’s always made much more sense to me than the usual framework of decision trees and utility theory (which, I agree with Dave, is not salvaged by bandaids such as nonlinear utilities and prospect theory). But, much as I love Dave’s theory, or proto-theory, I always get confused when I try to explain it to others (or to myself): “it’s, uh, something about defining decisions based on goals, rather than starting with the decision options, uh, ….” So I was thrilled to find that Dave and Howard Kunreuther just published an article describing the theory. Here’s the abstract:

We propose a constructed-choice model for general decision making. The model departs from utility theory and prospect theory in its treatment of multiple goals, and it suggests several different ways in which context can affect choice.

It is particularly instructive to apply this model to protective decisions, which are often puzzling. Among other anomalies, people insure against non-catastrophic events, underinsure against catastrophic risks, and allow extraneous factors to influence insurance purchases and other protective decisions. Neither expected-utility theory nor prospect theory can explain these anomalies satisfactorily. To apply this model to the above anomalies, we consider many different insurance-related goals, organized in a taxonomy, and we consider the effects of context on goals, resources, plans and decision rules.

The paper concludes by suggesting some prescriptions for improving individual decision making with respect to protective measures.

Turning to their paper: Table 1 shows the classical decision-analysis framework, and Table 2 shows the new model, which I agree is better. I want to try to apply it to our problem of digging low-arsenic wells for drinking water in Bangladesh.

Is vs. should

I have a couple of qualms about Dave’s approach, though, which involve distinguishing between descriptive and normative concerns. This comes up in all models of decision making: on one hand, you can’t tell people what to do (at best, you can point out inconsistencies in their decisions or preferences), but on the other hand these theories are supposed to provide guidance, not just descriptions of our flawed processes.

Anyway, I’m not so thrilled with goals such as the one in Krantz and Kunreuther’s Table 5, “avoid regretting a modest loss.” The whole business of including “regret” in a decision model has always seemed to me too clever by half, especially given all the recent research on the difficulties of anticipating future regret. I’d rather focus on more stably measurable outcomes.

Also, Figure 4 is a bit scary to me. All those words in different sizes! It looks like one of those “outsider art” things:

[Image: krantzmap.png]

In all seriousness, though, I think this paper is great. It’s the only model of decision making I’ve seen that has the potential to make sense.

Need a better name

But I wish they wouldn’t call their model “Aristotelian.” As a former physics student, I don’t have much respect for Aristotle, who seems to have gotten just about everything wrong. Can’t they come up with a Galilean model?

5 thoughts on “Goals and plans in decision making”

  1. "Aristotle, who seems to have gotten just about everything wrong"

    Was that due to faulty logic/reasoning or faulty observations/knowledge base?

    CS Peirce (who worked in physics with Rutherford) seemed to think very highly of Aristotle. I have a wild speculation that Aristotle's quote "We are what we repeatedly do. Excellence, then, is not an act, but a habit." may have inspired Peirce to develop what the philosopher Ian Hacking has called a philosophically sound theory of Neyman-Pearson confidence intervals (around 1860). Additionally, Stephen Stigler has credited Peirce with coming up with Fisher's randomization test around the same time.

    Now Peirce believed everyone gets everything wrong (if it involves empirical content – i.e. not just math or logic) and once wrote that we are lucky that we die because otherwise we would live to see that everything we believed was wrong.

    Hope this was not too off topic.
    Keith

  2. As a former physics student, did you ever actually read Aristotle? I suggest trying the Politics.
    His physics was totally wrong, but give some thought to how revolutionary just the idea of there being physics was, not too long before him.

  3. Thanks much, Andrew (and others) for your comments on "Goals and Plans in Decision Making."

    The descriptive/normative issue is tricky under our theory. The fact is that, just as in the areas of induction (Nelson Goodman) and justice (John Rawls), we need a "reflective equilibrium" for good decisions, in which proposed normative principles are modified when they conflict with stable, thoughtful intuitions and, on the other hand, intuitive judgments are overruled when they conflict with prescriptions based on the best models we can construct, even taking into account the discordant intuitions. This has NOT been done in the field of decision making: starting with the objections of Allais, conflicting intuitions have been overruled by a utilitarian approach, in which, following Plato, we have assumed that there is a single Good.

    While I am nowhere near satisfied with our handling of this issue in the article under discussion, it does seem clear that the sorts of norms that will emerge, in equilibrium, will be much weaker than those based on utility theory. They will consist of questions, rather than answers: Have you thought about this aspect? Or tried to apply that model? Or read some relevant article or book? This is the sort of prescription that we give one another all the time.

    As for Aristotle, I don't agree with Andrew's prejudice based on his Physics. The real culprit is the argument from authority (which, for centuries, used Aristotle as The Authority). Unfortunately, the argument from authority is alive and well today, and visible in applied statistics quite markedly at times.

  4. Andrew, while Aristotle may have gotten specific details of science wrong due to the earliness of science in his time, you can thank him for most of your understanding of the concept "wrong". Aristotle got the big picture of thinking methods right: logic, non-contradiction, a theory of concepts. Lots of the foundational stuff you might just assume was around forever, Aristotle invented; he built much of the infrastructure of Western rational thought. We can also thank him for keeping us out of the hands of Plato's totalitarian philosopher kings who supposedly would know the ultimate, ineffable "truth" – divorced from any worldly observations.

Comments are closed.