Is “choosing your favorite” an optimization problem?

This post is by Phil Price, not Andrew.

A week or so ago, a post involving an economic topic had a comment thread about what is involved in choosing a “favorite” (or a “preferred choice” or “the best”…terms like these were used more or less interchangeably in the comments). The thread got pretty long and, to me, a bit frustrating. Here’s what started it off: I wrote that “you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one. You need a utility function that puts everything in one dimension.” (I shouldn’t have used the term “utility function”, with its strong connection to economics and rational choice. I think the actual situation is more general.)

Much to my surprise, several thoughtful, intelligent readers disagreed (and still disagree) with that statement.

In order to decide which you prefer among A, B, and C, you need to evaluate your preference for A, B, and C so that you can compare them. You need to be able to put them in order, e.g. pref(B) > pref(A) > pref (C)…or at least, pref(B) > pref(A or B). So far, so tautological.

You need to be able to evaluate the choices even when your preference depends on multiple parameters. If I’m choosing a phone, for example, I have a choice of (speed, cost, battery life) and of course many other parameters too. So you might have a choice between:

A: (fast, expensive, long-lived)
B: (fast, cheap, short-lived)
C: (slow, cheap, long-lived)

How can you choose among these? Well, go back to the tautology above: to decide which one you prefer, you need to be able to evaluate your preference for each. The tautology is still a tautology.

You get nowhere by saying “I prefer the speed of A, the expense of B, and the battery life of either B or C”…you need to boil it down to one ‘preference function’, although you may not think of it that way. As Daniel Lakeland put it: “Phil, here’s a more mathy way to say what I think you’re saying. All complete totally ordered fields are isomorphic to Real Numbers. This is a known mathematical fact.” (And this is also where the Wikipedia article on optimization takes you.)
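
To make that concrete, here is a minimal sketch of what “boiling it down to one preference function” can look like; the phones and the scoring weights are made up purely for illustration, and the particular function could of course be anything:

phones = {
    "A": (2.4, 900, 20),   # fast, expensive, long-lived (GHz, dollars, hours)
    "B": (2.4, 300, 8),    # fast, cheap, short-lived
    "C": (1.2, 300, 20),   # slow, cheap, long-lived
}

def pref(speed, cost, battery):
    # A made-up preference function: any mapping of the three
    # dimensions onto a single number would serve the argument.
    return 10 * speed - 0.005 * cost + 0.5 * battery

favorite = max(phones, key=lambda name: pref(*phones[name]))
print(favorite)  # "A" under this particular (arbitrary) weighting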

So what do people object to? Different things. At first I thought the objections were specious, but the discussion changed my mind; I think there’s some stuff worth thinking about. Hence this post.

1. One respondent said “You say ‘a consistent dimension’ but surely you know that it is possible to think more multidimensional than a single one.”

2. A couple of respondents challenged the claim that you _always_ need a single preference function. Suppose, for the cell phone, I prefer fast to slow, cheap to expensive, and long-lived to short-lived. If there were an option D (in addition to A, B, and C above) that is characterized by (fast, cheap, long-lived) then D would be the dominant choice, preferred in every dimension, so there would be no need for a single preference function. You might have one, sure, but you don’t need one.


3. Responding to my claim that to come up with a favorite “ultimately you have to be able to put these in order, or at least to have one of them bubble to the top”, another respondent said “I guess you do, but I certainly don’t. In fact I sometimes specifically avoid ranking them by “value” so as to pretend I don’t have to deal with the consequences of my choices.”


I’ll give my take on these, below, but would also be interested in reading what others have to say.

  1. My take on the first claim, that it is “possible to think more multidimensional”: of course this is true. But this doesn’t change the fact that in order to compare your preferences for A, B, C, etc. you need to map all those dimensions onto your “preference” scale.
  2. As for the second claim, that you don’t need a single “preference” function if one choice is better in all ways: I’m not sure about this one. I mean, I agree that if one choice dominates then you can optimize each parameter and thus end up at the global optimum (there’s a small dominance-check sketch after this list). Does this mean there is no single preference function, or just that you don’t have to think about it? In the cell phone example, D is dominant. But imagine you’ve selected D and then learn that oops, (fast, cheap, long-lived) is not an option after all, they raised the prices and that one is actually option A: (fast, expensive, long-lived). Now no choice is dominant. Do you suddenly have a preference function that you didn’t have a moment ago? Or did you have a preference function that you had just not thought about before?

  3. “I sometimes specifically avoid ranking [options] by “value” so as to pretend I don’t have to deal with the consequences of my choices.”

    I initially responded poorly to this. What I should have said, and said later, is that IF you are trying to choose a favorite then you need a way of evaluating them on a common scale, but of course nobody says you have to choose your favorite. For instance, you could pick a phone at random and then declare it to be your favorite, which could indicate that in fact you are indifferent to which phone you get (that is, pref(A) = pref(B) = pref(C)), or could indicate that you are picking a selection method that you prefer rather than trying to pick the phone you prefer, or whatever.
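
Here is the small dominance-check sketch promised in point 2. Everything in it is hypothetical: the phones are encoded so that higher is better on every axis (cost is negated), and the point is only that when one option dominates, no weights and no single preference scale are needed to find it:

def dominates(a, b):
    # a dominates b if a is at least as good on every dimension
    # and strictly better on at least one (higher = better here).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical phones as (speed, -cost, battery life), so higher is better everywhere.
options = {
    "A": (2.4, -900, 20),
    "B": (2.4, -300, 8),
    "C": (1.2, -300, 20),
    "D": (2.4, -300, 20),  # fast, cheap, long-lived
}

undominated = [n for n in options
               if not any(dominates(options[m], options[n]) for m in options if m != n)]
print(undominated)  # ["D"]; no preference weights were needed to see that

Take D off the menu, though, and all three of the remaining phones are undominated, which is exactly the situation where the question of a single preference function comes back.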

But while I was pondering this issue — by which I mean the distinction between picking the item you prefer, or picking your preferred way of choosing an item — I realized that it’s very common in practice to try to optimize something other than the choice of product you are buying. If I want to buy a cordless drill, there’s only so much time I’m willing to spend doing research. I’m not trying to find my preferred drill per se, I’m trying to find the best combination of (which drill is it; how much time do I spend doing research; how much time does it take to get the drill; how much does it cost). This is still an optimization problem, and thus still requires me to evaluate some sort of preference function…but it feels like there’s a way in which that function is changing even as I’m trying to perform the evaluation. I start out by looking up reviews of drills, I check a few prices, maybe I check availability at the two big hardware stores near me that list stuff online, maybe I look at what it costs to buy a second battery. At some point I learn that for some of the drills you can use the same charger and maybe the same batteries that you can use with some other cordless tools…is it worth it to me to consider whether someday I’ll want to buy, say, a cordless circular saw too, and how important it will be for me to be able to use the same charger and/or battery that I use for my drill?

In practice, at some point I decide enough is enough, it’s time to make a decision…so one option is to declare that whatever my preference function is at this moment, that’s what I’m using. At that point, the amount of time and effort I put into doing research and checking availability is a sunk cost that is no longer relevant, but it was relevant when I started. Perhaps one way to think of this is that I had a very complicated preference function when I started, which includes the preference for more information rather than less, and for long-term desires in addition to short-term ones, etc., and that this whole decision of what drill to buy can still be thought of as a classic optimization problem. But it’s not immediately obvious how to make it fit the standard paradigm.
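
Here is a rough sketch of what I mean by folding the research itself into the objective. All of the numbers, and the value_of_drill and net_value functions, are invented; the only point is that “how long do I keep researching” sits inside the same objective as “which drill do I buy”:

HOURLY_VALUE_OF_MY_TIME = 30.0  # dollars per hour; an assumption

def value_of_drill(drill):
    # Made-up dollar value I place on owning this drill.
    return drill["quality"] * 40.0

def net_value(drill, research_hours):
    # More research tends to surface better deals, with diminishing returns (hypothetical).
    discount_found = 20.0 * (1 - 0.5 ** research_hours)
    return (value_of_drill(drill) - drill["price"] + discount_found
            - HOURLY_VALUE_OF_MY_TIME * research_hours)

drills = [
    {"name": "X", "quality": 7, "price": 120},
    {"name": "Y", "quality": 9, "price": 180},
]

best_drill, best_hours = max(((d, h) for d in drills for h in (0.5, 1, 2, 4)),
                             key=lambda pair: net_value(*pair))
print(best_drill["name"], best_hours)

Even in this toy version the awkwardness shows up: the discount schedule is exactly the thing I don’t know until I have already spent the research time.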

I think the points above are the main ones that came up in the thread a week ago. I’m going to say a few more things to illustrate some other points that might otherwise be distracting. These are related to uncertainty and indifference. Suppose I am on a game show. At some point I’m told to go into the “choosing chamber” and bring something back within thirty seconds; I’ll be given $100 for each ounce that the thing weighs. Let me stipulate that for purposes of this example I am trying to maximize my earnings, as opposed to having a funny story or whatever. I rush into the chamber and find a book, a carton of eggs, a large anvil, a lima bean, and a safety pin. It’s a game show, maybe the anvil is plastic and the book is full of lead, so I quickly heft the book, eggs, and anvil, ignoring the lima bean and safety pin. The anvil is much, much heavier than the book or the eggs. I grab the anvil and make for the door…but it’s staggeringly heavy, time is ticking, and I realize I’m not going to make it back to the host within the allotted time. I drop the anvil and go back to the book and the eggs. Jeez, they seem about the same…maybe if I stood very still and tried shifting them back and forth between my hands I’d be able to tell which is heavier — even one ounce would be another $100 — but I don’t have time. I pick the book.

Points I’m trying to illustrate with that example: (1) whether you are doing univariate or multivariate optimization, you don’t have to compare everything to everything else. I implicitly compared (anvil, book, eggs) to (lima bean, safety pin) and decided all of the former were heavier than any of the latter. I did not need to compare the lima bean to the safety pin. (2) This seemed like a very straightforward optimization problem, but turned out to be a problem of constrained optimization: the problem was not “find the heaviest object” but rather “find the heaviest object I can bring back in the allotted time.” (3) Sometimes there is important uncertainty in key parameters. <a href=’https://en.wikipedia.org/wiki/Optimal_decision’>Decision-making under uncertainty</a> is an important discipline.
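
For what it’s worth, the game-show logic is easy to sketch. The weights and the “can I get it back in time” flags are invented; note the quick triage step, which is why the lima bean and the safety pin never get compared to each other:

# Hypothetical weights (ounces) and whether I could carry the item back in 30 seconds.
items = {
    "book":       {"weight": 48,   "back_in_time": True},
    "eggs":       {"weight": 24,   "back_in_time": True},
    "anvil":      {"weight": 800,  "back_in_time": False},  # too heavy to haul back in time
    "lima bean":  {"weight": 0.1,  "back_in_time": True},
    "safety pin": {"weight": 0.02, "back_in_time": True},
}

# Quick triage: anything obviously light is dropped without further comparison,
# so the lima bean and the safety pin are never weighed against each other.
candidates = {k: v for k, v in items.items() if v["weight"] > 1}

# Constrained optimization: the heaviest candidate *subject to* making it back in time.
feasible = {k: v for k, v in candidates.items() if v["back_in_time"]}
print(max(feasible, key=lambda k: feasible[k]["weight"]))  # "book"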


This post is by Phil.

82 thoughts on “Is ‘choosing your favorite’ an optimization problem?”

  1. Andrew writes: “In order to decide which you prefer among A, B, and C, you need to evaluate your preference for A, B, and C so that you can compare them. You need to be able to put them in order, e.g. pref(B) > pref(A) > pref (C)…or at least, pref(B) > pref(A or B). So far, so tautological.”

    I hate the use of “tautological” here, and I don’t think that I am being nit-picky. There is nothing illogical about having an incomplete ordering of preference. There is nothing illogical about choosing based on the flip of a coin. The alleged irrationality of not having a transitive ordering of preferences dates back to Ramsey’s Dutch Book argument, where we can extract everything from someone whose preferences are intransitive. But, that argument depends on the assumption that those preferences map onto a quantity. However, my preferences don’t map onto anything (and hence no function). I prefer pineapple to pepperoni, but believe it is evil to put pineapple on pizza. So, in one context I prefer pineapple, and in another pepperoni. No onto mapping.

    You could reply that when one is forced to make a decision, you simply force your preferences into a function by eliminating certain contexts. (This is like Andrew’s constrained optimizing.) Fair enough, but then what are you maximizing? Not some pre-existing preference scale. This may seem like a trivial problem, but in the context of social choice, it is not. A woman living in medieval Europe may “optimize” by choosing a husband who is dull-witted but rich, where the choice may be quite different in a world where she can have a career of her own. Of course, we are always going to have to hold some things constant to think clearly about a problem. But, thinking in terms of optimization often obscures the fact that we have choices about the context in which we are choosing. Deciding the best way to deal with homelessness will depend on choices we make about mental health policy, housing policy, etc. And, those choices will in turn depend on others. At the end of the day, it is not an optimization problem at all.

    • I forgot to point out that this post was by Phil, not Andrew.

      You say “There is nothing illogical about having an incomplete ordering of preference. There is nothing illogical about choosing based on the flip of a coin.”

      True enough, I never said there was. I said that “IF you are trying to pick the best one” then you need to be able to evaluate your preference for each.

      I’m not sure why you put “optimize” in quotes when you refer to the medieval woman who chooses a dull-witted but rich husband. Both the man’s riches and his wits can reasonably enter into an optimization.

      It’s interesting that you suggest that “deciding the best way to deal with homelessness… is not an optimization problem at all”. The definition of “optimization” is “the action of making the best or most effective use of a situation or resource.” How can that not be an optimization problem? If you want to say it can’t be reduced to a mathematical optimization, that’s a different claim, but choosing the best way to do something is indeed an optimization problem, that’s the definition of “optimization”.

      • This post is definitely not by me! I’ve taught decision analysis many times, so I’ve had ample opportunity to think through all the issues discussed in the above post. One thing I can say is that the academic literature on decision analysis is not always so helpful. There’s lots of sloppiness out there masquerading as rigor (for example in the common but mistaken identification of “risk aversion” as a concave utility function for money).

        Regarding Phil’s questions: the standard (and, I think, reasonable) way to define everything is to start with certain axioms about preferences, and from there it can be derived that if your preferences satisfy these axioms, then you must have an implicit utility function. On the other hand, it’s been often noted that preferences don’t in general exist until the option is considered, so it’s not clear how useful it is to consider preferences among theoretical options. I mean, sure, you can consider these preferences as a way of finding out places where people are inconsistent, but once you get to that point of incoherence, any individual preference doesn’t have such a clear meaning. This also relates to questions such as the distinction between “willingness to pay” and “ability to pay” that I mentioned in the earlier thread. I might be willing to pay a million dollars for a certain cancer treatment, but willingness won’t get me far if I don’t have the ability to pay. And once you recognize that preferences depend on external factors such as the amount of money you have in the bank, it becomes even more difficult to think of utilities in an abstract sense. Finally, as Phil notes, once you even start to talk about “optimization,” this raises the point that optimization itself has a cost. The term “satisficing” is sometimes used here, but that doesn’t really solve any problems either, in that you still have to decide when to satisfice, and by how much!

        That all said, I still think that formal decision analysis is valuable. It’s a mathematical model and, like all mathematical models, has limitations when you try to apply it to reality. But it’s still useful for what it is.

      • Phil

        Sorry for misattributing the post to Andrew. You used the word “tautological,” which I know people sometimes use loosely, but it does mean logical truth, and I think it is important to understand that the idea that preferences need to be ordered transitively is not a logical truth. If you treat it as an axiom, then fine, but we then have to ask whether the phenomena we are studying fulfill those axioms. I think in some narrow confines the answer may be close enough, but generally no.

        You write, “If you want to say it can’t be reduced to a mathematical optimization, that’s a different claim, but choosing the best way to do something is indeed an optimization problem, that’s the definition of ‘optimization’.” “Best” requires a scale, and I took you to be talking about preferences, and a preference function. My simple point is that in general we don’t have preferences that map onto some scale. Preferences will map to multiple values depending on the context, and so no scale and therefore no “best.” Some (I suspect most) decisions cannot be reached by choosing the option highest on a scale. The Golden Rule is an excellent guide for behavior. But, it does not optimize anything. It simply demands I engage in behaviour that is not in violation of the rule. Many options may satisfy it and it does not rank those options.

      • Phil, I 100% agree with you that “deciding the *best* way to deal with X” is an optimization problem.

        The thing is that lots of people make real actual decisions by whims: “I’ll eat a cheese sandwich” or “I’ll buy the green car” without trying to make “the best” decision.

        That being said, I don’t think we should make social decisions that affect many people without some effort at optimization. It’s one thing if you have a cheese sandwich but if given the option would have preferred a turkey sandwich… it’s another thing if we force all school-age children to eat only cheese sandwiches in their school lunches because hey, some guy thought that’d be ok and didn’t make any effort to understand what better options there were (like providing variety).

        • Daniel:

          This is related to what I call institutional decision analysis. Ironically, decision analysis is often explained in the context of personal decision making examples, even though, as you say, formal decision analysis makes much more sense for public decisions that involve shared costs, benefits, and responsibilities.

        • “formal decision analysis makes much more sense for public decisions that involve shared costs, benefits, and responsibilities.”

          The more formalized the decision process, the more vulnerable it is to the biased assumptions of the few individuals involved in the formalization. What if the nudgelords established the “formal” decision making process?

        • Jim:

          I’m not saying that formal decision analysis should necessarily be used to make the decision. What I’m saying is that formal decision analysis should be performed. Then if the decision maker wants to do something different, he or she can explain why.

        • “What I’m saying is that formal decision analysis should be performed. ”

          In principle this makes sense. In practice it doesn’t, since activists would use the formality to derail and obfuscate the obvious solution. Some engineering decisions – suspension bridge or floating bridge? – seem superficially safe from activists mucking up the process. However, even such an apparently clear technical decision could easily be overwhelmed by a model loaded with activist baggage, so thick and heavy that no explanation would ever be forthcoming if it were exposed at all. In the end the “explanation” for the counterintuitive result would be “the models say…”.

        • Yes to whims etc. Maybe I should make a keyboard shortcut that says “IF you are trying to pick the best option, then…” because I agree that you do not have to try to pick the best option.

          As to your point about social decisions: In a book I read a long time ago…maybe in How To Lie With Statistics, or maybe A Mathematician Reads The Newspapers, or maybe somewhere else… the author said that people who work on things like decisions about health interventions (such as how many ER beds we should have) often face complaints that “people are being reduced to numbers” and so on. The author says that the more important the decision, the _more_ important it is to allocate resources efficiently. The fact that people’s lives are involved makes it more important to get the answers right, not less. You are in fact going to spend a finite amount of money on health care, and if you spend it inefficiently people will die. If you think people’s lives aren’t being given enough value, you should try to get people to assign a higher value, not argue that quantitative approaches shouldn’t be used at all.

  2. Another point to think about is that the ordering is established by the equivalent of weightings, but you don’t know the weights. You only know snapshots of the weights, based on your current state of knowledge, your current psychological makeup, your current emotional state, and so on. The weights depend on your goals, too, and the goals are only known as estimates.

    If this were a statistical question, you would realize that you are dealing with estimates and allow for it, but in this context it’s harder for the chooser to see.

  3. Well, now you’re on interesting philosophical turf. To the orthodox economist, everybody is always maximizing as long as they’re choosing. *What* they’re maximizing may be very complex, in that it may include the mental strain of maximizing (or mental pleasure from problem solving) in the maximization function. But to say that you aren’t maximizing is to say that you had another choice you would have preferred even at the time and you didn’t make it (when taking into account the difficulty of making the choice). The name economists have for that is “irrationality.” And it’s not to say that people never act irrationally, but irrational decisions are hard to square with being *choices.*

    Lots of economics reasoning is fitting what *appears* irrational into a meta-rational decision… a point you’ve made many times. Indeed, lots of the times that pop economics falls into the “people need to learn from economists” mode is the thought economists have that people haven’t thought hard enough about the problem (optimally, for them) but might be taught or given an alternate heuristic which will make all the difficult pain of mental strain disappear, at which point “true optimality” can emerge. (I put the last part in scare quotes, but almost everything in this comment could go into scare quotes if you want to get really philosophical about it.)

  4. You keep claiming that the things you write are “tautological” when they are ill-defined or wrong.

    > In order to decide which you prefer among A, B, and C, you need to evaluate your preference for A, B, and C so that you can compare them. You need to be able to put them in order, e.g. pref(B) > pref(A) > pref (C)…or at least, pref(B) > pref(A or B). So far, so tautological.

    No. In order to say that you prefer, say, A to B and C and D, what you NEED is to establish your preference of A over B, your preference of A over C and your preference of A over D.

    In an appropriate notation, where the “curved” sign ≻ denotes “is preferred to” we have:

    A ≻ B
    A ≻ C
    A ≻ D

    You write pref(A), pref(B), etc. but it’s unclear what it means precisely or why it would be needed. There is nothing tautological about it.

    > As Daniel Lakeland put it: “Phil, here’s a more mathy way to say what I think you’re saying. All complete totally ordered fields are isomorphic to Real Numbers. This is a known mathematical fact.”

    The facts about ordered fields are irrelevant when we are talking about ordered sets. It’s possible to define a preference relation in R^2 that cannot be represented by a real function.

    Anyway, to prefer A among {A B C D} one doesn’t even need the preference relation to define an order. The preferences on {B C D} could be intransitive and it could be still possible to prefer A to the other options if, for example, in addition to the relations described above we had:

    B ≻ C
    C ≻ D
    D ≻ B
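
    In code, as a rough sketch (the relation is simply hard-coded): A comes out preferred to everything, no scalar pref() is evaluated anywhere, and the cycle among B, C and D does no harm:

    # (left, right) in PREFERRED means "left is preferred to right".
    PREFERRED = {("A", "B"), ("A", "C"), ("A", "D"),
                 ("B", "C"), ("C", "D"), ("D", "B")}  # note the intransitive B, C, D cycle

    def preferred_to_all(x, others):
        return all((x, y) in PREFERRED for y in others)

    options = ["A", "B", "C", "D"]
    winners = [x for x in options if preferred_to_all(x, [y for y in options if y != x])]
    print(winners)  # ["A"]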

    • Indeed when we have discrete choices the fields math doesn’t necessarily enter, for example we never need to consider what happens when we multiply some things by 3 and others by sqrt(2).

      But in many real world decisions we do have basically continuous quantities to work with. Esp when it comes to dollars. Do you prefer the phone that costs $217.55 and operates at 2.1GHz to the one that costs $203.43 and operates at 1.9GHz? Do you prefer 1.5lbs of sliced turkey and 1lb of sliced swiss at a cost of $11.13 or 1.15lbs of sliced turkey and 0.66lbs of sliced swiss at $8.13?

      • When choices are discrete a utility representation does exist. The problem is that this won’t be necessarily true with (multi-dimensional) continuous choices.

        • Yeah, we have the vN-M theorem that says the utility representation exists so long as you have uncertainty and are considering continuous lotteries (i.e. the quantity that can change continuously is “probability to get X”). I’m not sure what theorem we can get from something like “for all quantities A,B,C representing an amount of each good you get, you can pick a preference between any two choices {A1,B1,C1} and {A2,B2,C2}”

          I think if you have the ability to pick between those for any quantities and some mild other conditions, it’s not going to be hard to prove a utility representation exists, and then Phil is right that you act as if you have some U({A,B,C}) that is a real number.

        • I maintain that even in the discrete case, if you are choosing between phones A, B, and C as outlined in the problem, you do have a ‘preference function’ even though you evaluate it only at those three points in parameter space.

      • The field axioms aren’t used anywhere in conventional utility theory either, even in the continuous case. If you’re claiming to do arithmetic with utilities then you’re moving beyond order preferences into cardinal utility, and I don’t know of any reason to think that the reals are suitable in that case (completeness smuggles in more assumptions than you’d expect). Utility functions exist for transitive preferences mainly because the reals are big enough to be stuffed with *wildly* infinite ordinals. Only their order structure is used. The reals are only ‘preferred’ as a universal domain due to familiarity, rather than as a canonical answer.

        More to the point, Carlos’ main objection seems to be that preferences are only a *partial* order (or even a pre-order). Decision sets without a maximal element are then resolved semi-randomly, or are so deeply contextual that talk of utility functions can only be a post-hoc technicality. Continuity also doesn’t really bear on whether this is correct. E.g. in response to your examples, someone could say “I prefer buying more food for less. If I have to choose between (more food, more cost) and (less food, less cost), the decision depends on hundreds of other details that won’t fit into a single paragraph description of the decision set, and also probably fluctuates strongly day-to-day”.

    • Carlos (and/or Steve), I hope one of you will update the Wikipedia entry on discrete optimization, or rather on all three ‘branches’ it mentions (combinatorial optimization, integer programming, and constraint programming), since if you click through you find that all three of those are based on maximizing an objective function. Please modify whichever of those is appropriate, or add another branch. I, and many others, would be interested in a method of finding your preferred choice without having a way of evaluating each choice relative to the others.

      I don’t understand how you can say that you prefer A to B and C and D unless you are comparing your preference for A to your preference for B and C and D. I agree that you do not need to compare B, C, and D among themselves, but I don’t see what that has to do with it.

      • As I said above, the Golden Rule is a perfectly good decision rule that doesn’t optimize anything. To use Andrew’s term it “satisfices.” Options satisfy the rule, but they aren’t ranked. I choose all the time in situations where some options are acceptable and some are not, but there is no ranking. I don’t know how else to explain our point: logic does not dictate that preferences have to be transitively ordered. By introspection, I can tell that (1) there is no complete ordering of my preferences, and (2) there is no mapping of my preferences onto a utility space or preference scale (whatever you want to call it). An onto mapping is required for a function. Like I said, we can force such a mapping. And, maybe there are times when we can say we think we are close enough to (1) and (2) that it doesn’t matter. But, that does not mean that there is an objective function that can always be maximized. Nonetheless, I concede if Wikipedia disagrees with me, Carlos and I must be wrong.

      • You assume that there is such a thing as “your preference for A”, etc. but there doesn’t need to be.

        What is so hard to understand about preference _relations_ and how “preference of A over B” and “preference of A over C” is all that you need to say that you prefer A over B and C?

        You assume as well that “preference of A over B” and “preference of B over C” have to imply “preference of A over C” but preference relations are more general than that.

        And even when the preferences are transitive and the set of choices is ordered it may still be impossible to define “your preference for A”.

        Imagine that choices are points in a plane like A=(x,y) and B=(x’,y’) with preferences defined as: A is preferred to B if x>x’ or if x=x’ and y>y’.

        Do you understand that given three distinct points it’s trivial to determine which one is preferred? For example, (3,5) will be preferred to (-2,10) and (3,0).

        Do you understand that a “preference” function of (x,y) that you can (let alone need to!) evaluate for the three points to compare them and put them in order and pick the preferred one doesn’t exist?
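
        As a rough sketch, that is all you need in code: a pairwise rule you can always apply, a loop that keeps whichever of the current pair is preferred, and no per-option score anywhere:

        def prefers(a, b):
            # Lexicographic rule: (x, y) is preferred to (x2, y2)
            # if x > x2, or if x == x2 and y > y2. No pref(a) is ever computed.
            (x, y), (x2, y2) = a, b
            return x > x2 or (x == x2 and y > y2)

        points = [(3, 5), (-2, 10), (3, 0)]
        best = points[0]
        for p in points[1:]:
            if prefers(p, best):
                best = p
        print(best)  # (3, 5)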

        • Nope. It would need to be higher at (x, y+epsilony) than at (x, y), but also higher at (x+epsilonx, y’) for any y’ than at any (x, y) value.

          Let epsilony go to infinity: the function must keep increasing. Suppose it increases without bound; then the function at x+epsilonx must be everywhere infinite.

          Suppose that instead it asymptotes to a fixed value as epsilony increases… Then at x+epsilonx it is everywhere above that fixed value, so it’s discontinuous everywhere in x… But it can’t be discontinuous everywhere in x because it needs to always increase with increasing x. If it jumps by a discrete value for every infinitesimal increment in x then it increases by a nonstandard amount for any limited epsilonx, and again it’s infinite.

        • Aha, finally I get the point!

          A is preferred to B if x>x’ or if x=x’ and y>y’… Carlos, thank you for this excellent example. This is a common rule for deciding who wins a division in sports! Choosing which team will be the division champion, we might choose the team with the most wins, or, if there are two teams with equal wins then we choose the one with the best head-to-head record or something. The second parameter is completely irrelevant except in a specific case of the second parameter. It’s pretty unusual to choose things this way — the sports example was the only one I can think of at the moment — but it does happen.

          So this is indeed a case in which you can’t evaluate a “preference for A” (independent of the parameters that describe B) and a “preference for B” (independent of the parameters that describe A) and compare the two.

          The key characteristic that makes this example work is that the value you place on choice A is dependent on the parameters of choice B. I’ll concede that one could, in principle, make choices this way as a matter of course — hey, they’re your preferences, you can choose them however you like.

          This specific example can’t be written as an optimization problem that evaluates a preference value for A, and preference value for B, a preference value for C, etc., and then compares them — and I admit, I didn’t think there were examples like this! — but the problem can still be written as a classic optimization problem in which one maximizes an objective function. Assign each choice a preference score given by S = y*(x==max(x)), where the quantity in parentheses is 1 iff x has the greatest value of all of the choices, zero otherwise. You want to find the choice that maximizes S. So this particular example can still be solved by putting everything on the same scale — what’s your score S.

          But I do have to admit that although you _can_ solve the problem by defining an objective function and finding the choice that maximizes it, in this case one does not _have_ to do that, and in fact it is slightly awkward to do so. I think it’s still a true statement that you have to be _able_ to put the choices on a common scale, but you don’t have to do so, and I did think you had to. So: I agree, choosing which you prefer among A, B, and C does not necessarily require that you have a concept of Pref(A), Pref(B), and Pref(C) and then compare these! You’re right, Carlos, this is not a tautology after all. I’m still kinda blown away by that.

          Carlos, you asked “what is so hard for [me] to understand about this”…I dunno, how does one answer a question like that? It was hard for me to understand because I had no examples, just your assertion that it was possible. You kept saying there were cases where you could say you prefer A to B without having a preference for A that could be compared to a preference for B, but I could not think of any examples and indeed did not think there were any. Perhaps it should have occurred to me that the sports tiebreaker is such an example, but it didn’t. Your example (x > x’ OR x=x’ and y>y’) is fantastic.

        • > Carlos, you asked “what is so hard for [me] to understand about this”…I dunno, how does one answer a question like that? It was hard for me to understand because I had no examples, just your assertion that it was possible.

          I had given multiple examples of doing this using only (binary) preference relations – without needing to use any concept of an absolute preference scale.

          You can say that you prefer A over B, C, and D by using those binary relations to determine that you prefer A over B, that you prefer A over C and that you prefer A over D. No need for evaluating a universal preference function, no need for ordering the values. I still don’t know what part of that is unclear to you. _That_ seems a tautology!

          I even noticed that you don’t need to have transitive preferences for B, C and D – and therefore your “tautological” preference function doesn’t need to exist – to be able to prefer A to the rest. You didn’t comment on that.

          But even if the pref() function does exist, it’s not _needed_. It doesn’t _have to_ be used. It seems that you still don’t accept that even when utility representations exist people may not be evaluating such a function for every option to pick the best one – and in general they are not doing it.

          People often go through choices comparing them with the best one so far (as in the desserts example). Or maybe they iterate discarding somehow the least preferred choice one by one: if they were really evaluating the utility function and ranking the values they could pick the top choice right away.

          Remember your opening comment: “Too many economists (and their followers) believe their simplified models of the world are much closer to the real world than is actually the case.”

        • Carlos,
          You say “You can say that you prefer A over B, C, and D by using those binary relations to determine that you prefer A over B, that you prefer A over C and that you prefer A over D. No need for evaluating a universal preference function, no need for ordering the values. I still don’t know what part of that is unclear to you. _That_ seems a tautology!”

          IF you are trying to choose the globally best or preferred option then saying you can do so by pairwise comparison doesn’t tell us anything. No part of “traditional” optimization — optimizing an objective function — prevents you from doing this. Or, to put it another way: if you have an objective function that depends on the parameters that describe a choice, and you have a finite number of choices, then you can find the best choice by starting with one of the options, choosing another one, and retaining the one you prefer. Repeat until you have exhausted all of the choices and you will find that you are left with your preferred choice. The fact that you can find your preferred choice by doing pairwise comparisons does not disprove the assertion that you need a preference function.
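
          To spell that out with a sketch (the weights inside pref() are made up): the loop below lands on the best option purely through pairwise comparisons, yet every comparison is just two evaluations of the same preference function, so the pairwise procedure does not disprove the function:

          def pref(phone):
              # A made-up single preference function over (speed, cost, battery life).
              speed, cost, battery = phone
              return 10 * speed - 0.005 * cost + 0.5 * battery

          phones = [(2.4, 900, 20), (2.4, 300, 8), (1.2, 300, 20)]

          # Sequential pairwise comparison: keep whichever of the current pair I prefer.
          best = phones[0]
          for candidate in phones[1:]:
              if pref(candidate) > pref(best):
                  best = candidate
          print(best)  # same answer as scoring every phone and sorting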

        • > The fact that you can find your preferred choice by doing pairwise comparisons does not disprove the assertion that you need a preference function.

          Depending on what you mean now by _need_, these are the counterarguments. (I’m not sure if you object to them.)

          A) You don’t need to _use_ a preference function to check pairwise preference relations. (The fact that with some conditions infinitely many such representations exist is irrelevant.)

          B) A preference function doesn’t need to _exist_ for you to check pairwise preference relations. (It doesn’t exist if the preferences are not completely ordered and even then it may not exist.)

          On the other hand, you have not even tried to prove your assertion that a preference function is needed – which is of course not possible, but at least you would have become aware that you need additional assumptions to prove its mere existence.

        • You know what? I dunno what I was thinking when I said “somebody’s” example (with the x==x’ condition) illustrates that you sometimes can’t evaluate your preference for A and your preference for B and compare them. Of course you can. Actually I gave a function that satisfies that particular condition in one of the comments, but more generally there is nothing special about conditioning your preference for one choice on information about another choice.

          I still think you are conflating two issues: whether you have a preference function, and how to maximize it. Yes, you can maximize your function by pairwise comparisons (if there is a finite number of choices) but for each comparison you are applying some function to choose the winner. Do you prefer a phone that is (cheap, fast, short-lived) to (cheap, slow, long-lived)? You need to provide an answer to that, and then choose whether you prefer that answer to (expensive, fast, long-lived) or whatever. If you are claiming that you are ending up at your preferred solution then you need to have a consistent way of evaluating these.

          Hmm, this ties in with Andrew’s point that people discussing this sort of thing usually start with a set of axioms and then derive truths conditional on those axioms. I’m assuming that (for instance) if you are really choosing your preferred phone, or optimal phone or whatever we call it, then the order you compare phones doesn’t matter. You could first compare A to B, then compare the winner to C, then compare the winner to D; or you could do some other order; and either way you would end up choosing the same phone (unless there are multiple phones between which you are completely indifferent, and which you prefer over all others in which case you will end up with one of them). To me that is part of what it means to choose your preferred phone. But if order matters — maybe you haphazardly choose which order to do the comparison, and always choose the fourth phone you consider — then OK, you’ve got me, you don’t need a preference function to do that. Uh, actually I would argue that that particular rule would really just mean you are indifferent to what phone you get.

        • > I dunno what I was thinking when I said “somebody’s” example (with the x==x’ condition) illustrates that you sometimes can’t evaluate your preference for A and your preference for B and compare them.

          If you mean that you can write a preference function

          pref(x,y)=…

          such that given any two points A=(x,y) and B=(x’,y’) – with x,x’,y,y’ arbitrary real numbers – you can evaluate pref(x,y) and pref(x’,y’) and compare the values recovering the preference relation

          A is preferred to B if x>x’ or if x=x’ and y>y’

          could you kindly write down pref(x,y)?

          If you mean that for two given points you can write a function, that’s trivial. But then you won’t have a “consistent” function that represents your preference.

          While you’re at it, please write down as well the values

          pref(A)=…
          pref(B)=…
          pref(C)=…
          pref(D)=…

          giving a preference function representation of the preference relations

          A ≻ B
          A ≻ C
          A ≻ D
          B ≻ C
          C ≻ D
          D ≻ B

          which you will agree – I hope – that determine a preference for A over all the alternatives B, C and D.

          > Yes, you can maximize your function by pairwise comparisons (if there is a finite number of choices) but for each comparison you are applying some function to choose the winner.

          But not necessarily the same function (see above). Even if such a function exists you cannot rule out that the comparisons may be done evaluating different functions which are not the single-function representation of the preferences.

          Evaluating a “consistent” function and ordering the choices may not even be an option no matter how strongly you insist that one needs to do so. (For example if you prefer the heavier of three objects and you have just a balance scale that can compare two objects but not determine the weight.)

          Unless – once again – “needing to use a preference function” includes using pairwise comparisons to establish a ranking that you call a “preference function”, and then you argue that one “needs” to evaluate that ranking to rank the options (!) and pick the best one.

        • Carlos,
          This is starting to look ridiculous even on my computer, the text much narrower than the giant gutters on left and right. I hope this comment shows up in the right place.

          No, I cannot write a function f(x,y) that does what you say. But I can write a function f(x, y, x’, y’) that does what you say. So can you. My conceptual error earlier was to think that the preference for A had to depend only on parameters of A whereas in fact the preferences can, and sometimes will, depend on all available information. If I’m trying to decide what basketball team I prefer to wager on, it’s perfectly kosher to consider not just each team’s “ability” but also the specific matchups between the teams — does team B have a center who can guard Team A’s center, and so on. The objective function I’d be trying to maximize would be the expected payoff. Maybe I bet on one game per week, so I’d have a ‘preference’ (i.e. an expected payoff) that I could evaluate for each team that competes this week.

          I’m so baffled by your balance scale example that it makes me wonder if we’re still talking past each other somehow. If I understand it correctly then you have a univariate objective function: you only care about weight. You can weigh objects against each other in order to find the one with the maximum weight. By doing so you are maximizing the objective function.

          And I have never claimed that you need to order the options in order to pick the best one. If I am trying to select the heaviest option among (egg carton, book, anvil, lima bean, safety pin) then I do not need to worry about whether the lima bean is heavier or lighter than the safety pin.

        • If the points are fixed it’s easy, isn’t it? The canonical way would be to check the preference relation for every pair and count for each point the number of points that it’s preferred to: f(3,5)=2, f(3,0)=1, f(-2,10)=0. [Again, this is not a function that you NEED to evaluate for each of those points to know which one you prefer. You can do that by evaluating the pairwise preference relation criteria without USING the utility representation.]
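
          As a rough sketch of that count-based construction, for this fixed set only:

          def prefers(a, b):
              (x, y), (x2, y2) = a, b
              return x > x2 or (x == x2 and y > y2)

          points = [(3, 5), (3, 0), (-2, 10)]

          # For a fixed finite set: count how many other points each point is preferred to.
          f = {p: sum(prefers(p, q) for q in points if q != p) for p in points}
          print(f)  # {(3, 5): 2, (3, 0): 1, (-2, 10): 0}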

          But it’s not so easy in the general case! Can you write down a preference function f(x,y) that evaluates to a higher value for the preferred point for any pair in [0 1]x[0 1]?

    • Carlos, I’m still trying to understand where you’re coming from.

      I think you’re saying there is an important distinction between “I prefer A to B” and “my preference for A is greater than my preference for B.” The latter requires some sort of preference function, but you’re saying the former does not. Do I understand that correctly, and, if so, could you think of another way to say it to try to explain it to me?

      • Choose some a in X such that
        f(a) > f(b)

        for all b in X

        vs

        Choose some a in X such that

        g(a, b) = True

        for all b in X

        No mapping onto the reals is needed, just binary relations in X.

        These imply different algorithms/heuristics. In multidimensional cases without continuity/independence in the von Neumann-Morgenstern sense, the second one may work to describe “rational” preferences where no preference function exists, so no real-valued function is being optimized. Otherwise, the two are different pictures and the latter can be said to be “optimizing some function.”

        • Somebody,
          How is the latter different from a function with a ridge at (a,z) for all z? That is, you care about the first parameter but are indifferent to the second?

          To put it another way, if g(a,z) =1 for all z, and some number lower than 1 for g(c,z) for c not equal a, isn’t that the same?

  5. I think it’s wise to think first about societal choice and understand that first.

    Then you have to face issues such as Arrow’s theorem (and then the workarounds, and then the issues with the workarounds), and that preferences are in actual fact ‘inconsistent’ and intransitive but yet society isn’t an impoverished money pump (or is it? And even if partly so, what stops the worst case?). And society will simply not make some choices (it seems it won’t face up to some of the probabilistic/gambling choices needed to get to classic decision theory); you can’t point a gun to its head and say ‘you all must jointly decide on this hypothetical’. And so very much dependence on framing.

    Because there are real problems there. You might say that individual choice is different because – unlike a society – there it is a single (hopefully rational) entity with a single united interest and which should have usefully stable preferences. But maybe that’s true for you. It’s not at all true for me – except as an approximation – and I doubt I’m unique in that.

    Solve societal choice first.

    • Oh sure, this is a much much harder problem. If we could agree on the objective function things would be a lot easier, but that’s not gonna happen.

      Indeed we don’t even agree that there is an objective function to be optimized. It’s not just that we don’t agree on what it is, we don’t agree there is one.

  6. Phil:

    You made a lot of good points rejecting your own argument – especially the point that the value available in a given decision is often a moving target. A given problem might have to reach a certain level of value before people take it seriously enough to do a serious dimensional reduction. Daniel’s point is also true: for many decisions, people just decide.

    One thing that’s missing in your “single axis” thinking: my linear algebra is a bit rusty but as I recall from chemistry, you can represent n+1 chemical components of a system in n dimensions, but you have to project through one chemical component or some location in composition space, which introduces assumptions or distortions or both; so while you *can* represent multiple dimensions on a single axis, you *cannot* normally represent them accurately. So if you have to remember your assumptions and take account of the distortions, then in a conceptual sense you haven’t eliminated the other dimensions.

      • “Preferences are only ordinally meaningful”

        Before they are “preferences” they can be expressed as quantitative comparisons along multiple axes of functionality. You can project them onto a single dimension to determine the relative utility – given the assumptions of the projection – then of course if you want you can remove the actual coordinates and simply assign each point on the line an ordinal number.

        Or hey you can just draw a line and put them on the line, giving them ordinal numbers, in which case all the preceding discussion is moot.

  7. Perhaps a converse question to ask is how often humans solve optimization by partial ordering. In statistics, many optimization problems we encounter are continuous, or can be embedded into a bigger continuous problem, and modern statistics and machine learning are especially characterized by their use of the gradient in optimization. Discrete problems seem overall much harder, or typically require some customized algorithm. When humans make decisions, or at least as far as I can recall, it is often some ordering that is involved rather than the gradient. Maybe that is why human decision making does not scale to high dimensions.

  8. Maybe related: Tversky’s “elimination by aspects” represents both the multidimensional (or multi-attribute) character of the choice options and the unidimensionality of the preference scale. A condition that the theory implies is moderate stochastic transitivity of the pairwise choice probabilities:

    If P(A, B) >= 0.5 and P(B, C) >= 0.5 then P(A, C) >= min{P(A, B), P(B, C)}

    Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79(4), 281-299.

  9. Phil:

    Another important factor in optimization of choices: optimization might not matter that much at the individual level, since the market has already trimmed the choices to the highest value choices, and in any given category the difference in value between the choices is small and/or obvious.

  10. The mathematics about preference functions – existence or nonexistence, properties, and optimization thereof – are all very interesting. But let’s not forget that complex social decisions need to get made and that preferences (whatever forms they take) are often in conflict and do involve ethical dimensions. I agree that optimization of social decisions is important – but beware the constraints you impose on these social decisions. If I believe in the rights of species to exist, and believe that right usurps any financial measure of value – how do you incorporate that into a social decision? You might be inclined to point out that my actions reveal that I clearly put a finite value on species existence and, of course, you are right. But I don’t think that allows you to use a derived value from my private actions as a measure of the value I attach to public decisions about endangered species. And I don’t believe that the mathematical optimization problem is a good process for making some of these social decisions. Notwithstanding Daniel’s concerns about irrational and careless individual decisions (which are rampant), these are still people and I am reluctant to dismiss their values so quickly (though, trust me, I would love to dismiss many of those values).

    A philosopher might suggest that some ethical decisions need to be resolved through a different process than optimization. Could that lead to poor decisions? Sure. Perhaps even often. But I’m still reluctant to replace the messy human social interaction with a utility maximization problem. Daniel has suggested that people should be engaged in defining that function – I agree that would be nice and could lead to much better decisions. But here in the real world, it will be a group of “experts” acting as if they know what different elements of the public believe and how to trade off those values. It was once said that ecology and democracy were incompatible (there was a book, Ecology and the Politics of Scarcity, that made this point) and I think there is an essential truth in that. We may have to choose between giving everyone their voice and having some select group make decisions for them.

    I believe these issues are not new. I think there is a history of political philosophers who have considered these problems (and I am woefully ignorant of much of their work). I am reluctant to think we can or should ignore this history in a belief that the mathematics will resolve the issues of differing ethical values and the needs to make interdependent decisions.

    • Dale writes: “I am reluctant to think we can or should ignore this history in a belief that the mathematics will resolve the issues of differing ethical values and the needs to make interdependent decisions.”

      I agree that there is a long history of philosophers considering these decisions from a different standpoint. However, you still seem to be of the opinion that optimization is justified mathematically (and less messy). As demonstrated above several times, the mathematical axioms on which optimization is justified generally fail. Preferences are not generally transitively ordered and do not map onto anything (no function). In most of the history of Western thought “rational” is not confined to utility maximization. I think it is strange that economists seem to think that behavior that is not maximizing some function is irrational. The Golden Rule or Kant’s Categorical Imperative or the Ten Commandments are not maximizing anything. It is bizarre to pretend that in every situation there is a rank ordering of all the alternatives. Is there really an answer to the question of the best allocation of the US federal budget? Those pretending such problems can be optimized are the ones being irrational. Moral reasoning proceeds much more as Kant described than Bentham. It is one reason that most decisions should be left to a democratic and legal process or when possible left to individual choice, because there is no “best”; there is just the impermissible, and choosing in a fair and transparent way.

      • “The Golden Rule or Kant’s Categorical Imperative or the Ten Commandments are not maximizing anything. ”

        ???

        I believe they’re attempting to maximize harmony and general well-being of society? No? What am I missing?

        “Is there really an answer to the question of the best allocation of the US federal budget? ”

        At some general level and given some set of priorities there is an answer to that question.

        • Jim:

          Kant’s thinking is deontological. That tradition is not consequentialist. The consequences do not matter to the decision for the deontologist — only conformity to the rule. Utilitarianism is a form of consequentialist moral reasoning. The Golden Rule or Kant’s Categorical Imperative will give the same result regardless of consequences. You may disagree that deontology is the correct moral theory and argue that rules like the Golden Rule have to be justified on a consequentialist basis, as Mill did, but it is not correct to claim that no such deontological tradition exists. On its face rule-based decision making is deontological.

          You write, “At some general level and given some set of priorities there is an answer to that question” [as to whether a single best allocation of all federal resources exists].
          You are assuming what is in dispute, utility theory or some other quantity we can maximize. As noted above this requires several very restrictive axioms to be true, i.e., completeness, transitivity and a function. (I would add the assumption that utility is a measure space as well.) There is simply no reason to believe these assumptions hold generally. That has been demonstrated above. If by “some general level and given some set of priorities,” you just mean that the axioms hold, then fine, but that is circular.

        • Steve, seriously?

          You can claim that the Golden Rule and the Ten Commandments are deontological if you like. However, they have obvious practical consequences! Do you deny that? :) I mean, here you are, claiming things are done because they must be morally so, and claiming that the fact that they are ultimately beneficial for society is an afterthought! That’s hilarious, Steve.

          “You are assuming what is in dispute, utility theory or some other quantity we can maximize”

          No, you are assuming that. But whatever your assumptions, it’s startling that you claim that all this fighting over the Federal budget has no consequence in maximizing utility. It obviously does – for some given function! Is it the absolute ideal maximization? Surely no one believes it is or even close. But that’s because people have to compromise between their various ideas of maximization. Seriously, why do you think Jayapal wants to add $80B in IRS agents? For no reason??

          It’s hard to take a claim like that seriously, it’s so obviously wrong.

      • I think you may be misclassifying the Golden Rule here. I don’t think it is an objective to maximize; I think it is a necessary condition that holds when someone is maximizing. It even nests the constraint: if I want to be treated like a king, I have to treat others like kings, which, pfft, forget that. I think the objective is to maximize “living the most proper life,” subject to many constraints, with the Golden Rule as the condition that holds at the maximum.

        On the allocation of the federal budget, I don’t think anyone says that it can be maximized in this way. Something like the second fundamental theorem of welfare economics applies here.

    • “I would love to dismiss many of those values”

      :)

      the “rights of species to exist” would be news to the billions of species that once existed but no longer do, including many human ancestors. This is a great example of a manufactured “right” – one that is clearly rejected by 4.5 billion years of history on Earth alone, and who knows, maybe another 9+ billion years elsewhere in the universe.

      This is almost stooping to the level of “nudge” – selecting some bizarre concept out of the milieu because it’s socially popular and imbuing it with a natural fundamental basis that in reality doesn’t exist in any way, shape or form.

      • jim
        So you don’t believe in free will. Then humans have no responsibility for our decisions, since we are clearly just following some biological process that is beyond any control. Of course, I can’t prove you wrong; I just don’t want to live my life that way.

        • Dale,

          You’re turning a simple and accurate observation about species’ existence into a categorical statement about free will.

          “I just don’t want to live my life that way.”

          But you already do. You stand idly by while millions of mice are ruthlessly slaughtered and consumed by hawks, but you’re reluctant to let the collective live as you live. This is a common theme in your comments and, though not stated outright, a common theme for many frequent commenters here.

          “Humans have no responsibility for our decisions”

          Responsibility to whom, or rather to what? That’s the question. Most people accept that humans share a responsibility for the well-being of humans. But do we have a responsibility for the well-being of other forms of life, beyond the benefits they provide to us? Most people accept that we shouldn’t wantonly destroy other life. But do we have a responsibility to protect a species just because it exists? Even the ESA doesn’t go that far: protection of biodiversity has always been advertised as a direct human benefit, a protection of resources that can be used by humans in the future.

          I personally see no reason why humans should protect other species just because they exist. Other species don’t make any effort to do the same, so the need for this protection is hardly rooted in any natural aspect of the world.

        • jim –

          > I personally see no reason why humans should protect other species just because they exist.

          No reason why you should, but note that my wish to protect other species is neither solely because of the benefits they bring to the human species nor simply “just because they exist.”

          > Other species don’t make any effort to do the same,

          Hmmm. I’m not sure how you’re so confident about that statement.

        • Joshua: obviously you’re entitled to whatever views and beliefs you’d like to hold regarding any subject, including species. If you want to feed the squirrels, by all means, do so.

          However, we’re discussing the subject in the context of laws in which some people are attempting to force **other people** to accept “rights” granted to other species. In this context, the primary argument for the ESA was that species are a reservoir of potential benefits to humanity.

          Regarding your last comment, even other members of the same species act to eliminate different groups of that species. The record shows that indigenous Americans were tolerant of Europeans for only a short time, then took ample pains to get rid of them. Unfortunately for them, at least, they were overwhelmed by technology and numbers.

        • This reminds me of a case I was involved with long, long ago. We were assisting a nuclear power plant in relicensing its cooling system. Once-through cooling systems kill a lot of fish larvae and juvenile fish through entrainment and impingement. The point we made early on is that the number of fish killed doesn’t really matter at all, since the ecosystem can only support so many adult fish, and as long as that niche is filled the human use of these fish is sustainable. (There are a whole host of nuances to this statement which are unimportant for the point I’m about to make.) One of our opponents objected that this doesn’t take account of how this affects the larvae and juvenile fish who are killed. We successfully pointed out that, while one could take the view of the poor dismembered larvae, that’s not the way the law worked, and if it did work that way, we’d have to reorder society in a lot more ways than simply changing the cooling systems of nuclear power plants.

          The really funny part came when we *valued* the fish through their value in human use, either directly as food or as intermediate inputs to things that ate these fish and were eventually eaten by humans. The objection that we weren’t valuing the fish as they would value their own lives led to… an interesting discussion.

        • Don’t know if you’ve read “The Overstory.”

          It’s basically about how much goes on with trees that we previously were totally unaware of. One sub-theme is that the way that we attach different “value” to different species of animals, and even plants, is pretty much totally arbitrary.

          I have to say it prompted me to think about that issue in ways I haven’t previously thought about. Given the amount that trees contribute to and “communicate” with the environment around them, how do we really distinguish their value to our on-going existence from our own?

        • Haven’t read it. Sounds interesting.

          As I said, there are nuances in this problem which would not have contributed to my main point. While I still believe that the only source of value is ultimately someone’s human value, there are plenty of examples where you can make an irreversible mistake in valuation — in turn, that requires, in some cases, an explicit risk posture which is indistinguishable in practice from valuing Nature on its own terms. That said, I also think that in practice such cases are very rare.

      • It’s easy to make an argument that species don’t have a right to exist, but to make that argument based on the fact that many species have gone extinct in the past is not the way to do it. If that were the standard then there would be pretty much no rights at all. We can’t say you have a right not to be enslaved, because look at all the people who have been enslaved in the past. You don’t have a right to free speech, because look at how many people had their speech suppressed in the past. And so on.

        The idea of “rights” is a construct of our ethical system. As Mark Twain wrote: “Man is the only animal that blushes. Or needs to.”

        • Phil:

          The distinction between a “right to exist” for species and a right to speak for humans is that there is no way to prevent species from going extinct, regardless of what ethical constructs are imposed. It is not possible to create a “right to exist” for species, any more than it is possible to create a “right to be born” for any individual of any species.

          That’s aside from the practical absurdity of a “right to exist” for species decided on and enforced by humans, which is already creating situations where sea lions are being shot to protect salmon and barred owls are being shot – not because they directly harm spotted owls but because they outcompete them for the same resources! Who has more of a “right to exist”? An individual sea lion or its prey? An individual spotted owl or a barred owl?

          Thankfully the environmental community hasn’t yet woke to the plight of the cockroach!

  11. At the human level, it would be irrational to try to choose the optimal phone. The differences among most phones, for any or all of the various criteria, may be trivial in practice, or unpredictable/unverifiable without doing far too much contemplation/research on highly technical issues, or even just an illusion (lie) conjured by advertisers. And there are regulatory systems in place to ensure a minimum level of quality/usefulness. For general use, then, one’s unlikely to get a truly bad phone from an established company at a price that is not obviously low, or to overpay egregiously for a phone at a price that is not obviously high. But we feel more satisfied with our phone choice if we go through a ritual (cargo cult!) of decision-making, so manufacturers and advertisers and salespeople provide us with reasonable-looking criteria for justifying an expensive purchase. In reality, I buy an Apple over an Android because my friends did, and I’m satisfied that the extra money I paid was for something substantive rather than superficial. But in a market where phone choice doesn’t really matter, superficial criteria are the only true way to distinguish one phone from another, unless you’re willing to put in the effort to acquire the fundamental knowledge required to evaluate product specs and fine print. If you’re not using your phone for some specialized utility, it’s not worth it, particularly given that you have 20 other choices to make this week alone that are at least as consequential and require at least as much effort.

    Obviously, phones are an arbitrary example here. More generally, the fundamental objective in most decision-making processes is to ensure that our ultimate decision is sufficiently satisfactory as to prevent a) meaningful harm that outweighs benefits, and b) regret. We can generally put in more effort to make a better choice, but we will eventually hit diminishing returns, and even determining whether our decision-making process is an optimal process is, itself, a bear. The opportunity costs of making all the truly optimal choices in life would be far greater than randomly picking one of the many perfectly adequate phones and moving on with your damn life.
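
    A toy way to see those diminishing returns (just a sketch, with made-up numbers: 20 phones, one unit of research effort per phone examined, and a hypothetical “good enough” threshold): compare exhaustively scoring every phone against stopping at the first one that clears the bar.

    ```python
    import random

    random.seed(0)

    # Hypothetical market: 20 phones whose true "quality" we can only learn
    # by spending one unit of research effort per phone examined.
    phones = [random.uniform(0.6, 1.0) for _ in range(20)]
    effort_per_phone = 1.0

    # Strategy 1: full optimization -- examine everything, take the best.
    best_quality = max(phones)
    optimize_cost = effort_per_phone * len(phones)

    # Strategy 2: satisficing -- examine in arbitrary order, stop at the first
    # phone that clears a "good enough" threshold.
    threshold = 0.8
    examined = 0
    for quality in phones:
        examined += 1
        if quality >= threshold:
            satisfice_quality = quality
            break
    else:
        # Fallback in case nothing clears the bar (doesn't trigger with this seed).
        satisfice_quality = max(phones)
    satisfice_cost = effort_per_phone * examined

    print(f"optimize:  quality {best_quality:.2f}, effort {optimize_cost:.0f}")
    print(f"satisfice: quality {satisfice_quality:.2f}, effort {satisfice_cost:.0f}")
    # Typically a slightly worse phone for a fraction of the effort.
    ```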

    • *And even if you do need the phone for a specialized utility, unless you’re Edison making the lightbulb, someone smarter than you has already put together a reasonable set of criteria for that. Your effort should be pointed toward finding their website, where they’ve posted the criteria and the optimal choice given those criteria.

    • I agree with the point that putting the effort into finding “the best phone for you at this moment” might not be worth it for a lot of people, because the difference between first- and fifth-best may be small and the effort involved would be large. This ties into my point about buying a drill: I’m not really trying to optimize my choice of drill, I’m trying to optimize some function that takes into account both the drill and the effort I spend choosing it. But this might still be thought of as an optimization problem.

      As for your specific example, though, I disagree that phone choice “doesn’t really matter” as you put it. Choosing iPhone vs Android certainly matters to me. Some people want a really large phone, others want one they can easily fit in a pocket. Some people care a lot about the camera, other people are a lot less picky. If we imagine that the next time I’m getting a phone I can either buy my own, or I can choose to be given one for free that will be randomly selected from the twenty most popular phones in the world but I am not allowed to replace it for a year, I would certainly choose to buy my own.

      Perhaps all phones are about equally good to you, but they aren’t to me!

  12. TLDR: No.

    Let us suppose we’re shown two pictures. Choose the duck and you’ll get a punishment (“Fear of the duck!”, as Iron Maiden sang); choose the hare and you will receive a neat prize. Now, as probably most could guess, the pictures we’re shown are both the same famous ambiguous figure, the one that can be seen as either a duck or a hare (or as a combination of both, or something else).

    What I’m getting at here is that the objects that we base our decisions on are not fixed. Rather these “decisional objects” are _construed_ from a complex world: we’ll choose the picture that we _choose_ to see as a hare. These constructs are based on our goals, beliefs, attitudes, culture, other people and so on.

    Now, it might be tempting to say that maybe we can construct a “metalevel”, for example “context”. We might then want to condition these choices on that “context”. For example, someone said that they like pineapple on a pizza but not otherwise. If we condition the liking of pineapple on the dish it is in, it seems that the problem has been avoided. Preference for pineapple is fixed in these contexts and our theory has been salvaged!

    But I will argue here for something more radical: that these contexts are also part of that construction. On a very low level there is no such thing as context, and, for example, the statement “pineapple is good on pizza but not in liver casserole” is just a useful heuristic, or an abstraction, that has been created _so that we could make reasonable context-dependent decisions_. In other words, it is not that decisions arise from these contexts per se but that these contexts have been construed/abstracted in order for us to _make_ decisions. This is why these kinds of heuristics seem so useful, but we shouldn’t think of them as anything more than that; we shouldn’t think of them as the fundamental reason behind decisions or preferences.

    Another commenter wrote that societal choice is a simpler problem. What I just said applies here too. It’s a bit like that famous quote attributed to Stalin: one death is a tragedy, a million deaths is a statistic. But does killing a million people remove the “tragediness” of the single deaths that constitute the total number of deaths? Of course not. When tallying the deaths into a single number, the individual tragedies are, in a way, _removed_; an abstraction is construed, perhaps, to remove guilt. Here again we see that it is not that tallying deaths _really_ removes the tragediness of the singular deaths, but that the tallying is done in order _to abstract the singular tragedies away_.

    When we abstract people into “societies” these abstractions are often made for the practical reason of (political or otherwise) decision making. So of course it seems like “societal choice” is a simpler problem – that’s because at that level we are using abstractions that are constructed in order to make those decisions easier.

    Simplicity of decision making is the goal, not the (causal) reason.

    And now we have reached the lowest circle of this hell. Of course decisions can be thought of as utility functions or whatever, if we abstract them in such a way that they can be thought of as utility functions or whatever. But the painting is still not a pipe and never will be.

    • Duck or hare,
      You may have a good point but your example doesn’t illustrate it. It’s a funny example of something but I’m not sure what it says about optimization.

      I’m not even sure of the terms of your example. Are we to assume we will be punished whichever picture we choose, or will be rewarded whichever picture we choose? Or that the person making the offer intends to punish us if we pick a specific picture and reward us if we pick the other, but we have no way of knowing which?

      In these cases we either don’t actually have a choice (and thus can’t optimize) or we have insufficient information to optimize. Neither of these contradicts the fact that we have a preference function that makes us prefer the prize to the punishment.

  13. > My conceptual error earlier was to think that the preference for A had to depend only on parameters of A whereas in fact the preferences can, and sometimes will, depend on all available information.

    Ok, so there may be no preference function that can be evaluated separately for each option A, B, etc. to then compare the values pref(A), pref(B), etc. that represent your preference for A, B, etc. on a single scale. I was not sure if you had changed your mind again.

    You no longer “need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one”. What you need is to be able to order the choices somehow and by doing so you put them onto a consistent dimension.

    And in fact to pick the best you don’t even need to put the choices in order. You only need to verify that one choice is better than each one of the other choices and this doesn’t need to be done evaluating _one_ preference function. Things do not need to be put onto a consistent dimension – beyond the separation of the choices into two distinct groups with one group containing a single “preferred” element.

    Or maybe you don’t agree with the previous paragraphs; it’s difficult for me to understand what you are claiming at each moment. In that case, please keep reading.

    > I’m so baffled by your balance scale example that it makes me wonder if we’re still talking past each other somehow. If I understand it correctly then you have a univariate objective function: you only care about weight. You can weigh objects against each other in order to find the one with the maximum weight. By doing so you are maximizing the objective function.

    Let’s stay with the balance scale example for a moment. Maybe you can explain the following in more detail:

    > In order to decide which you prefer among A, B, and C, you need to evaluate your preference for A, B, and C so that you can compare them.

    What value do you get when you evaluate your preference for A (almond)?

    What value do you get when you evaluate your preference for B (banana)?

    What value do you get when you evaluate your preference for C (coconut)?

    What happens when you compare them?

    • >And in fact to pick the best you don’t even need to put the choices in order. You only need to verify that one choice is better than each one of the other choices and this doesn’t need to be done evaluating _one_ preference function. Things do not need to be put onto a consistent dimension – beyond the separation of the choices into two distinct groups with one group containing a single “preferred” element.

      Now, suppose that the “choices” vary “continuously”, let’s say in the two dimensions of grams of gold and grams of platinum. Suppose there’s a “feasible frontier” which is a curve starting at (0,p) and going to (g,0) which has to do with what combinations are possible to acquire through various chemical reactions with available raw ore materials etc.

      Is it possible to evaluate some preference function pref((g1,p1),(g2,p2)) for every feasible pair of points, and yet not have it correspond to a “utility”? Real question, I don’t know the mathematical answer, but my inclination is to do something with nonstandard analysis: parameterize the feasible curve by some parameter s, divide s into a nonstandard number of infinitesimal increments, set a value U(s=0) = 0 arbitrarily, and evaluate the preference function between adjacent pairs of s values; if the preference for s+ds is greater than for s then U(s+ds) = U(s) + ds, and if it’s less then U(s+ds) = U(s) – ds. Then determine whether there’s a standardization of U. It seems like there would be.

      • Assuming the curve is parameterized such that s goes from 0 to 1, q(s) is a point in the plane along the curve, and U(q(0)) = 0 at one end of the curve, then if we think of U as a function of s, U(s) is less than or equal to 1 and greater than or equal to -1 for all of the s_i, i from 0 to N with N nonstandard, since N*ds = 1 and N*(-ds) = -1 and U increases or decreases by ds at each increment.

        Therefore, at every s_i for i from 0 to N, U(s_i) is a limited number and st(U(s_i)) exists as a standard number U*.

        Let ss be any value between 0 and 1, let q(ss) be the point along the parameterized curve, and define U*(q(ss)) by U* = st(U(s_i)) for any s_i such that st(s_i) = ss. We can prove that U(s_i) ~ U(s_j) whenever both s_i and s_j are infinitesimally close to ss, and hence the standardization exists: if s_i and s_j are infinitesimally close to ss then they are infinitesimally close to each other, so they differ by m*ds for some m with m*ds infinitesimal, and the change in U between s_i and s_j is at most m*ds, which is infinitesimal; hence the standardizations are equal.

        It’s been a while since I’ve tried to use IST formally, and I’ve had one glass of wine with dinner, so I wouldn’t stake my life on it, but I believe we have that being able to evaluate a preference for any two points along the curve implies that there exists U(q) for q along the curve such that pref(q1,q2) iff U(q1) is greater than U(q2).
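
        A finite-N numerical sketch of this construction (the frontier, the hidden score behind pref, and N are all made up for illustration, and in this toy version the preference happens to be monotone along the frontier, so the reconstructed U recovers the whole ordering): walk the parameterized curve, bump a running U up or down by ds according to the adjacent-pair preference, then check whether the resulting U reproduces the pairwise preferences.

        ```python
        import numpy as np

        # Hypothetical feasible frontier from (0 g gold, 10 g platinum) to
        # (20 g gold, 0 g platinum), parameterized by s in [0, 1].
        def q(s):
            return np.array([20.0 * s, 10.0 * (1.0 - s)])  # (grams gold, grams platinum)

        # Hypothetical pairwise preference: True if point a is preferred to point b.
        # It secretly comes from a hidden score, purely so we can check the result;
        # the construction below only ever asks pairwise questions.
        def hidden_score(p):
            return 3.0 * p[0] + 5.0 * p[1]

        def pref(a, b):
            return hidden_score(a) > hidden_score(b)

        # The construction: U = 0 at s = 0, then U goes up by ds when the next point
        # along the curve is preferred to the current one, and down by ds otherwise.
        N = 1000
        ds = 1.0 / N
        s_grid = np.linspace(0.0, 1.0, N + 1)
        U = np.zeros(N + 1)
        for i in range(N):
            step = ds if pref(q(s_grid[i + 1]), q(s_grid[i])) else -ds
            U[i + 1] = U[i] + step

        # Check: does the reconstructed U order sampled pairs of points the same
        # way as the pairwise preference does?
        rng = np.random.default_rng(1)
        pairs = rng.integers(0, N + 1, size=(200, 2))
        ok = all(
            pref(q(s_grid[i]), q(s_grid[j])) == (U[i] > U[j])
            for i, j in pairs
            if U[i] != U[j]
        )
        print("U reproduces the pairwise preferences on sampled pairs:", ok)
        ```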

  14. Carlos, you say “And in fact to pick the best you don’t even need to put the choices in order. You only need to verify that one choice is better than each one of the other choices and this doesn’t need to be done evaluating _one_ preference function. Things do not need to be put onto a consistent dimension – beyond the separation of the choices into two distinct groups with one group containing a single “preferred” element.

    Or maybe you don’t agree with the previous paragraphs; it’s difficult for me to understand what you are claiming at each moment. In that case, please keep reading.”

    I do disagree with those paragraphs above. I’m sorry I was unclear on this. I did think you (or rather ‘somebody’) had found an example of a case where you really truly did not need to be able to express things in terms of a single dimension, but I was a bit addled when I thought that.

    I do agree with part of one of the paragraphs: you do not need to put the losing choices in order. I’ve said this a few times, using the example of trying to choose the heaviest item from anvil, book, lima beans, etc. Actually I had said this from the very start, saying things like “you need to be able to put the choices in order, or at least to say A > (B, C, or D).”

    As for almond, banana, coconut, I remain baffled. We are talking about weight, yes? You agree that in this instance we have a single dimension: we prefer the heaviest one? I’m not sure why you think this would be hard for me. Actually I doubt I would need the scale, unless you have an almond the likes of which the world has never seen, and ditto a coconut, but OK. Choose two at random or haphazardly, let’s say the coconut and the almond. We use the balance you have graciously provided. We prefer the coconut. Now compare the coconut to the remaining choice, the banana. Again we prefer the coconut. We have concluded that the coconut is the heaviest of the choices, and therefore maximizes our preference function. We do not need to compare the banana to the almond, but of course we are able to if we want. We have put all of the choices on a single dimension, the “weight” dimension, and have chosen our preference.
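
    A tiny sketch of that procedure (the weights are made up; the `balance` function stands in for the scale and is the only operation used): find the heaviest item purely by pairwise comparisons, never assigning a numeric “preference” to anything, which is exactly the tournament described above.

    ```python
    # Hypothetical weights in grams; the procedure below never reads them
    # directly, it only asks the "balance scale" which of two items is heavier.
    true_weight = {"almond": 1.2, "banana": 120.0, "coconut": 1450.0}

    def balance(x, y):
        """The balance scale: returns whichever of the two items is heavier."""
        return x if true_weight[x] > true_weight[y] else y

    items = ["almond", "banana", "coconut"]

    # Tournament: carry the current winner forward and compare it against each
    # remaining item once. The losers are never ranked among themselves.
    favorite = items[0]
    for challenger in items[1:]:
        favorite = balance(favorite, challenger)

    print(favorite)  # coconut
    ```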

  15. Am I correct in thinking that the reason you need preferences to be transitive, etc. is so you can rule out arbitrary prices?

    How does advertising fit in to your model? Are advertisers maximizing their utility by changing consumer preferences to become more irrational?

    How does the failure of Covered Interest Parity fit in with your model? Why do risk-free arbitrage opportunities persist in foreign exchange markets, when utility maximization for financial firms dictates that they ought to close those free-lunch spreads? How are financial firms leaving money on the table by violating CIP, and what are the implications for rational pricing theory?
