Economics as a community

We had an interesting discussion recently regarding a way in which economics is a community, more so than other academic fields such as epidemiology, statistics, political science, etc.

This arose last year in the midst of the coronavirus struggle, when Joseph Delaney quoted Tyler Cowen as saying:

But I ask you, where are the numerous cases of leading epidemiologists screaming bloody murder to the press, or on their blogs, or in any other manner, that the most commonly used model for this all-important policy analysis is deeply wrong and in some regards close to a fraud?

As Delaney noted, statisticians and epidemiologists have been writing about this on their (our) blogs! Maybe Cowen had been spending too much time following pundits on Twitter?

I asked Cowen what he thought about this, and he replied:

Not remotely like what economists do when say Judy Shelton is nominated for the Fed. And Shelton is not even the most important event of our lifetimes.

Interesting point. I don’t think that statisticians and epidemiologists—or, for that matter, sociologists or political scientists or computer scientists—have the sense of being a community in the same way that economists do. Economists can, to first approximation, “speak with one voice.” They identify with their profession. Even the “heterodox” economists seem keenly sensitive that they are part of the economics community, even when feeling they are in the minority on some issues. This is a strength (or, sometimes a weakness) of the economics profession, that economists think of “economist” as their primary identity. I don’t feel like statisticians or epidemiologists have that same feeling of loyalty/love/hate/belonging to their professions. Maybe I’d say that economics has some aspects of an ethnicity, in the way that we see lapsed Catholics, or Jews who are annoyed at Henry Kissinger, or whatever. A sense of community and belonging, even if you don’t feel like you completely belong.

Perhaps one indication of this is that I’ve never heard of Judy Shelton! And I bet that most statisticians have no idea who’s in charge of the Census Bureau or the Bureau of Labor Statistics.

I think this is an important general issue to consider, the cohesion of academic fields. I remain struck by the idea that economists feel a certain responsibility for each other, and expect practitioners of other fields, such as epidemiology, to do the same.

P.S. Lots of interesting discussion in comments, which leads me to want to clarify one point. The above post is not intended to be a criticism of economics or economists. It’s just an observation, or an impression I have, that the field of economics is more cohesive than other academic fields in social science. Cohesion is neither good nor bad in itself.

225 thoughts on “Economics as a community”

  1. I mean, I’d suggest “economists follow economics blogs more closely than statistician blogs” as perhaps a simpler explanation. I don’t know that there’s anything particularly unique about their discipline; it’s more that economists are complaining that their reading habits don’t suddenly include a lot of prominent epidemiologists and statisticians, without their having made a concerted effort to include them.

    • Look at the massive outpouring of admiration for Leland Wilkinson this past week, for instance — a name that I’d expect the average economist to not know offhand. People (myself included!) are seriously bad at recognizing how little their world overlaps with most other people’s.

      Or as XKCD put it: https://xkcd.com/2501/

    • M:

      Here’s what I wrote above:

      I don’t think that statisticians and epidemiologists—or, for that matter, sociologists or political scientists or computer scientists—have the sense of being a community in the same way that economists do. Economists can, to first approximation, “speak with one voice.” They identify with their profession. Even the “heterodox” economists seem keenly sensitive that they are part of the economics community, even when feeling they are in the minority on some issues.

      It doesn’t seem to me that statisticians (or researchers in these other fields) have this same feeling of being part of a cohesive community, the idea that they do, or should, speak with one voice. Yes, we statisticians have our heroes (like Lee Wilkinson), we have statistics journals, etc., but I still don’t feel there’s this same community that I see in economics.

  2. I’m an outlier on this one – the relative homogeneity of views among economists is a terrible thing. It reflects a combination of selection (who decides to become an economist) and training (the orthodoxy, laden with difficult mathematics, occupies most of economists’ training, despite some notable exceptions – a few well known heterodox universities). Given the subject matter and myriad uncertainties, there should be far more dispute among economists than there is. Society is not well served by the single voice they hear when they should hear many.

    That single voice is somewhat illusory. Despite at least a half century of studies, economists still can’t decide whether minimum wages lead to unemployment (though I think there is now a consensus that they increase unemployment among the young and uneducated, with possibly little impact on other groups). Regarding whether today’s inflation is temporary or not, whether the Fed should be raising interest rates, whether climate change requires aggressive policies, etc., there is little agreement among economists. The tribal agreement is to denigrate the views of all other disciplines rather than articulate a clear path forward. Such articulation would require value judgements and ethical positions, both of which are anathema to “economic science.” We have the math to prove that we don’t engage in such things.

    We can quibble about the details, but in the grand scheme of things I’d say that the “community” that economists exemplify is an unhealthy sign. It does pay well, however.

    • Dale –

      I’m confused by your comment. You say there is relative homogeneity, but then you seem to suggest that it really holds only for views within the community about the community itself, or about other disciplines, and not for issues within economics. Is that right?

      I’m curious as to what evidence there is to quantify/describe the relative level of homogeneity among economists.

      • Yes, the homogeneity is regarding the view of other disciplines, not on specific issues within the discipline. There are a number of studies about economists’ views and the consistency (or lack) thereof. I can’t recall the specific citations at the moment, but a number of published studies have been done polling economists on views of things such as minimum wages, rent control, inflation, economic growth, etc. There are differences, to be sure, but I suspect the divergences are less than among statisticians or epidemiologists (though that is speculation on my part). To the extent that economists agree amongst themselves but policy-makers pursue different courses of action, I would say that is evidence that there is insufficient heterogeneity within economics – if it is so clear that rent control is a bad idea, then why are such policies pursued?

        It may seem easy to point out that people stand to gain via particular policies (such as votes in St. Paul, MN recently approving a rent control measure), but at the very least we would have to say that economists have been unsuccessful in communicating their views – I’d say because those views leave out too many things that are important to people. Now that I say that, I’m not so sure economics is different than other disciplines – after all, epidemiologists have been unable to persuade a significant portion of the population that COVID vaccination is desirable. Perhaps I am changing my mind – all disciplines are insufficiently heterogeneous.

        • > if it is so clear that rent control is a bad idea, then why are such policies pursued?

          Because it’s very clear that rent control is a transfer of wealth from young people and housing owners to old people who rent; there are many more old people who rent than there are housing owners, and young people don’t vote (esp. those under 18).

          I think this is a straightforward thing that Economists get right and the general public is 100% wrong and usually for ignorant reasons. Of course the people who vote for this stuff aren’t wrong about the fact that they personally will benefit from it, it’s just wrong in a moral sense the way that say voting in a law that says people under 30 should not be allowed to own houses would be for example.

        • Daniel:

          For some reason, economists used to talk about rent control all the time. It really got their goat. What got my goat was economists always complaining about rent control and never complaining about tax breaks for mortgages. It struck me that economists should dislike rent control and mortgage tax breaks for the same two reasons:

          1. Usual reason of interfering with the free market.

          2. Subsidizing people staying in the same place (if you have rent control it’s a disincentive to move; if you have a mortgage it’s also a disincentive to move; in both cases there are big transaction costs associated with moving, even beyond the direct costs of moving).

          Reason #1 is obvious; reason #2 should be relevant to economists too, as it gets in the way of people moving to new jobs, etc.

          Anyway, the weird thing to me was that economists seemed more bothered by rent control than the mortgage tax break, even though the two policies had these similar effects.

          The mortgage tax break is much bigger than rent control, though, so why were economists more bothered by rent control? My take on it is that economists have a default bias toward owners—all that work on “the firm,” etc. Rent control stops owners from making more money; the mortgage tax break allows owners to keep their money. So, from that perspective, much different.

          I guess that rent control is a bad idea, but it bugged me that economists were using it as their go-to example of a bad policy.

        • Rent control has the effect of physically causing deterioration of the housing stock. Owners don’t maintain buildings that they are losing money on, and soon buildings are rat infested, etc. I think that’s a major reason why rent control is worse. But I do think economists are generally not in favor of the mortgage tax break. Myself, I’m a flat tax + UBI proponent. I’d get rid of 100% of “tax breaks,” at least for income tax. Again, for similar reasons. Graduated taxes reduce economic output of the most valuable people, making the pie smaller for everyone (for example, especially educated and talented women, who tend to face huge marginal tax rates).

        • In addition to what Daniel said, Reason 2 doesn’t quite hold. Wherever you move, you get a mortgage interest deduction (if you buy a house), and most people don’t use the deduction anyway.

          Also, the owners of the buildings (while not keeping all the rents) are able to claw back some of it. In addition to not doing maintenance, there might be large one time fees (perhaps under the table) to get the apartments under rent control.

          I think the reason you haven’t heard many economists talking about rent control lately is because the policy used to be more prevalent but has (for the most part) gone out of fashion. It seems to be making a comeback, though, so you might have that getting your goat again.

        • Jfa:

          Just to be clear: my point regarding reason 2 is not that a homeowner with a tax deduction will be less likely to move than a homeowner without a tax deduction. My point is that the existence of the tax deduction is likely to have the effect of increasing the homeownership rate, which will then reduce mobility in general.

          Indeed, I’ve heard policymakers argue that the increase in homeownership rate is one of the good things about the mortgage tax break, and it’s seemed to me that economists are supportive of this argument, for the reason that economists generally have a favorable view toward owners. But from a strict econ perspective, I’d think that economists should think of homeownership either as a negative factor (as it impedes mobility) or as a neutral factor, not as a positive.

        • Andrew, many economists have pointed out tax breaks on mortgages are bad policy, e.g. as discussed in a pop piece here:

          https://fivethirtyeight.com/features/the-tax-deductions-economists-hate/

          One reason rent control may be a more common target is rent control is a common policy in many countries, whereas mortgage tax breaks are not. As far as I am aware (and I may not be), this is pretty much an American policy, uh, innovation.

          Another is modeling the consequences of rent control is a natural application of the 101 supply and demand model–and in this unusual case the 101 model is actually more or less right.

          I don’t agree that most economists would view “interfering with the free market” as always bad. That’s a cartoon version of libertarian ideology, not mainstream academic economics, which routinely offers policy prescriptions to alter outcomes which would be generated by unfettered markets.

        • Chris:

          Yes, that makes sense. I don’t know that economists talk so much about rent control anymore. It’s something that I remembered from a few decades ago, that economists I knew would always be bringing it up. They just seemed soooo bothered by it, much more so than they were bothered by the mortgage tax break. I see your point, that price controls are such a convenient classroom or textbook example, that this is why it would come up so frequently. Mortgage tax policy is more complicated: it has big real-world implications but doesn’t fit into a class lecture so easily.

          Kinda like how I keep writing about ESP, beauty and sex ratio, etc. These are super-clear examples of fallacies so I keep going back to them for pedagogical and rhetorical purposes, even though applications such as clinical trials are much more important.

      • For evidence, a good source is Klein, Davis, and Hedengren, “Economics Professors’ Voting, Policy Views, Favorite Economists, and Frequent Lack of Consensus,” Econ Journal Watch, January 2013. Among their conclusions (and contradicting my stated views) is the statement that “The economics profession exhibits greater ideological diversity than other fields.” There are a number of references in that piece. I’m not at all sure any more what I think about the economics community compared with other disciplines – except that I find far less diversity of views (as well as along other dimensions) than I think would be healthy. Perhaps that is true of all fields.

        • Dale –

          That was an interesting arc to follow: I appreciate your willingness to modify your view as you explore your thoughts, and assimilate evidence that runs counter to your original viewpoint.

          I agree that viewpoint diversity is a complex issue, although I find it funny when anti-woke types argue that the proper response is something on the order of a quota system.

          I would offer, however, that the problem lies beneath viewpoint diversity, as that’s more an outcome. The underlying problem, IMO, is identity-protective and identity-defensive cognition. It wouldn’t matter how people were classified by viewpoint if they were more open and less biased.

          In that sense, maybe the real question is whether there’s some signal in open-mindedness if you stratify by profession. My own bias is that actually, maybe economists, and maybe more likely epidemiologists, could be among the most willing to change views as evidence unfolds (as you showed in your trajectory in these comments). Maybe because they’re always being humbled by the immense difficulty of establishing causality in their fields.

          Then again, on top of that, IMO, openness to new evidence is negatively correlated with strength of identification with a particular viewpoint. So if economists or epidemiologists are more likely to be strongly identified with particular viewpoints, the influence of any features of the field in themselves might be swamped by that underlying condition.

    • Sometimes, if there’s a clear answer, there should be homogeneity. I’d argue that there are important policy changes that economists nearly universally agree on, not because of tribalism but because their answers are genuinely hard to argue with. Not just trivial academic matters either. I haven’t seen much dispute since Arrow’s “Uncertainty and the Welfare Economics of Medical Care” that health care is inherently uninsurable and American privatized health insurance (as distinct from health *care*) is bad. Proponents of increasing land value taxes over distortionary taxes date back to Henry George. Pretty much everyone agrees that public investment in naturally monopolistic infrastructure like rail/roads would be good, and that it’s too hard to build in American cities. Change in American government is just hard, and the fact that these ideas aren’t being acted on doesn’t really lead me to conclude that they’re bad ideas, or to blame economists.

      Not that I’m not anti-economist, because I am.

      • Bryan Caplan, Robin Hanson, and John Cochrane would disagree about health being genuinely uninsurable, and would instead blame policies that prevent insurers from charging individualized rates and that prohibit catastrophic plans.

        • You are correct – many economists do not agree that health is inherently uninsurable. But some, such as me, find the conditions for private insurance unappealing. Health care, the largest sector of the economy, is one area where I believe there is extensive disagreement among economists. You will probably find significant agreement that 3rd party payer creates all sorts of problems – but little agreement about what we should do about it.

        • The Caplan post you link to is fairly ridiculous. It is the definition of a straw-man argument. He does not cite the economists he claims favor socialized medicine (which I believe few do), nor does he provide any support for the beliefs that he claims these economists hold. It is simply a convenient way for him to state his positions as different from these unnamed economists. I would also say that Caplan, though far better known than I am or will ever be, is a fairly politically extreme economist. I would characterize his views as representative of the libertarian-leaning branch of economics.

        • Sorry to prolong this, but I think it is important that people understand where economists’ positions are coming from. You cite Caplan, so take a look at Caplan’s calculations of our COVID policies: https://www.econlib.org/life-years-lost-the-quantity-and-the-quality/. Now, I’m not saying he is wrong here, and he does raise important issues (many of which have been discussed on this blog for months now). But his calculations of the relative value (there’s that word again) of months of life under COVID restrictions vs under “normal” life (whatever that is) strike me as poor methodology for choosing COVID policies. Yes, we can try to measure willingness to pay for wearing vs not wearing masks, but do we really want to base COVID policies on the answers to that question?

        • Those three authors are, distinctively, nowhere close to experts on health insurance. John Cochrane wrote one paper in 1995 and has decided he is an expert (his policy suggestion is basically to make health insurance markets even more byzantine); the other two have never been anywhere near it.

  3. There’s a passage in Thomas Kuhn’s Structure of Scientific Revolutions that I’ve always liked about the self-confidence of economists:

    “To a very great extent the term ‘science’ is reserved for fields that do progress in obvious ways. Nowhere does this show more clearly than in the recurrent debates about whether one or another of the contemporary social sciences is really a science….Inevitably one suspects that the issue is more fundamental. Probably questions like the following are really being asked: Why does my field fail to move ahead in the way that, say, physics does? What changes in technique or method or ideology would enable it to do so? These are not, however, questions that could respond to an agreement on definition. Furthermore, if precedent from the natural sciences serves, they will cease to be a source of concern not when a definition is found, but when the groups that now doubt their own status achieve consensus about their past and present accomplishments. It may, for example, be significant that economists argue less about whether their field is a science than do practitioners of some other fields of social science. Is that because economists know what science is? Or is it rather economics about which they agree?” (p. 159-160)

  4. There’s a great paper from 50 years ago by Axel Leijonhufvud in which he catalogues “Life Among the Econ,” an ethnic group with castes and hierarchies and totems. The priesthood of this profession (err, ethnicity) are fanatical about their “modl” building. Here’s one line: “The status of the adult male is determined by his skill at making the ‘modl’ of his ‘field.’”

  5. Maybe I’m too much of an outsider to economics but I don’t see how econ is any more of a community than statistics or epidemiology or any other field.

    Rather, I would guess that to the extent that Cowen is right about economists mobilizing more than epidemiologists/statisticians, it’s more likely due to economics’ history of communicating with the public and being politically active. Economists have been in high levels of government, influencing policy and writing editorials, etc., far more than members of any other field. So I just think they would be better equipped to scream bloody murder in a more effective way than other fields.

    That’s only assuming Cowen is onto something, which I’m not sure that he is; I’ve also never heard of Judy Shelton.

    • Michael:

      Here’s what I wrote above:

      I don’t think that statisticians and epidemiologists—or, for that matter, sociologists or political scientists or computer scientists—have the sense of being a community in the same way that economists do. Economists can, to first approximation, “speak with one voice.” They identify with their profession. Even the “heterodox” economists seem keenly sensitive that they are part of the economics community, even when feeling they are in the minority on some issues.

      It doesn’t seem to me that statisticians (or researchers in these other fields) have this same feeling of being part of a cohesive community, the idea that they do, or should, speak with one voice.

  6. It’s true that statisticians tend not to share a single identity, perhaps because so many of us identify as members of another field as well (any of a plethora of social science fields, medicine, education, program evaluation, even economics). I, for example, have described myself as an education researcher who doesn’t have a “substantive” specialty (e.g., teacher performance measures, social-emotional learning, reading intervention, etc.), when I might equally describe myself as a statistician with a substantive specialty in education. Casting myself–and seeing myself–this way is probably a product of the structure of my graduate program, my mentors, and the fact that education research teams tend to have a single quantitative methodologist. Also, statistics isn’t itself a social science: the theory of statistics applies equally to the movements of molecules in a gas as to the propagation of disease or the expression of political views.

    That said, I’m not sure it’s our lack of community that’s the problem, so much as our lack of political power as a group. Imagine if all the statisticians across the country rose up with one voice to oppose a government appointment or policy. Now, the question is, did that image make you laugh out loud or just smirk?

    • I was thinking along the same lines. Statistics (and computer science) are methodological/tool-building fields. Consequently, our identities tend to diffuse into the fields where our methods are applied. Furthermore, often many different methods can be applied to the same problem and all yield interesting insights. So there is less of a sense of “my method is correct” the way a scientist can claim that “my hypothesis is correct”. Methods are more or less useful; hypotheses are more or less correct.

  7. I think economists have to deal a lot more with cranks than other fields and I think facing this hostility might both drive the group identity, and also make them relatively more insular and unwilling to interact with non-economists.

    I guess there’s the occasional P=NP type of crank in computer science. I assume there’s similar stuff in statistics too. But it’s rare to run into this kind of thing; generally speaking, there just aren’t many people worked up about it.

    But any time an economist makes any kind of public comment, somebody pops out of the woodwork to say that it’s a fake science and blame them for the financial crisis, all wealth inequality, inflation, or whatever. On top of this you get laypeople who latch on to various novel economic theories (ranging from heterodox to nonsense) that they don’t even understand, arguing forcefully for them.

    This would tend to make people band together. Even if you totally disagree with another economist, at least you can have a coherent discussion with them vs. getting yelled at by someone who doesn’t know what they’re talking about.

    • There are lots of statistics cranks: people who claim null hypothesis statistical testing is useful or even required, or who call a huge correlation that arises out of hundreds of potential comparisons a discovery, or who draw statistical conclusions from comparisons of p-values, or who say that an effect is even more likely to be real when it’s estimated from a small and/or noisy sample. The difference with econ is that stats cranks have thousands of peer-reviewed publications and include top scientists in their respective fields. They outnumber legit statisticians many times over.

      That may actually be why we don’t have a strong shared identity: we’ve responded to cranks, historically, by diluting our message rather than by closing ranks.

    • This is an excellent point. I’m very far from being an economist-identified economist, but something primal kicks in when I hear uninformed bloviating on economic topics.

      This defensiveness-against-cranks is one factor, I think, in the widespread sense among economists that their discipline (even its heterodox fringes) is the custodian of economic policy. It devolves on them, or so they think. Example: there are endless angels-dancing-on-the-heads-of-pins debates among economists about the fine points of cost-benefit analysis, but in practice decision-makers commission the analyses whose conclusions they know they want. Even so, the economists think it matters who comes out on top in these debates. Maybe those in power have a greater tendency to cite simpatico economists than practitioners from other fields, and economists think causation runs from their ideas to policy choices. (Cue Keynes on this, about defunct scribblers. I have always thought this was not his most shining hour.)

      On a related point, it used to be common among economists of a certain broad stripe to demean dissenting arguments as “sociology”. With the rise of behavioral econ, one hears less of this.

      • +1 (except for the Keynes comment, I like that quote) Isn’t part of the explanation that an economist’s nascent education is to learn that common sense is wrong, e.g. imports lead to more growth not less, adding debt to a business will lead to greater returns, and so on. That naturally leads to an us-against-them mentality with respect to their profession, which builds a sense of community.
        I don’t think most other fields start their training in this way.

    • As a Physicist turned Economist, it is interesting to see people’s reactions when I tell them what I do for a living. Very few people offer their take on Physics, but just as few pass up the opportunity to make some comments about Economics, although more of the comments are about the economy.

      The sad but obvious reality is that Economics as a field does have its share of problems. Most of them are not Econ-specific and include issues that are frequently addressed on this blog. But a huge chunk of the criticism of Economics really targets the caricature TV Economists (e.g., Peter Navarro) or some Mathematician/Physicist who claims to have fixed a fundamental flaw in Economics, usually due to their own misunderstanding of some basic terminology/assumptions. The fact that much of the criticism from the outside is not very valid or relevant does end up making Economists defensive. This is a shame, because frankly a lot of what we are doing isn’t creating real knowledge or value* and we could do with some serious introspection, whether it is fueled by criticism from the outside or not.

      • Too many economists (and their followers) believe their simplified models of the world are much closer to the real world than is actually the case.

        I started to write a longer comment to explain why I think this — the economists-for-the-layman books that I’ve read, the podcasts and articles I’ve seen, etc. — but it was getting unwieldy and there’s no point to it really.

        Just as physicists know that planes aren’t frictionless, gas isn’t infinitely compressible, etc., economists know that markets aren’t collections of rational actors who have the same information. They aren’t dumb, at all. But “they” (by which I mean the subset of economists I find worthy of mocking) seem to forget these limitations when it comes to discussing what happens in the real world.

        For example, you’ll see grown men (and they are usually men) say that the value of something is what someone is willing to pay for it…and they’ll mean it!

        Economists have a lot of insights to share; if I didn’t think this was true I wouldn’t have read a dozen or so books about economics by economists. But it’s not hard to find economists making nutty statements.

        • “say that the value of something is what someone is willing to pay for it…and they’ll mean it!”

          I say that and mean it. I have **a lot** of experience with how things vary in value. What makes me laugh is that so many people believe that the value of something is the price tag on it, or what someone declares it is, or some “intrinsic value” (wrt stocks) – or my favorite, that something is so important that it “can’t be valued” – people say that and mean it!

          Nothing has any more or less value than someone else is willing to pay for it at any given moment. You don’t need to go any farther than eBay or the stock market to see that.

        • Jim –

          > Nothing has any more or less value than someone else is willing to pay for it at any given moment.

          Consider the possibility that you’re not all-knowing and all-seeing. That while you may be smarter than most others, it’s not like you’re way, way smarter. Or at least not always.

          Consider that the answer to the question of whether something might be worth more than what people are willing to pay for it might be somewhat subjective, and might vary by context and conditions.

          Then consider a situation where someone has a sick family member who needs hands-on care. To empty bed pans and clean that family member up. To feed them and bathe them, and speak kindly to them and generally give them loving and kind care. To help lift them out of bed and to help clothe them. You get the picture.

          Now suppose this person with the ailing family member has limited means to pay for a caregiver. They’d like nothing more than to pay that competent and loving caregiver that comes to care for their family member a huge salary, but the best they can do is hire someone from an agency that pays $12 per hour. And let’s say, to make the example more challenging, the company is run by someone who inherited it, and who mostly lives off a trust fund and likes to get high a lot and gamble a lot but makes an appearance every now and then so as to continue to draw a solid six-figure salary.

          In such a situation I happen to think that the “value” of the services provided by the two individuals might not exactly, precisely, match what they earn.

          Now I know that you’re not one to be swayed by sentimental virtue signaling – so I can count on you to give it to me straight – am I really a hard, cold, economics-reality-denying fool for having a different opinion than you on that topic?

        • “Consider the possibility that you’re not all-knowing and all-seeing”

          Ha ha ha! That’s a certainty I confront *every single day*. Believe me! :)

          But in all honesty I think that situation is covered. The value is what someone is willing to pay at any given moment. Sticker price – “the going rate” – isn’t equivalent to value. Value is the **transacted price**, and it’s not fixed or stationary like a sticker price. It can vary widely and rapidly. In your example, apparently a business owner is willing to transact at a value much below sticker price. So be it. But tomorrow he’ll transact with a different customer at the sticker price or possibly even higher, and the value will rise.

        • jim –

          > The value is what someone is willing to pay at any given moment. Sticker price – “the going rate” – isn’t equivalent to value. Value is the **transacted price**, and it’s not fixed or stationary like a sticker price.

          So I’m reasonably sure that you wouldn’t care for an ailing stranger for $12 per hour, but I’d guess you’d do so for some wage – say $1,000 per hour? And there are people who would do so, probably more skillfully than you – for $12 per hour.

          So then, what is the precise value of that service?

          > In your example, apparently a business owner is willing to transact at a value much below sticker price.

          I don’t see how you got that from my example.

        • Jim I’m half with you and half with Phil. IMHO the **proper** way to value something in Economics is the value that people would be willing to pay for it **if they all had substantially symmetric information**.

          That someone is willing to pay $10k a bottle for a “snake oil” medicine that does nothing doesn’t make it in some sense “actually worth $10k” except in the trivial sense that you can scam people out of $10k with it. No one should believe that by selling scam snake oil to unsuspecting little old ladies we are “increasing the wealth and GDP of the nation” or anything like that.

        • I guess I’d have to think about this one. Suppose there is a water filter that can filter out arsenic from Bangladeshi well water. Suppose the villagers know it exists and works. Suppose the cost to produce it is high enough that Bangladeshis can’t afford it, but visitors from the US routinely buy them for their visits, let’s say for $50, which is about 15% more than the materials and labor cost to produce. Now, how does the fact that people living there have very little income affect our assessment of the economic value of the filter?

          Suppose there is a fake filter that does nothing and sells for $2, and the people of Bangladesh are not aware it is fake. It’s clear to me the true value of the fake filter is $0, which is presumably what the Bangladeshis would pay if they had proof it was fake. But it’s not clear to me that it’s meaningful to say something like “if Bangladeshis made on average $5000/mo like Americans do, they would be willing to pay $80/mo, so the Americans are getting a very good deal because they only pay $50.”

          I mean, if Bangladeshis made $1M/mo they’d probably pay $1000 for the filter, if they made $400/mo they’d likely pay $30 for the filter… The issue is that income is a scarce resource, but **information** is not. Information can be copied freely, it costs nothing to tell everyone who comes to a store “these filters work, and the $2 ones are useless” but it requires actually doing something with scarce resources to give the Bangladeshi people $400/mo additional to spend.

          So I think you may have a point in some sense, but I think it’s really important to consider the fact that “information wants to be free” in some sense.

        • Daniel, I agree this is subtle and my strong convictions are loosely held :) However, I disagree information is free or “wants to be free”. In some sense, I am more with Jaron Lanier: information is alienated experience, and it should cost something. That “information is free” is a specific ideology, and can be argued to be at the root of much of what is wrong with the internet and social media.
          But that very lengthy philosophical discussion aside, my primary point here is that willingness to pay is always relative to *ability* to pay. In any given market with many potential buyers, who all share the same information about “A” (as you say, a yuge IF), the realized price will be a very local thing that sort of averages over the varying abilities to pay.
          Imagine you are selling a rare painting at a silent auction, one that might reasonably be expected to go for $10 million in some marketplace of museums or whatever. But when you run the auction, Elon Musk shows up unplanned and unannounced, and for various reasons the painting elicits a sentimental feeling in him, and because he simply must have that painting, he drastically overpays, to $30 million. Et voila! Painting sold for $30 million.
          Is the painting *worth* $30 million? Sure, in some trivial sense because it sold that way.
          Now repeat without Elon Musk, and the sale price might revert closer to $10 million.
          Now let’s move across markets as you suggest, to attempt to price a water filter in Bangladesh. The value of the service to them depends on a whole heckuva lot of factors, but obviously they have hard constraints on *ability* to pay, and more so, the less ability one has, *the very preferences can change*. I have no idea what neoclassical theory says on this point, but my prior is that it is of course incorrect ;)
          All I can say is in myself, and in people close to me I observe, preferences change with ability to pay – your rank ordering of priorities can revise over time this way. I think this is part of Eric Weinstein’s general critique of neoclassical marginalism, but I confess I am moving rapidly into an area where I very well may not know what I am talking about haha.

        • Daniel said:

          “**if they all had substantially symmetric information**”

          We agree! :) Excellent point. Yes, I’m with you on this one. My statement “Nothing has any more or less value than someone else is willing to pay for it at any given moment” presumes that it’s a fair deal in that both parties have full knowledge of the condition of the transacted property, and no ruse has been conducted by buyer or seller. I would not claim that value = price under the condition where the transacted property has been misrepresented.

        • It goes beyond misrepresentation though. For example, the “market for lemons”: a seller of a well-maintained used car knows that it is well maintained and free of hidden damage; the buyer does not. So the buyer is not willing to pay as much as if they had full information. Suppose there is a kind of black box recorder which can demonstrate reliably to the buyer that the car was not abused. The price they are willing to pay goes up. I’d say the value of the car without the black box is the same as the value with the black box; it’s just that without the black box the transacted price doesn’t properly match the value of the car.

          With or without the black box, the car is the same: a well-maintained used car. A person who knew the owner wouldn’t need such a black box and would pay the full price.

          To the extent that some information requires using up resources to acquire, it’s legitimate to think that some information has a price (i.e., the price of making the box). But a lot of information is freely available but not understood by consumers, or even hidden in the market (unscrupulous used car dealers that, say, clean away the oil stains, or don’t mention that they changed the transmission fluid but it was discolored and burnt-smelling). This represents inefficiency, not inherent changes in value so to speak. The car isn’t “worth more because the seller didn’t mention that the transmission fluid was burnt.”

          I don’t think we disagree I’m just elaborating for the benefit of discussion.

        • “It goes beyond misrepresentation though. ”

          Sure, I agree. That’s the noise in buying and selling. Both the buyer and the seller can have incomplete information, which could be a benefit or detriment to either.

          For example, I might read something on TSLA, determine that it’s the most undervalued stock in the world, and buy it. The day after I buy it, Musk could drop a doozy to the press and the stock price could drop significantly. OTOH, the day after I buy the stock, the gov could announce a new incentive to buy electric vehicles, and the stock price could jump 10%.

        • As I’ve said, jim you are hopeless. But it appears to be spreading. Yes, there are conditions on willingness to pay as an economic measure of value – symmetric information being one. But there is a more basic issue: willingness to pay is linked to income. Nothing in economic theory says that the distribution of income is “right” or “optimal.” Different distributions create different WTP, and so, different economic values.

          There are two important points here. First, these are economic values, not the sole measure of value. Economic values govern much of our lives, but not all. I’m not saying that other values are “better,” but we must accept that people may have other ways of measuring values. That was my earlier point that we should ask economists what they are willing to pay to have their measure of value used rather than some other measure. If you need a concrete example, think of someone who says that endangered species have a right to exist and that their “value” has nothing to do with what someone is willing to pay for their existence.

          The other point is the distribution of income – something economists have always struggled with. Samuelson’s economics text (I believe the most widely sold textbook of all time – perhaps in any discipline) famously contains the sentence “when a democratic society doesn’t like the distribution of income, it uses redistributive taxation to rectify the situation.” That is almost verbatim, and it appears early in his text and I remember it 45 years after having taught using that book. It is a dangerous sentence. Economists know well that taxation causes distortions, so redistribution is not a simple matter. That sentence has the effect of saying that everyone should favor economic efficiency (which is based on WTP and costs) and it is for other disciplines to worry about the messy subject of distribution of income. And I’d suggest that jim embodies the problem with that – if you don’t like the distribution of income, then there is nothing sacred about WTP.

          (I will add that the problems don’t end there – rejecting the distribution of income does not say how value should be measured or who should determine values – here, my views are closer to jim’s than he might believe. Humans have a poor record of replacing market values with other values. But I view that as a failing of the human species – we haven’t found ways to incorporate “community” values, “environmental” values, “social” values, etc. that work better than market-based values. That does not make market values good, just better than the alternatives.)

        • “As I’ve said, jim you are hopeless. ”

          Ha, yeah, as hopeless as reality can get. It’s not “the dismal science” for no reason.

          If you want a more natural analog for “willingness to pay” then exclude humans and try to understand how other organisms “pay” for their living. No doubt, a crow would use all the public funds available to get rid of owls because owls prey on crows. Look to ecology for the “distribution of income” too! So many intellectuals seem to believe we’re animals in anatomy and evolution but not in economy.

          I think most of your comments reflect the wishful thinking that’s invading modern intellectual discourse. We just wish it wasn’t so that owls eat crows, so let’s reconstruct the world in that image – and ignore the real one. I assure you this will not lead to progress.

        • But you do pay for them through regulation of industry, which does directly determine their value. At one time, they were free. Today, as population rises and the demands on the resource increase, they’re no longer free, and no longer “invaluable.” There is clearly a cost to maintaining them.

        • Somebody said:

          “I’m completely unwilling to pay for sex or friendship. Those things therefore have no value.”

          But you do pay for sex and friendship and even love. You just don’t want to admit it. Jill wouldn’t be with Joe if he was a janitor.

        • My God I thought you were just an extreme libertarian, but it turns out you’re actually just tremendously stupid, and every second anyone has spent reading or responding to your words is wasted and irrecoverable.

          Good thing I have plenty of time.

          If I ran into a person I found attractive, who said they could be mine for $x, there exists no $x at which I would take that offer. Evidently, that’s not true for you, but it is true for me.

          “But you just don’t want to admit it”

          Well, of course if you assume things are true, regardless of observable reality, then you can continue to believe they are true.

          “Well, you really WANT to take the offer, but won’t because of moral compunctions”

          How convenient, a theory which, when confirmed by data, is predictive, and when contradicted, points at a reality which is more true than truth, despite corresponding with no observable elements of reality.

          “Maybe not in so many words or a dollar amount, but you pay in other ways, like time and effort.”

          Really straining the definitions here. I’m willing to sacrifice, or “pay” some time and effort for these things, but there is no cash offer I would accept instead. If someone offered me a bushel of apples for money, I would accept the offer for some dollar amount. Therefore, these two types of value must be incomparable, in which case you gain no insight by conflating definitions in this way.

          “But there is some amount of effort and time you’re willing to sacrifice for these things, and there exists a supremum to that effort”

          No, there isn’t. It depends on the context and interpretation of the meaning of those sacrifices. Some people refuse to wait 30 minutes for their friends to arrive because it is interpreted as a disrespect; those same people may willingly die to save the lives of those same friends. People risk their lives for their friends all the time, that is an observable fact. Do you really then believe those same people would stay friends with those people if they asked them to work without pay for the privilege of friendship, or to similarly risk their life to prove loyalty?

        • Revised and corrected for readability

          “But you’re willing to sacrifice effort for these things, and there exists a supremum to the amount of effort you’re willing to sacrifice, so you can define value as the supremum of effort you’re willing to sacrifice.”

          No, there actually doesn’t exist such a supremum. What sacrifices you’re willing to make are sensitive to context, and the interpretation of the sacrifice asked for or volunteered changes the nature of the friendship itself. Some people refuse to wait 30 minutes for their friends to arrive because they consider being made to wait disrespect; those same people may willingly die to save the lives of those same friends. People risk their lives for their loved ones all the time, that is an observable fact. Do you therefore believe those friendships would continue if the rescuees made their friendship contingent on work without pay? Or if they asked their saviors to risk their lives in a similar manner just to demonstrate loyalty?

          You can maybe define a predicate of the form “would you offer sacrifice X for your friend?” and query over the set of imaginable sacrifices, but that set of potential sacrifices is, at best, partially ordered, while dollar amounts in “willingness to pay” are totally ordered. They really are completely different concepts.

          I haven’t even addressed this

          Jill wouldn’t be with Joe if he was a janitor.

          because it’s such a nakedly stupid sentence. Many people are married to janitors and other low earners. Some people are devoted to partners who are medically paralyzed and can offer nothing in return. What are you trying to prove?
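
          The distinction being drawn here – sacrifices are at best partially ordered, while dollar amounts are totally ordered – can be sketched in code. This is a toy construction of my own; the two dimensions (hours of effort, degree of risk) and the numbers are invented purely for illustration:

```python
# Dollar amounts form a total order: every pair of amounts is comparable.
assert (50 < 100) or (100 <= 50)  # trivially true for any pair of numbers

# Sacrifices, modeled as (hours_of_effort, personal_risk) tuples, compare
# only when one dominates the other on every dimension.
def dominates(a, b):
    """True if sacrifice a is at least as costly as b in every respect."""
    return all(x >= y for x, y in zip(a, b))

wait_30_min  = (0.5, 0.0)    # half an hour, no risk
risk_life    = (1.0, 0.9)    # brief but extremely risky
unpaid_labor = (100.0, 0.0)  # long but safe

# Some pairs are comparable...
assert dominates(risk_life, wait_30_min)

# ...but risk_life and unpaid_labor are incomparable: neither dominates
# the other, so no single "supremum of sacrifice" exists.
assert not dominates(risk_life, unpaid_labor)
assert not dominates(unpaid_labor, risk_life)
```

          Under this (assumed) model, asking “what is the most you would sacrifice for a friend?” has no well-defined answer, whereas “what is the most you would pay?” always does.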

        • > Well, of course if you assume things are true, regardless of observable reality, then you can continue to believe they are true.

          This is what I tried to explain to jim above – that he’s not all-knowing.

          He said that is obvious to him, and yet he argues as if otherwise: he consistently says that he knows better why people hold their beliefs than they themselves understand.

          I certainly wish I had his mind-probing skills! They would be convenient.

          I’ll also note that jim’s unwavering fearlessness about expressing honest views about moral issues is admirable.

          I have noticed an interesting coincidence that all those who lack his integrity in that regard just happen to disagree with him about economics.

        • The WTP = value claim holds only if either (a) you define value as WTP, in which case it becomes an identity, or (b), if you have a more everyday, encompassing conception of value, *both* of the following are true:

          1. Value to a society is the sum of individual valuations.
          2. Individuals have a single, consistent preference ordering, such that WTP in one context cannot be contradicted by explicit or implicit valuation in another.

          If it’s (a) there’s nothing to talk about, and it’s not interesting. If it’s (b) it is counterintuitive on its face, and there are endless counterexamples.

          I prefer to think that there are many types of values, differentially salient in different situations. There are contexts in which consumer valuation is the most important, and others in which it isn’t.

          Incidentally, this is prior to consideration of market failure. That’s because, if we believe both b1 and b2, we can use various techniques to estimate what consumer value would have been absent such failures.

          Finally (and this gets to an aspect of b1), how should one think about value in nonconvex environments with multiple local equilibria? If individuals choose in such a way as to best reflect their preferences locally, does it matter that there exists another set of choices that might satisfy them more if they could coordinate and identify it? If you think as I do that most real world environments are nonconvex on both the choice and production sides, this is an important question.

        • Very well said. I 100% agree. The nonconvexity point was always significant in my mind, though it attracted little attention from economists (perhaps an inconvenient truth?).

        • Dale, the inconvenient truth is that no matter how resources or income are distributed, everyone will have some limitation on their resources, and thus be subject to their willingness to pay. This in fact is the fundamental concern of every life form on earth and the fundamental driver of evolution. From that standpoint you can think of successful entrepreneurs as roughly analogous to a species that radiates into a new niche with abundant untapped resources; taking the analogy a step further, you can think of the emergence of fossil fuels as roughly analogous to a major evolutionary event, say the emergence of calcareous skeletons or land-dwelling organisms.

        • Peter:

          No one has inexhaustible resources, so everyone is subject to some limit on what they can pay for what they want or need. Do you dispute that?

        • Of course. But how does that invalidate my comment? Do you think my argument is wrong? If so, at what points?

          Incidentally, your observation about scarcity holds in the presence of all sorts of market failures as well as the more general argument I made. Does that mean WTP = value if there are externalities, public goods, asymmetric information?

        • Jim (and Daniel), let’s discuss this simple model that (I claim) has important features of the real world.

          The city of SimpleTown (pop. 100,000) owns several parks. The city is in financial trouble and decides to sell Nice Park to the highest bidder. The highest offer is from a property developer who bids $50 million.

          Each person in SimpleTown values the park too. Some of them visit it several times a week, others go years without visiting. Some are rich and some are poor. Some of the wealthy ones who live near the park and use it frequently put a value of $1000 per year on the use of the park. For others, the value is only a few dollars per year. Collectively, if you add up the value each person places on the park, it comes to $10 million per year. This is worth much more than a single $50 million payment (indeed the net present value of $10 million per year is something like $200 million).

          So the people collectively value the park at $200 million, yet the high bid is only $50 million. Why don’t the people band together to buy the park?

          Well, how would that work? The closest thing to being workable is that The People would form some sort of nonprofit corporation and then contract to pay (each year) the full value they place on the park in order to pay a mortgage for purchasing the park. Actually I’ve deliberately built in enough margin that each person would only have to pay 1/4 of the true value they place on the park and they could still be the high bidder. But this is never going to happen: it’s organizationally impossible; there will be inevitable free-riding (indeed, what happens when people die or move away, to be replaced by other people who value the park similarly on average but did not sign the original contract?). You aren’t going to get 100,000 people to each contribute the required amounts, even if they agree on what they are.

          All of the practical issues listed above, and many more, constitute what economists call “friction.” Note, Daniel, that this friction exists even in a world of complete information.

          Friction is one of the reasons the market value of a thing can differ from the value it provides to its users.

          Economists know about friction. I did not make it up. I didn’t even make up this example!

          And yet, you still find people like jim who believe the value of something is the amount someone is willing to pay for it. (That is usually called “market value” in the lingo, and is only one of several types of “value” that can be assigned).
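
          The arithmetic behind the “$200 million” figure in the Nice Park example can be checked with a quick sketch. The 5% discount rate here is my assumption (the comment only says “something like $200 million”), and the perpetuity formula NPV = annual / rate follows from summing the geometric series of discounted payments:

```python
# Back-of-the-envelope check of the Nice Park numbers.
# Assumption: a 5% annual discount rate (not stated in the comment).
annual_value = 10_000_000  # collective value residents place on the park, per year

def npv_perpetuity(annual, rate):
    """NPV of a constant annual payment continuing forever: annual / rate."""
    return annual / rate

def npv_finite(annual, rate, years):
    """NPV of the same payment over a finite horizon, discounted year by year."""
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

print(npv_perpetuity(annual_value, 0.05))   # 200,000,000: matches the comment
print(npv_finite(annual_value, 0.05, 30))   # roughly $153.7M over just 30 years
```

          Even truncated to a 30-year horizon, the discounted stream of benefits comfortably exceeds the developer’s $50 million bid, which is the comment’s point: the high bid understates the value the park provides to its users.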

        • Phil
          I don’t like your example. It is a classic case of the free rider problem and economists have a number of tools to deal with it. I’m not saying the solution is easy nor that the efficient solution (to make it a park) will necessarily result. But the problem is that you are using the willingness to pay as the right measure to value the park with. I’m not even saying it is necessarily wrong – but it is not the only way to conceive of value.

          A better example would be if the aggregation of the individual WTP for the area as a park is less than the developer is willing to pay to develop the area. Then, the obvious conclusion is that the area should be developed, not used as a park. That may even be correct – but I would maintain that there are people that would object on grounds other than what they are willing to pay to preserve the area.

          There are two (at least) issues here. First, you are permitting the WTP of the poor and rich people to be simply added together. But the poor person has far less income, so their WTP will be lower than the rich person’s, yet they “value” the park more in some sense. Jim appears to reject that there can be any other sort of value. This ignores the reason why the land is not currently privately owned – it is publicly owned. As a result, it should serve community values whatever those are, for better or worse.

          The other issue is that some people believe there are “rights” that supersede values. Suppose there is an endangered fish in the area (like the famous snail darter that stopped a large hydro facility years ago) that would be harmed by development, and that nobody is willing to pay to preserve that fish. Is it worthless? To humans, perhaps. But that is why we have some notions of rights, some reason for laws, some sense of ethical guidelines other than market values.

        • Dale, with this example I’m just trying to illustrate that problems like free riding etc THAT ARE ACKNOWLEDGED BY ALL ECONOMISTS lead to the value of something being different from the amount someone is willing to pay for it.

          Yes, I agree with you that there can be problems with using economic valuations as a proxy for “true value” (whatever that means) but that is very different from my point about economists believing their models even when their simplifying assumptions are clearly wrong. jim might or might not agree that the only value is economic value, but I think even he would agree that free riding exists and that 100,000 people in a city do not constitute rational actors able to participate in a frictionless market in order to collectively buy a park. And yet, there he was jumping right in with the statement that the value of a thing is whatever someone is willing to pay for it. He’s making my point perfectly. No need to complicate it by discussing morals and ethics and stuff like that.

        • Joshua,
          If you prefer A to not-A then you have somehow compressed all of the parameters that describe A and all of the parameters that describe not-A onto a single “preference” dimension that lets you say A > not-A.

          If a worm can (A) wiggle to the left into soil that is warmer and more alkaline, or (B) to the right into soil that is cooler and less alkaline, and its little wormy brain ‘prefers’ A, then it has somehow compared (higher warmth, higher alkalinity) to (lower warmth, lower alkalinity) and compressed this into a single dimension that says A is better than B. The worm (presumably!) does not understand chemistry and does not know what alkalinity and temperature are, and does not need a ‘rational’ process for deciding. In order to have a preference it just has to put the choices on its preference axis.

          Yes, there can be situations in which a random element plays a role, or in which someone (or some worm) is so indifferent between options that the choice is haphazard. But _if_ you have a preference then this implies that you have some way of taking all of the parameters and putting them on a ‘preference’ axis (or ‘utility’ axis or whatever you want to call it).

          That’s all I’m going to say on this, I’m worn out.

        • What else is there to say anyway? After all, when writing those paragraphs you did somehow consider (among many other things!) all the alternative texts that you could have written and concluded that these were the best possible closing remarks to finalize the discussion.

        • > its little wormy brain ‘prefers’ A

          FWIW, a worm has two brains, or rather cerebral ganglia.

          Isn’t that pushing ‘patternalism’ a bit far? We might as well say that soda machines *decide* to give us cans and change.

        • You’re arguing with someone who earnestly believes the value of sex and friendship is how much they’re willing to pay for it

        • Phil, even if you accept WTP as a valuation, the fact can be that markets are inefficient and don’t lead to the optimal transaction. I think we both agree that the optimal outcome is for the people to pay the city to buy the park. That it doesn’t happen doesn’t mean that adding up the willingness to pay (under perfect information) isn’t at least closer to some ideal economic value than say the $50M of the developer is.

          (note, in this case, perfect information would include knowing exactly how much other people had offered to put into the “kitty” to buy the park). So for example suppose there’s a kind of fundraiser thermometer, and there’s a process through which the people can bid and then converge on a price.

          I’m willing to say that there are plenty of meanings to “value” – in fact, I insist on it. The issue is that for decision making we need to place numerical quantities in a consistent “dimension” and compare them. Is “the total value of the park as a park” bigger than “the value to the developer as an apartment complex”? Without knowing this in some sense, we can’t decide what is the optimal outcome.

          At the very least, I insist that some kind of WTP can *not* be used as the metric unless it’s arrived at through a process in which all relevant information is made clear to the market participants. That’s the main point I was really trying to make. In many ways it’s about the *actual price paid* NOT being the same as the actual **value**.

          And yes, I believe “friendship” is something I can value without being willing to pay money for it. But I do think Jim has a point that we pay in other ways. If we want to maintain a friendship we should devote time, and be willing to listen to our friends complain about their problems, and help them fix their broken lawnmower, and such like that. Giving up things like time and effort and such to maintain a friendship is a real thing. Also giving up say reduced prices on our electricity bill in order to have cleaner air is a form of paying for clean air.

          Dollars are an intermediate quantity. It’s the things you could have done with the dollars which are the things you actually “pay” with.

        • Daniel, I feel like you’re overcomplicating it. In the hypothetical example I’ve given — which is similar to real-world situations that occur every day — Nice Park is worth much more to the people of Simpletown than anyone is willing to pay for it. I first saw this example in an economics article and I think almost all economists would agree that the phenomenon it captures is a real one. And yet I have seen economists and their, uh, fans, claim (as jim is doing) that the value of a thing is indeed equal to what people are willing to pay for it. Sure, maybe that would be true if you ignored both free-riding and market friction (I’m not sure whether free-riding could be termed a type of friction), but that model is too different from the real world to give a good answer in real-world situations similar to the Nice Park example. This is analogous to a physicist arguing that a cannonball and a feather would fall at the same rate if you drop them off the Leaning Tower of Pisa, because that’s what would happen in the absence of friction.

        • I reiterate that I believe Phil’s example is a poor one. Economists do not mean by WTP what people say in a survey or even what you could actually collect from people – it is a concept intrinsic to the individual, and yes, economists can read people’s minds in that we can discern their WTP even if they cannot. So, in Phil’s example, the people are WTP more for the park than the developer is WTP for development. There are any number of reasons why markets may fail, and the outcome may diverge from what is efficient, and most economists would agree that some kind of intervention is appropriate under those circumstances (though they probably disagree about what form that intervention may take – taxes, regulations, zoning, etc.).

          What Daniel says that I take issue with is “The issue is for decision making we need to place numerical quantities in a consistent “dimension” and compare them.” This is unusually narrow thinking for Daniel. Why must there be a consistent dimension for comparing alternatives? There is a long history of multicriteria decision making that permits consistency without insisting on a single dimension. The fact that, after a decision is reached, we can infer some measure consistent with that decision is very different from saying that such a measure is the proper way to reach a decision.

          Now, I’m not arguing that having public discourse and voting on a decision is superior to doing a cost benefit analysis according to the rules of economics. I am arguing that these are legitimate alternatives and that it is worthwhile to debate the best way to make decisions under particular circumstances. I’m with Peter Dorman on this – in many areas of life I believe markets make better decisions than the alternatives we know of (most consumer products, for example), but in other areas (health care, environmental services, for examples) I think we can do better than markets (although if we aren’t careful, we can also do worse). But the idea that a single consistent dimension is the best way to make all decisions strikes me as fundamentally wrong. My response is “how much are you willing to pay to have a single consistent measure used for making decisions about land use, health care, etc.?”

        • Dale, my example (which I reiterate I did not make up) illustrates that the value of something can be very different from what someone is willing to pay for it. That’s all I’m trying to do. You can dislike it as much as you want, but unless you tell me it doesn’t illustrate what I’m trying to illustrate then I don’t see your problem with it.

        • The idea that the way an individual makes decisions about how to spend their scarce time (to use your example) equates to the only sensible way for society to decide how to spend scarce resources is almost incomprehensible to me. I mean plenty of people hold that view, but to believe it is self evident and the only way to decide things is wrong, short-sighted, and dangerous. Having a utility function is one thing – using it as the basis for social decisions is quite another. Further, economic theory makes a number of assumptions about utility functions that are difficult to believe at an individual level, and inappropriate for use at a societal level (without making a number of strong assumptions).

          I am not objecting to your characterization of one way to make decisions (including ones like the land use example you provided) – but I do object to you portraying it as the only (fill in the blank – consistent, rational, correct) way to make decisions. Single dimensions are not stupid, they may be simple, but I do find them soulless under certain circumstances.

        • @ Phil

          I’m jumping into this without having fully followed the discussion at hand so I do apologize if I have missed something blatant.

          “….If you grant that, by choosing A over not-A, you have indicated that you prefer A over not-A, then you have somehow evaluated all of the many parameters involved in A and not-A and managed to put them on an axis in which A > not-A. This does not require rational thought. Birds do it, bees do it. Even educated fleas do it.”

          I believe the above is not necessarily true, in the sense that it is too absolute about information. I agree that in making a choice for A you’ve decided, on the basis of information at hand and on the basis of how you’ve processed it, that A is to be endorsed over NOT A. However, that doesn’t mean you have all or most of the relevant facts, nor does it mean you’ve synthesized it in a way you will come to like 10 minutes from now.

          I’m just pointing that out because you specifically mention that the decision of A over NOT A implies such synthesis (“somehow evaluated all of the many parameters involved”) whereas a decision implies no such thing though a decision does imply at the very least your willingness to accept ignorance over such parameters.

          That said, doesn’t it have to be immediately true that we reduce decisions into a go/no-go framework (a single dimension) and hence reduce all of those aspects (known or not) to that dimension? I’m pretty sure this is what you are saying and it seems immediately true. Is there really controversy about this?

        • AllanC,
          Yes to everything you say…especially the final paragraph. To make a choice you have to compress all of the dimensions into a single one so that you can evaluate whether A is better or worse than not-A. I would not have thought there was controversy about this since, as I noted previously, it’s close to being a tautology. But evidently some people do disagree.

        • Dale, you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one. You need a utility function that puts everything in one dimension. This does not have to be dollars, but it’s often convenient to express it in dollars.

        • Dale is acknowledging that it’s true that a representation in those terms must exist, but the decision making process does not necessarily need to involve computing that representation. Furthermore, I’d add that identifying that representation with “value” isn’t always correct outside of a strictly economic context.

        • Phil
          Really, are you serious? You say “a consistent dimension” but surely you know that it is possible to think in more dimensions than a single one. Also, choosing the “best” option is not a requirement – in fact, it may not even be desirable. Kenneth Boulding had a great quote: “under uncertainty, optimization leads to hitting the target of disaster.” Sometimes, satisficing works better than optimizing. Sometimes finding a decision region that dominates another region can lead to good decisions. Are you really suggesting that we must optimize and must have everything measured in one dimension? From your many contributions on this blog, I find it hard to believe you are really suggesting that.
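          The idea of one decision region dominating another can be made concrete with a small Pareto-dominance check. This is only a toy sketch with invented options and scores, not anything from the discussion: an option can be screened out, and a shortlist produced, without ever collapsing the criteria into one number.

```python
# Toy illustration of screening by Pareto dominance. Each option scores on
# two criteria (cost_saved, lives_saved); higher is better on both.

def dominates(a, b):
    """a dominates b if a is at least as good on every criterion
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

options = {
    "A": (3, 5),
    "B": (4, 5),   # B dominates A: no weighting of the criteria is needed
    "C": (1, 9),   # B and C are incomparable without a single dimension
}

# Keep only options that no other option dominates.
undominated = {
    name: score for name, score in options.items()
    if not any(dominates(other, score) for other in options.values() if other != score)
}
print(undominated)   # A is dominated by B; B and C both survive
```

          Note that dominance alone leaves B and C unranked; picking between them would require some further judgment, which is exactly where the single-dimension question arises.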

        • Dale, yes, I’m really serious.

          Suppose you’re trying to decide whether to spend your evening watching a movie, volunteering at a shelter, hanging out with a friend, fixing your dripping faucet, getting some work done, working out at the gym, or practicing the guitar. You can only do one of these — let’s pretend you can’t practice guitar while hanging out with your friend, and so on. There are many social and psychological dimensions here…but ultimately you have to be able to put these in order, or at least to have one of them bubble to the top, in order to choose one. You have to conclude (possibly) that A > B > C > D > E, or, at least, that A > (any of B, C, D, E). This ability to put choices in order requires that they be put on a single dimension.

          Economists and decision analysts refer to this as a “utility function.”

          This can’t be news to you.

          Please stop with the patronizing view that anyone who thinks you can put all this stuff on a single dimension is soulless or stupid or simple. You’re the one who doesn’t understand something.
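          A minimal sketch of the ordering Phil describes, with invented utility numbers for his evening-activity example: once the many dimensions have been compressed into a single score, choosing is just an argmax along that score, whatever process actually produced the numbers.

```python
# Sketch of a one-dimensional utility ordering over evening activities.
# The utility values are invented for illustration only.

activities = {
    "movie": 6.0,
    "volunteer": 8.5,
    "friend": 7.0,
    "faucet": 3.0,
    "work": 5.0,
    "gym": 6.5,
    "guitar": 7.5,
}

# Choosing one option = taking the argmax along the single dimension.
choice = max(activities, key=activities.get)

# The same scores also induce a full ordering of the alternatives.
ranking = sorted(activities, key=activities.get, reverse=True)

print(choice)    # volunteer
print(ranking)
```

          The sketch is agnostic about where the scores come from; it only illustrates that a choice rule which always picks one option behaves as if such scores existed.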

        • Phil –

          > …but ultimately you have to be able to put these in order, or at least to have one of them bubble to the top, in order to choose one.

          I guess you do, but I certainly don’t. In fact I sometimes specifically avoid ranking them by “value” so as to pretend I don’t have to deal with the consequences of my choices.

          And economists have been grappling with adjusting from the previously mistaken assumption that people typically behave in such a way as to “maximize utility.”

        • Joshua, what you’re saying doesn’t really make sense. At any moment you are choosing to do something, even if “something” is just veg out in your chair. That thing you have chosen to do is the thing that, at that moment, you have selected from all of the other choices you could have made. Whether you like it or not, you’ve chosen one option out of many. You’ve implicitly put your options in order and selected the best one.

          I don’t have the patience to argue this point. You and Dale can believe what you want.

        • Phil –

          > Joshua, what you’re saying doesn’t really make sense.

          This might be part of the problem – what seems to me to be your belief that everything I do has to “make sense.” This notion of the rational actor, indeed, was one of the foundations of economics thinking up until fairly recently – but surely you know of the drastic change that has taken place in that regard. I’d like to claim that I’m that rational, but I’m certainly not. Much of what I do doesn’t actually “make sense.” I’m not saying that I act irrationally – just that I don’t always act in some way that really conforms to some construct of what’s “rational.” I often live in a space between rational and irrational.

          > At any moment you are choosing to do something, even if “something” is just veg out in your chair. That thing you have chosen to do is the thing that, at that moment, you have selected from all of the other choices you could have made.

          Again, you might work that way – I have no reason to believe that you don’t – but I don’t. At least not all the time. If I did I’d go crazy. I don’t run down some kind of list of all the possible things I might do at any given moment, evaluate their utility, and then decide which one to do (on the basis of its utility return).

          At any rate, my overall sense of this discussion is that you, Daniel, and jim are arguing from some theoretical construct as if you can then apply that construct to reality. Certainly I think that applies to jim’s argument.

          I don’t think reality works that way. Even more problematic is jim’s repeated argument, which as near as I can tell you think works for you also, that you can get into MY head to know how I think and that it must conform to your theoretical model. That’s a flawed mode of thinking, IMO.

          I suspect that this discussion might touch on to the notion of bounded rationality.

        • Joshua,
          If you choose action A over action not-A, then you have some function that somehow evaluated A against not-A and decided A is better. I’m not saying that decision has to be formal or has to be rational or has to be consistent with other decisions, I’m just saying that in order to make a decision you somehow have to be able to put one thing above another on your scale of preference or utility or whatever name you choose to give it. And no, I do not think your choice has to “make sense.” It would be nice if your discussions on this blog made sense, though!

          In order to prefer A to not-A, you have to consider whatever factors of A and not-A are important to you, and somehow compress them all into a single dimension so you can determine which one you prefer. This is very close to a tautology.

        • Phil –

          First things first.

          > It would be nice if your discussions on this blog made sense, though!

          It’s a pet peeve of mine when people say “That doesn’t make sense” in blog comments – not only when directed at me, but generally.

          I think that what you really mean is something like “I don’t agree,” or “I don’t fully understand what you’re saying.” I mean yeah, it’s possible that I might say something that doesn’t make any sense, and I have little doubt I’m just about the least smart person in this room, but believe it or not I’m actually smart enough to not make nonsensical arguments. At least not usually.

          Now for the actual discussion.

          > If you choose action A over action not-A, then you have some function that somehow evaluated A against not-A and decided A is better.

          I get that might be how you function in the real world, and I get that as a logical construct, but that’s not how I live my life. At least not all the time.

          Your first assumption is the biggest problem with your argument. When I choose action A, I’m not necessarily actually making a choice of action A over action not-A, or action B, or C, etc. In fact, often when I take action A it’s specifically because I didn’t think about making a choice, and didn’t want to think about making a choice. If I had consciously thought about the fact that I was making a choice, I would far more likely have chosen to take action not-A, or action B, or action C, precisely because the expected utility of those other actions would be higher. I don’t think that translates into me acting “irrationally”; likewise it doesn’t fit your construct, and neither does it exactly “make sense.” It lives somewhere in between all of that.

          I get that it might be hard for you to understand how people who aren’t as smart and productive and logical as you might go about living their lives – but the problem is you can’t just reflect on how you live your life and just assume that your way of going about life applies for others.

          > I’m not saying that decision has to be formal or has to be rational or has to be consistent with other decisions, I’m just saying that in order to make a decision you somehow have to be able to put one thing above another on your scale of preference or utility or whatever name you choose to give it.

          I’m not sure how else to say this, but that’s not how it works for me.
          And you can’t get into my head. Now you can think that you know better how it works for me than I know how it works for me. But I tend to doubt that’s true – and usually it’s just a basic logical flaw when someone makes an argument based on the assumption that they know better what’s going on in someone else’s head than that person knows.

          > In order to prefer A to not-A, you have to consider whatever factors of A and not-A are important to you,

          I make plenty of decisions each and every day that don’t involve consideration of what’s important (or more important, or most important) to me. Of course that could mean something like whether I’m going to eat a bagel, which doesn’t involve any consideration of what’s “important.” But even more: whether I’m going to eat a bagel with cream cheese slathered on it, when I’m trying to reduce my cholesterol intake, and then there’s the whole aspect of my reflex to reduce my cholesterol intake when I’m not even at all convinced that dietary cholesterol intake actually has a meaningful impact on heart disease. So with the bagel with cream cheese there is some aspect of deciding what’s “important,” but lots of choices I make are at the long end of a chain of all kinds of instincts and impulses and u-turns and head fakes that really have little to do with what is “important” to me, or more “important” to me. And even if they are directly related to what is “important” to me, the end result largely reflects a lot of confusion about just what exactly IS important to me.

          Simply choosing A and not not-A is not going to be a choice of what is more important to me, let alone choosing A versus B, and C, and an endless list of things. If I thought about them all and actually made the consideration you’re telling me I make, I would most certainly have NOT chosen A, and I would go crazy in the process, or at least be completely paralyzed and probably starve to death before I reached the end of all the possible choices I had to evaluate for their relative importance!!!!

          > and somehow compress them all into a single dimension so you can determine which one you prefer.

          Look, you can argue by assertion all you want. You can keep telling me that you know what goes on in my head when I decide to choose A. That’s your prerogative.

          But again, this notion of the rational actor is a concept that has been written about and studied quite a bit. It is indisputable that there has been a large-scale revision in economics because theories based on the “rational actor” turned out to be not very predictive of reality. That is very close to what I’ve been getting at here and I have repeatedly referred you to that aspect and you have yet to address it. I also referred you to bounded rationality, another concept that is close to what I’m arguing here.

          Now indeed, like I said, maybe what I’m saying “makes no sense,” but if so, then you should consider either (1) explaining to me why my argument isn’t at least adjacent to the direction that many economists have taken recently vis a vis the “rational actor” and “bounded rationality” or, (2) explain to me why there are so many smart people, much smarter than myself, who share with me that characteristic of making no sense.

        • Phil –

          I Googled “rational actor” and here’s the first hit. Now of course, I’m not suggesting that Investopedia is some kind of ultimate authoritative source, and I hate it when people substitute decontextualized online definitions for an argument, but it might be a reasonable source to assess whether or not what I’m saying “makes sense.”

          > Rational choice theory states that individuals rely on rational calculations to make rational choices that result in outcomes aligned with their own best interests.

          > Rational choice theory is often associated with the concepts of rational actors, self-interest, and the invisible hand.

          > Many economists believe that the factors associated with rational choice theory are beneficial to the economy as a whole.

          > Adam Smith was one of the first economists to develop the underlying principles of the rational choice theory.

          > There are many economists who dispute the veracity of the rational choice theory and the invisible hand theory.

          Now let’s forget about the invisible hand part…

          That first sentence looks to me to be a reasonable rough approximation of at least part of what you were saying.

          And the last part seems to me to be a reasonable rough approximation of at least part of what I was saying.

          So maybe you could help me to understand why I’m not making sense if you can explain to me why that reference is unrelated (at least in part) to our discussion.

          Otherwise I’ll have to continue to live in non-sense making ignorance, or just assume that I was making sense, and you just didn’t get what I was saying.

        • FFS I’m not talking about making a rational choice! Nor a correct choice, a logical choice, or however else you are trying to characterize what I’m saying. I’m talking about making A CHOICE. If you grant that, by choosing A over not-A, you have indicated that you prefer A over not-A, then you have somehow evaluated all of the many parameters involved in A and not-A and managed to put them on an axis in which A > not-A. This does not require rational thought. Birds do it, bees do it. Even educated fleas do it.

        • This is in reply to your reply to AllanC above. Of course, I don’t disagree that when you choose A you have revealed that A is preferred to notA – with the caveat that it is a choice made at a particular time with particular information and under particular circumstances. If any of those things change, your choice may change. I do think it is a tautology to say that you have reduced things to a single dimension by which you have evaluated A to be superior to notA. That single dimension may not be knowable ahead of time; indeed, it may not be stable or useful in any sense except that it can be used to describe the decision that you just made. So, it isn’t clear that it is useful in any way. It can’t be used to necessarily predict any future choices, nor can it be used in a normative sense to evaluate your choices.

          Let’s be concrete. I choose to get in my car and drive 15 miles to go hiking. Clearly, I have spent some money and time and exposed myself to risk in driving to that particular location. After the fact, we can try to estimate all those costs, reduce them to a single dimension (whether money or something else, hopefully not utils), and then declare that my hiking is “worth” those costs to me. I do not disagree with any of that.

          But I still don’t see what the use is of that calculation. It certainly can’t be used to apply to a societal decision about whether to improve railroad grade crossings to save lives – it reflects a personal decision I made and not a social decision. Further, it can’t be used to predict my future behavior, unless you want to make bad predictions. I suspect my other behavior will appear quite inconsistent with that one decision, unless we accept the fact that my preferences are highly unstable. Can it be used to state the implicit value I put on risks to my own life? I suppose we can do that, but I’m not sure what use we can make of that. Such an exercise can be instructive for me, and it may even influence my future behavior by showing how inconsistent many of my decisions are. But I would object to anyone taking that implicit value and applying it to my valuation of any other risky decisions. The context of those other decisions matters and I’m not willing to be held to the idea that because I exposed myself to that particular risk of driving to that particular trail it means I am willing to be exposed to risks X, Y, and Z because they cost “less” than the risk I voluntarily exposed myself to.

        • Phil –

          Earlier I offered two ways that I might view your statement that what I was saying doesn’t make sense: (1) you didn’t understand what I was saying, and (2) you just disagree with me. Now I realize that a 3rd might actually be more appropriate – that I don’t understand what you’re saying. Here’s why that might be the case:

          In this thread you have now said that:

          > To make a choice you have to compress all of the dimensions into a single one so that you can evaluate whether A is better or worse than not-A.

          and

          > You need a utility function that puts everything in one dimension.

          and

          > Economists and decision analysts refer to this as a “utility function.”

          and

          > You’ve implicitly put your options in order and selected the best one.

          and

          > you somehow have to be able to put one thing above another on your scale of preference or utility or whatever name you choose to give it.

          and

          > In order to prefer A to not-A, you have to consider whatever factors of A and not-A are important to you,

          and, finally,

          > FFS I’m not talking about making a rational choice! Nor a correct choice, a logical choice, or however else you are trying to characterize what I’m saying. I’m talking about making A CHOICE. If you grant that, by choosing A over not-A, you have indicated that you prefer A over not-A, then you have somehow evaluated all of the many parameters involved in A and not-A and managed to put them on an axis in which A > not-A. This does not require rational thought.

          Bold added of course.

          So leaving aside a few things that I emphasized, I see you saying that I’m looking at a series of options, considering and then ranking them in terms of what’s important to me, what their utility might be, indeed what their “utility function” might be, to what extent I prefer them each relative to (all) the others, and what is the “best,”….

          but that there isn’t logic or rationality or an evaluation of correctness involved.

          Yeah. I’m having a hard time understanding what it is that you’re saying. Because to me it looks like what you’re saying isn’t consistent, and maybe even self-contradictory. I have a hard time seeing, for example, how evaluating an (infinite?) list of possible actions, and ranking them on the basis of importance, or preferability, or utility, wouldn’t be a logical or rational process.

          Let me try to explain my viewpoint again. I’m sorry that it’s repetitive, but I still haven’t actually seen an indication that you accurately understand what it is that I’m saying (maybe we should try Rapoport’s rules here: https://rationalwiki.org/wiki/Rapoport%27s_Rules)

          Let’s imagine I decide to go for a walk into the woods behind my house. I start by picking up my left foot and putting it down. Then I pick up my right foot. One might say at that point I have many options of what action to take next. I could “choose” to do a belly flop onto the ground, or turn a one-footed back flip (well, actually I couldn’t choose to do that, but I could choose to try to do it), or I could “choose” to put my right foot down.

          I dunno. That seems to me to be kind of off to refer to that as a “choice,” and certainly to refer to it as a choice based on which of the many options available to me is the most important, or returns the most utility, or is the one I most prefer.

          Here’s another example. Today I worked on a big pile of branches I had amassed, to be used for building cages around my fruit trees to protect them from deer. I was kind of pissed because I didn’t get to building those cages and I had this big pile of wood that I had collected, and it was starting to rot a bit and would no longer be very good for building the cages. And I was in a bit of a bad mood because some dude online was telling me that comments I had written were nonsensical. So anyway, I decided to chop those branches up into shorter sections so I could use them as mid-sized kindling in the fireplace.

          The pile was a knotted mess. I reached for one branch, randomly, to pick it up. It wasn’t the one closest to me. Did I opt for action A by reaching for that branch and not any of the others? Yes.

          Did I reach for that branch because reaching for that branch was more important to me? No. Not really.

          I reached for that branch (A) as opposed to the others (not A), just because.

          It was random.

          Did I choose option not A? No! Did I not choose it because it was further away? No.

          It was random that I didn’t choose not-A. I reached and picked something. In the act of putting my hand on a branch, it became option A, and not not-A. It wasn’t option A until my hand landed on it. The act of putting my hand on it is what made it option A, and all the other branches not-A. There was no choice A and a list of not-As until my hand hit a branch. I certainly hadn’t ranked the branches according to ANY metric prior to my hand landing on A.

          I also didn’t pick option B, or option C, and so on. But it wasn’t because I chose not to pick them – not in any meaningful sense. I didn’t reason through the counterfactual of what it would have meant had I chosen one of them. There wasn’t any ranking on any scale.

          And then when I put my hand on A, there was resistance. It was hard to pull it out of the pile. I figured maybe one of the others would pull out more easily. If I pulled really hard on A, then the whole pile would likely shift and start falling all over the place. So I randomly put my hand on another branch, and I pulled. And there was resistance there as well.

          Was it branch B, or C, or D that I pulled on second? I have no idea. Was it not A? Not in any meaningful sense. It was just a random branch. But yes, in another sense it WAS not A.

          Were the choices sorted by some consistent metric? No. It was random. Then I reached for another branch and it resisted less. Was it also option not-A (the second of the not-As)? Was it option C, or D? I have no fucking idea. It was random. It didn’t resist. I pulled it out and cut it up.

          And maybe if I weren’t pissed off because I didn’t use the pile the way I had intended, or maybe if I hadn’t had someone telling me that what I said made no sense, I would have been more patient. Maybe I would have been more deliberate, and actually looked at the pile in some depth, to see if I could figure out some kind of logical system for pulling out the branches, or an ordering in terms of likely efficiency (a kind of ranking of the utility of my choices).

          Sorry for the rambling, but what I’m trying to convey is that I think there is something of a continuum. I think there are plenty of times that I choose to act on something that is pretty much random. I mean sure, in some sense you could say that if I acted, then as a matter of definition I chose A and not not-A, and there must have been some reason or ranking behind why I did that.

          But I think there are plenty of situations in everyday life where that just isn’t a meaningful description of what I do. But there is a rather seamless transition, step by step, to decision-making where there absolutely is a coherent system of assessments being made.

          I see this as a kind of continuum going from a rather random process where what you’re describing doesn’t apply to one where it does, more at the end of the spectrum as described by the rational actor concept.

          At any rate, it seems to me that along that continuum, you are drawing some kind of a distinguishing line between what it is that you’re describing and what economists are describing when they talk about the rational actor. I mean it seems to me that what you’re describing is pretty damn close to the rational actor description, but you’re insisting (I think) that this absolutely (and obviously) isn’t the case – that you aren’t describing the rational actor process (because you say rationality has nothing to do with it). But I don’t get it. I don’t understand where you are drawing the line of distinction.

          I’m wondering if you could try to explain to me where you do so?

        • “So the people collectively value the park at $200 million, yet the high bid is only $50 million.”

          1) “friction” is just a cost that you’re leaving out of the “value”.

          2) What people *say* things are worth to them in dollars and what people are actually willing to pay for them are two different things. That’s the whole point of WTP. You can say anything you want, but at the end of the day the value is how much you let out of your wallet or how much you’re willing to accept for the item to get rid of it and have the cash.

          3) But even if your example were correct in every detail and every single person involved knew this, they’d still sell the item at the discount price in order to resolve the situation quickly. In other words, they make the choice to sell at a lower price / value in order to not be bothered by the details for years on end, as they would be in your example.

          Your example actually clearly illustrates my point rather than refuting it. The higher value you claim is just a fantasy value; but even if it were real, people don’t care because they want the cash now. Therefore, the value is what it sold for.

    • I like your “banding together” idea for another reason, and that is that economists have no standard for truth.

      When a crank proposes some flawed statistical analysis, anyone can break out the maths and prove them wrong; you don’t need to ask your colleagues, because the standard of truth can be applied by anyone in the field. In the hard sciences, truth is less of a social construct.

      In economics, my (flawed?) perception is that any crank economic theory has to basically be resolved by argument from authority, i.e. by citing the consensus of the field, because it’s difficult to prove things (except some very simple ones). That means they need to be a lot less open-minded about the heterogeneity they allow in their field.

      • Mendel:

        That’s an interesting point. Something similar happens in medicine/epidemiology. It wasn’t enough to just let the Andrew Wakefields of the world publish their research and then let it be refuted. Maybe the key issue here is that these are high-profile, important topics. In the hard sciences, topics are more academic. An exception that proves the rule is climate change: it’s hard science and it has crank theories, but the crank theories can’t just be left to wither on the vine, because they have policy relevance.

  8. I am surprised at the number of people here who have not heard of Judy Shelton. If you scanned New York Times headlines last year you’d have learned that her out-of-the-mainstream economic positions (particularly on the gold standard) caused much controversy when Trump nominated her for a position on the Federal Reserve Board of Governors.

  9. Economics in the USA, like psychology in the USA, seems torn between being a real academic science (in which case they would have to say “we don’t know” a lot) and being sages who pronounce practical wisdom. There are many economists who believe they are the only thing standing between their country and the next Great Depression, and that they need to speak with one voice and hide any doubts. (Just as the Milgram experiment, the Stanford prison experiment, and the experiment with teens in a British park were presented as the thin line between civilization and authoritarianism.) I think that is why US economics, like US psychology, publishes a lot of very weak arguments backed by very great authority and very effective publicity.

  10. [Top level because nested comments become unreadable]

    I’ve not followed the exchange closely but Phil seems to have gone from:

    “Just as physicists know that planes aren’t frictionless, gas isn’t infinitely compressible, etc., economists know that markets aren’t collections of rational actors who have the same information. They aren’t dumb, at all. But “they” (by which I mean the subset of economists I find worthy of mocking) seem to forget these limitations when it comes to discussing what happens in the real world.

    “For example, you’ll see grown men (and they are usually men) say that the value of something is what someone is willing to pay for it…and they’ll mean it!”

    to:

    “you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one. You need a utility function that puts everything in one dimension. This does not have to be dollars, but it’s often convenient to express it in dollars.”

    and

    “Whether you like it or not, you’ve chosen one option out of many. You’ve implicitly put your options in order and selected the best one.”

    It doesn’t seem so nutty to relate the value of something for an individual to the individual’s ranking of everything. At least if we assume that such a ranking exists!

    Expected utility provides a nice framework to think about rational choice, but more from a normative than a descriptive point of view. There are enough examples showing that people are not always rational in this way – or at least that we may need to take into account the thinking process and related emotions and not just the ultimate choice. It’s not clear how to save coherence in these simple cases, much less that we can have it when looking at a lifetime of choices.

    • It’s not necessarily descriptive even if your decisions are rational. I’m going to reiterate this quote from Dale because I think it’s important:

      The fact that after a decision is reached, we can infer some consistent measure consistent with that decision is very different than saying that such a measure is the proper way to reach a decision.

      A decision being equivalent to utility maximization is not the same thing as a person actually literally computing a utility function and maximizing it. Sometimes (most of the time) problems are easier to conceptualize outside of the expected utility framework.

      It doesn’t seem so nutty to relate the value of something for an individual to the individual’s ranking of everything. At least if we assume that such a ranking exists!

      It seems pretty nutty to me. If my friend cheats me out of $50, I’m going to demand that they pay it back, and if they refuse I will cease contact with them. At the same time, if that friend’s life is in danger, I might risk my life to save them. In addition, I’m willing to pay $50 to see the Lehman Trilogy. Going by this logic, I value the Lehman Trilogy more than my friendship, which I also value more than my life.

  11. > It’s not necessarily descriptive even if your decisions are rational.

    > A decision being equivalent to utility maximization is not the same thing as a person actually literally computing a utility function and maximizing it.

    The equivalence of what is observed with what the theory predicts is what would make it descriptive. If your preferences under uncertainty are rational in the sense of satisfying some (not so mild) axioms, the representation theorem of von Neumann and Morgenstern says “only” that they may be described as the maximization of the expected value of some function. It’s not intended to be a mechanistic description.
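    A minimal sketch of that representation direction may help (everything below is invented for illustration: the utility function, the lotteries, the numbers). Once some u is fixed, vNM-rational choice among lotteries coincides with maximizing the expected value of u.

```python
def expected_utility(lottery, u):
    """Expected value of u over a lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

def vnm_choice(lotteries, u):
    """The lottery an agent maximizing expected u would pick."""
    return max(lotteries, key=lambda lot: expected_utility(lot, u))

# A concave u encodes risk aversion: a sure $50 beats a fair coin flip for $100,
# since 50**0.5 ~ 7.07 exceeds 0.5 * 100**0.5 + 0.5 * 0 = 5.
u = lambda x: x ** 0.5
sure_thing = [(1.0, 50)]
coin_flip = [(0.5, 100), (0.5, 0)]
assert vnm_choice([sure_thing, coin_flip], u) == sure_thing
```

    The direction matters: the theorem does not say the agent computes anything like this, only that choices satisfying the axioms can be described as if they did.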

    • It’s not intended to be a mechanistic description.

      The part under dispute is the sentence “for decision making we need to place numerical quantities in a consistent dimension”, which is an explicitly mechanistic description. And if the subject of discussion is a moral allocation of resources, then what people actually think and feel is relevant, rather than just the ultimate decisions made.

      the representation theorem of von Neumann and Morgenstern says “only” that they may be described as the maximization of the expected value of some function.

      Stepping away from the probabilistic setting, there are rational (in the axiomatic sense) preferences that cannot be represented by any utility function

      https://en.wikipedia.org/wiki/Lexicographic_preferences

      though I admit these are contrived.

      • > rational (in the axiomatic sense)

        Sure. There are different axiomatic definitions of rationality. For vNM-rationality, under some axioms, preferences can be represented by numbers and uncertainty results in expectations.

        By the way, I have no interest in defending what other people may have said in other comments. When I said descriptive I definitely didn’t mean it in a mechanistic sense – and I think this is consistent with the standard usage.

  12. Enough of the name calling (myself included). There is an important issue here, and one I think worth all this effort. Every decision we make reveals something about our preferences. When Baby Jessica fell down a well in Texas, enormous resources were spent to save her. The cost of what we expended could have been “better” spent in any of a myriad of ways that would have resulted in saving many more lives. There are many such examples we can cite, including recent controversies regarding wearing masks, getting vaccinated, COVID shut-downs, etc. I do believe it is useful to point out the costs of various decisions and to show inconsistencies with other decisions we make, either personally or socially. Why do I spend x hours of my time searching to save $y on my next TV purchase when I could easily earn more than $y by spending x hours on a consulting job? Personal decisions are one thing, social decisions are more important. Why do we spend $z to save Baby Jessica when $z could save many more statistical lives by devoting it to eliminating railroad grade crossings? And so on.

    Yes, these exercises are useful. The issue, I believe, concerns whether they are normative principles to follow for decision making. Science fiction leads the way: Spock and Data would never save Baby Jessica, but Kirk or Bones certainly would. Fundamentally, I think there is something about being human at stake here. If rationality requires a single dimension for measuring utility then decision making becomes algorithmic. I believe humans depart from this algorithmic approach to life frequently and regularly. I won’t attempt to defend the human decisions as superior to the algorithms – indeed, in many respects they are not. But they are human decisions and something in me believes we should protect the right of people to behave as they do, regardless of whether or not it is consistent or rational.

    For societal decisions, this is much more complicated. At what cost are we willing to protect irrational decisions? Arrow’s Impossibility Theorem has something to say about the possibility (lack thereof) of embodying fairly uncontroversial principles in a single social welfare function. We can each have our own utility function and those can display virtually any behavior we want – but what are we to do when our individual utility functions conflict with one another? We could decide on the basis of WTP. We could submit decisions to majority vote. One person, one vote vs. $1, one vote. And, we have many other methods of making decisions that use some dollars, some votes, and some mixture of elitism or expert opinions. These are the fundamental issues at stake in this discussion.

    I don’t pretend to have the answers. I do think it is worth discussion and I am interested in hearing what people think – and people on this blog are among the most thoughtful people I can regularly interact with. From this lengthy discussion we have had, I would even back off from some things I have said. The one thing I will cling to, however, is resistance to the idea that our decisions must conform to a single metric. That, I think, is one more step towards removing humans from the decision-making process, for better or worse.
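    For a concrete taste of the aggregation problem that Arrow’s theorem formalizes, consider the classic Condorcet cycle (the voters and options below are made up): three individually coherent rankings whose pairwise majority vote is incoherent.

```python
# Three voters, each with a perfectly transitive ranking (best first).
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Every pairwise contest has a clear 2-to-1 winner...
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
# ...yet the majorities cycle: C beats A, so no option tops a social ranking.
assert majority_prefers("C", "A")
```

    Each voter is individually consistent, but “submit it to majority vote” does not by itself yield a social ordering.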

    • > If rationality requires a single dimension for measuring utility then decision making becomes algorithmic.

      NO. We still must negotiate that utility function, we must have discussions about what we want to achieve (as a society). Sure if individuals want to act irrationally, as long as they’re not harming people directly, let them. Sit in your underwear and eat cannabis brownies, or gamble away your trust fund or whatever. But we can’t let social policy be crazy irrational crap decided on by a small number of people. Imagine if when a baby falls down a well the law says that all able bodied men within 100 miles must converge on the location and spend at least 10 days trying to save the baby. No. Not ok.

      Normatively, decisions that affect more than a few tens of people should be made by an approximation of a utility maximization principle in which the affected people have a say in the utility function used. Anything else is authoritarian madness.

    • Dale:

      I appreciate your comments. The reality is that we mostly agree. But to some extent I think you misunderstand my contention. The “Baby Jessica” story illustrates that. Whether or not Baby Jessica *should* be rescued is a different question from what people are **willing to pay** for that rescue.

      Furthermore, Baby Jessica isn’t just “a life”. She’s an **innocent baby life**. In the minds of the community – many of whom are parents – the parents are also victims. The event was likely preventable but unforeseen – it could happen to anyone. Rescuing Jessica is also rescuing the parents from their grief and anguish at having not prevented this horrible event. In this instance the parents of the community are willing to pay a very high price to rescue Baby Jessica.

      Also I feel like you’re boiling everything down to individual means to pay for every single good or service separately. That’s not a legitimate characterization of WTP, because we often pay for things as a society, even though one person seems to “get more” out of it than another. How much does each individual “pay” for the Interstate Highway system, vs how much they get out of it? It’s just impossible to calculate. But the fact that the Interstate system allows almost everyone to receive some goods that they either couldn’t otherwise get or couldn’t get at a reasonable price induces us to pay for it as a society.

      You’ve seen me argue against things like collective medical care. I argue against that because it’s become such an immense source of waste that **people would be better served by paying individually**, and leaving the needs of those who can’t pay individually to charity – or some combination of public funding at a drastically reduced level and charity. I find this an ever more compelling argument as people are now trying to extend public health care to mental health care, which IMO is about 85% snake oil.

  13. The ethnicity model is right. It’s something like “I against my brother. I and my brother against my cousin. I, my brother, and my cousin against the world”

    That is, economists disagree violently among themselves (ourselves), but all are agreed that they (we) are the ones who should be arguing and that outsiders should keep their noses out of issues they don’t understand.

  14. I know, it’s hard to believe I am continuing this discussion. I just want to point to a pertinent real-world example – not meant to support any positions anyone has taken, but to stimulate some meaningful thinking about decision-making, WTP, utility, etc. The state of Oregon undertook a laborious decision making process to rank treatments they would cover under Medicaid. A good description can be found here: https://pubmed.ncbi.nlm.nih.gov/9226780/. Rankings were based (at least in theory) on cost effectiveness, medical opinion, and public opinion. An interesting study revealed little consistency of the rankings with cost effectiveness data (https://pubmed.ncbi.nlm.nih.gov/8778541/). Further issues with the rankings revealed themselves when the state legislature allocated funds – effectively drawing a line between what the state would pay for and what they would not. That put the ranking to a real test, and revealed considerable public discomfort with the rankings (https://journalofethics.ama-assn.org/article/oregons-experiment-prioritizing-public-health-care-services/2011-04).

    This is an example of how messy real-world decisions are made using a variety of “scientific” and emotional factors, without resorting to a single-dimensional ranking. As such, it might be viewed either as an example that reducing such decisions to a single dimension is not realistic or not desirable, or as an example of how failure to measure things on a single cardinal scale led to poor decisions. It is a complex case and I don’t think it should be easily seen as evidence of either a failure or success in decision-making. What I propose is that it is one of the more ambitious undertakings I have seen in public decision making and that it illustrates the problems with virtually any methodology that might be adopted. These are not simple issues. Statistics may be hard, but public decision making for issues such as health care is exponentially harder.

    • Dale,
      If you want to make a decision you have to reduce to a single dimension because you have to be able to decide that one decision is better than another.

      If you don’t actually need a decision then you can spend forever saying A is better than B in this way, but B is better than A in this other way, and C is better than both of them in both of those ways but is worse in a third way. Repeat ad infinitum.

      If you want a decision, though, you need to be able to put them in order or at least to say that one is better than the rest.

      I really see nothing I can add here. I’m sorry this isn’t clear to you.

      • I recently bought a car.

        Did I make the decision to buy such and such make and model, with such and such configuration?

        I think so.

        Did I reduce every car in the market with every combination of options, every potential seller, etc. to a single dimension to decide that this option was better than every other option?

        Definitely not.

        A heuristic procedure to narrow down the seller, manufacturer, model and options until I got – after a few days of rumination – to some final choice seems a better description of how the decision was made.

        The only way that I can reconcile my experience with your “to make a decision you have to reduce to a single dimension” is by thinking of the binary decision “order this car now and call it a day” and “not yet”: at some point the former was ranked higher than the latter.

        Or maybe that was not a true Scotsman, I mean, decision.

        • In fact “narrow down” is not the right image either. More like simulated annealing, jumping across the automobile landscape until settling down into a local optimum…
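          The metaphor can even be made literal with a toy simulated-annealing loop (the cars and their “satisfaction scores” below are pure invention, not a claim about how anyone actually shops):

```python
import math
import random

random.seed(0)

# Invented satisfaction scores over a handful of options.
cars = {"sedan": 6.0, "wagon": 7.5, "hatchback": 7.0, "truck": 4.0, "coupe": 5.5}
names = list(cars)

current = random.choice(names)
temperature = 5.0
while temperature > 0.01:
    candidate = random.choice(names)        # jump somewhere on the landscape
    delta = cars[candidate] - cars[current]
    # Always accept improvements; accept worse moves with shrinking probability.
    if delta > 0 or random.random() < math.exp(delta / temperature):
        current = candidate
    temperature *= 0.95                     # cool down, settling into an optimum
```

          Early on, almost any jump is accepted; as the temperature drops, the walk settles into a (possibly only local) optimum rather than exhaustively ranking every option.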

        • Carlos, I honestly don’t know what you’re talking about. You had a wide selection of cars to choose from, or you could choose not to buy a car at all, or you could choose to put more time into selecting a car, or you could pay a car expert to help you choose a car, etc., etc., etc. You had many choices that involved different amounts of time, money, mental effort, and so on, and somehow you boiled all of that down into the decision to buy the car that you bought at the time that you bought it. You preferred that choice to all other choices. What part of that seems odd or unclear or ill-defined?

          Don’t even bother answering. It’s hard to believe you aren’t trolling. I have completely lost patience with this conversation.

        • Well, let’s forget about the heuristics discussion.

          “If you want a decision, though, you need to be able to put [A,B,C] in order or at least to say that one is better than the rest.”

          It ain’t necessarily so. [Even after watering down the requirement to have a ranking of all to just asking for one being superior to each of the others.]

          Some extremely simple counter-examples:

          You could make a decision completely at random. [You may say that this means that all got the same “score” and randomness is the only way out though.]

          You could make a decision at random based on some weights. In this case there is a one dimensional ordering but it doesn’t determine which one is chosen. Only the relative likelihood. [I don’t think your model can be saved in this case, except trivially by saying that once the decision is taken the “score” becomes 1 for the chosen option and 0 for the others or something like that.]

          Why is a stochastic model less true than yours as a matter of principle?

          Remember what you wrote about how too many economists (and their followers) believe their simplified models of the world are much closer to the real world than is actually the case.
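          The weighted version is easy to make concrete (the options and weights below are arbitrary): the weights induce a one-dimensional ordering of likelihoods, yet they never single out one option as “the” choice.

```python
import random
from collections import Counter

random.seed(1)
options, weights = ["A", "B", "C"], [5, 3, 2]
counts = Counter(random.choices(options, weights=weights, k=10_000))

# The ordering shows up only in aggregate frequencies...
assert counts["A"] > counts["B"] > counts["C"]
# ...while every option, even the lowest-weighted, is chosen sometimes.
assert all(counts[o] > 0 for o in options)
```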

        • By the way, I wonder if there are other properties of choices that are evidently true for you.

          For example, let’s say that – given the choice between A, B, and C – A is chosen, indicating that A is preferred to B and C.

          Does it follow that if C had not been in the menu, A would have been preferred to B?

        • Hahaha OK, you got me, I can’t resist responding. What got me is your attitude that I have this strong feeling about how you “should” decide things, that you think is surely a matter of choice, whereas in fact I am making a simple mathematical statement that every reasonable person except you agrees with: that if you want to claim that one thing is preferable to another then you have to have a scale of preference on which one is higher than the other. It’s almost a tautology.

          To get to your “example”: You’re right, you don’t have to choose your preferred car. You can instead choose your preferred method of choosing a car. You could hire someone to choose one for you, or you could choose a car at random from all of the cars available on craigslist, or you could choose at random from all of the cars available at your local dealerships, or you could choose this, or you could choose that. Somehow you decide which way you prefer to choose a car, and in order to do that you need to decide which of these methods you prefer.

          Or…wait, you don’t need to choose a method of choosing a car. You could choose a method for deciding what method to use. Or you could decide what method to use to decide what method to use to decide what method to use. And so on. But at some point, if you prefer one approach to another then you have to somehow evaluate those possibilities relative to each other so you can decide which one you prefer.

          As for your other question, it seems like you’re looking for a discussion of Arrow’s Theorem or something, but I don’t see the relevance. Nobody said your preferences have to be rational or coherent or whatever.

          If, given a set of choices, you have a preference for one over the others, then you have necessarily been able to put them on a “preferences” scale in order to compare them.

        • > If you want a decision, though, you need to be able to put them in order or at least to say that one is better than the rest

          > if you want to claim that one thing is preferable to another then you have to have a scale of preference on which one is higher than the other.

          “Making a decision” and “claiming that one thing is preferable to another” are not equivalent. Unless you restrict the meaning of making decisions to claiming preferability. I take some decisions by flipping a coin and I don’t think that by “preferring” whatever the coin shows I claim that it’s the “preferable” thing in any meaningful sense.

          I’m more than happy to concede that whenever a decision is made somehow the value 1 – in some arbitrary scale – is assigned to the chosen option and 0 to every other option. This doesn’t necessarily involve any comparison though.

          Regarding the “every reasonable person except [me]”, I’m not sure if that means that I’m reasonable and Joshua isn’t or maybe you missed this comment of his: https://statmodeling.stat.columbia.edu/2021/12/15/economics-as-a-community/#comment-2038095

        • whereas in fact I am making a simple mathematical statement that every reasonable person except you agrees with: that if you want to claim that one thing is preferable to another then you have to have a scale of preference on which one is higher than the other. It’s almost a tautology.

          Pure semantics:

          That is not actually true. Consider a choice where you have 2 real-valued dimensions (let them be commodities). For any two pairs (x1, y1), (x2, y2),

          I prefer 1 to 2 if:
          * x1 > x2, or
          * x1 = x2 AND y1 > y2

          These preferences are totally ordered and hence “rational” but do not admit any utility representation: there is no common scale.

          The proof involves the uncountability of the reals, so it’s irrelevant to real life.

          This does not violate von Neumann-Morgenstern because of the continuity axiom (which is irrelevant outside of a risky setting).
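          For concreteness, the lexicographic relation is just the following (x dominates; y breaks ties). Code can exhibit the total order, though not, of course, the non-representability result, which needs the uncountability argument:

```python
def lex_prefers(a, b):
    """True if bundle a = (x, y) is strictly lexicographically preferred to b."""
    (x1, y1), (x2, y2) = a, b
    return x1 > x2 or (x1 == x2 and y1 > y2)

assert lex_prefers((2, 0), (1, 100))    # any x advantage beats any y advantage
assert lex_prefers((1, 2), (1, 1))      # y matters only as a tie-breaker
assert not lex_prefers((1, 1), (1, 1))  # strict: nothing preferred to itself
```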

        • Well, to be fair he said “if you want to claim that one thing is preferable to another”. You can simply assign a preferability value of 1 to the former and 0 to the latter. It’s really a tautology!

        • Perhaps Phil’s theory applies to random decisions I make, when I’ve “chosen” to make random decisions?

          Except, actually, I don’t think I usually actually choose to decide things randomly.

          And I kind of doubt that worms choose to head towards warm soil.

          But then again, I’m not making sense.

      • Phil –

        > If you want to make a decision you have to reduce to a single dimension because you have to be able to decide that one decision is better than another.

        There are many decisions people make that involve a negotiation between/among different dimensions, necessarily.

        For example, choosing to have a kid. People know that on one dimension it will add a lot of stress, require a lot of work, likely cause unhappiness in some respects. There’s a lot of risk involved. They’d have to give up many things they enjoy in life. They’d have to sacrifice many short term pleasures.

        On the other hand, there are many satisfactions involved. But more related to this discussion, there are many long term benefits. It might add to the meaning of life. To a sense of accomplishment or satisfaction.

        For me, that doesn’t boil down to a single-dimension decision. Not in any meaningful way.

        You are consistently arguing by assertion. I keep asking you to engage in the discussion of the parameters, of trying to share perspectives. And each and every time, you just insist on arguing by assertion. It is because you say it is. Sometimes you’re even throwing in some insults, or expressions of exasperation.

        Well, again I’ll ask you to explain how you’re distinguishing your argument from the rational actor argument many economists made for decades. As a field, as I’m sure you know, they’ve moved away somewhat from that theory as it turns out it didn’t predict behaviors all too terribly well.

        Again, I’ll say I think there’s a kind of continuum, with random acts and bounded rationality at one end of the spectrum and rational actor decision-making theory at the other. And as near as I can tell, what you’re describing is akin to rational actor theory but you’ve said that rationality isn’t involved.

        So apparently you’ve carved out some space along that continuum for your viewpoint. Leaving aside my skepticism that the space you’ve carved out applies universally as you’ve been arguing, I again ask you how you are distinguishing your law governing all decisions ever made from bounded rationality and/or rational actor decision-making theory.

        • Joshua, you say:
          “For example, choosing to have a kid. People know that on one dimension it will add a lot of stress, require a lot of work, likely cause unhappiness in some respects. There’s a lot of risk involved. They’d have to give up many things they enjoy in life. They’d have to sacrifice many short term pleasures.

          On the other hand, there are many satisfactions involved. But more related to this discussion, there are many long term benefits. It might add to the meaning of life. To a sense of accomplishment or satisfaction.

          For me, that doesn’t boil down to a single-dimension decision. Not in any meaningful way.”

          Yes, many many factors are involved in deciding whether you would like to have a child. But in the end it comes down to: do you prefer having a child to not having a child? That is one dimension.

          I’m not sure what you mean by it not being one dimension “in any meaningful way.” Or maybe I should say, I don’t care if you consider it “meaningful” or not, you have still somehow boiled all of those many dimensions down to a preference, and either Pref(have a child) > Pref(not have a child), or not.

        • Joshua again:
          As for whether I’m “arguing by assertion”, I guess that’s fair but since I am asserting a tautology I don’t really see an alternative. I claim that if you prefer A to B then you must have a way of evaluating whether, in your view, A is preferable to B. You disagree with this. I think the burden is on you to demonstrate that you can have a way of deciding whether you prefer A to B without deciding that you prefer A to B.

        • Phil, I think others are correct that it’s possible, and probably even the most common way, to make decisions without in any way rationally evaluating one’s real preferences. So-and-so decides at some point to do something (say, eat a bucket of ice cream) without in fact preferring that to the alternative of going out with friends to get a Reuben sandwich and play mini golf. They just didn’t really consider the choice: they were super hungry, and a bucket of vanilla ice cream was what they had available. If anyone had said to them “hey, what about calling up Joe and getting a Reuben sandwich and playing some mini golf,” they’d have done it, but c’est la vie. And I think way WAY too many decisions are made like this.

          But, yes, if you are going to consider decisions and try to determine which is the “best,” then you have to have a notion of “best”; I agree with you on that. I just think that way, way too many people don’t consider anything at all when they make decisions, and you can see this in the enormous numbers of people who are in terrible crushing debt, homelessness, etc., when in fact they made a large series of choices, most based on nothing more than emotion, which directly led to their plight.

        • Daniel –

          > I just think that way way too many people don’t consider anything at all when they make decisions, and you can see this in the enormous numbers of people who are in terrible crushing debt and/or homelessness and etc when in fact they made a large series of choices most of which were based on nothing more than emotion which directly led to their plight.

          I think that’s certainly partially true.

          But it’s also easy to think that at least sometimes people have more choice (or agency) than they actually do have, or at least than they perceive themselves to have, given their lived history.

          W/r/t your particular example….I own some rental property and often run into prospective tenants who have a long history of making decisions that result in poor outcomes over the long term. So I can look at those poor outcomes (e.g., a bad credit rating, a history of evictions), and assume that they made choices only on a short term, emotional basis, without much consideration or ability to tolerate delayed gratification (there’s even the possibly debunked marshmallow experiment that putatively shows a kind of etiology of that behavioral characteristic – as perhaps hereditary in nature).

          But as someone who’s relatively well-off, and who basically learned how to cope with life in an environment where it was more easily possible to make financial decisions based on long-term outcomes, it can be hard for me to appreciate just how much harder that approach is when there is much more pressure to meet immediate needs, and the possibility of delayed gratification is far more nebulous, or unlikely to ever actually manifest.

        • Phil –

          > I claim that if you prefer A to B then you must have a way of evaluating whether, in your view,

          As others have alluded to, there is much about the definition of “prefer” there that’s maybe complicated.

          I would imagine that it’s not that hard to find a textbook that says that worms “prefer” a certain range of soil temperature. But I think that, without being overly semantic, it’s reasonable to question whether worms really “choose” soil of one temperature as opposed to another because they “prefer” it.

          Yes, I agree that if you have identified that you “prefer” one choice over another, you have identified at least one set of dimensional criteria for evaluation. However, I think that often we make choices without having identified such a set of dimensional criteria, or have to wade through a complex web of sets of dimensional criteria such that it’s not meaningful to say that the choice has been made based on only one dimension. At least sometimes.

          > You disagree with this.

          Not really. The problem is that, despite my attempts to clarify, you seem to think that I disagree that if you’ve identified one and only one set of dimensional criteria, and on that basis you rate two items (or a finite set of identified items) and rank-order the choices by those criteria, then you are making a choice on the basis of a one-dimensional ranking of the items you’re choosing between.

          OF COURSE I AGREE WITH THAT, AND YES, THAT IS TAUTOLOGICAL.

          And no, I don’t disagree with that. My point is that construct is quite limited in range, and as near as I can tell, you’re saying that all choices are made within that construct.

          > I think the burden is on you to demonstrate that you can have a way of deciding whether you prefer A to B without deciding that you prefer A to B.

          But that’s at best orthogonal to what I’ve been saying, if not a complete non-sequitur.

          That’s why I’ve been (fruitlessly) trying to get you to tell me how your construct is different than the rational choice concept (except by, maybe, saying that theory involves rationality whereas yours doesn’t).

        • Phil –

          I missed this earlier comment… let me go back to that.

          > But in the end it comes down to: do you prefer having a child to not having a child? That is one dimension.

          Perhaps this is getting closer to the source of the disagreement. I don’t think it’s meaningful to say that it’s simply a choice of whether I’d prefer having a child to not having a child. There’s no way for ME to look at that decision in such a simple manner. I might prefer having a child in some respects and prefer not having a child in others. It’s a trade-off. There’s ambiguity involved. I’m not sure what I “prefer.” Maybe I’d prefer to not have a child but do so anyway because my partner wants a child.

          It may not be that I actually know which I’d “prefer” in any meaningful sense. Just because I decide to have a child doesn’t mean I’ve decided that’s my “preference.”

          I think this may be a difference in style, or approach to life. I don’t object to you seeing your decision matrix in such a fashion. I don’t doubt that it works for you and describes your process. I’m saying, however, that it doesn’t work for me, or describe my process, at least not all the time. And I don’t think I’m singular in that respect.

          > I don’t care if you consider it “meaningful” or not,

          Lol. And I don’t care whether you care whether or not I consider it “meaningful.” My point wasn’t that you should care what I consider meaningful. What I’m telling you, when I say that I don’t consider it “meaningful” to claim, for example, that worms “prefer” warm soil, or that my decision whether or not to have a child boils down to a single-dimensional decision, is that I don’t agree with you, and I’m trying to understand what really lies at the root of our disagreement.

          I mean yeah, I could take your word for it that what I’m saying just doesn’t make sense. But I’d rather have you try to explain to me in a way I can understand, why it doesn’t make sense. And just repeating that it doesn’t make sense because you assert it doesn’t make sense (effectively) doesn’t help me to understand that.

          Which is why I keep asking you confirming and exploratory questions to better understand your viewpoint.

        • “That’s why I’ve been (fruitlessly) trying to get you to tell me how your construct is different than the rational choice concept (except by, maybe, saying that theory involves rationality whereas yours doesn’t).”

          That’s the difference. “Rationality” is a whole different ball of wax. What I’m talking about has nothing to do with rationality. I feel like you’re asking “please explain why your viewpoint is different from Gibbon’s views about the fall of the Roman Empire.”

          I am claiming that if you prefer A to B then — no matter how many parameters and dimensions are involved in the evaluation — you have been able to determine that your preference for A is greater than your preference for B. That’s all I’m saying. Please stop claiming that I am talking about rational actor theory.

          As to how to explain that if you prefer A to B then you have applied some sort of evaluation that allows you to determine that you prefer A to B…I’m at a loss. Sorry.

          If it helps: I shouldn’t have said that in order to choose something you have to know whether you prefer it. That’s wrong. This is all about preference. If you prefer A to B then you have evaluated A and B and determined that you prefer A to B. I’m just repeating myself but I don’t know what else to do.

        • Phil –

          > That’s the difference. “Rationality” is a whole different ball of wax. What I’m talking about has nothing to do with rationality.

          So maybe this is the logical ending point.

          Again, since I don’t see HOW you can differentiate what you’re describing from one that requires a rational thinking process, I don’t see how you distinguish what you’re describing from the rational actor decision-making theories. I think those theories have some utility, but also fail to paint a full picture.

          I fail to understand how reducing all possible options to a choice between only two, and considering and then choosing on the basis of what’s “important” or what the “utility” is or what I “prefer,” doesn’t involve a rational thinking process.

          I asked you that repeatedly, and you haven’t answered.

          I also don’t understand the logic of equating ALL of my decision-making processes to a worm moving to soil of a certain temperature range. They don’t seem analogous to me.

          Maybe if you could reference some more elaborated treatment that explains your theory, it would help me to understand what you’re arguing.

          I’ll leave you with one more, I think interesting example.

          In this thread you have repeatedly indicated that the best option for you would be to stop writing comments.

          Yet you have repeatedly written comments after having said that. This isn’t at all an uncommon phenomenon. You see it all the time in comment threads. People say “Talking to you about this is a waste of my time” and then they continue to talk to that person (usually while blaming that person for wasting their time, illogically, since if someone wastes their own time, they themselves are obviously the ones responsible).

          Anyway, to me that indicates that you were deciding what actions to take along multiple axes of decision-making that were in some ways inconsistent with each other. For a particular action, you varied in terms of which axis of analysis took prominence at any given moment. But at no point were any of your decisions about that reduced to a single axis or single dimension of reasoning. And yes, although your actions couldn’t precisely be described as “rational” (e.g., continuing to respond after indicating you thought it would be a waste of your time), they were taken as part of a rational thought process.

        • I gave an example with an earthworm. Unless you think earthworms are applying a rational process, perhaps that example would be instructive.

          You’re making up this thing about reducing options to only two. I’ve been talking about A vs B or A vs not-A because if we can’t agree on that then there’s no point moving on to multiple options. And evidently we can’t agree on that.

          You keep insisting that I explain a tautology to you. I can’t do that.

          Flip it around. You sometimes find yourself faced with a variety of possible options. At least sometimes I would guess that you want to choose the one that you prefer. In order to do that, do you not have to put them in order by preference, or at least to put one of them above the other? Why don’t YOU explain to ME how you can choose between options that you prefer without evaluating your preferences for them.

        • Phil –

          > You keep insisting that I explain a tautology to you. I can’t do that.

          That’s not what I’m doing. I have asked you a series of questions, repeatedly, and repeatedly you have responded to my comments but not answered my questions. I really don’t know why you engage in that form of exchange.

          Again, as opposed to how you characterized what I’m doing, THIS is one of the questions I’ve asked but you haven’t answered:

          I fail to understand how reducing all possible options to a choice between only two, and considering and then choosing on the basis of what’s “important” or what the “utility” is or what I “prefer,” doesn’t involve a rational thinking process.

          In response you first focused on my saying you’ve reduced all possible options to a choice between only two. In your process, ultimately, as I see it, you’d HAVE to have eliminated all possibilities but two and then chosen between those two, but let’s just forget about that aspect for now.

          The more important aspect is that you’ve said things like you’re choosing which is more important, or which returns the most utility, or which you prefer, or which is best, etc.

          Can you please explain how you can choose which action is the most “important” option, for example, without engaging in a rational thinking process? I honestly don’t understand that, and yet you insist that rationality has nothing to do with it.

          Please, answer the question, or reference something other than your own explanation that I could refer to in order to better understand your position. Is this an idea that you and you alone hold? If not, can you point me to someone else who explains it? If you can, maybe I can understand it even if I can’t understand it from your explanation.

        • Phil –

          > You sometimes find yourself faced with a variety of possible options. At least sometimes I would guess that you want to choose the one that you prefer. In order to do that, do you not have to put them in order by preference, or at least to put one of them above the other?

          Of course I do. I have said that many times in this discussion. But I’ve also said that process doesn’t describe ALL of my decision-making processes, and as near as I can tell you’ve essentially mocked me for saying that’s the case.

          > Why don’t YOU explain to ME how you can choose between options that you prefer without evaluating your preferences for them.

          Again, OF COURSE that applies for some situations. I’ve never even remotely suggested otherwise. Again, my point is that I don’t think that describes my decision-making process in some universal fashion, as you insist is the case.

  15. > [20 Dec] I shouldn’t have said that in order to choose something you have to know whether you prefer it. That’s wrong. This is all about preference. If you prefer A to B then you have evaluated A and B and determined that you prefer A to B.

    > [21 Dec] I gave an example with an earthworm. Unless you think earthworms are applying a rational process, perhaps that example would be instructive.

    Does the worm claim that one choice is preferable to the others?

    • At least in the hypothetical example, I’m imagining that the worm prefers one over the other, yes. I don’t think it would claim that to be the case, no. It would be a case of revealed preferences.

      • Phil –

        > At least in the hypothetical example, I’m imagining that the worm prefers one over the other, yes.

          It seems to me that the use of “preference” to describe the process of a worm seeking out a certain range of soil temperature is problematic at the level at which you’re using it. As I said above, yes, you might find a textbook saying something like “worms prefer soil temperature in a certain range,” but I don’t think I’m being overly semantic in raising questions about that description within this context.

          I don’t think it’s meaningful to say that a worm “chooses” between options of soil temperatures based on preferences, because I don’t think it is meaningful to say that worms “choose” one soil temperature over another. They are instinctively drawn towards soil that’s within a certain temperature range. It’s not a “choice.” There is no realistic option that they’d choose soil that’s not within that range. So they’re not considering the two soil temperatures and choosing one or the other. There is no actual “choice” involved because (not A) wouldn’t even be a possibility. Not to mention, it’s not based on a “consideration” of what’s important, or has the most utility, as you have described above.

        That would be like saying a baby “chooses” to suck on her mother’s breast to get sustenance – as if a baby would “choose” to not do so. Of course, it’s certainly true that some babies have difficulty suckling, and so turn away from their mother’s breast, but certainly you’d agree that’s not because they’ve decided that breast milk is unimportant, or doesn’t have utility, or isn’t what they’d prefer over other options, right?

          Let me check: are you saying that all human decision-making processes are essentially the same process as a worm seeking soil of a certain temperature? Or would that only be the case for some human decision-making processes? And if it’s the latter, what is the distinguishing feature?

        • I disagree about your characterization of the worm. Worms choose to move towards appropriate temperature soils. If they’re placed in an environment where they have insufficient oxygen, then provided they aren’t missing certain genes, they choose to stop feeding and to initiate an “escape” behavior where they look for a way towards higher oxygen. If placed near a noxious chemical substance they will seek a path away from that substance. Etc etc.

          These are choices in the following sense. If given multiple options, such as a noxious substance, lack of oxygen, and different temperatures, they will choose one behavior over another, whatever that is (I know something about worm behavior and even built a Bayesian model for low-oxygen escape at one point, but I’m not familiar enough with the details to tell you which they prefer). Nevertheless, they’ll arrive at a behavior; suppose it’s to prefer escape from the low-oxygen environment over eating or favorable changes in temperature…

          Now you can say that this is not a *conscious* choice, but then it’s up to you to define consciousness, and that’s an extremely tricky thing to do. No, it’s better to say that within the limited sort of consciousness that a worm has, it exhibits some consistent preferences. You certainly can’t ask a worm to exhibit its preferences for a Feynman Lecture on Physics vs. a dramatic reading of Don Quixote… but it will decide on a behavior when faced with food, low oxygen, and a noxious chemical (say capsaicin). It may even stochastically switch between modes of behavior. Who knows.

          The point I think Phil will say is that when given a variety of options, if it chooses one *consistently* then it reveals a preference that can be described as a utility function over the options. If it reveals a stochastic mixture of behaviors with relatively stable frequencies then again it’s a utility function with frequency of behavior as a dimension of choice.

          We should note that it’s entirely plausible to make decisions based on **multiple factors** while still relying on a utility function f(a,b,c,d…), where f is a scalar function that expresses the tradeoffs of preference among various vectors of possibilities.

          But I’ll go further. I will admit that people of course **can** make decisions that are inconsistent and irrational and can’t correspond to an even approximately slowly changing utility function. For example people could just roll dice and consult the i-ching or whatever to decide what they’ll do at every step in their life. It will be hard to say anything about preferences here other than “they prefer using the i-ching and dice to all other modes of acting”. This can be formulated as a utility function too I suppose.

          But none of that is relevant to what I’m going to make as a normative claim which is that for **social** decision making where many people are involved, we should *as a matter of ethics* base our decision making on an attempt to capture relatively stable *average* preferences across many people and optimize that utility outcome under some computationally feasible optimization effort. Any other procedure places too much power in the hands of too few, and I believe this is a major problem with our current politics at the moment. Democracy in which people vote on preferences and then mechanistically from among the known options a computer decides on the policy that would exhibit the maximum utility under those preferences, if it were possible, would be vastly vastly superior to what we have now where a couple of senators can back-room deal billions of dollars to their buddies.
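Daniel’s normative proposal above can be illustrated with a toy sketch. Everything here is invented for illustration: the option names, the scores, and the simple averaging rule; real proposals would also need to handle ties, strategic reporting, and options that voters can’t compare.

```python
import statistics

def choose_policy(ballots):
    """Pick the option with the highest average reported utility.

    ballots: list of dicts mapping option name -> utility score in [0, 1].
    Returns (winning option, dict of average scores per option).
    """
    options = list(ballots[0])
    averages = {opt: statistics.mean(b[opt] for b in ballots) for opt in options}
    return max(averages, key=averages.get), averages

# Three hypothetical voters scoring three hypothetical policy options
ballots = [
    {"fund_transit": 0.9, "cut_taxes": 0.2, "build_park": 0.5},
    {"fund_transit": 0.4, "cut_taxes": 0.8, "build_park": 0.6},
    {"fund_transit": 0.7, "cut_taxes": 0.1, "build_park": 0.8},
]
winner, averages = choose_policy(ballots)
```

The point of the cartoon is only that, once preferences are reported as numbers, “optimize average utility” becomes a mechanical computation rather than a back-room negotiation.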

        • Daniel –

          Thanks for your response. I’ll respond to one aspect, and think more about the others.

          > These are choices in the following sense. If given multiple options, such as a noxious substance, lack of oxygen, and different temperatures, they will choose one behavior over another, whatever that is (I know something about worm behavior and even built a Bayesian model for low-oxygen escape at one point, but I’m not familiar enough with the details to tell you which they prefer). Nevertheless, they’ll arrive at a behavior; suppose it’s to prefer escape from the low-oxygen environment over eating or favorable changes in temperature…

          You’re clearly more of a wormologist than I, so let me describe a thought experiment for your input.

          Say you had 1,000 specimens of a particular species of worm. And let’s say my textbook says that worms of that species “prefer” soil at a temperature of X +/- 2 degrees.

          And let’s say you place those 1,000 worms in a large tank with soil at a range of temperatures of X +/- 12 degrees, with the soil distributed in such a way that each worm could find sufficient space to reside at any given temperature within that range. And let’s say that the soil doesn’t vary by any measures other than temperature.

          It is my assumption that after a period of time, the distribution of the worms in the soil would look fairly predictable: most of the worms would be in X-degree soil, with a diminishing distribution as you move out from X degrees in each direction, and zero worms residing in soil that is, say, more than 3 degrees from X.

          That suggests to me that there is no meaningful “choice” of the worms based on soil temperature “preference.” It’s simply that worms have an instinct to move to soil at a temperature in a given range.

          Further, I’m going to guess that if you marked each worm and repeated the experiment, you wouldn’t find that particular worms gravitate to X – 0.5 degrees as opposed to X degrees, or X + 0.5 degrees. Now, of course, I could easily be wrong in my assumptions here. And maybe you don’t know the answer. And maybe no one knows the answer to such a silly hypothetical, navel-gazing experiment.

          But if the worms would wind up distributed as I’m speculating, for me that would not describe “choices” being made based on “preference.” But some combination of instinct and random distribution.

          I’m not saying that my view is fact. Only that it is my opinion, and I don’t think it’s an opinion that “doesn’t make any sense.” I think that the “answers” in this discussion are necessarily somewhat subjective (and that Phil’s assertion that his view equates to axiomatic fact is kind of problematic).

          Of course, this is only one aspect of the discussion above – and I’d still like it if Phil could reference even one source that describes something similar to his theory of a universal one-dimensional basis for all decision-making, and I’d like it if he could describe for me how choosing preferences on axes such as “importance” or “utility” wouldn’t require rational thought. Of course, he could say that it is proven by the simple fact that worms make “choices” based on “preference,” and they aren’t capable of “rational thought.”

          And then we’re right back to where we were, I guess.
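Joshua’s thought experiment is easy to mock up as a quick Monte Carlo sketch. All parameters here are invented: if each worm simply ends up near the preferred temperature X plus some noise, the aggregate distribution clusters around X in exactly the shape he predicts, whether or not any individual worm “chose” anything.

```python
import random
import collections

random.seed(0)
X = 20.0  # hypothetical preferred soil temperature, in degrees

# Each of 1,000 worms settles near X with Gaussian noise, standing in for
# "instinct plus random distribution" rather than deliberate choice.
settled = [X + random.gauss(0, 1.0) for _ in range(1000)]

# Count worms per whole degree of distance from X
histogram = collections.Counter(round(t - X) for t in settled)
near = sum(1 for t in settled if abs(t - X) <= 3)
# Nearly all worms end within 3 degrees of X, thinning out fast on each side
```

Note this sketch can’t settle the disagreement: the same clustered distribution is consistent with both the “instinct” and the “revealed preference” readings, which is arguably the crux of the thread.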

        • Joshua, if you look at humans you will find that they live in dwellings with a certain number of square feet per person and that it’s distributed in a stable way around an average. Does this mean humans have an “instinct” for a certain amount of space and that they don’t “choose”?

          Worms definitely make choices. If they’re too cold they choose to start moving towards higher temperatures in a way that rocks very definitely do not.

        • Daniel –

          > Joshua, if you look at humans you will find that they live in dwellings with a certain number of square feet per person and that it’s distributed in a stable way around an average. Does this mean humans have an “instinct” for a certain amount of space and that they don’t “choose”?

          Given the choice of living in whatever size dwelling they preferred, I’d guess you’d see a skew quite different from what exists.

          And if you repeated the experiment, humans would show preferences. I’d guess not so with worms beyond just a range they need to survive.

          These were my points.

          You’re certainly entitled to be sure that worms make choices.

          I remain unconvinced. Instinctual reactions don’t seem to me to require “choices,” and certainly not Phil’s contention of choices based on “preferences” after “considering” importance, utility, etc.

        • That was weird that my post showed up twice.

          Although not quite as weird as the notion that choices made on the basis of preferences and importance don’t require rational thought.

          Sure wish Phil would explain that one, but he has thus far steadfastly refused to do so.

      • Well, if we have to assume that the worm has a preference and its choice reveals that preference, the example may support your argument.

        But what if the worm “somehow” decides that left is “twice as interesting” as right and randomly goes to the left with probability 2/3 or to the right with probability 1/3?

        Say it goes to the right. Does that reveal a preference? If it does, it was not the preferred option, if we take that “twice as interesting” mentioned above as putting the options onto a single dimension.

        Or maybe you mean that, whatever the way our little friend chooses its next move, it’s ultimately represented somehow within its little cerebral ganglia as Left=FALSE, Right=TRUE. And the choice does indeed reveal that “preference.”

        Is that what you mean, that any choice necessarily comes down to somehow arriving to This=TRUE and That=FALSE on a binary axis?
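Carlos’s “twice as interesting” chooser can be simulated directly (the 2:1 weights come from his comment; the trial count is arbitrary): any single draw can still come out “right,” and only the long-run frequencies recover the 2/3 vs. 1/3 split.

```python
import random

random.seed(1)

def stochastic_choice(weights):
    """Sample one option with probability proportional to its weight."""
    options = list(weights)
    return random.choices(options, weights=[weights[o] for o in options])[0]

# Left is "twice as interesting" as right, per the comment above
weights = {"left": 2.0, "right": 1.0}
trials = [stochastic_choice(weights) for _ in range(30000)]
left_freq = trials.count("left") / len(trials)
# A single trial may be "right"; only the frequencies reveal the 2:1 weighting
```

This is the sense in which a stochastic chooser has a stable “preference” at the level of frequencies even though no single choice reveals it.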

        • I was working in the garden the other day and uncovered a bunch of worms.

          I stood back and marveled at how the worms preferred to wriggle as opposed to not wriggle.

          And I saw some worms started wriggling by curving their cute little bodies to the left, and some by curving their cute little bodies to the right. I wonder why some preferred to start wriggling in the one direction as opposed to the other?

        • You may wriggle and squirm all you want, but you will not escape the fact that if you prefer one option to another then you’ve evaluated the options on a common axis.

          Whether worms prefer one option to another is hard to know. But if they do prefer one option to another then they, too, have put the options on a common axis.

          I can hardly believe one person can disagree with this, much less two!

        • If all that you mean is that any choice necessarily comes down to “somehow” (unspecified) arriving to This=TRUE and That=FALSE on a binary axis, I don’t disagree.

        • Yep, that’s pretty much all I mean. Let’s remember how all this nonsense got started, seemingly a hundred years ago:

          I wrote:
          “Dale, you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one. You need a utility function that puts everything in one dimension. This does not have to be dollars, but it’s often convenient to express it in dollars.”

          Dale replied “Phil
          Really, are you serious? You say “a consistent dimension” but surely you know that it is possible to think more multidimensional than a single one.”

          And then we were off to the races.

          If you want to choose the best choice then you have to be able to put the choices in order, or at least put the best one above the other ones. That seems inarguable to me, but it turns out I was wrong. (Not wrong about the need to be able to say one choice is better than the others if you want to say one choice is better than the others; that is indeed correct. But I was wrong that it was inarguable.)

        • Phil, here’s a more mathy way to say what I think you’re saying.

          All complete totally ordered fields are isomorphic to the real numbers. This is a known mathematical fact.

          So, suppose there are many dimensions along which things matter, say a vector of coordinates that describe real-world measurements (examples for the worm would be temperature, oxygen content, and food content of the current location). Let a, b be members of this vector space. Suppose there is a scheme in which we can assign some quantity U such that U(a) >= U(b) implies that the vector a describing one real-world situation is preferred to, or equivalent to, the vector b describing an alternative real-world situation, and suppose that for all possible a and b we can assign such a function value. Then we can prove that U is real-valued and hence one-dimensional.

          Basically this says mathematically what you already said in words: if you can assign preferences of one thing over another, then those preferences are real numbers and hence 1 dimensional.

          Where I think it went wrong is that you accidentally claimed that to *make a decision* requires being able to assign such numbers. Whereas in fact it’s only that to have a notion of an *optimal decision* you must be able to do so. If you’re willing to make willy-nilly decisions, you can do a lot of weird stupid stuff without ever having any consistency or transitivity of decisions (such as saying a > b and b > c but c > a, so that the choices have no ordering).
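
          A minimal sketch of that last point (the function names and example preferences here are mine, not Daniel’s): a real-valued utility assignment forces transitivity, so preferences containing a cycle a > b, b > c, c > a cannot be represented by any utility function.

```python
def consistent_utilities(items, prefers):
    """Assign numbers u[x] with u[x] > u[y] whenever prefers(x, y) holds,
    or return None when the strict-preference relation contains a cycle
    (in which case no such real-valued assignment exists)."""
    remaining = list(items)
    u, rank = {}, len(remaining)
    while remaining:
        # Find an item that no other remaining item is strictly preferred to.
        top = next((x for x in remaining
                    if not any(prefers(y, x) for y in remaining if y != x)),
                   None)
        if top is None:  # every item is dominated by another: a cycle exists
            return None
        u[top] = rank    # give the undominated item the highest remaining rank
        rank -= 1
        remaining.remove(top)
    return u


# Transitive preferences a > b > c: a utility assignment exists.
order = {("a", "b"), ("b", "c"), ("a", "c")}
print(consistent_utilities(["a", "b", "c"], lambda x, y: (x, y) in order))
# {'a': 3, 'b': 2, 'c': 1}

# Cyclic preferences a > b > c > a: no utility function can represent them.
cycle = {("a", "b"), ("b", "c"), ("c", "a")}
print(consistent_utilities(["a", "b", "c"], lambda x, y: (x, y) in cycle))
# None
```

          The cyclic case is exactly the “weird stupid stuff”: any candidate U would need U(a) > U(b) > U(c) > U(a), which no real numbers satisfy.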

        • Carlos –

          > If all that you mean is that any choice necessarily comes down to “somehow” (unspecified) arriving to This=TRUE and That=FALSE on a binary axis, I don’t disagree.

          What is the binary axis that a worm uses to choose to wriggle to the left because that’s preferable to wriggling to the right?

        • Joshua, I don’t say that to make a choice the worm uses any axis, binary or otherwise, representing preferences. I even discussed an example before of how the choice could be stochastic, with “more preferred” – in some sense – options being just more likely.

          But I can agree that making a choice consists – at least implicitly – in separating the chosen option from the other option(s). Somehow. By definition.

        • Carlos –

          Thanks.

          > But I can agree that making a choice consists – at least implicitly – in separating the chosen option from the other option(s). Somehow. By definition.

          Well, I don’t really agree with that (I get it as some abstract or theoretical or mathematical concept, but in reality I often make choices out of a blind stab, an evaluation of probabilities with an acknowledgement of uncertainty, or just randomness).

          But, my bigger issue is with the reverse engineering from actions (whether or not they can actually be reduced to a binary 1 vs. 0 frame) to a “choice.” Even for a worm, let alone a human.

        • It’s not the reverse-engineering from an action to a choice. It’s simply the identification of an action with a choice – if one wanted to define choice in such a way.

        • Carlos –

          > It’s not the reverse-engineering from an action to a choice. It’s simply the identification of an action with a choice – if one wanted to define choice in such a way.

          Okay. I see what you mean.

  16. Daniel,
    Yeah, I later backtracked that to say that it’s all about preferences: if you can say you prefer A to B then you have necessarily compressed all of the dimensions to a single metric.

    That does not seem to have resolved things for everyone, though, as you will see if you read up.

    I’m still not clear (at all) on what the others disagree with.

    • To be clear: I have no disagreement with the mathematical statement – the way Daniel has expressed it is fine. However, I do disagree with characterizing any other way of making decisions – other than adopting a measurable utility function – as “weird stupid stuff,” to use Daniel’s terminology. I maintain that the measurable utility you assign to my choices after I made them is generally not useful for predicting my future decisions (in some cases it is, but in many cases it is not, and I won’t agree to call those cases weird or stupid) and most certainly is not useful for a normative theory about how social decisions should be made.

      I believe that is the heart of the matter and where we all started. The notion that WTP is the proper single measure is one that I don’t agree with.

      For those of you who are not economists, WTP has specific meanings in economics, so if you are thinking of it as just an isomorphic measure it may not mean what you think it does. It is one isomorphic measure, and there are rules that apply to how it is derived. It does depend on your income. It is not invalidated by market failures (imperfect information, free rider problems, etc.) as those only serve to prevent WTP from being realized in the market. It also precludes values I would call “citizenship values.” You can always translate citizenship values into WTP, but that is not consistent with what I mean by the term. And I lament the fact that the very notion of citizenship values has become almost nonexistent. For example, we have succumbed to evaluating politicians and political issues on the basis of how they impact our selves (and families, if you like). The very notion that there are values beyond self-interest has almost become absurd in people’s minds. I fully realize that such values are fraught with ambiguity and difficult to separate from self-interest. But I lament that fact – we sorely need such values. That is my opinion and the reason I have continued to resist arguments suggesting that we must measure things on a single dimension and that that dimension be WTP.

      • Dale: Note that above I tried to make my normative statement about *group* decision making. I fully agree that looking at what one person did one afternoon and saying that because they decided to drive to Yellowstone they are willing to accept X probability of death in order to travel Y miles would be folly. What is needed is for there to be observed stable preferences on average. Many people are consistently, day after day, willing to drive 10 miles per day to work in order to get an income of, say, $60,000/yr, and so we shouldn’t make decisions on the basis that risks that are 1/100th of the 10-mile drive are vastly more “expensive” than $60,000/person/yr.

        Many revealed preferences are fairly stable through time on average across many people, and to fail to take that into consideration when making societal decisions is the kind of “stupid stuff” I’m saying we shouldn’t do.

        My own theory of societal decision making is that we must accept imperfect approximations to utility, and we must accept imperfect optimization, but we should strive to make decisions that lead to relatively large values of some negotiated approximate utility function and we shouldn’t accept policies that can obviously improve that utility through relatively obvious ways.

        Examples are: we shouldn’t let politicians give contracts to their friends to make “bridges to nowhere” and “guaranteed profits” for inefficient businesses that happen to operate in their districts, and we shouldn’t stick to politically traditional policy ideas such as minimum wage and expanding medicaid or extending copyright or patent protections when there are obviously better outcomes to be had from things like UBI, flat rate stop-loss insurance, shortening copyrights and eliminating them for “abandoned works”, and pre-paying for vaccines and then distributing them patent free to the world… There’s clearly utility considerations which lead to almost every “popular” economic or political policy being **way way** sub-optimal on measures that *most* people would agree on if you just had the proper conversation (for example getting together 100 randomly chosen people in a “jury” type situation and carefully eliciting how important various kinds of social outcomes are).

      • > WTP has specific meanings in economics, so if you are thinking of it as just an isomorphic measure it may not mean what you think it does. It is one isomorphic measure, and there are rules that apply to how it is derived.

        Also this. Saying that the value of friendship can be your willingness to pay because you sacrifice things for friendship and you can think of that as paying is just stretching definitions for no benefit. It doesn’t make anything more scientific or measurable, it’s just trying to be clever. On top of the practical problem of income distribution in social decision making.

        • “WTP has specific meanings in economics, so if you are thinking of it as just an isomorphic measure it may not mean what you think it does. It is one isomorphic measure, and there are rules that apply to how it is derived.”

          That’s OK, I couldn’t care less about economists’ definition of WTP.

          I’m interested in what things *actually* cost when people *actually* transact them – **reality**. It’s obvious that everyone has resource limitations. To suggest that there is, in the real world, some “WTP” that is independent of resource limitations is silly. It’s hard to believe educated people are contending that. Nothing can be valued independently of the sacrifice required to obtain it. If people had infinite resources, everything would have infinite value. The claim that there is some value independent of the sacrifice required to obtain something is just another form of the “give-a-student-a-dollar-and-see-how-they-spend-it” claim that underlies Dan Ariely’s research – a belief that how they spend that unearned dollar has some clue to what “real” value is. It’s flat out bogus. So that’s where we’re going – the happy joy story that Ariely flogs was shot down because of its poor methodology, so believer economists have dreamed up a more elaborate ruse to fool themselves.

          “it’s just trying to be clever”
          It’s about as clever as society is old. :) I guess you could rewrite all of human history to conform to your contention, but then what would happen to all those claims of privilege? You mean that after all the wailing about privilege, people aren’t using their money to get what they want and protect their position in society? That would be very big news.

          Perhaps economists and sociologists should consider the idea that after 300K years of evolution, humans and human society are successful precisely because they already understand intuitively what economists and sociologists are now in the **very early** stages of learning. It’s one of those “facts on the ground” things.

        • Welcome back, jim. Now you’ve gone off in a different direction. If I reject economist WTP for deciding issues about health care or environmental policy, that does not mean I deny that resource limitations or sacrifice are necessary for making decisions. But that leaves open a wide range of decision making processes – market based WTP is but one of these. Given the huge inequalities in resources available to people, I find many ways that resource limitations and/or sacrifices could be taken into account other than the way that markets recognize these. I’m not saying that these ways are necessarily superior to markets – but once you open the door to thinking about the limited resources that people really have, and the sacrifices that they actually make, the appeal of market-based decisions is far less attractive in my opinion.

        • Nobody has disputed that decisions are made under resource constraints, or that people make trade offs, or that people use money and power to get what they want. You might *want* people to be disputing that so you have something meaningful to pontificate about, but unless you read what people are actually saying it’s just a waste of your time to respond to them.

          My dispute is the identification of “willingness to pay” with “value.”

          Person A has $500,000 in their bank account. Person B has $100,000,000 in their bank account.

          Both need a heart transplant to live. Person A is willing to pay $500,000. Person B is willing to pay $1,000,000.

          Do you, Jim, really believe that Person A therefore values their life less than person B?

          The problem with this identification is in fact an ancient human wisdom.

          Mark 12:41-44
          41 And Jesus sat over against the treasury, and beheld how the people cast money into the treasury: and many that were rich cast in much.

          42 And there came a certain poor widow, and she threw in two mites, which make a farthing.

          43 And he called unto him his disciples, and saith unto them, Verily I say unto you, That this poor widow hath cast more in, than all they which have cast into the treasury:

          44 For all they did cast in of their abundance; but she of her want did cast in all that she had, even all her living.

          A second question. Let’s say you walk 10 minutes, pick up a pizza for $24, then walk 10 minutes back home. Someone asks “how much did you pay for that pizza.” Do you say $24? Or do you tell them “$24, and the walk to the pizza place, and the walk back with the pizza, and the time I spent on the phone ordering it, and some of my personal pride from picking up take out instead of cooking myself.” In my opinion, the first one is less confusing, because it actually conforms to how people use the word “pay”. Maybe you actually do conflate it with “sacrifice” or “trade-off” in your communication–but you shouldn’t if you want people to understand you, because those words mean different things!

        • > “$24, and the walk to the pizza place, and the walk back with the pizza, and the time I spent on the phone ordering it, and some of my personal pride from picking up take out instead of cooking myself.”

          To be clear, I think **this** is the notion that is relevant to the policy discussions though. When we say for example “how much do you pay for the COVID vaccine” the *wrong* answer is “I walked in and asked for it so they gave it to me, so it was free”

          The correct answer is something like “there was X dollars given by the govt to the manufacturer, plus we gave the manufacturer a 20 year monopoly on the patented technology, and the guaranteed money given to the limited number of manufacturers drove out other manufacturers from trying to make alternative vaccines and …”

          We should do a full accounting of all the **things** that we don’t have now because of resources allocated to making these vaccines and *that* is the amount we paid.

          This is actually usually in the first chapter of econ textbooks, but then even economists themselves often fail to really think this way and tend to think along the lines of “the govt paid $10 per dose” because it’s easier to measure.

      • Dale,
        You would have saved me a lot of hassle (mostly in dealing with people other than you) if you had not disagreed with the statement that in order to determine that one choice is better than another you need to put them on the same axis. NOW you say you don’t object to this, but you started out by not just objecting to it but by mocking me for saying it!

        How and whether to determine a utility function, whether we can trust what we think revealed preferences are telling us, how to get around Arrow’s Theorem for group decisions, etc., etc., those are all questions of great practical importance. But to even begin to answer those, you need to decide: are we or are we not, collectively or individually, trying to optimize some function? If you think the very idea of doing so is mockable then I don’t see the point of getting into more detailed discussions. But if we give up on the idea of trying to make “the best decision we can under the circumstances”, for _some_ definition of “best”, then I don’t know what we are trying to do.

        • We are almost in agreement. I do believe we are trying to optimize something. Let’s call it utility, though I don’t like a lot of the baggage that comes with that term. What I believe is that utility depends on several things – some of which can be measured by money and some which cannot. For me, the notion of rights trumps money. This means that the utility function I have in mind may not be well behaved – it may not be continuous, differentiable, etc. So, the optimization problem is not one easily solved by using a WTP measure, especially WTP as conceived by economic theory. This is why I keep getting stuck on your reference to the “same axis.” Perhaps there is no disagreement but I guess I hear “same axis” as requiring some properties of a utility function that I do not agree with.

        • Now that I’ve refreshed my memory a bit, I’m not so sure that lexicographic preferences are to be dismissed so easily. I think my notion of rights trumping WTP implies a sort of lexicographic preference.

          The extreme case, of course, is unrealistic but a more nuanced version might well be what I am thinking of. I believe the mathematics would get quite complex to actually characterize the type of preferences I am thinking of without making it an extreme case – I’m not sure I would go to the extreme of saying that I prefer an outcome where the rights of an endangered species to exist are protected to one where they are not, regardless of how much of other goods accompany that outcome. But to some extent I am suggesting that some goods/services cannot be traded for others. To anticipate jim’s objection (and possibly others), our choices do reveal that we make tradeoffs, even for some outcomes that I might characterize as “rights.” For example, our health disparities reveal that we value Black lives less than White ones – but I don’t think anyone would suggest that we use that in making public policy decisions (at least I’m not).

          All I want is that the required public discourse about values not be translated into an engineering problem that has a deterministic solution.

        • It’s not the discussion of values that’s the engineering problem… It’s the discussion of policy! Too often the discussion comes down to “my tribe wants X” vs “X sucks and my tribe wants Y” where X,Y are *policies* like maybe “medicare for all” vs “no on whatever the liberals say” (these days that seems to be GOP politics)

          I argue that arguments about which policy we prefer are stupid and fruitless. The real argument is and should be about how much we value **outcomes**. For example what percentage of people are bankrupted by healthcare bills, how many people are homeless, what is the cost of an undergraduate degree in economics, what is labor force participation rate… Etc.

          Once we have some function such that it tracks the average answers to such questions, the policy we should choose is the one the engineering suggests maximizes said thing. As it is now, people with zero skills in analyzing policies will march in the streets to demand policies that actually HARM their main goals.

        • I don’t disagree with this – but I will point out that your position is really not consistent with any notion of democracy. I’m not saying I’m a fan of democracy – it has serious limitations. But your proposal will end up with the elites making policy decisions, something you seem to reject at other times. I only hope the elites are the “right” sort, not just chosen because they have the right pedigree.

          The basic problem of how you decide policy if you don’t want to impose it on people, but when people are not educated enough to rationally contribute to the discussion, is ultimately the problem we face. I have no answers.

        • I believe that it’s possible (logically, not politically) to create a system where the elites don’t get to choose the policies by rigging the value system. For example people could vote on which are the important questions by score voting from a list of hundreds, and then a random selection of 1000 people from the SSN rolls could be tasked, like a jury, to evaluate the relative importance of these various outcome questions that achieved a relevant overall score. The resulting utility function can be used by something like the CBO or GAO or one of those groups to evaluate all policies proposed in a call-for-proposal, and then lawmakers could be required to write a law expressing that policy, with their only input being the minutia of definitions and rules that make the policy conform to the proposed one.

          Good luck, right? But it’s important to realize that we live in not the worst of all possible worlds, but a pretty crappy world compared to one in which we actually just try to work out our goals via negotiation and then use technology to achieve them. Technology is perfect for figuring out how to do X… it’s figuring out what we want to achieve that’s the hard part. Most people won’t even discuss it really. Next time you have a political argument just try asking them what their goals/preferences among outcomes are. Good luck with that.

    • 2 highly semantical objections:

      1. It’s mathematically untrue. Lexicographic preferences over R^n are totally ordered and CANNOT be compressed into a single metric/dimension. This is the classic pathological example from introductory microeconomic theory. It does not violate what Daniel said because these preferences are not Dedekind complete – the set of pairs (1, x) in R^2 can be bounded from above by (2, 0) but does not have a supremum. This is irrelevant to real life, and I mostly want people to acknowledge that it’s kind of cool.

      2. I don’t think it works as a mechanistic description of the decision making process. Even in real-world situations (like lexicographic preferences over finite or countable dimensions), you can, for example, just sort your alternatives starting from the most significant digit to the least by a radix sort algorithm, then pick the biggest one. At least on a computer, I pick the greatest element this way all the time, and it creates no problems for me! I could have collapsed them into a unidimensional float sortkey and gotten equivalent results–but I didn’t. Sometimes it’s easier to not do that and your decisions can still be rational.

      Here’s an example of what I mean. Consider the 4-tuple

      (Breakfast, Lunch, Dinner, Dessert)

      where any of the meals = 1 if I’ve eaten it, 0 if I haven’t. My order of priorities is

      dinner is strictly more important than lunch, which is strictly more important than breakfast, which is strictly more important than dessert

      If someone gave me a set of options, I would just go by the algorithm

      * Check if dinner, if both/neither, then
      * Check if lunch, if both/neither, then
      * Check if breakfast, if both/neither, then
      * Check if dessert, if both/neither indifferent

      This kind of “order of priorities” decision making is totally fine! It’s rational, well defined, organized, and I never actually had to compute the unidimensional utility representation, which is good because that’s hard and slow.
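
      The meal rule above maps directly onto how tuples compare in most programming languages. A minimal sketch (the 0/1 encoding and the option list are my own illustration): reorder each option’s coordinates by priority and let Python’s built-in lexicographic tuple comparison run the whole algorithm, comparing the most important coordinate first and breaking ties on the next, with no single utility number ever computed.

```python
# Options are (breakfast, lunch, dinner, dessert) tuples, 1 = eaten.
def priority_key(option):
    breakfast, lunch, dinner, dessert = option
    # Reorder so the most important meal is compared first.
    return (dinner, lunch, breakfast, dessert)

options = [
    (1, 0, 0, 1),  # breakfast and dessert
    (1, 1, 0, 0),  # breakfast and lunch
    (0, 1, 1, 0),  # lunch and dinner
]

# max() with a tuple key does a lexicographic comparison: it checks the
# dinner coordinate, then lunch, then breakfast, then dessert.
best = max(options, key=priority_key)
print(best)  # (0, 1, 1, 0): any option including dinner beats any without
```

      This is the radix-sort-style procedure from the comment: equivalent in outcome to having a utility function, but the one-dimensional collapse is never performed.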

      • Re the lexicographic ordering thing. I’m not entirely clear on the topic so I’ll defer to your greater knowledge. Yes, the reals are a complete ordered field; if we restrict ourselves to non-complete orders perhaps there’s something else. But I agree that this probably doesn’t seem very relevant to the real world. Imagine for example lexicographic preferences for peanuts and dollars. Having 1 peanut and any amount of dollars dominates having zero peanuts and any amount of dollars. Really? So for example I’ll give you a peanut, you’ll give me $100M, and you should say this is absolutely in your best interest. Not likely.

        Agreed that actually computing the utility is not always needed, but we’re just talking about whether a utility function expressing the preferences exists, and outside of very odd pathological mathematical cases it does.

        In real world decision making where the decision is not immediately obvious, actually computing utility and maximizing it is going to lead to better decisions in general. And I believe that satisficing is a crappy heuristic compared to a Bayesian Compute Budget (i.e., based on prior knowledge, estimate how much improvement you are likely to get from x amount of time spent searching, and then choose an amount of computing time that approximately maximizes your prior expected utility of improvement).

      • You bring back memories from long, long (long) ago. Lexicographic preferences! Yes, kinda cool. There was a time I played with such things (and probably got tested on them). I fear my math abilities are rusty and I know I mostly lost interest in such things. These days I am more concerned with how actual decisions are made and how they can be improved upon. In this long and winding discussion, a few alternatives have come up:

        jim: WTP (the economists’ measure, not the lay person’s) is the best way to make individual and social decisions. Suffice to say, I don’t agree with jim.

        Daniel: let’s have some decision making process where people negotiate what the social welfare function should look like, and use that to make improved decisions. I actually like this approach, but good luck trying to get that to work.

        Dale: Some values take precedence over others. The notion of “rights” trumps market values in some circumstances. For proof, just consider that when economists measure the value of statistical lives, they do not adjust for race or gender (“Black lives matter” but matter less than White lives if we use WTP). In particular, for environmental decisions and many health care decisions, I’d rather leave WTP out of the discussion and focus more on debating/discussing ethics and rights. That is supposedly what we have political institutions for, and a major reason we don’t just leave everything to the market.

        Unfortunately, for Dale, good luck with that! Our political institutions appear less functional than our markets. As I’ve said before, I am not an optimist.

      • Somebody, I don’t understand what that meal algorithm illustrates. If you know dinner is the most important meal then you have already expressed your preference for dinner vs the other meals.

        • I’m not choosing between meals, I’m choosing between possible meal configurations, like:

          (dinner yes, no lunch, no breakfast, dessert yes)

          vs

          (dinner yes, no lunch, yes breakfast, no dessert)

          vs

          (dinner no, lunch yes, breakfast yes, dessert yes)

          I never have to collapse things into a single dimension to pick a best element, I can just iterate through each dimension from most important to least. Yes, it’s equivalent to what I’d get if I had collapsed them into a single dimension, but I haven’t done that computation, and I object to the prescriptivist notion that every other way of making decisions is stupid and for losers. I did this while vacation planning—I don’t feel like a loser!

        • There are multiple ways of representing equivalent problems, and some are easier to solve in one representation than the other. I don’t think choosing one representation as canonical really helps me, but that this is a heuristic approximation to utility is a completely valid way of looking at it. I just want to be clear that other ways of actually carrying out the decision making in practice are not necessarily pathological.

          I agree that making sure you can transform a decision making process into a utility maximization, even if the utility approach is too clumsy in the moment, is very often the best way of making sure your process is coherent. I also agree that negotiating something like a utility function and local elasticities of substitution is the best way of making decisions collectively. And people do actually do things this way sometimes!

          https://www.epa.gov/environmental-economics/mortality-risk-valuation

        • Yes to all of this.

          Additionally, you can choose what you want to try to optimize (if anything). Sometimes when I’m working my wife offers to pick up food from a local restaurant. She asks what I want. If I were to take the time to think about it I would have some mild preference for one thing or another, but often I just say “you know what I like, anything would be fine” and let her do the choosing. Here I’m not trying to get my preferred dish, I’m expressing some preference in a higher-dimensional parameter space that includes “how much time and mental effort do I want to devote to making a choice.” I’m still attempting to maximize _something_, but “my enjoyment of the food” is not it.

        • The EPA example is exactly the reason I am prolonging this discussion. In some ways, the described approach to valuing a statistical life is worthwhile – but in other ways it is prejudging an ethical debate that does not take place. There are some issues of rights involved with risks to life – context matters. Rather than have public discussion of whether and how these circumstances affect the policy decisions, we have a $7.4 million figure to apply to a variety of public decisions about environmental quality and safety in order to produce “rational” decisions. I am not contending that rational decision making is objectionable, but I am objecting that a number of ethical issues have already been answered by the derivation of the $7.4 million figure – without highlighting the underlying assumptions of that value. In a way, the EPA use of this figure sweeps aside the very ethical discussion that should be taking place.

        • Many of you probably know this, but some readers probably don’t. My favorite fun fact about VSL calculations is that many different agencies arrive at different figures and we do not force them to agree, such that across the federal government there is something like a 2x differential between the lowest and highest value, with EPA actually falling somewhere in the middle. I used to work on cost-benefit analyses for EPA at a federal contractor, so I have some very cynical opinions about how these things are actually deployed in the real world.

        • If you are interested in delving further, the EPA work is quite informative. The linked document is quite good in describing the state of the art and some of the issues. What I worry about is that many important ethical issues get swept aside in what has become a highly technical literature. As one example, you can look at the excellent review, “Structural benefit transfer: An example using VSL estimates,” Ecological Economics, 2006, by V. Kerry Smith et al (Smith is a Nobel Prize recipient). This paper is but one example of some of the very good work – and highly technical – that has been done by environmental economists. I don’t want to diminish its usefulness, but I am concerned that significant ethical issues get lost in the process of trying to solve technical and somewhat narrow economic issues. For example, the Smith paper contains the following:

          “There are at least four advantages to a more structural approach to benefits transfer. First, it assures consistency of the benefit estimates derived from benefit transfer with a well-defined preference function. A key implication of this consistency is the assurance that willingness to pay can never exceed income.”

          If survey respondents happen to be poor, do we really want to assure that willingness to pay can never exceed income? Perhaps, but I’d suggest there are a number of ethical issues involved here. Do we really want a fairly small group of economists to decide these issues, in the name of rational and consistent decision-making?

  17. Phil, I find that

    > any choice necessarily comes down to “somehow” (unspecified) arriving to This=TRUE and That=FALSE on a binary axis

    is VERY different from

    > you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one
    > If you choose action A over action not-A, then you have some function that somehow evaluated A against not-A and decided A is better.

    You are wandering from “choose the best”, to “prefer”, to “make any choice” and back, but these are not exchangeable. You could make “prefer” exchangeable with either one of the others, but not both. By the way, “choosing the best” clearly has a connotation of rationality.

    Even if we stick to the rational choice some of the things you’ve said were wrong.

    “You have to be able to put the choices in order” is not the same as “you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one.”

    Somebody has given an example of how this may not even be possible in principle.

    And when it could be done you don’t necessarily “need” to do it. If the waiter offers for dessert “apple pie”, “ice cream”, “rice pudding” or “cheesecake” and I go for the ice cream, I didn’t need to put all the choices in a consistent dimension.

    Maybe my baseline was “no dessert” and upon hearing “apple pie” I somehow compared both and decided to stick with “no dessert”. Then I was offered “ice cream”, which I compared with “no dessert”, preferring the former. Later I compared “ice cream” with “rice pudding” and “cheesecake” separately, and “ice cream” it was.

    At no point did I need to put all of them on a single axis to compare them to choose one.

    I could do more head-to-head comparisons and (hopefully, because we assumed before that we’re in a rational-choice setting) produce a ranking. Without ever putting all of them on a single axis to compare them.

    Once I have the ranking I can put them all on a single axis labeled 1st, 2nd, etc. But that’s more the outcome of the ranking than something I needed to do to rank the options. Or maybe that’s your whole point, that to rank the options you need to rank the options. But I wouldn’t say that you can obtain the ranking by comparing their positions on the ranking.
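    This sequential head-to-head process can be sketched in code (a toy illustration; the pairwise judgments below are simply the ones stated in the dessert example, not computed from any numeric score):

```python
def sequential_choice(options, prefer):
    """Return a single choice via pairwise comparisons.

    `prefer(a, b)` is any standalone binary judgment returning the preferred
    of a and b; it need not be derived from a shared numeric scale.
    """
    current = options[0]
    for challenger in options[1:]:
        current = prefer(current, challenger)
    return current

def dessert_prefer(a, b):
    # Arbitrary pairwise judgments, stated directly rather than computed.
    judgments = {
        frozenset({"no dessert", "apple pie"}): "no dessert",
        frozenset({"no dessert", "ice cream"}): "ice cream",
        frozenset({"ice cream", "rice pudding"}): "ice cream",
        frozenset({"ice cream", "cheesecake"}): "ice cream",
    }
    return judgments[frozenset({a, b})]

choice = sequential_choice(
    ["no dessert", "apple pie", "ice cream", "rice pudding", "cheesecake"],
    dessert_prefer,
)
# choice == "ice cream"
```

    No value was ever assigned to any option; only four binary judgments were consulted.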

    By the way, the latest incarnation of your inarguable claim – “you need to be able to say one choice is better than the others if you want to say one choice is better than the others” – drops completely the “ordering of choices” and “consistent dimension” aspects. What you are saying now is just that you have somehow separated the choices into two distinct groups {best} and {not-best-1, not-best-2, not-best-3,…}.

    • To say one thing is best and the others are lesser requires evaluating a “goodness” criterion on which one thing comes out above all the others. That is a single dimension.

      • So it’s settled science, apparently. Worms certainly make choices (based on preferences). And the choices humans make are no different, because like with worms, the choices humans make don’t involve rational thought.

        And if that were to be questioned, Phil will just insist that it is so!

        Glad we’ve cleared that up!

      • No. That is not true. It requires that one thing comes above each of the N-1 other things. This can be done in N-1 different dimensions.

        One can prefer A to B because it’s cheaper (and equally nice) and prefer A to C because it’s nicer (and equally priced).

        To do so one does __NOT__ need to evaluate a goodness criterion in a single dimension that would make it straightforward to determine that the more expensive and nicer B is better than the less expensive but not-so-nice C, or worse, or that they are equally good.

        You can keep repeating that as long as you want but that won’t make it true.

        For a change, you could try addressing the counter-examples if you don’t find them valid.

        At which point of those examples would you say that this required evaluation of a goodness criterion in a single dimension on which the “ice cream” and A come out above the alternatives is happening?

        • Carlos,
          In the example you gave, you have said you prefer A to B, and you prefer A to C. In what sense do you not have a goodness criterion in which A is better than both B and C?

        • Ah, I think I see your point. It’s the same one “somebody” made a few days ago. If you want to choose the “best” from among A, B, C, D, etc., then in some circumstances you can just compare A to not-A — a comparison you need to reduce to a single dimension — but if A dominates all the rest then you don’t need to compare B to C or B to D or whatever. That’s true! Is that your point?

        • You may not need to evaluate the utility function, but the vN-M theorem says that one exists and is unique up to addition of a constant and multiplication by a positive constant, right?

          So you can restate Phil’s point as something like “if you want to pick an optimum outcome you must **be able to** reduce the problem to a single dimension / utility”

        • Getting into rational utility is a whole extra level of complexity. No matter how irrational your preferences, if you can say “I prefer A to not-A” then you’ve somehow taken all of the dimensions that describe A and all of the dimensions that describe the alternative(s) and determined that your preference for A is greater than your preference for the alternative(s). Let’s see if we can get people to agree with this, before moving on to agreeing that ideally one should make rational decisions.

        • The vNM theorem is about preferences when there is uncertainty and you have a consistent ranking that includes not just A, B and C but every uncertain combination of the p(A)=a, p(B)=b, p(C)=1-a-b.

          When there is no uncertainty you don’t need a representation theorem to know that if you have a ranking, say, A>B=C, you can trivially represent it with numbers x,y,z satisfying x>y=z.

          But you don’t need such a ranking – and it doesn’t even need to be achievable – to say “one thing is best and the others are lesser” or “to pick an optimum outcome”.

        • Sure, but beyond uncertainty there is also, in most real-world decisions, a “quantity” involved. So we want to know whether we prefer 118 lbs of apples, 13 lbs of chicken, 19 lbs of broccoli and $2300 in cash to 55 lbs of apples, 21 lbs of chicken, 42 lbs of broccoli and $1800 in cash.

          You can I think derive a similar representation theorem for continuously varying but certain outcomes. The main assumption is just that at any point there should be some linear exchange rates for epsilon changes in the quantities. That will produce level sets… And then by choosing the $ dimension to be arbitrary you can define an orthogonal direction to the level set, and wind up with a continuous function of all the inputs. (Hand waving here)

        • > You can I think derive a similar representation theorem for continuously varying but certain outcomes.

          I think that the lexicographic preferences that have been mentioned several times show that you can not. Unless you put additional constraints on those quantities.

          Of course, you may say that infinite precision and unbounded quantities are not required in practice and that effectively a finite discrete set of values for each good is enough.
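          The lexicographic case can be made concrete with a small sketch (illustrative code; the weights in the scalar proxy are arbitrary). The ordering is complete and transitive, yet it is the standard example of preferences on a continuum that no real-valued utility function can represent:

```python
def lex_prefer(a, b):
    """Lexicographic preference on bundles (x1, x2): strictly more of good 1
    always wins; good 2 only breaks exact ties on good 1."""
    return a[0] > b[0] or (a[0] == b[0] and a[1] > b[1])

# Any weighted-sum utility (arbitrary illustrative weights) misrepresents it:
def linear_utility(bundle, w1=1.0, w2=0.1):
    return w1 * bundle[0] + w2 * bundle[1]

a, b = (1.0, 0.0), (0.99, 100.0)
lex_prefer(a, b)                         # True: a wins on good 1
linear_utility(a) > linear_utility(b)    # False: the scalar proxy disagrees
```

          No choice of positive weights fixes this; the failure is not about the particular numbers but about the ordering itself.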

    • Carlos,
      First, let’s get terminology out of the way. I’ve already said in an earlier comment (but you may have missed it among the zillion comments): I was wrong when I said you need to put things on the same axis if you want to choose. What I should have said is that you need to put them on the same axis if you want to choose the one you prefer, or the one you think is best. You’re right that to me “the one I prefer” is the same as “the one I think is best for me right now” so I have been loose with the language about “prefer” vs “best.” I think the same discussion would work for either situation, if they are different for you. In this comment I’m going to stick with “prefer” but you may replace that with “think is best” in every instance and everything will still work.

      Second, let me try to address what you think is a counterexample: the desserts. You are offered a choice (sequentially) of no dessert, apple pie, ice cream, rice pudding, and cheese cake. You chose ice cream. In your example you have provided enough information to put your choices at least partially in order by preference: ice cream > (rice pudding, cheesecake in some order) > no dessert > apple pie. The various parameters that describe the desserts — size, calories, sweetness, all of the many flavor parameters, texture, etc. — have all been compressed onto a single scale which I have called “preference”, and here they are laid out on the scale, although you did not provide enough information to determine the relative order of cheesecake and rice pudding.

      I’m not sure how the sequential-ness is supposed to affect this argument. Perhaps you’re suggesting that your preferences can change in an instant, and that if these were presented in a different order or at a different pace you might have had a different preference. Well, people’s preferences do change with time, it’s true. But then I think these are best viewed as a bunch of separate decisions, each of which required you to put just two items on your “preferences” scale at a time. At the moment you compared “no dessert” to “apple pie”, you preferred no dessert to apple pie. At the moment you compared “no dessert” to “ice cream”, you preferred “ice cream.” For each decision, you had to put the items on a single scale — your preference scale — and choose which one you preferred. So I don’t really see how the sequentialness changes the picture.

      Actually you might have been better off (for the sake of trying to come up with a counterexample) if you didn’t hear all of the choices. That would connect to “somebody’s” point, and to one you made earlier too: suppose ice cream is your favorite dessert in the whole world and you would never choose anything else if ice cream is available. As soon as the waiter says “ice cream” you hold up your hand and say “Say no more, my good man! Bring me the ice cream.” In this case there is no need for you to put any of the other choices on your preferences scale, indeed no need to be ABLE to put the other choices on your preferences scale. You know A > B and A > C and A > D and A > E, and to do that you needed to know qualitatively where A stood (on your preferences scale) relative to each of the others, but you did not need to come up with an order for _all_ of them. TBQH I dunno if there’s anything non-trivial there or not, as far as insight. It feels like “not” to me but maybe there’s some point of logic or philosophy that I’m missing. My inclination is to think it’s like this: suppose I have a bunch of objects and I’m trying to figure out which one is heaviest. There’s a book, a camera, a box of crackers, a few other items…and a large anvil. I only care about a single dimension — how much do they weigh — and I’m trying to pick the heaviest. I pick the anvil and don’t worry about the relative weights of the rest. I don’t need to, because I know the anvil is the heaviest.

        • It’s hard to see how one would understand the waiter’s words without some thought, too. Some choices do require thought.

          Yeah, if you’d prefer to return to earthworms, I suppose we could posit a situation in which all directions are various types of awful in many ways (too cold, too acidic, too poisonous), except one, which is fantastic.

      • You said that “you need a consistent dimension to compare things because you need to be able to put choices in order if you want to choose the best one.”

        But now you seem to acknowledge that if you want to choose the best one (the dessert that you prefer) you do not need to put choices (desserts) in order to do so and you do not need a consistent dimension to compare things (desserts). Because as you noticed you can choose the dessert that you prefer with a sequence of binary choices and all those independent comparisons need not happen on a “consistent” dimension.

        Another example where you can choose the best option without putting the choices in order and without the need to compare them all on a consistent dimension:

        You want to buy a particular model of smartphone. You find three sellers:

        A: $1000, delivery in two days

        B: $800, delivery in two days

        C: $800, delivery in one week

        Yet another example where you can choose the best option and it’s not even possible to put the choices in order:

        You want to buy a six-pack of Guinness. You find three sellers:

        A: $13, shipping costs unknown

        B: $15, shipping costs unknown

        C: $10, shipping costs $2
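        The Guinness example can be sketched as interval dominance (illustrative code; it only assumes shipping costs are non-negative):

```python
import math

# Each seller's total price is only known as an interval (low, high);
# the intervals below assume only that unknown shipping costs are >= 0.
sellers = {
    "A": (13.0, math.inf),   # $13 + unknown shipping
    "B": (15.0, math.inf),   # $15 + unknown shipping
    "C": (12.0, 12.0),       # $10 + $2 shipping, known exactly
}

def surely_cheaper(x, y):
    """True if x's highest possible total is below y's lowest possible total."""
    return sellers[x][1] < sellers[y][0]

# C is identifiably the best option...
best = [s for s in sellers if all(surely_cheaper(s, t) for t in sellers if t != s)]
# ...even though A and B cannot be put in order at all:
orderable_AB = surely_cheaper("A", "B") or surely_cheaper("B", "A")
```

        So the best option can be found even when a full ordering of the choices is not merely unnecessary but impossible.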

        • The consistent dimension all of the desserts are on is “preference.” All of the desserts are somewhere on your preference scale.

          You continue to disagree with this, but I don’t see how, and you won’t tell me. So I’m giving up.

        • Phil, I’ll try to make the disagreement as clear as humanly possible.

          You want to buy a particular model of smartphone. You find three sellers for the smartphone you want to buy:

          1a) Seller A ($1000/2days) is more expensive than seller B ($800/2days) everything else being equal

          1b) You prefer B to A

          2a) Seller B ($800/2days) is faster than seller C ($800/1week) everything else being equal

          2b) You prefer B to C

          The comparisons in 1a) and 2a) are independent and do not happen on a consistent dimension. The scale used in the former (“preference = cheaper, everything else being equal”) is not the scale used in the latter (“preference = faster, everything else being equal”).

          Most people would say that from 1b) “you prefer B to A” and 2b) “you prefer B to C” one can readily deduce 3) “you prefer B to both alternatives A and C”.

          But _you_ don’t agree that it was possible to choose the best one (B) without needing to put the three choices A,B,C in order by comparing them on a consistent dimension.

          It seems that to go from 1b) “you prefer B to A” and 2b) “you prefer B to C” to 3) “you prefer B to both alternatives A and C” _you_ need to put all three on a single dimension reflecting what you already know: that you prefer B to A, so value(B) will be higher than value(A), and that you prefer B to C, so value(B) will be higher than value(C). You don’t tell us how you would set those numbers apart from that, though. Only that you _need_ to do that to compare them and conclude that value(B) is higher than both value(A) and value(C). Go figure.

        • Carlos,
          You prefer a cheaper, faster phone to a more expensive, slower one. In your example one choice is dominant: it is both faster and cheaper.

          I claim that you have a “preference” function that you are using when evaluating the phones. With these particular phones one choice dominates so the details of the function don’t matter. You’re saying that since one choice dominates, you have no such function.

          Imagine that, once you’ve chosen B, you find that it is not available after all. It turns out you can buy no phone, or the cheap slow phone, or the expensive medium-speed phone. Now do you suddenly have a preference function that you didn’t have a moment ago?

        • > I claim that you have a “preference” function that you are using when evaluating the phones. […] You’re saying that since one choice dominates, you have no such function.

          I say that one does not need to evaluate such a function to prefer one option over the others. You could do that by ranking the options pairwise by some method which does not use the single preference function that you claim I have. And I don’t even need to rank all the pairs, only to determine that one option is preferable to every other option.

          Now, if I can rank all the pairs, this ranking could also be represented by infinitely many preference functions that map all the choices onto a single dimension so they can be compared and ranked. One could say that I “do not have” such a function or that I “have infinitely many”. What I say is that I don’t need to use that function, evaluate it for each choice, and put the choices in order to pick the best one.

          > Imagine that, once you’ve chosen B, you find that it is not available after all. It turns out you can buy no phone, or the cheap slow phone, or the expensive medium-speed phone. Now do you suddenly have a preference function that you didn’t have a moment ago?

          That preference function is a mental construct. It appears magically when I think about things. I don’t think that I “have” a pre-existing preference function for “cost versus delivery delay regarding a Samsung Galaxy S on the 25th of December of 2021, given that my current phone works well but the camera quality is not as good, and I have some ski holidays planned for New Year but I may still cancel them as there is no snow and Omicron cases are shooting up, etc.”

          If you think that such a function is something that I “have”, at what point would you say that I “acquired” it? When I wake up this morning? When I had the first thought about replacing the cellphone? When I was born? Is it eternal?

        • “If you think that such a function is something that I “have”, at what point would you say that I “acquired” it? When I wake up this morning? When I had the first thought about replacing the cellphone? When I was born? Is it eternal?”

          I think the discussion of “which phone do I prefer” only makes sense if there’s some sense of a stable preference function — or, since you object to that whole concept, at least a stable-enough set of selection criteria and importances — that is close to constant for the duration of the selection process. If you aren’t trying to optimize _something_ then I’m not sure what we’re discussing.

          But as for when and how you acquire your preferences, I think there’s more to this question than I recognized at first. If I’m trying to choose a car and someone points out that one car has a feature that I never even knew existed, does this give me a new preference function or merely provide another dimension on which to evaluate pre-existing preferences for the functions of a car? The day Ford first offered cars in a color other than black, did this simply allow people to take advantage of pre-existing color preferences, or did color preferences for cars suddenly spring into being? I’m not sure what is the right way to think of this, or whether it matters.

          Moving on: “You could do that by ranking the options pairwise by some method which is not using the same and only preference function that you claim I have.”

          This is something I should have asked earlier: is it your claim that there is exactly one situation in which you can arrive at your preference without evaluating the choices on a single axis — the case where one choice dominates — but that except for that situation you do need a single function? Or are you saying you never need a single function? Or that there are certain cases in which you need a single function and others in which you don’t? (E.g. maybe when there is a continuum of choices you need a function, but when there are discrete choices you don’t?)

          I will say this has been a much richer vein than I recognized when I noted that, if you are trying to choose your preference between A, B, and C, you need a way of evaluating how much you prefer A, B, and C! I’m going to do a separate blog entry about this issue.

  18. Andrew, I think there are two explanations:

    1) Economics is a lot more agglomerated. “Top” and “influential” economists tend to be much more tightly located than (to my knowledge!!) other disciplines. This probably reflects the abundance of funding in econ + allied departments that allow superstar departments to emerge, as well as the fact that econ is not capital-intensive so top economists don’t crowd each other out physically. That makes it a lot easier to coordinate. We know that such economists are also overwhelmingly produced by a small number of feeder departments.

    2) As mentioned upstream, econ training is harmonized enough that it’s easy for economists to talk to each other even at great distances across the field. I don’t work on macro, but thanks to the fact that I was forced to take a year of macro theory in the first year of my PhD, I can cobble together at least some intelligence about macro issues, and it wouldn’t be hard for me to see that Judy Shelton was nuts even if I didn’t have other economists to tell me that.

    I think there are reasons to see the above two features as bad, but they do allow for greater communication and thus greater coordination, IMO.

  19. Phil –

    Which do you prefer: ice cream or pie or brownies or flan or fruit compote or jello or chocolate cake or gulab jamun or spanakopita or halvah or rice pudding or a cookie?

    I couldn’t tell you which I prefer, except that I don’t prefer jello.

    None of them are more important, or return more utility. It is precisely because I have no (categorical) preference among them, that none are more important or more useful, that considering which to eat would require no rational thought.

    But if you put a plate of all those desserts in front of me, I’d surely pick one.

    Of course, you could add a condition – say I had to pick the dessert to eat in the car as I’m driving.

    Then I’d use rational thought to narrow the choices down. It would be cookies or halvah. Then I’d probably favor cookies, as halvah would likely leave my hands greasy. Then I would step the rational thinking up a notch – to a conditional analysis.

    Do I have a napkin available? Then I’d probably pick the halvah, except if it looks like it’s poorly made, or if it’s a tiny piece of halvah such that eating it would clearly not compare in net pleasure return (after discounting for annoyance) to the cookie.

    So either I make a choice WITHOUT reducing my decision-making to an analysis along a particular (but not single) axis (I pick a….pie?….just because)….

    or I prioritize my decision-making by considering a few axes of priority (I have to eat while driving, I have a napkin – not a single axis), requiring rational thought.

    And none of the priorities you have specified that I’d necessarily have to use, would apply.

    • I think we should make a distinction between “you behave as if there were a function U(x) that is a scalar utility” and “there is such a function that you can call out the values of”.

      Representation theorems are interesting rather than being obvious precisely because they say that X “is equivalent to” Y when those things are not obviously equivalent.

      I’m hand waving here but suppose I could come up with a representation theorem which says something like “for all randomly selected subsets if I can restrict your dessert choices to a random subset, you will still be able to pick your favorite from that random subset” is equivalent to “there exists a U(x) such that you will always pick the one from the random subset whose U(x) is largest”

      Now Phil can say “you need *to be able to* calculate U(x) in order to pick your favorite” and you can say “I can pick my favorite without calculating U(x)” and **both of you are correct**

      I think that’s approximately the situation we’re in. Sure, the fact that logically such a U(x) must exist, and hence “you are able to” (not computationally, but logically) doesn’t mean that you are actually able to in practice call out values of U(x) for each possible dessert, nor that you do something like that when making your dessert choice. Nevertheless, some such function exists mathematically and this is all that Phil really means I think.

      • > I’m hand waving here but suppose I could come up with a representation theorem which says something like “for all randomly selected subsets if I can restrict your dessert choices to a random subset, you will still be able to pick your favorite from that random subset” is equivalent to “there exists a U(x) such that you will always pick the one from the random subset whose U(x) is largest”

        If there is a finite set of choices and you can pick the favourite for any subset in a consistent (rational!) way then a ranking could be constructed. But it “exists” only as a mental construct. And such a ranking can be represented by infinitely many utility functions.

        One definitely does not need to evaluate any of those imaginary utility functions to rank the choices. You could rank them by performing pairwise comparisons, which will in general be much simpler.

        I don’t think that it’s correct to say that you need *to be able to* calculate U(x) in order to pick your favorite. You don’t even need to be able to rank all the choices.

        If with option A your daughter dies, with option B your son dies and with option C both survive, you don’t need to be able to choose between your son and your daughter in order to prefer – hopefully! – option C.
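        A minimal sketch of this point (illustrative code; the two pairwise judgments are the only inputs, and no comparison of A with B is ever made):

```python
# The only pairwise judgments assumed are the two stated above; no judgment
# between A and B exists, and no utility function is evaluated anywhere.
judgments = {("C", "A"): True, ("C", "B"): True}   # C preferred to A, C to B

def prefer(x, y):
    if (x, y) in judgments:
        return judgments[(x, y)]
    raise ValueError(f"no judgment available for {x} vs {y}")

options = ["A", "B", "C"]
best = "C" if all(prefer("C", o) for o in options if o != "C") else None
# best is "C", yet prefer("A", "B") was never consulted and need not exist.
```

        Identifying the best option required only comparisons against it, not a complete ranking of the whole set.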

        • Sophie’s choice! Now, what did Sophie’s utility function look like? I think that makes a better discussion than trying to debate the utility functions of worms.

    • Joshua, it may be that you are completely indifferent to something; you have no preference. What color is the tubing of the windshield sprayer under the hood of my car? I don’t care. That’s fine, it’s possible to have no preference. Your preference function — I’m trying to avoid the use of the term ‘utility function’ because of its connection to economics, but you can substitute that if you want — can be flat in some region of parameter space or can be equal for some discrete options.

      My claim is that IF you are trying to decide what choice you prefer then you have some “preference function”, for which the inputs are parameters that describe the options, and the output is a degree of preference for each option. It would be unusual for you to think of this function as being quantitative, although that can sometimes happen, especially when one of the parameters is dollar cost; for instance, if you are bidding on a house you would rather have (the house, but be poorer by $X) than (no house, but keep the $X) but only up to some value of X.

      So, when choosing desserts, you (if you are typical) consider how much you like each dessert, and how much it costs, and maybe how many calories are in it or whether you think it will give you indigestion or whether it’s bad for your cholesterol, and you choose the one for which Preference( tastiness, cost, calorie acceptability, indigestion potential, cholesterol effect) is maximized. Most people don’t think of it this way, but if they take these parameters into account then they somehow need to decide whether their preference for ice cream is greater or less than their preference for pie, which is greater or less than their preference for gulab jamun. (I do love me a good gulab jamun). As I’ve said before, this is pretty much tautological: if you want to decide whether you prefer A, B, C or something else then you need to decide which of A, B, or C you prefer, based on your information about A, B, or C.

      The only objection I’ve seen that makes any sense to me is the one raised by Carlos, above. Carlos might agree that there are cases in which you need a preference function, but he thinks that there is one situation (I think exactly one situation, Carlos can correct me if I’m wrong) in which that’s not true: he says that if one situation dominates the others — it is better in every single parameter — then you do not need a single “preference function”, you can have multiple independent functions. You can see his cell phone example, above. To translate it to the dessert example, he says you don’t need a single preference function if one dessert is better than ALL of the rest in terms of tastiness AND price AND calorie count AND indigestion potential AND its effect on cholesterol levels. This dessert would dominate the selection such that you can apply each individual optimizing function rather than having a unified optimizing function.

      To switch to his simpler (two-parameter) cell phone example, he has a preference function that is sloped downwards on the cost axis (so more expensive is worse), and sloped upwards on the speed axis (so faster is better). The phones are at three points in the parameter space: one at (fast, cheap), one at (slow, cheap), and one at (fast, expensive). The (fast, cheap) one is the best. That’s the way I think of it. Carlos says No no, he doesn’t have a two-parameter preference function, he has two single-parameter preference functions: one for speed and one for cheapness. He does not need a single function.

      In my thought experiment, he suddenly learns that the (fast, cheap) one is not available; maybe the price went up, and this is now the (fast, expensive one). Now his two individual functions don’t produce an answer. Does he suddenly have a two-parameter preference function that didn’t exist a moment ago? Or did he have that preference function all along? Is this a distinction without a difference, or is there really something important here?

      Daniel, is this what you mean by make a distinction between “you behave as if there were a function U(x) that is a scalar utility” and “there is such a function that you can call out the values of”?
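      The thought experiment can be sketched in code (illustrative only; the attribute values match the example in the thread, while the weighted score mentioned at the end is made up):

```python
# Illustrative attribute values for the three phones in the example:
phones = {
    "fast_cheap": {"price": 800, "days": 2},    # $800, delivery in 2 days
    "slow_cheap": {"price": 800, "days": 7},    # $800, delivery in 1 week
    "fast_dear":  {"price": 1000, "days": 2},   # $1000, delivery in 2 days
}

def dominates(a, b):
    """a is at least as good on every attribute and strictly better on one."""
    return (a["price"] <= b["price"] and a["days"] <= b["days"]
            and (a["price"] < b["price"] or a["days"] < b["days"]))

def undominated(options):
    return [n for n in options
            if not any(dominates(options[m], options[n])
                       for m in options if m != n)]

# With the dominant phone available, attribute-by-attribute checks suffice:
undominated(phones)        # only "fast_cheap" survives

# Remove it, and dominance no longer decides: some trade-off rule is needed,
# e.g. a (made-up) scalar score such as -price - 20 * days.
rest = {k: v for k, v in phones.items() if k != "fast_cheap"}
undominated(rest)          # both remaining phones survive
```

      The sketch separates the two situations cleanly: while one option dominates, pairwise attribute checks pick it; the moment it disappears, the remaining pair is incomparable attribute-by-attribute and only then does some combined criterion have to enter.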
