Palko writes, “When the arrogance of physicists and economists collide, it’s kind of like watching Godzilla battle Rodan . . . you aren’t really rooting for either side but you can still enjoy the show.”

Hey! Some of my best friends are physicists and economists!

But I know what he’s talking about. Here’s the story he’s linking to:

Everything We’ve Learned About Modern Economic Theory Is Wrong

Ole Peters, a theoretical physicist in the U.K., claims to have the solution. All it would do is upend three centuries of economic thought. . . .

Ole Peters is no ordinary crank. A physicist by training, his theory draws on research done in close collaboration with the late Nobel laureate Murray Gell-Mann, father of the quark. . . .

Peters takes aim at expected utility theory, the bedrock that modern economics is built on. It explains that when we make decisions, we conduct a cost-benefit analysis and try to choose the option that maximizes our wealth.

The problem, Peters says, is the model fails to predict how humans actually behave because the math is flawed. Expected utility is calculated as an average of all possible outcomes for a given event. What this misses is how a single outlier can, in effect, skew perceptions. Or put another way, what you might expect on average has little resemblance to what most people experience.

Consider a simple coin-flip game, which Peters uses to illustrate his point.

Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total. Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss. In economics jargon, the expected utility is positive, so one might assume that taking the bet is a no-brainer.

Yet in real life, people routinely decline the bet. Paradoxes like these are often used to highlight irrationality or human bias in decision making. But to Peters, it’s simply because people understand it’s a bad deal. . . .

This is not quite a “no-brainer”; it’s kinda subtle, but it’s not so subtle as all that.

Here’s the story. First, yeah, people don’t like uncertainty. This has nothing to do with the economic or decision-theoretic concept of utility, except to remind us that utility theory is a mathematical model that doesn’t always apply to real decisions (see section 5 of this article or further discussion here). Second, from an economics point of view you should take the bet. The expected return is positive and the risk is low. I’m assuming this is 100 marginal dollars for you, not that this is the last $100 in your life. One of the problems with this sort of toy problem is that it’s often not made clear wat the money will be used for. There’s a big difference between a middle-class American choosing to wager the $100 in his wallet and a farmer in some third-world country who has only $100 cash, period, which he’s planning to use to buy seed or whatever. Money has no inherent utility; the utility comes from what you’ll buy with it. Third, from an economics point of view maybe you should *not* take the bet if it requires that you play 50 times in succession, as this can get you into the range where the extra money has strongly declining marginal value. It depends on what the $100 means to you, and also on what $1,000,000 can do for you.
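To make the gap concrete, here is a quick sketch (my own, not anything from the original article): with the game’s multipliers, the expected value per flip is positive, yet the typical outcome of a long run is a large loss.

```python
# The coin-flip game from the quoted story: heads multiplies the bankroll
# by 1.5, tails by 0.6, starting from $100.
ev_per_flip = 0.5 * 1.5 + 0.5 * 0.6           # 1.05, i.e. +5% on average

# Average over all 2^50 equally likely paths of 50 flips:
expected_after_50 = 100 * ev_per_flip ** 50

# The median path has 25 heads and 25 tails (the order doesn't matter):
median_after_50 = 100 * 1.5 ** 25 * 0.6 ** 25

print(f"expected after 50 flips: ${expected_after_50:,.2f}")
print(f"median after 50 flips:   ${median_after_50:,.2f}")
```

The expected value comes out above $1,100, but the median outcome is around $7: the average is driven by a handful of very lucky paths, which is exactly the wedge between “on average” and “what most people experience.”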

The above-linked argument refers to “plenty of high-level math,” which is fine—mathematicians need to be kept busy too—but the basic principles are clear enough.

And then there’s this:

Peters asserts his methods will free economics from thinking in terms of expected values over non-existent parallel universes and focus on how people make decisions in this one.

That’s just silly. Economists “focus on how people make decisions” all the time.

That said, economists deserve much of the blame for utility theory being misunderstood by outsiders, just as statisticians deserve much of the blame for misunderstandings about p-values. Both statisticians and economists trained generations of students in oversimplified theories. In the case of statistics, it’s all that crap about null hypothesis significance testing, the idea that scientific theories are statistical models that are true or false and that statistical tests can give effective certainty. In the case of economics, it’s all that crap about risk aversion corresponding to “a declining utility function for money,” which is just wrong (see section 5 of my article linked above). Sensible statisticians know better about the limitations of hypothesis testing, and sensible economists know better about the limitations of utility theory, but that doesn’t always make it into the textbooks.

Also, economists don’t do themselves any favors by hyping themselves, for example by making claims about how they are different “from almost anyone else in society” in their open-mindedness, or by taking commonplace observations about economics as evidence of brilliance.

So, sure, economists deserve some blame, both in their presentations of utility theory and in their smug attitude toward the rest of social science. But they’re not as clueless as implied by the above story. The decision to bet once is not the same as the decision to make a series of 50 or 100 bets, and economists know this. And the above analysis relies entirely on the value of $100, without ever specifying the scenario in which the bet is applied. Economists know, at least when they’re doing serious work, that context matters and the value of money is not defined in a vacuum.

So I guess I’ll have to go with the economists here. It’s the physicists who are being more annoying this time. It’s happened before.

The whole thing is sooooo annoying. Economists go around pushing this silly idea of defining risk aversion in terms of a utility curve for money. Then this physicist comes along and notes the problem, but instead of getting serious about it, he just oversimplifies in another direction, then we get name-dropping of Nobel prize winners . . . ugh! It’s the worst of both worlds. I’m with Peters in his disagreement with the textbook model, but, yeah, we know that already. It’s not a stunning new idea, any more than it would be stunning and new if a physicist came in and pointed out all the problems that thoughtful statisticians already know about p-values.

OK, I guess it would help if, when economists explain how these ideas are not new, they could also apologize for pushing the oversimplified utility model for several decades, which left them open to this sort of criticism from clueless outsiders.

If someone actually offered me a bet in which there’s a positive expected value, I would suspect a rat and decline it.

If something seems too good to be true, it probably is. [Sounds vaguely like something Yogi Berra would have said.]

Of course, that’s not quite true. If I buy, say, 3M stock, it’s because I expect that it is more likely to go up than down. But that doesn’t really fit the scenario since the true odds are unknown for investment decisions that depend on the future.

But that’s my criticism of this whole line of economic experiments. They don’t seem to me to reflect the fact that we seldom can do an actual calculation of the probabilities, the way you can with a coin flip or dice.

Yes, the vast majority of human decisions are made using heuristics, also known as logical fallacies.

Any model of human behavior that does not have that playing a dominant role isn’t going to work on an individual level or on average.

“Yes, the vast majority of human decisions are made using heuristics, also known as logical fallacies.”

This is true and well known among economists. But this:

“Any model of human behavior that does not have that playing a dominant role isn’t going to work on an individual level or on average”

does not follow. In general, economics models are very useful both for policy and prediction, within the scope of what the models are designed to describe. Economists are acutely aware of “irrational” behavior, but “rational” models tend to outperform the “behavioral” models in the real world, both in terms of accuracy and tractability. As one of my macro profs once told me, “rational expectations models are the worst models, except for all the other ones.”

Using heuristics (formal logical fallacies) is rational though. There’s a rhetorical problem that usually pops up in these conversations. The economic definition of “rationality” is very specific, and not all that closely related to the colloquial meaning (hence the scare quotes). In most settings, heuristic decision making is irrational in the economic sense of the word (there are exceptions).

But in the real world, of course, people can and should use heuristics all the time. Economists are (mostly) well aware of this. It’s not included in the models though because 1) it makes things a lot harder, and 2) you don’t gain any performance boost from it (depending on the setting, of course).

This latter point was the thing that surprised me the most about macroeconomics when I was learning it. Specifying preferences or introducing irrational behavior into the models in a tractable way matters much less than you’d think. Though there are still some hard-core formalists out there, the reason rationality persists as a modeling paradigm is not due to ignorance!

Can you provide a source for this? I would believe the models just have too many parameters in that case, but not that one properly modelling use of heuristics would have poorer performance in principle.

We discussed something similar in Phil’s post about the probability of a test result being positive.

The concept of probability there is not useful for that individual. It is meaningless to invoke exchangeability and frequentist limit notion.

Much of economic theory deals with expectations. And we have the same problem here. The mathematical notion of expectation is useless in many real economic situations faced by individuals.

I highly recommend Ariel Rubinstein’s Economic Fables. He lays out a convincing case that much of economics is like a game of chess – full of toy models that ought to be treated as such. In this sense it is not very different to Mathematics. It is entertaining to play with objects you cook up.

P.S.

I wonder if there is language issue in all these economic experiments. Much of the mathematical terminology also has simple colloquial usage and that could be misleading. A case in point is mathematical expectation and expectation or expected value as it is used colloquially.

“He lays out a convincing case that much of economics is like a game of chess – full of toy models that ought to be treated as such.”

How is being full of toy models like a game of chess? Chess is not full of toy models.

What’s the problem with defining risk aversion with a declining utility function for money? I can’t find the “article linked above” mentioned in the post.

Carol:

See section 5 of this article.

To be clear, nothing in those examples contradicts the VNM axioms, so it’s still perfectly reasonable to use an expected utility framework to describe those preferences. What you are showing, however, is that risk preferences are not constant throughout the income domain. While constant risk aversion is usually assumed, it’s not a necessary condition for the expected utility theorem.

I think you are being uncharitable to the physicists here. A lot of dynamic macroeconomics is built on naïve utility functions, the decision context of agents in these models is similar to the setup assumed by Peters, and his approach might be fruitful. It is also true that behavioral economists have documented departures from standard utility functions for 50 years. But the benchmark dynamic general equilibrium models are not using this stuff.

James:

But what about this:

That’s just wrong. Expected utility theory can completely handle the idea that losing your last $100 is more than 1/10,000ths as bad as winning $1,000,000 is good. Econ has all sorts of problems, but this isn’t one of them.

You might be right about benchmark dynamic general equilibrium models, though. I know nothing about that.

I read Peters as making the following argument:

1. In many important economic contexts the time average does not equal the expected value.
2. In such contexts, it is rational not to accept the utility-maximizing bet.
3. Most relevant economic models assume the contrary.
4. Hence this is a big problem for economics.

I don’t think Peters, at least in his article that is making a splash, would disagree with “Expected utility theory can completely handle the idea that losing your last $100 is more than 1/10,000ths as bad as winning $1,000,000 is good.”

From Peters’ lecture notes, “Ergodicity Economics”, page 46, “The conventionally offered reason for this predictive failure is that the value to an individual of a possible change in wealth depends on how much wealth he already has and his psychological attitude to taking risks. In other words, people do not treat equal amounts of extra money equally. This makes intuitive sense: an extra $10 is much less significant to a rich man than to a pauper for whom it represents a full belly; an inveterate gambler has a different attitude to risking $100 on the spin of a roulette wheel than a prudent saver, their wealths being equal.” So Peters is certainly aware of your point. The article you quote from is a strawman.

If anyone is interested, here is an explication of utility theory in benchmark macroeconomic modeling: http://www.centrosraffa.org/public/624d62e6-d1c6-4ba9-8827-fb1ddaedf430.pdf and you can see that these standard functions all tacitly assume ergodicity.

> you can see these standard functions all tacitly assume ergodicity

Where would I see that?

Not 100% sure of this, but when you derive the optimal consumption from u′(c_t) = β E_t[v′(a_{t+1} R_{t+1})] you are averaging across possible states of the world, so you are assuming an ergodic process. Excuse the horrible notation.

> you are averaging across possible states of the world, so you are assuming an ergodic process

Don’t believe everything you read on the internet. This is nonsense. Only an explicitly anti-Bayesian philosophy requires repeated trials across time to interpret an average, and given that this is an explicitly Bayesian blog you won’t get much traction here. But even if you accept that probabilistic evaluations of one-time events are meaningless, these are philosophical positions, not “assumptions” in the mathematical sense of the word.

That equation is one property of the solution. The thing being optimized is the total discounted utility of a sequence of consumptions c(1), c(2), …, c(T). Something like

Expected_Value_as_of_t=1 [ Sum_from_t=1_to_t=T [ discount_factor^t * utility( c(t) ) ] ]

So yes, it’s an average of something across what we consider (at t=1) that are possible “states of the world”.

That something that we are averaging is the sum across time of something else: the discounted utility at each time in that “state of the world”.

Why do you say we’re assuming an ergodic process when we calculate an expected value over a “distribution of states of the world”?

What ergodic process would that be?

An infinite, stationary process that has an equilibrium distribution such that the values are in the set that contains the total discounted utility (in our original “time” dimension) of every possible “state of the world” { Sum_from_t=1_to_t=T [ discount_factor^t * utility( c(t) ) ] } with a probability equal to the probability of that “state of the world” in our original formulation?

Carlos, yes, you are right. To bring it in line with the example from Peters: there is an asset that generates a risky return and a safe asset; in each time period you choose what fraction of your assets to consume and how to allocate the remainder between the two assets (the residual you don’t consume is reinvested). Asset growth between periods depends on how much you choose to consume, the realization of the returns, and the portfolio allocation. The fraction you choose to consume is a function of the riskiness of the asset and your risk aversion (which, as is pointed out elsewhere in this thread, is a problematic concept). Peters is pointing out that the fraction one consumes is quite different if the asset returns are additive (ergodic) or multiplicative (non-ergodic). Actually I am not sure of the math here. Perhaps the formula still works if you take the process to infinity in time rather than averaging over infinite possible assets in a single period? But at any rate you end up with quite different optimal consumption paths. Peters’s claim is that you can and should ditch the utility function/risk aversion, and that people do choose the consumption share and allocation between the two assets that maximize the growth rate in consumption.

I agree with Andrew that it is wrong to discard risk aversion/utility concepts entirely. But I think Peters is onto something by questioning the notion of expectation that is used in these models. In particular, it’s appropriate to think about an individual’s wealth over their life as a time average, not an “ensemble average”.

Somebody, you can also cash this out in Bayesian terms if you don’t like possible-world talk. For instance, you have a prior on the data generating process (the risky asset returns) but don’t know the odds with certainty. I don’t think Bayes vs. classical stats is essential to this problem.

“One of the problems with this sort of toy problem is that it’s often not made clear wat the money will be used for.”

Change that to

“One of the problems with this sort of toy problem is that it’s often not made clear for wat the money will be used.”

Wat? Wat?

This is something up with wich I will not put.

Funny I had started to write a post about this paper (was going to call it “Are economists the original multiversers?”). Though we’re not economists, we spent some time talking about it in my lab when it first came out.

Here’s a few recent responses from economists to Peters’ paper:

Economists’ views on the ergodicity problem, by Doctor, Wakker, and Wang: https://www.nature.com/articles/s41567-020-01106-x

Ben Golub’s rendition: https://twitter.com/ben_golub/status/1338175642932715520

Don’t miss Peters’ reply to Doctor, Wakker & Wang: https://www.nature.com/articles/s41567-020-01108-9

The Economists’ Views: https://www.nature.com/articles/s41567-020-01106-x.epdf?sharing_token=9F3DN_34snuY8BPKXYG4fdRgN0jAjWel9jnR3ZoTv0OcQftUgrOs8slt-g2b5T7R5rhCSc2_wteK7xPQ-mcU-pMHnxqPUh6oA2qVPtVfB9LSoyabzTMDXIcrGzemquyR_zDznCkZXv1-LULIEwHqEWpwPgthDOZhJ1wsJdQC_ag%3D

Supplementary information: https://static-content.springer.com/esm/art%3A10.1038%2Fs41567-020-01106-x/MediaObjects/41567_2020_1106_MOESM1_ESM.pdf

Reply: https://www.nature.com/articles/s41567-020-01108-9.epdf?sharing_token=efF08tlDqTYBC1swrULV5tRgN0jAjWel9jnR3ZoTv0MxETx2O57SObuxv4eCh4AQ7dNX2rUATlwNq7QVqX49mNB_X6dIXoy_IE5YR26RgYc_1Cn_6OfmGwMl-fqq3WclPkZBgre-L1YMtuIXPTrJawQEk5VAbfBjujf-DKYAmg4%3D

That’s quite a non-response: “I’m not sure where the disagreement lies”.

Sadly, the more I read about this the more relevant I find these remarks from Ben Golub linked above:

“Doctor et al. have done a generous thing, though unfortunately the learning will likely be lost on the EE crew itself. They are very committed to the bit, and the idea that their magic bullet will not restart all of economics is too bitter a fact to swallow.

“In their commitment to the hope that they will redirect a mature field with a simple, known idea (and without engaging with current work on the same issues), they embody the main feature of scientific cranks.”

Jessica:

The Doctor et al. article is reasonable, but I think they miss the point that the experimental condition is not defined until it is explained what the $100 is relative to. I’m ok with losing my last $100 because I’m getting my next paycheck in a week. For other people, that last $100 has a different meaning. Both the physicists and the economists dodge this key issue, in their rush to create universal theories. Maybe that’s one place where it’s a benefit to be in a field such as biology or political science, where there are no completely universal theories because these are, in some sense, historical sciences.

My sense was that Doctor et al (and Golub’s gloss on it) are taking on the core error in Peters’ accusation that utility theory assumes ergodicity. But agree that base wealth/diminishing marginal utility of wealth are things that economists deal with all the time that the experiment Peters reports doesn’t seem to deal with.

The issue is: what are these economic theories then good for? They are supposed to aid in understanding the man-made ‘Economy’, which is essentially the result of human decision making in the context of money matters (or utility, as they would like to call it).

The fact that conventional theories are not good at describing real-world economic phenomena is enough reason to abandon them. Economics is basically ‘saying bullshit in the language of mathematics’. It is astonishing that a theoretical approach that is neither true nor useful survived this long.

Devan:

When it comes to macro, economic theories can be useful to predict what will happen to the money supply when the exchange rate is allowed to float, and things like that. I’m not saying existing theories are perfect, but I have a feeling they’re better than no theory at all.

When it comes to micro, economic theories can be useful to predict what will happen to sales when you lower your price, and things like that. I’m not saying existing theories are perfect, but I have a feeling they’re better than no theory at all.

The theories have lots of room for improvement, and I’m cool with physicists or anyone else helping out on that. But I don’t know that it helps when they claim that existing theories have problems that they don’t actually have.

Yes… I agree that physicists are no better than economists when it comes to these studies. However, I think what physicists are advocating is that even the approach adopted to study economic phenomena by conventional economists is flawed. If somebody wants to study a phenomenon quantitatively, then in my opinion one has to at least proceed along the physics way (whether a quantitative approach is feasible or useful for social sciences is very much debatable). That means making reasonable assumptions about the phenomenon at hand. The conventional theory even fails to acknowledge that the ‘maximization’ in utility maximization can be

Anon:

Sure, but then let’s model things for real and not talk sloppily about $100 without clearly specifying the context where this $100 will be used.

…done in different ways. See for e.g. https://www.nature.com/articles/srep13071

There is a truism, “all theories are wrong but some are useful,” which applies to virtually all human theories (physics, biology, …) if you are able to think deeply enough and have sufficient information. Theories are approximations which are very useful in a given context but fail in predictive power in more general situations. Think about the different “laws” of gravity in the contexts of: earth, our solar system, galaxies, double-slit quantum experiments. Newton to dark matter. Context and utility are the important factors, which can be restated as boundary conditions and how much and who is willing to pay.

Maybe we need to place greater emphasis on human tendencies, realistic expectations and motivations which can diminish the true state of the first two.

Sorry for the rant but after 8 decades it becomes easier to see the promotion aspect of our competing claims to knowledge.

Andrew says:

“When it comes to macro, economic theories can be useful to predict what will happen to the money supply when the exchange rate is allowed to float, and things like that.”

and:

“When it comes to micro, economic theories can be useful to predict what will happen to sales when you lower your price, and things like that.”

These are terrible examples. Money supply is almost always exogenous in modern macro models, and the micro example is…well…strange. I’ve been a microeconomist my entire life and I’ve never once encountered a microeconomist who has studied anything remotely like this, at least in academia.

But this is not meant to be a criticism of Andrew, because I think it speaks to his broader point quite well: As a group, economists are terrible at explaining what they do to the general public. I’m continuously shocked at how little people know about economics as a field, even people who work and coauthor with economists! It’s even something of a meme within the economics community (for instance, https://twitter.com/kevinrinz/status/1190628172531978240?lang=en).

The biggest culprit here is Econ 101 pedagogy. Econ 101 is basically irrelevant to academic economics, yet this is all that the vast majority of people are exposed to from the field. The gap between what economics actually is and what people learn about economics is bigger than in any other discipline, I imagine. Almost all of the economic criticisms that I see in the comments here miss the mark and betray a misunderstanding of economics. But given the information asymmetries, I think they are reasonable beliefs nonetheless!

+1

This reminds me of Shalizi’s old joke paper “A Simple Model of the Evolution of Simple Models of Evolution” (https://arxiv.org/abs/adap-org/9910002). It’s a funny takedown of how once physicists get a pop-science view of some field, they immediately assume they can “solve” it, and because they only get feedback from their peer physicists rather than folks in the field they are “solving”, they never even realize how wrong and/or useless they were.

I also find it funny that this physics guy didn’t like how economics ignores many aspects of actual behavior, but then he didn’t bother to look at the entire subfield of cognitive psychology that studies human decision behavior (and which has eschewed simplistic perspective on expected utility since its inception). In particular, framing effects like those Andrew and the other commenters have noted are huge.

I’d recommend checking out the sample articles from the journal “Decision” to get a sense of the state-of-the-art with respect to the study of human decision making (https://www.apa.org/pubs/journals/dec/sample).

P.S.: Could some enterprising upper-midwesterner turn this into a good Sven and Ole joke?

Thanks for the link to Shalizi’s paper; it brightened an otherwise gloomy morning.

There’s another paper (editorial actually) that is a takedown of electrical engineers and physicists. It dates back to 1956 but it reads like some of this current discussion. It’s “Two Famous Papers” by Peter Elias. A search for “two famous papers elias” will find it and many references to it.

It begins:

It is common in editorials to discuss matters of general policy and not specific research. But the two papers I would like to describe have been written so often, by so many different authors under so many different titles, that they have earned editorial consideration.

The first paper has the generic title “Information Theory, Photosynthesis and Religion” (title courtesy of D. A. Huffman), and is written by an engineer or physicist. It discusses the surprisingly close relationship between the vocabulary and conceptual framework of information theory and that of psychology (or genetics, or linguistics, or psychiatry, or business organization). It is pointed out that the concepts of structure, pattern, entropy, noise, transmitter, receiver, and code are (when properly interpreted) central to both. Having placed the discipline of psychology for the first time on a sound scientific base, the author modestly leaves the filling in of the outline to the psychologists. He has, of course, read up on the field in preparation for writing the paper, and has a firm grasp of the essentials, but he has been anxious not to clutter his mind with such details as the state of knowledge in the field, what the central problems are, how they are being attacked, et cetera, et cetera, et cetera.

There is a constructive alternative for the author of this paper. If he is willing to give up larceny for a life of honest toil, he can find a competent psychologist and spend several years at intensive mutual education, leading to productive joint research.

Thanks for the Elias reference—I’d forgotten about that completely!

People don’t like uncertainty and economists have a name and model for that https://en.m.wikipedia.org/wiki/Ambiguity_aversion

“It’s not a stunning new idea, any more than it would be stunning and new if a physicist came in and pointed out all the problems that thoughtful statisticians already know about p-values.”

Ha!, that sounds like a criticism of E.T. Jaynes.

Now, while Peters is maybe overplaying the significance of what he found (the old ways may have been approximately true, and the Kelly criterion already existed to address the cases where they weren’t), I think this blog post underplays what he is saying.

Especially when you mention the “declining marginal value” of money, this is not what Peters’ theory is about. Even with a 1 dollar always equals 1 util situation, in the 50%/40% toy example, if you play the game betting $100 every time, you have positive expected returns in the long run. If you play the game but betting what you have left after the previous game every time, you have an expected loss in the long run.

That result is really counterintuitive to me. I had to implement a little Monte Carlo simulation of the toy example to prove to myself that it was true. According to Peters, there were papers written by Nobel-winning economists that “proved” it to be impossible and that were used as a foundation for lots of subsequent economics (there were also papers saying the opposite, so Peters’s results aren’t new, but according to him, these were largely ignored).
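A minimal Monte Carlo sketch of the full-reinvestment version of the game (my own code, not Peters’s; the structure is just what the toy example describes):

```python
import random

def play(n_flips, start=100.0, rng=None):
    """One run of the game, reinvesting the whole bankroll on each flip:
    heads multiplies wealth by 1.5, tails by 0.6."""
    rng = rng or random.Random()
    wealth = start
    for _ in range(n_flips):
        wealth *= 1.5 if rng.random() < 0.5 else 0.6
    return wealth

rng = random.Random(1)  # fixed seed so the sketch is reproducible
runs = [play(50, rng=rng) for _ in range(100_000)]

mean_wealth = sum(runs) / len(runs)
frac_lost = sum(w < 100.0 for w in runs) / len(runs)

print(f"mean final wealth: ${mean_wealth:,.0f}")  # stays above the start...
print(f"runs that lost money: {frac_lost:.0%}")   # ...yet most runs lose
```

Roughly three quarters of the runs end below the starting $100, even though the mean across runs stays well above it: the average is propped up by a small number of very lucky paths.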

For those that haven’t seen it, the Gresham talk is worth watching:

https://www.youtube.com/watch?v=f1vXAHGIpfc&t=305s

This is my take too.

I guess I missed something on my first quick read. I had to build my own simulation model to see what you say and, indeed, the probability you lose money (betting whatever is left each time) is greater than 50%. It appears to converge to around 55%, despite the expected value of the bet being positive. I’m not sure if that changes my reaction to this example – that it is a poor example to repudiate economic theory with, but I’ll reserve judgement awhile longer.

Correction: I misread the example (even the misquoted example provided). You are almost guaranteed to lose money.

> If you play the game but betting what you have left after the previous game every time, you have an expected loss in the long run.

It depends on what you mean by “expected loss”. You can “expect” a loss (i.e., a loss is more likely than a gain) but the “expected value” is always positive.

> That result is really counter intuitive to me. I had to implement a little monte carlo simulation of the toy example to prove to myself that it was true.

One way to look at it: consider the case with two flips. You can win both (25% probability, 125% gain), lose both (25% probability, 64% loss) or win one and lose the other (50% probability, 10% loss). Losing is more likely than winning, but the expected value is positive (10.25% gain) and higher than for one flip (5% compounded twice gives 10.25%). As the number of flips grows, the probability of losing goes to one. The median outcome goes to zero. But the average outcome continues to grow by 5% with each flip.

Another way to look at it: in a long sequence of flips, the “typical” outcome is that you win as many as you lose. You are as likely to do worse as you are to do better. With each pair “win/lose” you will be losing 10%. So “typically” (and more likely than not) you’ll lose most of your money if you play more than a dozen rounds.
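The two-flip arithmetic above can also be verified by exhaustive enumeration rather than simulation; a small sketch (function and variable names mine):

```python
from itertools import product

def exact_mean_and_loss_prob(n_flips, win=1.5, lose=0.6):
    """Average outcome and probability of ending below the start,
    computed exactly over all 2**n equally likely flip sequences."""
    outcomes = []
    for seq in product([win, lose], repeat=n_flips):
        wealth = 1.0
        for mult in seq:
            wealth *= mult
        outcomes.append(wealth)
    mean = sum(outcomes) / len(outcomes)
    p_loss = sum(o < 1.0 for o in outcomes) / len(outcomes)
    return mean, p_loss

mean2, p_loss2 = exact_mean_and_loss_prob(2)
# mean2 is 1.1025 (a 10.25% gain, i.e. 1.05 compounded twice) and
# p_loss2 is 0.75: losing is more likely, yet the mean is positive.
```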

> the “declining marginal value” of money, this is not what Peters’ theory is about.

But maybe it is? If your utility is logarithmic, the negative change in utility from losing 40% is larger than the positive change in utility from winning 50%. To optimize expected utility you just refuse to play.

Well, I couldn’t see what was going on either—even after doing a simulation. But it turns out that restating the problem makes it easy to understand.

This is a random walk with multiplicative steps—it starts at 100 and the multiplier is either 1.5 or 0.6. Take logs of everything and it’s now an additive random walk that starts at ln(100) and has steps of ln(1.5) = 0.41 and ln(0.6) = -0.51. So it’s a random walk that takes bigger steps down than up. On average it goes down.

If the loss multiplier were 1/1.5 = 0.667, then the steps would be equal in size (ln(2/3) = -0.41) and the expected change in log wealth would be zero.

Bob76
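A two-line check of those log steps (assuming natural logs, as above; variable names mine):

```python
import math

up = math.log(1.5)     # ~ +0.405, the step up in log wealth on heads
down = math.log(0.6)   # ~ -0.511, the step down in log wealth on tails
drift = 0.5 * (up + down)  # ~ -0.053 per flip: the walk drifts down

# With the balanced loss multiplier 2/3, the steps cancel exactly,
# so the expected change in log wealth is zero.
balanced_drift = 0.5 * (math.log(1.5) + math.log(2 / 3))
```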

> Especially when you mention the “declining marginal value” of money, this is not what Peters’ theory is about

A declining marginal value of money is part of a logarithmic utility of money, which is exactly the behavior Peters says is correct, and it is exactly equivalent to the long-run growth maximization he proposes.

> If you play the game but betting what you have left after the previous game every time, you have an expected loss in the long run.

This is completely incorrect.

> That result is really counterintuitive to me. I had to implement a little Monte Carlo simulation of the toy example to prove to myself that it was true.

It’s false

> According to Peters, there were papers written by Nobel-winning economists that “proved” it to be impossible and that were used as a foundation for lots of subsequent economics (there were also papers saying the opposite, so Peters’ results aren’t new, but according to him, these were largely ignored).

No there weren’t, and Peters didn’t say that either

Did you watch the talk I linked to?

Did you?

You said

> If you play the game but betting what you have left after the previous game every time, you have an expected loss in the long run.

He demonstrates at 4:24 that you have a positive expected value in the long run. You don’t need to do a Monte Carlo simulation for this; apply the tower property of expectations (the law of iterated expectations) and inductively you get an expected return of K(p(w-l) + l)^n, where p is the probability of winning, w is the winning multiplier, l is the losing multiplier, K is the principal, and n is the number of bets. Since p(w-l) + l = 1.05 > 1, in the limit your expected returns go to infinity.

You’re confusing expected returns with either

1. Most likely returns, which are near 0

2. Almost sure convergence of returns to 0 (you end up with nothing with probability 1, even though expected returns is infinity)

The point of expected utility is to explain that the bet is bad because your expected utility is negative.
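The closed form K(p(w-l) + l)^n can be sanity-checked against brute-force enumeration over all win/lose sequences; a sketch (function names mine):

```python
from itertools import product

def expected_closed_form(K, p, w, l, n):
    """Expected bankroll after n bets, via the law of iterated expectations."""
    return K * (p * (w - l) + l) ** n

def expected_brute_force(K, p, w, l, n):
    """Same expectation, summing over all 2**n win/lose sequences."""
    total = 0.0
    for seq in product([(p, w), (1 - p, l)], repeat=n):
        prob, wealth = 1.0, K
        for q, mult in seq:
            prob *= q
            wealth *= mult
        total += prob * wealth
    return total

cf = expected_closed_form(100, 0.5, 1.5, 0.6, 10)   # 100 * 1.05**10
bf = expected_brute_force(100, 0.5, 1.5, 0.6, 10)
# The two agree: the expectation grows 5% per flip even though the
# most likely outcomes shrink toward zero.
```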

I think I understand what you’re saying and you may be right. I’m still not sure the point requires considering declining marginal value of money since the cases where you gain any value become so incredibly rare on long chains of bets regardless of how you map to utility.

What is the point precisely? How long does the chain of bets need to be?

If it’s single bet, is it good or no? If you do it twice in a row, is the bet good or not? What about three flips? Ten? Forty-two?

I don’t know if that’s a very good explanation of Ole Peters’ attitudes or agenda. The article seems to be misinterpreting and sensationalizing for drama. His main agenda is pointing out areas where the ergodicity assumption leads to errors, particularly within economics. He’s obviously not the first to do so, and he frequently cites others (including Nobel prize winners) who developed methods that don’t depend on ergodicity.

Lest anyone confuse me with that smart “Joshua” who commented above…

I don’t understand why we’d want to focus on math to get to understanding that people don’t always act in such a way that they “maximize utility.”

Seems to me that psychology is more the way to go (not that you’d necessarily have to choose one over the other).

I don’t get it. It is fine to have a discussion about the shortcomings of economic theory (to put it mildly), but this example just seems stupid. Economic theory, even at its worst, never assumed that people maximize the expected monetary outcome – that is one of the purposes of using a utility function. Concave utility functions will embody one form of risk aversion (indeed, that is sometimes the way risk aversion is defined), and maximizing expected utility is perfectly consistent with the given example of the 50%/40% gain/loss bet. So, that seems like a poor example to use. The more sophisticated counterexamples (such as the section of the paper that Andrew cited above) involve deviations from the simple expected utility model. One of the most common is the loss aversion model which assumes that people are more averse to losses than failure to realize gains. But that is only the beginning – more complex models have been proposed and purport to explain many behaviors that don’t conform to the simple model.

But, the initial example provided just seems to completely miss the mark. I haven’t bothered to go to the original Peters piece, but if the quoted example is correct, then it is a nonstarter. All the example does is show the expected monetary value of the bet and then equate this with expected utility, which would only be true if the utility function were linear. Of course, if utility functions were linear in money, then the insurance industry would not exist. Even economists are aware enough of the real world to not base their theories on linear utility.

> I haven’t bothered to go to the original Peters piece, but if the quoted example is correct, then it is a nonstarter.

The quoted example is incorrect and Ole Peters should sue Brandon Kochkodin for defamation. The original piece is also pretty bad, and I wrote why I think so below.

This article is a terrible summary of Ole Peters’ argument. According to the paper

https://www.nature.com/articles/s41567-019-0732-0

there are three main critiques:

1. The idea of endowing each individual with a bespoke utility function to perform expectations over makes the theory excessively flexible. It’s hard to divine which utility function to use in predictions pre-hoc, and you can justify a large set of observations post-hoc by choosing an appropriate utility function. “The initial correction — expected utility theory — overlooked the physical problem and jumped to psychological arguments, which are hard to constrain and often circular.”

2. He also makes the philosophical argument that taking expectations in a non-ergodic process involves assuming that “individuals can interact with copies of themselves, effectively in parallel universes”. Or, in other words, since non-ergodicity means their life’s worldline doesn’t revisit that bet a bajillion times, they never actually experience the positive outcomes with a small probability, so those outcomes shouldn’t influence their decisions. This is an extremely anti-Bayesian view of probability, so it’s not surprising that we don’t like it. If your view of probability comes from statistical-mechanics ensembles, I guess I can see where he’s coming from, but to me an expected utility over a stochastic decision is just a way of collapsing a space of unknown parameters into a decision following some axioms of coherence.

3. He then introduces an experimental “falsification” of expected utility theory through an experiment where people seemingly change their utility function depending on the type of bet. Vaguely, they give people a series of bets where single-period betting choices made by maximizing expected linear utility creates optimal growth, then a series of bets where single period betting choices made by maximizing expected logarithmic utility creates optimal growth, and observe that people have seemingly changed their utility functions. Problem is, you can still fit this into an expected utility framework by defining utility over final wealth and choices over betting strategies.

They seem to have missed the point of utility theory altogether–it can actually be whatever you want so long as it’s coherent in that it avoids Dutch books. It’s flexible to the point of not being falsifiable because it’s not really a scientific theory; it’s a system for building descriptive models of decisions under uncertainty which exhibit certain desirable properties.

And I do disagree with the idea that individuals are really endowed with a particular utility function encoded in their psychology that they use to assess every risk, and economists do use that idea to build heterogeneous-agent macroeconomic models, and that’s bad and terrible, rant rant rant. But that’s not really a critique of expected utility as a construct so much as of a particular family of macroeconomic models where it’s applied, and also Peters here uses the critique to sell his own univariate optimization model, long-term average growth rate maximization, that he thinks is the master key to explaining society. I’m sure if I spent a month, I would have an experiment that thoroughly falsifies the idea that people generally maximize their long-term average growth rate. People are not doing BFGS in their heads all day, and anything that takes that as the premise in all situations and tries to build it into a master explainer of all things is doomed to failure, whether you’re a macroeconomist or a physicist or a philosopher or whatever.

The experiment described:

https://arxiv.org/pdf/1906.04652.pdf

Though the paper is used to sell long-term growth optimization based microfoundations, and is rife with weird zombie statistics like prior-width robustness and Bayes factors, it’s a pretty clever experimental setup with some cool results that don’t look like noisy bullshit.

I do want to talk more about that terrible Bloomberg article:

The article posits the coin tosses as an example of something expected utility can’t explain. Peters’ paper explicitly describes the coin tosses as the motivating example for expected utility, and clearly states that it can be accounted for with logarithmic utility of wealth. The article states that expected utility means we “try to choose the option that maximizes our wealth”. No you dolt, it’s the EXPECTED UTILITY OF WEALTH. It proceeds through the whole article to treat expected utility of wealth as equivalent to average-case wealth.

“Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss. In economics jargon, the expected utility is positive, so one might assume that taking the bet is a no-brainer.”

Whoever wrote the article clearly believes that “expected utility” means “expected money”, which is the exact problem expected utility was meant to solve. If you read this article and thought Peters didn’t understand the difference between the expectation of a function and the function of the expectation, or didn’t know about Jensen’s inequality, I wouldn’t blame you.

Thank you, this is helpful.

According to the Kelly criterion, if you are offered 1.25-to-1 odds with a 50% chance of winning, you should risk 10% of your bankroll, not the 40% that the example puts at stake. So the bet as outlined is four times what Kelly recommends, which would indicate ruin would be highly likely at some point. The real question is why anyone would take the bet, unless they REALLY had a bankroll in excess of $1,000.

…and could wager that $1,000 in $100 increments
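For what it’s worth, the Kelly arithmetic can be checked directly, both with the closed form f* = p − q/b and by numerically maximizing the expected log growth (a sketch; function and variable names mine):

```python
import math

def kelly_fraction(p, b):
    """Closed-form Kelly stake for a bet paying b-to-1 with win probability p."""
    return p - (1 - p) / b

f_star = kelly_fraction(0.5, 1.25)   # 0.10: risk 10% of the bankroll

def log_growth(f, p=0.5, b=1.25):
    """Expected log growth per bet when risking fraction f of wealth."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Numerical check: the same fraction maximizes expected log growth.
grid = [i / 1000 for i in range(0, 800)]
best_f = max(grid, key=log_growth)
```

In the article’s game, letting the whole bankroll ride puts 40% of wealth at risk each flip, four times this fraction.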

Utility theory is a long-term interest of mine, and I had stumbled on debates about ergodicity decades ago. I don’t think there is much new or interesting in Peters, although this particular issue is not one I’ve spent much time on.

What makes this kerfuffle so irrelevant for me is that the other problems with utility as employed by economists are so much greater—and in my view fatal. I realize this gets away from the theme of this thread, so I won’t get into a big disquisition. Suffice to say: (1) Utility theory makes assumptions about decision-making that are wildly at variance with psychological models and evidence. (Behavioral econ just scratches the surface, since its frame is “deviations from utility maximization”.) (2) Normatively, it is intolerably reductionist, scrubbing out all the substantive knowledge of well-being developed by other fields of study and experience. (3) (And this is my particular bugaboo…) It rests on the assumption that all cost and preference sets are convex, so that utility gradients can be interpreted as meaningful for individual and collective decision-making.

Those are the big problems; differences between expectations that do or don’t depend on time/sequencing dependence are inconsequential by comparison.

Like statisticians who want to chuck NHST, I’m for getting rid of utility theory, except where it’s a useful heuristic, as in much of game theory. But just as a heuristic. (Behavioral economists may detect a note of irony.)

I wholeheartedly agree with this assessment.

A quick demurrer:

“Utility theory makes assumptions about decision-making that are wildly at variance with psychological models and evidence.” Sure. But (a) any set of consistent choices can be shoehorned into utility theory (which is, to Peters, a flaw), but it is also true that any choice at all can, by definition, be shoehorned into psychology. Sometimes crazy people make crazy choices! “My brain made me do it” is indeed a broader criterion than “a process which aims, with error, to make choices which make me feel better rather than choices that make me feel worse.” But maybe that’s too broad to be useful. Depends…

(b) “intolerably reductionist.” Sure. Guilty. (But not too guilty, if the alternative to reductionism is unconstrained theorizing.)

(c) “Depends on convexity of the choice set.” Only to solve the model and prove optimality. Utility theory is, to me, a useful normative theory which has a lot of challenges in a lot of situations as a positive theory. Like every sufficiently powerful theory ever.

Wrong place for a debate on this, but I should at least make clear that I’m aware of these demurrals from many years of disputing this stuff with economists.

Can’t believe nobody has brought up xkcd 793 yet.

https://xkcd.com/793/

I thought about it, but I don’t know how to find an xkcd by topic.

Bob76

I found it by just typing “xkcd physicists” into google. No guarantee it will work, but I’ve had a good track record googling stuff.

One from SMBC is also great

https://www.smbc-comics.com/comic/2012-03-21

Thanks!

My heuristic to decide whether to listen to theoretical physicists who have solved deep economic problems is: how much money have you made from implementing your ideas? This can readily be adapted to analogous cases of theoretical physicists “solving” the energy crisis, the water crisis, etc. etc.

It turns out that experts in their respective fields are, well, experts and know a lot of stuff, so the probability that an outsider can come in and turn everything around is small.

I’m glad that people brought up the Kelly criterion. My vote goes to Kelly and Bernoulli. Back in 2008 I started reading economics blogs, and I confess that I have yet to see any of them talk about the Kelly criterion. I suppose that is because of marginalism, not utility theory. I dunno. For small amounts of money, money is a reasonable proxy for utility, on the assumption that small changes in utility are roughly linear. IIUC, marginalism won out in economics in the late 19th century. By taking one’s stake into account, the Kelly criterion is definitely not marginalistic. Bernoulli also took the stake into account by taking on the question of losing it all in a series of bets with positive expectation. It turns out that the best way to minimize losses, by putting only part of one’s stake at risk, also provides the best expected return on investment. By contrast, by ignoring the stake, marginalism seems to advise going all in all the time.

Example: In a popular book on economics the author, a professor, advises his classes, when they make charitable donations, to give all their donation to only one charity, in order to maximize the personal utility of their donation. If I were in his class I would ask him if he diversifies his investment portfolio. Surely he does. Then I would ask, if it is a good idea to diversify his portfolio, why is it not a good idea to diversify his charitable donations? I would be interested to hear his answer.

In the bet in the article I would take the $100 as the stake and not bet it all.

Be aware that economics blogs are extremely unrepresentative of the field as a whole. As those blogs are a major entry point for non economists, this is a big part of the problem I think.

“For small amounts of money, money is a reasonable proxy for utility, on the assumption that small changes in utility are roughly linear”

Not quite. When dealing with money, we think of utility in terms of what we call “indirect utility.” There’s a simple mathematical trick that you can do to switch between thinking of utility in terms of goods and services and utility in terms of money. Conceptually and mathematically, “marginal utility of money” is well defined. And this relationship is generally not assumed to be linear, as that would imply risk neutrality.

“Then I would ask it is a good idea to diversify his portfolio, why is it not a good idea to diversify his charitable donations? I would be interested to hear his answer.”

They are two separate things, which is why the behavior is different. Investment is saving–it’s just future consumption. Charity is a specific expenditure that grants utility now.

Thanks for your reply. :)

“Charity is a specific expenditure that grants utility now.”

That may or may not be the case. This year, for instance, I am making a single donation to a food bank, because of what I hope is a short term emergency. But usually I donate to charity as an investment in human capital. Also, diversity is a way of dealing with uncertainty, which is a normal fact of life. That holds true for portfolio diversity as well.

That’s a perfectly fine and noble way to think about charity! But it’s still probably not an “investment” in an economic sense, because it has no significant effect on your future income. To put it another way, if you didn’t gain utility from the donation, then you definitely wouldn’t have it as part of your portfolio. But ultimately I don’t find the distinction that meaningful, because it only serves the internal consistency of the model.

The bigger problem is the assertion that one should “give all their donation to only one charity, in order to maximize the personal utility of their donation.” That’s nonsense, and unsupported by economic theory. Donate to whoever you want! Your preferences are your own, and the person who made that comment is trying to impose their preferences onto you. That’s both morally and economically repugnant.

Your last sentence got caught in my throat. Is it obviously “morally and economically repugnant” to try to impose one’s preferences on another? It is not so obvious to me. I understand the sentiment and I generally support free choice. But there are many circumstances where preferences need to be challenged, where conflicts are unavoidable, and where it is absolutely required that we try to impose our preferences on others. If that imposition takes the form of violence or coercion, I guess I’d agree with you. But if it takes the form of persuasion, it may well be required in a civil society.

Maybe I’m missing something here, but the expected payoff of this wager is negative. The wager is multiplicative, not additive. The wager was described as:

Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total. Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss.

So, starting with $X, if you flip heads you get $X * 1.5; if you flip tails you get $X * 0.6. If you make this wager many times, flipping heads a times and tails b times, your final bankroll is

$X * (1.5)^a * (0.6)^b

Since you’re just as likely to flip heads as tails, over “many” iterations, a/b converges to 1. But each matched heads-tails pair (1.5 * 0.6) = 0.9 < 1. On average you will lose 10% of your bankroll in each matched heads-tails pair of iterations.

After 50 flips, the expected value of your bankroll is $X * (0.9)^25 = $X * 0.072. You should expect to lose 92.8% of your bankroll if you can afford to play that long.

All these smart people here, plus the person who wrote the article, and it took 13 hours and 60+ comments to realize that the example underpinning this whole discussion was completely mischaracterized. Math is hard.

Math is easy. Thinking clearly is not ;)

> On average you will lose 10% of your bankroll in each matched heads-tails pair of iterations.

What you’re missing is that you’re not guaranteed to get matched heads-tails pairs. The excess-tails / excess-heads distribution is symmetrical but the payoff is skewed to the upside. When you include the full distribution of outcomes, the average gain for a pair of flips is 10.25% (i.e., the mathematical expectation of the outcome if you start with $1 is $1.1025, not $0.9).

As I wrote in another comment: https://statmodeling.stat.columbia.edu/2020/12/19/in-this-particular-battle-between-physicists-and-economists-im-taking-the-economists-side/#comment-1618147

“Consider the case with two flips. You can win both (25% probability, 125% gain), lose both (25% probability, 64% loss) or win one and lose the other (50% probability, 10% loss). Losing is more likely than winning, but the expected value is positive (10.25% gain) and higher than for one flip (5% compounded twice gives 10.25%). As the number of flips grows, the probability of losing goes to one. The median outcome goes to zero. But the average outcome continues to grow by 5% with each flip.”

Note that in that comment I also make the same argument about matched pairs that you do. It’s true that a “typical” game ends in a loss. But when you include the “fluctuations” around that “typical” game, the expected value becomes positive.

> After 50 flips, the expected value of your bankroll is $X * (0.9)^25 = $X * 0.072.

That is just as wrong as saying that the expected value after one flip is $X * 0.949 or that expected value after two flips is $X * 0.9.

Carlos:

Yup. You are right here, and Lex is wrong. Probability is tricky, and I guess the misleading term “expectation” doesn’t help any.

“That’s a perfectly fine and noble way to think about charity! But it’s still probably not an “investment” in an economic sense, because it has no significant effect on your future income.”

I regard my own charity as action in the world. That is not particularly noble, IMO. For me it is part action (karma) and part renunciation (moksha). Everyone is different. It is the action part that makes the future important.

“The bigger problem is the assertion that one should “give all their donation to only one charity, in order to maximize the personal utility of their donation.” That’s nonsense, and unsupported by economic theory.”

That’s very interesting, given that an economics professor writes that he teaches that to his classes.

A financial economist says, “The S&P 500 expected return is 10% p.a.” Here he uses expectation in the sense of a Lebesgue integral under some probability measure. The investor who hears it thinks the S&P 500’s return is “expected” to be 10% p.a., more often in the sense that returns will actually come in close to 10% p.a. I have come across plenty of professional investors (including the CIO of a large pension fund) who think this way.

Here is a recent JoF paper on how investors perceive correlation:

https://doi.org/10.1111/jofi.12993

How the statistical/probability concepts that communicate economic theories are perceived probably has as much to do with philosophical issues at the heart of probability (which severely limit its useful application in individual decision making) as it does with the rationality of people.

I think financial economists mostly use geometric expected returns, not arithmetic expected returns, at least for long-term forecasts. Sometimes both estimates are given. As for the investors interpreting those forecasts, they often don’t even understand the difference well.

Yes… I agree that physicists are no better than economists when it comes to these studies. However, I think what physicists are advocating is that even the approach adopted to study economic phenomena by conventional economists is flawed. If somebody wants to study a phenomenon quantitatively, then in my opinion one has to at least proceed along the physics way (whether a quantitative approach is feasible or useful for social sciences is very much debatable). That means making reasonable assumptions about the phenomenon at hand. The conventional theory even fails to acknowledge that the ‘maximization’ in utility maximization can be done in different ways; see e.g. https://www.nature.com/articles/srep13071

von Neumann and Morgenstern’s theory of utility imagines that people have a best preference (1) and a worst preference (0), and when offered a bunch of free lottery tickets for the best option in place of some certain option x, where 0 < x < 1, there’s a point at which the person will take the lottery tickets over x. It’s a little more complicated (e.g., there is an axiom of consistency), but vN & M demonstrated that it is almost trivial that variable personal preferences can map onto a single objectively comparable interval scale. This theory of utility is consistently misunderstood. A plain overview appears in an early chapter of Ken Binmore’s *Rational Decisions*.

What if utilities are partially ordered?

That’s a bit terse. Let me offer one possible example.

Right now in the debates in the US about different actions to take in the face of the pandemic, the actions are evaluated in at least three dimensions: 1) health, 2) economic well being, 3) freedom. Not that these dimensions are well defined and uncorrelated, but suppose that for some individual action A is better than action B in terms of health and economic well being, but worse in terms of freedom. So far, we cannot say that A is preferable to B or vice versa. What happens if we are forced to choose between A and B and A is chosen? Does that mean that A is better than B? No, because under different circumstances B might be chosen.

It is common that human preferences may be intransitive. Given a binary choice, a person may choose A over B, B over C, and C over A. That is often interpreted as irrational. But it may well be rational if preferences are only partially ordered.

Both partially ordered and intransitive preferences fail to be “rational” in the standard sense. But I think your example involves neither of these conditions; rather, it violates the independence axiom, which says that if you prefer A to B, then you must also prefer the lottery

pA + (1-p)C

to

pB + (1-p)C

for any lottery C.

This assumption is generally thought to be the most problematic. The famous Allais paradox illustrates how we can show it fails in experimental settings, for example.

By the way, in contrast to Andrew’s claims above, violations of expected utility theory are extremely well-known in economics. There’s a solid 70 years of research on exactly this sort of issue, and it is standard undergraduate fare.
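For concreteness, the Allais paradox mentioned above can be written down in a few lines. In its standard form (payoffs in $ millions; variable names mine), any utility function with u(0) = 0 is forced to rank the two pairs of lotteries the same way, which is exactly what the common response pattern violates:

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

# The standard Allais lotteries (payoffs in $ millions):
A = [(1.00, 1)]                          # $1M for sure
B = [(0.10, 5), (0.89, 1), (0.01, 0)]    # mostly $1M, a shot at $5M
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

# For any u with u(0) = 0, EU(A) - EU(B) = 0.11*u(1) - 0.10*u(5)
# = EU(C) - EU(D), so preferring A over B while also preferring
# D over C cannot be expected-utility-consistent.
utilities = [lambda x: x, math.sqrt, lambda x: math.log(1 + x)]
consistent = all(
    (expected_utility(A, u) > expected_utility(B, u))
    == (expected_utility(C, u) > expected_utility(D, u))
    for u in utilities
)
```

People who choose A in the first pair and D in the second (a very common pattern in experiments) are therefore violating the independence axiom, whatever their utility function.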

Chris:

Of course, violations of expected utility theory are extremely well-known in economics. I didn’t claim otherwise! In my above post, I’m siding with the economists, not the physicists. My only criticism of economics was in the way that utility theory is often presented in their textbooks. This is the same way that I criticize statisticians for their textbook presentations of null hypothesis significance testing. But, yes, economists fully understand that it could make sense to do a single bet without that being the same as committing oneself to a series of bets, and economists also understand that utility theory is just a model.

Rodan’s original Japanese name was “ラドン” in case anyone was wondering (pronounced Radon).

https://projecteuclid.org/euclid.ss/1009212411

Andrew, I’m curious to drill down on why you think that is a good bet. I get your point that the ‘utility’ of money is what you can buy/do with it, and will depend on life context. But presumably, your reason for saying “take the bet” is that the *expectation* value is positive, right?

A while back, I remember one of those big lotteries had got to really large pay-off land – like a few billion. Tickets cost only a couple dollars, and despite the astronomical odds, it turned out the expectation was positive for a little while there. Being a quantitative dork, I bought a ticket just to humor myself.

But is that a *good* bet? Sure, I had the cash and it didn’t put me out, but I’m not sure it’s correct to say that it’s a good bet. After all, you are just donating money to the lottery with a higher level of certainty than nearly anything else in life! Working through the math on this bet, and the extension to geometric stochastic growth, was extremely illuminating for me, and it really does drive home the difference between ‘typical’ and ‘average’!

So, we can agree that the ensemble expectation is meaningfully different than time average in these non-ergodic cases. My question is: you say “take the bet!”, but how many times? Ole Peters is a hard-core temporal frequentist and basically thinks probabilities only meaningfully exist in a long-run repetition sense (I totally disagree with that FWIW). Do you advocate taking the bet just once? Or having taken the bet, you should take it again (assuming the amount of money is still trivial)? How many lottery tickets should I have purchased?

I guess I’m wondering about the fuzzy boundary between the one-off evaluation in which the ensemble expectation is our only probabilistic guide, and the hypothetical long-run over which the non-ergodic time average effects kick in, and the average is driven by fewer and fewer larger and larger outliers…

Chris:

I’d take the bet: a 50/50 chance of $150 or $60 is better than $100 to me. I mean, sure, do what you want, it’s only $100 at stake so who really cares, but from basic economic principles, yes, I think it’s a good bet. Regarding multiple bets: I guess I need to know what the rules are. If the option is: (a) do nothing and keep the $100, or (b) start with $100 and play 20 straight times, each time betting whatever I have at that point, then I’d do (b). If the option is: start with $100 and play X times, each time betting whatever I have at that point, where X is a number I must choose in advance, then what value of X would I pick? I’d pick some value of X more than 20; I’m not quite sure how much more, as I’d want to look at the distribution carefully. It’s just a binomial (X, 0.5) distribution, appropriately scaled and exponentiated.
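That “scaled and exponentiated binomial” can be computed exactly; a sketch of what the distribution looks like as X grows (function name mine):

```python
from math import comb

def loss_probability(n_plays, start=100.0, win=1.5, lose=0.6):
    """Exact probability of ending below the starting stake after
    n_plays flips: heads ~ Binomial(n_plays, 1/2), and the final
    bankroll is start * win**heads * lose**(n_plays - heads)."""
    p_lose = 0.0
    for heads in range(n_plays + 1):
        final = start * win ** heads * lose ** (n_plays - heads)
        if final < start:
            p_lose += comb(n_plays, heads) / 2 ** n_plays
    return p_lose

# The chance of coming out behind keeps growing with the number of
# plays: roughly 0.75 at X = 20 and roughly 0.95 at X = 200.
p20, p200 = loss_probability(20), loss_probability(200)
```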

Basic economic principles state that both risk (e.g. std dev) and expected return should matter. For example, in the case of quadratic utility, E(u) = E(x) – 0.5*a*var(x), where a is “risk-aversion” coefficient. In this world, a heuristic like Sharpe ratio E(x)/Sd(x) can be useful to compare bets.

That said, I deeply distrust graduate student experiments involving imagined sums of money or small amounts of money.

What’s interesting is that the example itself seems wrong, as long as one keeps the gains and losses on the table.

Multiplying by 1.5 and then by 0.6 is multiplying by 0.9 – so this does not seem to work out all that well. One needs to pocket the gains and to compensate the losses on each move to keep things additive (and thus profitable).

So the example is not very good and very ambiguous.

Michael:

It depends how many times you play. If you just play a few times, it’s a good deal because you’re getting potential large gains with only the risk of losing some part of $100. If you commit to playing a thousand times in a row, that’s another story. The problem isn’t clearly defined until you (a) clarify what will happen to you if you lose the $100, (b) say how many times you have to play, and (c) clarify what you will do if you win $1,000,000 or whatever. This is part of my problem with the example: it’s presented as some sort of math problem, but there’s no good answer without some real-world context.

That’s true, as long as you keep the game additive (by making sure you have $100 on the table each time you play).

But if you keep what you have on the table (without taking out the gains or replenishing the losses), the game becomes multiplicative.

In this case, you cannot lose $100 (you did not replenish, and you can’t reach zero regardless of how many times you multiply a positive number by 0.6). But the expectation is no longer profitable. Roughly speaking, on average each two moves your total is multiplied by 1.5*0.6=0.9, so the game without taking out the gains each time one wins and replenishing the losses each time one loses becomes a losing game.

The problem with their paper is that, without additional clarification, an average person would assume that there is no taking out the gains and replenishing the losses, and so their intuition would tell them that this is, in fact, a losing game – on average after 2 moves $100 becomes $90, then after the next two moves $90 becomes on average $81, and so on. And I would assume that this background assumption – that the whole total is kept on the table, and therefore the game is on average a losing multiplicative game – is what’s behind the correct refusal to play.

> on average each two moves your total is multiplied by 1.5*0.6=0.9

How about this?

On average, in one move your total is multiplied by x = 1/2 * 1.5 + 1/2 * 0.6 = 1.05

On average, in two moves your total is multiplied by x * x = 1.1025

You can do the full calculation if you want: 1/4 * 1.5 * 1.5 + 1/4 * 0.6 * 0.6 + 1/4 * 1.5 * 0.6 + 1/4 * 0.6 * 1.5 = 1.1025

If you agree with the first calculation (the solution is the average of the outcomes, weighted by their probability), why wouldn’t you agree with the second calculation?

If you don’t agree with the first calculation, how would you complete the following sentence?

On average, in one move your total is multiplied by ___
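Those one-move and two-move averages can be checked by enumerating the equally likely paths; a quick sketch:

```python
from itertools import product

multipliers = (1.5, 0.6)  # heads, tails, each with probability 1/2

# One move: average of the two multipliers.
one_move = sum(multipliers) / 2
print(one_move)  # 1.05

# Two moves: average over the four equally likely head/tail sequences.
two_moves = sum(a * b for a, b in product(multipliers, repeat=2)) / 4
print(two_moves)  # 1.1025, which is 1.05 ** 2
```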

You think in terms of ratios. When you win, you gain 50%, so the ratio is 1.5:1 = 3/2.

When you lose, you lose 40%, so the ratio is 1:0.6=5/3 > 3/2. So the loss is actually worse than the gain, the ratio is less favorable.

If you want to average in an additive way, you need to consider logarithms of those ratios. The result is a product; if you want to treat it as a sum, you need to convert to logarithms (and then convert back to see the actual effect of the average).

(All this assumes no replenishment of losses and no taking out the gains, the whole total is kept on the table. People are correctly assuming that the house is cheating, misrepresenting a bad deal as a good one, so they refuse to play.)

I guess you agree with my calculation of the (arithmetic) average. I agree with you that the geometric average may be more relevant depending on unspecified details. As you said, the problem is ambiguous. That doesn’t change the fact that the expected return is positive.

As for the house cheating, if they were offering a two-round $100 game to anyone who wanted to take it, wouldn’t they be losing on average $10.25 per game? Wouldn’t the players be getting a $10.25 profit on average?

Yes, actually, if one quits after 2 rounds, one wins on average. One only wins in 1/4 of scenarios and loses in 3/4 of scenarios, but the average is higher: 225 + 90 + 90 + 36 = 441 = 4 * 110.25.

And then when one multiplies the next round’s ratio by the result of that, the multiplication distributes over the average, so we can just keep multiplying by the average.

So, it follows that I have been wrong. My analysis of financial aspect of it has been wrong.

What’s really going on is that the more rounds you play with the total on the table, the closer the game gets – informally and not quite correctly speaking – to a “lottery”: your top gain can be very big (exponential), but the probability that you will be in the net positive tends to 0 (the typical, i.e. geometric-average, ratio is less than 1).

So I was wrong about the psychology of this too; when people decide whether to play a lottery with only a high payout (no small payouts), they usually just think “big gain, small chance” vs. “small loss, large chance,” and not about the average. (So the effect we see here is comparable to something like: sell 900,000 tickets at 1 dollar each and pay out 1 million to one ticket, or sell 1,100,000 tickets at 1 dollar each and pay out 1 million to one ticket; most people would be equally likely to play or refuse either of these two versions, regardless of this difference in expectation. So they do see that they will almost certainly lose, and they are aware of a potentially huge upside, but they don’t actually compute the averages before deciding.)

Yes, so you are right – if one keeps it on the table, it is essentially a lottery profitable to the player and losing to the house on average (but the chances of winning become smaller as the number of rounds become bigger).
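That lottery-like limit can be made concrete: the mean grows like 1.05^n while the probability of ending up ahead falls toward zero. A sketch:

```python
from math import comb, log

def p_ahead(n, up=1.5, down=0.6):
    """P(final wealth > initial) after n all-in rounds of the coin-flip game."""
    # Ahead iff up**h * down**(n-h) > 1, i.e. h > n*log(1/down)/log(up/down).
    threshold = n * log(1 / down) / log(up / down)
    return sum(comb(n, h) for h in range(n + 1) if h > threshold) / 2 ** n

for n in (10, 100, 1000):
    print(n, round(1.05 ** n, 2), round(p_ahead(n), 4))
```

The mean (second column) explodes while the chance of being in the net positive (third column) shrinks: the average is carried by an ever-thinner slice of lucky trajectories.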

> Multiplying by 1.5 and then by 0.6 is multiplying by 0.9

But the example is not “multiplying by 1.5 and then by 0.6”. It would be one of the following sequences:

Multiplying by 1.5 and then by 1.5

Multiplying by 1.5 and then by 0.6

Multiplying by 0.6 and then by 1.5

Multiplying by 0.6 and then by 0.6

This works out quite well in one case. Not that well in the others. On “average” it’s profitable. That average might not be what you care about.

You’re right, there’s some ambiguity there. Whether you like the bet or not depends on the details.

Standard theory is flexible enough to accommodate both preferences depending on the choice of utility function.

Maybe the paper claims that standard theory is wrong and only one of the solutions is right. You should never take the bet. Not even once.

Maybe it doesn’t claim that, but then I don’t know what the example is intended to show.

This is just a fat-tails thing. The reason that it is profitable “on average” is that there will always be some lucky person who wins a lot, while everyone else loses. But that average profitability is meaningless since no one will ever experience it. The median experience is that you lose all of your money. It’s the same as the Powerball: sure, sometimes the Powerball has a positive expected value, but it is meaningless because the probability of winning is so low. If you run the Kelly criterion on it, you find that it would recommend that if you have a bankroll of, say, $500 million, then it is worth buying one $2 ticket. But effectively, even though it is a positive-expectation bet, it is still a bad bet. Same with this. Sure it’s a positive expectation bet, but the condition of having to wager all of your bankroll each time, makes it a bad bet.

> Sure it’s a positive expectation bet, but the condition of having to wager all of your bankroll each time, makes it a bad bet.

It makes _what_ a bad bet?

Playing one time is a bad bet?

Otherwise, how many times would you have to commit to play for it to become a bad bet?

“how many times would you have to commit to play for it to become a bad bet?”

If the stopping condition is some finite number of times and you can afford to lose your original bankroll, then go for it. But if the stopping condition is when your bankroll drops below 5¢, everybody stops with 5¢ or less, or dies without cashing out. Hmmm. Let’s put a stopping condition on the upside. Say that you cash out if your bankroll is $1,000 or more, then it’s probably a good bet, eh?

> But if the stopping condition is when your bankroll drops below 5¢, everybody stops with 5¢ or less, or dies without cashing out.

Under these conditions any bet is bad.

Say it’s a coin flip and you lose 10% or win 100%. It’s a bad bet, even though the geometric-average growth is positive.

Hey, even if it’s a sure gain of 100% in every flip it’s a bad bet.

Right. One flaw of the proposed game is that it is an infinite sequence of bets in which you always put up your bankroll.

“It makes _what_ a bad bet?” – the game we are talking about, where you bet on the coin flip and lose 40% or win 50%. If you have to bet all of your bankroll it is a bad bet. Any bet where you bet your entire bankroll is a bad bet unless your probability of winning is 100%. If your probability of winning is not 100%, then the optimal bet size will be less than 100% of your bankroll, so betting 100% of your bankroll means ruin eventually. We know there is only a 50% chance of winning this game described above, so the optimal bet size cannot be 100%. Therefore, it must be a bad bet.

Yes, one time is a bad bet. If you define bad bet as any bet which will lead to ruin over time. The only good bet is one with a positive expectation and made in an optimal amount or less, but never more than the optimal amount.

Sure, you won’t go broke if you just play one time, but that doesn’t make it a good bet. It’s not like playing one spin of roulette magically makes it a good bet instead of a bad bet.

Tbw:

But this is one of the points of my above post. The concept of “your bankroll” is itself artificial. I can lose $100 and I’ll be just fine. Losing $100 is not “ruin.” Also, regarding “over time,” you have to specify how many times the game will be played. Eating one sandwich is healthy and not at all irrational; committing to eat 1000 normal-sized sandwiches within the course of one day is a bad idea and could well kill a person.

Also, your definition of a “bad bet” is . . . well, as Phil might say, it’s your definition so you can define it however you want, but the idea that there’s some “optimal bet” and that any bet is “bad” if it’s more than that amount . . . that’s a pretty strong rule. In the setting being described here, the bet in question is evaluated not with respect to an optimum but to the alternative of not betting. If you’d rather not risk any of your $100, that’s your call, but I think it makes perfect sense to risk some or all of $100 for a positive expected-value bet.

It would have been more realistic in terms of investment or gambling if the initial stake had been something like $35,000 where you stand to win $17,500 or lose $14,000. One problem with that is that such large amounts of money are surely not proxies for utils.

So, I think there are 3 issues you are raising: 1) what if it’s a single bet rather than an infinite series, 2) the amount of money is insignificant so ruin is off the table, and 3) my definition of a “bad bet” is extreme and also merely my definition.

Regarding point #3, the article states:

Peters takes aim at expected utility theory, the bedrock that modern economics is built on. It explains that when we make decisions, we conduct a cost-benefit analysis and try to choose the option that maximizes our wealth.

So we are talking about the maximization of wealth. Wealth is maximized via the Kelly Criterion. Any amount wagered that diverges from what the Kelly Criterion spits out is not optimal and therefore doesn’t maximize wealth. Now if you bet less than Kelly while you don’t maximize wealth you at least aren’t inviting ruin, so I would be willing to make bets for less than Kelly. But the minute you exceed Kelly you invite ruin, so yes that’s a bad bet and I would never do it. Yes, that’s a strong rule, but a necessary one if you are going to maximize wealth, which is what this is all about, isn’t it? In the bet described, the optimum amount to bet according to Kelly is negative, so you shouldn’t bet.

As for point #2, I again go back to the fact that we are talking about wealth maximization, the fact that $100 won’t ruin you doesn’t really matter. Losing $100 doesn’t maximize your wealth either.

As for point #1, the single bet instead of a series: I think my beef with using the expected value boils down to this – it simply isn’t enough information, in that it doesn’t distinguish between wildly different wagers. For example, a $100 bet that returns 20% for winning and -10% for losing, or one that returns 110% for winning and -100% for losing, has the same expected value as the 50%/-40% bet in the article. But in effect one is a $10 wager at 2 to 1 odds, one is a $100 wager at 1.1 to 1 odds, and the example in the article is a $40 wager at 1.25 to 1 odds. What matters is not just the 10% favorable spread in the payouts, but how much you have to wager to get that 10% spread. The variance in the returns is critically important, and expected value just sweeps that under the rug and ignores it.
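The three equal-expectation wagers in that paragraph can be compared directly; a sketch (each is a 50/50 bet on a $100 stake):

```python
# (win %, lose %) for three 50/50 bets on a $100 stake, all with EV = +5%
wagers = {"+20/-10": (20.0, -10.0),
          "+110/-100": (110.0, -100.0),
          "+50/-40": (50.0, -40.0)}

stats = {}
for name, (win, lose) in wagers.items():
    mean = (win + lose) / 2
    sd = (((win - mean) ** 2 + (lose - mean) ** 2) / 2) ** 0.5
    stats[name] = (mean, sd)
    print(f"{name}: EV = {mean:+.0f}%, sd = {sd:.0f}%")
```

Same +5% expectation in all three cases, but standard deviations of 15%, 105%, and 45% of the stake: the information the bare expected value throws away.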

As for sandwiches, I think a better analogy is that while eating one Monte Cristo sandwich won’t kill you, perhaps eating one every day will eventually kill you. As a general rule of thumb, evaluating diet choices by asking “what if I eat this every day?” probably isn’t a bad starting place. I’m not advocating being a monster and never eating a Reuben, or never making a single questionable or bad bet, but as a guiding principle, I think looking at the long-term impact of repeating a decision over and over is a good place to start evaluating whether it is really a good decision.

Tbw:

You quote: “Peters takes aim at expected utility theory, the bedrock that modern economics is built on. It explains that when we make decisions, we conduct a cost-benefit analysis and try to choose the option that maximizes our wealth.”

But that’s wrong. Expected utility theory (which, incidentally, can simply be called utility theory, as one of the consequences of the theory is that utility is the same as expected utility) does not assume that we’re conducting a cost-benefit analysis when we make decisions. It’s a theory that says that for decisions to be coherent, we have to act as if we’re doing these cost-benefit analyses, but the field of economics recognizes that we can’t actually be doing so. Utility theory represents an ideal of coherence, not a description of how decisions are made.

Regarding the other points, it depends how many bets will be made. The number of bets can’t be infinity because we have a finite lifespan and at some point we might want to spend that damn money. The appropriate decision can depend on the number of bets. It can make sense to do it for 1 round or 20 rounds but not for 1000 rounds.

> So we are talking about the maximization of wealth. Wealth is maximized via the Kelly Criterion. Any amount wagered that diverges from what the Kelly Criterion spits out is not optimal and therefore doesn’t maximize wealth.

That’s not correct. It doesn’t maximize wealth, it maximizes the _logarithm_ of wealth.

> Now if you bet less than Kelly while you don’t maximize wealth you at least aren’t inviting ruin, so I would be willing to make bets for less than Kelly. But the minute you exceed Kelly you invite ruin, so yes that’s a bad bet and I would never do it.

That’s not correct. It’s the minute you exceed _twice_ Kelly that you will go down to zero.

> “It makes _what_ a bad bet?” – the game we are talking about, where you bet on the coin flip and lose 40% or win 50%. If you have to bet all of your bankroll it is a bad bet.

If there is _one_ game we’re talking about, it’s the one described in the Bloomberg article quoted by Andrew:

“Consider a simple coin-flip game, which Peters uses to illustrate his point. Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total.”

I cannot imagine that anyone would read that description and think that there is an implicit “Starting with the whole of your fortune, which amounts to the hefty sum of $100” there. Everyone will understand that $100 is the size of the initial bet, unrelated to their own level of wealth, which will typically be orders of magnitude higher.

I guess that the article doesn’t properly describe what Peters’s example is about. I haven’t read the example in the document linked from the Bloomberg article carefully, but the Nature Physics example is clearly different:

“For example, a gamble can model the following situation: toss a coin, and for heads you win 50% of your current wealth, for tails you lose 40%”.

I think we can agree that there are different games being discussed in the comments, so it’s useful to be clear about what we’re talking about.

Perhaps my use of the term ruin is confusing, I don’t mean that you lose all of your wealth necessarily, just your bankroll. Obviously, what people consider their bankroll will vary, but the point is that if you lose your bankroll you are ruined in the sense that you can’t play anymore.

People seem to be hung up on the $ amount of the bet, and that the $100 isn’t significant to them or to many people, but that seems irrelevant to me. It seems to me that Peters’ point is that economists aren’t measuring wealth optimization correctly. By simply looking at the average expected return, they are ignoring the fact that in an example like the one he lays out, the expected positive return is generated by an extremely thin slice of the population, while the rest of the population loses their money. Perhaps the wealth of the full population is maximized that way, but certainly not the wealth of an individual, as the vast majority of the population will lose their $100. So, why would we expect an individual to make a decision to maximize the wealth of a larger group at their own expense?

One of the arguments here is why wouldn’t you play the game once, since it has a positive expected value? Ok fine. We played once. Why would you not play a 2nd time? The bet still has a positive expected value. Then why not a 3rd time? We’re talking about independent events. If you commit to play the first time you should be willing to play forever, since the wager doesn’t change. And yet if you do, you go broke. The only answer is not to play.

Tbw:

You write, “If you commit to play the first time you should be willing to play forever, since the wager doesn’t change.” That’s not true. The wager does change. The amount of the bet changes. The first time, you’re betting $100. The second time, you’re betting $150 or $60. That’s not much different, but for the umpteenth bet, you might be betting $1,000,000. That can make a difference.

There’s also the time component. At some point you want to spend the money. If you play forever, you never get to spend the money, which makes the economic analysis moot.

> economists aren’t measuring wealth optimization correctly. By simply looking at the average expected return

Not the expected return, the expected utility of the return. Peters’s proposal for this situation is completely equivalent to a logarithmic utility of wealth, which is an example of economists’ expected utility framework.

> Perhaps my use of the term ruin is confusing, I don’t mean that you lose all of your wealth necessarily, just your bankroll. Obviously, what people consider their bankroll will vary, but the point is that if you lose your bankroll you are ruined in the sense that you can’t play anymore.

In that case I really don’t get what your line of reasoning is.

> People seem to be hung up on the $ amount of the bet, and that the $100 isn’t significant to them or to many people, but that seems irrelevant to me.

How large or small $100 is relative to your total wealth is what determines if, in order to maximize the expected geometric growth rate of your wealth, it’s better to bet or not to do it.

You said that there is an optimal bet size. It will be x% of your wealth.

Let’s say you are given an option to play that game once with a $100 bet.

a) If x% of your wealth is $100 then you want to play. $100 is exactly the optimal amount to play.

b) If x% of your wealth is more than $100 you also want to play. You’d like to bet more but $100 is better than nothing.

c) If x% of your wealth is between $50 and $100, the $100 bet is again suboptimal. And, in this case, riskier. You would have preferred to bet a bit less. But to maximize growth you still prefer to play.

d) If x% of your wealth is below $50, the $100 bet is definitely too high. You pass.
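For this particular gamble, the growth-optimal fraction x can be computed by maximizing the expected log growth 0.5*log(1 + 0.5f) + 0.5*log(1 - 0.4f); setting the derivative to zero gives f = 1/4 exactly. A numerical sketch:

```python
from math import log

def growth(f):
    """Expected log growth per round when betting fraction f of wealth
    on the 50/50 'win 50% / lose 40%' flip."""
    return 0.5 * log(1 + 0.5 * f) + 0.5 * log(1 - 0.4 * f)

# Grid search; the exact optimum, from setting the derivative
# 0.25/(1 + 0.5f) - 0.2/(1 - 0.4f) to zero, is f = 1/4.
best = max((i / 1000 for i in range(1001)), key=growth)
print(best)                        # 0.25
print(growth(0.25) > growth(0.0))  # betting a quarter of wealth beats sitting out
print(growth(1.0) < 0)             # betting everything has negative growth
```

So under this criterion, the $100 bet in the example is exactly optimal for someone whose total wealth is $400, matching case (a) above.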

> Any bet where you bet your entire bankroll is a bad bet unless your probability of winning is 100%. If your probability of winning is not 100%, then the optimal bet size will be less than 100% of your bankroll, so betting 100% of your bankroll means ruin eventually.

That’s not correct. Imagine you have 99.9% probability of a 1000% gain, 0.1% probability of a 1% loss. The optimal bet size would be close to 100%. Betting 100% of your bankroll would be suboptimal but would not “mean ruin eventually”. That happens when you bet twice the optimal bet size.

To find the geometric mean, multiply 1.5 by 0.6 and take the square root. When is that appropriate? When you are interested in the expected return on investment. And that is how the question is presented:

“Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total.”

You don’t put up $100 every time you bet, you put up your bankroll. If every time you bet you win $50 on heads and lose $40 on tails, it’s great. If every time you bet you win 50% of your bankroll on heads and lose 40% of your bankroll on tails, it’s lousy.

Min:

What is a “bankroll”? I don’t think this term is part of utility theory. If my bankroll is $100, then I might win a bunch of money or I might lose most of $100. That doesn’t seem like such a bad tradeoff at all!

Carlos, the time average growth rate for a trajectory taking a long series of this bet is around 0.95 IIRC. This contrasts with the ensemble expectation which is 1.05. So if you do not have access to Many Worlds copies of yourself, according to Peters, this is a bad bet. His thesis is that in any non-ergodicity growth situation, you always go with the time average, which describes what happens to a singular ‘typical’ trajectory.

What I was asking Andrew is how he thought about evaluating the middle ground – where you are taking more than one bet in sequence quite possibly, but not so many that the convergence to time average is plausible. I think you just have to try and solve explicitly for the distro and go from there…
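The two averages being contrasted here can be computed directly: the per-round ensemble average is (1.5 + 0.6)/2 = 1.05, while the per-round time-average growth factor is sqrt(1.5 * 0.6) ≈ 0.949, the “around 0.95” mentioned above. A sketch with one long simulated trajectory (trajectory length and seed are arbitrary choices):

```python
import math
import random

random.seed(1)

ensemble_avg = (1.5 + 0.6) / 2     # 1.05 per round
time_avg = math.sqrt(1.5 * 0.6)    # ≈ 0.9487 per round

# One long simulated trajectory: the realized per-round growth factor
# tracks the time average, not the ensemble average.
n = 100_000
log_total = sum(math.log(random.choice((1.5, 0.6))) for _ in range(n))
realized = math.exp(log_total / n)
print(round(ensemble_avg, 4), round(time_avg, 4), round(realized, 4))
```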

> So if you do not have access to Many Worlds copies of yourself, according to Peters, this is a bad bet.

What about the single bet case? Is the “50% gain / 40% loss” coin flip a bad bet, according to Peters, because I do not have access to Many Worlds copies of myself?

> His thesis is that in any non-ergodicity growth situation, you always go with the time average, which describes what happens to a singular ‘typical’ trajectory.

What’s the definition of a non-ergodicity growth situation? As opposed to what? Unfortunately the Nature Physics paper doesn’t explain much.

Growth rate optimization has been discussed, and used, for decades before anybody thought of calling it ‘ergodicity economics’. Maybe he could have included some references in the paper. He mentions just that it’s “well known among gamblers as Kelly’s criterion” but it has been also proposed by statisticians like Breiman or economists like Markowitz.

I think Markowitz advocates for it more convincingly in his 1976 article “Investment for the long run: New evidence for an old rule”. His argument doesn’t require all the periods to be identical or an infinite number of them. He discusses how to define asymptotic optimality. If the game never ends the final wealth is undefined.

http://finance.martinsewell.com/money-management/Markowitz1976.pdf

It would be better to have Ole speak for himself, or failing that, check out the lecture notes on Ergodicity Economics website.

I don’t have a horse in this race (hah!) – I just found working through his arguments clarifying on both ergodicity and probability. In the end, I don’t entirely agree with what they’re up to.

The impression I have in answer to your question is that Ole thinks the only meaningful application of probability is to evaluate a series or sequence of events/bets, not one-off events/bets. So he would take the bet as given, embed it in a long series, note the time average growth rate < 1, and say "don't take the bet!"

I don't entirely agree. But I don't entirely disagree either!

I brought up the case of the lottery with expected payout ~$3 billion to Andrew above to illustrate this with an extreme case. The expected value of the ticket was positive. And yet! And yet… it is a surefire donation to the lottery system, no? If there is no large ensemble of versions of myself buying an inordinate number of tickets for that ‘one-off’ event, do I care about the positive ensemble expectation? Why would I?

How many lottery tickets should I buy if they cost $1 each (inclusive of opportunity cost let's say), and the odds are 1:2.8 billion?

The optimal bet is so small for something like Powerball that you need to have a bankroll of hundreds of millions to make buying even 1 ticket optimal.

> So he would take the bet as given, embed it in a long series

A “long series” wouldn’t cut it. It should be an _infinite_ series. A finite sequence of events/bets can be expressed as a one-off event/bet.

Yes, well you get the ‘time average’ by taking limits T -> Inf. However, I think you could invoke a convergence argument here. If the system is non-ergodic (T_avg != E_avg), you arguably prefer one over the other even with a finite sequence. You will have to account for stopping rules, etc.

But this segues to the question of evaluating uncertainty *in the time dynamical formulation that they want*. I asked Ole about this on Twitter, and never got a satisfying response (IMO). Temporal frequentism gets in the way at this point :)

> I think you could invoke a convergence argument here

https://i2.wp.com/www.mindcharity.co.uk/wp-content/uploads/2017/03/cartoon-science-communication.gif

;-)

Haha, fair enough!

Here’s what it boils down to, AFAICT. Ole Peters argues that the expectation value of a non-ergodic observable (say, the amount of wealth per se, in this case undergoing some kind of multiplicative dynamic) should *NOT* be used, but that the multiplicative growth rate (in this case 0.95) *IS* an ergodic observable, and so its expectation value is meaningful outside of a multiverse of oneself.

The exponential growth rate in the coin-flipping gamble is ~0.95 < 1, so on this view the gamble is not a good bet to take, either once or any number of times.

Why not even once? Well, I think the idea is that you are evaluating a gambling *strategy*, and if you consistently took bets of this sort (where the ensemble expectation was positive, but the time-average growth rate was not) your wealth over time would entrain on a downward trajectory.

If the gamble were merely additive, things would be different.

Note that if you are the *house* offering this gamble to an ensemble of players, you DO care about the ensemble expectation. It’s a bad bet for you to offer as the house, since the expected value of the return *across the ensemble of gamblers* is positive.

So, this is a clever example of a gamble that is bad both for any given individual to take, and for the house to give!

I haven’t worked out the math, but the relevant convergence result I am intuiting occurs as the fraction of the total wealth to the wealth of the wealthiest gambler in the ensemble goes to some large fraction which asymptotically approaches 1:

sum(wealth)/max(ind_wealth) -> f, and for any given value of f at some time t, f_t, f_t < f_t+dt.

gah, max(ind_wealth)/sum(wealth) rather :) i.e. bounded above by 1

If ruin is possible, in the limit all wealth remains held by a single individual, and the probability of being that individual is infinitesimal.
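That concentration intuition can be checked by simulation: with an ensemble of players each letting the whole total ride, the richest player’s share of total wealth creeps upward. A sketch (the ensemble size, round counts, and seed are arbitrary choices):

```python
import random

random.seed(7)

players = 10_000
wealth = [100.0] * players  # everyone starts the coin-flip game with $100
top_share = {}

for round_no in range(1, 201):
    # Each player lets the whole total ride on an independent flip.
    wealth = [w * random.choice((1.5, 0.6)) for w in wealth]
    if round_no in (10, 50, 100, 200):
        top_share[round_no] = max(wealth) / sum(wealth)
        print(f"round {round_no:3d}: richest player holds "
              f"{top_share[round_no]:.2%} of all wealth")
```

The printed shares grow with the number of rounds: the ensemble’s positive expectation ends up concentrated in a handful of lucky trajectories.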

Carlos, thanks for the reference. :)

Hey Andrew Gelman,

Disclaimer: I might be biased towards Ergodicity Economics (EE), of course, because I understand it in some depth, but I am open to learning from such discussions.

It surprises me that someone like you seems to judge the whole research programme of EE only from a journalistic article about it.

Your argument about the difference between what $100 means to a pauper and to a rich man was already Bernoulli’s motivation for introducing a utility function. Isn’t it completely endogenised as soon as you look at wealth changes and the wealth dynamic, as is done in EE, and not only at lottery payouts?

Sure, following the time perspective and calculating a time average makes an implicit assumption that the DM will encounter similar situations throughout his life. Here comes the central limit theorem, and ergodicity with it, through the backdoor of the T-to-infinity limit. But remember, the expectation value also relies on a limit, namely the N-to-infinity limit, which comes with other implicit assumptions that can turn out unrealistic as well. In the end, decision theorists are merely looking for a good enough (null) model. There are many reasons why the expectation-value-based theories might not be good models, and vice versa.

You write:

“And the above analysis is relying entirely on the value of $100, without ever specifying the scenario in which the bet is applied.”

Actually, EE specifies exactly the scenario or environment to which a bet is subject. The environment is simply the DM’s wealth level and the relevant wealth dynamics. What else is there to specify?

From only skim-reading sec. 5 of the article you’ve referenced, I cannot see how the introduction of another (possibly psychologically loaded) effect called “fear of uncertainty” is any different from the research programme in decision theory since Bernoulli, and more so since Kahneman & Tversky, of introducing ever new psychological effects.

I would be delighted to hear your answers and welcome you to discuss these issues in more depth from Jan 18-20 at the Ergodicity Economics Online Conference: http://lml.org.uk/ee2021/.

Best

Mark

If I were back in graduate school (so long ago that I’m not sure if ergodicity applies or not), I might be intensely interested in these developments. Even today, I might look into them a bit more. But after many years of being an economist, my gut reaction is that the relevant applications of economics are not likely to be decided by mathematical formulations such as these. In the end, almost all policy choices depend crucially on questionable assumptions such as interpersonal and intergenerational comparisons of utility – often using changes in wealth as proxies for changes in utility. I doubt that replacing expected utility with a physical representation (of what, I’m not sure) will somehow resolve any of these issues.

Early in my career I did some work involving time discounting and relevant exhaustible resource extraction policies under uncertainty. My “contribution” was to show that under uncertainty, the appropriate optimal policy might be to slow our rate of exhaustible resource depletion while markets would generally accelerate it. I did this within a framework of maximizing expected discounted social welfare. Once, when presenting this work to an esteemed economist, they kept asking a single question: did I believe it was appropriate to discount social welfare over time. My answer – that even with discounting, I got my result – was unsatisfactory at the time, and is even more so on reflection.

I use this example to suggest that the issues involved with economics and its applications are more likely matters involving philosophy, sociology, politics, and psychology than matters of physics and economics. I invite others to show me that I am wrong – I sincerely might be. But my gut reaction tells me that my time is better spent elsewhere than doing a deep dive into ergodicity. Peter Dorman – are you out there?

Hi Dale, I am not an economist so I can’t provide a great answer to your question. I can say that studying Ole Peters’s work was tremendously clarifying on some foundational concepts for me. However, I’m inclined to agree: every time I encounter economic work (including an advanced natural resource economics grad course I did), I come away thinking “gee, that’s a lot of mathematical formalism, but almost all the interesting questions here are philosophical and political.”

I agree that the agenda of EE is unlikely to change that, although I think if they can show – empirically – that a lot of heuristic behavior that is “irrational” under standard utility optimization has a simpler rationale in maximizing growth rates over time, that is very valuable and thought-provoking…

In fact, I never found utility theory necessary for most of economics – at least the parts I found useful. For the behavior of markets, importance of market structures and information, and other applied areas, utility theory is simply not necessary. It is used on the normative side of economics – and that is the side that finally convinced me to stop teaching economics. It is vitally important to analyze policies, and normative theories are certainly useful there – but the economic basis has always seemed quite narrow and limited to me. Mathematical formalism does not guarantee formation of good policy, nor is it necessary for deciding when/how/if to rely on market mechanisms or use policy to intervene in them. Yet, much economic theory is based upon that – that “free” markets maximize social welfare. That result, while mathematically pure (in that it can be derived from a set of assumptions), is far from the only way to make policy choices.

On the positive (as opposed to normative) side, then any theory that helps explain how people actually behave is useful. As you suggest, if EE does a better job of this, then it could be important. However, it seems to me that there are many reasons, outside of utility maximization, that might explain otherwise anomalous behaviors. I’m not convinced that there is a simple physical basis that works better than the many attempts to modify the expected utility framework to account for these.

[it seems that there was a problem with the first submission]

Chris, James, thanks for your comments. I decided to go to the source and I read this: https://twitter.com/ole_b_peters/status/1293240720858505224

“Utility is not ergodic. That’s upsetting because it largely invalidates expected-utility theory. But it’s the way to a mathematically sound economic theory. The mathematics is not hard, though unfamiliar if you’ve studied economics. Read this 455-word note. Judge for yourself.”

This is my 695-word “summary”:

Utility is a function of wealth. According to expected utility theory, when individuals make financial decisions in the face of uncertainty they choose the course of action that maximizes their expected utility. For example, should you sell your house and buy bitcoins? Or imagine that you have to pay $50k to a loan shark at midnight and you have just the $3k you got by selling your car at the pawn shop in front of the casino. How should you bet that money to maximize the probability of staying alive? Your utility function would be 1 if you have more than $50k at midnight, 0 otherwise.
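As an aside, there is a tidy way to think about that casino scenario if we assume, purely for illustration (the comment leaves the actual games unspecified), that the casino offers fair even-money bets. For a fair game, a classical result of gambling theory (Dubins and Savage) says that any betting strategy reaches the $50k target from $3k with probability $3k/$50k = 0.06. A minimal Python sketch of “bold play” (bet just enough to reach the target, capped by the current bankroll) illustrates this:

```python
import random

def bold_play(bankroll, target, rng):
    """Play fair even-money coin flips, betting just enough to reach
    the target (capped by the bankroll), until ruin or success."""
    while 0 < bankroll < target:
        stake = min(bankroll, target - bankroll)
        if rng.random() < 0.5:   # win: even-money payoff
            bankroll += stake
        else:                    # loss: the stake is gone
            bankroll -= stake
    return bankroll >= target

rng = random.Random(0)
trials = 200_000
wins = sum(bold_play(3_000, 50_000, rng) for _ in range(trials))
print(wins / trials)  # close to 3/50 = 0.06 for a fair game
```

With a sub-fair house edge the success probability only drops (and bold play becomes the optimal strategy), but none of that changes the utility-function framing above.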

But is it wise to take life-or-death decisions using utility, which is not ergodic?

We were talking above about choosing a course of action by comparing the expected utilities under the different alternatives. Utility is ergodic if the expected utility, averaging over the potential outcomes of that one-off event, equals the time average of utility as time goes to infinity.

There was no concept of time there, just the wealth at the end of the event. We could instead think about the evolution of wealth over time and calculate the average of utility as the length of the period goes to infinity. On the other hand, utility is a monotonic function of wealth, and we don’t expect wealth to be ergodic, at least in the interesting cases where it changes in a meaningful way.

As it’s kind of trivial that utility is not ergodic, we will in fact be looking at the change in utility instead. After all, maximizing expected utility at midnight is equivalent to maximizing the difference between expected utility at midnight and any arbitrary baseline, like utility when we get the casino chips.

Still, we have a problem: we’re looking at a “single-period” change in utility, but we need an infinity of periods to average over time. So we need to construct a stochastic process for wealth, preferably in a way that ensures we can transform the wealth process to get a stationary series.

For example, let’s say that the wealth process is additive, with innovations identically distributed in every period: in each period wealth increases by 1 unit or stays unchanged, with equal probability. Then we can just take utility to be equal to wealth, and after differencing we have a stationary process. In every period the difference is +1 or 0 with equal probability. Averaging over an infinite number of periods we get, unsurprisingly, the average of +1 and 0, which is 0.5. The change in utility is ergodic. The change in wealth is also ergodic in this case.

But we could also assume that the wealth process is multiplicative, with innovations identically distributed in every period: in each period the previous wealth is multiplied by 2 or stays the same, with equal probability. We have to be a bit smarter in this case, but if we take logarithms before differencing we again get a stationary process. If we define utility as the logarithm of wealth, the change in utility is ergodic. In every period the difference is log(2) or log(1), and the average over time, or over the probability distribution at any time, is log(2)/2. The change in utility is ergodic again.
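Both ergodicity claims are easy to check numerically. A minimal Python sketch, using the two toy processes just described: because the increments are i.i.d., the long-run time average along one path should match the one-period ensemble expectation, 0.5 for the additive differences and log(2)/2 for the multiplicative log-differences.

```python
import math
import random

rng = random.Random(0)
T = 100_000  # number of periods along one long path

# Additive process: wealth rises by 1 or stays put, with equal probability.
add_incr = [rng.choice((1, 0)) for _ in range(T)]
time_avg_add = sum(add_incr) / T  # time average of the differences

# Multiplicative process: wealth doubles or stays put, with equal probability.
log_incr = [math.log(rng.choice((2.0, 1.0))) for _ in range(T)]
time_avg_mult = sum(log_incr) / T  # time average of the log-differences

print(time_avg_add)   # close to (1 + 0) / 2 = 0.5
print(time_avg_mult)  # close to log(2) / 2, about 0.347
```

By the law of large numbers the time averages converge to the ensemble expectations, which is exactly the ergodicity property being claimed for each choice of utility.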

For slightly more general definitions of the wealth process, like additive or multiplicative innovations with time-dependent parameters, there may be no solution. A combination of both, even with constant parameters, also seems problematic. Say in each period our wealth increases by 10%, because we get consistent returns from our investments, plus the $100k we save from our never-increasing salary that we will be receiving until the end of time.

Anyway, the two examples above are enough to show that utility – well, the change in utility – is not ergodic. Well, actually in those two particular examples it is ergodic if we define utility appropriately. But there is no single definition of utility that makes the change in utility stationary under both of those arbitrary stochastic wealth processes.
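To see concretely why a single definition fails, here is a sketch (my own illustration, not from Peters’s note) applying log utility to the additive process: the log-utility increments shrink as wealth grows, so they are not stationary, and a late-window average looks nothing like an early-window one. The mirror case, linear utility on the multiplicative process, fails analogously, with increments that grow instead of shrink.

```python
import math
import random

rng = random.Random(0)

# Additive wealth path: +1 or +0 each period, starting from 100.
w, path = 100.0, []
for _ in range(100_000):
    w += rng.choice((1, 0))
    path.append(w)

# Log utility on the additive process: the increments
# log(w_t) - log(w_{t-1}) are roughly 0.5 / w_t on average,
# so they decay as wealth grows -- not a stationary series.
logs = [math.log(x) for x in path]
early = sum(logs[t] - logs[t - 1] for t in range(1, 1001)) / 1000
late = sum(logs[t] - logs[t - 1] for t in range(99_000, 100_000)) / 1000
print(early, late)  # the early mean increment is far larger than the late one
```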

That largely invalidates expected-utility theory, somehow.

> The filter is not letting this go through. I doubt it’s clever enough to use a quality threshold so I guess it’s due to its length.

It seems it was actually related to the examples of what you could buy with the proceeds of the sale of your house, or where you could go to try to multiply your money.

I’m not sure about the people who are saying that the game described is a good bet that should be taken. It is not a good bet at all.

If you invest all your money in such a bet you will lose over time. It is simple: you have equal odds of getting a return of 1.5 or a return of 0.6. Let’s see what the cumulative return is after a few bets: cumulative return = 1 * 1.5 * 0.6 * 0.6 * 1.5 … and so on. Given the equal odds, you basically end up on average with a P&L of 1.5 * 0.6 = 0.9, so a net loss of 10% every 2 trades. Not sure why this is not clear. The only way to save such a game is not to invest everything but only what the Kelly criterion suggests is the optimal size of the bet, in this case 25% of the entire capital at each toss of the coin. In that case you do have steady growth of your investment.
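To spell out the arithmetic (a sketch under the stated 50%-gain/40%-loss rules): betting a fraction f of the bankroll each flip gives an expected log-growth of 0.5·log(1 + 0.5f) + 0.5·log(1 - 0.4f) per flip, which is negative at f = 1 and is maximized at the Kelly fraction f = 0.25.

```python
import math

# Per-flip expected log-growth of wealth when betting a fraction f of
# the bankroll on the 50%-gain / 40%-loss coin flip.
def log_growth(f):
    return 0.5 * math.log(1 + 0.5 * f) + 0.5 * math.log(1 - 0.4 * f)

print(log_growth(1.0))   # betting everything: about -0.053 (you lose over time)
print(log_growth(0.25))  # Kelly fraction:     about +0.0062 (steady growth)

# The Kelly fraction maximizes the growth rate; a coarse grid search finds it.
best = max((f / 100 for f in range(0, 101)), key=log_growth)
print(best)  # 0.25
```

Note that log_growth(1.0) equals 0.5·log(0.9), which is exactly the “net loss of 10% every 2 trades” in the comment above.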

Giovanni:

Whether it’s a good bet depends on several things, including how many times you plan to play, what you will do with the money if you win, and what happens to you if you lose. The problem is not well formulated if these things are not specified. Not specifying these crucial details makes it a great classroom discussion example. Your mistake is in thinking there is a single correct answer that could be specified irrespective of the details. To think there’s a correct answer here is like thinking there’s a correct answer to the question, “Should you buy car X for $10,000?”

> If you invest all your money

The formulation discussed doesn’t say “all your money”. It’s not even clear what “all your money” means! The question is about betting (or not) your initial bankroll of $100, not the whole of your fortune.

> Given the equal odds basically you end up with a PL in average of 1.5*0.6=0.9, so a net loss of 10 % every 2 trades.

That depends on what you understand by “ending up on average”. Given the equal odds, after two rounds there are four equally likely outcomes: { $225, $90, $90, $36 }. One can also say that on average after two bets you end up with $110.25, which is higher than $100.
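Enumerating the four equally likely two-round paths confirms both readings at once: the $110.25 ensemble average, and the fact that three of the four paths end at $90 or below. A small sketch, using exact fractions to avoid rounding:

```python
from fractions import Fraction
from itertools import product

# +50% and -40% per-flip returns, as exact rationals.
up, down = Fraction(3, 2), Fraction(3, 5)
outcomes = sorted(100 * a * b for a, b in product((up, down), repeat=2))
print([int(x) for x in outcomes])            # [36, 90, 90, 225]
print(float(sum(outcomes) / len(outcomes)))  # ensemble average: 110.25
```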

You may prefer $100 for sure to playing one round or two (or n) rounds. But you should understand that other people may like the odds. As Andrew said, *that* problem is not well-posed and there is no single correct answer.