Newcomb’s paradox solved using statistical reasoning

Newcomb’s paradox is considered to be a big deal, but it’s actually straightforward from a statistical perspective. The paradox goes as follows: you are shown two boxes, A and B. Box A contains either $1 million or $0, and Box B contains $1000. You are given the following options: (1) take the money (if any) that’s in Box A, or (2) take all the money (if any) that’s in Box A, plus the $1000 in Box B. Nothing can happen to the boxes between the time that you make the decision and when you open them and take the money, so it’s pretty clear that the right choice is to take both boxes. (Well, assuming that an extra $1000 will always make you happier…)

The hitch is that, ahead of time, somebody decided whether to put $1 million or $0 into Box A, and that Somebody did so in a crafty way, putting in $1 million if he or she thought you would pick Box A only, and $0 if he or she thought you would pick Boxes A and B. Let’s suppose that this Somebody is a reasonably accurate forecaster of which option you would choose. In that case, it’s easy to calculate that the expected gain of people who pick only Box A is greater than the expected gain of people who pick both A and B. (For example, if Somebody gets it right 70% of the time, for either category of person, then the expected monetary value for the “believers” who pick only Box A is 0.7*($1,000,000) + 0.3*$0 = $700,000, and the expected monetary value for the “greedy people” who pick both A and B is 0.7*$1000 + 0.3*$1,001,000 = $301,000.) So the A-pickers do better, on average, than the A-and-B-pickers.
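
Here is that arithmetic as a quick sketch in Python (the 70% accuracy and the dollar amounts are just the illustrative numbers from above):

    # Expected payoffs under a predictor who is right 70% of the time,
    # using the dollar amounts from the example above.
    p_correct = 0.70
    big_prize, small_prize = 1_000_000, 1_000

    # "Believers" (take only Box A): predictor right -> $1,000,000; wrong -> $0.
    ev_one_box = p_correct * big_prize + (1 - p_correct) * 0

    # "Greedy people" (take A and B): predictor right -> $1,000; wrong -> $1,001,000.
    ev_two_box = p_correct * small_prize + (1 - p_correct) * (big_prize + small_prize)

    print(ev_one_box)  # 700000.0
    print(ev_two_box)  # 301000.0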

The paradox

The paradox, as stated above, is that from the perspective of the particular decision it’s better to pick A and B, but from the perspective of expected monetary value it appears better to pick just A.

Resolution of the paradox

It’s better to pick A and B. The people who pick A do better than the people who pick A and B, but that doesn’t mean it’s better for you to pick A. This can be explained in a number of statistical frameworks:

– Ecological correlation: the above expected monetary value calculation compares the population of A-pickers with the population of A-and-B-pickers. It does not compare what would happen to an individual. Here’s an analogy: one year, I looked at the correlation between students’ midterm exam scores and the number of pages in their exam solutions. There was a negative correlation: the students who wrote the exams in 2 pages did the best, the students who needed 3 pages did a little worse, and so forth. But for any given student, writing more pages could only help. Writing fewer pages would give them an attribute of the good students, but it wouldn’t actually help their grades.

– Random variables: let X be the indicator of whether the Somebody predicts you to be an A-picker, and let Y be the decision you actually make. In the population, there is a positive correlation between X and Y. But X is determined before Y. Changing Y won’t change X, any more than painting dots on your face will give you chicken pox. Yes, it would be great to be identified as an A-picker, but picking A won’t change your status on this.
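
To make both of these points concrete, here is a small simulation sketch in Python (the 70% accuracy is again just the illustrative figure from above): comparing the two populations reproduces the ecological comparison, while holding the prediction X fixed shows that switching your own choice Y only ever adds the $1000 from Box B.

    import random

    random.seed(1)
    p_correct = 0.70  # illustrative accuracy, as in the example above

    def payoff(predicted_one_box, take_both):
        # Box A holds $1,000,000 iff the Somebody predicted one-boxing; Box B always holds $1,000.
        box_a = 1_000_000 if predicted_one_box else 0
        return box_a + (1_000 if take_both else 0)

    def average_payoff(choose_one_box, n=100_000):
        total = 0
        for _ in range(n):
            # X: the prediction, made before the choice, correct with probability p_correct
            predicted_one_box = choose_one_box if random.random() < p_correct else not choose_one_box
            total += payoff(predicted_one_box, take_both=not choose_one_box)
        return total / n

    print(average_payoff(choose_one_box=True))   # about 700,000: the A-pickers' average
    print(average_payoff(choose_one_box=False))  # about 301,000: the A-and-B-pickers' average

    # But once X is fixed, two-boxing is better by exactly $1,000 either way:
    for predicted_one_box in (True, False):
        print(payoff(predicted_one_box, take_both=True) - payoff(predicted_one_box, take_both=False))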

One more thing

Some people have claimed to “resolve” Newcomb’s paradox by saying that this accurate-forecasting Somebody can’t exist; the Somebody is identified with God, time travel, reverse causation, or whatever. But from a statistical point of view, it shouldn’t be hard at all to come up with a reasonably accurate forecast. Just do a little survey: ask people some background questions (age, sex, education, occupation, etc.), then ask them whether they’d pick A or A-and-B in this setting. Even a small survey should allow you to fit a regression model that would predict the choice pretty well. Of course, you don’t really know what people would do when presented with an actual million dollars, but I think you’d be able to forecast with accuracy quite a bit better than 50%, just based on some readily available predictors.
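
As a rough sketch of what I mean (the survey data below are simulated stand-ins, not real responses, and the predictors and coefficients are made up purely for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500

    # Made-up background predictors for n hypothetical survey respondents
    age = rng.uniform(18, 80, n)
    years_education = rng.uniform(8, 20, n)

    # Made-up relationship between background and the stated A vs. A-and-B choice,
    # included only so the example has some signal in it
    logit = 0.03 * (age - 40) - 0.15 * (years_education - 14)
    picks_a_only = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Cross-validated accuracy of a simple logistic-regression "Somebody"
    X = np.column_stack([age, years_education])
    print(cross_val_score(LogisticRegression(), X, picks_a_only, cv=5).mean())

With these made-up data the cross-validated accuracy should come out well above 50%, which is all the expected-value argument above needs to get going.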

12 thoughts on “Newcomb’s paradox solved using statistical reasoning”

  1. The way you've put it, with somebody making a *fairly* good prediction, doesn't work, I don't think. I think it's usually formulated as having a *perfect* prediction.

    So then the argument is you should choose A, because if there's not $1m in box A then you can complain that according to the rules you should have got $1m.

  2. TomC,

    If the prediction is perfect, i.e., X=Y in my notation, then it is meaningless to say you "should" choose A because X has already been determined, thus Y is implicitly determined also.

    The way I've sometimes seen it stated is, at first, that it's a perfect prediction, but then later on, when the expected value argument is presented, all that is required is that it be a predictor with a probability a bit more than 50% of being correct.

    The problem of a "perfect predictor of your decision" followed by the decision itself is no paradox at all, merely a contradiction, along the lines of, "Which wins, the irresistible force or the immovable object?"

    To put it yet another way, suppose that the Somebody's predictions have already been made, and they are supposed to be 100% accurate, and you persuade a bunch of people to play the game and flip coins to determine which action to take. Then by the rules of the problem, the Somebody is predicting the coin flips. This could be interesting but has nothing to do with decision making.

  3. In the usual formulation of Newcomb's paradox, the predictor is very, very good, but is not thought to be absolutely perfect. I think almost everyone agrees that if the predictor is good but nowhere near perfect, you should take both boxes, as Andrew argues. The problem is with the near-perfect predictor, for which it seems a bit strange that supposedly rational behaviour leads to such poverty.

    The actual resolution of the paradox (which I thought of many years ago, and even submitted to a philosophy journal, though the referee appeared not to understand it) comes from the fact that the only conceivable way that an almost-perfect predictor can be almost perfect is by simulating your brain to a high degree of fidelity, so that they can predict the results of even complex and subtle reasoning processes.

    Now, unless you think that there is a "soul" that would be missing from such a simulation, this simulation of you will be just as much a person as the original you. In particular, when deciding whether or not to take two boxes, you DON'T KNOW IF YOU ARE THE SIMULATION. Assuming you have sympathy for the original you even if you are the simulation, you should therefore take only box A, because it is a great help to the original you if you are in the simulation (since your action of taking only the simulated box A causes the predictor to put $1000000 in the real box A), while forgoing box B is only a small cost if you happen to be the original.

  4. Given that the money is already in the box by the time you pick, the expected value is always greater if you pick A and B, and it does not depend on how accurate the person is at predicting what you will do; therefore there is no paradox. Deciding to choose Box A does not increase your chances of getting a million dollars.

    What increases your chances at a million is having that crafty Somebody *think* you are going to choose Box A. And what they think about you is out of your control. Once that Somebody's decision has been made, choosing A and B is better 100% of the time.

  5. The crux of your solution comes in this quote: "Yes, it would be great to be identified as an A-picker, but picking A won't change your status on this." The paradox lies in this sentence: someone who could read your mind, even imperfectly, will be able to predict YOUR A-picker status. Think of yourself deciding in advance which option to pick. If you can convince yourself that you should be an A-picker, you WILL be an A-picker, and you WILL win the big bucks. That's the paradox — you can decide in advance what to pick, and the predictor can predict THAT response, not something merely correlated with your demographics.

  6. Anonymous,

    I agree completely. The problem with the expected value calculation, as for example shown in the 2nd paragraph of my entry above, is that it averages over all people who choose a certain option, rather than averaging over what might happen if you choose the option right now.

    Expected value (and probability itself) is pretty mysterious to people, hence they make this sort of mistake about what to condition on. This arises in lots of other puzzles too–for example the problem with the three cards (one is red on both sides, one's blue on both sides, one's red on one side and blue on the other; you take one card at random and see one side at random, it's blue, what's the probability the other side is blue also).
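
    (If the answer to that card puzzle isn't familiar, it's 2/3, not 1/2; a quick enumeration over the six equally likely card-and-side outcomes, conditioning on seeing blue, shows why:)

        from fractions import Fraction

        # The three cards, each listed as (side 1, side 2)
        cards = [("red", "red"), ("blue", "blue"), ("red", "blue")]

        # Pick a card at random and look at a random side: six equally likely outcomes
        outcomes = [(card, side) for card in cards for side in (0, 1)]
        blue_showing = [(card, side) for card, side in outcomes if card[side] == "blue"]
        both_blue = [(card, side) for card, side in blue_showing if card[1 - side] == "blue"]

        print(Fraction(len(both_blue), len(blue_showing)))  # 2/3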

    A pretty good description of Newcomb's paradox by Jim Holt is here, but he gets confused on the expected value question.

    Radford,

    Your analysis seems unnecessarily complicated to me. Realistically, I think a 70% accurate prediction is reasonable but 99% is harder to believe, but I don't really see the advantage of this sort of brain-in-a-vat reasoning. I mean, it's fine to think about it if you want, but I don't see it as necessary to understanding the paradox.

  7. "Your analysis seems unnecessarily complicated to me. Realistically, I think a 70% accurate prediction is reasonable but 99% is harder to believe, but I don't really see the advantage of this sort of brain-in-a-vat reasoning. I mean, it's fine to think about it if you want, but I don't see it as necessary to understanding the paradox."

    The standard problem formulation assumes a very good predictor. I would guess that very few people would advocate taking only Box A if the probability that the predictor is correct is only 70%. So if your argument is confined to such not-so-great predictors, you're refuting a position that practically nobody holds. If you think your argument holds even if the predictor is 99.9% accurate, then I maintain that you're wrong.

    The reason why the problem is more interesting with a 99.9% accurate predictor is that it makes it very unlikely that the reasoning process you're going through is one that the predictor isn't able to deal with. With a less accurate predictor – who bases their prediction on, say, gender, years of education, income, and religious affiliation – the argument may degenerate into the question, "what should I do if I'm not allowed to think too hard about the problem?" Extended debate on this question would, of course, be ridiculous.

  8. Radford,

    Yeah, I think we're in agreement on the substance. Like so many paradoxes, this one disappears if you think too hard about it. But my brief reading on the topic does lead me to disagree with your statement, "if your argument is confined to such not-so-great predictors [e.g., 70% accuracy], you're refuting a position that practically nobody holds." I think that people do try to (mistakenly) apply the expected value argument to the 70% case. Or, to put it another way, they're not sure the expected-value argument is wrong in this case, which adds enough cloudiness to their thinking to preserve the "paradoxical" nature of the problem.

  9. I essentially agree that the two-box solution is correct. That said, from a scientific/psychological point of view, it is just as interesting to consider what exactly makes this situation so puzzling. Maitzen and Wilson (see link below) have proposed an answer to this question that I think is deserving of a look. They argue that the "rational" decision is hard to define when it requires one to consider information that itself is contingent upon the definition of what is rational. That is, in NP you are asked to make your decision based on what you think your decision will be based on what you think your decision will be….etc. For those who have studied some game theory, this is just the assumption of "common knowledge" (except it involves just one person instead of two or more). Anyone who finds Newcomb's paradox puzzling should also bristle at the idea of common knowledge: I know that you know that I know that you know that I know that you know…….So maybe the answer to the scientific/psychological question is that "common knowledge" doesn't sit well with most people!

    http://ace.acadiau.ca/arts/phil/faculty_and_staff

  10. Most paradoxes boil down to a fault in your assumptions. The simple explanation is that once you are in a position to choose whether or not to open both boxes, the rational choice is to open both boxes, because you will always get equal or more money regardless of what the predictor has predicted. The fault lies in the mechanism of prediction. Is the predictor interviewing each chooser? Or is he taking a sample of people and using that to judge what a given chooser will decide? In the first case you personally are affecting the prediction and it is in your best interest to lie and then take both boxes. If lying is not an option then the predictor is not predicting but instead reacting to your decision. In the second case the predictor is predicting the behavior of rational people, so he will always choose to put $0 in box A.

  11. I'm a convinced two-boxer, but I'll try to put my argument without any bias. It seems to me the way this problem has been put has been an attempt to rig it for the one-boxers. When we talk about "precommitment" it is suggested the subject has advance knowledge of Omega and what is to happen. The way I thought the paradox worked was that Omega would scan/analyze a person and make its prediction, all before the person ever heard of the dilemma. Therefore, a person has no way to develop an intention of being a one-boxer or a two-boxer that in any way affects Omega's prediction. For the Irene/Rachel situation, there is no way to ever "precommit;" the subject never gets to play Omega's game again and Omega scans their brains before they ever heard of him. (So imagine you only had one shot at playing Omega's game, and Omega made its prediction before you ever came to this website or anywhere else and heard about Newcomb's paradox. Then that already decides what it puts in the boxes.)

    Secondly, I think a requirement of the problem is that your choice, at the time of actually taking the box(es), cannot affect what's in the box. What we have here are two completely different problems; if in any way Omega or your choice information can travel back in time to change the contents of the box, the choice is trivial. So yes, Omega may have chosen to discriminate against rational people and reward irrational ones; the point is, there is absolutely nothing we can do about it (neither in precommitment nor at the actual time to choose).

    To clarify why I think two-boxing is the right choice, I would propose a real-life experiment. Let's say we developed a survey which, by asking people various questions about logic or the paranormal etc…, we use to classify them into one-boxers or two-boxers. The crux of the setup is, all the volunteers we take have never heard of the Newcomb Paradox; we make up any reason we want for them to take the survey. THEN, having already placed money or no money in box A, we give them the story about Omega and let them make the choice. Hypothetically, our survey could be 100% accurate; even if not, it may be accurate enough that many of our predicted one-boxers will be glad to find their choice rewarded. In essence, they cannot "precommit" and their choice won't magically change the contents of the box (based on a human survey). They also cannot go back and convince themselves to cheat on our survey – it's impossible – and that is how Omega is supposed to operate. To ensure perfect prediction, the alien would have to be equivalent to a 100% accurate human survey – impossible to change the prediction from the future, or to try and cheat. The point is, from the experimental point of view, every single person would make more from taking both boxes, because at the time of choice there's always the extra $1000 in box B.

  12. The analogy of the exam grades seems inappropriate since anyone could play Newcomb's game repeatedly and uncover a correlation (for themselves alone) between one-boxing and getting a million; likewise between two-boxing and getting only a thousand. Nothing like this is true of your students in the case you describe; quite the opposite.
