General Question

SavoirFaire's avatar

Philosophical question: do you take one box or two (see details)?

Asked by SavoirFaire (28,831 points) March 23rd, 2015
73 responses
“Great Question” (3 points)

You are presented with two boxes. Box A is transparent and contains $1,000. Box B is opaque, so you do not know what—if anything—is inside. You can take either both Box A and Box B, or you can take just Box B.

The contents of Box B have been determined by someone, call him the Predictor, shortly before the beginning of the game. If he predicted that you would take both boxes (or that you would choose randomly), then nothing was put into Box B. If he predicted that you would only take Box B, however, then $1,000,000 was put into Box B.

Now here’s the twist: the game has been played hundreds of times, and the Predictor has never been wrong before. No explanation for his past predictions is available. All you know is that he has a perfect record of predicting what a player will choose to do. Earlier in the day, the Predictor made his prediction about you and the boxes were filled (or not filled) accordingly.

So here’s the question: how many boxes do you take and why?


Answers

Mariah's avatar

I take just box B. If he’s always right as his record implies, then he predicted that I would do this and I am now rich!

Response moderated (Unhelpful)
fluthernutter's avatar

Just Box B.

For the same reasons that @Mariah already stated. If he’s always right, whatever I choose to do was already the predicted choice.

Plus the gamble for $1,001,000 is more interesting than $1000.

CWOTUS's avatar

Are you sure this is presented accurately? Because if it is, who (aside from @talljasperman) would take anything other than “B” by itself (assuming they correctly understood the choices, that is)?

I would consider that “hundreds of times” doesn’t mean the Predictor will be right at all times, or on my draw, but if my choices are:
– “A certain $1000 only” (for box A alone) or
– “A certain $1000 plus nothing” (for picking A+B together) or
– “Likely $1,000,000” (for B alone, assuming that I am at least no less predictable in this regard than other choosers) – and I would be certain to mention that “This is not a random choice!”
then why would I do other than choose “B”?

fluthernutter's avatar

@CWOTUS I think most of my family would choose A+B.

dappled_leaves's avatar

I would take both boxes, for reasons I gave when this question was last posted. I find it quite irrational that anyone would choose only one box.

fluthernutter's avatar

@dappled_leaves Can you link the older version of this question? It’d be interesting to read some older responses too.

fluthernutter's avatar

Nevermind, I found it!
@dappled_leaves I’m putting a nice thick pad on the table in case you want to bang your head again. :P

hominid's avatar

Just box B (for this reason).

I still can’t see how this is in any way a paradox.

hominid's avatar

@SavoirFaire – Please, for the love of “god”, report back to us some analysis from an academic philosophical position. Is this a test to determine something else? For example, is the selection of “both boxes” dependent on a libertarian view of free will? Something else? And why is this referred to as a paradox?

dappled_leaves's avatar

@fluthernutter Sorry, if I’d had more time, I’d have hunted it down – thanks for doing the legwork!

Hypocrisy_Central's avatar

I take box ‘A’ because I know for certain what I am gaining and what is at stake. The other box might have more, or nothing, it might even have something to nullify box ‘A’, so choosing both boxes might still leave me with nothing.

LostInParadise's avatar

As I originally said, if this was nationally televised, I would take box B, because I don’t think anyone is giving me $1,000,000 and it would be worth $1,000 to prove this guy is a phony.

Response moderated (Unhelpful)
josie's avatar

There is no such thing as “The Predictor” so the problem as stated is irrelevant to the real world.
But I’ll play along.
Since The Predictor has never been wrong before, I will play the odds and take Box B. Chances are he’ll be right again, and I will have the million. Two winners in one game!
If he is wrong for the first time ever, I will only have lost the house’s money. So what.

dappled_leaves's avatar

This really is a question about faith. If you think you have the power to fulfill the prophecy, you act accordingly (open Box B). Otherwise, you would just do whatever the hell you want, knowing that the prediction is already made and the boxes contain whatever they contain (opening both boxes would give the greatest total amount).

LuckyGuy's avatar

Am I missing something? I’d take only box B. And the Predictor knows I’m taking it, so there would be $1M inside. Maybe… the “hundreds of times” is a clue. The chance that the Predictor has merely been lucky a hundred times in a row is 1 in 2^100, which is roughly 1 in 10^30. Hundreds of times in a row puts it far beyond even the trillions. I’d take those odds.
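For concreteness, a quick sketch of those odds (a minimal sketch that assumes each guess is an independent 50/50 coin flip, which the problem itself never states):

    # Probability that the Predictor gets n independent 50/50 guesses
    # right in a row purely by luck (the fair-coin model is an assumption).
    def lucky_streak_odds(n: int) -> float:
        return 0.5 ** n

    for n in (100, 200, 300):
        print(f"{n} correct guesses by luck alone: 1 in {1 / lucky_streak_odds(n):.2e}")

    # 100 straight guesses: about 1 in 1.3e30 -- already far past the trillions.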

gorillapaws's avatar

I change my answer so I have a 66% chance to not pick the goat…oh, wait, that’s the wrong question.

Box B only.

LostInParadise's avatar

It seems that there is a danger of getting into an infinite loop in trying to determine what to do. The Predictor is predicting what I am going to do and I am predicting what the Predictor did. The Predictor’s action is in the past and mine is in the future, but I am not so sure that makes much of a difference. Each of our actions is based on what the other is predicting.

I might reason that my first inclination is to take just box B. Then the Predictor will have put the million dollars in Box B. So I might as well take the other box also. But the Predictor will know that I thought that way, so I better take just box B. Then the Predictor will put the million dollars in the box. So I might as well take both boxes. And on and on.

hominid's avatar

This is why I think this hinges on the free will question in some way.

@LostInParadise: “I might reason that my first inclination is to take just box B. Then the Predictor will have put the million dollars in Box B. So I might as well take the other box also.”

Wait. The predictor (and your computer in the previous wording of this) is defined as having near 100% precision in its predictions. In that case, if you are swayed by some (confused, in my opinion) reasoning to choose both boxes, we can know – based on how the question is defined – that there is a near 100% chance that the predictor/computer knows this decision. You accept that here…

@LostInParadise: “But the Predictor will know that I thought that way, so I better take just box B. Then the Predictor will put the million dollars in the box.”

You are correct. That is why “Box B only” is the correct answer. But why did you then shift the game in this sentence?...

@LostInParadise: “So I might as well take both boxes. And on and on.”

You just explained why you shouldn’t take both boxes. So, I’m confused why you would introduce that here.

The original question demands that you choose “box B only”. I can’t see any way around this.

CWOTUS's avatar

Well, you pick “B” only if you want the $1,000,000. A modded-off response gave a perfectly valid reason why you may not want that, if you recognize that the million dollars would in some way screw up your life. Considering the number of lottery winners who end up worse off than before they “won” the lottery, that’s not an irrational response.

LostInParadise's avatar

@hominid, Both actions, mine and the Predictor’s, are one time events. If either of us knows what the other will do, then that person can make the optimum decision. The problem comes in that once the Predictor’s decision is known, I can now alter my decision.

Suppose that I myself am a superior predictor and can predict what the Predictor will do. Whichever prediction he made, I am at least as well off taking both boxes. Does it matter that in fact I am not a superior predictor? I should still take both boxes. But then the Predictor will have predicted this and not put in the million dollars. The self-referential nature of this problem prevents a clear resolution.

hominid's avatar

@LostInParadise: “The problem comes in that once the Predictor’s decision is known, I can now alter my decision.”

Then you are redefining the accuracy of the predictor. This is a thought experiment. In this experiment, you are defining an extremely accurate predictor (or computer, in your wording of it). To say the predictor is x% accurate is to say that x% of the time it will be correct. That doesn’t mean that x% of the time it will guess what you originally decided but then changed and changed back, etc. In fact, that inserts a mechanism that I can find in neither reality nor the thought experiment: the ability to change the future.

I am going to pick a color – blue or red. If there is a predictor or computer that we define as being 99% accurate in predicting my decision, it doesn’t matter how I interpret my decision making. I will choose red or blue, and by definition the predictor had a 99% chance of guessing that final decision.

The only way around this would be to propose that if I had thought of choosing red but instead chose blue, I would be messing with a universe in which red had been chosen (or spinning up another thread – a universe in which I have selected blue). I don’t see any reason to believe this. And it appears to add to the thought experiment in ways that are not defined in the experiment itself.

If I choose “red” (or “box B only”), it doesn’t matter whether I am convinced that I had “tricked” the predictor or myself or spawned new universes. What matters is that when an experiment is explicitly defined to say:

99% of the time, your final action will have been predicted

…you can’t redefine this to mean…

the predictor/computer is in no way accurate. It will just guess.

LostInParadise's avatar

You are right in saying this relates to free will. Free will is problematic. You can say that it is meaningless. How can you even design an experiment to test for it? On the other hand, there may be limits to the assumption of determinism. Suppose that there is a computer that can predict with complete accuracy everything that I will do for the next five minutes, and that it writes out a report of what it predicts. If it predicts that the next word that I will say is “how” then I can foil the prediction by saying anything else. Newcomb’s Paradox, which is what this question is describing, reveals the paradoxical nature of free will. My feeling is that there is a type of uncertainty principle regarding conscious decisions that is analogous to the Heisenberg uncertainty principle. Even though decisions may be deterministic, there are limits to how well they can be predicted due to the impact of the act of observation.

hominid's avatar

@LostInParadise: “Suppose that there is a computer that can predict with complete accuracy everything that I will do for the next five minutes, and that it writes out a report of what it predicts. If it predicts that the next word that I will say is “how” then I can foil the prediction by saying anything else.”

Right. But that wouldn’t be a well-designed thought experiment. It would define a predictor with very low accuracy. But the OP’s formulation (and your other formulation of the question) doesn’t allow for this. It keeps things clean by a) keeping the prediction a secret, and b) defining the accuracy. In this case, I can say with certainty that I am going to choose “box B only”, and I can be near 100% sure that the box contains $1,000,000.

But you’re right – this is probably because I am viewing this from a deterministic point of view.

fluthernutter's avatar

It’s a question of free will. If your decision has already been decided, are you actually making a decision at all?

As for the correct answer to the paradox, I don’t think there is a universal one. Everyone’s “correct” answer is uniquely correct for them. That’s what makes this interesting.

@dappled_leaves I thought it would be hard to find the older question. But I just followed @SavoirFaire‘s topics. :)

Hypocrisy_Central's avatar

No explanation for his past predictions is available.
Even if no explanations are made, are the results of the past predictions a matter of fact that all can see?

SavoirFaire's avatar

@CWOTUS Yes, I am sure this is presented accurately. But as it turns out, the answer to the question is highly controversial. That’s one of the reasons I asked. There are about as many one-boxers as there are two-boxers in the world, and they’re all quite sure they are correct. Yet it’s very difficult to get either side to understand the arguments in favor of the other, and nearly impossible to get anyone to switch sides.

As for whether the Predictor will be correct when your turn to play comes, who knows? That’s why it’s a game. We don’t know why the Predictor is so good at his job. But he is—or at least, he has been so far.

@dappled_leaves Oops! I didn’t realize a similar question had been asked so recently (though strictly speaking, @LostInParadise was asking for opinions about a particular solution). But yes, two-boxers and one-boxers never do see eye-to-eye about what to do.

@Hypocrisy_Central Ha! I admire your attempt to game the system and your willingness to just go for the sure thing, but you have to pick either Box B or both Box A and Box B. For the sake of the thought experiment, you can be assured that Box B contains either $1,000,000 or nothing. It won’t contain anything that takes the contents of Box A away from you.

As for your other question, the players are aware of the Predictor’s record even though they have no explanation for it. If we need an explanation of how they know, we could stipulate that the Predictor’s prediction is put up on a screen for all but the current player to see while the player makes their choice. So anyone watching knows what the Predictor has predicted for other players in the past (and he has always been right), but doesn’t know what has been predicted while they are playing the game.

@LostInParadise Looking at your old question, there was a paper written about four years ago advocating more or less the solution you offered there. I like the approach, but the standard objection to that sort of solution is that it’s changing the parameters of the problem. Anyone can add conditions to the game to make one choice clearly superior. For example: “I made a bet with Bill Gates that Box B would be empty. If it is, he has to pay me $500,000. If it isn’t, I have to pay him $500,000.” This guarantees that you will walk away with $501,000 so long as you take both boxes (whereas it only guarantees you $500,000 if you take Box B alone), so it is obvious that you should be a two-boxer if you have this sort of side bet going (and are primarily concerned with getting the most money). But it doesn’t tell us what we should do if we don’t know any rich gamblers.
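A quick enumeration of that side-bet arithmetic (a minimal sketch; the $500,000 wager and the box amounts are exactly those from the example above):

    # Payoffs for the Bill Gates side bet: you win $500,000 from the bet
    # if Box B turns out empty and pay $500,000 if it turns out full.
    BOX_A, BOX_B, BET = 1_000, 1_000_000, 500_000

    def payoff(take_both: bool, b_is_full: bool) -> int:
        boxes = (BOX_A if take_both else 0) + (BOX_B if b_is_full else 0)
        bet = -BET if b_is_full else BET
        return boxes + bet

    for take_both in (True, False):
        outcomes = {payoff(take_both, full) for full in (True, False)}
        print("both boxes" if take_both else "Box B only", "->", outcomes)

    # both boxes -> {501000}; Box B only -> {500000}: with the side bet,
    # two-boxing is strictly better no matter what, as the example says.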

@josie That there may be no such thing as the Predictor in the real world does not entail that the problem is irrelevant to the real world. If the thought experiment gets at an underlying principle that is applicable in life, then the impossibility of the scenario is immaterial. That’s how thought experiments work.

SavoirFaire's avatar

@hominid I agree that it is not a paradox. I always refer to it as “Newcomb’s Problem” (though I put “paradox” and “Newcomb’s Paradox” in the tags because the problem is commonly referred to as such). The reason it is sometimes referred to as a paradox is that there are two contradictory analyses that both seem perfectly logical. But the problem with this is: (a) no one actually thinks that both analyses are logical, and (b) a paradox is supposed to land us in trouble no matter which way we go (whereas the problem here is about which way to go, not where we land). One might argue that the word “paradox” comes from a Greek word meaning “strange” or “unexpected,” so there’s still a loose sense in which Newcomb’s problem is a paradox. But again, the fact that so few people see this as a particularly difficult problem—and that the philosophically interesting part about it is the fact that it generates a particularly intractable disagreement between one-boxers and two-boxers—suggests that the problem itself is not a paradox.

As for what the problem is supposed to be determining, that’s another interesting question. The thought experiment itself is ultimately a problem of game theory, but what we think our decision tracks is very much influenced by which side we are on (or perhaps which side we are on is influenced by what we think it tracks).

A one-boxer is going to say: “Look, the Predictor clearly has some way of getting inside the player’s head (even if only figuratively) and understanding how they approach problems like this. Also, people who take only Box B have been consistently left better off than people who choose both Box A and Box B. I can’t change what’s in the boxes, but I should want to be the sort of person who only takes Box B because they are generally left better off. And since I don’t know how the Predictor does whatever he does, picking Box B minimizes my risk.”

A two-boxer is going to say: “Look, there’s no such thing as backwards causation. So no matter what the Predictor has predicted, Box B is either full or not. So you risk nothing by taking both boxes. If it’s empty, it would have been empty anyway even if you had left Box A behind.” For the two-boxer, the problem is just about recognizing that it’s too late to change the circumstances. Unless we believe in time travel—and some versions of the problem explicitly rule this out—there’s no way to change what’s in the boxes. Thus the only thing that keeps us from taking both is a failure in reasoning.

So while the one-boxer might say the problem illuminates how you think about risk, the two-boxer will deny this on the grounds that there is no risk involved. The two-boxer, meanwhile, might say that the problem illuminates a particular failure of reasoning that many people are prone to, but the one-boxer will point out that it hardly counts as a failure of reasoning if one-boxers are consistently left better off than two-boxers (which is what has happened so far given the Predictor’s perfect record). In short, then, there’s not even an agreement about what is being determined here!

There have been attempts to link the problem to the issue of free will, but my own view is that the problem isn’t really about that. A two-boxer might think that determinism makes it all the more certain that I can’t change what’s in the boxes, or they might think that indeterminism undermines confidence in the Predictor. A one-boxer might think that determinism offers a clue as to why the Predictor is so reliable, or they might think that indeterminism is a reason for why they see the problem as a question of risk. Thus it seems one could be a one-boxer or a two-boxer regardless of one’s views on free will.

dappled_leaves's avatar

@SavoirFaire Yes. However – it’s not clear at all from the initial problem that the one-boxers are “consistently left better off”. All we know is that the Predictor’s predictions have been very accurate. We don’t know that he made the same prediction for everyone as he did for us.

gorillapaws's avatar

@SavoirFaire So at what dollar amount do the 2 boxers switch to being 1 boxers on average? It seems likely that there is some dollar value in box A that is so low that the reward for being right is not worth the risk of being wrong. And I guess the opposite would be true with different amounts in box B. I would choose both if it was $1,000 in box A and zero or $1,001 in B.
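One way to make that threshold concrete (a minimal sketch under the one-boxer’s expected-value reading of the game, which a committed two-boxer rejects outright; the accuracy figure p is an assumption, not something the problem states):

    # Expected values when the Predictor is right with probability p:
    #   EV(one box)   = p * B
    #   EV(two boxes) = A + (1 - p) * B
    # One-boxing wins whenever p > 1/2 + A / (2 * B).
    def break_even_accuracy(a: float, b: float) -> float:
        return 0.5 + a / (2 * b)

    print(break_even_accuracy(1_000, 1_000_000))  # 0.5005 for the standard amounts
    print(break_even_accuracy(1_000, 1_001))      # ~0.9995 for the $1,001 version above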

hominid's avatar

@SavoirFaire – Thanks. Admittedly, I think my reading of this was heavily influenced by @LostInParadise‘s version:

“Here is the hitch. A near perfect computer simulation has been run of your life to predict your choice. The second box will contain the million dollars only if the computer thinks that you will just take the one box.” (and then defines it as 99.9% accurate)

In this case, the premise explicitly tells you that if you choose box B only, there is a 99.9% chance that it contains $1,000,000. There is no getting around that.

But as I am now re-reading your version…maybe, the way you would interpret the 100 straight correct guesses defines how you see this “problem”. I’m still interpreting it as the predictor is near 100% accurate. And since this is the case, by definition if I choose box B only, there will be $1,000,000 in it.

@SavoirFaire: “A two-boxer is going to say: “Look, there’s no such thing as backwards causation. So no matter what the Predictor has predicted, Box B is either full or not. So you risk nothing by taking both boxes.”

But this would mean that the two-boxer has interpreted the “problem” and the whole definition of the predictor as something else altogether. I can’t quite figure this out, but maybe this is where I’m stuck.

@SavoirFaire: “For the two-boxer, the problem is just about recognizing that it’s too late to change the circumstances.”

But what could that possibly mean? It seems to place agency – free-agency – of the person selecting the box front and center, doesn’t it? I can’t get around the fact that there is an answer to what box I will choose. That is, all of the variables, prior causes, and state of the universe will lead me to choose ____ – whether or not I am under the illusion that I could have chosen otherwise. The variables and considerations included in this very problem are part of the mix for sure, but if I end up convinced that I should choose box B, it’s not as though I had really “chosen” both boxes but at the last moment magically broken the laws of physics and the causal chain. That last-minute “change” is itself part of the chain.

I’m convinced that Box B is the only box to choose. Therefore, any predictor (or computer) that we define as being near 100% accurate in predicting causal events will have predicted this. Problem solved.

Right? Ugh. Is this what you’re saying – that the “problem” really lies in the fact that 2 sides are completely dumbfounded at how it’s possible to interpret this any other way?

dappled_leaves's avatar

@SavoirFaire “Yet it’s very difficult to get either side to understand the arguments in favor of the other, and nearly impossible to get anyone to switch sides.”

To me, this is the greatest surprise about this problem. Reading your post, I cannot understand how anyone would not open both boxes (and I was already firmly convinced of this!).

@hominid “But what could that possibly mean? It seems to place agency – free-agency – of the person selecting the box front and center, doesn’t it?”

Yes. The Predictor has already taken his action. Nothing you can do will change that. This is why I don’t understand why anyone would make any other choice than to open both boxes and see how it turned out. The choice was made – you have only to reveal what was chosen.

As I said here, the one-boxer thinks that his choice gives him the power to fulfill a prophecy. In fact, he has no power whatsoever over the content of the boxes. The decision was already made in the past; the boxes were packed in the past. The only potential he has is to limit his winnings by leaving some in an unopened box. Why do that?

hominid's avatar

@dappled_leaves: “As I said here, the one-boxer thinks that his choice gives him the power to fulfill a prophecy.”

Yeah, I read that previously, and it makes no more sense to me now.

Let me try this: I am going to pick a color: red or blue. I could take 10 minutes or 10 seconds, but at some point I will pick either red or blue. There is an answer to the question of which color I will pick, but we are unable to determine what it is because we do not (currently) have the ability to do so. But there is an answer – just like there is an answer to my son’s question last night: how many people in the world are currently reaching up to grab something? Whatever the answer is, we certainly don’t know it. We also don’t know what color I will eventually choose.

But if we were to define a thought experiment in which we defined into existence a near-perfect predictor/computer, then we would have one piece of the puzzle solved: whatever I will choose, the predictor will know the answer near 100% of the time.

Now, add an incentive to make one choice over the other ($1,000,000 – as in the original “problem”), and you have an easy “choice”. I am swayed by the $1,000,000, and the predictor has been defined into existence as someone who knows what I will be swayed by, so I choose Box B only. And since the original problem has defined the accuracy of prediction, that box has a near 100% chance of containing $1,000,000.

dappled_leaves's avatar

@hominid The introduction of the colour as a middle-man makes no difference to the problem.

As the person who is going to open the box, I cannot influence a choice that has already been made. My choice cannot influence past events.

Therefore, my choice does not affect the contents of the box, which was packed in the past.

I am free to choose both boxes without risk. I choose both, knowing that an unopened box may or may not contain money.

Hypocrisy_Central's avatar

If there was no way to keep the past choosers of box ‘B’ from going on Larry King stating they got a cool million for choosing box ‘B’, then it would be common knowledge that if you wanted to be a millionaire (even for a week, before taxes got taken out) you should choose box ‘B’. Everyone getting the chance would choose box ‘B’, and knowing this, the Predictor would never be wrong. Who would take a thousand he can look at when he knows he will get a million even if he can’t see it?

hominid's avatar

@dappled_leaves: “As the person who is going to open the box, I cannot influence a choice that has already been made. My choice cannot influence past events.”

That doesn’t come into play in any way. You don’t need to influence anything. The defined agent here that has had to make the real decision is the predictor/computer. S/he or it is the one who is somehow able to achieve near 100% accuracy.

@dappled_leaves: “Therefore, my choice does not affect the contents of the box, which was packed in the past.”

I think we have our difference here.

1. The box has been packed in the past.
2. The predictor applied the technique that guarantees near-perfect accuracy at that time in the past.

So, if you are convinced and choose Box B only and there is nothing there, you are defining the premise as something else. The predictor’s accuracy in this case is not near 100%.

dappled_leaves's avatar

@hominid But there is no requirement that my choice must match the Predictor’s choice in order for me to qualify to accept the money. Perhaps that is where we are differing. I do not disagree with either 1. or 2. We are in agreement that these are the facts.

However, whatever the Predictor chose, his choice is over by the time I make my choice. Nothing I do will affect the contents of Box B. He either filled it, or he didn’t. I may as well find out how it went.

hominid's avatar

@dappled_leaves: “However, whatever the Predictor chose, his choice is over by the time I make my choice.”

The predictor is near 100% accurate. You are not aware of the predictor’s prediction, so we don’t have to get into any real paradox here. And the accuracy we are talking about isn’t affected by the issues you have with the time between when the prediction is made and when you make your “choice”. If you can identify a variable that has been introduced that reduces the predictor’s accuracy in this case, you need to explain it. If you accept the premise, you are accepting that the time between the prediction and when you “choose” in no way affects the accuracy of the prediction.

dappled_leaves's avatar

@hominid “The predictor is near 100% accurate.”

Right. Not “The predictor is 100% accurate.”

But even if it were 100% accurate, what does that mean? We cannot know that he would be 100% accurate in the future, we only know that he was 100% accurate in the past. He does not know what I am going to do, because I have not done it yet. And nothing I do will force a million dollars into that box if he didn’t already put it there.

So, fuck the Predictor. I open both boxes.

fluthernutter's avatar

@SavoirFaire I think I can understand both sides.

It comes down to whether you think your decision affects the contents of the box.

Two-boxers:
Your decision does not affect the contents of the box. Your decision is a singular event that takes place after the boxes are packed. Taking both boxes maximizes your outcome.

One-boxers:
Your decision does affect the contents of the box. Your decision is inseparable from your general reasoning, which exists abstractly and, in this situation, has been duplicated to near perfection by the Predictor. Your decision exists before you decide. Taking only Box B maximizes the outcome.

hominid's avatar

@dappled_leaves: “But even if it were 100% accurate, what does that mean? We cannot know that he would be 100% accurate in the future, we only know that he was 100% accurate in the past.”

Would it change anything if we take @LostInParadise‘s 99.9% accurate computer? It sounds like some of your objection to the predictor is that we can’t assume anything about accuracy.

@dappled_leaves: “He does not know what I am going to do, because I have not done it yet.”

Do you agree with this?: You are going to pick one of the following:

1. box a
2. box b
3. both boxes

Do you believe that there is an answer to which choice (1, 2, or 3) you will choose? We don’t have to introduce some all-knowing predictor or a futuristic neuroscience super-computer. Just on the question of whether or not there is an answer (1, 2, or 3) – do you believe that there is an answer here?

I’ll assume that you do believe there is an answer in the above question. To understand where I’m coming from, consider that I am reading this as though a predictor (or computer) has been defined as having near perfect accuracy in determining that answer.

If I had first thought of choosing (3), but then decided that I should choose (2), choice (2) was always the answer. The answer never changes. So to define high accuracy is, by definition, to say that if I chose box B only, it would likely contain $1,000,000.

@dappled_leaves: “And nothing I do will force a million dollars into that box if he didn’t already put it there.”

I think @SavoirFaire might be wrong to dismiss free will here. I think this last statement only makes sense in that context.

gorillapaws's avatar

@fluthernutter For me, I don’t think your decision affects the outcome of the box, but my confidence in that belief is not high enough to justify the risk of losing $1,000,000. So it’s not just about what I believe, but also how confident I am that my beliefs about the universe are correct.

fluthernutter's avatar

@gorillapaws Not directly. But do you think that your decision affects the predictor’s decision? Or the reasoning behind your decision affects the predictor’s decision? Or is it purely a gamble with the house’s money?

CWOTUS's avatar

Having read your explanation now, especially as it concerns “backwards causation”, I can certainly see how picking two boxes would seem to be the best strategy. After all, “the boxes are already set up with their contents”, so picking the second box could not hurt anyone’s chances of doing even better than they would have done by picking a sure $1,000,000 in box B.

But that’s if the setup and the entire predicting-and-choosing scenario are fair and aboveboard, and there is no chicanery with the boxes’ contents after the choice has been made. In the back of my mind has always been a sort of supposition, unvoiced even to myself until just now, that “this is a rigged game”, which is what caused me to take the instructions so literally and to “just go along with it” for the big return.

gorillapaws's avatar

@fluthernutter For me it’s a gamble with the house’s money. Intellectually, taking both is the correct decision, but I’m not so confident in my understanding of the universe that I’d be willing to gamble that I’m right just to win an extra $1,000. In a way this is an epistemic puzzle.

LostInParadise's avatar

@SavoirFaire, I am with @hominid in saying that this question involves free will. For the Predictor to be able to make predictions with such a large degree of success presupposes not only that there are criteria that predetermine what a person is going to do but, more importantly, that these criteria are somehow accessible. My feeling is that what causes people to make decisions is of such great complexity as to be beyond our ability to ascertain, just as long-term weather prediction is not possible. This problem has not just one person but two interacting people, each basing their behavior on guessing what the other will do, which raises the complexity even further.

LostInParadise's avatar

Here is how the question can be seen as paradoxical. Suppose I know the information that the Predictor used to make the decision and how this information was used to predict what I will do. I now know what decision I am going to make, but this is impossible, because I can act in the opposite way.

dappled_leaves's avatar

@LostInParadise You know that he has made a prediction, but you don’t know what that prediction was. So, your decision can’t be based on the actual prediction.

LostInParadise's avatar

The assumption is that the prediction was based on some knowledge. If I know all the information that was used in making the prediction then I can deduce what the prediction is. Does the Predictor know more about me than I know about myself?

hominid's avatar

@LostInParadise: “Here is how the question can be seen as paradoxical. Suppose I know the information that the Predictor used to make the decision and how this information was used to predict what I will do.”

But that’s changing things, right? The original question doesn’t include this. @dappled_leaves is correct in saying: “You know that he has made a prediction, but you don’t know what that prediction was.”

@LostInParadise: “The assumption is that the prediction was based on some knowledge. If I know all the information that was used in making the prediction then I can deduce what the prediction is. Does the Predictor know more about me than I know about myself?”

This thought experiment doesn’t assume that we need to accept the methodology used to achieve near 100% accuracy by the predictor. It only requires that we accept the premise.

But if we were to discuss the tangential issues raised in this question, such as knowledge of the variables that end up as a decision, we could certainly imagine that it is just as likely that a third party has more info about your decision-making process than you do. We are recipients of our thoughts – observers. We can’t even really know why we choose between two inconsequential options. I wouldn’t get too hung up on who or what this predictor is. We have to suspend our disbelief in order to play the game. It’s the price of admission.

LostInParadise's avatar

Why should we have to suspend disbelief? I am trying to see what happens if we place the situation in a real world context. One of the things that make this such a good question is the number of different perspectives from which it can be looked at.

hominid's avatar

@LostInParadise: “Why should we have to suspend disbelief? I am trying to see what happens if we place the situation in a real world context. One of the things that make this such a good question is the number of different perspectives from which it can be looked at.”

I’m not saying that you can’t explore further by modifying a thought experiment to see if people respond differently. But that’s a different project than the initial playing of the game.

Take the famous trolley problem where you’re presented with pushing the fat man off the bridge to save the people on the tracks. You’re then presented with a different scenario where you’re offered a switch to divert the train to a track with one person to save the many. This is an experiment in ethics, and we’re to explore all of the issues that arise here and what that says about our moral intuitions, etc. But if you were to initially refuse to play along by saying, “but what if the fat man would be insufficient to stop the train”, you’re not necessarily being more precise in the experiment – you’re attempting to shift the game altogether. You are forced in that case by the rules of the experiment to believe that you know pushing the fat man (or flipping the switch) would lead to the outcome described in the experiment.

So, if presented with the Newcomb problem, we could answer in any of these ways:

– I wouldn’t trust anyone presenting me with boxes of money and games like this. I would immediately leave and call the police.
– I would never want to win $1,000,000 because it would complicate my perfect life. So, I would choose box A.
– I have a fetish for clear things and an aversion to opaque things…

…etc.

That might be bringing it into a real world context, but in a way that nobody is going to be happy with. To announce your skepticism (valid as it may be) of a predictor is similar. We’re now left without the Newcomb experiment altogether, and are discussing something quite different. That’s actually fine with me. I like some of the issues that you are actually talking about here, but they “break” the Newcomb problem here and we probably have to abandon it.

LostInParadise's avatar

All I am doing is assuming that the Predictor has a rational reason for making the prediction, rather than assuming some clairvoyant vision or intuition, a not unreasonable interpretation. I then show that this can lead to a paradoxical situation. This paradox may lie at the root of why people seem to be so evenly divided and steadfast in their answer to the question.

hominid's avatar

@LostInParadise: “All I am doing is assuming that the Predictor has a rational reason for making the prediction, rather than assuming some clairvoyant vision or intuition”

Fair enough. But I don’t see how that’s relevant. The predictor could be some magical being or it could be someone with access to a super-computer running accurate simulations. Whatever the reason for the accuracy, we are only given the accuracy.

But additionally, I think your statement here does go further….

@LostInParadise: “Here is how the question can be seen as paradoxical. Suppose I know the information that the Predictor used to make the decision and how this information was used to predict what I will do. I now know what decision I am going to make, but this is impossible, because I can act in the opposite way.”

I agree – we would have a paradox here if the problem were changed to include the fact you inserted: “suppose I know the information…”. That’s nowhere in the problem, and inserting it does something to greatly alter the experiment – it creates two predictors.

LostInParadise's avatar

There are two predictors, as I stated previously. The question is whether it is logically possible that the Predictor is almost always right no matter what I do and no matter what I know. The answer is not clear. Assuming that the Predictor is almost always right does seem a little like backwards causality – whatever I do will cause the Predictor to correctly anticipate it.

hominid's avatar

@LostInParadise: “There are two predictors, as I stated previously.”

Who are the two predictors? You’re not defined in the problem as having some supernatural ability to predict the future nor are you defined as having access to some neuroscience super-computer of the future. To describe yourself as a predictor in this equation appears to me to be creative, and serves to destroy the notion that the real predictor in the problem is extremely accurate.

@LostInParadise: “The question is whether it is logically possible that the Predictor is almost always right no matter what I do and no matter what I know. The answer is not clear.”

It’s pretty clear to me. You have to inject other details into this problem for it to become unclear. And that’s what we’ve been discussing in the past few exchanges.

@LostInParadise: “Assuming that the Predictor is almost always right does seem a little like backwards causality – whatever I do will cause the Predictor to correctly anticipate it.”

And here we are back to questions of free will and determinism. “Whatever I do will cause the predictor to correctly anticipate it.” Correct. There is an answer right now to the question of what you will pick, even if you aren’t going to pick until you’ve deliberated for 8 hours. That answer is not going to change. And this is the same for everyone. So a predictor defined as having near perfect accuracy is by definition a predictor that knows what that answer is. Deliberation and choice are illusory factors in this context.

LostInParadise's avatar

I see your point of view and we will have to agree to disagree.

Just a few clarifications.
By predictor I mean someone who makes a prediction. There is no implication as to the accuracy of the prediction. This gets back to what was said by @SavoirFaire about the relationship to game theory. Here is a nice presentation from this point of view. I hesitate to include a video by the mathematician Norman Wildberger because, as a strict constructivist, he is outside the mainstream, but this is not a factor in the video.

By answer not being clear I mean that there are two points of view, as shown by the various answers on this site. Each side feels that their answer is transparently obvious. That is part of what makes this question so interesting.

We do agree that free will is a factor, and we are both in agreement that there is no such thing. Where we disagree is whether it is possible in principle for all of a person’s actions to be predicted in advance. I was trying to suggest that there may be fundamental limits to what we can predict when the person knows that a prediction is being made.

hominid's avatar

@LostInParadise – Thanks. I’ll check out that presentation. And yes, we are largely in agreement here. Thanks for the conversation.

SavoirFaire's avatar

@dappled_leaves “However – it’s not clear at all from the initial problem that the one-boxers are ‘consistently left better off’. All we know is that the Predictor’s predictions have been very accurate.”

The Predictor has never been wrong, which means that every past player who has adopted a one-box strategy has left with more money than any past player who has adopted a two-box strategy. So as long as we are understanding “consistently left better off” as a way of saying “consistently won more money from the game,” then the one-boxer will say that it is clear that the one-boxers have been consistently left better off so far. The two-boxer, however, denies that this is relevant data. For the two-boxer, the past predictions and their outcomes are distractions that get in the way of proper reasoning about the situation (like a riddle in which misleading information distracts you from the obvious answer or tricks you into giving a wrong answer).

“To me, this is the greatest surprise about this problem. Reading your post, I cannot understand how anyone would not open both boxes (and I was already firmly convinced of this!).”

And that’s why I find the problem so interesting. In most philosophical debates, each side can at least understand why someone might hold a different belief. Furthermore, each side can usually be pushed into having to say something they would rather not in order to keep their position consistent. But in this case, neither of these things seems to hold. One-boxers and two-boxers are perfectly happy to say everything they need to say to keep their position consistent, yet this doesn’t help people on the other side to understand why someone might go with a different strategy.

@gorillapaws I don’t know if there is any formal research on your question, but two-boxers never switch to being one-boxers in my experience. On the other hand, one-boxers I have known could be convinced to switch to a two-box strategy when the reward for choosing Box B is low (as you have indicated you would do). When this difference has come up in group discussions, the two-boxers thought it was an indication that their strategy was better (since in their mind it’s the exact same problem no matter what the numbers are). The one-boxers, however, denied this on the grounds that their view is already wrapped up in questions of risk (meaning that the change in strategy is consistent with their previous answer).

I do think you are correct that this is, at least in part, an epistemic problem. Insofar as the key issue is whether the Predictor’s past record is relevant, we have an evidential claim to resolve before making a decision.

@fluthernutter I don’t think that one-boxers would say their decision affects the contents of the box. Phrasing it in that way sounds too much like backwards causation. But yes, the one-boxer thinks that the Predictor’s record suggests that he has some way of figuring out what kind of reasoner you are (and so how you will play the game). Thus the one-boxer can say that while it might at first appear as if taking both boxes is the right strategy—and while it might be in many other circumstances—it is better to be the sort of person who would only take Box B in this particular situation because of the way in which the box’s contents are decided (i.e., by the Predictor). It’s a matter of the normally irrational decision being the situationally rational decision.

@CWOTUS Though we don’t know how the Predictor has managed to obtain a perfect record so far, we are supposed to assume that there is no post-decision chicanery involved. I guess the game could be rigged in some other way (faux psychic John Edward is said to have used hidden microphones to find out what his audience was hoping to hear, so maybe the Predictor could be listening for people to say “I’m going with a one-box strategy” or “definitely going for both boxes”). And unless his record is purely due to luck, he must be observing players in some way that gives him a hint as to which prediction to make. But let us assume that no changes are made to the boxes after the players have made their decisions.

SavoirFaire's avatar

@hominid ”…maybe, the way you would interpret the 100 straight correct guesses defines how you see this ‘problem’.”

Yes, I think so. Key to the two-boxer’s response is that the previous guesses are irrelevant. Key to the one-boxer’s response is that they are not. The two-boxer says that the boxes are already filled (or not filled), so there’s no advantage to taking only Box B or even worrying about the prediction. The one-boxer says that the past accuracy of the Predictor suggests that there is some as yet unknown (to the public, at least) way of figuring out how the players will make their decision, so you might as well play the odds.

“But this would mean that the two-boxer has interpreted the ‘problem’ and the whole definition of the predictor as something else altogether. I can’t quite figure this out, but maybe this is where I’m stuck.”

Indeed. Consider the following riddle:

Think of words ending in -GRY. Angry and hungry are two of them. There are only three words in the English language. What is the third word? The word is something that everyone uses every day. If you have listened carefully, I have already told you what it is.

It has driven many people crazy, particularly since it is frequently misstated (and the misstated versions of it are, in fact, impossible to solve). But the reason it works as a riddle is because the first two sentences are completely irrelevant. They distract people from the actual question (“there are only three words in ‘the English language.’ What is the third word?”).

The two-boxer thinks the Predictor is also a distraction and that the real problem is just this: “There are two boxes in front of you. Box A is transparent and contains $1,000. Box B is opaque and contains either nothing or $1,000,000. You can take either both boxes, or just Box B. How many boxes do you take and why?”

“But what could that possibly mean?”

It’s supposed to mean that there can be no risk involved in taking both boxes. If we rule out backwards causation, says the two-boxer, then what I actually decide cannot influence what is in the boxes. Therefore, taking both in no way causes Box B to be empty (if, in fact, it is.)

“It seems to place agency – free-agency – of the person selecting the box front and center, doesn’t it? I can’t get around the fact that there is an answer to what box I will choose.”

Again, I don’t think free will is the issue. In a world without libertarian free will, we still don’t know how the Predictor maintains his accuracy. Maybe he’s figured out the underlying causal laws, maybe not. But the one-boxer is still going to say that the evidence from past predictions is a reason to think the Predictor has some special insight that is worth taking into account, and the two-boxer is still going to say it’s too late to influence the Predictor’s actions once you get to the decision point of the game.

Now assume a world with libertarian free will. Again, we don’t know how the Predictor maintains his accuracy (though we know it can’t be because of any immutable causal laws that he has figured out). Maybe he’s just really good at psychological profiling (which need not rest on causal determinism). The one-boxer will again think that the evidence from past predictions is a reason to think the Predictor has some special insight that is worth taking into account, and the two-boxer will again think it’s too late to influence the Predictor’s actions once you get to the decision point of the game.

So when @dappled_leaves says “nothing I do will force a million dollars into that box,” there need not be any assumption of free will. We can no more will an empty box to contain a million dollars in a deterministic world than we can in an indeterministic world. At least not so long as the other laws of physics stay in place. Free will is a far cry from conjuration magic.

Thus the real issue seems to be whether we think the evidence of the Predictor’s past predictions is relevant. The one-boxer says it is regardless of whether the world is deterministic or not. Similarly, the two-boxer says it isn’t relevant regardless of which type of world we’re in. Therefore, the issue does not seem to be free will.

“Right? Ugh. Is this what you’re saying – that the ‘problem’ really lies in the fact that 2 sides are completely dumbfounded at how it’s possible to interpret this any other way?”

That’s what makes the problem interesting to me (and what seems to make it more than just a game theory problem). But I wouldn’t say that’s what the problem itself is. I am as convinced as anyone that one of the strategies is better than the other, so I cannot consistently deny that there really is a game theory problem (even if it may not be a difficult one) in addition to the dialectical problem of convincing other people regarding which strategy is best.

SavoirFaire's avatar

@LostInParadise “For the Predictor to be able to make predictions with such a large degree of success presupposes not only that there are criteria that predetermine what a person is going to do but, more importantly, that these criteria are somehow accessible.”

It may suggest these things, but it certainly does not presuppose them. Nothing in the scenario rules out luck or pre-game cheating. All you know is the Predictor’s past record. Consider a somewhat different case: I make a bet with you that I can introduce you to someone who just won 10 coin tosses in a row using a fair coin. So long as I have 1,024 accomplices, I cannot lose this bet. I just run a ten-round tournament and introduce you to the winner. This person may have no special features whatsoever (other than a recent string of lucky guesses).
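The tournament trick can be made concrete (a rough sketch of the accomplice argument above; the bracket mechanics are an assumption about how such a tournament might be run):

    import random

    # SavoirFaire's 1,024-accomplice bet as a bracket: each round, players
    # pair off, a fair coin is tossed, and the one who called it advances.
    # After ten rounds exactly one person remains, and that person has
    # called ten tosses in a row with no special ability whatsoever.
    def coin_toss_tournament(players: int = 1024) -> tuple[int, int]:
        field = list(range(players))
        rounds = 0
        while len(field) > 1:
            random.shuffle(field)
            # in each pair, one player calls heads and the other tails,
            # so each toss advances exactly one of them
            field = [random.choice(pair) for pair in zip(field[::2], field[1::2])]
            rounds += 1
        return field[0], rounds

    winner, rounds = coin_toss_tournament()
    print(f"player {winner} 'won' {rounds} straight tosses by luck alone")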

Granted, 100 correct guesses in a row is more impressive. But even still, this does not necessarily raise the problem of free will. Human beings could still be predictable both en masse and as individuals in a world where they had libertarian free will due to psychological tendencies that they could—but generally didn’t—work against. This is particularly true if the players are personally unaware of the psychological tendencies that might give them away.

“Suppose I know the information that the Predictor used to make the decision and how this information was used to predict what I will do.”

But you don’t. Obviously, you can change the scenario to make it a paradox. But my claim was that the scenario, as normally described, is not paradoxical.

“Does the Predictor know more about me than I know about myself?”

He might. I see no reason to think it is impossible. I know things about my wife that she does not know about herself. She knows things about me that I do not know about myself. Alternatively, each of us may know things about the other before the other knows them, because we recognize the external signs of, say, an imminent upswing or downswing in the other’s mood. And it’s well known that our friends are often better at describing us than we are at describing ourselves.

In part, this is because external observers don’t have access to all the “noise” that clouds our minds when making decisions. They don’t know the things that secretly tempted us or what justifications we have given ourselves for certain decisions. They don’t know what we almost did or what we wish we hadn’t done. They just see the actual results of our decision making process, which means they can see that we consistently choose X or never actually get around to doing Y. So while we are distracted by the noise of consciousness, they can see the patterns in our behavior more clearly.

If he is a particularly keen observer, then, the Predictor may well know all sorts of things we do not know about ourselves. And again, he may also know important facts about human psychology that we do not (which still apply to us even if we don’t know about them). These combined with more personal facts may be enough for him to deduce what we cannot even if we do know more about ourselves than he does. Quantity of information isn’t everything, after all.

hominid's avatar

@SavoirFaire – Thanks for the great, lengthy response here. Love this question.

@hominid: “It seems to place agency – free-agency – of the person selecting the box front and center, doesn’t it? I can’t get around the fact that there is an answer to what box I will choose.”

@SavoirFaire: “Again, I don’t think free will is the issue. In a world without libertarian free will, we still don’t know how the Predictor maintains his accuracy. Maybe he’s figured out the underlying causal laws, maybe not. But the one-boxer is still going to say that the evidence from past predictions is a reason to think the Predictor has some special insight that is worth taking into account, and the two-boxer is still going to say it’s too late to influence the Predictor’s actions once you get to the decision point of the game.” [my emphasis]

But this anticipated two-boxer response is what I am objecting to. S/he may still say this, but it will not make any sense in a world without libertarian free will. So when I say that this does have something to do with free will, what I mean is that the two-boxer’s position is only plausible in a world with libertarian free will.

In a world without libertarian free will, statements like “it’s too late to influence” are nonsensical. It’s an attempt to break free from determinism. It’s also eliminating the Predictor from the question. If we alter the original proposed problem, then all bets are off. We could insert a box C.

gorillapaws's avatar

@SavoirFaire That’s interesting about 2 boxers never switching, even for measly rewards. It seems this could actually be a test for confidence of belief. 2 boxers are 100% confident in their belief, while 1 boxers acknowledge that their beliefs about the universe could be different than what they think in the face of significant evidence to indicate otherwise.

It’s not unlike “true believers” vs. skeptics. I wonder if this correlates with other firmly held beliefs.

Similar puzzle:
I strongly hold belief x. Someone presents compelling (but not conclusive) evidence for not-x. If I am wrong I get nothing. If I’m right I get a modest reward. If I switch beliefs and am right then I get a very large reward. If I switch beliefs and am wrong I get nothing.

It’s a hell-of-a-lot like Pascal’s Wager.

LostInParadise's avatar

@SavoirFaire, it still looks like a free will issue. If luck or cheating is involved, that changes how most of us see the problem. One would have to make a preliminary evaluation of the chances of these factors holding. If the Predictor was accurate purely due to luck, then take both boxes. If the game is rigged, then take the one box.

We can’t consider, as you do, the possibility of free will where the guesser is not seriously thinking about the options. It is key to the problem that the guesser knows that his/her actions are being predicted. For the Predictor to be accurate apart from luck or cheating, there must be some lawful behavior that can be actively predicted.

Here is something else to consider. There is a key piece of information that we are not given, which is the proportions of people in the previous trials who took one or two boxes. If nearly everyone took only one box and these were the only people that the Predictor got right, that would mean that the Predictor always placed money in the second box and that would be the preferred choice based only on past experience.

dappled_leaves's avatar

@gorillapaws Except that two-boxers think of the problem in exactly the opposite way from what you just described. Have a look at my earlier comment about belief. I think the one-boxer is the one with the faith. Look how @hominid and @LostInParadise can’t even discuss this question without invoking determinism. It is like embracing a kind of religious slavery to the will of a god.

The two-boxer removes faith from the equation, knowing that his decision cannot influence past events. The one-boxer somehow imbues himself with the power to fulfill a prophecy.

hominid's avatar

@dappled_leaves: “Look how @hominid and @LostInParadise can’t even discuss this question without invoking determinism. It is like embracing a kind of religious slavery to the will of a god.”

Well, I am a theist after all (Christian).

I get it – we have different beliefs about free will. But I’m not necessarily arguing that here. I’m just pointing out that I think the answer to the Newcomb thing does shed light on these differences.

fluthernutter's avatar

@SavoirFaire and @dappled_leaves I don’t think it’s necessarily about faith or backwards causation. At least not for me.

The way I approach the one-box solution is by thinking of my answer as part of a larger pattern of behavior (one that exists before, during, and after my decision). Versus the two-box solution, where (as I reason it) my decision does not exist until I make it.

gorillapaws's avatar

@dappled_leaves The 2-boxers are the ones with faith. They won’t change their beliefs no matter what the circumstances are. Even when presented with “evidence” they remain entirely convinced that it doesn’t matter and they’re right. That’s an act of faith: you’re so convinced that you’re right that you won’t consider other factors.

I think the 2-boxers are correct intellectually, but I don’t have the level of conviction that would make me willing to risk losing the $1,000,000 just to gain an additional $1,000. If I’m wrong, I’m only out $1k.

dappled_leaves's avatar

@gorillapaws The evidence points to the one-boxers being right, but the two-boxers are correct intellectually? You are contradicting yourself.

fluthernutter's avatar

@dappled_leaves Two-boxers are straight up rational. One-boxers are suspending their disbelief (that a machine could correctly predict their behavior) for the hypothetical situation.
And have a bit of a gambling streak.
