Newcomb's paradox
Newcomb's Paradox, also referred to as Newcomb's Problem, is a thought experiment involving a game between two players, one of whom purports to be able to predict the future. Whether the problem is actually a paradox is disputed.
Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969, which spread it to the philosophical community, and it appeared in Martin Gardner's Scientific American column in 1974. Today it is a much-debated problem in the philosophical branch of decision theory, but it has received little attention from the mathematical side.
The problem
A person is playing a game operated by the Predictor, an entity somehow presented as being exceptionally skilled at predicting people's actions. The exact nature of the Predictor varies between retellings of the paradox. Some assume that the character always has a reputation for being completely infallible and incapable of error. The Predictor can be presented as a psychic, as a superintelligent alien, as a deity, etc. However, the original discussion by Nozick says only that the Predictor's predictions are "almost certainly" correct, and also specifies that "what you actually decide to do is not part of the explanation of why he made the prediction he made". With this original version of the problem, some of the discussion below is inapplicable.
The player of the game is presented with two opaque boxes, labeled A and B. The player is permitted to take the contents of both boxes, or just of box B. (The option of taking only box A is ignored, for reasons soon to be obvious.) Box A contains $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.
By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor's prediction, and knowledge of the Predictor's infallibility. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.
| Predicted choice | Actual choice | Payout |
|---|---|---|
| A and B | A and B | $1,000 |
| A and B | B only | $0 |
| B only | A and B | $1,001,000 |
| B only | B only | $1,000,000 |
The problem is called a paradox because two strategies that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first strategy argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money. That is, if the prediction is for both A and B to be taken, then the player's decision becomes a matter of choosing between $1,000 (by taking A and B) and $0 (by taking just B), in which case taking both boxes is obviously preferable. But, even if the prediction is for the player to take only B, then taking both boxes yields $1,001,000, and taking only B yields only $1,000,000—the difference is comparatively slight in the latter case, but taking both boxes is still better, regardless of which prediction has been made.
The second strategy suggests taking only B. By this strategy, we can ignore the possibilities that return $0 and $1,001,000, as they both require that the Predictor has made an incorrect prediction, and the problem states that the Predictor is almost never wrong. Thus, the choice becomes whether to receive $1,000 (both boxes) or to receive $1,000,000 (only box B)—so taking only box B is better.
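The conflict can be made quantitative. Writing $p$ for the probability that the prediction matches the actual choice (the problem gives no exact figure, saying only that predictions are "almost certainly" correct), the two strategies' expected payouts are:

```latex
E[\text{only B}]  = p \cdot 1{,}000{,}000 + (1 - p) \cdot 0
E[\text{A and B}] = p \cdot 1{,}000 + (1 - p) \cdot 1{,}001{,}000
```

Taking only B has the higher expectation exactly when $p \cdot 1{,}000{,}000 > p \cdot 1{,}000 + (1 - p) \cdot 1{,}001{,}000$, that is, when $p > 1{,}001{,}000 / 2{,}000{,}000 \approx 0.5005$. Even a predictor barely better than a coin flip thus favors one-boxing on expected value, while the dominance argument favors two-boxing at every value of $p$.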
In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."
Attempted resolutions
Many argue that the paradox is primarily a matter of conflicting decision-making models. The expected utility hypothesis leads one to conclude that the most utility (or money) comes from taking only box B, while the dominance principle leads one to expect the greatest benefit from taking both boxes.
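A minimal sketch of the two decision rules in Python, applied to the payout table above; the numeric accuracy value is an assumption for illustration, since the problem states only that predictions are "almost certainly" correct:

```python
# Payouts keyed by (predicted choice, actual choice), from the table above.
PAYOUT = {
    ("both", "both"): 1_000,
    ("both", "one"):  0,
    ("one",  "both"): 1_001_000,
    ("one",  "one"):  1_000_000,
}

def expected_value(choice, accuracy):
    """Expected-utility reasoning: the prediction matches `choice`
    with probability `accuracy`, and is the opposite otherwise."""
    other = "one" if choice == "both" else "both"
    return (accuracy * PAYOUT[(choice, choice)]
            + (1 - accuracy) * PAYOUT[(other, choice)])

def dominant_choice():
    """Dominance reasoning: for each fixed prediction, pick the choice
    that pays more; 'both' wins under either prediction."""
    two_boxing_dominates = all(
        PAYOUT[(pred, "both")] > PAYOUT[(pred, "one")]
        for pred in ("both", "one")
    )
    return "both" if two_boxing_dominates else "one"

accuracy = 0.99  # assumed predictor accuracy, not specified by the problem
best_by_eu = max(("both", "one"), key=lambda c: expected_value(c, accuracy))
print(best_by_eu)         # -> "one":  expected utility says take only box B
print(dominant_choice())  # -> "both": dominance says take both boxes
```

The two functions disagree by construction, which is the sense in which the paradox is a conflict between decision-making models rather than an error in either one.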
Some argue that Newcomb's problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem, so logically there can be no free will. However, free will is also defined into the problem; otherwise the chooser is not really making a choice.
Other philosophers have proposed many solutions to the problem, many eliminating its seemingly paradoxical nature:
Some suggest that a rational person will choose both boxes while an irrational person will choose just the one, and that rational people therefore fare better, since a Predictor of the kind described cannot actually exist. Others have suggested that an irrational person will do better than a rational person, and interpret the paradox as showing how people can be punished for making rational decisions. [1]
It might be possible to create a predictor similar to the one proposed in the problem by using memory-blocking drugs such as Versed. Under such drugs, subjects are unable to lay down new memories, so a subject could be run through the problem a large number of times, producing for many subjects a highly accurate prediction of what they will do on the next iteration (though today's drugs would generate a different mental state in the drugged trials than in the non-drugged "real" experiment). This technique would fail with subjects who decide to act deliberately at random, as the sketch below illustrates.
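A hypothetical sketch of this repeated-trial predictor (the subjects and probabilities are invented for illustration): it predicts by majority vote over many trials, which works for subjects with stable dispositions but not for deliberate randomizers.

```python
import random

def run_trials(subject, n_trials=100):
    """Predict a subject's next choice by majority vote over repeated
    trials, as the memory-blocking setup would allow."""
    votes = [subject() for _ in range(n_trials)]
    return max(set(votes), key=votes.count)

def habitual_one_boxer():
    # A subject with a stable disposition: almost always one-boxes.
    return "one" if random.random() < 0.95 else "both"

def coin_flipper():
    # A subject who deliberately randomizes. Majority vote still returns
    # an answer, but it matches the next choice only about half the time.
    return random.choice(["one", "both"])

random.seed(0)
print(run_trials(habitual_one_boxer))  # reliably predicts "one"
print(run_trials(coin_flipper))        # no better than chance next round
```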
Others have suggested that in a world with perfect predictors (or with time machines, since a time machine could be the mechanism for making the prediction) causation can go backwards. [2] If a person truly knows the future, and that knowledge affects his actions, then events in the future are causing effects in the past: the Chooser's choice will have already caused the Predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and the Chooser will do whatever he is fated to do. Others conclude that the paradox shows it is impossible to ever know the future. Taken together, the paradox restates the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors; some philosophers argue the paradox is equivalent to the grandfather paradox. Put another way, the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes that a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions.

Note, however, that Nozick's exposition specifically excludes backward causation (such as time travel) and requires only that the predictions be of high accuracy, not that they are absolutely certain to be correct. The considerations just discussed are therefore irrelevant to the paradox as Nozick saw it, which turns on two principles of choice, one probabilistic and the other causal: assuming backward causation removes any conflict between the two principles.
Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain would generate the consciousness of that person. [3] Suppose the Predictor is a machine that arrives at its prediction by simulating the brain of the Chooser confronted with the problem of which box to choose. If that simulation generates the consciousness of the Chooser, then the Chooser cannot tell whether he is standing in front of the boxes in the real world or in the virtual world generated by the simulation. The "virtual" Chooser would thus tell the Predictor which choice the "real" Chooser is going to make.
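This simulation argument can be illustrated with a toy sketch (the function names are invented for illustration): if the Predictor fills box B by running the Chooser's own decision procedure, that procedure receives nothing that could distinguish the simulated invocation from the real one, so the two invocations necessarily agree.

```python
def choose():
    """The Chooser's decision procedure. It receives no input that could
    distinguish a simulated invocation from the real one, so it must
    behave identically in both."""
    return "one"  # this particular Chooser one-boxes

def predict():
    # The Predictor arrives at its prediction by simulating the Chooser.
    return choose()

box_b = 1_000_000 if predict() == "one" else 0  # fixed before the game
actual = choose()                               # the "real" choice, later
payout = box_b + (1_000 if actual == "both" else 0)
print(payout)  # 1000000: a deterministic Chooser cannot fool this Predictor
```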
Notes
- ^ Lewis, David (1981), "Why Ain'cha Rich?", Noûs.
- ^ Craig, William Lane (1988), "Tachyons, Time Travel, and Divine Omniscience," The Journal of Philosophy.
- ^ Neal, R. M., "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning," preprint.
References
- Nozick, Robert (1969), "Newcomb's Problem and Two Principles of Choice," in Essays in Honor of Carl G. Hempel, ed. Nicholas Rescher, Synthese Library (Dordrecht, the Netherlands: D. Reidel), p. 115.
- Bar-Hillel, Maya & Margalit, Avishai (1972), "Newcomb's Paradox Revisited," British Journal for the Philosophy of Science, 23, 295-304.
- Gardner, Martin (1974), "Mathematical Games," Scientific American, March 1974, p. 102; reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1)
- Campbell, Richmond and Lanning Sowden, ed. (1985), Paradoxes of Rationality and Cooperation: Prisoners' Dilemma and Newcomb's Problem, Vancouver: University of British Columbia Press. (an anthology discussing Newcomb's Problem, with an extensive bibliography)
- Levi, Isaac (1982), "A Note on Newcombmania," Journal of Philosophy 79 (1982): 337-42. (a paper discussing the popularity of Newcomb's Problem)
- John Collins, "Newcomb's Problem", International Encyclopedia of the Social and Behavioral Sciences, Neil Smelser and Paul Baltes (eds), Elsevier Science (2001) (Requires proper credentials)
External links
- Newcomb's Paradox by Franz Kiekeben
- Thinking Inside the Boxes by Jim Holt, for Slate
- Newcomb's Problem by Marion Ledwig
- Free Will: Two Paradoxes of Choice (lecture) by Roderick T. Long