Causal decision theory
Causal decision theory is a mathematical theory intended to determine the set of rational choices in a given situation. In informal terms, it maintains that the rational choice is the one with the best expected causal consequences. This theory is often contrasted with evidential decision theory, which recommends the action that would be the best evidence that desirable outcomes will occur.
Informal description
Very informally, causal decision theory advises decision makers to choose the act with the best expected causal consequences. The basic idea is simple enough: if eating an apple will cause you to be happy and eating an orange will cause you to be sad, then you would be rational to eat the apple. One complication is the notion of expected causal consequences. Imagine that eating a good apple will cause you to be happy and eating a bad apple will cause you to be sad, but you aren't sure whether the apple is good or bad. In this case you don't know the causal effects of eating the apple. Instead, you work from the expected causal effects, which depend on three things: (1) how likely you think the apple is to be good and how likely you think it is to be bad; (2) how happy eating a good apple makes you; and (3) how sad eating a bad apple makes you. Causal decision theory then advises the agent to choose the act whose expected causal effects are best, as in the worked sketch below.
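To make the expected-utility comparison concrete, here is a minimal sketch with made-up numbers; the credence of 0.7 and the utility values are illustrative assumptions, not part of the theory:

```python
# Toy illustration of choosing by expected causal utility.
# All numbers below are illustrative assumptions.
p_good = 0.7           # credence that the apple is good
u_good_apple = 10      # how happy a good apple makes you
u_bad_apple = -5       # how sad a bad apple makes you
u_orange = 1           # utility of the orange, assumed known for simplicity

# Expected causal utility of each act.
eu_apple = p_good * u_good_apple + (1 - p_good) * u_bad_apple   # 5.5
eu_orange = u_orange                                             # 1

# Causal decision theory recommends the act with the higher expected utility.
best = "apple" if eu_apple > eu_orange else "orange"
print(best, eu_apple, eu_orange)
```

On these assumptions the apple wins; lowering the credence that the apple is good below 0.4 would flip the recommendation.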
Formal description
In a 1981 article, Allan Gibbard and William Harper explained causal decision theory as maximization of the expected utility of an action $A$ "calculated from probabilities of counterfactuals":[1]

$$U(A) = \sum_j P(A > O_j)\, D(O_j),$$

where $D(O_j)$ is the desirability of outcome $O_j$ and $P(A > O_j)$ is the counterfactual probability that, if $A$ were done, then $O_j$ would hold.
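A minimal sketch of this calculation, assuming the counterfactual probabilities $P(A > O_j)$ and desirabilities $D(O_j)$ are already given as lists (the function name and the numbers are hypothetical):

```python
# Gibbard–Harper expected utility: U(A) = sum_j P(A > O_j) * D(O_j).
# The inputs below are hypothetical; the theory itself does not say where the
# counterfactual probabilities come from.

def causal_expected_utility(counterfactual_probs, desirabilities):
    """counterfactual_probs[j] = P(A > O_j); desirabilities[j] = D(O_j)."""
    return sum(p * d for p, d in zip(counterfactual_probs, desirabilities))

# Two mutually exclusive outcomes O_1, O_2 for some action A.
print(causal_expected_utility([0.8, 0.2], [100, -50]))  # 0.8*100 + 0.2*(-50) = 70.0
```

The agent then performs whichever available action has the largest $U(A)$.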
Difference from evidential decision theory
David Lewis proved[2] that the probability of a conditional, $P(A > B)$, does not always equal the conditional probability $P(B \mid A)$.[3] If that were the case, causal decision theory would be equivalent to evidential decision theory, which uses conditional probabilities.
Gibbard and Harper showed that if we accept two axioms (one related to the controversial principle of the conditional excluded middle[4]), then the statistical independence of $A$ and $A > B$ suffices to guarantee that $P(A > B) = P(B \mid A)$. However, there are cases in which actions and conditionals are not independent. Gibbard and Harper give an example in which King David wants Bathsheba but fears that summoning her would provoke a revolt.
Further, David has studied works on psychology and political science which teach him the following: Kings have two personality types, charismatic and uncharismatic. A king's degree of charisma depends on his genetic make-up and early childhood experiences, and cannot be changed in adulthood. Now, charismatic kings tend to act justly and uncharismatic kings unjustly. Successful revolts against charismatic kings are rare, whereas successful revolts against uncharismatic kings are frequent. Unjust acts themselves, though, do not cause successful revolts; the reason uncharismatic kings are prone to successful revolts is that they have a sneaky, ignoble bearing. David does not know whether or not he is charismatic; he does know that it is unjust to send for another man's wife. (p. 164)
In this case, evidential decision theory recommends that David abstain from Bathsheba, while causal decision theory—noting that whether David is charismatic or uncharismatic cannot be changed—recommends sending for her.
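The divergence can be made vivid with hypothetical numbers; none of the figures below come from Gibbard and Harper, and they are chosen only to expose the structure of the case:

```python
# King David case with illustrative, assumed numbers.
# Revolts are caused by (un)charisma, not by the unjust act itself, so causal
# decision theory holds the revolt risk fixed across acts; evidential decision
# theory treats the act as evidence about David's charisma.

p_charismatic = 0.5        # David's unconditional credence that he is charismatic
p_char_given_send = 0.1    # credence in charisma given that he sends (unjust act)
p_char_given_abstain = 0.9 # credence in charisma given that he abstains
p_revolt_if_char = 0.05    # chance of a successful revolt against a charismatic king
p_revolt_if_unchar = 0.6   # chance against an uncharismatic king
u_bathsheba = 10           # value of having Bathsheba
u_revolt = -100            # disvalue of a successful revolt

def value(p_char, send):
    """Expected value given a credence in charisma and a choice of act."""
    p_revolt = p_char * p_revolt_if_char + (1 - p_char) * p_revolt_if_unchar
    return (u_bathsheba if send else 0) + p_revolt * u_revolt

# Evidential decision theory: condition the charisma credence on the act.
edt_send, edt_abstain = value(p_char_given_send, True), value(p_char_given_abstain, False)

# Causal decision theory: the act cannot change charisma, so use the same credence.
cdt_send, cdt_abstain = value(p_charismatic, True), value(p_charismatic, False)

print(edt_send, edt_abstain)  # -44.5 vs -10.5  -> EDT says abstain
print(cdt_send, cdt_abstain)  # -22.5 vs -32.5  -> CDT says send
```

On these assumptions the evidential calculation penalizes sending because the act is bad news about David's charisma, while the causal calculation treats the revolt risk as already fixed and so favors the extra value of sending.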
Criticism
Counterexamples
Newcomb's paradox is a classic example illustrating the potential conflict between causal and evidential decision theory: because your choice of one or two boxes cannot causally affect the Predictor's guess, causal decision theory recommends the two-boxing strategy.[1] However, if the Predictor is as reliable as the problem stipulates, two-boxers typically walk away with only $1,000 rather than $1,000,000. Similar concerns arise in problems such as the prisoner's dilemma[5] and various other thought experiments.[6]
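The conflict can be seen in a short calculation. The 99% accuracy figure below is an illustrative assumption; the problem only requires that the Predictor be highly reliable:

```python
# Newcomb's problem with an assumed 99%-accurate Predictor.
# Box B holds $1,000,000 iff the Predictor foresaw one-boxing; the transparent
# box always holds $1,000.
accuracy = 0.99
million, thousand = 1_000_000, 1_000

# Evidential expected values: the choice is evidence about what was predicted.
edt_one_box = accuracy * million                   # 990,000
edt_two_box = (1 - accuracy) * million + thousand  # 11,000

# Causal expected values: the prediction is already fixed, so whatever credence p
# the agent has that the million is in box B, two-boxing adds $1,000 on top.
p = 0.5  # arbitrary fixed credence; the comparison holds for any p
cdt_one_box = p * million
cdt_two_box = p * million + thousand

print(edt_one_box, edt_two_box)  # EDT favors one-boxing
print(cdt_one_box, cdt_two_box)  # CDT favors two-boxing
```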
Probabilities of conditionals
As Michael John Shaffer points out,[4] there are difficulties with assigning probabilities to counterfactuals. One proposal is the "imaging" technique suggested by Lewis:[7] to evaluate $P(A > B)$, move the probability mass of each possible world $W$ to the closest possible world $W_A$ in which $A$ holds (assuming $A$ is possible), and then take the probability of $B$ under the resulting distribution. However, this procedure requires that we know what we would believe if we were certain of $A$; this is itself a conditional to which we might assign probability less than 1, leading to regress.[4]
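The following is a minimal sketch of imaging over a toy possible-worlds model; the worlds, the prior, and the closeness relation are all illustrative assumptions:

```python
# Lewis-style "imaging" on a proposition A over a toy set of possible worlds.
# Worlds, prior probabilities, and the closeness relation are assumed for
# illustration only.

worlds = {                       # each world fixes the truth values of A and B
    "w1": {"A": True,  "B": True},
    "w2": {"A": True,  "B": False},
    "w3": {"A": False, "B": True},
}
prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
closest_A_world = {"w1": "w1", "w2": "w2", "w3": "w1"}  # assumed closeness relation

def image_on_A(prior, closest_A_world):
    """Shift each world's probability mass to its closest A-world."""
    imaged = {w: 0.0 for w in prior}
    for w, p in prior.items():
        imaged[closest_A_world[w]] += p
    return imaged

imaged = image_on_A(prior, closest_A_world)
p_A_counterfactual_B = sum(p for w, p in imaged.items() if worlds[w]["B"])
print(p_A_counterfactual_B)  # P(A > B) = 0.7 under these assumptions
```

The regress worry enters at the choice of the closeness relation: specifying which $A$-world is closest already encodes what one would believe if one were certain of $A$.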
Notes
1. Gibbard, A.; Harper, W. L. (1981), "Counterfactuals and two kinds of expected utility", Ifs: Conditionals, Beliefs, Decision, Chance, and Time: 153–190.
2. Lewis, D. (1976), "Probabilities of conditionals and conditional probabilities", The Philosophical Review, 85 (3): 297–315, doi:10.2307/2184045, JSTOR 2184045.
3. In fact, Lewis proved a stronger result: "if a class of probability functions is closed under conditionalizing, then there can be no probability conditional for that class unless the class consists entirely of trivial probability functions," where a trivial probability function is one that "never assigns positive probability to more than two incompatible alternatives, and hence is at most four-valued [...]."
4. Shaffer, Michael John (2009), "Decision Theory, Intelligent Planning and Counterfactuals", Minds and Machines, 19 (1): 61–92, doi:10.1007/s11023-008-9126-2.
5. Lewis, D. (1979), "Prisoners' dilemma is a Newcomb problem", Philosophy & Public Affairs, 8 (3): 235–240, JSTOR 2265034.
6. Egan, A. (2007), "Some counterexamples to causal decision theory" (PDF), The Philosophical Review, 116 (1): 93–114, doi:10.1215/00318108-2006-023.
7. Lewis, D. (1981), "Causal decision theory" (PDF), Australasian Journal of Philosophy, 59 (1): 5–30, doi:10.1080/00048408112340011.
External links
- Causal Decision Theory at the Stanford Encyclopedia of Philosophy
- The Logic of Conditionals at the Stanford Encyclopedia of Philosophy