Decision Making In Prisoner’s Dilemma




5. Heuristics and biases

In this section we will deal with decision making that is influenced by various biasing inputs and/or incorporates bounded, simplified, suboptimal, or "heuristic" processing (evaluation, sampling, elimination, anchoring, choice, etc.).


Three major categories of heuristics are: "(i) representativeness [see 5.1], which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios [see 5.2], which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor [see 5.3], which is usually employed in numerical prediction when a relevant value is available" (Tversky & Kahneman, 1974, p. 20).


5.1 Representativeness

People use the representativeness heuristic when they try to assess the probability that an object (or a person, or an event) belongs to a given class of objects and hence has certain properties typical of objects of that class. We can also speak of resemblance instead of representativeness. Basing the classification of objects on their representativeness seems a good approximation to optimal classification, but it can sometimes lead to erroneous judgments.




5.11 Base rates

There is at least one important piece of information that should, but often does not, influence the valid inclusion of an object into a class of objects (an empirical /inductive/, not logical or semantic /deductive/, inclusion): prior background information (base rates and other prior odds) stating the usual frequency of incidence of the given object. If the inclusion of an object into a class does not seem very accurate to the person performing it (for example, because the description of the object is vague), the decision maker's classification should depend more on the known base rates – that is, on the prior frequency of incidence of members of the class in question vs. the frequency of incidence of members of other classes.


Kahneman and Tversky's (Kahneman & Tversky, 1973, pp. 49-53) subjects failed to do this, even though they had all the necessary background information explicitly at their disposal (i.e. the base rates were given). The subjects in Kahneman and Tversky's study were given short personality sketches, and their task was to decide what field the person described in the sketch was studying (the subjects expressed this as a likelihood ranking of the possible fields). The subjects also judged how much the person resembled a typical student of each field (a similarity ranking). The real base rates of the frequency of incidence of students in all relevant fields were given as well.
The subjects' predictions (the likelihood ranking of the fields) were essentially based on independent judgments of the representativeness (similarity) of the personality sketch compared with the typical student of the respective field. The base rates of the frequency of incidence of students in the relevant fields were neglected. The correlation between likelihood and similarity was 0.97, while the correlation between base rates and judged likelihood was -0.65! Although the subjects were aware that basing predictions of likelihood on representativeness is not valid – they expected the probability that their predictions were correct to range between 23% when estimates were based on projective tests and 53% when based on reports of the students' interests and plans – they ignored base rates that might have served as helpful cues for classifying the students.
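For contrast, a normative judge would weight resemblance by the prior odds. The following minimal sketch (with purely hypothetical base rates and likelihoods – not figures from Kahneman and Tversky's study) shows how Bayes' rule can make the high-base-rate field the more probable answer even though the sketch resembles the other field's typical student far more:

```python
# A minimal sketch of normative base-rate use via Bayes' rule.
# All numbers below are hypothetical and chosen only for illustration.

def posterior(base_rates, likelihoods):
    """Combine prior base rates with evidence likelihoods via Bayes' rule."""
    unnormalized = {field: base_rates[field] * likelihoods[field]
                    for field in base_rates}
    total = sum(unnormalized.values())
    return {field: value / total for field, value in unnormalized.items()}

# Hypothetical enrollment base rates and P(sketch | field): how strongly a
# vague personality sketch "resembles" a typical student of each field.
base_rates = {"computer science": 0.03, "humanities": 0.20}
likelihoods = {"computer science": 0.70, "humanities": 0.25}

print(posterior(base_rates, likelihoods))
# {'computer science': 0.296..., 'humanities': 0.704...}
# The sketch resembles a computer science student far more, yet among these
# two fields the humanities posterior dominates, because its base rate is
# roughly seven times higher.
```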
The reason why people neglect base rates is that they lack the psychological machinery that would enable them to use such rates:
"What is perhaps surprising is the failure of people to infer from lifelong experience such fundamental statistical rules as regression toward the mean, or the effect of sample size on sampling variability. […] Statistical principles are not learned from everyday experience because the relevant instances are not coded appropriately. For example, people do not discover that successive lines in a text differ more in average word length than do successive pages, because they simply do not attend to the average word length of individual lines or pages. Thus, people do not learn the relation between sample size and sampling variability, although the data for such learning are abundant." (Tversky & Kahneman, 1974, p. 18)
There can be various reasons why people lack the psychological machinery that would enable them to use base rates. To sum up, it was probably not advantageous from the evolutionary point of view to evolve a capacity for taking base rates into account. This could be because:
(1) Base rates are usually not available or not reliable (in Kahneman and Tversky's study, on the other hand, the rates were given explicitly).
(2) Base rates are available, but it is not advantageous to take them into account: it might be more advantageous to classify certain signals as signs of threat and take action, even though they are usually not signs of a real danger. To put it more generally, base rates are weighed (and discounted) by the importance of the events in question. The more threatening or rewarding the possible event, the bigger the neglect of base rates (e.g. running into an enemy, winning a lottery); see the expected-utility sketch after this list.
(3) Base rates were evolutionarily not useful in contact within a small group of people (100-200 members) – whether a person is faithful, attractive, clever, or a cheat can be assessed better with the representativeness heuristic than with base rates when you deal with only a small group of people.
(4) Using statistical rules (see section 5.12 below), logical inference, base rates, or unbiased outcome information is computationally costly and time consuming (not to mention that specialized brain modules have to evolve to perform these tasks). These cost factors limit the possible employment (and development) of the more sophisticated reasoning procedures.
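A minimal sketch of point (2), with purely hypothetical payoffs: when the cost of ignoring a real danger is catastrophic, acting as if the danger were real maximizes expected utility even at a very low base rate of actual danger.

```python
# A minimal expected-utility sketch of point (2); all payoffs hypothetical.
# With asymmetric costs, "neglecting" a low base rate of danger is optimal.

p_danger = 0.02                # base rate: a rustle is rarely a real predator
cost_flee = -1.0               # small, certain cost of needlessly running away
cost_ignored_danger = -1000.0  # catastrophic cost of ignoring a real predator

eu_flee = cost_flee                       # flee regardless of the base rate
eu_stay = p_danger * cost_ignored_danger  # stay and risk the rare catastrophe

print(eu_flee, eu_stay)  # -1.0 vs -20.0: fleeing wins despite the 2% base rate
```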
When assessing in experimental situations what subjects' typical (or maximum) reasoning powers are, variables such as the type, amount, and chronology of reinforcement should be taken into account (these variables can increase or decrease subjects' performance). Subjects' motivation has to be considered as well, along with the number of trials (and thus the opportunity to learn). See for example Einhorn, 1980; for a description of the process of calibrating probability assessments through experience and feedback see Alpert & Raiffa, 1969/1982, and Lichtenstein et al., 1977/1982; for a discussion of the possibility of formulating and employing "statistical heuristics" that would improve people's reasoning see Nisbett et al., 1982; possible tools for improving people's performance in judgment and decision-making tasks are summarized in Fischhoff, 1982 – one can, for example, show the subjects overlooked distinctions, describe semantic disagreements, clarify instructions, warn of the problem, provide feedback, raise the stakes, or train the subjects extensively.


5.12 Samples of a population

Another study (Kahneman & Tversky, 1972/1982) presented subjects with the following problem:


“There are two programs in a high school. Boys are a majority (65%) in program A, and a minority (45%) in program B. There is an equal number of classes in the two programs.
You enter a class at random, and observe that 55% of the students are boys. What is your best guess – does the class belong to program A or to program B?” (Kahneman & Tversky, 1972/1982, p. 34)
67 of 89 subjects answered that the class belongs to program A. The class is indeed more representative (in the "representativeness" = "subjective similarity" sense) of program A, since it has a larger proportion of boys. But from the statistical point of view it is slightly more likely that the class belongs to program B, since the variance for p = 0.45 exceeds that for p = 0.65 – p(1-p) peaks at p = 0.5, so a 10-percentage-point deviation from the program mean is slightly more probable under program B (Kahneman & Tversky, 1972/1982). This study showed that it can be difficult for decision makers to judge what is representative in the statistical sense (rather than in the representativeness = subjective similarity sense). The problem is to take statistical laws into account when comparing samples with their population, or when judging the actual relations between samples and their population.
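The statistical claim can be checked directly. The original problem does not state the class size, so the following sketch assumes a hypothetical class of 20 students, 11 of whom (55%) are boys:

```python
# Checking the programs problem directly; the class size is not given in
# the original problem, so n = 20 (11 boys = 55%) is a hypothetical choice.
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k boys in a class of n) when each student is a boy with prob. p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 11  # 11 of 20 students = 55% boys
for program, p in [("A", 0.65), ("B", 0.45)]:
    print(f"Program {program}: P({k} boys of {n}) = {binom_pmf(k, n, p):.4f}")
# Program A: 0.1158, Program B: 0.1185 – the observation is slightly more
# likely under program B, because p = 0.45 has the larger variance p(1-p).
```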
Another study offers an even stronger example of this fallacy (Kahneman & Tversky, 1972/1982). Subjects were asked which sequence of births is more probable: girl, boy, girl, boy, boy, girl (sequence A) – or boy, girl, boy, boy, boy, boy (sequence B). Both sequences are about equally likely, but 79 out of 90 subjects judged the second one (sequence B) less likely, presumably because it seemed to them a less plausible sample of a series of random events (such as births, coin flips, etc.). The authors call this (fallacious) expectation of randomness in a sample the belief in the law of small numbers – i.e. the belief that the law of large numbers applies to small numbers (small samples) as well (Tversky & Kahneman, 1971; Kahneman & Tversky, 1972/1982).
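A small sketch of both points: any specific ordered sequence of six births is equally probable under a fair 50/50 process, while "patterned-looking" samples (such as sequence B, which contains a run of four boys) are in fact quite common in samples this small.

```python
# Every specific ordered sequence of six births is equally likely under a
# fair 50/50 process, yet "patterned-looking" sequences are common.
import random

seq_a = "GBGBBG"  # girl, boy, girl, boy, boy, girl
seq_b = "BGBBBB"  # boy, girl, boy, boy, boy, boy

# Each specific sequence has probability (1/2)^6 = 1/64.
print(0.5 ** len(seq_a), 0.5 ** len(seq_b))  # 0.015625 0.015625

def longest_run(seq):
    """Length of the longest run of identical consecutive elements."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# How often does a random six-birth sequence contain a run of four or more
# same-sex births (as sequence B does)? Exact answer: 16/64 = 0.25.
trials = 100_000
hits = sum(longest_run(random.choices("BG", k=6)) >= 4 for _ in range(trials))
print(hits / trials)  # ~0.25: such "non-random-looking" samples are common
```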
Again we can think that people simply lack the psychological machinery for applying statistical reasoning, and again we can see some reasons why this might be advantageous from the evolutionary perspective. Take the example of social exchange (we will reinterpret the above example with newborns in terms of social exchange). Assume that you are in an ongoing social exchange with someone who is supposed to cooperate in 50% of cases. In the remaining 50% of cases he does not help you, and you will still consider him quite a good friend (this is also our assumption here). Then assume a short series of exchange events (say six during a few months). If your partner acted in this way: cooperation, no cooperation, cooperation, no cooperation, no cooperation, cooperation (sequence A), you would probably think of him as quite a good friend (this is my hypothesis, not an assumption); but if he acted in this way: no cooperation, cooperation, no cooperation, no cooperation, no cooperation, no cooperation (sequence B), you would probably think him a lousy guy and terminate the exchange (even though sequence B is as representative a sample of "50% cooperative behavior" as sequence A). I expect this might be the kind of situation where evolutionary pressures acted on the evolution of statistical vs. representativeness reasoning – if our ancestors had developed fine statistical reasoning, it is possible they would have become victims of cheaters (like the one from sequence B).
---
For a detailed empirical study showing the same type of results concerning sampling fallacies, see Bar-Hillel, 1982; for more recent studies see Fiedler (2000) and Denrell (2005). Analogous results in the area of probabilistic reasoning/diagnostics by experts in clinical medicine were found by Eddy (1982; see also Tversky & Kahneman, 1982). Chater et al. (2008) attempt to explain people's cooperation in the single-move Prisoner's Dilemma game by the possibility that their former experience of cooperation constituted a biased sample for which they are unable to correct (and hence they transfer cooperative decision making from settings where it could be profitable into different settings where it might not be so profitable, such as the single-move Prisoner's Dilemma).


