Decision Making In Prisoner’s Dilemma



6.3 Other independent variables and modifications of the game

(1)


Kin altruism. If the players are related (or at least believe they might share the same genes), the probability of their mutual cooperation increases (Axelrod & Hamilton, 1981; Wade & Breden, 1980). Related individuals share a proportion of their genes (i. e. there is a definite probability that two relatives carry the same gene). Under natural selection, benefiting our kin yields gains to our genes discounted by the proportion (probability) of shared genes. The sum of an individual’s direct reproductive success and these kin-mediated gains is called inclusive fitness (Hamilton, 1964, 1972; Trivers, 1976; Trivers, 1974 – for limits of and conflicts connected with kin altruism; Haldane, 1955; Wilson, 2000, pp. 117-120, 415-418; Dawkins, 2006b, pp. 88-108).
(2)

Reciprocity or generosity? Axelrod (1980a, 1980b, 1981, 2006), Axelrod & Dion (1988), and Nowak & Sigmund (1992) offer empirical evidence that people are likely to cooperate with those who have cooperated with them, and that such conditional cooperation is a robustly successful strategy. In fact, two of the three main characteristics of a successful strategy in Prisoner’s Dilemma tournaments are being “nice” (cooperating with those who cooperate) and being “retaliatory” (retaliating against a prior defection by the other player).


People are more willing to cooperate in the iterated game if they can reasonably believe they are about to interact with altruists (cooperative players). This is sometimes called the “warm glow” or altruism hypothesis (Samuelson, 1987; Palfrey & Rosenthal, 1988; Andreoni & Miller, 1993).
Bendor et al. (1991) found, using data from a computer tournament they organized, that “generous”, highly cooperative strategies are more successful than Tit for Tat in “noisy” circumstances (e. g. when feedback about the opponent’s moves is false or uncertain, or when the link between one’s own decision and action is error-prone; the importance of behavioral “noise” in strategic games was pointed out by Selten, 1975). This was confirmed by Kollock (1993), who speaks of “relaxed accounting” rather than “generosity” (technically, both terms denote the same characteristic of a strategy). Generous strategies are more successful in noisy games because noise easily disrupts cooperation, and generosity can maintain a relatively high level of it (Molander, 1985; Bendor, 1987, 1993).
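The effect of generosity under noise can be illustrated with a small simulation. The sketch below is illustrative only: it uses the canonical payoffs that appear later in this chapter (T = 5, R = 3, P = 1, S = 0), a per-move error rate as the "noise", and a generosity parameter of 1/3; the function names and parameter values are our own choices, not taken from the studies cited above.

```python
import random

# Canonical payoffs for (row move, column move): T=5, R=3, P=1, S=0
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, opp_history):
    # Cooperate first, then copy the opponent's last move.
    return opp_history[-1] if opp_history else "C"

def generous_tft(own_history, opp_history, generosity=1/3, rng=random):
    # Like Tit for Tat, but forgives a defection with probability `generosity`.
    if opp_history and opp_history[-1] == "D":
        return "C" if rng.random() < generosity else "D"
    return "C"

def play(strategy_a, strategy_b, rounds, noise=0.0, seed=0):
    """Iterated PD; each intended move is flipped with probability `noise`."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if rng.random() < noise:  # implementation error ("trembling hand")
            move_a = "D" if move_a == "C" else "C"
        if rng.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        hist_a.append(move_a)
        hist_b.append(move_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
    return score_a, score_b
```

Without noise, two Tit for Tat players cooperate throughout. With noise, a single error sends a Tit for Tat pair into a long echo of mutual retaliation, while a generous pair typically recovers after a few rounds, which is the mechanism Bendor et al. and Kollock describe.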
(3)

The theory of the green-beard effect (Dawkins, 2006b, pp. 88-89; Dawkins, 1999, pp. 143-155) maintains that we behave altruistically toward those whom we know to behave altruistically themselves. If altruism is genetically based and all altruists share the same genes for altruism, then by helping other altruists you benefit copies of your own genes for altruism.


(4)

Availability of threats and commitments (Schelling, 1960a). If you can, for instance, bindingly announce that after a single defection on your opponent’s side you will never cooperate with the defector again, and that you will never defect first, this is likely to increase the probability of the other player’s cooperation. More generally, you can introduce some form of interdependent moves (Howard, 1966; Rapoport, 1967c): it is bindingly announced that if player A acts in a certain way, player B will respond in a certain way, and vice versa.


(5)

Penalty on defection (Tideman & Tullock, 1976; Frank, 1995). If defection is in some way penalized, its probability naturally decreases. This may require the presence of a central authority above the players, or it can be seen as a mutual commitment of both players.


(6)

A cooperative partner must be able to recognize us, to remember our past encounters and their outcomes, and to calculate the profitability and likelihood of cooperation. A player without memory (including implicit memory), or without the ability to recognize us, is always playing a single-move Prisoner’s Dilemma, and we should expect him to always defect.


A player who systematically miscalculates the probability of outcomes would also compel us to defect: if he expects little cooperation from us, he will defect, and we will defect accordingly to avoid S; if he overestimates our cooperation, we will take advantage of it and defect to obtain T.
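The claim that a memoryless player ends up in the single-move game, where defection is the rational choice, follows from strict dominance, which a short check makes explicit (using the canonical payoffs T = 5, R = 3, P = 1, S = 0 given later in this chapter):

```python
# Row player's payoff for (own move, opponent's move), canonical values
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Defection strictly dominates cooperation: whatever the opponent does,
# D earns strictly more than C, so a one-shot (memoryless) player
# maximizes his payoff by always defecting.
for opp_move in ("C", "D"):
    assert payoff[("D", opp_move)] > payoff[("C", opp_move)]
```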
Interestingly, Trivers (1971) shows how less sophisticated organisms, such as cleaner fish and their hosts, can simplify the problem of memory and recognition by identifying players from the past not individually (which would be cognitively costly) but on a territorial basis, cooperating only with players in a given, easily recognizable territory. We can hypothesize that humans use this heuristic too: they favor people of the same neighborhood, nationality, or skin color, and avoid strangers. Axelrod & Hamilton (1981) show how bacteria solve the same problem of recognition and memory by accommodating to a single player (the entrails of an animal), where no memory or explicit recognition is necessary. Prisoner’s Dilemma can be applied even to the interaction between viruses and their hosts (viruses solve the cognitive problem in the same way as bacteria): viruses (and bacteria) tend to defect when w decreases due to aging or another illness of the host, and vice versa – when w is high, many bacteria and some viruses are harmless (see Axelrod & Hamilton, 1981; Turner & Chao, 1999; for more examples of cooperation among animals see Chase, 1980; Fagen, 1980).
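The role of w (the probability that the interaction continues) can be made quantitative. Axelrod & Hamilton (1981) derive the condition under which Tit for Tat is collectively stable: w must be at least max((T−R)/(T−P), (T−R)/(R−S)). A small function computes this threshold; the function name is ours, but the formula is theirs.

```python
def tft_stability_threshold(T, R, P, S):
    """Minimum continuation probability w at which Tit for Tat is
    collectively stable (Axelrod & Hamilton, 1981). Below this
    threshold the shadow of the future is too short and defection pays."""
    return max((T - R) / (T - P), (T - R) / (R - S))

# With the canonical payoffs T=5, R=3, P=1, S=0, cooperation is stable
# only for w >= 2/3 – which is why parasites "defect" when the host's
# w drops through aging or illness.
print(tft_stability_threshold(5, 3, 1, 0))
```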
(7)

Gumpert et al. (1969) and Messick & Thorngate (1967) noticed that players in experimental settings sometimes willingly decrease their cooperation, even though maintaining it was (in the particular games studied) easier and safer than in the Prisoner’s Dilemma game, and (also in Prisoner’s Dilemma) more profitable than provoking defection in the opponent.


This might be caused by players getting bored by long sequences of CC (according to Gumpert), or by players comparing their profits with those of their opponent (Messick & Thorngate) – that is, by switching from viewing Prisoner’s Dilemma (or another game) as a non-zero-sum game to viewing it as a zero-sum game (a nice strategy, such as ALL C or Tit for Tat, can by definition never win more points than its opponent).
(8)

“Cheap talk”. A short discussion in a group that is about to play Prisoner’s Dilemma enhances cooperation in the game. The discussion took different forms in different studies: sometimes it concerned only the group itself, sometimes the upcoming game was discussed. It is “cheap” talk because no one can bindingly commit to cooperate and no one can enforce any such promises. There are at least three explanations of why cheap talk might work: a) group identity and solidarity develop, b) “cheap” but public promises and commitments are made, c) cooperative norms are primed. It is difficult to disentangle these three effects in empirical studies.


a) Group identity and solidarity. There is a switch from individual to group goals. Mere talking in the group enhances the feeling of belonging, which enhances cooperation among members of the group (Brewer & Kramer, 1986; Thompson et al., 1998; Pilisuk et al., 1965 – Pilisuk and his colleagues are convinced that the feeling, perception, or attitude of “we” or of “common fate” that enhances cooperation develops also during playing itself).
b) Promises and commitments. This mechanism underlying the effectiveness of pre-play discussion has been repeatedly confirmed (Sally, 1995). Mulford et al. (2008) presume that making a public commitment leads to the need to keep one’s word (it would be interesting to test the effect of public pre-play commitment on players who engage in the upcoming play anonymously – for example via computer network). Opponents are also expected to cooperate, because they made their promises (Mulford et al., 2008; Kerr & Kaufman-Gilliland, 1994). Some degree of initial trust is thus created.
c) Cooperative norms are most likely instilled when the pre-play discussion takes the form of solving a “coordination problem” (what will happen, how to react, who does what, who talks, who listens, is everything clear, when to begin, etc.). Coordination norms are thus primed and can subsequently enhance cooperation in the game tasks (Orbell et al., 1988; Bicchieri, 2002).
(9)

“Indirect change”. Behavior (together with thinking, emotions, attitudes, etc.) has several levels or components (see for example Sluzki, 2007), some of which are harder to change than others. Some of the harder-to-change components are causally connected to easier-to-change ones. If you change an easy-to-change component of a person’s behavior (thinking, attitudes, etc.), the causally related hard-to-change components will adjust automatically. Indirect change rests on these premises: you change the hard-to-change things indirectly by changing the causally related easy-to-change things.


We can find examples of a similar approach in the therapeutic work of Milton H. Erickson. He utilized already existing behavior that was susceptible to change, and by changing it he elicited indirect change in causally related components of behavior (thinking, attitudes, etc.) that were resistant to direct change. For many examples of this or similar kinds of therapeutic change see Haley, 1993; Haley, 1999a, pp. 6-7, 77-79; Haley, 1999b, pp. 33-35, 44-45, 168; Haley, 1999c, pp. 126-127, 128-130; Zeig, 2008; Zeig, 2010, pp. 184-196, 269-271; Erickson & Rossi, 2010, pp. 71-73, 122-127. Watzlawick et al. (1967) and Minuchin (1974, 1995) used a somewhat similar approach. For indirect induction of hypnosis see Erickson, 1959/2009a, 1959/2009b; Erickson & Rossi, 2009, pp. 148-155.
If we understand Prisoner’s Dilemma to be a model of real-life situations, we can think of some “easy” changes that could influence players’ decision making rather significantly (what counts as an “easy” change can be determined only in a given context). This approach is virtually unknown in the study of experimental games, and we can offer only a few suggestions here. What will happen, for example, if players encounter two different types of opponents with two marginally different payoff matrices (T1 = 5, R1 = 3, P1 = 1, S1 = 0 versus T2 = 5.000001, R2 = 3.000001, P2 = 1.000001, S2 = 0.000001)? What will happen if players are not told whether a game with a previously unknown opponent is a single-trial game or the first trial of an iterated game? What will happen if iterated games are played with delayed feedback about the opponent’s moves? Will subjects behave differently in condition A (payoff matrix T1 = 5, R1 = 3, P1 = 1, S1 = 0) and in condition B, where players get 6 points each before every trial and the payoff matrix is T2 = -1, R2 = -3, P2 = -5, S2 = -6? What will happen if players are told they are going to interact in single-move encounters with a pool of 2, 3, 10, 100, 1,000, or 1,000,000 anonymous opponents (even though subjects will in fact meet exactly the same condition, including the actual number of opponents – only the available information about the condition will differ)?
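The condition-A/condition-B comparison hinges on the two situations being strategically identical: adding the fixed 6-point endowment before each trial shifts every payoff in matrix B by the same constant, recovering matrix A exactly, so any behavioral difference would reflect framing rather than incentives. A quick arithmetic check:

```python
# Condition A payoffs and condition B payoffs, as given in the text
matrix_a = {"T": 5, "R": 3, "P": 1, "S": 0}
matrix_b = {"T": -1, "R": -3, "P": -5, "S": -6}
endowment = 6  # points handed to each player before every trial

# Net payoffs in condition B equal condition A outcome for outcome:
# the endowment is an affine shift that leaves the dilemma unchanged.
shifted = {k: v + endowment for k, v in matrix_b.items()}
assert shifted == matrix_a
```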


6.31 Continuous development of Prisoner’s Dilemma research

Prisoner’s Dilemma is a subject of continuing research and some of the original assumptions and results have already been revised in some respects.


For example, Nowak & Sigmund (1993) devised a strategy that outperforms Tit for Tat under certain circumstances (Tit for Tat was by far the most successful and robust strategy – i.e. successful under almost all circumstances – in Axelrod’s original tournaments). Axelrod himself (2006, pp. 158-168) found that some strategies that did not rank very well in his original round-robin tournaments could outperform other strategies, including Tit for Tat, under special circumstances: when successful strategies spread territorially into space previously occupied by less successful strategies, and the diffusion took place by means of “imitation” or “conversion”, it paid greatly to attain outstanding success occasionally, even if the average success rate of a given strategy was not outstanding.
Le & Boyd (2007) found that if behavior is not divided into discrete either/or categories (C or D) but changes continuously along some dimension (e. g. is more or less cooperative; see also Frean, 1996), there is a smaller chance that cooperation will evolve in a previously uncooperative environment: invading players (or players who decide to cooperate) that are only marginally more cooperative than the rest of the non-cooperative population do not earn much by their cooperation (and get S from at least part of the noncooperative players). A tool for modeling marginal changes of cooperation (a 21 x 21 matrix of gradually changing payoffs that contained the basic dilemma of a 2 x 2 payoff matrix) was used by Pilisuk et al. as early as 1965. For a summary discussion of the invasion of a noncooperative population by a small but clustering population of cooperators (discerning and at least partially non-randomly cooperative – cooperating more with other cooperators than with noncooperators), see Axelrod, 2006, pp. 212-215; see also Fagen, 1980; Bendor & Swistak, 1997.
Computer simulation has also gained popularity in the study of experimental games (see for example Axelrod, 1997; Gotts et al., 2003).

II. Practical part

