Decision Making In Prisoner’s Dilemma




7. Overview

In sections 8 to 12 we will deal with the individual hypotheses, as well as with the experimental designs and theoretical considerations associated with them. Please note that the experiments described in section 8.44 will be used to test several hypotheses, not only in section 8 but also in the following sections. The sample description in section 8.41 is likewise relevant to all the other sections.


We will test experimentally the difference between the single-move and the iterated Prisoner’s Dilemma game in terms of the amount of cooperation (section 8). We will try to determine whether learning (section 10) and explicit, non-binding communication, or “cheap talk” (section 11), affect decision making in the Prisoner’s Dilemma. In section 9 we will examine the existence and extent of the so-called end-effect in the iterated Prisoner’s Dilemma game. In section 12 we will also perform some preliminary regression studies using data from our experiments and measures of certain characteristics of the players (friendliness, risk avoidance, dominance, and tolerance for ambiguity) obtained by self-rating scales.
The complete summarized data are available in the Appendix.

8. The amount of cooperation in the single-move Prisoner’s Dilemma game and in the iterated Prisoner’s Dilemma game




8.1 Task

The task of the present experiment is to compare the amount of cooperation in the single move Prisoner’s Dilemma game and in the iterated Prisoner’s Dilemma game. There are not many such experiments, partly because experimenting with single-move games is not considered “cost-effective” by some researchers (Rapoport, 1988b, p. 475).




8.2 Single-move Prisoner’s Dilemma

As we already discussed in the theoretical part (see section 6.14), theory predicts that defection is strictly dominant in single-trial games (Luce and Raiffa, 1957), which means that a reasonable utility-maximizing player will always defect:


Player A knows that Player B will either cooperate (1) or defect (2).
(1) Player A might assume that Player B will cooperate; in that case he (Player A) gets either R (for cooperating) or T (for defecting). Since T > R, a utility-maximizing Player A will defect.
(2) Player A might assume that Player B will defect; in that case he gets either S (for cooperating) or P (for defecting). Since P > S, a utility-maximizing Player A will again defect.
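With the payoff values used in our experiment (T = 5, R = 3, P = 1, S = 0; see section 8.42), this dominance argument can be checked mechanically. The following sketch is illustrative only:

```python
# Payoff values from the matrix in section 8.42: T = 5, R = 3, P = 1, S = 0.
T, R, P, S = 5, 3, 1, 0

# Player A's payoff for each (A's move, B's move) combination.
payoff_A = {
    ('C', 'C'): R,  # mutual cooperation
    ('C', 'D'): S,  # A is the "sucker"
    ('D', 'C'): T,  # A yields to temptation
    ('D', 'D'): P,  # mutual defection
}

# Whatever Player B does, defection pays Player A strictly more,
# i.e. defection is a strictly dominant strategy.
for b_move in ('C', 'D'):
    assert payoff_A[('D', b_move)] > payoff_A[('C', b_move)]

print("defection strictly dominates cooperation")
```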
Cooperation in the single-move Prisoner’s Dilemma, on the other hand, might stem from the so-called “matching heuristic” (Morris et al., 1998): you act as if you believe your opponent is going to cooperate, and you “reciprocate” this cooperation. This is, in fact, a reformulation of Hofstadter’s (1985) superrationality argument. Another source of cooperation in the single-shot game is the “control heuristic” (Shafir & Tversky, 1992). (These heuristics are discussed in more detail below.)

8.3 Iterated Prisoner’s Dilemma

In the case of the iterated Prisoner’s Dilemma game, some theorists predict that reasonable utility-maximizing players will likewise always defect (see section 6.17):


Luce and Raiffa (1957, pp. 94-102) showed that in any iterated Prisoner’s Dilemma of finite, known length a rational player should still always defect. He cannot expect cooperation from the other player on the last move, since the last move is in effect a single-move Prisoner’s Dilemma. And since he cannot be rewarded by the other player’s cooperation on the last move, he has no incentive to cooperate on the next-to-last move, which thereby becomes equivalent to a single-move Prisoner’s Dilemma as well, and so on all the way back to the very first move. This is the multistage Prisoner’s Dilemma paradox, which stems from backward induction from a known terminating point (Rapoport, 1967a, p. 143). In Luce and Raiffa’s account of the iterated Prisoner’s Dilemma there is no “shadow of the future”, that is, no expectation of possible cooperation by the other player on the next move, and hence no incentive to maintain a cooperative strategy.
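The unraveling argument can be sketched numerically. Assuming a rational opponent and the payoff values from section 8.42 (T = 5, R = 3, P = 1, S = 0), we walk backward from the last move: given that both players defect in all later stages, defection is also the better choice at the current stage.

```python
# Backward induction over a finite iterated Prisoner's Dilemma.
# Payoff values from section 8.42: T = 5, R = 3, P = 1, S = 0.
T, R, P, S = 5, 3, 1, 0
HORIZON = 30  # length of the iterated games in our design

future = 0  # Player A's total payoff from all later stages
for stage in range(HORIZON, 0, -1):  # from the last move back to the first
    # A rational opponent defects at this stage (and at all later ones),
    # so Player A's two options pay:
    if_cooperate = S + future
    if_defect = P + future
    assert if_defect > if_cooperate  # defecting is better at every stage
    future += P  # both defect here; step back one stage

print("rational players defect on all", HORIZON, "moves")
```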
Numerous empirical studies, however, have shown that there is usually a significant amount of cooperation in the iterated Prisoner’s Dilemma game (see for example Rapoport and Chammah, 1965a; Rapoport and Dale, 1966; also see sections 6.15-6.19).


8.4 Procedure

With our subjects we will play 30 single-move games and fifteen 30-move iterated games (for technical reasons we did not perform the single-move games with 2 of our 45 subjects). The strategies against which the players compete in the iterated games are given in section 8.44. Data from the experiment performed here will be used to test hypotheses in this and also in the following sections.


The experimenter played against each player individually, face to face, using the strategies described below. It was not disclosed to the subjects that they were in fact playing against predetermined strategies.

8.41 Subjects sample

A total of 45 subjects participated in our study. Of these, only 43 played the single-move games. All of our subjects participated in the rest of the procedures. The subjects were not chosen randomly but as a convenience sample. Because we used a within-subject design, we did not need a control group.


There were 18 men and 27 women in our study. Their age ranged from 20 to 56 years (mean = 25.13 years, SD = 6.69, median = 24 years). They were mostly college undergraduates (N = 37) attending courses in humanities and social sciences, business and administration, computer science, or medicine. The eight remaining subjects were college graduates working in various fields, such as medicine, research, or business.
Only six of our subjects admitted ever playing Prisoner’s Dilemma before.


8.42 Prisoner’s Dilemma game introduced to subjects

The subjects are informed about the rules of the game. The pay-off matrix shown below is used in all of the games.

                                     Player B
                       Cooperate                  Defect
  Player A
  Cooperate    R = 3, R = 3               S = 0, T = 5
               (reward for mutual         (sucker's payoff for player A,
               cooperation for both       temptation to defect for
               players)                   player B)

  Defect       T = 5, S = 0               P = 1, P = 1
               (temptation to defect      (punishment for mutual
               for player A, sucker's     defection for both players)
               payoff for player B)

All combinations of moves are explained to the subjects, and we then rehearse the game with each participant for a few minutes. It is stressed that the goal of each single-move and/or iterated game is to obtain as many points as possible. It is also stressed that the goal is not to earn more points than the experimenter (whom they encounter in the games), but rather more points than the other subjects (whom they never encounter directly in the games).
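A short sketch of how points accumulate under this matrix (the move sequences below are made up for illustration):

```python
# Joint payoffs from the matrix above: (Player A's points, Player B's points).
PAYOFF = {
    ('C', 'C'): (3, 3),  # R, R
    ('C', 'D'): (0, 5),  # S, T
    ('D', 'C'): (5, 0),  # T, S
    ('D', 'D'): (1, 1),  # P, P
}

def score(moves_a, moves_b):
    """Total points of both players over a sequence of simultaneous moves."""
    total_a = total_b = 0
    for a, b in zip(moves_a, moves_b):
        pts_a, pts_b = PAYOFF[(a, b)]
        total_a += pts_a
        total_b += pts_b
    return total_a, total_b

# Example: A cooperates twice then defects; B cooperates once then defects.
print(score("CCD", "CDD"))  # (3+0+1, 3+5+1) = (4, 9)
```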




8.43 Single-move Prisoner’s Dilemma

The subjects are told they will play thirty single-move games against thirty different, previously unknown subjects. They are asked to write down their move for each of the thirty games. There is no feedback for any of the games (i.e., the subjects do not learn their “opponents’” moves).




8.44 Iterated Prisoner’s Dilemma

Subsequently, the subjects will play fifteen 30-move iterated Prisoner’s Dilemma games. We tell them it will be as if they played against several different players, represented in the actual experimental setting by the experimenter (the number of games is given explicitly, but the number of different “opponents” is not).


Each subject will encounter five different strategies with three possible endings each (see Table 8.1). The strategies employed (Tit for Tat, Random, Benevolent, Deterrent, and Bully) cover a relatively wide scope of possible strategic behavior.

Table 8.1: The experimental conditions used in the iterated Prisoner’s Dilemma games



Condition number    Condition
 1                  Closed Tit for Tat
 2                  Closed Random
 3                  Closed Benevolent
 4                  Closed Deterrent
 5                  Closed Bully
 6                  Semiclosed Tit for Tat
 7                  Semiclosed Random
 8                  Semiclosed Benevolent
 9                  Semiclosed Deterrent
10                  Semiclosed Bully
11                  Open Tit for Tat
12                  Open Random
13                  Open Benevolent
14                  Open Deterrent
15                  Open Bully

At least 30 trials are played in each iterated game, whether closed, semiclosed, or open, and 30 trials are registered for each condition. In closed-end games players are told that the game lasts 30 moves (its actual duration). In semiclosed games players are told that the game will last 25-35 trials. In open-end games subjects are given no information about the length of the game.


Tit for Tat responds to the opponent’s cooperation on the n-th move with cooperation on the next move, and to the opponent’s defection on the n-th move with defection on the next move.
Random chooses randomly between cooperation and defection.
Benevolent responds to the opponent’s cooperation on the n-th move with cooperation on the next move; it defects only after the opponent’s third defection, and afterwards it responds to every other defection by the opponent with defection on the next move.
Deterrent responds to the opponent’s cooperation on the n-th move with cooperation on the next move and to the opponent’s defection on the n-th move with defection on the next move. But if the opponent defects right after mutual cooperation (i.e., after CC), Deterrent responds with two consecutive defections on the next two moves. If the opponent again defects right after a mutual cooperation, Deterrent responds with three consecutive defections, then four, and so on.
Bully responds to the opponent’s defection on the n-th move with defection on the next move, and it responds to every third cooperation by the opponent with defection on the next move; otherwise it responds to the opponent’s cooperation on the n-th move with cooperation on the next move.
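As a sketch, the deterministic strategies above can be written as small state machines (Random is omitted). Two points are our own interpretation, since the text does not fix them: each strategy is assumed to open with cooperation, and Benevolent’s “every other defection” is read literally as every second defection (the 3rd, 5th, 7th, ...); if every subsequent defection is meant, drop the parity check.

```python
class TitForTat:
    """Copies the opponent's previous move; opens with cooperation
    (the opening move is our assumption, not stated in the text)."""
    def move(self, opp_last):
        return 'C' if opp_last is None else opp_last

class Benevolent:
    """Forgives the opponent's first two defections, retaliates to the
    3rd and then (our reading of 'every other') to the 5th, 7th, ..."""
    def __init__(self):
        self.defections = 0
    def move(self, opp_last):
        if opp_last == 'D':
            self.defections += 1
            if self.defections >= 3 and self.defections % 2 == 1:
                return 'D'
        return 'C'

class Deterrent:
    """Tit for Tat with escalating punishment: a defection right after
    mutual cooperation (CC) draws 2 defections, the next such defection
    draws 3, and so on."""
    def __init__(self):
        self.queued = 0        # punishment defections still owed
        self.level = 2         # length of the next escalated punishment
        self.my_last = None
        self.prev_joint = None # joint outcome of the round before last
    def move(self, opp_last):
        if opp_last is None:
            m = 'C'            # assumed opening move
        elif self.queued > 0:
            self.queued -= 1
            m = 'D'
        elif opp_last == 'D':
            m = 'D'
            if self.prev_joint == ('C', 'C'):  # defection right after CC
                self.queued = self.level - 1   # this D plus the queued ones
                self.level += 1
        else:
            m = 'C'
        if opp_last is not None:
            self.prev_joint = (self.my_last, opp_last)
        self.my_last = m
        return m

class Bully:
    """Tit for Tat, except it defects after every third cooperation
    by the opponent."""
    def __init__(self):
        self.coops = 0
    def move(self, opp_last):
        if opp_last is None:
            return 'C'         # assumed opening move
        if opp_last == 'D':
            return 'D'
        self.coops += 1
        return 'D' if self.coops % 3 == 0 else 'C'

def run(strategy, opponent_moves):
    """The strategy's move in round i answers the opponent's move in round i-1."""
    replies, last = [], None
    for opp in opponent_moves:
        replies.append(strategy.move(last))
        last = opp
    return ''.join(replies)

print(run(TitForTat(), "CCDCC"))   # -> CCCDC
print(run(Deterrent(), "CCDCCC"))  # -> CCCDDC
print(run(Bully(), "CCCCCCC"))     # -> CCCDCCD
```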
---
For our experiments we used balanced orders of the experimental conditions (1-15), as can be seen below in Tables 8.2 and 8.3. We created a balanced latin square of 15 condition orders (A-O) of our 15 conditions (five strategies with three different endings each), constructed with the help of Bradley’s (1958) algorithm. This means that subject S1 meets condition order A, subject S2 condition order B, and so on; the 1st, 16th, and 31st subjects meet the same condition order (A), the 2nd, 17th, and 32nd subjects meet condition order B, etc. This is a standard procedure for this kind of within-subject experimental design.
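The construction can be sketched as follows: the first row interleaves the conditions as 1, 2, n, 3, n-1, ..., and each subsequent row increments every entry by 1 (wrapping from n back to 1). This reproduces the orders in Table 8.2; the function name is ours.

```python
def balanced_latin_square(n):
    """Rows of condition orders: row 1 is 1, 2, n, 3, n-1, ...;
    each later row adds 1 to every entry (wrapping from n back to 1)."""
    first, lo, hi, take_low = [1], 2, n, True
    while lo <= hi:
        if take_low:
            first.append(lo)
            lo += 1
        else:
            first.append(hi)
            hi -= 1
        take_low = not take_low
    return [[(x - 1 + k) % n + 1 for x in first] for k in range(n)]

square = balanced_latin_square(15)
print(square[0])  # order A: [1, 2, 15, 3, 14, 4, 13, 5, 12, 6, 11, 7, 10, 8, 9]
print(square[1])  # order B: [2, 3, 1, 4, 15, 5, 14, 6, 13, 7, 12, 8, 11, 9, 10]

# Every condition appears exactly once in each row and in each column.
for i in range(15):
    assert sorted(square[i]) == list(range(1, 16))
    assert sorted(row[i] for row in square) == list(range(1, 16))
```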
Table 8.2: Latin square of condition orders in our experiment

Condition order    Conditions (in order of play)
A        1  2 15  3 14  4 13  5 12  6 11  7 10  8  9
B        2  3  1  4 15  5 14  6 13  7 12  8 11  9 10
C        3  4  2  5  1  6 15  7 14  8 13  9 12 10 11
D        4  5  3  6  2  7  1  8 15  9 14 10 13 11 12
E        5  6  4  7  3  8  2  9  1 10 15 11 14 12 13
F        6  7  5  8  4  9  3 10  2 11  1 12 15 13 14
G        7  8  6  9  5 10  4 11  3 12  2 13  1 14 15
H        8  9  7 10  6 11  5 12  4 13  3 14  2 15  1
I        9 10  8 11  7 12  6 13  5 14  4 15  3  1  2
J       10 11  9 12  8 13  7 14  6 15  5  1  4  2  3
K       11 12 10 13  9 14  8 15  7  1  6  2  5  3  4
L       12 13 11 14 10 15  9  1  8  2  7  3  6  4  5
M       13 14 12 15 11  1 10  2  9  3  8  4  7  5  6
N       14 15 13  1 12  2 11  3 10  4  9  5  8  6  7
O       15  1 14  2 13  3 12  4 11  5 10  6  9  7  8

Table 8.3: Condition orders in our experiment (the same as Table 8.2, but with the names of the conditions given)

A: closed Tit for Tat, closed Random, open Bully, closed Benevolent, open Deterrent, closed Deterrent, open Benevolent, closed Bully, open Random, semiclosed Tit for Tat, open Tit for Tat, semiclosed Random, semiclosed Bully, semiclosed Benevolent, semiclosed Deterrent
B: closed Random, closed Benevolent, closed Tit for Tat, closed Deterrent, open Bully, closed Bully, open Deterrent, semiclosed Tit for Tat, open Benevolent, semiclosed Random, open Random, semiclosed Benevolent, open Tit for Tat, semiclosed Deterrent, semiclosed Bully
C: closed Benevolent, closed Deterrent, closed Random, closed Bully, closed Tit for Tat, semiclosed Tit for Tat, open Bully, semiclosed Random, open Deterrent, semiclosed Benevolent, open Benevolent, semiclosed Deterrent, open Random, semiclosed Bully, open Tit for Tat
D: closed Deterrent, closed Bully, closed Benevolent, semiclosed Tit for Tat, closed Random, semiclosed Random, closed Tit for Tat, semiclosed Benevolent, open Bully, semiclosed Deterrent, open Deterrent, semiclosed Bully, open Benevolent, open Tit for Tat, open Random
E: closed Bully, semiclosed Tit for Tat, closed Deterrent, semiclosed Random, closed Benevolent, semiclosed Benevolent, closed Random, semiclosed Deterrent, closed Tit for Tat, semiclosed Bully, open Bully, open Tit for Tat, open Deterrent, open Random, open Benevolent
F: semiclosed Tit for Tat, semiclosed Random, closed Bully, semiclosed Benevolent, closed Deterrent, semiclosed Deterrent, closed Benevolent, semiclosed Bully, closed Random, open Tit for Tat, closed Tit for Tat, open Random, open Bully, open Benevolent, open Deterrent
G: semiclosed Random, semiclosed Benevolent, semiclosed Tit for Tat, semiclosed Deterrent, closed Bully, semiclosed Bully, closed Deterrent, open Tit for Tat, closed Benevolent, open Random, closed Random, open Benevolent, closed Tit for Tat, open Deterrent, open Bully
H: semiclosed Benevolent, semiclosed Deterrent, semiclosed Random, semiclosed Bully, semiclosed Tit for Tat, open Tit for Tat, closed Bully, open Random, closed Deterrent, open Benevolent, closed Benevolent, open Deterrent, closed Random, open Bully, closed Tit for Tat
I: semiclosed Deterrent, semiclosed Bully, semiclosed Benevolent, open Tit for Tat, semiclosed Random, open Random, semiclosed Tit for Tat, open Benevolent, closed Bully, open Deterrent, closed Deterrent, open Bully, closed Benevolent, closed Tit for Tat, closed Random
J: semiclosed Bully, open Tit for Tat, semiclosed Deterrent, open Random, semiclosed Benevolent, open Benevolent, semiclosed Random, open Deterrent, semiclosed Tit for Tat, open Bully, closed Bully, closed Tit for Tat, closed Deterrent, closed Random, closed Benevolent
K: open Tit for Tat, open Random, semiclosed Bully, open Benevolent, semiclosed Deterrent, open Deterrent, semiclosed Benevolent, open Bully, semiclosed Random, closed Tit for Tat, semiclosed Tit for Tat, closed Random, closed Bully, closed Benevolent, closed Deterrent
L: open Random, open Benevolent, open Tit for Tat, open Deterrent, semiclosed Bully, open Bully, semiclosed Deterrent, closed Tit for Tat, semiclosed Benevolent, closed Random, semiclosed Random, closed Benevolent, semiclosed Tit for Tat, closed Deterrent, closed Bully
M: open Benevolent, open Deterrent, open Random, open Bully, open Tit for Tat, closed Tit for Tat, semiclosed Bully, closed Random, semiclosed Deterrent, closed Benevolent, semiclosed Benevolent, closed Deterrent, semiclosed Random, closed Bully, semiclosed Tit for Tat
N: open Deterrent, open Bully, open Benevolent, closed Tit for Tat, open Random, closed Random, open Tit for Tat, closed Benevolent, semiclosed Bully, closed Deterrent, semiclosed Deterrent, closed Bully, semiclosed Benevolent, semiclosed Tit for Tat, semiclosed Random
O: open Bully, closed Tit for Tat, open Deterrent, closed Random, open Benevolent, closed Benevolent, open Random, closed Deterrent, open Tit for Tat, closed Bully, semiclosed Bully, semiclosed Tit for Tat, semiclosed Deterrent, semiclosed Random, semiclosed Benevolent


