Decision Making In Prisoner’s Dilemma




3.2 Basic features

Even more importantly, Simon formulated seminal postulates about bounded rationality. Human (as well as computer) rationality is bounded (approximate, heuristic), mostly because of limited computational power (1); the only remaining practical solution is some kind of heuristic operation, such as satisficing (2); and another, simpler description of decision making is offered in (3).


(1) The foremost reason why people are not flawlessly rational utility maximizers is that computing the optimal solution is not feasible for almost any moderately complex problem (or, to put it more generally: the information needed to determine the optimal solution is not available to the decision maker). Finding the optimal solution is computationally costly, so we resort to heuristics and approximate methods of problem solving/decision making: we make good (but not optimal) decisions and find good (but not optimal) solutions to problems. Some problems require so much computation that they cannot be solved by present or prospective intelligent actors (people or computers). One example is a perfect game of chess: you would have to examine more chess positions than there are molecules in the universe – the search would require exploring a tree of some 10^120 branches (Simon, 1978b). Real-life problems typically offer even greater combinatorial variability than chess, although this is not as obviously demonstrable (Simon, 1990). Another example of a problem beyond the scope of practical computation is an optimal class schedule for a university – the utility-maximizing solution of such a problem would come from queuing theory, but queuing theory can solve only rather simple and well-structured problems with a few variables (such as the production schedule for a factory that produces one or two products using one or two different pieces of equipment; Simon, 1990, p. 8). Bounded rationality affects human decision making in strategic games (see for example Colman, 2003 – humans are not able to perform the indefinitely recursive reasoning presupposed by the axiomatic theory of games).
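To make the scale of this combinatorial explosion concrete, the following sketch (my illustration, not Simon's; the branching factor of about 35 legal moves per position and a game length of about 80 plies are rough textbook estimates) counts the positions an exhaustive game-tree search would have to examine, and contrasts them with a shallow heuristic look-ahead:

    # Illustrative sketch: rough estimates of game-tree sizes, not figures from Simon.
    def exhaustive_tree_size(branching_factor: int, depth: int) -> int:
        """Number of terminal positions an exhaustive search would have to examine."""
        return branching_factor ** depth

    print(exhaustive_tree_size(35, 80))  # ~3e123 positions - of the order of Simon's 10^120
    print(exhaustive_tree_size(35, 4))   # ~1.5e6 positions - a shallow heuristic look-ahead is feasible

No conceivable increase in hardware speed closes a gap of this size, which is why selective, heuristic search is the only practical option.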
(2) Because we cannot solve problems/make decisions optimally, we must find (and, over evolutionary as well as recent history, have found – see section 3.3) ways to solve problems approximately (heuristically) (Simon, 1990). Simon (1991) called this kind of problem solving the principle of bounded rationality. Examples of bounded-rationality solutions are satisficing (more in the next paragraph; see also Simon, 1957, 1959, 1990), means-ends analysis (for example Simon, 1990; in means-ends analysis we follow the rule that every step must reduce the difference between the current situation and the desired goal), the heuristics described by Kahneman and Tversky (see section 5.), or simple effective strategies such as “Tit for Tat” or “Always Defect” in playing the Prisoner’s Dilemma (see section 6.).
We use satisficing when we have to choose among many alternatives (typically hundreds or more; satisficing is also called heuristic search), when we have only a vague idea of the number and structure of all the possibilities, or when the alternatives are in some respect incommensurable (they have several incommensurable dimensions of value, the possible outcomes are uncertain, or the outcomes affect the values of more than one person – see Simon, 1990). When we want to find a satisfactory solution, we decide approximately beforehand (with possible corrections throughout the process) what features the solution must have, and we halt the search as soon as we find such a solution.
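A minimal sketch of satisficing as aspiration-level search follows; the aspiration threshold, the stream of alternatives, and the scoring function are illustrative assumptions of mine, not Simon's formalization:

    import random

    # Illustrative sketch of satisficing: stop at the first alternative that meets
    # a pre-set aspiration level instead of evaluating every alternative.
    def satisfice(alternatives, evaluate, aspiration_level):
        """Return the first alternative whose value meets the aspiration level,
        or the best one seen so far if none does."""
        best, best_value = None, float("-inf")
        for alternative in alternatives:
            value = evaluate(alternative)
            if value >= aspiration_level:
                return alternative            # good enough - halt the search here
            if value > best_value:
                best, best_value = alternative, value
        return best                           # nothing satisficed; fall back to the best seen

    # Hypothetical usage: pick a "good enough" apartment from a long stream of offers.
    offers = ({"rent": random.randint(500, 1500), "rooms": random.randint(1, 4)} for _ in range(1000))
    print(satisfice(offers, evaluate=lambda o: 300 * o["rooms"] - o["rent"], aspiration_level=300))

The point of the sketch is that the search usually terminates long before all alternatives have been inspected, which is exactly the computational saving the satisficing decision maker is after.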
(3) Simon sometimes (1978a, p. 7) tends to describe rational decision making in the most common-sense terms: “Situations and practices will be preferred when important favorable consequences are associated with them, and avoided when important unfavorable consequences are associated with them.” This description, however, begs the question – how do we discern favorable (preferred) situations and consequences? To answer this question properly, we probably have to turn to procedures of heuristic decision making (section 5.), satisficing (heuristic search, see above), and other related phenomena (computation cost, cognitive tools of selection and recognition, the cognitive limitations of short-term memory – see paragraph (4) in section 3.3, etc.).


3.3 Bounded rationality: developments

In humans as well as in animals we can observe various trade-offs between the problem-solving capacity of a given level of rationality and the cost of achieving that level of rationality. Such trade-offs result in various specific, sub-rational, “weakly” or “boundedly” rational forms of problem solving/decision making. We also observe trade-offs between different purposes when only a limited amount of energy, time, and/or computation is available, which leads natural selection to “adopt” some kind of satisficing approach – for example, secondary sexual ornamentation in birds must be conspicuous enough to attract mates (purpose A) but discreet enough not to attract too much attention from predators (purpose B).


As to recent developments of bounded rationality theory, I would like to mention the following topics/areas (for a more comprehensive account of these topics see Simon, 1978b):

(1) Operations research and management science, e.g. queuing theory, search theory, and linear, dynamic, integer, and geometric programming – the task of these techniques is, in general, to reach satisfactory solutions at a relatively low computational cost.


(2) Artificial intelligence: combining machine computational power with human-like selectivity – diagnostic programs and modern chess programs (for the algorithmic nature of chess programs, see for example Dennett, 1995, pp. 439-440). Human-like selectivity of programs can be achieved with the help of algorithms for pruning search trees, thereby reducing the total search effort, or with the help of databases of key inputs, cues, and patterns – see also the cognitive simulation paragraph (4) below; see also Chi et al., 1988.
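As one concrete way of pruning a search tree (alpha-beta pruning is my example here; the text does not single out a specific algorithm), a branch can be abandoned as soon as it is provably no better than an alternative that has already been examined:

    # Illustrative sketch of alpha-beta pruning: branches that cannot improve on what
    # is already known are cut off, so only part of the full game tree is examined.
    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):    # leaf: an evaluation of a terminal position
            return node
        value = float("-inf") if maximizing else float("inf")
        for child in node:
            score = alphabeta(child, not maximizing, alpha, beta)
            if maximizing:
                value = max(value, score)
                alpha = max(alpha, value)
            else:
                value = min(value, score)
                beta = min(beta, value)
            if beta <= alpha:                 # the opponent would never allow this line
                break                         # prune the remaining children
        return value

    # Toy game tree (nested lists; numbers are evaluations of terminal positions).
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree, maximizing=True))   # 3 - same answer as full minimax, fewer nodes visited

The pruned search returns the same value as an exhaustive minimax search, which is what makes this kind of selectivity attractive: effort is saved without changing the result.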

(3) Computational complexity: some problems are too complex ever to be solved exactly; these are the so-called exponential problems, where the time required to solve the problem rises exponentially as we increase the number of problem elements. The computational cost, though, can be greatly reduced if we permit a small fraction of erroneous proofs. Simon concludes that we must take seriously the “needs of approximation in our theories of problem solving and decision making” (Simon, 1978b, pp. 501-502). Incidentally, there are speculative proposals for computers – “quantum computers” – that would be able to search dramatically larger and more complex problem spaces than today’s fastest computers (Deutsch, 1985).
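A classic illustration of how permitting a small fraction of erroneous proofs slashes the cost of computation (the example is mine; Simon's remark refers to probabilistic proof methods in general) is probabilistic primality testing: a handful of random trials replace an exhaustive search for divisors, at the price of a tiny chance of error.

    import random

    # Illustrative sketch: a Fermat-style probabilistic primality test. For most
    # composite n, each random base has at least a 1/2 chance of exposing n as
    # composite, so a few cheap trials leave only a tiny probability of error
    # (rare Carmichael numbers are the exception; stronger tests handle them).
    def probably_prime(n: int, trials: int = 20) -> bool:
        if n < 4:
            return n in (2, 3)
        for _ in range(trials):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:         # witness found: n is certainly composite
                return False
        return True                           # "prime", with a small probability of error

    print(probably_prime(2_147_483_647))      # True  (a known prime, 2^31 - 1)
    print(probably_prime(2_147_483_645))      # False (composite; divisible by 5)

Accepting this small, controllable error probability buys an enormous speed-up over an exhaustive search for divisors – the trade-off between certainty and computational cost that the paragraph above describes.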


(4) Cognitive theory: for example, identifying short-term memory as the bottleneck for the capacity/power of human cognition/processing and real-time problem solving/decision making (see Miller, 1956). The relatively low capacity of short-term memory may be the reason why people fail to integrate earlier and later evidence in making judgments (see Simon, 1978b, pp. 502-503; Kahneman & Tversky, 1973). The weakness of short-term memory is compensated for by human cognitive sensitivity to a great number of key inputs that trigger patterns (an input or inputs followed by a reaction or reactions) stored in long-term memory (this is called recognition of patterns), thus providing quick solutions to particular problems. The number of stored cues (patterns) is 50,000 or more (Simon, 1978b; Simon, 1990). Simon (1967), to give another example of cognitive theory research, deals with emotional controls of behavior (focusing attention, queuing multiple goals, and terminating behavior).
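A minimal sketch of the recognition mechanism just described (the cue table, its contents, and the fallback search are my illustrative assumptions): a familiar cue retrieves a stored pattern from long-term memory immediately, and only unfamiliar situations fall back to slow, capacity-limited search.

    # Illustrative sketch of recognition: familiar cues retrieve a stored reaction
    # at once; only unrecognized situations require slow, deliberate search.
    LONG_TERM_MEMORY = {                      # hypothetical cue -> stored pattern (reaction)
        "opponent defected last round": "defect",
        "opponent cooperated last round": "cooperate",
    }

    def decide(cue: str, slow_search) -> str:
        if cue in LONG_TERM_MEMORY:           # recognition: fast, effortless retrieval
            return LONG_TERM_MEMORY[cue]
        return slow_search(cue)               # fallback: costly search within short-term-memory limits

    print(decide("opponent defected last round", slow_search=lambda c: "deliberate..."))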

