The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration

Robert Axelrod
Princeton, New Jersey: Princeton University Press
1997
Cloth: ISBN 0-691-01568-6; Paper: ISBN 0-691-01567-6

Reviewed by
Ken Binmore
ELSE, Economics Department, University College London.

Robert Axelrod's recent Complexity of Cooperation is a sequel to his celebrated Evolution of Cooperation (1984). In the new book, updated versions of his papers on the subject since 1984 are reprinted together with a commentary that places them in context.

Axelrod's championing of the strategy TIT-FOR-TAT in The Evolution of Cooperation originated a vast literature that ranges from idolisers who credit Axelrod with achievements beyond his wildest aspirations to sceptics who dismiss his work as mere hype. An example of the former appears in Watson's (1995, p. 182) Dark Nature:

In 1979, Robert Axelrod discovered that [TIT-FOR-TAT], and as far as we know, only this strategy, is stable and resists all invasions by rival programmes. And that was a vital find ...

Such hero worship from science popularisers is to be compared with the sour observations of economists and game theorists. For example, Martinez-Coll and Hirshleifer (1991) remark that:

Owing mainly to Axelrod's studies of evolutionary competition in an iterated Prisoners' Dilemma context, a rather astonishing claim has come to be widely accepted: to wit, that the simple reciprocity behaviour known as TIT-FOR-TAT is a best strategy not only in the particular environment modelled by Axelrod's simulations but quite generally. Or still more sweepingly, that TIT-FOR-TAT can provide the basis for co-operation in complex social interactions among humans, and even explains the evolution of social co-operation over the whole range of life.

Where does the truth lie in this extraordinary range of assessments of Axelrod's work?

Since I am a game theorist, it will not be hard to guess where my sympathies lie. It is certainly irritating to find the jacket of Complexity of Cooperation congratulating Axelrod on his groundbreaking work in game theory, by which is usually meant his rediscovery of the fact that full co-operation can be sustained as an equilibrium in some indefinitely repeated games. But this fact had been well known for more than a quarter of a century before Axelrod began to write on the subject. The folk theorem of game theory, proved by several authors simultaneously in the early fifties, not only demonstrates this fact but describes in precise detail all of the outcomes of a repeated game that can be sustained as equilibria.

However, although Axelrod didn't discover the folk theorem, I believe that he did make an important contribution to game theory, one that has nothing to do with the particular strategy TIT-FOR-TAT, nor with the mechanisms that sustain any of the other equilibria in the indefinitely repeated Prisoners' Dilemma. He did us the service of focusing our attention on the importance of evolution in selecting an equilibrium from the infinitude of possibilities whose existence is demonstrated by the folk theorem. Other game theorists may protest at my recognising someone who knew no game theory at the time he made his contribution and still resolutely ignores game-theoretic commentary on his work, but it is inescapable that the evolutionary ideas pioneered by Axelrod now provide the standard approach to the equilibrium selection problem in game theory. But it is necessary to insist that to recognise Axelrod as a pioneer in evolutionary equilibrium selection is to endorse neither his claims for the strategy TIT-FOR-TAT, nor his unwillingness to see what theory can do before resorting to complicated computer simulations.

My Playing Fair discusses the tit-for-tat bubble at length (Binmore 1994, p. 194). In brief, Axelrod invited various social scientists to submit computer programs for an Olympiad in which each entry would be matched against every other entry in the indefinitely repeated Prisoners' Dilemma. After learning the outcome of a pilot round, contestants submitted computer programs that implemented 63 of the possible strategies of the game. For example, TIT-FOR-TAT was submitted by the psychologist Anatol Rapoport, who invented the Symmetry Fallacy that purports to demonstrate that it is rational to co-operate in the one-shot Prisoners' Dilemma. The GRIM strategy, which punishes any deviation from co-operation by switching permanently to defection, was submitted by the economist James Friedman. And so on.

In the Olympiad, TIT-FOR-TAT was the most successful strategy. Axelrod then simulated the effect of evolution operating on his 63 strategies using an updating rule which ensures that strategies that achieve a high payoff in one generation are more numerous in the next. The fact that TIT-FOR-TAT was the most numerous of all the surviving programs at the end of the evolutionary simulation clinched the question for Axelrod, who then proceeded to propose TIT-FOR-TAT as a suitable paradigm for human co-operation in a very wide range of contexts. In describing its virtues, Axelrod (1984, p. 54) says:
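
To make the mechanics concrete, here is a minimal sketch in Python of a round-robin tournament followed by Axelrod-style "ecological" updating, in which a strategy's population share grows in proportion to its average payoff. The payoff values (T = 5, R = 3, P = 1, S = 0), the pool of strategies, the match length and all function names are illustrative assumptions, not Axelrod's actual code. Note that a fixed match length makes this, strictly speaking, the finitely repeated game, which is the very slip that Nachbar's criticism (discussed below) turns on; none of the three strategies here exploits the end effect.

```python
import itertools

# Standard Prisoners' Dilemma payoffs (an assumption: T=5, R=3, P=1, S=0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]

def grim(my_hist, their_hist):
    return 'D' if 'D' in their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def match(s1, s2, rounds=200):
    """Play one match of fixed length and return both total payoffs."""
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = s1(h1, h2), s2(h2, h1)
        u, v = PAYOFF[(a, b)]
        h1.append(a); h2.append(b)
        p1 += u; p2 += v
    return p1, p2

def ecological_step(strategies, shares):
    """One generation of 'ecological' updating: a strategy's share of the
    population grows in proportion to its average payoff per interaction."""
    n = len(strategies)
    fitness = [0.0] * n
    for i, j in itertools.product(range(n), repeat=2):
        p, _ = match(strategies[i], strategies[j])
        fitness[i] += shares[j] * p
    total = sum(s * f for s, f in zip(shares, fitness))
    return [s * f / total for s, f in zip(shares, fitness)]

pool = [tit_for_tat, grim, always_defect]
shares = [1 / 3, 1 / 3, 1 / 3]
for generation in range(50):
    shares = ecological_step(pool, shares)
print({f.__name__: round(s, 3) for f, s in zip(pool, shares)})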

What accounts for TIT-FOR-TAT's robust success is its combination of being nice, retaliatory, forgiving and clear. Its niceness prevents it from getting into unnecessary trouble. Its retaliation discourages the other side from persisting whenever defection is tried. Its forgiveness helps restore mutual co-operation. And its clarity makes it intelligible to the other player, thereby eliciting long-term co-operation.

To what extent are these claims justified? On examination, it turns out that TIT-FOR-TAT was not so very successful in Axelrod's simulation. Nor is the limited success it does enjoy robust when the initial population of entries is varied. The unforgiving GRIM does extremely well when the initial population of entries consists of all 26 finite automata with at most two states. Nor should we expect evolution to generate nice machines that are never the first to defect, provided that some small fraction of suckers worth exploiting continually flows into the system. As for clarity, in order for co-operation to evolve, it is only necessary that a mutant be able to recognise a copy of itself. All that is then left on Axelrod's list is the requirement that a successful strategy be retaliatory. But this is a lesson that applies only in pairwise interactions. In multi-person interactions, it need not be the injured party who punishes a cheater.

Before justifying these counterclaims, it is necessary to address Nachbar's (1992) more radical criticism: that Axelrod mistakenly ran an evolutionary simulation of the finitely repeated Prisoners' Dilemma. The use of a Nash equilibrium in the finitely repeated Prisoners' Dilemma necessarily results in both players always defecting - defection is optimal in the last round whatever has gone before, and the argument unravels backwards to the first round. We therefore wouldn't need a computer simulation to know what would survive if every strategy were present in the initial population of entries. The winning strategies would never co-operate.

Nachbar is correct to claim that Axelrod inadvertently ran his evolutionary simulations with the finitely repeated Prisoners' Dilemma (Binmore 1994, p. 199). This fact teaches a lesson about the potential unreliability of computer simulations that are run in ignorance of the underlying theory, but it does not necessarily invalidate Axelrod's conclusions, because none of the 63 entries actually submitted were programmed to exploit the end effects of the finitely repeated game. In fact, Linster (1990, 1992) obtained results that are close to those reported by Axelrod when he replicated the simulations with the finitely repeated Prisoners' Dilemma replaced by the infinitely repeated version. However, contrary to popular opinion, the result obtained is not that only TIT-FOR-TAT survives. Rather, a mixture of strategies survives. Amongst these, TIT-FOR-TAT is the most numerous, but it nevertheless controls only a little more than 1/6 of the population. How significant is it that TIT-FOR-TAT commands a plurality among the survivors?

Theory provides some help in answering this question. We know that Linster's simulation can only converge on one of the many Nash equilibria of the 63 x 63 matrix game whose pure strategies are the entries submitted to the Olympiad. If each of these strategies starts with an equal share of the population, then Axelrod and Linster's work shows that the system converges on a mixed Nash equilibrium in which TIT-FOR-TAT is played with probability about 1/6. However, we can make the system converge on a variety of the Nash equilibria of the 63 x 63 game by starting it off in the basin of attraction of whatever stable equilibrium takes our fancy. Axelrod tried six different initial conditions, and found that TIT-FOR-TAT was most numerous among the survivors five times out of six. Linster (1990, 1992) systematically explored all initial conditions, and found that TIT-FOR-TAT is played with greatest probability in the final mixture only about 1/4 of the time.
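
The dependence on initial conditions is easy to exhibit even in a toy setting. The sketch below iterates the standard replicator dynamics on a 3 x 3 pure coordination matrix standing in for the 63 x 63 game; the matrix, starting mixtures, step size and run length are all illustrative assumptions.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i((Ax)_i - x.Ax)."""
    f = A @ x            # payoff to each pure strategy against the mixture
    avg = x @ f          # average payoff in the population
    x = x + dt * x * (f - avg)
    return x / x.sum()   # renormalise against numerical drift

# A toy 3 x 3 coordination game standing in for the 63 x 63 game.
A = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

for x0 in ([0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]):
    x = np.array(x0)
    for _ in range(20_000):
        x = replicator_step(x, A)
    print(x0, '->', np.round(x, 3))
```

Each starting mixture converges to a different strict equilibrium, which is the point at issue: what the dynamics select is a fact about the basin of attraction in which the system begins, not about the game alone.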

But why restrict ourselves to Axelrod's 63 strategies? Why not follow Linster (1990, 1992) in beginning with all finite automata with at most two states? The system then converges from a wide variety of initial conditions to a mixture in which the strategy GRIM is played with probability greater than 1/2. But GRIM is not forgiving. On the contrary, it gets its name from its relentless punishment of any deviation for all eternity.

To evaluate the evolutionary claims for niceness that Axelrod makes on behalf of TIT-FOR-TAT, it is necessary to turn to simulations that mimic the noisy processes of mutation and sexual reproduction. In an innovative paper, Axelrod (1987, 1997) pioneered the use of Holland's (1992a, 1992b) genetic algorithm for this purpose. (Axelrod refers to the deterministic simulations of his earlier work as "ecological" to distinguish them from these "evolutionary" simulations.) Axelrod's pilot study considered only 40 simulations of 50 generations each, but Probst (1996) later went the whole hog by running very large numbers of simulations for very long periods without imposing binding complexity constraints. In the first of the papers reprinted and revised for Complexity of Cooperation, Axelrod (1987) found that mean machines were thriving at the end of 11 of his 40 simulations, but Probst shows that Axelrod was wrong to dismiss this phenomenon as a transient blip preceding an ultimate take-over by naive reciprocators like TIT-FOR-TAT. On the contrary, it is the initial success of naive reciprocators like TIT-FOR-TAT that turns out to be transient. In the long run, mean machines triumph. Axelrod's (1984) claim that evolution should be expected to generate nice machines in the indefinitely repeated Prisoners' Dilemma therefore turns out to be mistaken. Binmore (1994, p. 202) reports the original simulations Probst ran for his master's thesis in Basel. The later studies of his doctoral thesis are shortly to be published (Pollack and Probst 1998, Probst 1996).
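
For readers unfamiliar with the genetic-algorithm approach, a minimal sketch follows. Axelrod's actual chromosomes conditioned play on the previous three rounds of the game; to keep the example short, this sketch encodes a strategy in five bits conditioning only on the previous round (an assumption of mine, not his encoding), with fitness-proportional selection, one-point crossover and point mutation.

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# A genome is 5 bits: the first move, then the reply to each possible
# last-round outcome (CC, CD, DC, DD), where 1 means co-operate.
# TIT-FOR-TAT is (1, 1, 0, 1, 0); TAT-FOR-TIT/PAVLOV is (0, 1, 0, 0, 1).
MOVES = ('D', 'C')
SLOT = {('C', 'C'): 1, ('C', 'D'): 2, ('D', 'C'): 3, ('D', 'D'): 4}

def play(g1, g2, rounds=100):
    a, b = MOVES[g1[0]], MOVES[g2[0]]
    p1 = p2 = 0
    for _ in range(rounds):
        u, v = PAYOFF[(a, b)]
        p1 += u; p2 += v
        a, b = MOVES[g1[SLOT[(a, b)]]], MOVES[g2[SLOT[(b, a)]]]
    return p1, p2

def fitness(pop):
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(len(pop)):
            if i != j:
                scores[i] += play(pop[i], pop[j])[0]
    return scores

def next_generation(pop, scores, mutation_rate=0.01):
    new = []
    for _ in range(len(pop)):
        mum, dad = random.choices(pop, weights=scores, k=2)
        cut = random.randrange(1, 5)           # one-point crossover
        child = list(mum[:cut] + dad[cut:])
        for k in range(5):                     # point mutation
            if random.random() < mutation_rate:
                child[k] ^= 1
        new.append(tuple(child))
    return new

random.seed(0)
pop = [tuple(random.randint(0, 1) for _ in range(5)) for _ in range(20)]
for generation in range(50):
    pop = next_generation(pop, fitness(pop))
print(sorted(set(pop)))
```

Runs of this kind are cheap, which is precisely why Probst could afford very large numbers of them where Axelrod's pilot study stopped at forty.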

The two-state finite automaton TAT-FOR-TIT is the simplest example of the type of mean machine that emerges from Probst's simulations. This strategy begins by defecting and continues to defect until the opponent also defects. At that point TAT-FOR-TIT switches to its co-operative state. It continues to co-operate until the opponent defects, which behaviour it punishes by returning to the defection state in which it started the game. A player using TAT-FOR-TIT therefore begins by trying to exploit his opponent, and only starts to co-operate if he finds that she is trying to exploit him in the same way that he is trying to exploit her.
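
The strategies under discussion are easy to write down as Moore machines. In the sketch below (the names and the dictionary representation are my own illustrative choices), each state carries an output and a transition for each move the opponent might make; running two TAT-FOR-TITs against each other exhibits the opening probe followed by co-operation.

```python
# Each two-state Moore machine maps a state to its output and to a
# transition for each move the opponent might make (illustrative encoding).
MACHINES = {
    'TIT-FOR-TAT': {'start': 'c',
                    'c': ('C', {'C': 'c', 'D': 'd'}),
                    'd': ('D', {'C': 'c', 'D': 'd'})},
    'GRIM':        {'start': 'c',
                    'c': ('C', {'C': 'c', 'D': 'd'}),
                    'd': ('D', {'C': 'd', 'D': 'd'})},   # never forgives
    'TAT-FOR-TIT': {'start': 'd',                        # opens by exploiting
                    'd': ('D', {'C': 'd', 'D': 'c'}),
                    'c': ('C', {'C': 'c', 'D': 'd'})},
}

def run(m1, m2, rounds=6):
    s1, s2 = m1['start'], m2['start']
    history = []
    for _ in range(rounds):
        a, b = m1[s1][0], m2[s2][0]
        history.append((a, b))
        s1, s2 = m1[s1][1][b], m2[s2][1][a]
    return history

# Two TAT-FOR-TITs probe each other with an opening (D, D), read the mutual
# defection as a signal that neither is a sucker, and co-operate thereafter.
print(run(MACHINES['TAT-FOR-TIT'], MACHINES['TAT-FOR-TIT']))
```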

The strategy TAT-FOR-TIT was first described by Rapoport and Chammah (1965) in their early book on the Prisoner's Dilemma, under the prejudicial name of SIMPLETON. Conclusions similar to Probst's are reached by the biologists Nowak and Sigmund (1990, 1992, 1993; Nowak, Sigmund and El-Sedy 1993; Sigmund 1993), albeit in less decisive simulations. They refer to TAT-FOR-TIT as PAVLOV because it stays in the same state when it wins but shifts when it loses. I prefer to stick with the earlier terminology, introduced in a paper of Banks and Sundaram (1990) that uses theoretical tools to show that introducing complexity considerations into an evolutionary analysis destabilises all equilibrium mixtures that fail to include a mean machine. My reason is that the name TAT-FOR-TIT recognises the importance of the signalling role of the opening phase, during which two mean machines (that will eventually co-operate) explore the possibility that the other machine might be exploitable. Abreu and Rubinstein (1988) discuss this point in detail.

Axelrod's (1997, p. 21) new Complexity of Cooperation ignores the widespread criticism from game theorists discussed above. It recognises the unease registered by some biologists, but nevertheless reiterates the original claim that TIT-FOR-TAT embodies the essential features of a successful strategy for the indefinitely repeated Prisoners' Dilemma. But what of Axelrod's own discovery that evolution sometimes favours mean machines? Axelrod argues that mean machines did well only because they could exploit suckers. He claims that, if his simulations had been run for longer, the mean machines which initially did well would have fallen by the wayside after eliminating the suckers on which their success depends.

The intuition behind this argument is certainly valid for a population that initially consists only of TIT-FOR-TAT together with strategies that always co-operate or always defect, provided that no new strategies are allowed to intrude. If strategies that always co-operate dominate at the outset, then strategies that always defect will do well initially, but then fade away altogether as the unconditional co-operators on which they prey become increasingly infrequent. But the intuition derived from this example doesn't extend to the case when the unconditional defectors are replaced by TAT-FOR-TITs. If one starts the latter system off in the right basin of attraction, the final population will consist only of TAT-FOR-TITs. It is equally true that the final population will contain no TAT-FOR-TITs at all if the system is started off in the second of the two basins of attraction, but drawing attention to this fact doesn't make the other basin of attraction go away. I register this point because Wu and Axelrod (1995, Axelrod 1997) attempt a similar rhetorical trick in seeking to rebut the claims made by Nowak and Sigmund for PAVLOV in the second of the papers reprinted in the Complexity of Cooperation. They add TAT-FOR-TIT and three other strategies to the initial population in Axelrod's original ecological simulation. This perturbation turns out to be insufficient to shift the system into the basin of attraction of an equilibrium mixture containing TAT-FOR-TIT, but so what?

The persistence of the tit-for-tat bubble is a mystery to game theorists. Why do science writers continue to use TIT-FOR-TAT as the paradigm for human co-operation? Authors like Ridley (1996) are aware of the criticisms of Axelrod's work surveyed here. Do they not understand them?

Sometimes TIT-FOR-TAT is endorsed under the misapprehension that it means no more than "tit for tat" in ordinary English. Further confusion then follows because the colloquial usage carries a fairness connotation. For example, a journalist recently told me that TIT-FOR-TAT is a scientific fact because badgers apparently split the time they spend grooming each other very equally. But why is this relevant to TIT-FOR-TAT? Even in ordinary English, the tit that follows a tat is a punishment that fits the crime.

To make this observation is not to deny that one needs to appeal to fairness when matching a tit to a tat. On the contrary, my forthcoming Just Playing argues that the natural response when Adam cheats on a fair deal is for Eve to take whatever action is necessary to reinstate the status quo that held good before the deal was reached (Binmore 1998). The loss that a player then sustains is equal to the gain he anticipated receiving as a result of implementing the deal. Since these gains are calculated using the prevailing standards of fairness, so are the losses that result from Eve's punishment of Adam's deviation. The tit is therefore fairly determined by the tat. However, such fairness considerations are entirely absent in the type of Olympiad at which Axelrod (1984) crowned TIT-FOR-TAT with the laurel wreath of victory.

Other popularisers are so seduced by the idea that evolution will necessarily make us nice that they see no need at all to examine the scientific evidence they quote in its support. When confronted with the shortcomings of the evidence, they give themselves away by changing their ground. Their enthusiasm for TIT-FOR-TAT then really turns out to be based on their experiences of being brought up in a comfortable middle-class household. But the anecdotes about the social dynamics of the middle-classes with which they defend TIT-FOR-TAT are irrelevant to the repeated Prisoners' Dilemma, which models the interaction between two strangers. To understand the social contracts that operate within middle-class insider-groups, one must remember that the sons and daughters of bourgeois families enter a multi-player game that began long ago.

The simplest game that seems to capture something of the intuition that popularisers have mistakenly learned to label with the TIT-FOR-TAT tag is an overlapping generations model in which three players are alive at any time. Occasionally, one of the players dies and is immediately replaced by a new player. In each period, two of the players are matched at random to play the Prisoners' Dilemma, while the third player looks on. Long ago, an equilibrium was somehow established which now requires that each player always co-operates. A player who fails to do so will find that the opponent with whom he is next matched will punish him by defecting - whoever that opponent may be. Yesterday, the players were Adam, Eve and Ichabod. But Ichabod died overnight and has been replaced by Olive. She is now matched with Adam. Why does Adam treat her nicely by co-operating? After all, we know that there are many mean equilibria that might form the basis for a social contract in the mini-society consisting only of Adam and Olive. Some of these mean equilibria would allow Adam and Olive to explore the possibility that their new opponent is a sucker who can be exploited. But these equilibria are unavailable because of the presence of Eve. She enforces nice behaviour from the word go by being ready to punish anyone who is nasty.

More generally, when children grow up within a middle-class insider group, they learn to treat other insiders with a consideration denied to outsiders. Insiders who don't conform soon find themselves treated as outsiders unless they mend their ways. However, this is as far as the analogy with TIT-FOR-TAT goes. Nature has not brought the same sweetness and light that operates within middle-class insider-groups to the world at large. The outsiders who lurk in dark alleys with rape and mayhem in their hearts are neither nice nor forgiving. Nor do sharks only cruise in murky waters. They also swim in brightly lit boardrooms and patrol the corridors of power. Such upper-crust sharks show beautiful teeth as they prey upon our bank accounts and raid the pension funds of elderly widows. But we would be the fools they take us for if we returned the smiles with which they try to convince us that they are nice people like ourselves.
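
Returning to the overlapping generations model: a back-of-envelope check shows when Eve's readiness to punish deters Adam from trying to exploit the newcomer. The sketch assumes standard Prisoners' Dilemma payoffs (T = 5, R = 3, P = 1), a per-period discount factor delta, and, purely for illustration, the harshest norm, under which a deviant is met with defection in every later match.

```python
# Is it worth Adam's while to defect on Olive under community enforcement?
T, R, P = 5, 3, 1   # assumed temptation, reward and punishment payoffs

def deviation_gain(delta):
    """Net gain from cheating once and then being ostracised: T instead of
    R today, P instead of R in every later match, discounted by delta."""
    return (T - R) - delta * (R - P) / (1 - delta)

for delta in (0.4, 0.6, 0.9):
    verdict = 'profitable' if deviation_gain(delta) > 0 else 'deterred'
    print(f'delta = {delta}: deviation is {verdict}')
```

Note that the deterrent works although the punishment comes from whoever happens to be matched with the deviant next, not from the injured party, exactly as in the three-player story above.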

Political theorists make a bad mistake when they invent theories that remove nastiness from the world. It just isn't true that nastiness is irrational, or that evolution will eventually sweep it away. As Hume (1985 [1758]) warned, our constitutions therefore need to be armoured against the modern methods that rogues and knaves posing as insiders have developed to subvert our social contract. Even more urgent is the problem of finding ways of reducing conflict between mutually hostile groups. Can Serbs and Croats eventually be persuaded to stop treating each other as outsiders and start being nice to each other again? Is there any hope for Northern Ireland or the Middle East?

Axelrod (1984) gives one striking example of the emergence of such co-operation. In the First World War, there were several fleeting outbreaks of implicit collusion between units of the British and German armies, in which each side ceased to shell the other. Axelrod (1984) attributes this behaviour to tit-for-tat reasoning, but such an explanation overlooks the obvious fact that the players didn't begin by being nice to each other. I agree that it is vital to understand how such co-operation between groups who treat each other as outsiders can get off the ground, but there seems no point at all in seeking to analyse the emergence of co-operation using a model that takes the conclusion for granted.

The preceding criticism of the TIT-FOR-TAT paradigm is extracted from Chapter 3 of my forthcoming Just Playing (Binmore 1998). The chapter also has much to say about the light that theory can shed on the matters that Axelrod treats entirely through computer simulation. To a game theorist, such a wilful refusal to learn any lessons from theory seems almost criminal. What is the point of running a complicated simulation to find some of the equilibria of a game when they can often be easily computed directly? But this is a routine mistake among social scientists who use simulation techniques. Often the simulators are unaware even that they are studying a game, or that their simulations cannot but converge on an equilibrium of this game, if they converge at all. Sometimes a theorist can see immediately that a simulation must be wrong because it is said to have converged to something that is not an equilibrium of the underlying game, but I have never known a simulator feel any need to rethink his results after learning this fact.

Rather than talk about the Prisoners' Dilemma any more, I propose to pursue my point about the value of theory by briefly looking at a third paper, "Promoting Norms", from Complexity of Cooperation. As with his simulations of the repeated Prisoners' Dilemma, Axelrod (1986) is ready to draw large conclusions from his simulations of what he calls the Norms Game. Axelrod knows nothing of it, but it happens that a related game has achieved notoriety in both the economic and psychological literatures. This is the Ultimatum Game. A philanthropist offers 100 dollars to Adam and Eve if they can agree on how to split it. The bargaining rules require that Adam make a proposal to Eve, which she can accept or refuse. If she refuses, both players get nothing. A rational expectations argument predicts that Adam will exploit his bargaining power and get nearly all the money. If Eve cares only about money, he will reason that she will accept any positive amount, since something is better than nothing. He will therefore offer her only one cent. But experiment shows that Adam would be unwise to offer Eve less than a third of the available money, since the probability that she will say no is then about one half. In fact, players commonly agree on a split that gives Adam only a little more than half the money (Güth et al. 1982).
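
The experimental point is just arithmetic. Taking the reported rejection rate at face value, and assuming for illustration that an even split is always accepted, a greedy offer is worth less to Adam in expectation than a fair one:

```python
stake = 100
low_offer, p_reject_low = 30, 0.5     # reported: low offers refused half the time
fair_offer, p_reject_fair = 50, 0.0   # assumption: even splits are accepted

ev_low = (1 - p_reject_low) * (stake - low_offer)     # 0.5 * 70 = 35
ev_fair = (1 - p_reject_fair) * (stake - fair_offer)  # 1.0 * 50 = 50
print(ev_low, ev_fair)   # the greedy offer is worth less in expectation
```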

Since any division of the money can be supported as an equilibrium, one can think of the problem faced by Adam and Eve in the Ultimatum Game as an equilibrium selection problem. The criterion used in their society to solve this problem can then be regarded as a social norm. Rather than turning immediately to computer simulation, one can use theory to address the problem faced by evolution in coming up with such a norm. Binmore, Gale and Samuelson (1995) began by looking at a simplified version of the Ultimatum Game that is also a simplified version of Axelrod's Norms Game. The precise values of the payoffs are not important, but let us assume that the philanthropist puts four dollars on the table. Adam is then allowed either to propose an equal split of two dollars each, or else the split in which he gets three dollars and Eve gets only one. Assume further that Eve always accepts the equal split proposal, so that attention can be concentrated on what happens when she is offered the unequal split. In such a situation, it is straightforward to analyse the convergence properties of the replicator dynamics used by biologists to model the simplest evolutionary processes. As with the repeated Prisoners' Dilemma, the equilibrium to which the system converges depends on the basin of attraction in which it begins. Sometimes the system converges to the rational expectations equilibrium and sometimes it doesn't.

But the chief point that needs to be made is that the convergence process is not at all robust. It can easily be disrupted or slowed down enormously by quite small perturbations in the dynamic process. It follows that one can place very little reliance on conclusions obtained from a few runs of a computer simulation. One needs to conduct very large numbers of robustness tests with varying values of the underlying parameters before beginning to take any results seriously. We ran half a million simulations when exploring evolution in the full Ultimatum Game (Binmore et al. 1995). In the process, we discovered that it was insufficient to maintain an accuracy of only ten decimal places, and were forced to move to fifteen decimal places instead.
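
The flavour of such an analysis can be conveyed in a few lines. The sketch below (my own illustration, not the authors' code) runs the standard two-population replicator dynamics on the simplified game just described: proposers either offer the equal split or demand three of the four dollars, and responders differ only in whether they would reject the unequal demand. The step size and run length are arbitrary choices.

```python
def step(x, y, dt=0.01):
    """x: share of proposers demanding the 3-1 split;
       y: share of responders who reject the unfair demand."""
    f_greedy, f_fair = 3 * (1 - y), 2.0   # proposer payoffs
    f_accept = x * 1 + (1 - x) * 2        # responder payoff if she accepts
    f_reject = x * 0 + (1 - x) * 2        # responder payoff if she rejects
    x += dt * x * (1 - x) * (f_greedy - f_fair)
    y += dt * y * (1 - y) * (f_reject - f_accept)
    return x, y

for x0, y0 in [(0.9, 0.1), (0.2, 0.9)]:
    x, y = x0, y0
    for _ in range(200_000):
        x, y = step(x, y)
    print((x0, y0), '->', (round(x, 3), round(y, 3)))
```

Starting with few rejectors, the system heads for the rational expectations equilibrium in which the greedy demand is always made and accepted. Starting with many, the greedy proposers die out first and the population is stranded near an equilibrium in which the unfair demand would be refused; and because selection pressure on the responders vanishes as the greedy proposers disappear, the approach is painfully slow, which is precisely the kind of fragility that makes a handful of simulation runs untrustworthy.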

In brief, the simulation data on which Axelrod supposedly bases his conclusions about the evolution of norms are woefully inadequate, even if one thought that his Norms Game were a good representation of the Game of Life in which real norms actually evolve. One simply cannot get by without learning the underlying theory. Without any knowledge of the theory, one has no way of assessing the reliability of a simulation, and hence no idea of how much confidence to repose in the conclusions that it suggests. It does not follow that the conclusions on norms and other issues which Axelrod offers in his Complexity of Cooperation are without value. He is, after all, a clever man who knows the literature of his own subject very well. But I do not think one can escape the conclusion that the evidence from computer simulations that he offers in support of his ideas has only rhetorical value. His methodology may table some new conjectures that are worth exploring. But such conjectures can only be evaluated in a scientific manner by running properly controlled robustness tests that have been designed using a knowledge of the underlying theory.

Ken Binmore is the author of Playing Fair: Game Theory and the Social Contract I, MIT Press (1994). A second volume, Just Playing, is currently in press.

* References

ABREU D. and A. Rubinstein. 1988. The structure of Nash equilibrium in repeated games with finite automata. Econometrica, 56:1259-1282.

AXELROD R. 1984. The Evolution of Cooperation. Basic Books, New York.

AXELROD R. 1986. An evolutionary approach to norms. American Political Science Review, 80: 1095-1111.

AXELROD R. 1987. The evolution of strategies in the iterated Prisoners' Dilemma. In L. Davis, editor, Genetic Algorithms and Simulated Annealing. Morgan Kaufmann, Los Altos, CA.

AXELROD R. 1997. The Complexity of Cooperation. Princeton University Press, Princeton, NJ.

BANKS J. and R. Sundaram. 1990. Repeated games finite automata and complexity. Games and Economic Behavior, 2:97-117.

BINMORE K. 1994. Playing Fair: Game Theory and the Social Contract I. MIT Press, Cambridge, MA.

BINMORE K. 1998. Just Playing: Game Theory and the Social Contract II. MIT Press, Cambridge, MA.

BINMORE K., J. Gale and L. Samuelson. 1995. Learning to be imperfect: The Ultimatum Game. Games and Economic Behavior, 8:56-90.

GÜTH W., R. Schmittberger and B. Schwarze. 1982. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior and Organization, 3:367-388.

HOLLAND J. 1992a. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI. (Second edition, first published 1975).

HOLLAND J. 1992b. Genetic algorithms. Scientific American, 267:66-72.

HUME D. 1985. Of the first principles of government. In Essays Moral, Political and Literary, Part I. Liberty Classics, Indianapolis, IN. (Edited by E. Miller. Essay first published 1758).

LINSTER B. 1990. Essays on Co-operation and Competition. PhD thesis, University of Michigan.

LINSTER B. 1992. Evolutionary stability in the repeated Prisoners' Dilemma played by two-state Moore machines. Southern Economic Journal, 58:880-903.

MARTINEZ-COLL J. and J. Hirshleifer. 1991. The limits of reciprocity. Rationality and Society, 3:35-64.

NACHBAR J. 1992. Evolution in the finitely repeated Prisoners' Dilemma. Journal of Economic Behavior and Organization, 19:307-326.

NOWAK M. and K. Sigmund. 1990. The evolution of stochastic strategies in the Prisoners' Dilemma. Acta Applicandae Mathematicae, 20:247-265.

NOWAK M. and K. Sigmund. 1992. Tit for tat in heterogeneous populations. Nature, 355:250-253.

NOWAK M. and K. Sigmund. 1993. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game. Nature, 364:56-58.

NOWAK M., K. Sigmund and E. El-Sedy. 1993. Automata, repeated games and noise. Technical report, Department of Zoology, Oxford University.

POLLACK G. and D. Probst. 1998. Evolution, automata and the repeated Prisoners' Dilemma. (Forthcoming in Rationality and Society).

PROBST D. 1996. On Evolution and Learning in Games. PhD thesis, University of Bonn.

RAPOPORT A. and A. Chammah. 1965. Prisoner's Dilemma. University of Michigan Press, Ann Arbor, MI.

RIDLEY M. 1996. Origins of Virtue. Penguin, Harmondsworth.

SIGMUND K. 1993. Games of Life: Explorations in Ecology, Evolution and Behaviour. Penguin, Harmondsworth.

WATSON L. 1995. Dark Nature: A Natural History of Evil. Hodder and Stoughton, London.

WU J. and R. Axelrod. 1995. How to cope with noise in the iterated Prisoner's Dilemma. Journal of Conflict Resolution, 39:183-189.

© Copyright Journal of Artificial Societies and Social Simulation, 1998