
Pietro Terna (2001)

Creating Artificial Worlds: A Note on Sugarscape and Two Comments

Journal of Artificial Societies and Social Simulation vol. 4, no. 2,
<https://www.jasss.org/4/2/9.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 18-Jan-01      Published: 31-Mar-01


1.1
The introductory part of this note[1] pays due homage to Epstein and Axtell's (1996) seminal book on the social simulation model known as Sugarscape. Two comments follow: first, on so-called methodological individualism and, second, on the possibility of defining a robust framework for the development of agent-based simulation models such as Sugarscape. Replies to these comments would be more than welcome. It is useful to develop a discussion on these themes, now that the foundations are being laid for a wider use of "bottom-up" or agent-based simulation techniques.

* Sugarscape

2.1
Epstein and Axtell (1996, Chapter 1, Introduction) assert:
Herbert Simon is fond of arguing that the social sciences are, in fact, the "hard" sciences. For one, many crucially important social processes are complex. They are not neatly decomposable into separate subprocesses - economic, demographic, cultural, spatial - whose isolated analyses can be aggregated to give an adequate analysis of the social process as a whole. And yet, this is exactly how social science is organized, into more or less insular departments and journals of economics, demography, political science, and so forth. Of course, most social scientists would readily agree that these divisions are artificial. But, they would argue, there is no natural methodology for studying these processes together, as they coevolve.

The social sciences are also hard because certain kinds of controlled experimentation are hard. In particular, it is difficult to test hypotheses concerning the relationship of individual behaviors to macroscopic regularities, hypotheses of the form: If individuals behave in thus and such a way - that is, follow certain specific rules - then society as a whole will exhibit some particular property. How does the heterogeneous micro-world of individual behaviors generate the global macroscopic regularities of the society?

Another fundamental concern of most social scientists is that the rational actor - a perfectly informed individual with infinite computing capacity who maximizes a fixed (nonevolving) exogenous utility function - bears little relation to a human being. Yet, there has been no natural methodology for relaxing these assumptions about the individual.

Relatedly, it is standard practice in the social sciences to suppress real-world agent heterogeneity in model-building. This is done either explicitly, as in representative agent models in macroeconomics, or implicitly, as when highly aggregate models are used to represent social processes. While such models can offer powerful insights, they "filter out" all consequences of heterogeneity. Few social scientists would deny that these consequences can be crucially important, but there has been no natural methodology for systematically studying highly heterogeneous populations.

Finally, it is fair to say that, by and large, social science, especially game theory and general equilibrium theory, has been preoccupied with static equilibria, and has essentially ignored time dynamics. Again, while granting the point, many social scientists would claim that there has been no natural methodology for studying nonequilibrium dynamics in social systems.

2.2
Sugarscape is a world of agents defined from the bottom up - in contrast to the representative-agent models of economics - where agents are heterogeneous in their individual abilities (vision) and needs (metabolism). Sugar, in Sugarscape, is unevenly distributed in space and is the only resource agents need to live (there is also a version of the model including both sugar and spice, in order to observe exchanges).

2.3
The book has been positively reviewed, for instance by Gessler (1997) and by Tesfatsion (1998). A crushing criticism was published in a popular technical computer science monthly, Dr. Dobb's Journal (September 1997, p. 119, unsigned). It is useful to analyse this criticism, coming as it does from the information technology milieu, to understand better the problems we face today in spreading this new type of model.

2.4
The anonymous reviewer considered that the book's thesis - stylised as "simple entities, interacting through simple, local rules, can produce very complicated behaviour" ... "has become a familiar one." Two comments: (i) this kind of idea is not so popular, at least in the social sciences; (ii) the use of the term "complicated" in that assertion is inappropriate (see Kaneko (1998) for the subtle, but crucial, difference between "complex" and "complicated" in this field of analysis). The hostile reviewer furthermore states that investigation in this field is useful only if scientifically grounded; instead, the text presents situations and suggests interpretations (agents that migrate, trade, etc.), but the authors do not indicate how many combinations of parameters they tried before obtaining these results. According to this criticism, the field of agent-based models lacks a structured framework comparable to that of cellular automata; therefore, in the anonymous reviewer's view, this is "cargo-cult science" (an expression attributed to Feynman).

2.5
The new field of agent-based simulation is, by definition, lacking a complete structure. With our simulated experiments we are looking for the emergence[2] of complex phenomena, such as institutions, as well as for complex agent behaviour in the presence of interactions. All this work cannot be validated by reverse engineering (i.e. determining, from a known aggregate result, the agents' structures and the rules producing it), which is effectively impossible at present and may always be.

2.6
The observation about the difficulty of repeating this kind of experiment lies at a different level of depth: I deal with this subject in my second comment. Certainly, the problem is no more attributable to the authors of Growing Artificial Societies than is the absence of a general theory of simulation!

2.7
Work of this kind[3] is not absent - see for example Rasmussen and Barrett (1995) - but it is limited to the discussion of rigorous and important issues concerning the simulability and computability of phenomena. It starts from the assumption that simulation is an original union of model building and computing, needed to answer the question of how, given a system made up of many interacting subsystems, we can arrange those components and their interactions so as to reproduce the global dynamics, in Artificial Life terms. According to Rasmussen and Barrett, the question is: how can behaviour similar to life be generated using low-level local rules? To achieve a rigorous approach to the implementation of a defined and closed structure for agent-based simulation models - as exists for cellular automata - there is a lot of work to be done, assuming that it is possible at all.

2.8
Concluding the discussion opened by the analysis of the hostile review, we observe (see below) that Sugarscape is the sum of a cellular automaton and an agent-based model.

2.9
A further clarification concerns the operating scheme needed to build agent-based models. While an answer can be found in Sugarscape, more generally we can use object-based tool-kits implemented through libraries of functions and related protocols, like the Swarm environment (see below), where the simulation is guided by endogenous or exogenous events. Such an abstract structure is translated into computer code, where the occurrence of an event (for instance a simulated time step) triggers actions in the form of messages sent to the agents, which in turn produce events, and so on. This is a relatively simple way to implement simulation strategies when the sequences of events are largely unforeseeable, with complex trees of actions and reactions among the various components.
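As an illustration, the following minimal sketch (written in Python rather than Swarm's Objective C, with hypothetical class and method names) shows how such an event-driven scheme can be coded: clock events trigger "step" messages to the agents, which may in turn schedule further events.

    import heapq

    class Agent:
        def __init__(self, name):
            self.name = name

        def step(self, schedule, time):
            # react to the "step" message; possibly generate a new, endogenous event
            print(f"{time}: {self.name} acts")
            if self.name == "a0":                 # an arbitrary endogenous reaction
                schedule.add(time + 2, lambda t: print(f"{t}: {self.name} reacts"))

    class Schedule:
        def __init__(self):
            self.queue = []                       # entries: (time, counter, action)
            self.counter = 0

        def add(self, time, action):
            heapq.heappush(self.queue, (time, self.counter, action))
            self.counter += 1

        def run(self, until):
            # pop events in time order and fire them; fired actions may add new events
            while self.queue and self.queue[0][0] <= until:
                time, _, action = heapq.heappop(self.queue)
                action(time)

    agents = [Agent(f"a{i}") for i in range(3)]
    schedule = Schedule()
    for t in range(5):                            # exogenous clock events
        for a in agents:
            schedule.add(t, lambda time, a=a: a.step(schedule, time))
    schedule.run(until=10)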

2.10
Sugarscape's world is built on a torus, represented for simplicity on a plane. The world, agents, food, rules and actions can all be set via parameters. The world becomes increasingly complicated chapter after chapter; the system's characteristics and the emerging institutions also become increasingly interesting and meaningful. Every agent has a fixed, randomly determined structure, consisting of a metabolism and the ability to see food. Agents cannot see diagonally, which bounds their rationality; they can store the food they collect in excess of their needs.

2.11
An agent dies if its stock of food falls to zero; in order to run experiments on income distribution, each agent is also assumed to die within a finite lifespan and to be replaced by a new agent. The initial world is therefore very simple, but it already allows us to verify the effects that environmental conditions (the quantity of sugar and its local renewal rate; local rules for searching for food) have on migrations and on the agents' distribution along various dimensions (space, wealth, etc.).
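A stylised sketch of the rules just summarised may help; it is not the authors' code, and the grid size, parameter ranges and function names are my own assumptions.

    import random

    SIZE = 50                                     # the world is a SIZE x SIZE torus

    def new_agent():
        return {"vision": random.randint(1, 6), "metabolism": random.randint(1, 4),
                "stock": random.randint(5, 25), "age": 0,
                "max_age": random.randint(60, 100),
                "pos": (random.randrange(SIZE), random.randrange(SIZE))}

    def visible_cells(pos, vision):
        # four lattice directions only: no diagonal vision
        x, y = pos
        cells = []
        for d in range(1, vision + 1):
            cells += [((x + d) % SIZE, y), ((x - d) % SIZE, y),
                      (x, (y + d) % SIZE), (x, (y - d) % SIZE)]
        return cells

    def step(agent, sugar, occupied):
        # greedy move to the visible unoccupied cell with most sugar, then harvest,
        # metabolise, age, and die (being replaced) when stock or lifespan runs out
        candidates = [c for c in visible_cells(agent["pos"], agent["vision"])
                      if c not in occupied] + [agent["pos"]]
        best = max(candidates, key=lambda c: sugar[c])
        occupied.discard(agent["pos"]); occupied.add(best)
        agent["pos"] = best
        agent["stock"] += sugar[best]; sugar[best] = 0
        agent["stock"] -= agent["metabolism"]
        agent["age"] += 1
        if agent["stock"] <= 0 or agent["age"] > agent["max_age"]:
            occupied.discard(agent["pos"])
            replacement = new_agent()
            occupied.add(replacement["pos"])
            return replacement
        return agent

    sugar = {(x, y): random.randint(0, 4) for x in range(SIZE) for y in range(SIZE)}
    agents = [new_agent() for _ in range(10)]
    occupied = {a["pos"] for a in agents}
    agents = [step(a, sugar, occupied) for a in agents]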

2.12
With the introduction of sexual reproduction, cultural transmission and fights (everything, of course, highly stylised), tribes and cultural segregation emerge. Furthermore, by introducing another kind of food (spice) and differences in the agents' sugar-spice metabolism, a market emerges, based on bilateral exchanges among neighbouring agents, with prices fluctuating around the equilibrium value but, realistically, never reaching the theoretical optimal solution. Volatility levels depend on highly indirect conditions, such as cultural transmission across generations. Finally, the presence of infectious diseases allows us to study the interactions between epidemics and social processes.
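The flavour of the bilateral exchanges can be given by a stylised sketch: each agent values the two goods through a marginal rate of substitution (MRS), and when two neighbours' MRSs differ a unit of sugar changes hands at the geometric mean of the two valuations, moving both closer together. The code is my own simplified illustration in the spirit of the sugar-and-spice rules, not the book's exact trade rule.

    import math

    def mrs(agent):
        # spice the agent would give up for one unit of sugar: the ratio of its
        # "survival time" on spice to its survival time on sugar
        return (agent["spice"] / agent["spice_metabolism"]) / \
               (agent["sugar"] / agent["sugar_metabolism"])

    def trade(a, b):
        if math.isclose(mrs(a), mrs(b)):
            return                                            # no gain from trade
        buyer, seller = (a, b) if mrs(a) > mrs(b) else (b, a)  # buyer wants sugar
        price = math.sqrt(mrs(a) * mrs(b))                    # spice paid per unit of sugar
        if buyer["spice"] > price and seller["sugar"] > 1:
            buyer["spice"] -= price; seller["spice"] += price
            buyer["sugar"] += 1;     seller["sugar"] -= 1

    a = {"sugar": 5.0,  "spice": 20.0, "sugar_metabolism": 2, "spice_metabolism": 1}
    b = {"sugar": 20.0, "spice": 5.0,  "sugar_metabolism": 1, "spice_metabolism": 2}
    trade(a, b)
    print(mrs(a), mrs(b))   # the two valuations have moved closer to each other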

2.13
Technically speaking, the agents are instances of object classes; the environment can be interpreted as a cellular automaton, with simple or complicated rules defining the production of food in each cell. This is a case of the joint use of two techniques (agents and cellular automata) that are only apparently alternatives.
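A minimal sketch of this joint use, under the same assumptions as the earlier agent sketch: the landscape is a cellular automaton whose local rule regrows sugar in every cell (here by a constant ALPHA per step, up to each cell's capacity, a deliberately simplified rule), and the agents then act on that grid.

    import random

    SIZE, ALPHA = 50, 1

    def make_landscape():
        capacity = {(x, y): random.randint(0, 4)
                    for x in range(SIZE) for y in range(SIZE)}
        return capacity, dict(capacity)               # start sugar at full capacity

    def grow_back(sugar, capacity):
        # the cellular-automaton part: a purely local rule applied to every cell
        for cell in sugar:
            sugar[cell] = min(sugar[cell] + ALPHA, capacity[cell])

    def tick(agents, sugar, capacity, occupied, step):
        grow_back(sugar, capacity)                    # environment rule first...
        return [step(a, sugar, occupied) for a in agents]   # ...then the agents' rules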

2.14
From this synthesis we can also see that complex behaviour can emerge from a very simple environment and rules; but in trying to connect individuals to the aggregate properties of the world as a whole there is a risk of misunderstanding, as explained below.

* First Comment: Methodological individualism and agent based models - the risk of misunderstandings

3.1
Epstein and Axtell (1996, Chapter 1, "Beyond Methodological Individualism") state:
Our point of departure in agent-based modeling is the individual: We give agents rules of behavior and then spin the system forward in time and see what macroscopic social structures emerge. This approach contrasts sharply with the highly aggregate perspective of macroeconomics, sociology, and certain subfields of political science, in which social aggregates like classes and states are posited ab initio. To that extent our work can be accurately characterized as "methodologically individualist." However, we part company with certain members of the individualist camp insofar as we believe that the collective structures, or "institutions," that emerge can have feedback effects in the agent population, altering the behavior of individuals. Agent-based modeling allows us to study the interactions between individuals and institutions.

3.2
How can we circumscribe the field of methodological individualism? Let's choose a neutral definition by Zamagni (1987), Chapter 1, section 1.9, p.65 (our translation):
The thesis that all propositions about a group are reducible to propositions about the behaviour and the interactions of the individuals constituting that group is today known as methodological individualism, an expression formulated by Schumpeter.

3.3
On the one hand, this position is criticised by methodological organicism, or holism, which denies the possibility of understanding any social or economic system by studying its components. On the other hand, it is also criticised by scholars rejecting the reductionist choice of the so-called representative agent, which is highly structured in terms of rules, optimisation ability, and knowledge of the data and of the economic model underlying the actual world (paradoxically, as Sargent (1993) observes, that same model is unknown to the econometrician studying the economic structure in which the representative agent is supposed to act).

3.4
Misunderstandings emerge especially when the first type of scholar - those who deny the interest of the representative-agent methodology because structures (classes, institutions, and so on) cannot be disregarded - also reject the experiments built with simple agent models, again in order to refuse the same type of reductionist choice.

3.5
Instead, we can explicitly admit the role of institutions and structures, looking for their emergence, and consider the heterogeneity of agents (kept as simple as possible) to be essential, in order to experiment with the non-linearity of the aggregate effects of their behaviour. We have to remember that, in the presence of interactions, the whole does not necessarily correspond to the sum of its parts: this is the main source of emergent complexity, and here we find the added value of artificial experiments with agent-based simulation models.

3.6
Methodological individualism and agent-based models are therefore non-coincident fields. With agent-based models we study the emergence of structures, institutions and behaviour that were previously unexpected or unpredictable, looking for many levels of interpretation in projects built on several layers, all interacting with one another. We can also have agents made up of simpler parts (e.g. an enterprise and its units, with their component agents); a model can provide for interaction among entities that are themselves made up of agents, and so on (in Swarm terminology, we can have swarms of swarms, as sketched below); we can also fruitfully use such methodologies in the organisational domain.
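A minimal sketch of this nesting, with hypothetical names: a composite agent (an enterprise, say) answers the same "step" message as a simple agent by dispatching it to its components, which may themselves be composites.

    class SimpleAgent:
        def __init__(self, name):
            self.name = name

        def step(self):
            print(f"agent {self.name} acts")

    class CompositeAgent:
        def __init__(self, name, components):
            self.name = name
            self.components = components              # simple agents or other composites

        def step(self):
            print(f"{self.name} coordinates its components")
            for c in self.components:
                c.step()                              # the same message, recursively

    # an enterprise made of units, each made of individual agents
    enterprise = CompositeAgent("enterprise", [
        CompositeAgent("unit-1", [SimpleAgent("w1"), SimpleAgent("w2")]),
        CompositeAgent("unit-2", [SimpleAgent("w3")]),
    ])
    enterprise.step()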

3.7
Summarising, as Kirman (1992) does:
The sum of the behavior of simple economically plausible individuals may generate complicated dynamics, whereas constructing one individual whose behavior has these dynamics can lead to that individual having very unnatural characteristics. Furthermore, if one rejects a particular behavioral hypothesis, it is not clear whether one is really rejecting the hypothesis in question, or rejecting the additional hypothesis that there is only one individual.

In studying the emergence of phenomena, we use complexity as a two-way process, in which the non-linear interaction of agents produces social effects and the emerging social structures (via evolution, genetic selection, co-determination) in turn affect the agents' behaviour.

3.8
An operating definition of complexity, very useful in this context, has been provided by Kaneko (1998):
one should carefully distinguish "complex" from "complicated". The latter is a system composed of a variety of elements, which requires hard effort to disintegrate it into parts, but in principle it is possible. On the other hand, in complex systems, one has to face the circular situation in which the parts are understood only through the whole, although the whole, of course, consists of parts.

3.9
The emergence of complexity has been analysed by McIntyre (1998), contrasting the ontological perspective, linked to what reality is, with the epistemological perspective, i.e. what we know about reality and what we are able to represent of it. The author privileges the epistemological explanation, which is correct in most cases, but leaves unsolved the problem of reverse engineering introduced above when discussing the hostile review of Sugarscape. When we feel it is impossible to start from an expected outcome and build agents able to reproduce it, in the presence of non-reversible agent-system functions, we are probably facing intrinsically complex problems. In such cases we can exclude the possibility that the complexity arises from the use of inappropriate techniques or erroneous metrics of representation.

3.10
Summarising, and to state it quite provocatively: (i) we say no to the reductionism of methodological individualism that attempts to reproduce the complexity of the system directly in the agents, complicating them to the extreme and overburdening them with knowledge and abilities; (ii) we say no to organicism, which considers the agent-based approach unfeasible, based on the belief that there are no structures, at the agent level, able to reproduce the complexity of reality; (iii) we say yes to an approach based on simple agents, which tries to find in interaction and in ex ante structures (if any) the way to generate complex landscapes comparable to the real ones. Obviously, we have to look for complexity at the appropriate level, which is not necessarily that of each individual agent.

3.11
To conclude this first comment, a practical way to discriminate between complicated models and complex ones may be to remember the difference between representation and ontology as a possible cause of our perception of complexity: the possibility of reversing, from the system back to the agents, the implicit or explicit functions that determine the behaviour of the simulated environment (reverse engineering) may be, in our opinion, the sign that the system is not complex.

* Second comment: From the repeatability of experiments to a methodological proposal

4.1
Models such as Sugarscape are more useful if it is easy to replicate their experiments, either for verification or to explore other analytic perspectives, perhaps by varying the parameters describing the agents, the environment and any inanimate objects present in the environment.

4.2
The repeatability of this kind of experiment does not come for free. I have direct experience with my own software for agent-based models founded on neural networks, called CT. Although it worked with great accuracy in the original preparatory phase, after some time the introduction of modifications became very difficult. That is why CT has been rewritten[4] according to the rules of a standard environment: the Objective C library of Swarm functions[5]. Replicating the experiments contained in the Sugarscape world is certainly possible, but the authors do not explicitly indicate the availability of the necessary software; there is, however, a portion of the model usefully rewritten in Swarm, for teaching purposes, by N. Minar (see http://www.syslab.ceu.hu/~nelson/Sugarscape-lecture/).

4.3
Even a model written on a standard platform can be very difficult to use for those who did not take part in its construction, and modifications may become practically impossible even for its authors. In order to overcome these difficulties and to propose a generalised scheme, I have introduced (Gilbert and Terna, 2000; but others have done the same) a methodological guide for the design of agent-based models. It is called ERA (Environment Rules Agents) and is pictured in Figure 1.

Figure 1. The Environment Rules Agents (ERA) architecture for agent-based modelling

4.4
The main characteristic of the ERA scheme is the separation of the environment, with its rules, from the agents, which communicate through the environment. For instance, the environment supplies every agent with the information necessary to build a list of its neighbours. The agents receive instructions from objects that represent abstract entities such as sets of rules or meta-rules (i.e. rules used to modify rules). The Rule Master objects communicate with Rule Maker objects. A Rule Maker may be "used" by several Rule Masters, for instance to apply the results of a learning process. All the information a Rule Master needs comes from the specific agent using it; similarly, the information needed by the Rule Makers comes from the Rule Masters. The rigidity of this structure, which disallows direct links between different levels (the four vertical blocks in the figure) and within the same level, increases the initial difficulty of writing the simulation code, but this effort is repaid subsequently by the greater intelligibility of the source code and the ease of modifying it.
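One possible reading of the ERA scheme as code is sketched below (in Python rather than Swarm's Objective C; the class names and the toy decision rule are mine, not a reference implementation). The point is the restriction of the communication channels: the environment talks only to agents, agents only to their Rule Master, and Rule Masters only to a Rule Maker.

    class Environment:
        def __init__(self, agents):
            self.agents = agents

        def neighbours_of(self, agent):
            # the environment supplies each agent with the data needed to build
            # its neighbour list; here, trivially, every other agent
            return [a for a in self.agents if a is not agent]

    class RuleMaker:
        """Meta-rules: modifies the rules used by the Rule Masters (e.g. learning)."""
        def update(self, experience):
            pass                                      # e.g. train a neural network here

    class RuleMaster:
        """A set of rules turning an agent's local information into an action."""
        def __init__(self, rule_maker):
            self.rule_maker = rule_maker

        def decide(self, agent_state, neighbour_states):
            action = "move" if len(neighbour_states) > 2 else "stay"   # toy rule
            self.rule_maker.update((agent_state, action))              # feed the learning level
            return action

    class Agent:
        def __init__(self, name, rule_master):
            self.name, self.rule_master = name, rule_master
            self.state = {"name": name}

        def step(self, environment):
            # the agent passes its own information to the Rule Master; it never
            # talks to the Rule Maker or to the other layers directly
            neighbours = environment.neighbours_of(self)
            return self.rule_master.decide(self.state, [n.state for n in neighbours])

    maker = RuleMaker()
    master = RuleMaster(maker)            # one Rule Maker may serve several Rule Masters
    agents = [Agent(f"a{i}", master) for i in range(4)]
    env = Environment(agents)
    print([a.step(env) for a in agents])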

4.5
The suggested architecture must obviously rest on an object-oriented environment, such as the Swarm platform. Its modularity is an additional motivation for its adoption: it allows the easy, almost interchangeable use of classifier systems, neural networks, genetic algorithms, sets of rules or other tools at the level of the Rule Masters and Rule Makers, as in the sketch below.
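Continuing the previous sketch, the interchangeability amounts to hiding different decision tools behind the same decide() interface; both classes below are hypothetical stand-ins (a fixed rule set and a stochastic placeholder for a learning tool), and either can be handed unchanged to the agents of the ERA sketch.

    import random

    class FixedRuleMaster:
        def decide(self, agent_state, neighbour_states):
            return "move" if len(neighbour_states) > 2 else "stay"

    class StochasticRuleMaster:
        """Stands in for a learning tool (neural network, classifier system, ...)."""
        def __init__(self):
            self.p_move = 0.5                 # a parameter a Rule Maker could adapt

        def decide(self, agent_state, neighbour_states):
            return "move" if random.random() < self.p_move else "stay"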


* Notes

1 This note is a revision of P. Terna (1998) "Creare mondi artificiali: una nota su Sugarscape e due commenti" [Creating artificial worlds: a note on Sugarscape and two comments], Sistemi intelligenti, 3.

2 An interesting discussion about emergence can be found at: http://www.santafe.edu/projects/swarm/archive.modelling/list-archive.0012/index.html; a key site about emergence is http://el.www.media.mit.edu/groups/el/projects/emergence/contents.html.

3 Two seminal papers about simulation methodology are Epstein (1999) and Axtell (2000).

4 You can download bp-ct v.1.1 code from the anarchy section at http://www.swarm.org/.

5 http://www.swarm.org/.


* References

AXTELL R. (2000). Why agents? On the varied motivations for agent computing in the social sciences. Center on Social and Economic Dynamics, http://www.brookings.edu/es/dynamics/papers/agents/agents.

EPSTEIN J.M. and AXTELL R. (1996) Growing Artificial Societies - Social Science from the Bottom Up, Cambridge MA, MIT Press.

EPSTEIN J.M. (1999) Agent Based Models and Generative Social Science. Complexity, IV (5)

GESSLER N. (1997) Growing Artificial Societies - Social Science from the Bottom Up - Joshua M. Epstein and Robert Axtell. Artificial Life, 3, pp. 237-242.

GILBERT N. and TERNA P. (2000) How to build and use agent-based models in social science. Mind & Society, no. 1, pp. 57-72.

KANEKO K. (1998) Life as Complex System: Viewpoint from Intra-Inter Dynamics. Complexity, 6, pp.53-63.

KIRMAN A. (1992) Whom or What Does the Representative Agent Represent? Journal of Economic Perspectives, 6, pp.126-139.

MCINTYRE L. (1998) Complexity: A Philosopher's Reflection. Complexity, 6, pp.26-32.

RASMUSSEN S. and BARRETT C. L. (1995) Elements of a Theory of Simulation. In F. Moran et al. (Eds.), Advances in Artificial Life: Third European Conference on Artificial Life, Lecture Notes in Computer Science 929, Berlin, Springer (see also http://www.santafe.edu/sfi/publications/95wplist.html).

SARGENT T. J. (1993) Bounded Rationality in Macroeconomics, Oxford, Clarendon Press.

TESFATSION L. (1998) Growing Artificial Societies - Social Science from the Bottom Up - By Joshua M. Epstein and Robert Axtell, Journal of Economic Literature, pp. 233-234.

ZAMAGNI S. (1987) Economia politica - Teoria dei prezzi, dei mercati e della distribuzione, Roma, NIS.

----


© Copyright Journal of Artificial Societies and Social Simulation, 2001