© Copyright JASSS


Carlos Gershenson (2002)

Philosophical Ideas on the Simulation of Social Behaviour

Journal of Artificial Societies and Social Simulation vol. 5, no. 3

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 21-May-2002      Accepted: 25-Jun-2002      Published: 30-Jun-2002

* Abstract

In this study we consider some of the philosophical issues that should be taken into account when simulating social behaviour. Even though the ideas presented here are philosophical, they should be of more interest to researchers simulating social behaviour than to philosophers, since we try to note some problems that researchers might not pay much attention to. We give notions of what could be considered a social behaviour, and mention the problems that arise if we attempt to give a sharp definition of social behaviour in a broad context. We also briefly present useful concepts and ideas from complex systems and abstraction levels (Gershenson, 2002), since any society can be seen as a complex system. We discuss the problems that arise while modelling social behaviour, presenting the synthetic method as a useful approach for contrasting social theories, given the complexity of the phenomena they model. In addition, we note the importance of the study of social behaviour for the understanding of cognition. We hope that the ideas presented here will motivate interest and debate among researchers simulating social behaviour, so that they pay attention to the problems mentioned in this work and attempt to provide more suitable solutions than the ones proposed here.

Keywords: Complex Systems; Modelling; Social Behaviour; Synthetic Method

* Introduction

We can illustrate one of the relationships of philosophy with the sciences using the following black-humoured metaphor: the sciences are like children playing marbles. Philosophy is like a girl who comes and watches how the sciences play. Then, after having observed the game for some time, philosophy explains to the sciences how to play marbles: the rules of the game the sciences were already playing. From this perspective, some might argue that philosophy is not doing much, since the sciences were already playing the game before she came with her explanations. But we believe that philosophy does something useful: each science might be so concentrated on her own game that she does not pay attention to the games of the others; for example, to whether they are playing with the same rules, which might bring new insights to her own game. Philosophy brings a consensus. Before, each science had her own rules, assuming everyone else was playing her game. Philosophy also questions the value of each rule. Perhaps some sciences were cheating (not to their own eyes...), or did not see some aspects of the games of others which could be beneficial for their own game. Perhaps the rules philosophy dictates may seem ridiculously obvious to the sciences, but it is precisely when they think the rules are obvious that they do not pay attention to them, enhancing the probability of committing a lamentable mistake while playing marbles.

So, in this philosophical paper, how useful will it be to try to "explain" to people who have worked for decades on simulating social behaviour what they are doing? Well, we should note that there is a difference between the rules of the game and the game itself. What we attempt here is to begin a discussion to reach a consensus in the community, so that we will all know that we are playing the same game, with the same rules; or, if not, which game each one is playing, so as not to confuse it with ours. This is not a review paper, nor an attempt "to tell social roboticists how to play". We just put some ideas on the table for discussion by observing the simulation of social behaviour from a philosophical perspective.

Of course, it is more convenient if the ones making the rules are the same ones who play the game, or at least have as much contact with the game as possible, just as the ones playing should pay at least some attention to the rules if they are not making them. We are not experts in the field, but we can say that we have played the game a bit (e.g. Gershenson, 2001a), so our philosophical observations do have empirical motivations.

* About Social Behaviour

To have a notion of what we consider to be a behaviour, we will use the definition of Maturana and Varela: "behaviour is a description an observer makes of the changes in a system with respect to an environment with which the system interacts" (Maturana and Varela, 1987, p. 163). In other words, if we observe a system (animal, robot, agent) situated in an environment (physical or virtual), and the system responds to its environment, then we can say that we observe a behaviour of the system. We can see that this notion is very general, and almost any system can be considered to have a behaviour. But here we will be interested only in social behaviour. What does a behaviour require to be considered as social?

Well, a "dictionary" answer could be: If the behaviour of a system is with respect to a society, then it is social. Of course we are not limiting societies to humans, but we consider any group of entities or agents[1] which interact socially. If the entities do not interact socially, their group would be merely a population. So what is a social action? We can take ideas from Castelfranchi (1998) to say that the action of an agent is social if it is performed towards another agent with a purpose[2], considering the other agent also as a purposeful entity. For example, if a robot considers another robot as a movable obstacle, hitting it will not be a social action. But if the hitting is because the robot is "expecting" a reaction from the other robot, then the action is social. The social action will be successful if the other robot reacts in the "expected way". If a robot confuses an obstacle with another robot and hits it expecting some reaction, the action is social because of its purpose, but it is a failed action because the obstacle does not react "as expected".

Castelfranchi (1998) distinguished two types of social action: weak and strong. A weak social action occurs when an agent considers the purpose of the behaviour of another agent when deciding her own behaviour. A strong social action occurs when the purpose of an agent is to influence the behaviour of another agent. This purpose can be considered a social purpose[3].
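Castelfranchi's distinction can be sketched in code. The following is a minimal, hypothetical illustration (the class, the purposes, and the reaction rules are our own inventions for this paper, not Castelfranchi's formalism):

```python
class Agent:
    """A purposeful entity, in the loose sense used above."""

    def __init__(self, name, purpose):
        self.name = name
        self.purpose = purpose  # what we ascribe this agent to be doing

    def weak_social_action(self, other):
        # Weak social action: consider the purpose of the other agent's
        # behaviour when deciding our own behaviour.
        return "avoid" if other.purpose == "approach" else "ignore"

    def strong_social_action(self, other, desired_purpose):
        # Strong social action: our purpose is to influence the other
        # agent's behaviour (here, bluntly, by changing its purpose).
        other.purpose = desired_purpose

a = Agent("a", "wander")
b = Agent("b", "approach")
decision = a.weak_social_action(b)   # a decides by reading b's purpose
a.strong_social_action(b, "flee")    # a acts in order to change b's purpose
```

In the weak case, b's purpose only informs a's decision; in the strong case, changing b's behaviour is a's purpose itself, the "social purpose" of note 3.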

Another perspective is given by Maturana and Varela: a social interaction involves structural coupling between organisms, i.e. there is mutual perturbation without loss of autopoiesis (Maturana and Varela, 1987). A system is autopoietic if its elements interact in such a way that the system is self-producing and self-maintaining. The concept of autopoiesis was formulated in order to explain living systems, but some "artificial life" systems can also be considered autopoietic. If we take this definition, we fall into the problem of discussing what we can consider to be alive.

Having more or less an idea of a social action, we can simply say that the behaviour of a system or agent is social if the agent performs social actions while behaving. And a society would be a group of such agents.

But how subjective is our perception of social behaviour? Are proteins social? Is the n-body problem (in astrophysics) a social problem? Well, it is subjective depending on how much of the purpose of a system with social behaviour is ascribed by us. Some people might say that the purpose of a carbon atom under certain conditions is to bind to another carbon atom, "recognizing" it as such; or that the elements of any complex system can be seen to have social behaviour. Some other people might say that robots do not really have a purpose of their own, because we tell them what to do. And some other people might even say that only humans have purposeful behaviour, and other animals are "just like machines".

From the perspective of Maturana and Varela, we would say that a system needs to be autopoietic in order to be social. Artificial systems, when they are not autopoietic, could be then seen as models or imitations of social behaviour.

But if we take another definition of social behaviour (i.e. one from another context), we will have to make other considerations to decide whether a system is social or not. For example, from the perspective of sociobiology, we can define a society as "a group of individuals belonging to the same species and organized in a cooperative manner. The diagnostic criterion is reciprocal communication of a cooperative nature, extending beyond mere sexual activity" (Wilson, 1975). This definition is useful, but only in the context of sociobiology. Robots and software agents can easily be considered social in other contexts, but not in a sociobiological one, because in most of them there are no definitions of species or sexuality.

So, how useful was our attempt at giving a notion of social behaviour, if it does not suit all contexts (since it should change with the context)? Well, we realized that no definition of social behaviour or of society will be satisfactory in all contexts. It seems that we need to choose a notion according to our own context. And if we want to study societies in general, it seems that we will have a wider view as we contemplate more contexts and notions.

In the meantime, we hope to have given a broader notion of sociality, and also to have noted the problems that we may encounter when we speak with others about simulating social behaviour: we might be speaking about different things using the same words. But do we really need a strict all-inclusive definition of social behaviour in order to simulate it? It seems we have not done so badly without one.

* Complex Systems

Independently of whether we consider the elements of any complex system[4] as exhibiting social behaviour or not, it is clear that any society can be seen as a complex system (Goldspink, 2000; 2002). Therefore, the concepts of complex systems are useful and can be applied in the study of natural and artificial societies. In other words, any society can be studied from the perspective of complex systems. Because of this we will briefly mention some concepts that can be useful for philosophizing about the simulation of social behaviour.

It is indeed a complex task trying to define a complex system, but to give a broad notion, we can say that a complex system has elements which interact. Their interactions give rise to a global behaviour of the system which cannot be identified or deduced straightaway from the behaviour of the parts. This global behaviour is said to emerge from the interactions and the parts of the system.

We can recursively say that the complexity of a system scales with the number of elements it has, the number of interactions among them, the complexities of the elements, and the complexities of their interactions (Gershenson, 2002). As the number of elements and interactions of a system is increased, we can observe an emergent complexity. But somehow, regularities arise and we can observe emergent simplicity (and this "somehow" is one of the most interesting questions addressed by complex systems researchers, with still no general or clear answer).
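As an illustration of emergent simplicity (a toy example of our own, not taken from Gershenson, 2002), consider a ring of cells where each cell repeatedly adopts the majority state of itself and its two neighbours. The local rule is trivial, yet stable global domains, a regularity at a higher level, emerge from the interactions:

```python
def majority_step(cells):
    # Each cell adopts the majority state of itself and its two
    # neighbours on a ring (indices wrap around).
    n = len(cells)
    return [1 if cells[i - 1] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

cells = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0]  # a noisy initial pattern
for _ in range(5):
    cells = majority_step(cells)
# The isolated cells are absorbed and the ring settles into a single
# solid block of 1s: emergent simplicity out of many local interactions.
```

No cell "knows" about the global domains; we, as observers, identify the regularity at another abstraction level.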

Abstraction levels (Gershenson, 2002) represent simplicities and regularities in nature. Phenomena are easier to represent in our minds when they are simple. We can have an almost clear concept of them, and then we can try to understand complex phenomena in terms of our simple representations. We can recognize abstraction levels in atoms, molecules, cells, organisms, societies, ecosystems, planets, star systems, galaxies. An element of an abstraction level has a simple behaviour, and it is because of this that it can be easily observed and described (at least more easily than the complexities which emerge from the interactions of several elements). When elements of an abstraction level interact, they form systems with emergent complexity, but when regularities arise, we can identify another abstraction level through emergent simplicity.

We can see a society as emerging from interactions among individuals. The society is not in the individuals and their interactions. The society is the individuals and their interactions, but at a different abstraction level. We are the ones who distinguish between society and individuals plus interactions, because we can identify them at different abstraction levels. But intrinsically, absolutely, they are[5] the same thing. Of course, our distinction is necessary for understanding social behaviour.

We can study a society as an entity, at a social level. This is useful if we are interested in the properties of the society as a whole. But if we are interested in how these properties depend on the individuals and their interactions, we need to study the individual level, its emergent complexity, and the emergent simplicity resulting in the society.

But then, do we need to study also the elements and interactions from which individuals emerge (cells)? And the elements and interactions from which these emerge (proteins, molecules)? It seems that there is no need to go all the way down (and we believe that there is no "all the way down" (first "elemental" abstraction level)). Not because the behaviour of an electron could not affect the behaviour of a society, but because causality cannot jump abstraction levels (Gershenson, 2002). That is, an electron could not affect a society without affecting at least one molecule, at least one cell, and at least one individual. Therefore, we can abstract just the behaviour of the individual (caused by a single electron or not), in order to understand how this behaviour affects the global behaviour of the society.

Another issue is that it is difficult to study complex systems with traditional causal relationships (i.e. A causes B), because it is very common to have circular causality in complex systems (e.g. A causes B, B causes C, C causes A)[6]. In such systems we need to study both global properties and individual interactions in order to understand the system. We cannot reduce the study of the system to the study of its elements alone, because we would lose sight of the properties which emerge from the interactions among the elements.
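Circular causality is easy to exhibit in a tiny boolean network in the spirit of note 6 (this three-node network and its update rule are our own invention, purely for illustration): A is driven by C, B by A, and C by B, so the causal chain closes on itself and the system settles into a cycle rather than a linear chain of effects.

```python
def step(state):
    a, b, c = state
    # Circular causality: A is driven by (the negation of) C,
    # B by A, and C by B.
    return (not c, a, b)

state = (True, False, False)
trajectory = [state]
for _ in range(6):
    state = step(state)
    trajectory.append(state)
# After six steps the network returns to its initial state: the three
# variables chase each other around a cyclic attractor, and no single
# variable can be singled out as "the" cause of the others.
```

Here the interesting object of study is the global cycle, a property of the interactions, not of any element in isolation.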

* Modelling Social Behaviour

Since we cannot comprehend a society (or any phenomenon) completely and absolutely, in order to understand how it functions we have no other choice than to make abstractions of "the real thing". Science makes these abstractions with models. We have to be very careful to notice, while modelling, which aspect of a phenomenon we are modelling, in order to judge the accuracy of the model (Webb, 2001). This is very important, because different people might be modelling different aspects of the same society, from different perspectives; and in order to compare the models, we have to be aware of the context of each, since they could even be using the same words to describe things which are different in each context. We believe that we will comprehend most of a phenomenon by observing and studying it from as many perspectives and contexts as possible (even if they are contradictory (Gershenson, 2001b)). This will make our models less incomplete. We will never comprehend "the real thing" completely, but we can always make our models less incomplete.

One "problem" of modelling (and of science) is that, since we can make different abstractions of the same phenomenon (from different contexts), we can find more than one explanation (or cause) for some phenomena. For example, if we want to describe any given function of dimension n (it can be a straight line, n=2), there is an infinitude of functions of dimension n+1 which describe that function correctly (an infinitude of planes, n=3, cross any straight line). How do we find which explanation is less incomplete, less redundant, and describes better the phenomenon we want to describe? One way is through rhetoric: logical arguments dismiss theories which are not consistent with established human knowledge. But logical arguments are based on axioms which cannot be proven with logical arguments themselves. And, by definition, the theories developed under a set of axioms will be consistent with those axioms[7]. So, how do we decide which axioms are less incomplete? It seems that we do it through experience[8]. The only detail is that, while studying societies (and any complex system), it is very hard (although not impossible) to contrast (Popper, 1934) scientific theories with perceptual experience, because they are far from an abstraction level (abstraction levels are created where the regularities of a system allow contrasting in the first place).

The aid of electronic computers for simulating societies in order to contrast social theories brought a revolution into the field. Not that there were no models of societies, but many of them were either too simple to explain much, or too complex to be tested in a natural society. Other difficulties are that sometimes the time scale of the phenomena a theory describes is larger than a human life (e.g. cultural evolution), and that societies are very hard to control, so it is hard to come close to the "ideal conditions" which can be approached, for example, in a physics laboratory by controlling temperature, pressure, friction, etc. But simulations in computers and robots provide one way of contrasting theories in a synthetic way. Another way has been to study less complex societies, such as those of certain microbes (e.g. Velicer, Kroos, and Lenski, 2000), where the time scale allows the contrasting of evolutionary processes, although the repertoire of social behaviours in microbes is more limited than in more complex creatures.

* The Synthetic Method

It seems that the synthetic method (Steels, 1995; Verschure, 1998; Castelfranchi, 1998) was described after it was already being used (the sciences need to play marbles first so that philosophy can come and explain the rules to them...). It is useful to describe first the inductive method (Steels, 1995): a theory is made by abstracting and generalizing observations. The theory can be used for making predictions, which are verified by further observations, which, contrasted with the theory, falsify or corroborate it (see Figure 1). As we said, it is very difficult to contrast social theories with the inductive method because it is not easy to make controlled observations of many natural societies. With the development of artificial systems (electronic computers, autonomous robots, etc.), some people began to engineer those systems inspired by natural ones (e.g. artificial neural networks, behaviour-based systems, etc.). But then it became clear that people could use these artificial systems in order to contrast theories describing natural phenomena (Maes, 1991; Webb, 2001), instead of attempting to contrast them directly with natural systems (see Figure 2).

Figure 1. The inductive method tests a theory of a system by matching predicted facts against observed facts (Steels, 1995).

Figure 2. The synthetic method builds up theories of a system by attempting to construct an artificial system that exhibits the same capabilities as the natural system (Steels, 1995).

Of course, synthetic modelling has positive and negative sides. We have the advantage of very high control over the artificial system. Also, it is relatively easy to repeat experiments and to isolate particular processes. But this is because we are simplifying the natural phenomena. If this simplification is not done with care, or is not justified (turning into oversimplification), we might be fooling ourselves: contrasting the performance of our artificial system without taking into account observed phenomena (experience) of the natural system, so that our artificial system is just a consequence of our theory (Di Paolo, Noble, and Bullock, 2000). It would be like testing whether the theorems derived from axioms we define are consistent with the axioms[9] (which is important for noticing unexpected errors, but by no means justifies the axioms). It is difficult to decide when a model is oversimplifying, because it depends on the goal of the model. Since by definition all models make simplifications (otherwise they would be instances of the system, not models), we can always find a context in which any model can be considered to be oversimplifying[10]. How much a model should simplify the modelled depends rather on how much we want, or are able, to understand, and on how many assumptions we are willing to admit. In all cases, this depends on the specific situation; that is, it is relative to a specific context. It is only inside a specific context that we can judge oversimplification. And again, experience will show us which contexts are more appropriate for different situations.

* Synthetic modelling of social behaviour

The versatility of the synthetic method over the inductive method for modelling societies has led many researchers studying human and animal societies to use computer and robotic simulations to support and contrast their theories.

Usually the modelling takes place at an individual level, since in most cases the goal is to understand how the global properties of societies emerge from the individuals and their interactions. As with all emergent processes, this can be somewhat observer-dependent: some people might not consider the emergent phenomena to be what the researchers claim them to be, because "it is only individuals and interactions, not a society". As we stated earlier, societies absolutely are the individuals and their interactions. It is just that we observe social phenomena at a different abstraction level. Sometimes this observation is not shared, and some people speak about a system in terms of a lower abstraction level while others speak in terms of the higher one (in this case, the individual and social levels), not realizing that these are different perspectives of the same thing.

Because of these different perspectives, and different goals, people have modelled societies at different levels, and with different degrees of simplification. For example, social models involving cellular automata or grid worlds (e.g. Epstein and Axtell, 1996) have been severely criticized for being "non realistic". But, as we stated, any model is non realistic in some sense. Other approaches include the modelling of societies from an adaptive autonomous agents (or behaviour-based systems) perspective (e.g. Hemelrijk, 1999; Gershenson, 2001a), or from a rational agents perspective (e.g. Castelfranchi, 1998). All these are just examples, since the specific approach of each model is hard to classify rigidly. We would not like to say that any single approach is the most effective, but we do need to note that it is difficult to compare different models if they take different perspectives, goals and assumptions. Rational agents are more effective for modelling social reasoning; behaviour-based systems are more effective for modelling social adaptive behaviour; and so on. Every approach has its pros and cons, since each studies societies at different levels and in different contexts.
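To make the "grid world" style concrete, here is a schematic sketch in the spirit of such models. It is not Epstein and Axtell's actual model: the grid size, the resource ("sugar") distribution, and the move-to-the-richest-cell rule are all invented here for illustration.

```python
import random

SIZE = 10
random.seed(0)
# A lattice of resource ("sugar") levels, and a handful of agents on it.
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(5)]

def step(agents, sugar):
    moved = []
    for x, y in agents:
        # Each agent inspects its von Neumann neighbourhood (and its own
        # cell), moves to the richest cell, and harvests its sugar.
        options = [((x + dx) % SIZE, (y + dy) % SIZE)
                   for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]]
        x, y = max(options, key=lambda p: sugar[p[0]][p[1]])
        sugar[x][y] = 0
        moved.append((x, y))
    return moved

for _ in range(10):
    agents = step(agents, sugar)
```

Even this crude sketch shows the style of the approach: global patterns, such as how the agents distribute themselves over the resource landscape, are observed rather than programmed, which is exactly why the "realism" of the individual rules can only be judged relative to a context and a goal.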

Also, there is a variety of specific aspects of societies that models try to describe (in different contexts). These include the study of communication and language (e.g. Cangelosi and Parisi, 2001; Steels and Kaplan, 2002), different types of self-organization and adaptation (e.g. Mataric, 1995; Steels, 1996), coordination (e.g. Di Paolo, 2000) and cooperation (e.g. Hemelrijk, 1999; Cohen et al., 1999), imitation and induction (Gershenson, 2001a; Dautenhahn and Nehaniv, 2002), evolution and co-evolution (e.g. Cangelosi and Parisi, 2001), and the emergence of social properties (e.g. Hemelrijk, 1999; Cohen et al., 1999; Gershenson, 2001a). Just as we cannot compare models from different perspectives or contexts, we cannot compare models of different aspects of society.

In other words, we cannot say which one models social behaviour "better": a model of cooperation in adaptive autonomous agents, a model of the emergence of communication in robots, or a model of the coordination of rational agents. This is because each one models different aspects of sociality, and from different perspectives. While judging a model we always need to take into account which perspective it takes, because no model will be able to have all the properties of a society (we are not even able to define them properly...). If, for a given perspective and aspect, a model is not able to simulate the modelled aspect of the natural system seen from the same perspective, it will not be valid. And among valid models, it seems we are interested only in the ones which explain non-trivial phenomena.

* Social Behaviour and Cognition

It is clear that individuals need to know something about the individuals they interact with in order to present social behaviour. It is also a widely accepted theory that our cognition evolved because of the complexity of our societies[11] (Dunbar, 1998). Therefore, in order to study human cognition, we need to study our societies. We acquire our intelligence (whatever intelligence may be) through social interactions. An argument for this, without having to isolate a newborn and study its non-development, would be to compare the genetic differences between present-day humans and those of ten thousand years ago (or one thousand years ago, or one hundred years ago). The genetic difference is incomparable to the cognitive and cultural differences found in any of these periods of time. How can we explain this? Accepted and sensible ideas note the role of the storage of information outside our brains (e.g. writing) and of complex communication (languages), which allowed our ancestors to accumulate knowledge. But all this only makes sense, and can be studied, inside a social context. A Homo sapiens sapiens per se would be unable to acquire by his or her own experience a minimum of what we are taught by our families and social groups. Taking this to a larger scale, how could an isolated individual acquire culture and civilization?[12] So, if we attempt to study the evolution or acquisition of human (and it seems any) cognition, we need to include a social perspective. Recent research in Artificial Intelligence has already noted this importance (e.g. Breazeal, 1999; Steels and Kaplan, 2002). The importance of social factors has also been noticed in the study of consciousness (Thompson, 2001).

Another issue is that societies seem to be important not only for the study of cognition, but from some perspectives, they can be seen themselves as decision-making entities (e.g. Seeley and Towne, 1991) and even as cognitive entities (e.g. Chialvo and Millonas, 1995). That is, the processes at the social level in particular cases can be very similar to those found in a cognitive entity.


* Conclusions

The ideas presented here were an attempt to motivate researchers simulating social behaviour to initiate discussions of the philosophical issues arising from their work. Therefore, there are no conclusions; the discussion is open. We simply invite the readers to enhance the discussion in order to evolve the ideas presented here. Hopefully, we will soon reach more or less admissible "rules of our game".

* Acknowledgements

I give thanks to Nadia Gershenson, Ezequiel Di Paolo, Inman Harvey, Jaime Besprosvany, and Leon Lederman for fruitful discussions and comments. This work was supported in part by the Consejo Nacional de Ciencia y Tecnologia (CONACYT) of Mexico and by the School of Cognitive and Computer Sciences of the University of Sussex.


1 An agent is an entity and a system.

2 Following the concept of purposeful behaviour of Rosenblueth and Wiener (1968).

3 For Castelfranchi (1998) a "social purpose" would be a "social goal".

4 For a general introduction to complex systems, see Bar-Yam (1997).

5 For the ontological difference between the relative being and absolute being, please refer to Gershenson (2001b; 2002).

6 A good example of this can be observed with random boolean networks (Kauffman, 1993; Gershenson, submitted).

7 This can be seen as a version of the "silly theorem problem" (Gershenson, 2001b): for any silly theorem, you can define infinite sets of axioms so that the silly theorem is consistent with the axioms.

8 Less incompletely speaking, we would say that the experience is based on the axioms (beliefs) and the reasonings (logical arguments) built upon the axioms, the reasonings are based upon the beliefs and experience, and beliefs are based upon reasonings and experience (Gershenson, 2002).

9 Again, we would have the "silly theorem problem" (Gershenson, 2001b).

10 This idea is clearly presented by Michael Arbib (1989), speaking about brain models: "a model that simply duplicates the brain is no more illuminating than the brain itself" (p. 8). For understanding complex systems, simplifications are necessary. But we need to justify our simplifications anyway.

11 We should notice also that, as the complexity of our cognition increases, it increases the complexity of the society, stimulating the individuals to increase their complexity as well. This can be seen as a self-reinforcing process or as a positive feedback loop.

12 Not that it cannot be done, but then the complexity of the individual would need to be similar to the complexity of a culture... which for humans, seems not to be the case.

* References

ARBIB, M. A. (1989). The Metaphorical Brain 2. Neural Networks and Beyond. John Wiley & Sons.

BAR-YAM, Y. (1997). Dynamics of Complex Systems. Addison-Wesley.

BREAZEAL, C. (1999). Imitation as Social Exchange between Humans and Robots. Proceedings of the 1999 Symposium on Imitation in Animals and Artifacts (AISB99), pp. 96-104. Edinburgh, Scotland.

CANGELOSI, A. and D. PARISI (2001) (Eds.). Simulating the Evolution of Language. Springer.

CASTELFRANCHI, C. (1998). Modelling social action for AI agents. Artificial Intelligence 103, pp. 157-182.

CHIALVO, D. R., MILLONAS, M. M. (1995) How swarms build cognitive maps. Santa Fe Institute Working Paper 95-03-033.

COHEN, M. D., R. L. RIOLO, and R. AXELROD (1999) The Emergence of Social Organization in the Prisoner's Dilemma: How Context-Preservation and Other Factors Promote Cooperation. Santa Fe Institute Working Paper 99-01-002.

DAUTENHAHN, K. and C. L. NEHANIV (Eds.) (2002), Imitation in Animals and Artifacts. MIT Press.

DI PAOLO, E. A. (2000). Behavioral coordination, structural congruence and entrainment in a simulation of acoustically coupled agents. Adaptive Behavior 8 (1).

DI PAOLO, E. A., J. NOBLE, and S. BULLOCK (2000). Simulation Models as Opaque Thought Experiments. Artificial Life VII, pp. 1-6.

DUNBAR, R. I. M. (1998). The social brain hypothesis. Evolutionary Anthropology 6, pp. 178-190.

EPSTEIN, J. M. and R. L. AXTELL (1996). Growing Artificial Societies: Social Science from the Bottom Up. The Brookings Institution Press, Washington, D. C. & The MIT Press, Cambridge, Massachusetts.

GERSHENSON, C. (2001a). Artificial Societies of Intelligent Agents. Unpublished BEng Thesis. Fundación Arturo Rosenblueth, México.

GERSHENSON, C. (2001b). Comments to Neutrosophy. In Smarandache, F. (Ed.) Proceedings of the First International Conference on Neutrosophy, Neutrosophic Logic, Set, Probability and Statistics, University of New Mexico. Gallup, NM.

GERSHENSON, C. (2002). Complex Philosophy. Proceedings of the 1st Biennial Seminar on Philosophical, Methodological & Epistemological Implications of Complexity Theory. La Habana, Cuba.

GERSHENSON, C. (submitted). Classification of Random Boolean Networks.

GOLDSPINK, C. (2000). Modelling Social Systems as Complex: Towards a Social Simulation Meta-model. Journal of Artificial Societies and Social Simulation 3 (2).

GOLDSPINK, C. (2002). Methodological Implications of Complex Systems Approaches to Sociality: Simulation as a foundation for knowledge. Journal of Artificial Societies and Social Simulation 5 (1).

HEMELRIJK, C. K. (1997). Cooperation Without Genes, Games Or Cognition. In Husbands, P. and I. Harvey (Eds.) Fourth European Conference on Artificial Life, pp. 511-520. MIT Press.

HEMELRIJK, C. K. (1999). An individual-oriented model of the emergence of despotic and egalitarian societies. Proc. R. Soc. Lond. 266, pp. 361-369.

KAUFFMAN, S. A. (1993) The Origins of Order. Oxford University Press.

MAES, P. (1991) (Ed.). Designing Autonomous Agents: Theory and Practice From Biology to Engineering and Back, MIT Press.

MATARIC, M. J. (1995). Designing and Understanding Adaptive Group Behavior, Adaptive Behavior 4(1).

MATURANA, H. R. and F. J. VARELA (1987). The Tree of Knowledge: The Biological Roots of Human Understanding. Shambhala.

POPPER, K. (1934). The Logic of Scientific Discovery.

ROSENBLUETH, A. and N. WIENER (1968). Purposeful and non-purposeful behaviour, in Buckley, W. (ed.), Modern Systems Research for the Behavioural Scientist, Aldine, Chicago.

SEELEY, T. D. and W. F. TOWNE (1991). Collective decision making in honey bees: how colonies choose among nectar sources. Behav. Ecol. Sociobiol. 28, pp. 277-290.

STEELS, L. (1995). Building Agents out of Autonomous Behavior Systems. In Steels, L. and R. Brooks (Eds.). The Artificial Life Route to Artificial Intelligence. Building Embodied Situated Agents. Lawrence Erlbaum Assoc. Hillsdale, NJ.

STEELS, L. (1996). The Spontaneous Self-organization of an Adaptive Language. In Muggleton, S. (ed.) Machine Intelligence 15. Oxford University Press.

STEELS, L. and F. KAPLAN (2002). AIBO's first words. The social learning of language and meaning. Evolution of Communication, 4 (1).

THOMPSON, E. (2001). Empathy and Consciousness. Journal of Consciousness Studies 8 (5-7), pp. 1-32.

VELICER, G. J., L. KROOS, and R. E. LENSKI (2000). Developmental cheating in the social bacterium Myxococcus xanthus. Nature 404, pp. 598-601.

VERSCHURE, P. F. M. J. (1998). Synthetic Epistemology: The acquisition, retention, and expression of knowledge in natural and synthetic systems. Proceedings of the 1998 World Conference on Computational Intelligence, WCC '98, Anchorage, Alaska. IEEE. pp. 147-152.

WEBB, B. (2001). Can Robots Make Good Models of Biological Behaviour? Behavioral and Brain Sciences 24 (6).

WILSON, E. O. (1975). Sociobiology: The New Synthesis. Harvard University Press.


