© Copyright JASSS


Chris Goldspink (2000)

Modelling social systems as complex: Towards a social simulation meta-model

Journal of Artificial Societies and Social Simulation vol. 3, no. 2,
<https://www.jasss.org/3/2/1.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 3-Feb-00      Accepted: 5-Mar-00      Published: 31-Mar-00


* Abstract

There is growing interest in extending complex systems approaches to the social sciences. This is apparent in the increasingly widespread literature and journals that deal with the topic and is being facilitated by the adoption of multi-agent simulation in research. Much of this research uses simple agents to explore limited aspects of social behaviour. Incorporation of higher order capabilities such as cognition into agents has proven problematic. Influenced by AI approaches, attempts to give agents cognitive capability have commonly been based on a 'representational' theory of cognition, which has proven computationally expensive and difficult to implement. There would also be some benefit in the development of a framework for social simulation research which provides a consistent set of assumptions applicable in different fields and which can be scaled to apply to simple and more complex simulation tasks. This paper sets out, as a basis for discussion, a meta-model incorporating an 'enactive' model of cognition, drawing on both complex systems insights and the theory of autopoiesis. It is intended to provide an ontology that avoids some of the limitations of more traditional approaches while providing a basis for simulation in a wide range of fields and of a wider range of human behaviours.

Keywords:
Complex systems, Autopoiesis, social simulation, cognition, agents, modelling, meta-model, ontology

* Introduction

1.1
There is growing interest in the theory of complex systems, with a significant extension of that theory to social systems (Eve et al 1997; McKelvey 1997; McKelvey 1999; Goldspink 1999; Marion 1999). Non-linear modelling of social systems is also being increasingly pursued, particularly through agent-based simulation (Gilbert & Troitzsch 1999; Conte et al 1997; Gilbert & Conte 1995). Agent based simulation is ideally suited to exploring the implications of non-linearity in system behaviour and also lends itself to models that are readily scalable in scope and level. The approach is useful for examining the relationship between micro-level behaviour and macro outcomes. Such exploration can be arranged hierarchically, where social agents (micro actors) may be individuals or systems of individuals (societies).

1.2
Many social systems may usefully be modelled using simple agents (see for example the work of Epstein and Axtell 1996). These do, however, impose limits on the range and scope of social behaviour that can be investigated-particularly where human social systems are the subject of the research.

1.3
What is distinctive about human social systems is that they are comprised of agents (humans) who have the capacity for language and who are reflexive or self-aware. From a complex systems perspective this adds a layer of complexity with which theory and methodology have yet to come to terms.

1.4
McKelvey (1997, p. 7) has argued that when examining social behaviour we are concerned to understand the interrelationship and interaction of four sources of order:
  • physical order-reducible to the four forces of field theory;
  • organic order-the result of natural selection;
  • rational order-rational actor decision effects; and
  • complexity.
The latter three sources of order are all relevant to social science. I have argued elsewhere (Goldspink 1999) that social science has tended to confuse order arising from complexity with rational order, or alternatively has ignored it and adopted methods that exclude it-notably methods derived from Newtonian concepts.

1.5
Considerable attention has been given to agent based models of organic systems. This is evident in the increasingly sophisticated use of Artificial life (Alife) and Genetic Algorithms (GAs). Traditionally, within Artificial Intelligence (AI), attempts to accommodate rational order have involved incorporating simplified rule sets or incorporating representationalist cognitive theory into agent architecture. The former has resulted in some valuable insight but frequently requires extensive simplification, and as a consequence there are limits to what models so derived can tell us about real social systems. In particular, they are commonly designed to model only one aspect of human social behaviour and lack more general applicability. At their worst, such models can prove misleading if taken to be reliable analogues of real world phenomena, as some are now claiming about neo-classical economic models built on assumptions of 'utility maximising rational actors' (Ormerod 1995; Ormerod 1998; Hodgson 1996; Arthur et al 1997). The use of representationalist cognitive approaches has proven expensive in computational terms and both elusive and difficult to implement (Brooks 1991a; Brooks 1991b). The interest in complexity as a source of order is more recent. Within social science, it has become the norm to approach social order as derivative of rational (or boundedly rational) processes. Social theorists have often confused or confounded teleology and teleonomy in systems behaviour (Burrell and Morgan 1994; Goldspink 1999; Goldspink 2000). In addition, many traditional methods of research adopt linear concepts of causality and as a consequence fail to attend to, or even obscure, complex sources of order. There is a need, then, for an approach or methodology which avoids these problems. It is increasingly argued that agent based simulation offers one path forward (Conte & Gilbert 1995; Axelrod 1997; Troitzsch 1997). However, agent based models need to avoid adopting social concepts that assume away many of the phenomena of interest. If at least some social phenomena which are typically assumed to arise through rational behaviour arise instead from complex dynamics that are little influenced by conscious intent, then we need to allow for this in the foundation assumptions incorporated into the model design. There is a need to develop an ontology that accepts as legitimate those dynamics that emerge as a consequence of a complex interplay of different sources of order because, as McKelvey (1997) notes, this may be where the phenomena of greatest interest are to be found.

1.6
Significant benefit could be realised from the development of a high level model-a meta-model-which would provide a consistent ontology to guide future research, reducing or standardising some fundamental assumptions. In particular, what would be useful is an ontology that allows models to be scaled to include increasing degrees of cognitive capacity and linguistic capability without adopting assumptions linked too strongly to one particular social theory. Such an approach should not preclude exploration of the intersection of order arising from organic, rational and complex organisational sources. What is sought, then, is a set of concepts that enhance our ability to explore social phenomena as potentially emergent from underlying dynamical processes. Ideally, the concepts chosen should map easily onto comparable concepts in the physical sciences and real world phenomena. The selected concepts should not require an a priori choice of one existing set of assumptions about the nature of social systems over another-a sociological perspective rather than a psychological one, for example.

1.7
Set out here is a tentative model. It is offered as a stimulus to debate. A medium term aim would be to develop a framework such as this into an intermediate level modelling language. This may take the form of an extension to an existing agent based framework, such as SWARM, developed by the Santa Fe Institute. The approach adopted here incorporates developments in the theory of complex systems and the theory of autopoiesis (Maturana & Varela 1980; Maturana & Varela 1988; Maturana, Mpodozis & Letelier 1995; Varela 1981; Varela 1984; Varela, Thompson & Rosch 1992) to present a meta-model that avoids some of the problems of traditional AI approaches.

* In search of a minimal agent

2.1
The socio-biologist E.O. Wilson (1975) cautioned against adopting excessively narrow definitions of 'social', arguing that broad definitions are needed to "...prevent the exclusion of many interesting phenomena." This sentiment is worth heeding, particularly where the interrelationships between physical, organic, rational and complex phenomena are of interest.

2.2
Bura et al (1995) cite Ferber as providing a definition of an agent as follows:
A real or abstract entity that is able to act on itself and its environment; which has a partial representation of its environment; which can, in a multi-agent universe, communicate with other agents; and who's behaviour is a result of its observations, its knowledge and its interactions with the other agents. (Ferber 1989, p. 249)

2.3
This is a good example of a definition of agent as intentional if not teleological. It embodies representationalist assumptions about cognition and reified concepts of information and knowledge. As such, it echoes the assumptions adopted in most social theory. Using such a definition as the basis of social simulation would re-embed these assumptions into the research, potentially obscuring order contributed by natural or structural properties of the system. For the purposes of the meta-model the inclusion of teleology as a founding assumption is unacceptable, as the meta-model is intended to support exploration of natural as well as intentional social phenomena. It is worth examining each aspect of Ferber's definition in turn in order to identify where the definition can be simplified and the assumption set reduced. This exploration can then be used to develop an alternative concept of agent more suited to the task at hand.

2.4
Ferber identifies an agent as "a real or abstract entity". All agents used in computer simulations are artificial and are used to produce 'virtual' societies. These societies may or may not be designed to be analogous to real or natural societies. In this sense, the real should be interpreted as bracketed, that is, to imply realistic or like a real equivalent.

2.5
Ferber states that an agent will have
  • "The capacity to act on itself and the environment". If a parsimonious definition is sought this statement may be cast too broadly. It may imply, for example, the capacity to self-regulate-a capacity requiring relatively simple feedback processes-or may extend to a reflexive capacity-self-awareness-which is a far more demanding requirement. A capacity to act on its environment is simply to say that it should have an environment. By definition, the environment is constituted by those things with which the agent interacts but over which it can exercise no direct control. If an agent has an environment and has some behavioural repertoire then it has the potential to influence the environment and to be influenced by it.
  • "Has a partial representation of the environment"-It is not clear why this aspect of the definition has been included except on the assumption that cognition implies 'representation'[1]. It should not be necessary that an agent have a representation of the environment unless it is being argued that this is necessary for the agent to interact with the environment. An agent and its environment should have, as a minimum condition, the capacity to mutually perturb one another. Thus the agent's behaviour may be influenced by changes in environment (system parameters) and may in turn alter those parameters. This does not require any capacity to 'represent' the environment. If the agent has some behavioural plasticity, it is conceivable that it may, over time, structurally couple with the environment such that to an observer they appear to be acting in a co-ordinated way. Again, this does not require representation although coupling could be described as expressing some structural synchronicity. In other words, the structure of the agent has the capacity to mirror an analogue of the structure of the environment within certain bounds. No further 'representation' than this should be required or implied.
  • "Which can communicate...and who's behaviour is the result of its observations, its knowledge and its interactions with other agents"-This set of requirements contains some redundancy and arguably anthropomorphises the agent. The minimum requirement is to interact and to possess sufficient behavioural plasticity and self-regulatory capacity to maintain recurrent interaction over time-that is, to be able to persist amidst its interactions. Communication and knowledge implies the need for higher order functions and reflexive capacity. Two agents that enter recurrent mutual perturbation can be said to be 'communicating' to the extent that structural change in one will trigger structural change in the other. 'Knowledge' in this sense may arise where recurrent mutual perturbation by one agent with, lets say, another class of agent, results in structural change which improves the agent's capacity to interact in some useful way with other agents of that or similar classes. As an example, an immune system, having learned to recognise a class of virus will remain sensitive to that and similar viruses in future and thus be more able to neutralise them in the system of the body of which they are a part.

2.6
The following alternative minimum definition is suggested. An agent is:
A natural or artificial entity with sufficient behavioural plasticity to persist in its medium by responding to recurrent perturbation within that medium so as to maintain its organisation[2].
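To make this definition concrete, the following sketch (in Python; the class and attribute names are hypothetical and not part of the meta-model) models an agent as nothing more than a bounded internal variable plus a limited capacity to compensate for perturbation. The agent persists only so long as its compensations keep that defining variable within the range that constitutes its organisation.

import random

class MinimalAgent:
    """A minimal agent: persists by compensating for recurrent perturbation
    so as to keep its defining (organisational) variable within bounds."""

    def __init__(self, viability=(0.0, 10.0), plasticity=0.5):
        self.low, self.high = viability            # the bounds that define its organisation
        self.state = (self.low + self.high) / 2.0
        self.plasticity = plasticity               # how much it can compensate per step
        self.alive = True

    def perturb(self, magnitude):
        """A perturbation from the medium displaces the state; the agent responds
        only with whatever compensation its structure allows."""
        if not self.alive:
            return
        self.state += magnitude
        correction = (self.low + self.high) / 2.0 - self.state
        self.state += max(-self.plasticity, min(self.plasticity, correction))
        if not (self.low <= self.state <= self.high):
            self.alive = False                     # organisation lost: the agent disintegrates

# A medium that does nothing but perturb the agent at random.
agent = MinimalAgent()
for step in range(100):
    agent.perturb(random.uniform(-1.0, 1.0))
print("agent persisted" if agent.alive else "agent lost its organisation")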

* Incorporating cognition

3.1
The possibility of deriving 'intelligence' from more simple agents was explored in Minsky's Society of Mind in 1987. Here Minsky was concerned to identify how "intelligence could arise from non intelligence" (Minsky 1987, p. 17). He proposed that 'mind' be considered as a 'society of agents', each with distinct functions. Mind, he argued, arises as an emergent property of the interaction between these functionally differentiated agents. These 'agents of mind' are autonomous, able to become involved in many different sequences and patterns of interaction to perform different tasks, which, in combination, an observer would regard as 'intelligent'. Agents communicate with one another at a local level and there is no necessary control from higher levels. Each of Minsky's agents was pre-programmed with specific functional capabilities. If something approaching human intelligence is to be derived, an approach is required whereby agent capabilities can be bootstrapped from much simpler intrinsic properties. In other words, intelligence in biology has arguably arisen from auto-catalytic processes which have, under the influence of selective environmental forces, self-produced higher order capabilities (Kauffman 1993, Kauffman 1996) until the point where language and reflexivity have emerged. One approach to understanding how this may occur has been set out by Maturana and Varela in the theory of autopoietic systems.

Cognition in biological agents

3.2
Maturana and Varela have developed a concept of cognition consistent with the autopoietic nature of living systems (Maturana & Varela 1980; Maturana & Varela 1988; Maturana 1987; Maturana 1988). This approach has been developed to some extent for computational applications by Winograd and Flores (1986). Maturana and Varela argue that cognition takes place whenever an organism behaves in a manner consistent with its maintenance and without loss of identity, that is, without loss of any of its defining characteristics. Cognition defined in this way does not imply or require representation and therefore provides a basis for developing agents which do not include representative structures. Varela, Thompson & Rosch (1992), in their The Embodied Mind, link cognitivism (i.e. representationalist approaches to understanding cognition) to cybernetics, suggesting that the former is an outgrowth and extension of the latter, with application to the understanding of mind.

3.3
While this approach represents an advance on Behaviourist psychology, which adopts a simple systems view-inputs (stimuli) trigger outputs (behaviour)-and renders mind as a 'black box', it is still problematic. Cognitivism constructs a duality. The environment is experienced as a facticity and acted upon directly, but is also conceived and symbolically represented in the mind. Mind and behaviour are linked as hypothesis and experiment. The mind looks for patterns in representations and tests the degree to which these accord with the outside world. Attempts to incorporate this approach into AI have proven computationally costly and difficult to implement (Brooks 1991a; Brooks 1991b).

3.4
The advent of complexity theory has given greater impetus to connectionist models of mind such as neural networks. Here emergent structure or pattern arises from massively interconnected webs of active agents. Applied to the brain, Varela, Thompson & Rosch state:
The brain is thus a highly co-operative system: the dense interconnections amongst its components entail that eventually everything going on will be a function of what all the other components are doing (Varela, Thompson & Rosch 1992, p. 94).

3.5
It is important to note that no symbols are invoked or required by this model. Meaning is embodied in fine-grained structure and pattern throughout the network. Representational approaches require a direct mapping-symbol to symbolised. In other words, representational systems require a tangible referent, or at least a referent that can be mapped with minimal ambiguity. Most social phenomena do not have these properties. Connectionist approaches can derive pattern and meaning by mapping a referent situation in many different (and context dependent) ways. Meaning in connectionist models is embodied by the overall state of the system in its context-it is implicit in the overall 'performance in some domain'. Varela, Thompson & Rosch place Minsky's previously mentioned 'society of mind' somewhere between connectionist and cognitivist approaches. Here cognition arises from networks (societies) of smaller abstract functional capacities.

3.6
Representationism was arguably an advance on behaviourism, and connectionist models an advance on representation. Moving beyond both representational and connectionist models however, Varela, Thompson & Rosch note that:
an important and pervasive shift is beginning to take place in cognitive science under the very influence of its own research. This shift requires that we move away from the idea of the world as independent and extrinsic to the idea of a world as inseparable from the structure of [mental] processes of self modification. This change in stance does not express a mere philosophical preference; it reflects the necessity of understanding cognitive systems not on the basis of their input and output relationships but by their operational closure (1992, p. 139).

3.7
They go on to argue that connectionist approaches are not consistent with an approach which views biological agents as operationally closed, in that "...the results of its processes are those processes themselves" (1992, p. 139). They assert:
Such systems do not operate by representation. Instead of representing an independent world, they enact a world as a domain of distinctions that is inseparable from the structure embodied by the cognitive system (1992, p. 140).

3.8
They argue for an approach to cognition as 'enaction', an intertwining of experience and conceptualisation which results from the structural coupling of an autonomous organism and its environment. From this perspective, the importance of environment recedes from determinant to constraint. Intelligence shifts from being a problem-solving capacity to being the flexibility to enter into and engage with a shared world. From this point of view, to adequately capture the biological basis of cognition, biological agents should incorporate and be founded on an enactive concept of cognition, not a representative one.

Linguistic agents

3.9
In the biological world, organisms whose nervous systems have acquired a high level of development (reached sufficient complexity) may be capable of language. Organisms which have a history of recurrent interaction or co-ordination of action are in structural coupling, which means that they are mutually co-ordinating one another's behaviour (Maturana & Varela 1980). Sufficiently complex organisms may also co-ordinate these co-ordinations of behaviour (Maturana & Varela 1980; Maturana & Varela 1988; Maturana 1988). This co-ordination of co-ordination of behaviour is what Maturana and Varela call 'languaging'. Structural coupling or recursive compensatory behaviour between two or more autonomous agents thus gives rise to a linguistic domain.

3.10
This approach to language fundamentally challenges representational assumptions. Languaging is something that an observer can say is happening when he/she notices that two organisms are mutually orienting one another through the co-ordination of the co-ordination of their behaviour. Language is therefore a process that takes place within the domain of interactions of entities and is not something which takes place in the brain. To quote from Maturana:
language is a biological phenomena because it results from the operations of human beings as living systems, but it takes place in the domain of the co-ordinations of actions of the participants, and not in their physiology or neurophysiology...language as a special kind of operation in co-ordinations of actions requires the neurophysiology of the participants but it is not a neurophysiological phenomenon (1988, p. 45).

3.11
It is important to note, however, that no exchange of 'information' needs to be implied in order to explain the phenomenon of language. This means that the emergence of symbolic interactions does not imply the exchange of symbols that contain information about, or which represent, 'things'. Symbolic interaction is rather a process of mutual 'triggering' of behaviour which, having arisen in a consensual domain, is co-ordinated by and orientating for the organisms which participate.

3.12
By way of reinforcement of the above argument, simulation work undertaken by Hutchins and Hazlehurst (1995) demonstrates that shared lexicons can emerge through the recurrent interaction of 'individuals' with access to common referents. These shared lexicons do not require the sharing of internal structure. This work suggests a means by which language may be modelled for classes of biological agents that are observed to have this capacity. The approach avoids the need to embody reified concepts of 'symbol', 'information' or 'communication' which, following the above argument, are inconsistent with a biological understanding of the origins and nature of language.
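The sketch below is not Hutchins and Hazlehurst's connectionist model; it is a much simpler, purely illustrative 'naming game' in Python (all names and parameter values are hypothetical) making the same point: agents that repeatedly pair up around common referents, keeping whichever label happens to succeed, typically converge on a shared lexicon without ever exchanging or inspecting internal structure.

import random

REFERENTS = ["referent-1", "referent-2", "referent-3"]   # shared external referents
SYLLABLES = ["ba", "do", "ki", "lu", "na", "so"]

def new_word():
    return "".join(random.choice(SYLLABLES) for _ in range(2))

class Speaker:
    def __init__(self):
        self.lexicon = {r: [] for r in REFERENTS}        # candidate words per referent

    def name(self, referent):
        if not self.lexicon[referent]:
            self.lexicon[referent].append(new_word())    # invent a word if none is known
        return random.choice(self.lexicon[referent])

    def align(self, referent, word, success):
        if success:
            self.lexicon[referent] = [word]              # keep only the word that worked
        elif word not in self.lexicon[referent]:
            self.lexicon[referent].append(word)

population = [Speaker() for _ in range(20)]
for _ in range(5000):
    speaker, hearer = random.sample(population, 2)
    referent = random.choice(REFERENTS)
    word = speaker.name(referent)
    success = word in hearer.lexicon[referent]
    speaker.align(referent, word, success)
    hearer.align(referent, word, success)

# After many encounters the population typically converges on one word per referent.
for r in REFERENTS:
    print(r, {w for s in population for w in s.lexicon[r]})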

Reflexive agents

3.13
The existence of language improves an organism's ability to adapt to environmental perturbation by effectively increasing its structural plasticity. Once language has arisen it is possible for an organism to interact with itself so as to orient its own behaviour through reflexive linguistic behaviour. Maturana and Varela state this as follows:
An autopoietic system capable of interacting with its own states (as an organism with a nervous system can do), and capable of developing with others a linguistic consensual domain, can treat its own linguistic states as a source of deformations and thus interact linguistically in a closed linguistic domain (1980, p. 121).

3.14
Thus through recursive interactions (distinctions upon distinctions) it is possible for such an organism to treat some of its own linguistic states as 'objects' for further distinction. In other words, it becomes possible for an observer to become an observer of self.

3.15
Much current modelling of social systems is being undertaken using simulations that do not account for reflexivity. Many of the applications of complexity to social analysis, therefore, are being applied to non-human social systems (ant colonies etc.) or highly simplified 'human' systems. By developing a model which integrates a way of thinking about non-reflexive and reflexive social systems, the path is opened to evolve current methods into the human social domain.

* Meta-Model architecture-the ontology

4.1
The key concepts that make possible an ontology capable of avoiding the problems associated with founding agent based social simulation on representationalist assumptions have now been identified. The following meta-model is proposed to incorporate these concepts into the beginnings of an ontological framework upon which simulations can be built. The meta-model is based on the following elements (a minimal structural sketch follows the list):
  • A medium-a background or context within which the social entity will form and with which it will interact.
  • Agents-the autonomous, operationally closed character of individuals (as suggested by autopoiesis) and social systems (as implied by dissipative systems models) suggests the use of an 'agent' based framework. In this meta-model a distinction is drawn between:
    • primary agents-these are the agents which constitute the target population, i.e. the agents whose societies we wish to understand. These will generally be 'biological' and their biological nature will define their fundamental properties and capabilities.
    • secondary agents-these are agents which may be 'biological' or artificial, active or passive. While they enter into or are incorporated in and contribute to the behaviour of higher order agent structures (societies) they are not members of the primary target group. Note that secondary agents may be of the same agent class as the primary agents but belong to a different population.
  • Systems of agents-it is to be assumed that primary agents can and will assemble into systems of agents. These can be described as hierarchies and heterarchies. These structures will result from, and be described by referring to the nature of the structural coupling (see Glossary) that brings them about and maintains them in the medium.
  • Systems of systems of agents-these are systems comprising two or more coexisting systems of agents, which may or may not intersect.
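The structural sketch referred to above is given here in Python. It records only the distinctions just listed (medium, primary and secondary agents, systems of agents, systems of systems); all class names are hypothetical placeholders rather than a proposed implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    """Base class: a natural or artificial entity situated in a medium."""
    name: str

@dataclass
class PrimaryAgent(Agent):
    """A member of the target population; generally 'biological'."""

@dataclass
class SecondaryAgent(Agent):
    """Biological or artificial, active or passive; not in the target population."""

@dataclass
class AgentSystem:
    """A system of structurally coupled agents (hierarchy or heterarchy)."""
    members: List[Agent] = field(default_factory=list)

@dataclass
class SystemOfSystems:
    """Two or more coexisting systems of agents, possibly intersecting."""
    systems: List[AgentSystem] = field(default_factory=list)

    def intersections(self):
        """Names of agents that participate in more than one member system."""
        seen, shared = set(), set()
        for system in self.systems:
            for name in {agent.name for agent in system.members}:
                (shared if name in seen else seen).add(name)
        return shared

@dataclass
class Medium:
    """The background within which the social entity forms and interacts."""
    contents: List[Agent] = field(default_factory=list)

# Example: two work groups sharing one individual form a system of systems.
alice, bob, carol = PrimaryAgent("alice"), PrimaryAgent("bob"), PrimaryAgent("carol")
society = SystemOfSystems([AgentSystem([alice, bob]), AgentSystem([bob, carol])])
print(society.intersections())   # {'bob'}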

An alternative agent typology

4.2
Any system derived from this basic framework may comprise agents of a wide range of alternative types. The following Venn diagram indicates the family of agents suggested as most appropriate to the demands of such a model.

Figure 1: Venn diagram of agent classes and their relationship

Note that the possibility of non-biological cognitive agents is a consequence of the way in which cognition is defined and used here while the possibility of non-biological linguistic agents is speculative.

Passive agents or objects

4.3
Passive agents have properties but do not initiate interaction. They just are, in the way that, say, a rock is in the real world. Passive agents may interact with each other only if active agents bring them together or if they are brought together by other physical processes. One 'rock' may be thrown at another, for example, or may fall under the influence of gravity, and, depending on its properties (elasticity, surface texture etc.), may then behave in a certain manner. Similarly, active agents may interact with passive agents-an active agent may be physically restricted or blocked by a passive agent, for instance. Passive agents are included in the meta-model as it has been established that such objects can influence social behaviour. Portugali, Benenson & Omer (1997 and personal communication) have found, for example, that physical barriers such as roads and rivers can influence ethnic cluster formation in communities by influencing perceptions of proximity.
Active Agents

4.4
For anything interesting to happen in an agent-based system there is a need to include active agents. Active agents have properties that allow them to interact with other agents. The action potential of an active agent can vary markedly. A simple active agent, commonly known as a 'reactive' agent (Brassel et al 1997), may simply be able to 'receive' a message from another and 'transmit' a standard response. Others may be able to process input before demonstrating behaviour dependent on the results of that processing. Such behaviour may be guided by 'if-then' decision rules or some more complex decision algorithm. These are 'behavioural' agents and generally involve limited capacity for what would commonly be regarded as intelligent behaviour, having a predefined and externally programmed scope of behaviour. Very often much can be achieved using agents of this type. The work of Epstein and Axtell (1996), for example, embodies agents of such 'limited intelligence', but capable of producing complex behaviour, analogous to natural social behaviour, from simple local rules of interaction. As noted, agents in social simulation are commonly defined in teleological terms. Such agents are often called 'deliberative' or 'intentional' agents (Brassel et al 1997). Watt states, for example: "An agent will set out to do something, and do it; therefore it has competencies for intending to act, for action in an environment and for monitoring and achieving its goals" (Watt 1996, p. 2). Systems adopting agents that have such pre-programmed goals may still give rise to unexpected behaviour at the macro-level.
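As an illustration of the distinction between 'reactive' and 'behavioural' agents, the following sketch (hypothetical names and rules, for illustration only) shows a reactive agent that merely maps messages to standard responses, and a behavioural agent whose response also depends on simple if-then rules over its internal state.

class ReactiveAgent:
    """Receives a message and transmits a fixed, pre-programmed response."""
    RESPONSES = {"ping": "pong", "greet": "greet-back"}

    def receive(self, message):
        return self.RESPONSES.get(message, "ignore")

class BehaviouralAgent:
    """Processes input against simple if-then rules before acting;
    its scope of behaviour is fixed and externally programmed."""

    def __init__(self, energy=10):
        self.energy = energy

    def receive(self, message):
        # if-then decision rules conditioning the response on internal state
        if message == "food" and self.energy < 15:
            self.energy += 5
            return "eat"
        if message == "threat":
            self.energy -= 2
            return "flee" if self.energy < 5 else "stand-ground"
        return "ignore"

agent = BehaviouralAgent()
for msg in ["food", "threat", "threat", "threat", "noise"]:
    print(msg, "->", agent.receive(msg))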

4.5
A further class of agent is the 'adaptive' agent. Adaptive agents are capable of modifying some of their parameters or variable states or, in some instances, their rule set. Directed agents, which are also commonly used for social simulation, incorporate assumptions about goal directedness, often including bounded rationality and/or utility maximisation. These are not applicable to non-goal oriented entities and preclude exploration of non-directed processes between social agents. They can, however, reveal macro outcomes that are unanticipated consequences of individual goal seeking behaviour.

4.6
Agents may be used to model higher level structures, such as 'groups' and 'organisations', as well as individuals. The behaviour set of an agent will reflect rules consistent with the theory it is to be used to explore. For the purposes of the meta-model set out here, what is required is a minimal, low level definition of agent. Incorporation of higher level social theoretical assumptions, as is commonly the case with the classes of agent identified above, is avoided so as to maximise the model's utility for incorporation in social simulations directed at a wide range of alternative theoretical explorations. Two additional classes are therefore proposed: cognitive and biological.
Cognitive agents

4.7
The requirements for a neutral but flexible agent suggested above conform to what will be called a 'cognitive' agent. Such an agent has an intrinsic ability to adapt and modify its own structure to accommodate recurrent perturbation. This capability, consistent with Maturana and Varela's (1980) use of the term, represents its cognising ability. From this perspective, any agent capable of adjusting its structure to compensate for perturbation can be regarded as cognitive. Such an agent may or may not also belong to the class of biological agents.
Biological Agents

4.8
An important class of agent for all social theory is the 'biological' agent. While any such agent used in computer simulation will of course be artificial, this class of agent will be referred to as 'biological' in that it is designed to embody fundamental characteristics of real biological entities. A key characteristic of biological agents is that they are autonomous and self-producing. The theory of autopoiesis is directly applicable to such biological agents. It is consistent with the operationally closed and autonomous nature of biological agents and ideally suited to the needs of the meta-model.
The behavioural repertoire of meta-model agents
Behaviour
By definition, all active agents can demonstrate behaviour. Behavioural flexibility in animals extends on a continuum from reflex action at the low end to learning at the high end. Whatever the scope of behaviour, if agents reciprocally change their own behaviour, be it by reflex or choice, in response to that of another, they will become structurally coupled to one another. In the context of the meta-model proposed here, structural coupling constitutes the fundamental mechanism by which social behaviour can emerge between all forms of biological agent.
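A minimal sketch of structural coupling in this sense follows (hypothetical names; a single behavioural variable stands in for an agent's structure). Each agent repeatedly shifts its own behaviour slightly in response to the other's, and after many rounds of mutual perturbation the two behaviours match closely enough that an observer would describe them as co-ordinated.

import random

class PlasticAgent:
    """An agent whose single behavioural variable shifts slightly toward
    whatever it is recurrently perturbed by."""

    def __init__(self, plasticity):
        self.behaviour = random.uniform(0.0, 1.0)
        self.plasticity = plasticity

    def perturb(self, other_behaviour):
        # structural change triggered (not instructed) by the other's behaviour
        self.behaviour += self.plasticity * (other_behaviour - self.behaviour)

a, b = PlasticAgent(0.1), PlasticAgent(0.1)
for step in range(200):
    a_act, b_act = a.behaviour, b.behaviour   # each acts from its current structure
    a.perturb(b_act)                          # ...and is perturbed by the other's act
    b.perturb(a_act)

# Recurrent mutual perturbation leaves the two behaviours closely matched:
# to an observer the agents now appear co-ordinated, i.e. structurally coupled.
print(round(a.behaviour, 3), round(b.behaviour, 3))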
Adaptation

4.9
Some classes of active agents and all biological agents should be able to adapt. Plotkin (1994) sees strong parallels between adaptive variability arising through genetic adaptation and that arising through learned behaviour. He specifies the relationship between the primary (genetic) adaptive heuristic and the shorter term (learning) heuristic as a nested control hierarchy, noting that "the scaling factor is frequency of change in the world to which each is sensitive" (Plotkin 1994, p. 161). Sommerhoff (1974[1969], p. 177) also talks of the term adaptation being used to identify a range of forms of directive correlation. He notes that it can cover a range of forms which vary primarily in the implied 'back-reference period'. The term can be applied to a continuum of processes, from self-correcting or self-regulating processes-processes of continuous adaptation characterised by an instantaneous back-reference period (which equates to learning)-to phylogenetic adaptation, i.e. evolution, with a characteristically very long back-reference period.

4.10
In a biological organism, then, evolution provides long cycle variability suited to maintaining the agent's adaptation to long time cycle changes in the environment. The adaptive potential is realised at the level of the individual but affects the characteristics of the class. This form of plasticity is not available to non-biological agents unless they are modelled on similar mechanisms (e.g. genetic algorithms) and cannot be applied directly to agents which lack reproduction. Learning, on the other hand, offers the agent the potential to adapt to, or to accommodate, short cycle changes. Biological agents need to incorporate mechanisms for both forms of adaptive behaviour if the adaptive potential of animals is to be approximated. Plotkin suggests culture as a third order heuristic, an adaptation for propagating learning through communities (i.e. at the social level) and further improving responsiveness to change over short time frames. Culture also has other effects. It generates or reflects learning at meso and macro levels and may extend its geographic coverage. Culture supports adaptation at intermediate time scales-between individual learning and genetic adaptation. By means of culture, 'knowledge' is developed and retained over time spans longer than individual lifetimes but shorter than that required for inherited traits to accumulate in a population. Culture may also speed the potential macro impact of local action by accelerating the propagation of behaviour and ideas. Conversely, it may constrain development and change through cultural 'inertia'.
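The two adaptive time scales can be sketched in a few lines (all names, rates and population sizes are hypothetical). Each creature adjusts a learned component within its lifetime (the short-cycle heuristic), while selection and mutation act on the inherited component across generations (the long-cycle heuristic); culture, the intermediate heuristic, is omitted here for brevity.

import random

TARGET = 0.8     # hypothetical environmental optimum that the population tracks

class Creature:
    def __init__(self, gene):
        self.gene = gene        # inherited trait: the long-cycle (genetic) heuristic
        self.learned = 0.0      # within-lifetime adjustment: the short-cycle heuristic

    def learn(self, steps=10, rate=0.3):
        # learning nudges the phenotype toward the current optimum during one lifetime
        for _ in range(steps):
            self.learned += rate * (TARGET - self.phenotype())

    def phenotype(self):
        return self.gene + self.learned

    def fitness(self):
        return -abs(TARGET - self.phenotype())

population = [Creature(random.uniform(0.0, 1.0)) for _ in range(50)]
for generation in range(30):
    for creature in population:
        creature.learn()
    population.sort(key=lambda c: c.fitness(), reverse=True)
    parents = population[:25]
    # selection plus mutation: the slow heuristic; learning starts afresh each lifetime
    population = [Creature(random.choice(parents).gene + random.gauss(0.0, 0.05))
                  for _ in range(50)]

print("mean inherited trait:", sum(c.gene for c in population) / len(population))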

The meta-model structure

The medium

4.11
The medium represents the background environment or substrate of a social system. For any selected social system, the medium may contain active agents and/or passive agents. Active agents within the environment may be of similar or dissimilar type to those comprising the social system of interest. Active agents may be biological or artificial. Generally, heterogeneity should be assumed within populations of biological agents while homogeneity may be more likely within populations of artificial agents. Passive agents may also come in two types, natural or artificial.

Figure 2: The structure of the meta-model medium

4.12
A fundamental characteristic of the environment is that it presents and constrains the space of interaction of those agents within it. It will have topological characteristics, and its topology may be an important factor influencing the interaction of its member agents. Virtual societies-computer-mediated interaction that occurs in a 'virtual' space rather than a real one-are also of interest. Significantly, a characteristic of such interaction is its independence from topological proximity. The meta-model can accommodate both. Agents will interact along a temporal dimension as well as topological dimensions. These two are potentially related in that interaction at a distance may imply time delay, either the time for the agent to travel or the delay in sending a message. Time and topology may therefore be linked. The medium may be modelled as an active agent comprising active and passive agents. Similarly, social environmental features such as 'village' and 'city' can be modelled as agents. Such structures may be hierarchically organised.
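One way the temporal and topological dimensions might be linked is sketched below (hypothetical classes and units): a toroidal grid medium in which the delay before a perturbation reaches its recipient is proportional to the topological distance between sender and recipient.

import heapq
import itertools

class GridMedium:
    """A toroidal grid medium: delivery delay is proportional to topological
    distance, linking the temporal and topological dimensions of interaction."""

    def __init__(self, width, height, speed=1.0):
        self.width, self.height = width, height
        self.speed = speed
        self.pending = []                 # min-heap of (arrival_time, order, inbox, message)
        self._order = itertools.count()   # tie-breaker so heap entries never compare inboxes

    def distance(self, a, b):
        dx = min(abs(a[0] - b[0]), self.width - abs(a[0] - b[0]))
        dy = min(abs(a[1] - b[1]), self.height - abs(a[1] - b[1]))
        return dx + dy

    def send(self, now, sender_pos, recipient_pos, inbox, message):
        delay = self.distance(sender_pos, recipient_pos) / self.speed
        heapq.heappush(self.pending, (now + delay, next(self._order), inbox, message))

    def deliver_until(self, time):
        while self.pending and self.pending[0][0] <= time:
            _, _, inbox, message = heapq.heappop(self.pending)
            inbox.append(message)         # a recipient is represented here by a simple inbox list

medium = GridMedium(10, 10)
inbox = []
medium.send(now=0.0, sender_pos=(0, 0), recipient_pos=(4, 7), inbox=inbox, message="hello")
medium.deliver_until(5.0)   # too early: the toroidal distance is 4 + 3 = 7
medium.deliver_until(8.0)   # delivered
print(inbox)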
Primary agents

4.13
For social research, primary agents will generally be 'biological' agents. Where 'social' structure is to be extended to include artificial societies, i.e. societies comprising artificial agents, then the scope of the concept could be broadened to embrace any active agent.
Secondary agents

4.14
Secondary agents may be any class of agent.
Systems of agents

4.15
Systems of agents may form amongst agents of the same or different classes. Where active agents interact, they may mutually perturb one another. This may result in one of five possible outcomes (a simple illustrative sketch follows the list):
  1. they mutually annihilate;
  2. one party is annihilated;
  3. one party adjusts to accommodate the other;
  4. both parties mutually accommodate; or
  5. nothing happens.
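The classification sketch mentioned above is given here; the attributes (plasticity, force, robustness) and thresholds are hypothetical, intended only to show how an encounter between two agents could be resolved into the five outcomes.

from dataclasses import dataclass

@dataclass
class Party:
    plasticity: float    # capacity to adjust its own structure
    force: float         # how strongly it perturbs the other
    robustness: float    # how much perturbation it can absorb

def interaction_outcome(a, b):
    """Classify an encounter into the five outcomes listed above.
    Attributes and thresholds are hypothetical, for illustration only."""
    a_destroyed = b.force > a.robustness + a.plasticity
    b_destroyed = a.force > b.robustness + b.plasticity
    if a_destroyed and b_destroyed:
        return "mutual annihilation"
    if a_destroyed or b_destroyed:
        return "one party annihilated"
    a_adjusts = b.force > a.robustness          # must deform to persist
    b_adjusts = a.force > b.robustness
    if a_adjusts and b_adjusts:
        return "mutual accommodation"           # the only seed of social behaviour
    if a_adjusts or b_adjusts:
        return "one party accommodates"
    return "nothing happens"

print(interaction_outcome(Party(0.5, 0.6, 0.4), Party(0.5, 0.6, 0.4)))  # mutual accommodation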

4.16
The only interaction that can give rise to social behaviour is that indicated by number four, mutual accommodation. Systems formed by means of number three would constitute an aggregate but not a society. Systems formed by different classes of agent give rise to one of four distinctive matrices, each with its own characteristics. These matrices are:
Passive Matrix
A system comprising only passive agents will do nothing. It is at equilibrium.
Passive-active matrix
Systems comprising active and passive agents have the potential for dynamical behaviour. This will depend on the density of agent populations (and thus the likelihood and frequency of their encountering one another), their action horizon, the topology of the environment and the parameters to which agents are sensitive. Such systems may demonstrate a wide range of behaviour. If active agents can interact only with passive agents and not each other, behaviour should be predictable and subject to explanation through analysis. As the structural plasticity of reactive agents increases and/or the size of reactive agent populations increases, the system will display increasingly complex dynamics. The introduction of adaptive agents will immediately introduce complex causality and will render the system unable to be analysed deductively.
Active-active matrix
As the number of different types of agent increases (greater heterogeneity) the range of possible behaviours of the system will increase rapidly. Active agents with adaptive or learning capability will also lead to complex behaviour with the potential to change (increase or decrease) the behavioural scope of the system through divergent and convergent adaptation. This is because, despite possibly being of the same class, adaptive agents will have a unique history and hence be ontogenically heterogeneous. Where active and adaptive agents are present, are free to interact and are sensitive to one another's behaviour in one or more dimensions, they will become structurally coupled through mutual recurrent perturbation. The strength of the resulting coupling will depend on the internal structures of each agent, in particular their plasticity and sensitivity to certain types of perturbation. Agents may influence one another in one or many dimensions and the nature of the response, again depending on the structure, may be discrete, or continuous.

4.17
The highest level of structural coupling is that of symbiosis, where the coupled agents effectively merge and operate as a single composite agent. Many classes of agent, however, will operate within the same medium and undergo recurrent mutual perturbation without ever moving into symbiosis. It is important to note that both unity and medium are structurally coupled and both will act as selectors of change in the other (see Maturana and Varela 1988). Irrespective of whether social systems are regarded as autopoietic[3], to the extent that they comprise autopoietic unities (here represented by the qualities specified for biological agents) structural coupling between these agents forms the basis for a higher order system. Coupling between biological agents may occur within behavioural and/or linguistic domains. All of these aspects constitute structure and all intertwine and contribute to the overall behaviour of the social system which they integrate. None is primary or determinate of the others as each is linked with the other.

4.18
The degree of structural coupling that arises when two or more agents interact is a fundamental factor in determining the dynamics and emergent behaviour of the resulting structurally coupled system.

Figure 3: Structure of systems of agents

Social systems

4.19
The term social is used here to refer to patterns of interaction which take place and persist in time between at least two biological agents of the same class (species). Social interaction may include and be influenced by the presence of biological agents of another class and/or artificial agents, but interactions between a biological agent and a biological agent of another class or an artificial agent will not be called social.
Fractal structure of social aggregates

4.20
Entities that have become structurally coupled constitute higher order structures. Thus in social systems individuals come together in many potentially intersecting structures (work groups, families, sports clubs), which are part of larger, more extensive structures (corporations, sub-cultures, nations). Each of these structures at different levels may be treated as operationally closed in that the recurrent interaction is uniquely determined by the structures of the participating agents and their individual and collective histories of interaction. Their interaction is self-maintaining. If structurally coupled agents come across another agent of the same or a different class, and they in turn enter into recurrent interaction, the new arrival interacts with the previous two on the basis of their co-evolved structures.

4.21
In the case of human social systems, language plays a critical role. As Roos and Von Krogh have argued, "The world [of those who comprise the social system] is brought forth in language." (Roos and Von Krogh 1995, p. 95). They go on to quote Varela as saying:
whenever we engage in social interactions that we label as dialogue or conversation, these constitute autonomous aggregates, which exhibit all the properties of other autonomous units (Varela 1979, p. 269).
Thus in human societies, domains of interaction are primarily brought forth and maintained in language.
Degrees of freedom in structural coupling

4.22
It has been said that humans are creatures of habit. Habit, be it behavioural or conceptual (paradigms), may constrain the variability of interaction or serve to reduce the degrees of freedom. Habit development, norms, rituals and conventions may serve to reduce the density of interconnection (to collapse a potential many dimensions onto a few) in social systems and therefore become a basis for control of the dynamic characteristics. The more 'norms' constrain interaction, the more stable the society (and conversely the less adaptive or responsive to perturbation). Significantly, these patterns do not require a prospective, forward looking logic to become established. As Macy (1998, p. 3) notes, "The rules that secure social order emerge not from the shadow of the future but from the lessons of the past". Macy links social 'norms' to genetic inheritance, but there is no need for this: cultural transmission and selection will suffice. Indeed the stability and self-reproducing character of many norms, rituals and habits of action constitute a lineage of a kind (Plotkin 1994). Social routines such as these will continue to propagate to the extent that they help the social system of which they are a part to remain viable. They are, however, maintained on the basis of past contribution rather than prospective relevance. Importantly, the lack of need to invoke rational foresight implies no need for conscious action as a basis for explanation of co-operative interaction and regulative behaviour in social systems. Some individuals may choose to adopt the 'norm', perhaps seeing its social value, but 'blind following' will serve the same purpose. Further, if the normative strategy is robust, such 'blind following' need imply no weakness nor diminish the viability of the social system it helps to integrate. What we as observers call 'norms' may be emergent patterns which stabilise social dynamics but which themselves arise from those dynamics. In other words, they are an emergent self-regulatory mechanism. This is important for, as Macy points out, altruism as examined through analytic game theory implies the conscious and rational selection of a "...prudent detour in the pursuit of self interest" (Macy 1998, p. 4). Relaxing the necessity of rationality in order to explain either selfishness or altruism makes possible a broader explanatory framework. Co-operation does not imply altruism (Castelfranchi 1998), as it may be pursued for selfish or altruistic motives. Routines of co-operation (and, for that matter, of non-cooperation or hostility) constitute means for regulating the overall stability of social structures and systems of societies. The in-group/out-group phenomenon serves to break social networks into 'patches' (Kauffman & Macready 1995), while habitual or conscious co-operation between such groups serves to maintain linkages of varying strength, preserving some overall coherence and stability as well as adaptability.
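A toy sketch of this 'lessons of the past' mechanism follows (hypothetical parameters): agents on a ring repeatedly interact with neighbours and blindly adopt whichever of two arbitrary conventions has been most common in their own recent encounters. No agent looks forward or calculates benefit, yet a dominant 'norm' usually emerges and stabilises interaction, at least locally.

import random

N = 100
behaviours = [random.choice("AB") for _ in range(N)]   # two arbitrary conventions
memory = [[] for _ in range(N)]                        # each agent's recent encounters

for _ in range(20000):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N               # interact with a ring neighbour
    memory[i].append(behaviours[j])
    memory[i] = memory[i][-10:]                        # remember only the recent past
    # 'blind following': adopt whatever has been most common in past encounters,
    # with no forward-looking calculation of benefit
    behaviours[i] = max("AB", key=memory[i].count)

print("share following convention A:", behaviours.count("A") / N)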
Systems of systems of agents

4.23
Where dissimilar agents of significant complexity and in reasonable numbers come together and enter structural coupling, it is unlikely that this coupling will occur in only a few dimensions. Most societies will comprise many of the social units proposed in the previous section, and agents will often hold multiple memberships. That is to say, an individual agent may participate in the generation and maintenance of many social sub-systems at the same time. The presence of an individual agent common to many networks represents a point of intersection between these networks. A theory of social systems as complex systems must therefore be capable of being scaled at multiple levels, social systems within social systems, and conceived of as multiple systems of intersecting networks of interaction.

Figure 4: Structure of systems of systems of agents

Non-intersecting domains

4.24
Non-intersecting domains of social action will demonstrate dynamic properties consistent with the nature of coupling and the characteristics of the social matrix. Key factors affecting dynamics will be the differing properties of constituent agents and the number of agents in the network. More important will be the number of dimensions on which agents become coupled. With active agents, it must be remembered that this dimensionality may itself come under control at some level of the system, by means of negative feedback. This is what is implied by Kauffman's suggestion that systems migrate to the edge of chaos (Kauffman 1993; Kauffman 1996). Selective processes move systems towards the point of balance between stability and innovation, balancing short term viability for current conditions with long term survivability in the face of change. This closure and self-regulation will, in the case of non-intersecting systems, occur at the level at which the system is autonomously viable. It may also occur in relation to viable sub-systems nested within it.

4.25
Non-intersecting domains can be considered as an ecology of co-existing individual organisms. While co-existing, such organisms do not interact directly or participate in integrating sub-domains. Their only point of intersection is at the point of closure i.e. at the level of the system (or ecosystem) as a whole. They interact, therefore, only in that each acts on the medium and this may have consequences for others.
Intersecting domains

4.26
Where domains intersect, each individual agent participates in giving rise to and integrating different domains of social interaction. As this occurs through structural coupling, it must be appreciated that structural changes and deformations made to maintain viability, or in response to perturbations triggered in one domain, may spill over into other domains. What helps maintain integration in one domain may be dysfunctional in another. Thus, the domains will continually disrupt each other at points of intersection. As it is conceivable that every agent may be participating in many and different domains of action, intersecting domains have the potential to exhibit much more irregular and far from equilibrium behaviour. If these systems too can evolve to the edge of chaos, it should be expected that they would far more often be tripped over the boundary into instability. Indeed the 'optimum' proximity for these systems may be further into the stable zone, and some closure may occur to maintain them in that position. This may occur through the way in which sub-groupings form so as to reduce the degree of cross-membership, through reduction of 'patch' size, through reduction in the dimensional coupling between patches, or through the introduction of strong negative feedback to stabilise other potentially disruptive dimensions of operation. Kauffman & Macready (1995) have shown the importance of patch size for the stability of such systems and it is conceivable that, as the number of sub-groups agents participate in rises, the size of intersecting social groupings overall must fall to maintain some order.

4.27
Consistent with this, Hejl (1984) places the origins of social change in the multiplicity of intersecting domains, acting through the node of particular individuals. Although social systems are conservative systems due to their organisation, they generate phenomena of social change. This can be explained as resulting from the multi-component character of the individuals that constitute them.
The inner feedback of a social system is very often a conservative factor...In internally differentiated societies, social change seems to originate mostly from the interaction of social systems. Social systems always interact through the interactions of their components, i.e. the individuals that constitute the systems (Hejl 1984, p. 76).

Figure 5: Systems of structurally coupled agents give rise to domains of interaction

4.28
Each domain will drift in a 'parameter space' as the agents co-evolve. Domains may intersect where they have common members. This intersection means that the domains themselves couple and perturb one another. This may be modelled using Kauffman's concept of coupled rugged fitness landscapes (Kauffman 1993; Kauffman 1996).
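A simplified sketch in the spirit of Kauffman's coupled (NKC-style) landscapes follows; it is not Kauffman's model itself, and the parameters and table construction are hypothetical. Two domains each perform an adaptive walk on a rugged landscape whose fitness contributions depend partly on the other domain's traits, so each domain's moves deform the landscape the other is climbing.

import random

N, K, C = 8, 2, 2      # traits per domain, internal couplings, cross-domain couplings
random.seed(1)

def make_contribution_table():
    """Random fitness contribution per trait, fixed once generated, keyed by the
    trait's local context (its own neighbours plus some of the partner's traits)."""
    cache = {}
    def contribution(trait_index, context):
        key = (trait_index, context)
        if key not in cache:
            cache[key] = random.random()
        return cache[key]
    return contribution

def fitness(own, other, contrib):
    total = 0.0
    for i in range(N):
        internal = tuple(own[(i + k) % N] for k in range(K + 1))
        external = tuple(other[(i + c) % N] for c in range(C))
        total += contrib(i, internal + external)
    return total / N

def adaptive_step(own, other, contrib):
    """Flip one random trait and keep the change only if it improves fitness."""
    candidate = own[:]
    flip = random.randrange(N)
    candidate[flip] = 1 - candidate[flip]
    if fitness(candidate, other, contrib) > fitness(own, other, contrib):
        return candidate
    return own

contrib_a, contrib_b = make_contribution_table(), make_contribution_table()
a = [random.randint(0, 1) for _ in range(N)]
b = [random.randint(0, 1) for _ in range(N)]

for step in range(500):
    a = adaptive_step(a, b, contrib_a)   # each domain climbs its own landscape...
    b = adaptive_step(b, a, contrib_b)   # ...which is deformed by the other's moves

print("fitness of domain A:", round(fitness(a, b, contrib_a), 3),
      "fitness of domain B:", round(fitness(b, a, contrib_b), 3))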

* Conclusion

5.1
The meta-model set out here cannot be fully implemented with the current stage of development of computer simulation techniques. Agent based models are still, as Terna (1998) notes, "under construction". The model is put forward to contribute to the debate about appropriate directions for the application of the science of complexity in social disciplines and to propose a mechanism by which it may be advanced. It must be expected that a great deal of work needs to be done to determine if and to what extent the approach presented here is viable or meaningful in alternative social fields. It does provide a framework and a set of constructs which facilitates a mapping of complex systems concepts onto social systems theory. In this, it provides a basis for the incorporation of complexity into social research and a direction for the development of that research.

5.2
The meta-model is necessarily highly abstract. Making it operational will require several stages. While the basic technologies needed to implement it already exist in some form, they are commonly built upon fundamentally different and incompatible ontologies. They also frequently use different technologies and intermediate languages in their elaboration. Most approaches to date have also been built on an ontology consistent with existing assumptions of social theory and/or of AI and, as such, they suffer some limitations for examining the implications of complexity (see Wooldridge & Jennings 1995).

Meta-model as proforma

5.3
The meta-model captures some important characteristics of social systems and represents them in a language and using concepts consistent with complexity science. It is not suggested that it is necessary to incorporate all of the meta-model's elements into any specific social model. A model, by its nature, attempts to simplify-to strip situations back to only those properties needed to expose the features of interest. The meta-model captures the overall structural characteristics of a social system, representing them in a manner consistent with a complex systems viewpoint. It expresses them as a framework of concepts which facilitates exploration of the relationship between natural, structural and purposeful behaviour. Using this as a template it is possible to attempt more narrowly scoped and simplified simulations that are consistent in underlying assumptions with the meta-model. In this way the meta-model provides a proforma from which specific and more narrowly defined simulations may be developed at different times, within different disciplines and by different researchers, while retaining some consistency and hence comparability. This may help to build a body of understanding of social behaviour without the complication of work proceeding on the basis of different and possibly incompatible assumptions. It complements other developmental work on agent based ontologies, adding some additional elements, in particular insights drawn from the theory of autopoiesis. The main contribution of this meta-model is in offering an alternative to 'representationist' models where there is a need for an agent with advanced cognitive capability. It achieves this by suggesting an alternative pathway, one intrinsically suited to, and derivative of, complexity based research.

5.4
There are a number of logical stages to the development and extension of the meta-model:
  1. wide discussion as to the legitimacy and coherence of the meta-model as proposed;
  2. refinement of the model and development of the ontology and its expression in a symbol system;
  3. development of an intermediate modelling language consistent with the revised meta-model;
  4. development of a simulation platform (or modification of an existing one such as SWARM) which will facilitate the development of specific simulations consistent with the meta-model.


* Glossary

Domains
(from Whitaker 1996)
"A domain is a description for the "world brought forth"-a circumscription of experiential flux via reference to current states and possible trajectories. Maturana and Varela define a number of domains in developing autopoietic theory's formal aspects into a phenomenological framework:
Domain of interactions
'...the set of all interactions into which an entity can enter...'
Domain of relations
'...the set of all relations (interactions through the observer) in which an entity can be observed...'
Phenomenological domain
That set of actions and interactions '...defined by the properties of the unity or unities that constitute it, either singly or collectively through their transformations or interactions.'
Cognitive domain
The set of '...all the interactions in which an autopoietic system can enter without loss of identity...' An observer's cognitive domain circumscribes '...all the descriptions which it can possibly make.'
Consensual domain
'...a domain of interlocked (intercalated and mutually triggering) sequences of states, established and determined through ontogenic interactions between structurally plastic state-determined systems.'
Linguistic domain
'...a consensual domain of communicative interactions in which the behaviorally coupled organisms orient each other with modes of behaviour whose internal determination has become specified during their coupled ontogenies.'"

Operational Closure
An operationally closed system is one whose identity is specified by a network of relations and processes whose effects do not extend beyond that network. In such a system, any change in the relations between some components is reflected in changes in the relations between others. The structure may be configured so that certain relations are held constant in response to changes in others; as a minimum, those relations which define the organisation of the system must be held constant if the system is to continue to exist. The system may thus maintain a dynamic homeostasis among its components in response to internally generated change and to perturbation.

Note that to say a system is operationally closed is not to say that it is a closed system. A closed system is, by definition, a system that has no input or output of any kind. An operationally closed system may be, and frequently is, systemically 'open' in that it takes in energy or some other form of input from the environment and produces output, if only in the form of waste products such as heat. The important point is that operationally closed systems are closed 'informationally': they do not exchange 'information' with the environment. Their behaviour is internally determined and self-referential; it is determined by their structure, that is, by the specific properties, configuration and dynamics of the components that comprise them. The response of the system to perturbation is therefore determined by this structure.
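
As a purely illustrative aid (and not part of Maturana and Varela's own formulation), the minimal Python sketch below shows what a structure-determined response means in practice: the environment delivers only a trigger, while the size and direction of the resulting change are fixed by the system's internal relations. The weights and update rule are hypothetical.

    class OperationallyClosedUnit:
        def __init__(self, weights, state):
            self.weights = weights          # internal relations between components
            self.state = list(state)        # current component states

        def perturb(self, trigger):
            # The trigger carries no 'information' into the system; it merely
            # nudges one component. How that nudge propagates is determined
            # wholly by the internal network of relations (the structure).
            new_state = []
            for i, row in enumerate(self.weights):
                influence = sum(w * s for w, s in zip(row, self.state))
                new_state.append(influence + (trigger if i == 0 else 0.0))
            self.state = new_state
            return self.state

Two units with different internal weights will respond quite differently to the same trigger, which is the sense in which behaviour is internally determined rather than instructed from outside.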

Structural Coupling
If it is accepted that social systems can usefully be treated as operationally closed systems, then social system dynamics are determined by internal structure. The environment can only trigger change in the system; it cannot determine it. The response of a social system to perturbation will be determined by its structure at that time and by its prior history of interaction and adaptation.

Operationally closed systems, or unities, as Maturana and Varela (1980) have demonstrated, may become 'structurally coupled' to one another and to their environment through mutual recurrent perturbation. If these interactions are complementary, that is, if they maintain the viability of the interacting systems, this mutual adaptation reflects what we might call co-evolution. The strength of the resulting coupling will depend on the internal structure of each unity, in particular its plasticity and its sensitivity to certain types of perturbation. Operationally closed systems may influence one another in one or many dimensions, and the nature of the response, again depending on structure, may be discrete or continuous. Note that there is no determinism between operationally closed systems: the response of one system to another is determined by its own structure, and this response will commonly be non-linear. From this the origin of innovation becomes explicable (Teubner & Willke 1997): even where systems are coupled, non-linearity can produce discontinuous behavioural responses, with the potential to trigger reciprocal interactions leading to co-evolution in otherwise inexplicable directions.
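
The toy loop below, again illustrative only and not drawn from the source theory, couples two such units through recurrent mutual perturbation: each unit's output acts only as a trigger for the other, and each response is a non-linear function of the unit's own state. The class, parameters and update rule are hypothetical.

    import math

    class Unit:
        def __init__(self, state, plasticity):
            self.state = state              # simplified internal structure
            self.plasticity = plasticity    # sensitivity to perturbation

        def respond(self, perturbation):
            # Non-linear, structure-determined response to an external trigger.
            self.state += self.plasticity * math.tanh(perturbation - self.state)
            return self.state

    a, b = Unit(0.2, 0.1), Unit(-0.4, 0.3)
    for _ in range(50):
        out_a = a.respond(b.state)          # b's state perturbs a
        b.respond(out_a)                    # a's new state perturbs b
    # Over repeated interactions the two trajectories settle into a mutually
    # consistent pattern: a toy analogue of structural coupling.

Because each unit's plasticity differs, the coupled trajectory is not symmetric; changing either unit's internal parameters changes the joint outcome, which is the point of saying the coupling is structure-dependent rather than instructive.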


* Notes

1 For an overview of competing cognitive theory in AI see Clancey et al 1994.

2 The concept of organisation used here is consistent with that of Maturana and Varela.

3 This is a controversial subject (see Mingers 1995, Teubner & Willke 1997, Goldspink 1999).


* References

ARTHUR W.B., Durlauf S. N. & Lane D.A. 1997, The Economy as an Evolving Complex System II, Addison-Wesley, Reading Ma.

AXELROD R. 1997, 'Advancing the Art of Simulation in the Social Sciences', in Conte R., Hegselmann R. & Terna P. (eds) Simulating Social Phenomena, Springer, Berlin. p.p. 21-40.

BRASSEL K.H., Möhring M., Schumacher E. & Troitzsch K.G. 1997, 'Can Agents Cover All the World?', in Conte R., Hegselmann R. & Terna P. (eds) Simulating Social Phenomena, Springer, Berlin, p.p. 55-72

BROOKS R.A. 1991a, 'Intelligence without representation', Artificial Intelligence, No 47, p.p. 139-159.

BROOKS R.A. 1991b, 'Intelligence without reason', Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Darling Harbour, Sydney, Australia 24-30 August, Morgan Kaufmann, p.p. 569-595

BURA S., Guerin-Pace F., Mathian H., Pumain D. & Sanders L. 1995, 'Cities can be agents too: a model for the evolution of settlement systems', in Gilbert N. & Conte R. (eds) Artificial Societies: The Computer Simulation of Social Life, UCL Press, London, p.p. 86-102

BURRELL G. & Morgan G. 1994, Sociological Paradigms and Organisational Analysis, Virago, London.

CASTELFRANCHI C. 1998 'Through The Minds of the Agents' Journal of Artificial Societies and Social Simulation, Vol 1. No. 1, https://www.jasss.org/1/1/5.html

CLANCEY W.J., Smoliar S.W. & Stefik M.J. (eds) 1994, Contemplating Minds, MIT Press, Cambridge MA.

CONTE R., Hegselmann R. & Terna P. (eds.) 1997, Simulating Social Phenomena, Springer, Berlin

CONTE R. & Gilbert N. 1995, 'Computer Simulation for Social Theory', in Gilbert & Conte (eds) Artificial Societies, UCL Press, London, p.p. 1-15

EPSTEIN J.M. & Axtell R. 1996, Growing Artificial Societies, MIT Press, Cambridge MA.

EVE R.A., Horsfall S. & Lee M.E. 1997, Chaos, Complexity and Sociology: Myths, Models and Theories, Sage

FERBER J. 1989, Des objets aux agents. Doctoral thesis, University of Paris VI.

GILBERT N. & Conte R. eds. 1995, Artificial Societies: The Computer Simulation of Social Life, UCL Press, London.

GILBERT N. & Troitzsch K. G. 1999, Simulation for the Social Scientist, Open University Press, Buckingham.

GOLDSPINK C. 1999, Social Attractors: Applicability of Complexity theory to social and organisational analysis, Unpublished thesis, University of Western Sydney - Hawkesbury, New South Wales, Australia.

GOLDSPINK C. 2000, 'Contrasting linear and non-linear perspectives in contemporary social research: Organisation Theory', Submitted for consideration to Emergence.

HEJL P.M. 1984 'Towards a Theory of Social Systems: Self-Organization, Self-Maintenance, Self-Reference and Syn-Reference', in Ulrich H. & Probst G.J.B. eds, Self-Organisation and Management of Social Systems: Insights, Promises, Doubts and Questions, Springer-Verlag, Berlin.

HODGSON G. M. 1996, Economics and Institutions, Polity Press, Oxford

HUTCHINS E. & Hazlehurst B. 1995, 'How to invent a lexicon: the development of shared symbols in interaction', in Gilbert N. & Conte R. (eds) Artificial Societies: The Computer Simulation of Social Life, UCL Press, London, p.p. 157-189

KAUFFMAN S.A. 1993, The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press

KAUFFMAN S. 1996, At Home in the Universe: The Search for Laws of Complexity, Penguin, London

KAUFFMAN S. & Macready W. 1995, 'Technological Evolution and Adaptive Organizations', Complexity, Vol 1 No. 2, p.p. 26-43

MACY M. W. 1998, 'Social Order in Artificial Worlds', Journal of Artificial Societies and Social Simulation, Vol. 1, No. 1, https://www.jasss.org/1/1/4.html

MARION R. 1999, The Edge of Organisation: Chaos and Complexity Theories of Formal Social Systems, Sage.

MCKELVEY B. 1997, 'Quasi-Natural Organisation Science', Organization Science, No 8, p.p. 351-380

MCKELVEY B. 1999, 'Complexity Theory in Organization Science: Seizing the Promise or Becoming a Fad?', Emergence, Vol 1 No 1, p.p. 5-32

MATURANA H.R. 1987, 'Everything is Said By an Observer', In Thompson W.I. (ed.) 1987, Gaia: A Way of Knowing, Lindisfarne Press, Barrington, MA, p.p. 65-82

MATURANA H.R. 1988, 'Reality: The Search for Objectivity or the Quest for a Compelling Argument', The Irish Journal of Psychology, Vol. 9 No. 1, p.p. 25-82

MATURANA H.R. & Varela F. 1980, Autopoiesis and Cognition, Reidel

MATURANA H.R. & Varela F. 1988, The Tree of Knowledge - The Biological Roots of Human Understanding, Shambhala

MATURANA H.R., Mpodozis J. & Letelier J.C. 1995, 'Brain, Language and the Origin of Human Mental Functions', Biological Research, Vol. 28, p.p. 15-26

MINGERS, J. 1995, Self-producing Systems: Implications and Applications of Autopoiesis, Plenum Press, New York.

MINSKY M. 1987, The Society of Mind, Picador, London.

ORMEROD P. 1995, The Death of Economics, Faber and Faber, London

ORMEROD P. 1998, Butterfly Economics, Faber & Faber, London

PLOTKIN H. 1994, The Nature of Knowledge, Allen Lane

PORTUGALI J., Benenson I. & Omer I. 1997, 'Spatial Cognitive Dissonance and Sociospatial Emergence in a Self-organizing City', Environment and Planning, Vol 24, p.p. 263-285

ROOS J. & von Krogh G. 1995, Organizational Epistemology, St. Martins Press, N.Y.

SOMMERHOFF G. 1974[1969], 'The Abstract Characteristics of Living Systems', in Emery F.E. (ed.) Systems Thinking, Penguin. p.p. 147-202

TERNA P. 1998, 'Simulation Tools for Social Scientists: Building Agent Based Models with SWARM', Journal of Artificial Societies and Social Simulation, Vol. 1, No. 2, https://www.jasss.org/1/2/4.html

TEUBNER G & Willke H. 1997, 'Can Social Systems be Viewed as Autopoietic?', LSE Study Group, Report on Presentations, Meeting No. 3, 18 June.

TROITZSCH K.G. 1997, 'Social Science Simulation-Origins, Prospects, Purposes', in Conte R., Hegselmann R. & Terna P. (eds) Simulating Social Phenomena, Springer, Berlin. p.p 41-54

VARELA F. 1979, Principles of Biological Autonomy, North Holland, Amsterdam.

VARELA F. 1981, 'Describing the Logic of the Living: The Adequacy and Limitations of the Idea of Autopoiesis', in Zeleny M. (ed.) Autopoiesis: A Theory of the Living Organization, Elsevier-Verlag, Berlin.

VARELA F. 1984, 'Two principles of self-organization', in Ulrich H. & Probst G.J.B. eds, Self-Organisation and Management of Social Systems Insights, Promises, Doubts and Questions, Springer-Verlag, Berlin., p.p. 25-32

VARELA F., Thompson E. & Rosch E. 1992, The Embodied Mind, MIT Press, Cambridge MA.

WATT S., 1996, Artificial Societies and Psychological Agents, Knowledge Media Institute, The Open University

WILSON E.O. 1975, Sociobiology: The New Synthesis, Belknap Press, Cambridge MA

WINOGRAD T. & Flores F. 1986, Understanding Computers and Cognition: A New Foundation for Design, Ablex, Norwood, NJ.

WHITAKER R. 1996, 'Autopoietic Theory and Social Systems: Theory and Practice', http://www.acm.org/sigois/auto/AT&Soc.html#Luhmann

WOOLDRIDGE M.J. & Jennings N.R. 1995, 'Intelligent Agents: Theory and Practice', Knowledge Engineering Review, Vol 10 No 2, p.p. 49-62

----


© Copyright Journal of Artificial Societies and Social Simulation, 1999