© Copyright JASSS


Jürgen Klüver, Christina Stoica and Jörn Schmidt (2003)

Formal Models, Social Theory and Computer Simulations: Some Methodical Reflections

Journal of Artificial Societies and Social Simulation vol. 6, no. 2
<https://www.jasss.org/6/2/8.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 18-Sep-2002      Accepted: 17-Feb-2003      Published: 31-Mar-2003


* Abstract

The article deals with the general methodical foundations of computational and mathematical sociology. It proposes a universal modelling schema, based on the conception of social domains as formal systems or networks. The goals of constructing formal models in the social sciences are defined as theory testing, theory development and the analysis of universal Turing machines in order to obtain general laws concerning the behaviour of complex social systems. We illustrate our meta-theoretical considerations with several examples from our own research.

Keywords:
Computational Sociology; Formal Models; Mathematical Sociology; Methodical Foundations

* Introduction

1.1
Neither the construction of formal models nor their implementation as computer programs and their experimental testing is at present very unusual or new in the social sciences. For some time now there has existed a scientific subcommunity that defines itself by these formal methods; special journals, e.g., JASSS, CMOT and The Journal of Mathematical Sociology, and numerous conferences demonstrate that a new scientific sub-discipline has been established in a relatively short time.

1.2
Yet despite this successful development the new field of research still exists only at the periphery of mainstream social science, a fact that Fararo (1997) already noted with a touch of resignation. This observation is still valid; in particular, the important theoretical discourses and debates are seldom influenced by the use of formal models and simulation programs. In our experience it is often easier to discuss the results of simulations of social systems with theoretical physicists or evolutionary biologists than with theoretical sociologists.

1.3
The reasons why most professional social scientists still ignore the potential of social computer simulations are without doubt manifold, and in particular they have their roots in the social science curricula (cf. Hannemann 1988). But one important reason is also the fact that the concepts of formal modelling, mathematical social theory and social computer experiments are often used with rather different denotations and not always unambiguously. This makes it difficult even for interested scholars from other branches of the social sciences to become acquainted with these new methods and to see how they could use them for their own problems and purposes. The aim of this article is to clarify these concepts and to contribute to the methodical foundations of the computational and mathematical social sciences. We shall illustrate our general remarks with examples from our own studies.

1.4
To avoid misunderstandings we wish to emphasise that we neither believe that proposing certain formal methods solves general theoretical problems, nor are we of the opinion that our rather general remarks are at present sufficient to capture all aspects of social research. We intend to present a general proposal for the construction of models of social processes, while other approaches are certainly possible and in use. This article may therefore be taken as a contribution to the methodological discourse that the subcommunity of computational social scientists needs as much as every other scientific community.

* Models and Simulations

2.1
When speaking of formal and/or mathematical models and computer simulations we have to make an important restriction in advance: we do not deal with certain methods that have long been known in the natural sciences and in economics. By this we mean the formal representation of specific realms of reality by differential or difference equations and the simulation of the probable development of these realms by inserting particular parameter values into an appropriate computer program. This "classical" method is without doubt still important and is at present applied in the social sciences too (e.g. Hannemann 1988; Bahr and Passerini 1998). Particularly important in this context is the use of differential equations of the "Lotka-Volterra type", which are known from ecological biology. Epstein (1997), for instance, could demonstrate that the Lotka-Volterra equations represent a rather general type of equation that can be applied to very different social fields. This result shows that the classical methods of formal model construction and theory building are still of high relevance.
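
As a minimal illustration of this classical, equation-based style of modelling, the following Python sketch iterates a discrete Lotka-Volterra (predator-prey) system as a pair of difference equations. The parameter values and variable names are purely illustrative assumptions of ours and are not taken from any of the cited studies.

# Minimal sketch of a classical "top down" model: a discrete
# Lotka-Volterra system iterated as difference equations.
# All parameter values are purely illustrative assumptions.

def lotka_volterra(x, y, a=0.04, b=0.0005, c=0.03, d=0.0002, steps=200):
    """Iterate prey (x) and predator (y) populations."""
    trajectory = [(x, y)]
    for _ in range(steps):
        x_next = x + a * x - b * x * y      # prey: growth minus predation
        y_next = y - c * y + d * x * y      # predator: decay plus predation gain
        x, y = max(x_next, 0.0), max(y_next, 0.0)
        trajectory.append((x, y))
    return trajectory

if __name__ == "__main__":
    for t, (x, y) in enumerate(lotka_volterra(100.0, 20.0)[::40]):
        print(f"step {t * 40:4d}: prey={x:8.2f} predators={y:8.2f}")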

2.2
For our own studies and research aims, however, another approach is, in our view, much better suited to the peculiarities of social analysis. This approach uses formal methods that have been developed during the last twenty years by researchers in complexity theory and artificial intelligence, that is, formal mathematical models like cellular automata (CA), Boolean nets (BN), artificial neural nets (NN) and evolutionary algorithms such as the genetic algorithm (GA) developed by Holland and evolutionary strategies (ES). The umbrella term for these methods at present is "Soft Computing", and their advantages for social analysis can be characterised as follows.[1]

2.3
The classical modelling of real systems by differential or difference equations is often a "top down" approach: the system is looked at as a whole - "from above" - and its behaviour, i.e. its dynamics, is defined via the changing of its states, described by the corresponding system equations. The well-known methods of statistical mechanics are a classical example of this kind of procedure. The elements of the system appear in this type of model only as a collective, i.e., the effects of the interactions of the elements are described as aggregate data. Quantitative social research and economics demonstrate that this procedure may be quite useful for certain purposes.

2.4
Yet another approach, called "bottom up" in contrast to top down procedures, is better suited to the tradition of social theory. The basis for the analysis is not the system's behaviour as a whole but the level of single elements and their interactions with other elements; with bottom up procedures the behaviour of the whole system, i.e. its dynamics, is an "emergent" result of strictly locally defined interactions between single elements. Theoretically and methodically such approaches allow one to construct the model from the level of processes that are immediately empirically observable, namely the local interactions of single elements - physical particles, chemical molecules, biological organisms or single social actors. Mathematically one gains the possibility of representing the usual non-linear dynamics of complex systems in a "pure form", which classical mathematical top down approaches usually allow only in an approximate fashion (Holland 1998; Wolfram 2002). This is in particular the case if one uses the formal methods of Soft Computing mentioned above. We shall return to this subject below.[2]

2.5
For the social sciences these theoretical and methodical advantages of bottom up procedures are certainly even more important than for the natural sciences, which can often realise their research goals with top down approaches. Because the units of social construction - in reality as well as in theory - are social actors, which even formally cannot simply be identified with chemical molecules or biological organisms, a theoretical understanding of social dynamics is not possible by just aggregating the collective behaviour of many actors. A pure top down model is certainly suited to describe social regularities; theoretical understanding and explanation of social dynamics is in our opinion only possible if one takes into account the rules of social interaction and explains social dynamics in terms of rule-determined social behaviour.

2.6
For this reason even bottom up models that use the traditional apparatus of differential or difference equations (see footnote 2) seem less suited to the task of social modelling. If the basic concepts are those of social actors and social rules, then one has to use formal models that are based on the same logic. The models we propose are of this kind.

2.7
Probably for this reason the subcommunity of computational sociologists has in recent years concentrated its research on the construction and analysis of so-called "multi agent systems" (MAS) or the simulation of "intelligent agents". It is not important here to consider whether the term "agent" is well suited to the social sciences, which have long used the established concept of the (social) actor, so that "normal" social scientists are not used to the concept of an agent. More essential is the fact that the term agent points to a consensus in favour of bottom up models based on the formal representation of social actors. This makes it possible to combine the important traditions of social theory and research with the formal techniques of model construction and computer simulation.

2.8
The main advantage of these formal techniques for the construction of models is therefore that formal models like cellular automata (CA) or Boolean nets (BN) allow the empirical observations about social actors and their interactions to be translated into the model in an immediate way. For example, observations about social interactions in a group and the empirical reconstruction of the corresponding rules of interaction can be directly transformed into the transition rules of a CA or a BN, depending on the neighbourhood. The concept of Soft Computing is defined as the ensemble of those formal models that are constructed in parallel to natural, social and cognitive processes. Conversely, Soft Computing models allow social observations to be embedded immediately into these models without any complicated detour. That is why models of Soft Computing are particularly well suited to being combined with the methods and research traditions of qualitative social research; this was noted rather early with respect to expert systems by, e.g., Brent and Anderson (1990) and Benfer et al. (1991).
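
To make this translation concrete, the following sketch (our own illustrative example, not a model from the cited studies) turns one observed interaction rule - "an actor becomes co-operative if the majority of his neighbourhood is co-operative" - directly into the transition rule of a one-dimensional CA.

# Minimal sketch: translating an observed interaction rule into a CA
# transition rule. States: 1 = co-operative, 0 = non-co-operative.
# The majority rule below is an illustrative assumption.

import random

def step(cells):
    """One synchronous update: each cell adopts the majority state of
    itself and its two neighbours (cyclic boundary)."""
    n = len(cells)
    new = []
    for i in range(n):
        neighbourhood = [cells[(i - 1) % n], cells[i], cells[(i + 1) % n]]
        new.append(1 if sum(neighbourhood) >= 2 else 0)
    return new

if __name__ == "__main__":
    random.seed(1)
    cells = [random.randint(0, 1) for _ in range(20)]
    for t in range(6):
        print(t, "".join(map(str, cells)))
        cells = step(cells)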

* Formal systems, networks and mathematical spaces

3.1
When talking in this article about social "systems" or social "actors" we do not want to express a preference for one or another theoretical approach. By a social system we do not necessarily mean systems in the sense of, e.g., the theories of Parsons or Luhmann; neither do we presuppose certain theories of social action or actors, like, e.g., rational choice approaches. On the contrary, we believe that it is possible - and necessary - to develop procedures of modelling and mathematical analysis that are neutral with regard to particular theories. The option for a particular theoretical approach must not be confused with the option for certain methods of formal modelling. Methods must be chosen because the theoretical problem demands a particular kind of method, and not the other way around. Therefore the formal methods and modelling schemas we deal with in this article have to be as general as possible, and in particular the use of a specific kind of formal model must not restrict the theoretical question one tries to solve.

3.2
Taking this into account it is possible to develop a universal modelling schema that can be applied to literally all kinds of problems and that immediately allows the use of one or another kind of Soft Computing technique - in the domain of social research, but not only there.

3.3
We start from a certain domain of (social) reality, e.g., a social group, an organisation or a whole society. According to the classical definition of a system by v. Bertalanffy (1951) this domain is methodically defined as a system, consisting of elements, their interactions and the corresponding rules of interaction. "Elements" can of course be defined in very different ways: they may be social actors as the occupants of particular roles, they may be the members of a student class, and they may also be "collective actors" like political parties or institutions. These basic definitions are only a question of the theoretical approach one has chosen and not of the modelling method. If the concept of "system" evokes too strong associations with particular theories of systems, one may speak instead of a "network" of elements.

3.4
The elements are in certain states, which may again have rather different meanings. A member of a group may be in a particular emotional state, a social actor may occupy a certain social role, an actor may show a specific behaviour like "aggressive" or "co-operative", a political party as a collective actor may be in the state of being strictly united, and so on. The interactions of the elements have the effect that, according to certain rules, the states of the elements at time t will be changed (or remain constant) at time t+1. The ensemble of the elements' states at time t may be defined as the state S_t of the whole system; to be sure, it depends again on the particular problem how S_t is computed from the single states of the elements. The rules of interaction generate the trajectory of the system in its state space, i.e., a mapping of S_t to the state S_{t+1} and, by iteration of this mapping, to all further states S_n. If we call the ensemble of rules of interaction f and its n-fold iteration f^n, then we obtain

f^n(S_1) = S_{n+1}. (1)

A point attractor S_a of the trajectory is now simply defined by

f^n(S_a) = S_a, (2)

with the corresponding definitions of simple attractors with periods k > 1.
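
In code, definitions (1) and (2) amount to nothing more than iterating the rule ensemble f and watching for a state that recurs on the trajectory. The toy rule f in the following sketch is an arbitrary illustrative choice of ours; only the attractor detection matters here.

# Sketch: iterating a rule ensemble f on a system state and detecting
# attractors. The toy rule f below is an arbitrary illustrative choice.

def f(state):
    """Toy rule ensemble: each element copies its left neighbour (cyclic)."""
    return tuple(state[i - 1] for i in range(len(state)))

def find_attractor(initial, max_steps=1000):
    """Return (period, state) of the first attractor on the trajectory.
    A period of 1 is a point attractor in the sense of equation (2)."""
    seen = {}
    state = initial
    for t in range(max_steps):
        if state in seen:                 # state recurs: attractor reached
            return t - seen[state], state
        seen[state] = t
        state = f(state)
    return None, state

if __name__ == "__main__":
    period, state = find_attractor((1, 0, 0, 1))
    print("attractor period:", period, "reached at state", state)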

3.5
It is important to note that there are in principle two types of rules of interaction in any complex system. On the one hand there are rules that are generally valid, i.e., they are executed whenever suited elements can interact. But such general rules give no information about whether and how frequently they will be applied, because that depends on whether certain elements of the system can interact at all. This is a question of the topology or geometry of the system, which decides who can interact with whom. In the case of physical or biological systems this is usually a question of the physical space the systems are embedded in; in the case of social systems, which constitute a social space, it must be determined by "geometrical" rules that define a social topology (cf. Klüver and Schmidt 1999; Klüver 2000). For example, a worker in a global firm is not allowed to interact directly with the chairman of the board but only through a chain of intermediating persons. This definition of a social topology or social geometry is roughly equivalent to the concept of "structure" in social network analysis (cf. Freeman 1989). But the term structure is usually used only in a static sense, although the social topology of a system has important influences on its dynamics. Only recently has this fact been considered by some scholars (e.g. Bonneuil 2000).

3.6
The definition of the dynamics (behaviour) of a social system by the iterative application of its rules of interaction presupposes that these rules are constant during the time of observation. This presupposition is without doubt valid for many systems, in particular during short times of observation. But frequently one can observe that systems are adaptive, i.e., they are not only able to change their states but also their rules of interaction in order to adjust to certain environmental demands. Biological systems are able to adapt by changing their genome; in principle, social systems are also capable of adaptive behaviour.

3.7
The adaptive capability of a system can be defined in the following way: take a system S within an environment E that poses problems to S. The ensemble of the rules of interaction of S is called f, the states of the system are S_i. An evaluation function e (fitness function) determines the "value" V of S, i.e., the achievements of the system, by measuring the states. Therefore we obtain e(S_i) = V_i. The degree to which the system is able to fulfil the environmental demands is now measured as the difference E - V_i. If this difference is rather large, whatever that means in a particular case, then a system with adaptive capabilities changes its rules of interaction by applying certain meta rules m to the interaction rules f. The system obtains new rules m(f) = f', which are applied to the (unfavourable) states of the system. If the states generated by the new rules f' are sufficient with regard to the environmental demands, measured by e, the system keeps these rules; if this is not the case, m will be applied again until the states generated by some interaction rules f''... are sufficient. The genetic operators of mutation and recombination together with selection by the natural environment in the case of biological species are one of the best known examples of meta rules; the reform of certain laws (i.e., general rules) via parliamentary procedures is another well known example.
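
A hedged sketch of this adaptation loop is given below. The rule representation, the meta rule (a random mutation of one rule parameter) and the evaluation function are all illustrative assumptions of ours, and the loop keeps a mutated rule set only if it improves the fit - a simple greedy variant of the schema described above, not the authors' implementation.

# Sketch of the adaptive schema: a system changes its rule ensemble f via a
# meta rule m until the evaluated states meet the environmental demand E.
# Rule representation, meta rule and evaluation are illustrative assumptions.

import random

def apply_rules(f, state):
    """f is a vector of rule parameters; the toy 'interaction' weights each
    element's left neighbour (a stand-in for real interaction rules)."""
    return [f[i] * state[i - 1] for i in range(len(state))]

def evaluate(state):
    """Evaluation function e: here simply the mean of the state values."""
    return sum(state) / len(state)

def meta_rule(f):
    """Meta rule m: mutate one randomly chosen rule parameter."""
    g = list(f)
    g[random.randrange(len(g))] += random.uniform(-0.2, 0.2)
    return g

def adapt(f, state, demand=1.0, tolerance=0.05, max_rounds=10000):
    """Apply m to f until |E - e(f(state))| is small enough; a mutated rule
    set is kept only if it generates better states (greedy variant)."""
    best_diff = abs(demand - evaluate(apply_rules(f, state)))
    for _ in range(max_rounds):
        if best_diff < tolerance:
            break
        candidate = meta_rule(f)
        diff = abs(demand - evaluate(apply_rules(candidate, state)))
        if diff < best_diff:            # new rules produce better states: keep them
            f, best_diff = candidate, diff
    return f, apply_rules(f, state)

if __name__ == "__main__":
    random.seed(0)
    rules, final = adapt([0.5, 0.5, 0.5, 0.5], [1.0, 1.0, 1.0, 1.0])
    print("adapted rules:", [round(r, 2) for r in rules])
    print("final state  :", [round(s, 2) for s in final])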

3.8
These definitions give us a very general modelling schema that can be applied, by variation and specification, to practically every field of social reality. The basic idea of this schema is the definition of the domain of research as a system or network with the characteristics mentioned above. Using these terms, however, is not just homage to the great theoreticians of general and social systems theory but, as the concept of social topology indicates, it is also the conceptual basis for treating social domains in a strictly mathematical sense. It is possible not only to define a social topology, i.e. concepts of nearness, cohesion and distance, but also concepts of the dimensions of social spaces (Klüver and Schmidt 1999a; Klüver 2000). Apparently social domains can be treated with well-known concepts of theoretical physics and mathematics, e.g., as particular examples of topological vector spaces. Because the topology and dimensionality of social spaces determine the dynamics, in particular also the adaptive dynamics, of social systems, it is one of the most important tasks of computational models of social systems to investigate the interdependency of their mathematical structures and their dynamics. This research programme gives the concept of "mathematical sociology" a new meaning; in particular it is easy to see that "computational sociology" and "mathematical sociology" are not different enterprises at all, as is sometimes stated. On the contrary, they form an inseparable unity, as is the case, e.g., in theoretical physics and theoretical biology.

3.9
Because these rather abstract remarks are likely to be misunderstood some clarifications are necessary:

3.10
We are quite aware of the fact that the tradition of mathematical sociology (cf. Fararo 1997 for an overview) is quite distinct from the comparatively new branch of computational sociology. In the sense of this well-known distinction our own approaches clearly belong to computational sociology. But we are of the opinion that it is possible and necessary to apply mathematical concepts like the topology and dimensionality of general spaces to the subjects of investigation, i.e. the social systems with which we deal. It is probable that general characteristics of social systems and social dynamics can be expressed in terms of universal mathematical structures and can therefore be understood as particular realisations of properties that complex systems have in general. Our own research on the dependence of the dynamics of complex systems on their geometrical characteristics (cf. Klüver 2000 and Klüver 2002) indicates that such an attempt may be rather fruitful. An example of this kind of procedure is the well known proof by Michalewicz (1994) of the convergence of the genetic algorithm (GA): for this proof of the behaviour of a computational model Michalewicz used Banach's famous fixed point theorem about metric spaces. Therefore the meaning of "mathematical sociology" can be enlarged: besides the mathematical methods that have been used for some time, the concept may also be defined as the study of the general mathematical properties of social spaces.

3.11
Before we illustrate these rather general remarks by some examples of our research, another methodical principle must be mentioned.

3.12
Since the days of Bacon, Galileo and Newton the natural sciences have always followed a certain basic principle, i.e., they started with rather simple models and enlarged them in successive steps whenever the progress of research made it necessary. The famous characterisation of "normal science" by Kuhn (1962) may be understood in the sense that normal science is guided by a certain paradigm that is transformed into a simple basic model. This model is enlarged by normal research as long as the paradigm makes it possible. Only through scientific revolutions is the paradigm abandoned and substituted by another, which is transformed into another simple model. The famous quotation from Einstein and Infeld that "modern physics is basically simpler than the old" (1956, 144) is to be understood in just this sense: modern physics is based on simpler basic models than the old one.[3]

3.13
In classical sociology the construction of theories is often done another way, i.e., theorists try to capture as much of social complexity as they can from the beginning within their conceptual framework.[4] That is not possible in computational or mathematical sociology: the basic models must be simple in order to understand their behaviour in principle. The enlargement of the basic models that is always necessary in advancing research can only be done once this basic understanding has been achieved. Therefore the social sciences have to adopt this methodical procedure from the natural sciences if a formally precise theory of social complexity is to be achieved.

3.14
In this sense the universal modelling schema is not only a very general schema but also a rather simple one that allows the construction of simple initial models comparatively easily. Enlargements of a basic model that has been constructed according to the schema are possible by following the same construction procedure. If, e.g., one wants to take into account the fact that social actors are not simple finite state automata like the cells of a cellular automaton, but that their way of thinking and believing influences their social actions as much as the social rules they have to follow, then the units of the corresponding model - the artificial actors - can be represented as complex dynamical systems themselves. In this manner, together with one of our students, Rouven Malecki, we have modelled some kinds of social dynamics by constructing a cellular automaton (CA) with cells consisting of combinations of different artificial neural nets (NN) (Klüver and Stoica 2002). In other words, the enlargement of simple initial models can be done by iteratively applying the universal modelling schema to the initial model, although one can use different modelling techniques in the enlargement process. The CA/NN combination just mentioned allows, e.g., the modelling of cognitive learning processes in dependence on the social milieu and situation and therefore the modelling of some aspects of social behaviour as a result of the semantic world view of the actors.
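
Purely as an illustration of such an iterated application of the schema - and explicitly not as a reconstruction of the CA/NN model just mentioned - the following sketch lets the cells of a small one-dimensional CA be tiny neural nets (single perceptrons with arbitrarily chosen weights) that compute each cell's next state from its neighbourhood.

# Illustrative sketch only: a CA whose cells are tiny neural nets.
# Each cell's next state is the output of a perceptron over the
# neighbourhood states; weights and bias are arbitrary assumptions.

import random

class PerceptronCell:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias
        self.state = random.randint(0, 1)

    def next_state(self, neighbour_states):
        activation = sum(w * s for w, s in zip(self.weights, neighbour_states))
        return 1 if activation + self.bias > 0 else 0

def step(cells):
    n = len(cells)
    new_states = []
    for i, cell in enumerate(cells):
        neighbourhood = [cells[(i - 1) % n].state, cell.state, cells[(i + 1) % n].state]
        new_states.append(cell.next_state(neighbourhood))
    for cell, s in zip(cells, new_states):   # synchronous update
        cell.state = s

if __name__ == "__main__":
    random.seed(3)
    cells = [PerceptronCell([0.6, 0.3, 0.6], -0.8) for _ in range(20)]
    for t in range(6):
        print(t, "".join(str(c.state) for c in cells))
        step(cells)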

3.15
In a similar sense the modelling schema makes possible a formal representation of the famous "Micro-Macro Link" (Alexander et al. 1987). When an initial model is based upon theoretical assumptions about social actors, like, e.g., the rational choice approach of Wippler and Lindenberg (1987), an accordingly constructed cellular automaton generates "macro-structures" consisting of geometrical patterns. These patterns, which of course have to be interpreted according to the founding theory, in turn determine the choices and the corresponding behaviour of the actors - they are linked to the macro-structures generated by their own "micro actions". This basic model can be enlarged by applying the modelling schema twofold. The actors themselves can be modelled as cognitive systems, e.g., by using rule based systems that represent the rational cognitive strategies of the actors. The macro-structures can be modelled as the units of a macro-system that is generated by the basic system, which in turn is generated by the situation-dependent cognitive strategies of the actors. We realised the generation of a macro system from a CA as a basic system by coupling the CA with a particular neural net, i.e., an interactive net (Klüver 2000).

3.16
Recent advances in social theory demonstrate that theoretical social research is performed on rather different levels (cf. Berger and Zelditch 2002) and that the classical distinction of "micro" and "macro" is perhaps too simple. Given the undeniable fact that actors act according to certain world views, cognitive strategies and beliefs on the one hand, according to social situations and the corresponding social rules on the other hand, and finally, in addition, according to the general macro-structures of society, we have at least three different levels of analysis and modelling. In particular, one always has to be aware of the fact that all these levels are in permanent feedback, i.e., they constitute a complex dynamics consisting of the interplay of three different kinds of system dynamics. The modelling schema proposed by us obviously has the great advantage that it allows all kinds of complexity one wishes to analyse to be captured by iterating the same kind of modelling logic.

3.17
We wish to illustrate these general considerations by a short reconstruction of the research done with respect to the "prisoner's dilemma" (PD) by Axelrod and subsequent studies. We choose this example from the theoretical tradition of rational actors in order to demonstrate again that our schema of modelling is not to be mistaken for social systems theory.

* Prisoners and other dilemmata

4.1
The still famous and classic study by Axelrod (1984) is based, as is well known, on a computer tournament in which different strategies for dealing with the iterated PD were tested in a contest of "each against each". The often quoted result was the victory of one of the simplest strategies, namely "tit for tat" (TFT), submitted by Rapoport.
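
For readers unfamiliar with the setting, the following sketch plays a short iterated PD between TIT FOR TAT and a permanently defecting strategy. The payoff values are the standard ones used by Axelrod (R=3, P=1, T=5, S=0); the code itself is only an illustration, not the tournament software.

# Sketch of the iterated prisoner's dilemma: TIT FOR TAT against
# permanent defection. Payoffs follow the standard Axelrod values
# (R=3, P=1, T=5, S=0); 'C' = co-operate, 'D' = defect.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, other_history):
    """Co-operate first, then repeat the opponent's last move."""
    return 'C' if not other_history else other_history[-1]

def always_defect(own_history, other_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("TFT vs ALL-D:", play(tit_for_tat, always_defect))
    print("TFT vs TFT  :", play(tit_for_tat, tit_for_tat))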

4.2
Axelrod and many other theoreticians of the "evolution of co-operation" interpreted this result, i.e., the winning of a "mixed co-operative" strategy, rather optimistically, in the sense that co-operative behaviour necessarily has to emerge in social evolution. Because it is this kind of behaviour that guarantees the greatest success in the long run, its emergence within a population of rational and egoistic actors has to occur with mathematical necessity. But things, as always, are not so simple.

4.3
Speaking in terms of formal models, Axelrod's tournament is nothing other than a system or network of strategic actors, which interact on the basis of certain strategies, oriented towards success or profit. This artificial network has, strictly speaking, a very simple geometry because each player interacts with every other one; there are only general rules of interaction, i.e., answering every move of the opponent and counting the losses or gains. In this sense the tournament represented an extremely simple model. Axelrod (1987) enlarged this model in an evolutionary manner: there are still only general rules of "each against each", but there are no constant strategies. The strategies evolve, by the use of a genetic algorithm (GA), out of the different situations of interaction. Although the result was similar to that of the former tournament, i.e., mixed strategies of the TFT type were the winners in the long run, the winning strategies were the outcome of the strategic interactions and not presupposed at the beginning. Obviously this enlarged model has adaptive characteristics, with the genetic operators of the GA as meta rules and the winnings from the pay off matrix as the evaluation function for each evolving strategy.

4.4
An additional enlargement of Axelrod's original tournament model was introduced by Nowak and May (1993) and Fogel (1993). They abandoned the principle of "each against each" and used the grid of a CA to obtain a local geometry. The artificial actors interact only with their immediate neighbours on the grid and they choose their strategies according to the successful ones of their neighbours. In this sense the actors behave adaptively; the system as a whole is characterised by the duality of general and topological rules. Moreover, the pay off matrix is used as a variable parameter of the system's dynamics: the evolution of co-operative and aggressive strategies is now dependent on particular values of the pay off matrix.[5] These results demonstrate that the evolution of co-operative behaviour is by no means a necessary event, as Axelrod apparently believed.
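
A strongly simplified sketch of such a spatial setting follows. Grid size, payoff values and the update rule ("imitate the most successful strategy in the neighbourhood") are illustrative choices in the spirit of Nowak and May, not their original model.

# Sketch of a spatial prisoner's dilemma: actors on a grid play only
# against their immediate neighbours and then imitate the most successful
# strategy in their neighbourhood. All parameters are illustrative.

import random

SIZE = 10
T, R, P, S = 1.4, 1.0, 0.0, 0.0   # temptation, reward, punishment, sucker

def payoff(a, b):
    if a == 'C':
        return R if b == 'C' else S
    return T if b == 'C' else P

def neighbours(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def step(grid):
    # 1. every actor collects the payoff from playing all its neighbours
    scores = {(i, j): sum(payoff(grid[(i, j)], grid[n]) for n in neighbours(i, j))
              for i in range(SIZE) for j in range(SIZE)}
    # 2. every actor adopts the strategy of the most successful actor in its
    #    neighbourhood (including itself)
    new_grid = {}
    for i in range(SIZE):
        for j in range(SIZE):
            candidates = neighbours(i, j) + [(i, j)]
            best = max(candidates, key=lambda c: scores[c])
            new_grid[(i, j)] = grid[best]
    return new_grid

if __name__ == "__main__":
    random.seed(7)
    grid = {(i, j): random.choice('CD') for i in range(SIZE) for j in range(SIZE)}
    for t in range(5):
        coop = sum(1 for s in grid.values() if s == 'C')
        print(f"step {t}: {coop} co-operators of {SIZE * SIZE}")
        grid = step(grid)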

4.5
Subsequently we could demonstrate (cf. Klüver 2000) that the dependency on the pay off matrix is even more important for the evolution of particular strategies. We constructed a model with a pay off matrix whose values change according to the dominance of aggressive or co-operative strategies. In other words, if there are more aggressive than co-operative actors, then the aggressive actors are able to change the pay off matrix in their favour, and vice versa. Intuitively one would assume that this situation is symmetric, but this is not the case: the aggressive actors needed only very small advantages, i.e. just a few more aggressive players at the beginning, to gain more and more; very soon only aggressive players were left. In contrast, the co-operative players needed rather large initial advantages to win against the aggressive actors (we used a model similar to Axelrod 1987).

4.6
Co-operative behaviour, as can be learned from these results, does not necessarily emerge from the egoistic behaviour of single actors who think only in terms of their own profit. Probably actors have to take into account an orientation towards a whole social system - a group, a family, a clan - i.e. the profit the group gains by individual strategies. The theory of kin selection (e.g. Parisi et al. 1995) demonstrates one possibility of how such behaviour may have evolved during biological evolution.

4.7
Cohen, Riolo and Axelrod (2000) finally introduced geometrical rules explicitly into the analysis of the PD in order to investigate the dependency of particular strategies on the social possibilities of interaction.[6] Apparently there are such dependencies; this fact again shows that the conclusions drawn from the first model were too optimistic.

4.8
It is not our aim to tell the whole research story with regard to the PD; that is why we pass over the numerous further studies on this subject. In particular this is not the place to discuss the merits and shortcomings of this theoretical approach. What we wanted to show is that, and how, research in this field corresponds rather exactly to the general modelling schema we sketched above. In the beginning there was a simple - perhaps too simple - basic model with only one kind of rule of interaction; subsequently the model was enlarged by adding more and more features, i.e., geometrical rules, adaptive features and variable boundary conditions, namely the pay off matrix. It is an open question for us whether the recent results allow a final answer to the question of the evolution of co-operation. The important aspect, however, is that this is a type of research like that in the natural sciences: theoretical assumptions, construction of initial basic models, testing and revision by computer simulations, i.e., experiments, enlargement and variation of the models, and so on; in nuce, a permanent interdependency between theory construction and (computer) experiment.

* Models, theories and Turing machines

5.1
The reflections above emphasised in particular the manner of constructing and using formal models and computer simulations. It is now time to ask which aims can be realised by these methodical procedures. A likewise rather general schema yields mainly the following three:

(A)

5.2
By far most of the studies in this field deal with the testing of certain theories. This means that specific theories or theoretical approaches are transformed into a formal model, parameters are classified as significant with regard to the model's behaviour, and then, by variation of these parameters, it is investigated whether the - usually not formally expressed - theory can be confirmed. The benefit of this kind of research consists mainly of two aspects:

5.3
On the one hand, the transformation of the usually informal theory into a formal model demands an operationalisation and a higher level of precision of the most important premises and statements of the theory. This aspect alone, which has been mentioned frequently in connection with model construction, may give insights into the original theory that would otherwise remain hidden. It is rather often the case that the representation of a social theory in informal language hides inaccuracies and vagueness that become manifest only in the formal model. To be sure, if the construction of a formal model of a certain theory demands revisions of the theory, then the question arises whether the transformation into the model was a valid one. But just this coercion to re-examine a theory can be very useful in itself.

5.4
On the other hand, such a transformation may show that the original theory needs to be enlarged and corrected. If the model tested by computer simulations does not show the behaviour that is postulated by the theory, then of course the first question concerns the validity of the model and whether it must be revised. Yet in our experience it is rather often the case that the theory needs not only higher levels of precision but also enlargements, i.e., completions at its critical points. In such cases the question arises whether the changed theory is still the original one at its core. But if a theory can be transformed into a formal model only by completing and changing it in vital parts, then one must ask whether the original theory had any explanatory power at all.

5.5
It may very well be that every social theory that is not represented in a precise formal manner needs such completions in order to have some explanatory power. If this were the case, which we suspect, then the social sciences would become sciences in a strict methodical sense only by basing them all on models of the kind described in the first sections. In other words, the use of formal models in the social sciences would not only be an addition to the established methods but the basis for the emergence of proper social sciences. But as this is a long story we leave this subject to the future.

5.6
Because theory testing has in the meantime become a well known type of research in computational sociology and traditional mathematical sociology, we do not deal with this aspect any further and shall not illustrate it with examples of our own research. The journals mentioned in the introduction have published enough examples of interesting work in this area.

(B)

5.7
To be sure, the main reason for using mathematical and other formal models in the sciences is the development, i.e. construction, of theories and their confirmation by testing the models against reality. Such theory development usually occurs in a "dialectical" relation between theoretical assumptions, model construction, experimental testing of the model, revision and enlargement of the theoretical presuppositions and the model, new experimental testing, and so on. Because in the social sciences "real" experiments are in most cases not possible, or possible only in a very restricted way, the use of computer simulations plays a decisive role: very often computer experiments have to play the part of real experiments in the laboratory sciences. That is also the case in those natural sciences where, for similar reasons, experiments are not (yet) possible, in particular in the "historical" sciences like physical cosmology and evolutionary biology. Methodically speaking, the basic situation is usually that on the one hand a lot of historical facts are known and on the other hand these historical processes cannot be investigated by experiments. Therefore all evolutionary theories can practically be tested only by transforming them into computer models and testing them via computer simulations (cf. e.g. Kauffman 1995 and 2000). In nuce: several states of the evolutionary systems that are to be investigated are known, e.g. the present, some past states and an - often hypothetical - initial state. An evolutionary theory postulates explanatory relations between these states in the form of transition rules, and these postulates are tested by the sketched dialectics between theory construction, model construction and computer experiments.

5.8
Of course, there is no sharp distinction between aspects (A) and (B) because usually one cannot tell whether "only" an already existing theory is revised and enlarged or whether the result of the theoretical research is in fact a new theory. This decision is up to the scientific community in any case, and is usually made only in hindsight. Our own studies have dealt with both aspects, and the example we give below with respect to the development of a so-called sociocultural algorithm, that is, a mathematical theory of sociocultural evolution, will illustrate that certainly every new theory is in some sense always an enlargement and revision of old ones.

(C)

5.9
The third aspect we wish to mention is certainly, for social scientists, the most unusual and perhaps strange, if not odd, one. The basic idea is that it is possible to investigate "pure" formal systems and to gain by these investigations important results about the regularities of the behaviour of all systems, in particular social ones. In order to explain this procedure, which is probably reminiscent of a Kantian apriorism, a little excursus into the logical-mathematical foundations of the formal models we mentioned in the preceding sections is necessary.

5.10
Formal systems like cellular automata (CA) or Boolean nets (BN) are logically so-called universal Turing machines. These systems are capable of effectively computing every computable function, which means that there is no formal system more general than a universal Turing machine (the so-called Church-Turing hypothesis). The physical Church-Turing hypothesis then postulates that every physically real system can be completely modelled by a universal Turing machine or a logical equivalent. "Completely" of course means just the logical possibility; how complete the particular model factually is depends on the empirical knowledge about the real system.

5.11
If one "inverts" these results of mathematical basic research, which was done by some scientists of the Santa Fè Institute for research in complexity (Rasmussen et al. 1992), then one obtains the following consideration. If one studies formal systems, logically equivalent to universal Turing machines, as research subjects sui generis, and if one discovers certain law governed regularities of their behaviour - their dynamics - then these laws are valid by necessity for all real systems, in particular also social systems. This is an immediate consequence from the logical universality of these formal systems: every single real system that is to be studied in regard to its behaviour can be completely modelled and simulated by a formal system that is equivalent to a universal Turing machine - e.g. by a CA. If one knows the general characteristics of such formal systems then these are also valid for the particular formal system that one uses for the simulation. Because this is a complete representation of the real system, i.e., the formal system is the result of an homomorphic mapping of the real system, the real system must necessarily contain the characteristics of the formal system, known from the general investigations of formal systems - q.e.d. In a strict methodical sense one may speak indeed here of a "Kantian stance" (Kauffman 1992), which, however, has nothing to do with transcendental philosophy but a lot with a specific kind of experimental mathematics. By the way, such general insights were probably always the aim of general systems theory, but one that could not be reached in lack of appropriate conceptual and methodical means.

5.12
The formal models that we mentioned in the preceding sections, like CA, BN and neural nets, are on the one hand equivalent to universal Turing machines and on the other hand extremely well suited to realising the general modelling schema of section 3. Obviously the schema is not only in a methodical but also in a logical sense a general one, because according to these considerations it allows the modelling of every real system in the manner of a bottom up approach. Therefore the old question whether a mathematical sociology is possible can now be answered in the affirmative: in a logical sense this modelling procedure has literally no limits. It is a practical question whether formal methods must always be used for particular questions of social research.[7]

* Social evolution and social order

6.1
We are of course quite aware that our considerations so far are rather general and, in particular for readers who are not acquainted with these subjects, also rather abstract. That is why we illustrate the general remarks with two examples of our own research with regard to aspects (B) and (C) discussed in the preceding section. We omit aspect (A) because it is the aspect best known in the field of social computer simulations.

Aspect (B): the development of a theory of sociocultural evolution

6.2
It is of course not possible to present the whole theory in detail in this article, which is written mainly for methodical purposes; therefore we give just a rough sketch. Interested readers may become acquainted with the whole theory in Klüver (2002) and Klüver and Schmidt (2003).

6.3
The theoretical starting point is the assumption that sociocultural evolution is a two-dimensional process, which is determined by the mutual interdependency of social and cultural evolutionary processes. Therefore a society S is a two-dimensional construct, consisting of a culture C and a social structure St. C is defined as the set of all socially accepted knowledge that a society has at its disposal at a certain time t (cf. Habermas 1981, Geertz 1973). The social structure is nothing other than the set of all social rules that are valid in the society at time t (cf. Giddens 1984). Formally we get

S = (C, St). (3)

6.4
Because "society", "culture" and "social structure" are abstracta, they must be defined empirically by the actions and interactions of social actors; these are "social" by occupying certain social roles as the basis for their interactions with other social actors, i.e., occupants of other or the same social roles. Roles are also two-dimensional constructs: on the one hand they are defined by the set of social rules that are obligatory for the interactions of the role occupant with other actors; that is logically equivalent to the classical role definition as a set of generalised expectations of behaviour. On the other hand roles are characterised by a certain knowledge base, i.e., the set of all knowledge that is necessary to act as a role occupant. This definition can be illustrated best by the example of professional roles like a medical doctor who must have certain sets of medical knowledge at his disposal and has to act according to the rules of his profession (the Oath of Hippocrates). But of course the definition is valid for all roles.

6.5
If we call the role specific rules r and the knowledge base k, then we formally obtain

Ro = (r,k). (4)

6.6
It is important to note that social rules as well as cultural knowledge are socially binding only if both are fixed as parts of particular roles - neither purely private knowledge nor private rules are constitutive aspects of a society. Therefore a role can be characterised as a focus or nexus of the two analytical dimensions of society: both are concretely integrated by the occupant of a role within and through his role. A society can thus be understood as the whole set of role-incorporated knowledge bases, i.e. the culture, and of role specific rules - the social structure.
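
In a computational model, definitions (3) and (4) translate into very simple data structures. The following sketch - our own illustrative notation, not the authors' implementation - represents a role as a pair of rule set and knowledge base and derives the culture and social structure of a society from its roles.

# Sketch of definitions (3) and (4) as data structures: a role Ro = (r, k)
# and a society S = (C, St) derived from its roles. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Role:
    rules: set       # r: role specific rules of interaction
    knowledge: set   # k: role specific knowledge base

@dataclass
class Society:
    roles: list = field(default_factory=list)

    @property
    def culture(self):           # C: union of all role-incorporated knowledge
        return set().union(*(r.knowledge for r in self.roles)) if self.roles else set()

    @property
    def social_structure(self):  # St: union of all role specific rules
        return set().union(*(r.rules for r in self.roles)) if self.roles else set()

if __name__ == "__main__":
    doctor = Role(rules={"follow professional ethics"}, knowledge={"medical knowledge"})
    teacher = Role(rules={"grade impartially"}, knowledge={"didactics"})
    society = Society([doctor, teacher])
    print("C  =", society.culture)
    print("St =", society.social_structure)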

6.7
The "motor" of sociocultural evolution is this: creative individuals generate new knowledge that becomes institutionalised in certain roles - if this knowledge is socially accepted, of course. By this process already existing roles become enlarged and changed or new roles will be generated. This way the social structure is changed in dependency on the growth of culture. The driving force of sociocultural evolution therefore is a) the creativity of individuals and their production of new knowledge, b) its transformation into new or enlarged roles and c) the emergence of a new social structure as a result of processes a) and b). The new social structure in turn determines the generation of new knowledge - in an inhibitoric or stimulating way - etc. Therefore one can visualise an evolving society as a network of social roles that is changed and enlarged by the individual generation of new knowledge; the network in turn determines the generation of additional knowledge, which changes the network again and so forth, but only in principle: certain realisations of the network - in fact, most of them - slow down this process or stop it altogether, i.e., the process reaches an attractor and stagnates. Apparently this development, that is the stagnation in an attractor, is according to Toynbee's great studies (Toynbee 1934 ) the fate of nearly all societies and cultures. This basic mechanism of sociocultural evolution we call the sociocultural algorithm (SCA).

6.8
We transformed this evolutionary schema, which is formally related to the famous theory of Historical Materialism, into a formal model, which can be characterised as a "generalised" CA. Occupants of certain roles are represented by specific states of the CA cells, which are defined as triple states: one value represents a certain role, the second a particular set of role specific knowledge and the third a learning strategy, i.e., a measure of the velocity with which the individuals acquire new knowledge. The interaction rules between role occupants represent different possibilities of social learning: one individual may take over a certain knowledge component from another, the individuals may mutually reinforce their learning processes or they may inhibit the learning of other individuals, they may take over roles that are dominant in their CA neighbourhood and they may influence others in role taking. The whole system is stochastic because our artificial actors have a certain space of freedom in the way they socially learn from others. Therefore our system can be characterised by general (stochastic) rules and a locally effective geometry. It is also adaptive because our learning actors seize problems from the system's environment and change or enlarge their roles according to the new knowledge they acquire.
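
The following sketch is only meant to illustrate what such triple cell states and stochastic rules of social learning can look like in code; it is an illustrative toy, not the SCA implementation, and all concrete rules and parameters are assumptions of ours.

# Illustrative sketch only (not the authors' SCA implementation): cells of a
# "generalised" CA carry a triple state (role, knowledge, learning rate), and
# one stochastic interaction rule lets an actor take over a knowledge
# component from a neighbour with a probability given by its learning rate.

import random

SIZE = 8

def make_cell():
    return {"role": random.randint(0, 3),
            "knowledge": {random.randint(0, 9)},
            "learning_rate": random.uniform(0.1, 0.9)}

def neighbours(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))]

def step(grid):
    for (i, j), cell in grid.items():
        source = grid[random.choice(neighbours(i, j))]
        # social learning: with probability = learning rate, take over one
        # knowledge component from the chosen neighbour
        if source["knowledge"] and random.random() < cell["learning_rate"]:
            cell["knowledge"].add(random.choice(sorted(source["knowledge"])))
        # role taking: adopt the neighbour's role if the actor already shares
        # most of that neighbour's knowledge (an illustrative criterion)
        if len(cell["knowledge"] & source["knowledge"]) > len(source["knowledge"]) / 2:
            cell["role"] = source["role"]

if __name__ == "__main__":
    random.seed(11)
    grid = {(i, j): make_cell() for i in range(SIZE) for j in range(SIZE)}
    for _ in range(10):
        step(grid)
    mean_k = sum(len(c["knowledge"]) for c in grid.values()) / len(grid)
    print("mean knowledge per actor after 10 steps:", round(mean_k, 2))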

6.9
Because of certain historical and role-theoretical assumptions that we cannot explain here, we assumed that there is a decisive evolutionary parameter, measured by the degree of role independence, i.e., the degree to which the occupants of roles are able to develop without an inhibiting influence by the occupants of other roles (their social milieu). This evolutionary parameter is called EP. After having defined this parameter we started experiments with our SCA, i.e. the sociocultural mechanism in our artificial society, and learned that under most initial conditions, i.e., initial values of EP, our artificial societies got caught in an evolutionary attractor - they evolved according to a "Toynbee fate". This typical development is shown in Figure 1.

Figure 1. A Toynbee development with low EP-values: the culture is caught in an attractor

6.10
Only with very high EP-values, i.e., a very high degree of role independence, were our societies not caught by an attractor. Such a development, which is typical for modern societies of the Western kind, is shown in Figure 2.

Figure 2. A modern development: the culture transcends attractors

6.11
In other words, only very improbable initial conditions lead to evolutionary developments that are characteristic of the emergence of modern Western societies. Because such conditions were doubtless not present at the beginning of European history and the European Middle Ages, we had to conclude that an additional kind of evolutionary dynamics had to be effective: apparently under certain conditions a society is able to enlarge the EP-values via the SCA. That means that the SCA can change the very initial conditions that determined its evolutionary mechanism and thus reinforce its own development. Following some remarks of Luhmann we may call this an evolution of evolution. Formally we can describe this as a kind of dynamics that is not only adaptive in the sense defined in section 3 but is also able to change its own structural frame (for more detailed explanations see Klüver 2002). We introduced this type of dynamics into the SCA and obtained results that are consistent with the processes known from European history.

6.12
The whole process of theory construction was a permanent interplay between the formulation of basic assumptions and definitions, historical studies, construction of the formal SCA model, the revision of some assumptions in light of the first results of computer simulations, revisions or enlargements of the formal model, and so on. To be sure, the whole theory depends on the consistency of theoretical foundations, historical knowledge and the computer model. But only such a permanent integration of historical knowledge with theoretical reflections and computer simulations based on the mathematical model of the SCA made it possible to transcend the non-committal evolutionary theories of the tradition of the humanities. Although our theory is at present far from complete, we firmly believe that such an approach is an excellent, maybe the only, way to obtain social theories that are truly scientific.

Aspect (C): the discovery of universal principles

6.13
In order to demonstrate the results one can gain by analysing pure formal systems, we shall deal with the problem of the so-called ordering parameters of CA and Boolean nets (BN). It is mathematically sufficient to analyse just one particular class of universal Turing machines because all other classes are equivalent with respect to the characteristics one wishes to study.

6.14
BN are a generalisation of CA and likewise discrete systems; in the simplest case the units of a BN just have the values 1 and 0. In the following we consider only such binary networks, although without restriction of generality. The general rules of binary BN are just the well known rules of propositional calculus: if, e.g., two units a and b determine the value of a third unit c and if the rule is

(1,1) = 1
(1,0) = 0
(0,1) = 0
(0,0) = 0, (5)

then this rule is obviously nothing other than the logical conjunction.

6.15
Besides those general rules there is a topology that determines which units interact with which others, i.e., which units determine the values of which other units. This topology can be represented in a binary adjacency matrix; the general rules can be represented in a frequency matrix that states which state values of the units can be realised, dependent on the application of the general rules.

6.16
Ordering parameters of CA and BN are to be understood as attempts to find simple numerical values or parameters of the rules that forecast or determine which kind of dynamics a certain BN (or CA) can in principle generate. Well known is the so-called P-parameter (Kauffman 1995), which measures the frequency with which the values 1 or 0 are reached by applying a certain rule. P = 1 means that either only 1 or only 0 can be realised; P = 0.5 means that both values are realised with the same frequency; P = 0.75 means that 1 and 0 are distributed in the ratio 0.75, i.e. 3/4. The logical conjunction mentioned above obviously has P = 0.75.
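
As a small sketch, P can be computed directly from the output column of a rule table; the function below implements the definition just given, using the conjunction rule (5) as its example.

# Sketch: computing the P-parameter of a Boolean rule as the relative
# frequency of its more frequent output value. For the conjunction (5)
# the outputs are (1, 0, 0, 0), hence P = 0.75.

def p_parameter(rule_outputs):
    ones = sum(rule_outputs)
    zeros = len(rule_outputs) - ones
    return max(ones, zeros) / len(rule_outputs)

if __name__ == "__main__":
    conjunction = (1, 0, 0, 0)   # outputs of rule (5) for (1,1), (1,0), (0,1), (0,0)
    print("P(conjunction) =", p_parameter(conjunction))   # 0.75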

6.17
As an ordering parameter P has the following effects: if 0.5 ≤ P ≤ 0.63, then the BN generates complex forms of dynamics, i.e. trajectories with attractors of long periods - the BN does not stabilise. If 0.64 ≤ P ≤ 1, then the BN generates only trajectories with simple attractors of small periods or point attractors - the BN rather quickly reaches states of stable order.

6.18
Different ordering parameters exist, but for some time it was not clear whether there is a connection between them. Besides that, it was an open question whether the topological conditions of a BN can be expressed by a simple ordering parameter, because the known parameters were defined only for general rules. We could answer both questions in the affirmative by the discovery of the so-called v-parameter.

6.19
The v-parameter (v for the German word Verknüpfung, connection) is defined as the deviation from the mean measure of influencing possibilities. In other words, the v-parameter measures the homogeneity of the influencing possibilities the units of a BN have on other units. For example, if in a BN of three elements all units influence the others, then v = 0; v = 1 is the case if in this BN only one element influences all others and the others have no influence at all. In the latter case the homogeneity, or the measure of equality with respect to influencing possibilities, is extremely small; in the former case the degree of equality is obviously extremely large. Therefore v = 1 exhibits a large degree of inequality and vice versa.

6.20
Mathematically, v can be expressed as a characteristic of the adjacency matrix, represented as a directed graph:

v = |OD - ODmin| / |ODmax - ODmin|, with 0 ≤ v ≤ 1. (6)

OD is the factual outdegree vector of the BN, i.e. the vector of the outputs of the units; ODmin is the outdegree vector with the maximally homogeneous distribution of outputs, and ODmax the outdegree vector with the minimally homogeneous distribution.
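
Equation (6) can be sketched in code under a simplified reading: ODmin is taken as the perfectly even distribution of all outgoing connections, ODmax as their concentration on a single unit, and |.| as the Euclidean norm. These are assumptions of ours for illustration, not necessarily the exact construction used by the authors; they do, however, reproduce the two boundary cases of the three-unit example mentioned above (v = 0 and v = 1).

# Sketch of one possible reading of equation (6): the v-parameter as the
# normed deviation of the factual outdegree vector OD from the most
# homogeneous distribution ODmin, relative to the most inhomogeneous
# distribution ODmax. The constructions of ODmin and ODmax and the choice
# of the Euclidean norm are simplifying assumptions for illustration.

from math import sqrt

def norm(vec):
    return sqrt(sum(x * x for x in vec))

def v_parameter(adjacency):
    """adjacency[i][j] = 1 if unit i influences unit j."""
    n = len(adjacency)
    od = [sum(row) for row in adjacency]          # factual outdegree vector OD
    total = sum(od)
    od_min = [total / n] * n                      # maximal homogeneity
    od_max = [total] + [0] * (n - 1)              # maximal inhomogeneity
    denom = norm([a - b for a, b in zip(od_max, od_min)])
    if denom == 0:
        return 0.0
    return norm([a - b for a, b in zip(od, od_min)]) / denom

if __name__ == "__main__":
    all_to_all = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # every unit influences the others
    one_to_all = [[0, 1, 1], [0, 0, 0], [0, 0, 0]]   # one unit influences everything
    print("v(all influence all)  =", v_parameter(all_to_all))   # 0.0
    print("v(one influences all) =", v_parameter(one_to_all))   # 1.0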

6.21
The effects of v as an ordering parameter are as follows: very small values of v, i.e., 0 ≤ v ≤ 0.25, generate complex types of dynamics, i.e., trajectories with attractors of long periods. The behaviour of systems with small values of v is similar to that of chaotic systems, although finite deterministic systems are, strictly speaking, always periodic and therefore never chaotic; this has in principle been known since the famous theorem of eternal return that Poincaré stated as early as 1900. Systems with larger v-values generate simple dynamics, i.e. trajectories with point attractors or attractors with small periods. In other words, if there is a high degree of equality with regard to influencing possibilities, the system generates complex dynamics that are difficult to prognosticate; inequality in this respect generates simple dynamics that in most cases ends in point attractors. In this sense the topology of a system or network indeed determines its dynamics, and its influence can be described by appropriate ordering parameters.

6.22
This result is important in several respects (not only) for social scientists: in particular, we demonstrated that the other ordering parameters can also be interpreted as measures of equality or inequality, although in other dimensions of equality (Klüver and Schmidt 1999). Because the effects of the other ordering parameters also lead to the conclusion that high degrees of equality generate complex dynamics and vice versa, the following theorem of (social) inequality can be stated:

6.23
High degrees of equality of the actors in social systems, in whatever dimensions of equality, generate complex dynamics; the higher the degree of inequality of the actors, the simpler the social dynamics.

6.24
For example, a strictly hierarchical system can be mathematically represented as a system with very high v-values. Empirical observations indeed demonstrate that such systems generate only very simple dynamics.

6.25
From this theorem we can obtain two further results. Most ordering parameters are characterised by the fact that complex dynamics is generated only in a rather small region of their values; the regions generating simple dynamics are in every case much larger. In addition, the different parameters can compensate each other, but only in one direction: for the generation of simple dynamics it is sufficient that a single parameter takes values in the region for simple dynamics, whereas for the generation of complex dynamics all parameter values have to lie in the region for complex dynamics. Mathematically this means that simple types of dynamics are much more probable than complex ones, both with regard to the value regions of the single parameters and with regard to the combination of the different parameters. Because simple dynamics, and in particular the generation of point attractors, is certainly an essential aspect of order - that is why the parameters are called ordering parameters - a second result can be obtained:
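The probabilistic character of this argument can be made explicit with a small back-of-the-envelope calculation. The figures below are purely hypothetical assumptions of ours: each of k independent ordering parameters is taken to fall into its "complex" region with probability r, and complex dynamics requires all k of them to do so.

    # Purely illustrative figures (our assumptions, not measured values):
    r = 0.25   # assumed fraction of each parameter's range yielding complex dynamics
    k = 3      # assumed number of independent ordering parameters
    p_complex = r ** k          # all k parameters must lie in their complex region
    p_simple = 1 - p_complex    # one parameter in its simple region already suffices
    print(p_complex, p_simple)  # 0.015625 vs. 0.984375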

6.26
The realisation of "dynamical order", i.e. of states that no longer change although the rules of interaction are still operating, is much more probable than the realisation of complex dynamics, which to the members of social systems appears as permanent disorder. Indeed, only stable order guarantees the members of social systems secure possibilities of action and prediction, that is, of living outside the "edge of chaos" (Kauffman). Apparently there are mathematically expressible reasons for the fact that societies can guarantee such order in most cases.

6.27
The third result we call the "paradox of democratic reforms". The ordering parameters tell us what kind of dynamics a system can in principle generate, and that also means what states can be reached at all. To be sure, nothing is said by that about how the system or its members evaluate these states. If the result of such an evaluation is negative, then the social system will try to change its rules in order to generate better states, i.e. the system behaves in an adaptive fashion; politically speaking, this is called reform. If these reforms aim in particular at the realisation of more equality in the system, i.e. if the reforms are to increase the degree of democracy, then, provided the changes are sufficiently radical, they inevitably change the values of one or more ordering parameters. If the values of the ordering parameters are varied by such radical reforms, the system's dynamics will become more complex; that in turn means that the system will change its states more often than before the reforms, without any further changes of rules. Probability theory teaches us that, with fixed normative criteria, there are many more bad states than favourable ones, i.e. states with which all or the majority of the members of the system are content. Therefore, because of the reforms, the system will with high probability generate states that are not favourable. That causes new reforms, which, if they again increase the degree of democracy, will produce the same result, and so on.

6.28
We do not wish to plead for undemocratic societies or for the uselessness of democratic reform - on the contrary. But on the basis of such results one has to proceed very carefully when undertaking reforms. Some possible consequences of reforms we can know in advance, and these should be taken into account: Quidquid agis, prudenter agis, et semper respice finem (whatever you do, do it prudently, and always consider the end).

6.29
By the way, it is always possible to simplify overly complex dynamics by introducing additional algorithms that "damp" the processes and force the generation of point attractors. In artificial neural nets, for example, this is realised by using the so-called "winner-take-all" function; in social systems such algorithms are, e.g., the decision competence of the occupants of political offices with respect to their formally equal fellow citizens. But such damping algorithms of course increase the inequality in the system.
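As an illustration of such a damping algorithm, the following sketch shows a generic winner-take-all rule as it is commonly used in artificial neural nets: of a layer's activations only the strongest survives, all others are set to zero, which pushes the net towards a point attractor much faster. The function is our own minimal example of the general idea, not the specific mechanism used in the authors' networks.

    import numpy as np

    def winner_take_all(activations):
        # Generic damping rule: keep only the strongest activation,
        # set the rest to zero.
        a = np.asarray(activations, dtype=float)
        damped = np.zeros_like(a)
        damped[np.argmax(a)] = a.max()
        return damped

    print(winner_take_all([0.2, 0.9, 0.4]))   # -> [0.  0.9  0. ]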

6.30
Investigations like the analysis of universal Turing machines in particular are rather far away from the methods social scientists are used to. But these considerations, as well as the reflections on the kinds of dynamics mentioned in the preceding section, are indications that social processes should be looked at in a more general way than social scientists are usually accustomed to: social dynamics should be seen as a particular variation on a general theme, namely the self-organising dynamics of complex systems.

6.31
This article, of course, was not meant as a state-of-the-art review; that would have been quite another essay. Rather, our aim was to clarify some foundations of the still comparatively new sub-discipline of computational - and mathematical - sociology. Other researchers may take a different view of the common subject. Scientific discourse will decide, as always, whether our point of view is a fruitful one.

* Acknowledgements

We wish to thank two anonymous referees for very helpful comments on a first version.


* Notes

1 Christina Stoica and Jürgen Klüver have recently written a course on Soft Computing as part of an online curriculum "Economy and Computer Sciences" for the departments of economy of the universities of Bamberg and Essen. Readers who are able to read German may obtain this course from the authors.

2 To be sure, the use of differential or difference equations does not necessarily mean a top down approach. It is quite possible to model systems with differential equations while going bottom up. For example, Kepler's equations are a typical top down model of the planetary system; Newton's theory of gravitation is strictly speaking a bottom up model.

3 Our translation from the German version

4 We have to add a caveat to this statement: of course there are also theoretical approaches in the social sciences that try to start with the basics of social behaviour and proceed from there by enlarging their models. Such a procedure is characteristic, e.g., of Garfinkel and his school of ethnomethodology, the role theory of Goffman and other micro-sociological approaches. Therefore our remark is to be understood as a reminder of methodical procedures that are sometimes neglected.

5 We assume that the determinant of the pay-off matrix is the significant parameter. Unfortunately, neither Nowak and May nor Fogel investigated this question.

6 According to a personal remark of Michael Cohen they were inspired to do this by the research of our group.

7 Readers who are acquainted with the famous "limit theorems" of Gödel, Church and Turing may be a bit surprised by our statement that this procedure has no limits. But it is rather easy to demonstrate that for every single modelling task it is possible to construct a sufficiently powerful formal system, although for this formal system the limit theorems are also valid (cf. Klüver 2000).


* References

ALEXANDER, J.C., Giesen, B., Münch, R. and Smelser, N.J. (eds.), 1987: The Micro-Macro-Link. Berkeley: University of California Press

AXELROD, R., 1984: The Evolution of Cooperation. New York: Basic Books

AXELROD, R., 1987: The Evolution of Strategies in the Iterated Prisoner's Dilemma. In: Davis, L. (ed.): Genetic Algorithms and Simulated Annealing. Los Altos: Morgan Kauffman

BAHR, D.B. and Passerini, E., 1998: Statistical Mechanics of Opinion Formation and Collective Behavior. Journal of Mathematical Sociology 23, 1 - 41

BENFER, R.A., Brent, E.E. and Furbee, L., 1991: Expert Systems. Newbury Park-London: Sage

BERGER, J. and Zelditch, M., (eds.), 2002: New Directions in Contemporary Sociological Theory. New York: Rowman and Littlefield

BERTALANFFY, L.v., 1951: Zu einer allgemeinen Systemlehre. Biologia Generalis. Archiv für die allgemeinen Fragen der Lebensforschung 19, 114 - 129

BRENT, E. E. and Anderson, R., 1990: Computer Applications in the Social Sciences. Philadelphia-London: Basics

BONNEUIL, N., 2000: Viability in Dynamic Social Networks. Journal of Mathematical Sociology 24, 175 - 192

COHEN, M.D., Riolo, R.L. and Axelrod, R., 2000: The Role of Social Structure in the Maintenance of Cooperative Regimes. Rationality and Society 13, 5 - 32

EINSTEIN, A. and Infeld, L., 1958: Die Evolution der Physik. Reinbek: Rowohlt

EPSTEIN, J.M., 1997: Non-linear Dynamics, Mathematical Biology and Social Science. Redwood: Addison Wesley

FARARO, T., 1997: Reflections on Mathematical Sociology. Sociological Forum 12, 73 - 101

FOGEL, D.B., 1993: Evolving Behaviors in the Iterated Prisoner's Dilemma. Evolutionary Computation 1, 77 - 97

FREEMAN, L., 1989: Social Networks and the Structure Experiment. In: Freeman, L. (ed.): Research Methods in Social Network Analysis. Fairfax: George Mason University Press

GEERTZ, C., 1973: The Interpretation of Cultures. New York: Basic Books

GIDDENS, A., 1984: The Constitution of Society. Outline of the Theory of Structuration. Cambridge: Polity Press

HABERMAS, J., 1981: Theorie des kommunikativen Handelns. Frankfurt (M): Suhrkamp

HANNEMANN, R.A., 1988: Computer-Assisted Theory Building. Newbury Park/London: Sage

HOLLAND, J. H., 1998: Emergence. From Chaos to Order. Reading (MA): Addison Wesley

KAUFFMAN, S.A., 1992: Origins of Order in Evolution. Self-Organization and Selection. In: Varela, F.J. and Dupuy, J.P. (eds.): Understanding Origins. Contemporary Views on the Origin of Life. Dordrecht: Kluwer Academic Publishers

KAUFFMAN, S.A., 1995: At Home in the Universe. New York: Oxford University Press

KAUFFMAN, S.A., 2000: Investigations. Oxford: Oxford University Press

KLÜVER, J. and Schmidt, J., 1999: Control Parameters in Boolean Networks and Cellular Automata Revisited: From a logical and a sociological point of view. Complexity 5, No. 4, 45 - 52

KLÜVER, J. and Schmidt, J., 1999a: Social differentiation as the Unfolding of Dimensions of Social Systems. Journal of Mathematical Sociology 23 (4), 309 - 325

KLÜVER, J. and Stoica, C., 2002: A Model of Cognitive Ontogenesis. In: Klüver, J., An Essay Concerning Sociocultural Evolution. Theoretical Principles and Mathematical Models. Dordrecht: Kluwer Academic Publishers

KLÜVER, J. and Schmidt, J., 2003: Historical Evolution and Mathematical Models: A Sociocultural Algorithm. Journal of Mathematical Sociology Vol. 28, issue 1

KLÜVER, J., 2000: The Dynamics and Evolution of Social Systems. New Foundations of a Mathematical Sociology. Dordrecht (NL): Kluwer Academic Publishers

KLÜVER, J., 2002: An Essay Concerning Sociocultural Evolution. Theoretical Principles and Mathematical Models. Dordrecht: Kluwer Academic Publishers

KUHN, T.S., 1962: The Structure of Scientific Revolutions. Chicago: University of Chicago Press

MICHALEWICZ, Z., 1994: Genetic Algorithms + Data Structures = Evolution Programs. Berlin: Springer

NOWAK, M.A. and May, R.M., 1993: The Spatial Dilemmas of Evolution. International Journal of Bifurcation and Chaos 3, 35 - 78

PARISI, D., Cecconi, F. and Cerini, A., 1995: Kin-directed altruism and attachment behaviour in an evolving population of neural networks. In: Gilbert, N. and Conte, R. (eds.): Artificial Societies. The Computer Simulation of Social Life. London: UCL Press

RASMUSSEN, S., Knudsen, C. and Feldberg, R., 1992: Dynamics of Programmable Matter. In: Langton, C.G., Taylor, C., Farmer, J.D. and Rasmussen S., (eds.): Artificial Life II. Reading (MA): Addison Wesley

TOYNBEE, A., 1934 - 61: A Study of History (12 vols.) Oxford: Oxford University Press

WIPPLER, R. and Lindenberg, S., 1987: Collective Phenomena and Rational Choice. In: Alexander et al. (eds.) The Micro-Macro-Link. Berkeley: University of California Press

WOLFRAM, S., 2002: A New Kind of Science. Champaign (IL): Wolfram Media
