© Copyright JASSS


Jürgen Klüver and Christina Stoica (2003)

Simulations of Group Dynamics with Different Models

Journal of Artificial Societies and Social Simulation vol. 6, no. 4
<https://www.jasss.org/6/4/8.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 13-Jul-2003      Accepted: 13-Jul-2003      Published: 31-Oct-2003


* Abstract

A socio-matrix (or Moreno matrix) describes relations between members of a group. Such a matrix can also be used to predict the dynamics of a group, i.e., the behaviour of the group members is determined by the values of the matrix. Several different models are used to analyse group dynamics based on Moreno matrices, namely a cellular automaton (CA), a Kohonen feature map (KFM), an interactive neural net (IN), and a genetic algorithm (GA). The results of the different models are compared; the models produce rather similar effects. In addition the predictions of the models are compared with empirical observations with respect to groups of students and children in a summer camp. The models are quite efficient in predicting real social behaviour.

Keywords:
Group Dynamics; Model Comparison; Moreno Matrix

* Introduction

1.1
One of the best known methods in social network analysis and research in group dynamics is the use of so-called socio-matrices or Moreno matrices (Moreno 1934; Freeman 1989). A socio-matrix represents relations between the different group members, e.g., positive and negative attitudes or the frequency of social interactions. Given a group of three members A, B and C, for example, a socio-matrix of this group may be

    A  B  C
A   1  1  1
B   0  1  0
C   0  0  0

which could represent the following attitudes: A likes himself and the others; B likes only himself, not A and C; and C does not like anyone, including himself. Accordingly one may construct a socio-matrix that measures the frequency of interactions. In our example such a matrix could be

    A  B  C
A   0  4  5
B   4  0  1
C   5  1  0

and zeroes in the main diagonal mean, of course, that one does not interact with oneself.

1.2
Usually socio matrices are used to describe only the structure of a group, for instance, when analysing the emotional relations between students in a classroom. In the course of systematic investigations of the dynamics of social systems and their dependency on structure (Klüver 2000; Klüver and Schmidt 1999) we analysed the dynamics of social groups and their dependency on the structure that is represented in a particular Moreno matrix. In other words, we took a certain socio matrix as the basis for different artificial models and analysed the dynamics of the resulting group; the dynamics of each model are generated by locally operating rules of interaction.

1.3
Because we were aware that results obtained by experiments with computer models might often be mere artefacts, due to particular characteristics of the model, we used different models for the same problem and compared their respective results. In addition we did some group experiments with students at an academy in Dortmund and tried to predict the outcome with our models. Finally, one of our students, Dominic Kalisch, carried out a systematic observation of children in a summer camp in the north-west of France for two weeks and compared his data with the predictions of one of our models.

1.4
The five models we used are (1) a cellular automaton (CA) with rules of interaction based on the feelings of group members towards the other members, (2) a Kohonen feature map (KFM), i.e., a self-organising artificial neural net, (3) an interactive neural net (IN), that is a recurrent network that does not learn - in contrast to the KFM, (4) a model consisting of two different genetic algorithms (GA), and (5) a rule based system.

* The Moreno CA

2.1
CA are discrete systems consisting of a grid of artificial cells; the cells are in certain states that change according to rules of interaction depending on the states of the cells in the "neighbourhood" of a particular cell. As the cells of a CA are usually defined as squares on a grid, the neighbourhood of a cell is mostly the von Neumann neighbourhood, i.e., the four adjacent cells at the sides of the square, or the Moore neighbourhood, i.e., the eight adjacent cells at the sides and at the corners. CA are logically equivalent to universal Turing machines, which means that it is possible to model any type of complex system with a suitable CA (cf. Wolfram 2002; Klüver 2000).

2.2
The rules of our "Moreno CA" are roughly as follows: an artificial member, represented by a cell, is in a positive or negative emotional state dependent on the neighbours; he either likes them or does not like them or he is indifferent towards them. Some versions of this CA use emotional values between 0 and 9, i.e., an artificial member likes "more or less" its neighbours, but the simple version with just three values in the components of the matrix is sufficient for the purpose of this article. The artificial member is in a positive mood if he is surrounded by more people he likes than by people he does not like or feels indifferent to. He is in other emotional states if there are more people in the neighbourhood he does not like than those he likes etc.

2.3
Speaking more formally, the states of the cells are computed in the following way: given a cell A, the Moreno matrix determines the emotional relations of A when applied to the eight cells of the Moore neighbourhood of A. We then obtain up to eight values v(A,X), one for each cell X belonging to the Moore neighbourhood of A. The state SA of A is given by the arithmetical mean value, i.e.,

SA = ΣX v(A,X) / n,

where n is the number of non-empty cells in the Moore neighbourhood. The Moreno matrix consists, as mentioned above, only of the values v(A,X) = 1, 0 or -1.

2.4
When the state of a cell A has been computed, the artificial group member "looks around" to see whether there is a neighbourhood, i.e., a social milieu, which he likes better. "Looking around" means that certain other cells are compared with the cells in the original (Moore) neighbourhood with respect to the values v(A,X); subsequently the new state SA is computed with regard to the new cells and compared with the original state. If the result is better than the first one, i.e., if there are members A likes better than his original neighbours, A "moves", i.e., leaves his place and moves in the direction of the members he likes better. "Moving" means that the original cell of A becomes an empty cell and one of the empty cells in the neighbourhood of A obtains the state of A, i.e., becomes the new position of A. In one time step a cell A can only move into an adjacent empty cell on the grid. If A cannot find members he/she likes better, A stays in its original place, i.e., in its original neighbourhood. In other words, A is looking for a neighbourhood U with a resulting state SAU with

SAU > SA.
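The evaluation and moving rule described above can be sketched as follows. The dictionary-of-dictionaries representation of the Moreno matrix and the function names are our own illustrative choices, not the authors' implementation:

```python
def emotional_state(member, neighbours, moreno):
    """S_A: arithmetical mean of the Moreno values v(A, X) over the
    non-empty cells X in the neighbourhood of `member`."""
    if not neighbours:
        return 0.0
    return sum(moreno[member][x] for x in neighbours) / len(neighbours)

def wants_to_move(member, current_nbrs, candidate_nbrs, moreno):
    """A member leaves its place only if the candidate neighbourhood
    yields a strictly better state: S_AU > S_A."""
    return (emotional_state(member, candidate_nbrs, moreno)
            > emotional_state(member, current_nbrs, moreno))
```

For instance, a member who likes one neighbour (value 1) and dislikes the other (value -1) has state 0 and would move towards a neighbourhood containing only the liked member.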

2.5
"Certain cells" that are compared with the cells of the original neighbourhood can be defined in different ways in our model. The observation range can be the extended Moore neighbourhood, i.e. the 16 cells adjacent to the original neighbourhood cells; it can be the extension of the extended neighbourhood which give cell A a total of 16 + 24 = 40 cells to look for alternatives to its original position and it can be the whole grid. We usually did our experiments with 30 group members and a grid of 80 cells. There has to be a rather large number of empty cells in order to give the artificial group members free room to move.[1]

2.6
The socio-psychological assumption behind these rules is of course that people tend to gather with other people they like rather well or at least do not dislike. Because this assumption is a truism, known from everyday experience, the rules of our CA seem to be quite realistic. Conversely, if certain group members are placed with members they do not like, they will feel unhappy and try to go away, i.e., they will look for a better milieu. The modelling logic therefore is that the values of the Moreno matrix are transformed into a certain state of satisfaction or dissatisfaction according to the formula above. This modelling logic is also the basis for the other four models described below.

2.7
The dynamics of the model depend on two parameters: on the one hand the size of the "looking area" can influence the behaviour of the system; on the other hand the structure of the Moreno matrix can have a certain influence too. In our experiments we tried to analyse the effects of these two factors: the question was how fast the system stabilises, i.e., how fast it reaches point attractors. A point attractor means in our model that the group members no longer change their neighbourhoods and therefore remain in the same place and "emotional" state. The initial states of the whole system, i.e., the distribution of the group members on the grid, were generated at random. As there are three possible sizes of the looking area, we experimented with all of them.

2.8
The "Structure" of the Moreno matrix is measured with a simple "symmetry parameter" Sy. We call a relation between A and B symmetric if v(A, B) = v(B,A). Then Sy is the proportion of symmetric relations in a given Moreno matrix of all factual relations or

Sy = symm.rel./all rel.

In our case of n = 30 members we have n² - n relations between the members, if one does not count the relations (A, A), and therefore 870 relations.
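A minimal sketch of the computation of Sy, assuming the Moreno matrix is given as a square list of lists with entries -1, 0 or 1:

```python
def symmetry_parameter(m):
    """Sy = symmetric relations / all relations, ignoring the main
    diagonal. `m` is a square Moreno matrix as a list of lists."""
    n = len(m)
    total = n * n - n                      # all relations except (A, A)
    symmetric = sum(1 for i in range(n) for j in range(n)
                    if i != j and m[i][j] == m[j][i])
    return symmetric / total
```

For n = 30 the denominator is 30² - 30 = 870, as stated above.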

2.9
Experiments done with this CA demonstrated that in most cases the CA generated point attractors very soon, regardless of the initial states. For all sizes of the range of observation and for most values of Sy the group was able to stabilise in the sense defined above. In the case of the first two sizes, i.e., 16 or 40 cells respectively, the CA reached point attractors after preperiods of 2 to 10 time steps. The length of the preperiods depended mainly on the size of the looking area: with 16 observable cells the preperiods were no larger than 5 or 6; if the group members could observe all other group members, then accordingly more runs (usually up to 20 time steps with a group of 30 members) were needed to obtain a point attractor, i.e., a group state where no member moved any longer. The reason for this behaviour is clear: the more possibilities of getting a better neighbourhood a member could perceive, the longer the whole system needed to reach a stable state. If, on the other hand, each member could only analyse its extended Moore neighbourhood, then the members had to be content with their local optima and accepted suboptimal neighbourhoods: they could not perceive that better neighbourhoods would be possible with group members they could not "see".

2.10
Only in a few cases were the preperiods longer; sometimes no point attractor was found at all, and only attractors with periods ≥ 5 could be reached during many runs. These were the cases where the Moreno matrix was deliberately constructed with values of Sy near 0: if A likes B, then B does not like A, and so on for nearly all members. It is intuitively plausible, and our CA experiments confirmed it, that in these cases A will go after B but B will go away from A and so on. Therefore no point attractor can be reached in these cases but only attractors with periods significantly larger than 1. These results, however, could only be obtained if the range of observation was the whole grid: then the group members always tried in vain to reach a global optimum (for themselves) which they could not get. In the cases of restricted observation the members were again content with their local optima and the group reached a point attractor after long (15 - 20 time steps) preperiods. In other words, the group members accepted that in their perceivable neighbourhood no better solution was in sight than their present local milieu, and therefore they did not leave their neighbourhoods in order to search for better social surroundings. On the whole, the Moreno CA seems to be a realistic model of the dynamics of real groups and capable of predicting group dynamics.

2.11
As a summary of our results we can state the following conclusion:

The "normal" behaviour of our Moreno-CA, i.e., the behaviour generated from the majority of Moreno-matrices with a reasonable degree of symmetry and from all different ranges of observation is the generation of point attractors after rather short preperiods. Only low values of Sy combined with the largest range of observation generate a more complex behaviour, i.e., the generation of attractors with periods significantly larger than 1.

2.12
Readers who are acquainted with the theory of complex systems, and in particular with the theory of cellular automata and Boolean nets (BN), will know that the dynamical behaviour of these artificial systems is determined by so-called "ordering parameters" (Kauffman 1993; Klüver 2000). The investigation of these ordering parameters showed that under most conditions, i.e., most values of the ordering parameters, CA and BN will generate only point attractors or attractors with very short periods. Only particular combinations of the ordering parameters with extreme values generate complex behaviour, that is, attractors with long periods. Our experiments, which were intended only to investigate the basic logic of group dynamics, obviously confirmed these general results: only a very small range of values of our simple parameters is able to generate more complex group behaviour. Therefore we may conclude that Sy is a kind of ordering parameter too. The empirically known fact that "real" groups usually tend to stabilise rather quickly may find its explanation in these general insights.

* The Moreno KFM

3.1
A Kohonen feature map (KFM) is an artificial neural net that learns "unsupervised", i.e., the learning processes of a KFM are directed by the criterion of clustering neural groups according to their similarity. Roughly speaking, a KFM learns by obtaining certain information about "concepts" and clustering these concepts according to their degree of similarity with regard to this information. An illustrative example of the operations of a KFM is the ordering of animal concepts according to information like "flesh eating", "having feathers", "having four legs" and so on (Ritter and Kohonen 1989; Stoica 2000). One may say that a KFM learns according to a criterion of topological nearness.

3.2
The basic construction logic of a KFM model is therefore the following: like every neural net, the KFM consists (a) of a so-called weight matrix that defines the relations between the different artificial neurons, (b) of a so-called activation rule that determines the information flow between the artificial neurons, and (c) of a certain learning rule that determines the changing of the values of the weight matrix. The learning criterion is, as stated above, the degree of similarity between the "concepts" or other components of information the KFM has to learn. The task of the KFM is always to order the information components into different clusters: each cluster centres around a "prototypical" specimen; the other members of the cluster are those that are most similar to the prototype.[2] Because this task is not performed in a supervised manner, i.e., there is no direct feedback from an environment, KFM are also rightly called "self-organising maps". Mathematically speaking, the KFM maps the similarity of different concepts, or in general of different components of information, into the structure of topological nearness.

3.3
In our experiments we took the group members as "concepts" and the emotional relations between them as information about the concepts. The KFM therefore had to cluster the members according to their mutual feelings, i.e., it had to construct subgroups or neighbourhoods of members depending on the information represented in the respective Moreno matrix. (For the sake of brevity we skip the particular algorithms of a typical KFM; each textbook on neural nets deals with KFM, in particular the Ritter-Kohonen type we experimented with.)

3.4
A KFM has, besides its weight matrix, an additional matrix that contains the information about the concepts it has to order. We may call this second matrix the "semantic matrix". In our case, therefore, we only had to use the Moreno matrix as the semantic matrix for the operations of the KFM. Another little example may illustrate this:

3.5
Take again three group members A, B, and C. The Moreno matrix is, e.g.,

    A  B  C
A   0  1  1
B   1  0  0
C   0  0  0

The zeroes in the main diagonal mean that the relations (A, A) etc. are to be neglected.

3.6
"Similarity" means in our case the degree of correspondence between the vectors that represent the feelings of one member towards the others. In our example, the vectors are (0,1,1), (1,0,0) and (0,0,0). The similarity is measured by the Hamming distance in the case of binary codes. i.e., the number of components that are identical in two vectors. When we neglect the zeroes in the main diagonal, then we get the vectors A = (1,1), B = (1,0), and C = (0,0).

3.7
Obviously the vector of A is more similar to the vector of B than to that of C; the B-vector has the same degree of similarity to the A-vector and to the C-vector. Accordingly, the KFM would generate a topological pattern

A - B - C

with equal distances d(A,B) and d(B,C), but with a significantly greater distance d(A,C). The Ritter-Kohonen KFM generates a grid where the units A, B, C ... are placed in artificial cells rather similar to the grids of CA; the distance d(A,B) is then simply the number of cells between A and B.

3.8
This example demonstrates a certain weakness of this model: one would expect that A and B would be placed together because of their mutual positive feelings, and that C would be placed nearer to A than to B because A likes C better than B likes C. The Moreno CA would do this (although not always), because both A and B would obtain a better emotional state, according to the evaluation formula, if they were placed together and if C were placed nearer to A than to B. The problem for C, however, also in the Moreno CA, is that C cannot decide between A and B because of identical feelings towards both. The KFM does not take into account particular local relations between single units but only the distribution of relations among the members of the whole group. In other words, the KFM clusters the group members according to their emotional relations to all members of the group, not only to those in the neighbourhood of a particular member. Therefore in certain cases the CA and the KFM will not obtain the same results with regard to local relations between single members, but they usually obtain the same results with regard to the distribution of the members of a larger group into subgroups.

3.9
In our example C is not placed nearer to A than to B because of the indifferent relation of C to A. Therefore C stays in the neighbourhood of B as an effect of the mutual positive relation of A to B - an indirect effect.

3.10
The movement of the cells in the Moreno CA has a general, although rather abstract, parallel in the KFM: the convergent process of the KFM consists, roughly speaking, of the successive ordering of the units into different subgroups and of the evaluation of the ordering results by comparing the topological relations between the members and the different clusters that contain particular subgroups. Because these subgroups, i.e., the clusters, are constantly changed during the learning process of the KFM, the units change their positions in a manner formally comparable to the movement of the CA cells.

3.11
Our experiments demonstrated that the KFM also obtained point attractors in practically all cases, although the KFM needed many more runs (500 on average) than the CA.[3] The reason for this is primarily that the algorithms of a KFM are more complex than those of a CA, because a KFM, in contrast to the CA, is basically a learning system. An additional reason is that the KFM always uses the whole Moreno matrix to obtain the global optimum for its task. Remember that in most experiments the CA only uses that part of the Moreno matrix that is needed for a Moore neighbourhood, an extended Moore neighbourhood (the 16 cells adjacent to the cells of the Moore neighbourhood) or the extension of the extended neighbourhood. Because of this difference between the two models, one cannot expect that their respective results will always be equal. It is of course possible to restrict the operations of a KFM too, by letting it use only certain parts of the Moreno matrix, but because we were interested only in comparisons of the different models in principle, we did not perform such experiments.

3.12
In the case of a Moreno matrix with deliberately chosen low values of Sy, the KFM likewise could not obtain point attractors but tried in vain to adjust its weight matrix for more than 1000 runs; it could only obtain attractors with periods larger than 1, i.e., it oscillated between different possible clusterings. In a certain sense, it seems that the dynamics of a KFM are also regulated by ordering parameters; as far as we know this phenomenon has never been investigated before.

* The Moreno IN

4.1
An interactive network (IN) is a recurrent artificial neural network that does not learn. It is often used for modelling logical relations between different parts of a system, e.g., social classes or propositional parts of a text (cf. Stoica 2000). The activation rule of an IN is the well-known linear activation rule of neural nets, i.e., the state of an artificial neuron results from the "weighted" sum of the inputs the neuron receives from other neurons:

Aj = Σi Ai wij,

where Aj is the activation state of the receiving neuron j, Ai are the activation states of the sending neurons i, and wij is the "weight" of the connection between neurons i and j. A weight is a numerical value that determines to which degree the connection between i and j inhibits or reinforces the information flow between them. The KFM usually operates with the same linear activation rule, although other, non-linear activation functions are possible.
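Iterating this activation rule until an attractor is found can be sketched as follows; clipping the states to [-1, 1] is our own assumption to keep the linear update bounded, not part of the original model:

```python
import numpy as np

def run_in(weights, state, max_steps=100):
    """Iterate the linear activation rule A_j = sum_i A_i * w_ij until a
    previously seen state recurs. Returns the final state and the period
    of the attractor (1 = point attractor), or None if no attractor was
    found within max_steps. States are clipped to [-1, 1] -- an
    assumption of this sketch, since unbounded linear updates can
    diverge."""
    seen = [state.copy()]
    for _ in range(max_steps):
        state = np.clip(state @ weights, -1.0, 1.0)
        for t, old in enumerate(seen):
            if np.allclose(state, old):
                return state, len(seen) - t
        seen.append(state.copy())
    return state, None
```

A mutually positive pair of weights yields a point attractor (period 1), while a mutually negative pair can produce an oscillation of period 2, analogous to the low-Sy cases discussed below.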

4.2
Usually all neurons in an IN are directly connected with all others, although this need not be the case. The "quality" of these connections, i.e., the strength of the influence the neurons exert on each other, is represented in the weight matrix (wij): wij = 0.5 means that neuron j gets an input from neuron i that is the output of i multiplied by 0.5. In this case the connection is apparently an inhibitory one, as it dampens the information flow.

4.3
For our Moreno experiments we defined the Moreno matrix as the weight matrix of an IN - in contrast to the KFM, where the Moreno matrix is an additional semantic matrix. The three possible values have the same meaning as in the examples above: in this representation 0 means that member i has a neutral feeling towards member j, 1 means a positive feeling and -1 of course a negative one. The states of the neurons again represent the emotional states of the members, as in the Moreno CA (the activation states of the KFM's neurons also represent the emotional states of the members, but they are not taken into account in the KFM experiments). Our experiments with an IN do not predict the ordering of members into subgroups according to their mutual feelings, as is the case with the CA, the KFM, and the two additional models mentioned below. IN experiments simulate the emotional outcome of a certain group structure, i.e., the results of IN runs with a particular Moreno matrix are the emotional states of the members with regard to the whole group. In contrast, the emotional states resulting from the CA and KFM experiments depend on the particular subgroups the different members have formed. In addition, CA and KFM are dynamical models that simulate the process of subgroup formation; the IN just models the emotional states at a certain time (Stoica 2000).

4.4
In most cases the IN with which we did our experiments generated a point attractor after about 30 steps; "most cases" means experiments done with Moreno matrices generated at random. But again, matrices with very low Sy values did not lead to point attractors but to attractors with periods of 2 and more. Sy indeed seems to be an ordering parameter even for systems whose algorithms are as different from those of a CA as those of an IN.

* A Moreno genetic algorithm (GA)

5.1
After the experiments with the different models described above, and after the comparison between the models and between models and empirical observations described below, we constructed another model for the analysis and prediction of group processes, namely a rule-based (expert) system. This system is currently used for competence management, i.e., the program solves problems of forming subgroups of employees with different skills in a firm for tasks that need teams with different types of competence. The program is a "hybrid system", i.e., the expert system is optimised by a genetic algorithm (GA). Because these problems are of course not identical with the simple generation of subgroups according to the emotional relations between group members, we had to change the original program; in particular we had to modify the GA.

5.2
The basic idea of this fourth model is the following: each group member is represented by a vector that is nothing other than the corresponding row of the Moreno matrix, the same procedure as in the case of the KFM. The program then generates at random different subgroups under the condition that each group member belongs to one and only one subgroup. This yields a particular combination of subgroups. In the same way nine further combinations of subgroups are generated. Each combination is represented by a vector of 30 dimensions, i.e., each member is represented by one component of the whole vector.

5.3
For each subgroup an "emotional value" Ev is defined by the sum of the emotional values of each member j. Let wji be the emotional relation of j to member i, as it is expressed in the Moreno matrix. Then the emotional value of a member j is

Evj = Σi wji.

The emotional value of a subgroup k is then

Evk = Σj Evj / n

for the n members j of the subgroup k. The emotional value for the whole group m, i.e., the combination of disjoint subgroups is obtained by

Evm = Σk Evk / r

for all r subgroups k of the group m. In other words, the better the emotional relations of the members are with respect to the other members of their subgroup, the better is the emotional value of the subgroups and of the whole group.
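The three formulas can be sketched together as follows, assuming (as we read the text) that the sum Evj runs over the other members i of j's own subgroup, and that the Moreno matrix is given as a dictionary of dictionaries:

```python
def ev_member(j, subgroup, moreno):
    """Ev_j = sum of the Moreno values w_ji over the other members i
    of j's subgroup."""
    return sum(moreno[j][i] for i in subgroup if i != j)

def ev_subgroup(subgroup, moreno):
    """Ev_k = sum of the Ev_j divided by the n members of subgroup k."""
    return sum(ev_member(j, subgroup, moreno) for j in subgroup) / len(subgroup)

def ev_group(subgroups, moreno):
    """Ev_m = sum of the Ev_k divided by the r disjoint subgroups."""
    return sum(ev_subgroup(k, moreno) for k in subgroups) / len(subgroups)
```

For example, putting two members who like each other into one subgroup raises the emotional value of that subgroup and thereby of the whole combination.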

5.4
Now the genetic operators of the GA - mutation and crossover - are applied to the combinations of subgroups, i.e., members of subgroups are exchanged via random processes. The result yields the emotional values of the combinations, i.e., of whole groups composed at random of different disjoint subgroups. The best five combinations of subgroups are preserved and the bad combinations, i.e., those with low emotional values, are dismissed (selection). Then the genetic operators are applied to the best five combinations, which generates ten new combinations, and so forth. When this process converges, i.e., generates no better results, the best result is compared with the results of the CA and the KFM.

5.5
When the genetic operators are applied in the usual way to the vectors of members, i.e., to the whole group (the vectors being divided into the different disjoint subgroups), the problem arises that certain members will not be represented at all in a combination of subgroups and/or other members will be represented twice or more in one group. To avoid this possibility, which would of course invalidate the whole process, mutation and crossover are modified a bit:

5.6
Mutation means the random exchange of two members within one combination of subgroups. If member A in subgroup k is "mutated", then A will be substituted by a member B from subgroup n and vice versa. The whole group then still contains all members and the subgroups are still disjoint. To be sure, member A may also be exchanged for a member B from the same subgroup k.
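This modified mutation operator can be sketched as follows; the list-of-lists representation of a combination of subgroups is our own illustrative assumption:

```python
import random

def mutate(partition, rng=random):
    """Randomly swap two members between two (possibly identical)
    subgroups, so that the result is still a disjoint cover of all
    members, as required by the modified mutation described above."""
    new = [list(sg) for sg in partition]
    k = rng.randrange(len(new))
    n = rng.randrange(len(new))
    if not new[k] or not new[n]:
        return new          # nothing to swap with an empty subgroup
    i = rng.randrange(len(new[k]))
    j = rng.randrange(len(new[n]))
    new[k][i], new[n][j] = new[n][j], new[k][i]
    return new
```

Because the operator only swaps members, every member still occurs exactly once and the subgroup sizes are preserved.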

5.7
Crossover means the exchange of several members of one whole group with the same number of members of another group. Let us assume that members A, B, and C from group m are exchanged with members D, E, and F from group r. Then group m contains members D, E, and F twice but no members A, B, and C; group r contains A, B, and C twice but no members D, E, and F. Therefore another exchange has to be made within groups m and r: in group m the "old" members D, E, and F are exchanged for members A, B, and C; "old" means, of course, the members that were not subject to the crossover operation. The same procedure is applied to group r with respect to the old members A, B, and C. In the end both groups again contain all members in disjoint subgroups, but with different combinations of subgroups. Speaking in terms of the GA, this modification is basically just a double application of the usual crossover operator, for the reasons mentioned above.

5.8
To be sure, the GA model does not intend to simulate the real processes of forming subgroups within a group. The evolutionary algorithm of the GA operates by multiplying the original group and by selecting the best combination of subgroups, i.e., of members organised in subgroups. That is certainly not the way a normal real group forms its subgroups. The operation of the GA may instead be compared with a group of rather intellectual people who perform social gedankenexperiments, i.e., they consider different combinations of subgroups as intellectual exercises and form their respective subgroups according to the outcome of these gedankenexperiments. We constructed this model only because we were interested in whether the GA would reach results that are at least in some respects comparable to those of the other models. First, albeit preliminary, results indicate that the GA model indeed generates divisions into subgroups that are rather similar to the neighbourhoods of the CA and the clusters of the KFM.

5.9
Another, more elegant GA model of the Moreno problem was recently constructed by Jörn Schmidt, using a technique that we developed for constructing "hybrid CA", i.e., CA that are optimised by genetic algorithms (Klüver 2000). The basic idea is the following:

5.10
The whole group is divided at random into, e.g., six different subgroups; other numbers of subgroups are of course possible. Each member is initially placed at a fixed position in the whole group and is symbolised by the number of that position; the whole group is then represented by a vector (1, 2, ..., 30). Because each member is also a member of one and only one subgroup, every subgroup can be written as, e.g., SG1 = (1, 0, 0, 4, ..., 29, 0), which means that this subgroup contains the members 1, 4 and 29.

5.11
Now ten additional vectors are generated at random that represent the changing of the subgroups. The 30 components of these vectors are numbers from 0 to 6. One vector may be for example

V1 = (3, 0, 4, 0, 1, 0, 0, 6, ... . , 1).

This means that the first group member, who is represented by the first component of the vector, is placed into subgroup 3 (regardless of which subgroup he was in before); the second member stays in his subgroup, the third member is placed into subgroup 4, the fourth member stays in his subgroup, the fifth member is placed into subgroup 1, the next two members stay in their respective subgroups, the eighth member is placed into subgroup 6, and so on. In other words, the components of these vectors represent the new subgroups into which the members will be placed. A 0 means that the corresponding member is not moved.
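Applying such a change vector can be sketched as follows, with members numbered from 1 as in the text; the set-based representation of the subgroups is an illustrative assumption:

```python
def apply_assignment(partition, vector):
    """Component m of `vector` gives the new subgroup (1-based) of
    member m; 0 means the member stays in its current subgroup."""
    # locate each member's current subgroup index
    where = {m: k for k, sg in enumerate(partition) for m in sg}
    new = [set(sg) for sg in partition]
    for member, target in enumerate(vector, start=1):
        if target == 0:
            continue
        new[where[member]].discard(member)
        new[target - 1].add(member)
    return new
```

For the partition [{1, 2}, {3}], for instance, the vector (2, 0, 1) moves member 1 into subgroup 2 and member 3 into subgroup 1, while member 2 stays put.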

5.12
When each vector has generated a new combination of subgroups, the results are evaluated according to the formulas given above. The best five vectors, i.e., the five vectors that have generated the best results, are then mutated and recombined by crossover, which yields ten new vectors. These vectors are again evaluated, and so forth, until the whole process converges.

5.13
First experiments with this GA-version show that convergence is best obtained by using a so-called "elitist" GA, that is, a GA where the best result of each step is preserved. As we already mentioned with respect to the other GA-version, preliminary results are rather similar to those of the other models.
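One generation step of such an elitist GA can be sketched as follows. The evaluation formulas of the paper are not reproduced here, so the fitness function is a stand-in passed in as an argument; all names are our own assumptions.

```python
# Minimal sketch of one elitist GA generation: keep the best change vector
# unchanged, breed the best five into ten offspring by crossover and mutation.
import random

def mutate(vector, rate=0.1, n_subgroups=6):
    """Reset random components to a new subgroup number (or 0 = stay)."""
    return [random.randint(0, n_subgroups) if random.random() < rate else c
            for c in vector]

def crossover(a, b):
    """One-point crossover of two change vectors."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def next_generation(vectors, fitness):
    """Rank by fitness, preserve the elite, fill up to ten offspring."""
    ranked = sorted(vectors, key=fitness, reverse=True)
    best_five = ranked[:5]
    offspring = [ranked[0]]                 # elitism: best result survives
    while len(offspring) < 10:
        a, b = random.sample(best_five, 2)
        c1, c2 = crossover(a, b)
        offspring.extend([mutate(c1), mutate(c2)])
    return offspring[:10]
```

Iterating `next_generation` until the best fitness stops improving corresponds to the convergence criterion described above.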

5.14
Finally, we did some preliminary experiments with a small rule-based system that operates on a CA-grid and uses the same criterion of similarity as the KFM. The group members are again represented by their rows in the Moreno matrix; each member compares its vector with those of its neighbours in the Moore neighbourhood. Then the members search the whole grid for cells with vectors more similar to their own than the vectors of the cells in the original neighbourhood. If such cells exist, the cell moves to the better environment, i.e., the cells select the Moore neighbourhood where the most similar cells are to be found. If no such neighbourhood can be found, the cells stay in their original place. In a certain sense, this model is a mixture between the logic of the CA and that of the KFM.
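The movement rule just described can be sketched as follows. The similarity criterion is not specified in detail above, so negated Euclidean distance between matrix rows is our assumption, as are all names.

```python
# Sketch of the rule-based grid model: a member moves only if some other
# Moore neighbourhood is strictly better than its current one.
import math

def similarity(row_a, row_b):
    """Assumed similarity criterion: negated Euclidean distance between
    two Moreno-matrix rows; larger values mean more similar."""
    return -math.dist(row_a, row_b)

def neighbourhood_score(member_row, neighbour_rows):
    """Average similarity of a member to the occupants of a Moore
    neighbourhood; empty neighbourhoods score worst."""
    if not neighbour_rows:
        return float("-inf")
    return sum(similarity(member_row, r) for r in neighbour_rows) / len(neighbour_rows)

def should_move(member_row, current_neighbours, best_other_neighbours):
    """Move only if the best alternative neighbourhood is strictly better."""
    return (neighbourhood_score(member_row, best_other_neighbours)
            > neighbourhood_score(member_row, current_neighbours))
```

The strict inequality in `should_move` reproduces the rule that cells stay in place when no better neighbourhood exists.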

5.15
These fourth and fifth "Moreno programs" have only recently been implemented, so we cannot yet report significant results in comparison with the other models. Further work will detail the results of these models. We mention these programs here in order to demonstrate the different possibilities that exist when modelling the same process. The results of the following section refer only to the CA-model, the KFM and the IN.

* Results of the comparison between the models and the results of empirical observations

6.1
In a direct sense, model comparison is possible only between the CA-results and those of the KFM-experiments, because they simulate the same process of the forming of subgroups. Despite the fact, mentioned above, that the CA uses information about the whole group for each member only in certain experiments, the predictions of the two models were in most cases rather similar or even equal. The results differed significantly, as was to be expected, only when the whole group was rather large, i.e., larger than 30 members, the range of observation was less than the whole grid, and the initial states of the CA, i.e., the randomly generated positions of the cells on the grid, made it impossible for the cells to perceive other cells with better opportunities for them.

6.2
In those cases where the CA used the whole Moreno matrix for each member, as the KFM always does, i.e., when the range of observation for each cell of the CA was the whole grid, the results were practically always equal or differed only in small degrees. "Small degrees" means the following: the Moore neighbourhoods of the CA-runs were compared with the clusters of the KFM-runs for each group member. We call the results equal if, for every unit, the Moore neighbourhood of that unit was a subset of the corresponding cluster of the KFM, i.e., the cluster that contained that unit.[4] If the Moore neighbourhoods differed on average by not more than one unit, we speak of small differences. The definition must be given in this manner because the clusters can be larger than eight units, i.e., larger than the Moore neighbourhood of the CA-cells. According to our experiments, equality or only small differences were obtained in more than 80% of the cases with an equal range of observation. In the cases of different ranges of observation we obtained these results in about 50-70% of the cases.
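This comparison criterion can be stated precisely in code. The following is a hedged reconstruction from the verbal definition above, with our own function names; the inputs map each member to the set of members in its CA Moore neighbourhood and in its KFM cluster, respectively.

```python
# Reconstruction of the CA/KFM comparison: "equal" means every occupied
# Moore neighbourhood is a subset of the corresponding KFM cluster;
# "small differences" allow an average mismatch of at most one unit.

def results_equal(moore_neighbourhoods, kfm_clusters):
    """True if each member's Moore neighbourhood is contained in the
    KFM cluster of that member."""
    return all(moore_neighbourhoods[m] <= kfm_clusters[m]
               for m in moore_neighbourhoods)

def small_differences(moore_neighbourhoods, kfm_clusters):
    """True if, on average, at most one Moore neighbour per member
    falls outside the corresponding KFM cluster."""
    diffs = [len(moore_neighbourhoods[m] - kfm_clusters[m])
             for m in moore_neighbourhoods]
    return sum(diffs) / len(diffs) <= 1.0
```

Note that the subset test is one-directional: a cluster may contain members outside the Moore neighbourhood, which is exactly why the definition allows clusters larger than eight.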

6.3
These experiments were always done with Moreno matrices generated at random. Because of the similarities or even equality of these results, despite the very different algorithms of the models and the different use of the Moreno matrix, we may safely assume that the results of the particular models are not simply artefacts.

6.4
To be sure, the results of the CA-experiments and the KFM-experiments always showed one important difference: both models generate a grid where the respective units are placed, i.e., each unit gets a certain cell. We may call this cell the absolute position of the artificial members in physical space. Because both models take into account only the relative position of each member with regard to the other members, the absolute position differs not only between the different models but sometimes even between different runs of the same model.

6.5
More interesting still than a model comparison is of course the comparison of the predictions of the models with empirical observations. The first experiment was done, as we mentioned above, with a group of students. The students constructed the Moreno matrix of their class group themselves, i.e., they gave the information about their emotional relations to the other students to one of us; then they were asked to move into another classroom, choose a sitting place and select their respective neighbours. Before they did this, we gave the Moreno matrix as input to the CA and to the KFM and let these models predict the sitting order.

6.6
To the astonishment of the students (and to our satisfaction) the KFM predicted the new sitting order to 90%, and so did the CA (with an observation range of 100%, i.e., the whole group). 90% means that in 90% of all cases the pairing of the students into respective neighbourhoods - A sits near B, C sits near D and so on - was correctly predicted. The CA erred only with regard to the "absolute" position of the students in the room, i.e., whether they were seated in the first row or another row, in the middle of the room or at the windows, etc. But that of course could not be predicted; the CA just had to "choose" some absolute spatial order.[5] The KFM does not do that at all but takes into account only the emotional relations, i.e., the KFM just forms subgroups without predicting their distribution in physical space. Of course, the grid of the KFM can be interpreted as the physical space too; in that sense, the KFM also did not predict the absolute position. In a nutshell, our models are obviously valid in the sense that they are able to predict the outcome of certain social processes (although of course rather simple ones), and in particular they did so with equal success.
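The 90% figure can be read as a simple pairing accuracy: the fraction of observed neighbour pairs that also appear among the predicted pairs. The following sketch makes that reading explicit; the function name and example names are illustrative, not from the original study.

```python
# Sketch of the accuracy measure: pairs are unordered ("A sits near B"
# equals "B sits near A"), so each pair is stored as a frozenset.

def pairing_accuracy(predicted_pairs, observed_pairs):
    """Fraction of observed neighbour pairs that were predicted."""
    predicted = {frozenset(p) for p in predicted_pairs}
    observed = {frozenset(p) for p in observed_pairs}
    return len(predicted & observed) / len(observed)
```

For example, if the model predicts the pairs (A, B), (C, D) and (E, F) and the observed pairs are (B, A), (C, D) and (E, G), two of three observed pairs were predicted.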

6.7
The IN, as was mentioned above, predicts the feelings of group members with regard to the whole group. Accordingly, we asked the students to express their feelings in this respect. Again the model predicted the outcome rather exactly, which the students acknowledged and confirmed. This model, too, seems to be valid.

6.8
To be sure, these little experiments are more illustrations of the possibilities of our models than a strict empirical confirmation. Yet they also demonstrate the advantages of this kind of modelling: it is very easy to compare the models with social reality because the models are constructed by directly translating empirical data - the Moreno matrix - into the rules and algorithms of the models (see below).

6.9
The second experiment in this context was done by observing a group of children in a summer camp for two weeks. Before the camp, Dominic Kalisch, a student of ours, asked all children about their mutual feelings towards the other children and, in addition, about their willingness to interact with children they did not know. He had to ask this because in the beginning the children, who came from different towns in Western Germany, knew only the children from the same town (and sometimes not even these). The student then constructed a Moreno matrix of the whole group in the usual manner. In the cases where child A did not know child B, the degree of willingness of child A to interact with strangers was inserted in the matrix at position (A, B), and accordingly the degree of willingness of child B was inserted at position (B, A).
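The construction rule for this matrix can be sketched directly from the description above. The data-structure choices and names are our own assumptions; the rule itself (willingness towards strangers fills unknown entries) is the one just described.

```python
# Sketch of the matrix construction: known feelings are taken over
# directly, and each unknown entry (A, B) is filled with A's general
# willingness to interact with strangers.

def build_moreno_matrix(feelings, willingness, members):
    """feelings[(a, b)] holds a's rating of b where known;
    willingness[a] is a's general openness towards strangers."""
    matrix = {}
    for a in members:
        for b in members:
            # Unknown pairs (including self-entries not rated) default
            # to the rater's willingness value.
            matrix[(a, b)] = feelings.get((a, b), willingness[a])
    return matrix
```

Note that the matrix is deliberately asymmetric: position (A, B) carries A's willingness, position (B, A) carries B's, exactly as in the text.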

6.10
The student then observed the interactions of the children (when he could not observe them himself, he asked the other teachers in the camp to do so). At the end he had a partition of the whole group into subgroups with respect to the frequency of interactions between the children. Finally, the student inserted the Moreno matrix into the Moreno-CA and compared the results of the CA-runs with his observations. Because of the methodical problem that the children did not know most of the other children in the beginning, the results were the following:

6.11
The first runs of the Moreno-CA were done with the Moreno matrix obtained from the initial interviews. The results differed in many cases from the children's actual behaviour, in particular towards the children they did not know in advance. This can be easily explained: a general willingness to interact with strange children does not necessarily predict the willingness to interact with a particular strange child. Therefore, the student had to correct his CA-runs by observing the degrees of interaction between the children and feeding these observations as new initial states into the CA. These corrections were done three times. In the end, the CA predicted the forming of subgroups several days before the end of the summer camp with a precision of 90%.

6.12
Of course, this procedure is, methodologically speaking, a problem. Because of the deficiencies in the initial Moreno matrix, the student should have interviewed the children again after some time of mutual acquaintance and thus completed his matrix. For several practical reasons this was not possible. Therefore the student had to be content with the "second best" solution, which also produced interesting results. The underlying hypothesis was, of course, that frequent interactions can be interpreted as positive mutual emotional relations between the children and vice versa.

6.13
The student also asked the children about their feelings with regard to the whole group and compared his data with the results of an IN. Again the results were very similar - a fact that is astonishing because children are usually not able to express their feelings in an exact, i.e., quantifiable manner. The IN - and the CA - seem to be rather robust with respect to vague or fuzzy empirical data.

* Conclusion and Discussion

7.1
Is there a reason for these equivalent or similar results of models with different algorithms? In our opinion the theoretical and methodological explanation for this equivalence is that all the models are constructed in the same methodical and theoretical way: the basis of the models always contains rules of interaction and empirical data taken "directly" from empirical experience, like the CA-rules and emotional states mentioned above, and these data and rules are taken as the methodical frame for the models. In particular, the group members were in all models represented by their rows in the matrix, and the criterion of the degree of emotional well-being was always the same. In other words, the models are constructed by applying the well-known methods of empirical social research, i.e., interviewing the persons about their subjective states and observing the rules of interaction between the observed persons. In this sense, the algorithmically different models are methodically equivalent. It is no accident that the different models of this type are all classified by the term "Soft Computing", coined by Zadeh, the founder of fuzzy set theory. That is on the one hand the strength of such models (cf. Klüver 2000) and on the other hand the principal reason for their largely equivalent results. These considerations were confirmed by other experiments in which we combined a hybrid CA, i.e., a CA coupled with a genetic algorithm (GA), with a hybrid IN that was likewise coupled with a GA. The two hybrid systems mutually exchanged the results of their respective optimising processes and in this way obtained better results than each system obtained alone (Stoica 2000; Klüver 2000). That was possible only because the two hybrid systems are also methodically equivalent in the manner just described, despite their very different algorithms.

7.2
To be sure, group processes like those discussed here are rather simple, and that is why we chose them to check our models - in comparison with each other and with empirical observations. But theoretical reflections on the characteristics of these models and extended experiments with quite elaborate models of this type (Klüver 2000; Klüver 2002; Stoica 2000) demonstrate that one can capture much more complex social processes with such models. Further results in this direction will be reported in due time.

7.3
In our opinion these results confirm an assumption that we stated in a recent article (Klüver et al. 2003): models of this type may be used as a universal modelling schema that can be applied to literally any problem of social or cognitive complexity. If this is true - and we have confirmed this assumption in many very different cases - then the theory of modelling may have to be revised. Computational sociology can be viewed as "a new kind of science" (Wolfram 2002). But that is just another story.

The models we discussed can be obtained from us at: http://www.cobasc.de


* Notes

1 This particular version of the Moreno-CA was implemented by Jörn Schmidt.

2 We deliberately use the term of prototype in the meaning of the well known prototype theory of Rosch (cf. e.g. Lakoff 1987). The clusters of a KFM as the result of its operations are the logical equivalent of the semantic clusters which are constructed around cognitive prototypes.

3 This version of the Moreno-KFM was implemented by Rouven Malecki.

4 To be more exact: if the cells of the Moore neighbourhoods that were occupied with group members were subsets of the clusters.

5 The absolute spatial order is always a result of the particular initial configurations of the CA; these were generated at random. That is so because this CA is a deterministic system.


* References

FREEMAN, L (1989), "Social Networks and the Structure Experiment". In Freeman, L (ed.), Research Methods in Social Network Analysis. Fairfax: George Mason University Press

KAUFFMAN, S (1993), The Origins of Order. Oxford: Oxford University Press

KLÜVER, J and Schmidt, J (1999), "Control Parameters in Cellular Automata and Boolean Networks Revisited. From a Logical and a Sociological Point of View". Complexity 5 (1), pp. 45-52

KLÜVER, J (2000), The Dynamics and Evolution of Social Systems. New Foundations of a Mathematical Sociology. Dordrecht: Kluwer Academic Publishers

KLÜVER, J (2002), An Essay Concerning Sociocultural Evolution. Theoretical Principles and Mathematical Models. Dordrecht: Kluwer Academic Publishers

KLÜVER, J, Stoica, C and Schmidt, J (2003), "Formal Models, Social Theory and Computer Simulations: Some Methodological Reflections". Journal of Artificial Societies and Social Simulation 6 (2) <https://www.jasss.org/6/2/8.html>

LAKOFF, G (1987), Women, Fire and Dangerous Things. What Categories Reveal about the Mind. Chicago and London: University of Chicago Press

MORENO, J L (1934), Who Shall Survive? Nervous and Mental Disease Monograph 58. Washington, DC

RITTER, H and Kohonen, T (1989), "Self-Organizing Semantic Maps". Biological Cybernetics 61, pp. 241-254

STOICA, C (2000), Die Vernetzung sozialer Einheiten. Hybride Interaktive Neuronale Netze in den Kommunikations- und Sozialwissenschaften. Wiesbaden: Deutscher Universitätsverlag

WOLFRAM, S (2002), A New Kind of Science. Champaign, IL: Wolfram Media

----


© Copyright Journal of Artificial Societies and Social Simulation, [2003]