
Arianna Dal Forno and Ugo Merlone (2004)

From Classroom Experiments to Computer Code

Journal of Artificial Societies and Social Simulation vol. 7, no. 3
<https://www.jasss.org/7/3/2.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 01-Nov-2003    Accepted: 31-May-2004    Published: 30-Jun-2004


* Abstract

A carefully designed experimental procedure may be an invaluable source for gathering empirical data and a key to grasping the heterogeneity of human behavior, which is of the utmost importance when modeling artificial agents. This paper proposes an alternative way of inferring models of behavior through a different use of data gathered in classroom experiments. By way of example, we report and then discuss the results and the computer code obtained from the analysis of the behavior of subjects in two classroom experiments.

Keywords:
Agent Behavior, Experiments, Prisoner Dilemma, Harvesting Dilemma, Bounded Rationality

* Introduction

1.1
The use of multi-agent based models to study individual behavior in social interactions is a powerful approach for shedding light on different features of complex entities such as organizations and markets. Most especially, it is an approach through which some aspects of human behavior usually not considered in economic literature can be taken into account.

1.2
Although models of rational behavior are well known and have been formalized, their predictive power is scant at best, and a large body of empirical evidence still challenges the rational behavior approach.

1.3
Various approaches to modeling bounded rationality have been proposed in order to overcome these limitations. While some models of bounded rationality have been outlined in the economic literature, they often seem more a relaxation of the theoretical model than an actual model of human behavior able to provide insights into complex societies.

1.4
The different approaches proposed in the literature include Fehr and Schmidt (1999). They model fairness as self-centered inequity aversion, and show that, in the presence of some inequity-averse subjects, both cooperative and non-cooperative behavioral patterns can be explained in a coherent framework. For other models where fairness is incorporated in rational behavior, the reader may refer to the well known ERC model (Bolton and Ockenfels 2000) and to Rabin (1993). Fudenberg and Levine (1998) develop an alternative explanation of the equilibrium arising when less than fully rational players interact. Finally, in Social Psychology, different approaches to social learning are presented (for a first introduction the reader may refer to Bandura 1999).

1.5
A variety of behavioral patterns is presented in this vast literature, and in our experiments we were able to observe some of them. The discussion of our results highlights several of these patterns and provides the relevant references.

1.6
Implementing behaviors on the basis of empirical data gathered from subjects in experiments may offer an alternative approach, one which employs more realistic micro-behaviors in the modeling of agents.

1.7
The use of experiments is central to all sciences and certainly not new to economics and the social sciences. Roth (1995a), for example, classifies experiments in economics according to their purpose, identifying:
  1. experiments designed to test the predictions of well expounded formal theories
  2. experiments studying the effects of variables about which existing theories have little to say
  3. experiments testing new theories that have grown from behaviors observed in other experiments
  4. experiments designed to study the effects of the introduction or a change of policies in a given environment.

1.8
Furthermore, experimental games are used in social psychology for studying decisions and outcomes in settings where two or more parties are interdependent. Psychologists seek to understand how people behave in these games and how environmental variables affect these behaviors.

1.9
Given the increasing popularity of agent-based modeling, great interest is focused on understanding how behavior is modeled in artificial agents. Macy (1998) views a dynamical theory of microsocial interactions, formalized using artificial agents, as a promising new direction for theoretical research. Duffy and Engle-Warnick (2002) propose genetic algorithms applied to data gathered from experiments on human subjects and Boero (2003) extends and refines this approach.

1.10
We, on the other hand, have used results from experiments to infer some stylized patterns of behavior which are then modeled in artificial agents. This approach may also be useful for extending the notion of bounded rationality to include cultural tools such as social norms and imitation. In addition, it not only closely resembles the Strategy Method discussed in Roth (1995b), but also has the important difference that the subjects are not asked to provide a strategy; rather, we infer it from the subjects' behavior and from their answers to a questionnaire.

1.11
Other authors use questionnaires to classify subjects according to their type. For instance, Burlando and Guala (2003) report an experiment aimed at testing the heterogeneous agents hypothesis. They implement a within-subjects design where the same individuals participate in a repeated linear public goods experiment, first with a group of heterogeneous agents and then with homogeneous groups. They use the subject classification to identify homogeneous individuals and repeat the experiment with the same subjects. By contrast, in our approach, we classify subjects, code their behavior and then perform agent-based simulations where subjects are either homogeneous or otherwise.

1.12
Our approach is more in line with that of Duffy (2001), which moves towards the comparison of human subjects and artificial agents in experiments. Taking into account the Kiyotaki-Wright model, he has designed an agent-based model similar to a human subject experimental environment. In line with findings obtained from human subject experiments, his artificial agents learn over time; in other words, the behavior of artificial agents is based on features of human subject behavior observed in a prior experimental study of the model. While he uses statistical inference from experimental data in agent modeling, we explicitly consider individual heterogeneity and disaggregate subjects, taking into account both the experimental data and the answers given by subjects in questionnaires.

1.13
Social dilemmas have been extensively used in experimental research. A prime example is that of the prisoner's dilemma, which has been widely used in different classes of experiments exploring problems of social coordination. The prisoner's dilemma covers different situations such as harvesting fish, engaging in resistance or civil disobedience and contributing to a public good, all requiring threshold levels of participation in order to be successful. Implementing the approach described above, we have devised some classroom experiments in order to observe individuals' behavior when facing situations presenting social dilemmas.

1.14
We discuss some of the approaches used in experiments based on the prisoner's dilemma and the harvesting dilemma. We consider aspects such as allowing communication between subjects before the experiment, the motivations given by subjects, and the discussion of the experiment outcome. We argue that, if the procedure of the experiment is carefully designed, even these aspects, usually considered problem areas as far as the experimental set-up is concerned, may be an invaluable source of empirical data regarding the behavior of individuals. These data are particularly important for several reasons. First, observing actual subject behavior in experiments may enable the serious social scientist to avoid the common pitfall of adopting ad hoc models of bounded rationality. Secondly, it can provide important indications when modeling artificial agents' behavior. Finally, in some situations, some behavioral patterns, even if exhibited by a single subject, may have dramatic effects on the entire population.

1.15
We present the results of two classroom experiments we used to model agents' behavior in multi-agent simulations. In particular, we describe the different kinds of behavior emerging from the classroom experiments and discuss how they have been embedded in the agent-based model developed by us. Furthermore, we compare the behavioral classes derived from our experiments with the models of bounded rationality used in other disciplines. Through the modeling of different behaviors we found that our simulations were able to elicit a wide variety of macro-level behaviors in the models of organization examined (Dal Forno and Merlone 2002, 2004).

1.16
The paper is organized as follows. In Section 2 we briefly discuss the literature on experiments and present our approach where data gathered from subjects are used. In Section 3 we describe two of the experiments we ran in the classroom and show how the results may be employed for writing computer code. In Section 4 we discuss our results, and present an example of a simulation incorporating the behaviors we observed. Our conclusions are set out in the last Section.

* Experiments on the prisoner's dilemma and social dilemma

2.1
The classification of experimental games is not univocal in the literature. While some authors (Ledyard 1995, p.126) tend to consider social dilemmas as a special example of public goods, this is not so for all. One example is Pruitt (1999), who takes into consideration "(...) four types of games: matrix games; social dilemmas; bargaining games and coalition games." Social dilemmas (also called "resource dilemmas") involve a common pool of resources which is either periodically removed by the parties (in the case of harvesting dilemmas) or periodically renewed (in the case of public goods dilemmas). In what follows we employ Pruitt's classification of social dilemmas.

2.2
The prisoner's dilemma plays a special role in experiments carried out in economics. Variations on the prisoner's dilemma have been the subject of experimental interest since 1950, motivating hundreds of experiments (for a survey see Roth 1995a).

2.3
Social dilemmas are popular in experimental economics (Smith 1979a, 1979b), in economic sociology (Marwell and Ames 1979) and in social psychology literature (Dawes et al. 1977). Ledyard (1995) provides a good survey of different approaches.

2.4
The most widespread approach to experiments consists of testing models with human subjects and discussing how their observed behaviors match the predictions of the model (see Figure 1).

Fig 1
Figure 1. Widespread approach used in experiments

2.5
We do it the other way round: on completion of the experiment we use the observed behavior of the subjects to infer models of behavior. In particular, in the experiments we examined, subjects exhibited different behaviors; furthermore, even when patterns of behavior were close, analysis showed that the subjects' motivations were different. As a consequence, from the results of a single experiment we could infer different behaviors (see Figure 2). Obviously, in this way we can also analyze situations where all the individuals are homogeneous.

Fig 2
Figure 2. Our proposal for a new approach

2.6
There have also been investigations where the designer elicited short computer programs from several scholars (Axelrod 1980a, 1980b, 1984). As previously mentioned, our approach differs inasmuch as we observe the players' behavior only as a reaction to the situations which actually arise during the experiment, rather than requiring decisions at every node. Furthermore, we think that the results obtained when encoding an algorithm from observed behavior are very different from those obtained when asking the subjects to provide an algorithm. There are several reasons for this difference; for example, as Roth (1995a, page 29) reports, providing an algorithm to confront a situation and actually tackling the situation are two different tasks[1].

2.7
For these reasons, we find that asking for an algorithm is too limited for our purposes and prefer to infer rules of behavior from subjects who are playing out a sort of "real-life" social dilemma situation.

* Experimental design and agent behavior coding

3.1
Here we present two different experiments we ran in the classroom. Both experiments deal with social dilemmas. The first concerns free riding, while the second concerns the tragedy of the commons.

3.2
In both experiments, instructions were given to the subjects one week in advance. This way they were able to discuss and, hopefully, increase their understanding of the game. There were also some unexpected side effects that will be discussed in the subsections to follow. Since our focus in both experiments is behavioral modeling, the detailed motivations of the subjects' choices are, in our view, crucial. Therefore, at the end of each session we asked every subject to motivate the choices they had made.

3.3
The motivations provided by the subjects gave us two different kinds of information. First, they enabled a check on the coherence of the data. An example of incoherence exhibited by our subjects occurred when their motivations did not match their behavior. In these cases, we either opted for modeling the behavior that was implicit in the motivations or for approximating their actual behavior; when neither is possible the modeler can even decide to drop these agents from consideration. Second, as it turned out, while some patterns of behavior were common to different subjects, their motivations were different. This made it possible to partition the different behaviors into homogeneous classes. Partitioning can be done either according to motivation or to observed behavior. For example, in the prisoner's dilemma we chose to partition subjects considering their motivations, yet other approaches such as cluster analysis are possible. As a matter of fact, in our harvesting dilemma example we chose to partition subjects into clusters considering their behavior.

3.4
Our next step was to codify the behavioral classes and thus to characterize different types of agents (for further detail, see Dal Forno and Merlone 2002, 2004).

3.5
The whole process can be summarized as in Figure 3:

Fig 3
Figure 3. Procedure used to code behaviors

3.6
Motivating subjects in experiments is a well-known problem. Our students were encouraged in the following way. Subjects were told that they would receive up to 1 mark in addition to whatever their grade would be in the final exam of the Math class (the maximum available grade is 30/30 and the pass grade is 18/30). Furthermore, only written motivations made them eligible for the additional mark. Subjects were also told that the actual incentive depended on their performance (incentive = ranked[2] performance normalized to the range [0,1]).
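As an illustration, the following is a minimal sketch of this incentive rule. The direction of the ranking (rank 1 = best performance) and the linear normalization are our assumptions, since the text only states that the ranked performance is normalized to [0,1].

#include <stdio.h>

/* Hypothetical sketch of the incentive rule described above: performances are
   ranked and the rank is normalized to [0,1]. Rank 1 = best is our assumption. */
double incentive_from_rank(int rank, int n_subjects)
{
    if (n_subjects < 2)
        return 1.0;                               /* degenerate case: one subject */
    /* best rank (1) -> 1.0, worst rank (n_subjects) -> 0.0 */
    return (double)(n_subjects - rank) / (double)(n_subjects - 1);
}

int main(void)
{
    /* example: 55 subjects, as in the first series of sessions */
    for (int rank = 1; rank <= 55; rank += 27)
        printf("rank %2d -> incentive %.3f\n", rank, incentive_from_rank(rank, 55));
    return 0;
}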

3.7
In what follows, we show how some of the behavioral patterns that emerge from data can be used to model behaviors of artificial agents. We used the data obtained in the first experiment to perform some agent-based model simulations. These are fully described in Dal Forno and Merlone (2002, 2004) and therefore omitted here.

3.8
While the results of our experiments and the observed behaviors are quite interesting, the number of subjects involved is too small to provide a solid empirical basis for generalizations. What we are actually concerned with in this paper is to put forward an idea of how behaviors can be implemented. For the sake of brevity, we have limited our analysis to the more interesting subjects, overlooking too simple or predictable behaviors. The behaviors we observe are important since they both provide further justification of individual heterogeneity and suggest alternative models of bounded rationality. In our opinion, all these aspects are important when approaching the modeling of artificial agents' behaviors.

3.9
Modeling is a creative and iterative process in which hypotheses are formulated and formal models are tested and revised (see Sterman 2000 for a discussion about the modeling process). In this sense our approach simply suggests possible uses of experimental data and subjects' motivations when formal models of behavior are considered. As a consequence the behavior implementations we report are intended as examples and, in most of the cases, different rules would implement the same observed behaviors. Nevertheless, in deciding whether the simulations are reasonable some measures of goodness of fit are in order. The problem of evaluating simulation models is well known (for an introduction the reader may refer to Pindyck and Rubinfeld 1991). For this purpose, the different evaluation criteria we have considered include both the rms (root-mean-square) simulation error and Theil's inequality coefficient.

3.10
The rms simulation error for the variable Y_t is defined as

\text{rms error} = \sqrt{\frac{1}{T} \sum_{t=1}^{T} \left(Y_t^{s} - Y_t^{a}\right)^2}

where Y_t^{s} is the simulated value of Y_t, Y_t^{a} the actual value, and T the number of periods in the simulation. This is the most frequently used measure of the deviation of the simulated variable from its actual value.

3.11
Theil's inequality coefficient is defined as

U = \frac{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(Y_t^{s} - Y_t^{a}\right)^2}}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(Y_t^{s}\right)^2} + \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(Y_t^{a}\right)^2}}

It measures the predictive power of the model and always falls between 0 and 1. When U = 0 the fit is perfect, while U = 1 means the predictive performance of the model is as poor as possible (for details see Pindyck and Rubinfeld 1991).
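As an illustration, the following is a minimal sketch (our own code, not the original simulation platform) of how these two goodness-of-fit measures can be computed; the example data in main() are the actual and simulated efforts reported for Subject #17A in Section 3.20.

#include <math.h>
#include <stdio.h>

/* root-mean-square simulation error between simulated (ys) and actual (ya) series */
double rms_error(const double *ys, const double *ya, int t_len)
{
    double sum = 0.0;
    for (int t = 0; t < t_len; t++) {
        double d = ys[t] - ya[t];
        sum += d * d;
    }
    return sqrt(sum / t_len);
}

/* Theil's U: rms error divided by the sum of the rms of each series;
   U = 0 means a perfect fit, U = 1 the worst possible predictive performance */
double theil_u(const double *ys, const double *ya, int t_len)
{
    double num = 0.0, ss = 0.0, sa = 0.0;
    for (int t = 0; t < t_len; t++) {
        double d = ys[t] - ya[t];
        num += d * d;
        ss  += ys[t] * ys[t];
        sa  += ya[t] * ya[t];
    }
    return sqrt(num / t_len) / (sqrt(ss / t_len) + sqrt(sa / t_len));
}

int main(void)
{
    /* Subject #17A, actual vs simulated efforts (see Section 3.20) */
    double actual[8] = {1.462, 0.921, 0.9,   0.92, 0.92,  0.92, 0.92, 0.92};
    double simul[8]  = {1.462, 0.921, 0.921, 0.92, 0.921, 0.92, 0.92, 0.92};
    printf("rms = %.4f  U = %.4f\n",
           rms_error(simul, actual, 8), theil_u(simul, actual, 8));
    return 0;
}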

3.12
In discussing our results, we present the actual behavior, the simulated one and the corresponding values of the rms and the Theil's inequality coefficient.

Prisoner's dilemma

3.13
Here we consider a very simple model of organization with no hierarchy and based on a population randomly matched in teams of two. Each member supplies a non-observable individual effort in order to produce a good. Members are rewarded according to their joint production, yet each agent bears his own private cost in providing effort. The profit function for agent i is:

Equation

where:

The profit function is the same for all the agents and does not change over time.

3.14
For the sake of simplicity we consider symmetric equilibria: the related one-shot game has a unique Nash equilibrium. Furthermore, a Pareto-optimal coordination equilibrium is possible (this being the optimal value of effort when both team-mates play exactly the same one). Of course, there is a strong incentive to deviate, and this gives the cooperative equilibrium a very low probability of being played.

3.15
Here we report the data from undergraduates who were in the Math class at the SAA (the Business School of the University of Torino). Subjects were told that each session of the experiment would last five minutes and that some extra time would be required for writing a report explaining their choices. We ran two series of experiments. The first series consisted of 8 weekly sessions each including about 55 individuals, the second consisted of 7 weekly sessions each including about 28 individuals. At the beginning of each session, every student was asked for a non-negative effort, in order to maximize his/her individual profit. In particular, we proposed the following profit function:

Equation

3.16
Instructions for the game are presented in Figure 4:

Fig 4
Figure 4. Instructions for the prisoner's dilemma experiment

3.17
It is worth noting that, considering the one-shot game, in this case the Nash equilibrium is

Equation

while the coordination equilibrium is

Equation

3.18
In every session couples were randomly matched from the students in the room, so that nobody would know his/her partner in the current session. This was to avoid too many commitment and cheap-talk issues[3]. Each student was allowed to use any computational tool to make his/her decision, but no communication was allowed during a session. After each session an updated table of matchings, efforts, individual profits and team profits was made public.
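As an illustration, a minimal sketch (our code, not the original experimental software) of one way such a random matching of couples could be implemented: shuffle the subjects' indices with a Fisher-Yates shuffle and pair adjacent entries.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* shuffle the index array in place (Fisher-Yates) */
static void shuffle(int *idx, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}

int main(void)
{
    enum { N = 28 };                 /* e.g. the size of a second-series session */
    int idx[N];
    srand((unsigned)time(NULL));
    for (int i = 0; i < N; i++)
        idx[i] = i;
    shuffle(idx, N);
    for (int k = 0; k < N; k += 2)   /* adjacent entries form a couple */
        printf("couple %2d: subject %2d with subject %2d\n", k / 2, idx[k], idx[k + 1]);
    return 0;
}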

3.19
At the end of the experiment, subjects were asked to answer the questions reported in the form seen in Figure 5[4].

Fig 5
Figure 5. Question form for the prisoner's dilemma experiment

3.20
In the following, we give a sample of the behavioral patterns emerging from the experiment. For each of the eight sessions, we report the efforts exerted by the selected subjects and the motivations which led them to those very choices. Both the rms simulation error and Theil's coefficient measure how close the simulated behavior is to the actual one. Our simulations seem to be reasonably good fits, since both are close to zero.

1. Subject #17A: (Close to Nash agent)

Turn            1      2      3      4      5      6      7      8
Agent's Prof.   6.409  6.932  6.263  6.08   5.899  5.94   6.77   5.9
Actual Eff.     1.462  0.921  0.9    0.92   0.92   0.92   0.92   0.92
Simul. Eff.     1.462  0.921  0.921  0.92   0.921  0.92   0.92   0.92

rms = 0.0075   U = 0.0037

Motivation: The optimal effort for a couple of agents is 1.462. I therefore started from this value. With the first attempt my profit was low, so I decided to exert a different effort. The effort 0.92 is the one that, at one and the same time, binds my loss in profit when my partner exerts a low effort and gives me a higher profit when matched with a high effort partner.
Comments: This result is one of the most interesting since, just observing the data, one would guess that this subject plays the Nash equilibrium. Yet a more careful analysis of the motivation shows that the actual reasoning behind the data is based on a heuristic which, in this case, matches the Nash effort eN.
	
case :	
  if (first_iteration) {	
      curr_effort=1.462;	
  }
  else{	
      if (my_last_profit < profit_level) {	
         curr_effort = Nash_effort;	
      }	
   }
   effort = curr_effort;	
break;

2. Subject #25A: (Adaptive agent with Lower Bound)

Turn            1      2      3      4      5      6      7      8
Avg. Eff.       1.354  1.395  1.168  0.81   0.931  1.01   0.94   0.85
Actual Eff.     1.46   1.47   1.35   -      1.12   1.29   1.25   1.27
Simul. Eff.     1.462  1.462  1.462  -      1.316  1.18   1.07   0.96

rms = 0.1659   U = 0.0636

Motivation: The optimal effort for a couple of agents is 1.462, so I started with this value but, in the end, I decided to engage a lower effort to avoid free riding of partner. I probably should have had the courage to exert an even lower effort.
Comments: The data and the motivations are not completely consistent. One possibility would be to model the behavior described in the motivation. Other approaches are, however, possible: for instance, studying the effort exerted by the subject's partners or, more simply, modeling the behavior as if the effort were chosen randomly over a certain range.
 
case :	
  if (first_iteration) {	
    counter=0;
    curr_effort=cooper_effort;	
  }
  else{	
    if (avg_effort < my_last_effort) {	
       counter++;	
       if (counter > threshold) {	
         counter=0;
         curr_effort*=0.9;	
         curr_effort=max{curr_effort, lower_bound};	
       }
    }	
  }
  effort=curr_effort;	
  my_last_effort=effort;	
break;

3. Subject #26A:

Turn            1      2      3      4      5      6      7      8
Avg. Eff.       1.354  1.395  1.168  0.81   0.931  1.01   0.94   0.85
Actual Eff.     1.462  1.31   1      0.98   0.89   0.89   0.88   0.65
Simul. Eff.     1.462  1.254  1.295  1.07   0.707  0.83   0.91   0.84

rms = 0.1462   U = 0.0692

Motivation: I started with the optimal effort: 1.462. Since other students kept on lowering their effort I tried to exert an effort slightly lower than the average in order to free ride.
Comments: This subject induced us to introduce different classes of effort-reducing agents in our simulations.
 
case :	
  if (first_iteration) {	
    curr_effort=1.462;	
  }	
  else{
    curr_effort = avg_effort-0.1;	
  }
  effort = curr_effort;
break;

4. Subject #49A: (Idealistic Hi Effort)

Turn            1      2      3      4      5      6      7      8
Actual Eff.     1.5    1.5    1.5    1.5    1.4    1.4    1.4    1.4
Simul. Eff.     1.45   1.45   1.45   1.45   1.45   1.45   1.45   1.45

rms = 0.05   U = 0.0172

Motivation: If both agents of a couple provide effort equal to 1.4 or 1.5 we obtain optimal profit. Even when I noticed that other players were exerting low effort, I kept on providing a higher one because, in my opinion, high team productivity is always more important than my personal profit.
Comments: This subject gave an ethical rationale for continuing to exert a high effort, adopting an internal standard that gives her a sense of self-worth (Bandura 1999).
 
case :	
  effort=1.45;	
break;

5. Subject #44A: (Adaptive agent)

Turn            1      2      3      4      5      6      7      8
Avg. Eff.       1.354  1.395  1.168  0.81   0.931  1.01   0.94   0.85
Winner's Eff.   1      0.756  1      0.9    1.4    1.3    0.51   1
Actual Eff.     1.5    1.5    1.5    1.39   1.07   0.91   0.9    0.88
Simul. Eff.     1.5    1.5    1.5    1.08   0.854  1.17   1.15   0.73

rms = 0.1916   U = 0.078

Motivation: The optimal outcome would have been to have all the players commit to an effort of about 1.4 or 1.5. Nevertheless, this way an individual providing no effort could free-ride. After a few turns I decided to take into account both the winning and average effort.
Comments: Some subjects considered different variables in order to monitor the overall situation of the organization.
 
case :
  if (first_iteration) {
    counter=0;
    curr_effort=1.5;
  }
  else{
    if (avg_effort < my_last_effort){
      counter++;
    }
    if (counter > threshold) {
      curr_effort = (winner_effort + avg_effort) / 2.0;
    }
  }
  /* assign the effort on every turn, including the first one */
  effort = curr_effort;
  my_last_effort = effort;
break;

6. Subject #43A: (Adaptive to Modal Effort agent)

Turn            1      2      3      4      5      6      7      8
Modal Eff.      1.462  1.5    0.9    0.9    0.9    1      0.9    0.9
Actual Eff.     1.462  1.462  0.8    1.1    1      0.8    0.85   0.85
Simul. #1       1.462  1.362  1.4    0.8    0.8    0.8    0.9    0.8
Simul. #2       1.462  1.462  1.5    0.9    0.9    0.9    1      0.9

rms (#1) = 0.2512   U (#1) = 0.1167
rms (#2) = 0.2681   U (#2) = 0.1201

Motivation: I started with the optimal effort: 1.462. Then, I engaged the most commonly supplied effort of the other players.
Comments: At least in their motivation, this subject exhibits behavior similar to reciprocity in the Bolton and Ockenfels (2000) model. Yet, a closer analysis shows that empirical data are better explained when the subject exerts an effort that is slightly lower than the most common. We have provided the simulated effort both when there is a small decrement (Simul. #1) and when there is none (Simul. #2). It is clear that considering a decrement provides a better approximation of the observed behavior.
 
case :	
  decrement = -0.1;
  if (iteration < bootstrp_time) {	
    curr_effort = 1.462;	
  }	
  else{
    curr_effort = modal_effort + decrement;	
  }
  effort=curr_effort;
break;

7. Subject #13B: (Cheap Talk agent)

Turn      1      2      3      4      5      6      7
Effort    0.91   0.725  0.693  0.71   0.65   0.63   0.625

Motivation: I did not look at the previous outcomes in deciding. Instead, I used comments and reactions from other agents when discussing previous results. Furthermore, when talking to the others, I tried to persuade them to a given effort so as to increase the chances of achieving the highest profit by playing my best response.
Comments: This behavior is similar to the model of selfish player presented in Fehr and Schmidt (1999). We did not model this behavior; however it is an important one since it highlights the need to consider different aspects of the game.

3.21
The last subject's behavior is particularly interesting when considering communication between subjects. Our decision to give subjects the instructions in advance and to let them discuss the game after each turn highlights aspects and behaviors that differ from those usually considered in Experimental Economics. One such aspect is that carrying out experiments where subjects are allowed to communicate offers interesting insights into social networks. Further research should consider social network effects when analyzing behavioral classes.

3.22
Many other behaviors were observed in our subjects. For example, many subjects were randomizing their effort over different ranges, while, by contrast, others simply provided the average effort of the population in the previous turn. Though we implemented these behaviors, for the sake of brevity, we do not report them here in full.
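As an illustration, the following are sketches (our code, not the authors' original implementation) of the two simple behaviors just mentioned: randomizing the effort over a range and repeating the previous-period population average effort. The range bounds follow the behavioral classes listed in Section 4.6.

#include <stdio.h>
#include <stdlib.h>

/* effort drawn uniformly at random over [lo, hi], e.g. [0.01, 2.5] */
double random_effort(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / (double)RAND_MAX);
}

/* effort equal to the population average of the previous turn; on the first
   turn no average exists yet, so a starting effort is used instead */
double average_effort_behavior(int first_iteration, double prev_avg_effort,
                               double starting_effort)
{
    return first_iteration ? starting_effort : prev_avg_effort;
}

int main(void)
{
    printf("random effort in [0.01, 2.5]: %.3f\n", random_effort(0.01, 2.5));
    printf("average-effort agent, turn 2 (prev. avg 1.354): %.3f\n",
           average_effort_behavior(0, 1.354, 1.462));
    return 0;
}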

Harvesting dilemma

3.23
Consider a lake partitioned into n basins. This lake hosts an initial amount x0 of fish. Every agent manages one of the basins, which contains an amount x0/n of fish. Time is divided into years and each year consists of two periods:
  1. fishing: subjects can catch any percentage of the amount of fish in their basin;
  2. reproduction: barriers between basins are removed, allowing fish to reproduce at rate α > 1, and a new total amount x1 of fish is again equally divided between the n basins.

3.24
Formally, if we denote by p_{t,i} the percentage of fish caught at time t in basin i, the total amount of fish in the following year is

x_{t+1} = \alpha \sum_{i=1}^{n} \frac{x_t}{n} \left(1 - p_{t,i}\right)

3.25
If we consider a number T of years, the total amount of fish caught by agent i across time is

Q_i = \sum_{t=0}^{T-1} p_{t,i} \, \frac{x_t}{n}

A reward is given depending on Qi. By backward induction it is obvious that, if α is greater than n, it is not rational to fish in periods different from the final one. Vice versa, if α is less than n, not to fish is a dominated strategy. Of course, in this case, there is a strong incentive to deviate from any commitment and this means that the probability of cooperative equilibrium being played is very low.
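As an illustration, a minimal sketch of the yearly cycle described above (our code; the committed "wait until the last year" policy is used only as an example) with the parameters used later in the experiment: x0 = 500, groups of 10 agents and α = 1.2.

#include <stdio.h>

#define N_AGENTS 10
#define N_YEARS  5

int main(void)
{
    double alpha = 1.2, x = 500.0;          /* reproduction rate, initial stock   */
    double q[N_AGENTS] = {0.0};             /* cumulative catch Q_i of each agent */

    for (int t = 0; t < N_YEARS; t++) {
        double remaining = 0.0;
        for (int i = 0; i < N_AGENTS; i++) {
            /* illustrative policy: everybody waits and catches all in the last year */
            double p = (t == N_YEARS - 1) ? 1.0 : 0.0;
            q[i] += p * x / N_AGENTS;               /* fish caught from own basin   */
            remaining += (1.0 - p) * x / N_AGENTS;  /* fish left in basin i         */
        }
        x = alpha * remaining;                      /* reproduction on pooled stock */
        printf("year %d: stock after reproduction = %.2f\n", t, x);
    }
    for (int i = 0; i < N_AGENTS; i++)
        printf("agent %d total catch Q = %.2f\n", i, q[i]);
    return 0;
}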

3.26
The instructions for the game, presented in Figure 6, were given one week before the experiment. This resulted in two students trying to persuade[5] the others to commit to a non-catching policy. Their attempt was only partially successful and gave an interesting twist to the experiment.

Fig 6
Figure 6. Instructions for the harvesting dilemma experiment

3.27
Here we report the data from undergraduates in the Math class at the University of Biella (Torino). We ran just one experimental session, which included 60 individuals divided into 6 groups of 10 students each. Subjects were told that the experiment would last about one hour. Participants waited outside the lab; each group was randomly selected and then performed the experiment in the lab. On entering the lab, subjects were given the initial amount of fish in the lake, x0 = 500, and the reproduction factor, α = 1.2.

3.28
At the beginning of each turn (year) the total amount of fish was posted on a board and every student was invited to supply both the percentage of fish caught, in order to maximize his/her individual profit, and the details motivating that choice. It is worth noting that, with a reproduction rate such as this, and since no credible commitment is possible, capturing all the fish in the first period would be rational. Answers were written on the form seen in Figure 7.

Fig 7
Figure 7. Report form for the harvesting dilemma experiment

3.29
During the experiment subjects were not allowed to communicate. When it was over, we learnt that, before beginning, two subjects had given the others a note calling on them not to exploit the resource and to fish only in the last period. As subjects were not allowed to communicate during the experiment and as some of them did not cooperate, we were able to observe how different individuals reacted when someone in their group actually did catch fish.

3.30
Here we give a sample of the behavioral patterns we observed in the experiment. For each year we report the percentage chosen by the selected subjects and the motivations underlying these choices. The code describing the related pattern behavior follows. In most cases the simulated behavior fits the actual behavior perfectly. In these cases, both rms and U are zero. Some comments are given in the other cases.

3.31
An examination of the groups reveals three different behavior configurations:
  1. homogeneous groups with committed players (groups B, D, E and F);
  2. groups of committed players with one single (unexpected) free rider (group C);
  3. groups with several free riders (group A).
Individuals' behavior, on the other hand, can be classified as follows:

1. Subjects A1, C1, C5, C6, C7 and C8: (Stubborn Committed player in a group with free-rider); Subjects A9, C2 and C4: (Unconscious Committed player in a group with free-rider); Groups B, D, E and F: (Committed player in a homogeneous group; 10 subjects in each group).

Year   % catch   Motivation                                                % simul.
0      0%        I let the fish population increase as much as possible    0%
1      0%        I let the fish population increase as much as possible    0%
2      0%        I let the fish population increase as much as possible    0%
3      0%        I let the fish population increase as much as possible    0%
4      100%      I get all the fish since this is the last period          100%

rms = 0   U = 0

Comments: These types of subjects, even when in different groups, exhibit the same behavior and give the same motivations. They cannot be distinguished unless the composition of the group they belong to is taken into account. One possible approach is to have the same computer code for all of them. Obviously, either designing other experiments to differentiate them or using their behavior as a common starting point for modeling more complex behaviors would also be a possibility. Another approach could be that of increasing the number of subjects in the experiment; this would enable the detection of the greatest number of possible situations in which different strategies show up.
 
case :	
  percentage = 0.0 %;	
  if (last_period) {	
    percentage = 100.0 %;	
  }	
break;

2. Subject A2: (Hard Adaptive Committed player in a group with free-rider)

Year   % catch   Motivation                                               Total %   % simul.
0      0%        I let the fish population increase as much as it can     34%       0%
1      100%      I get some fish                                          48%       100%
2      100%      I get some fish                                          55%       100%
3      100%      I get some fish                                          63%       100%
4      100%      I get all the fish since this is the last period         100%      100%

rms = 0   U = 0

Comments: The subject noticed that somebody was catching fish and reacted. This subject exhibits a reciprocity behavior in line with the Bolton and Ockenfels (2000) model.
 
case :	
  percentage = 0.0 %;	
  if (total_percentage > 0.0 %) {	
    percentage = 100.0 %;	
  }	
  if (last_period) {	
    percentage = 100.0 %;	
  }	
break;

3. Subject A6: (Soft Adaptive Behavior)

Year   % catch   Motivation                                               Total %   % simul.
0      0%        I let the fish population increase as much as it can     34%       0%
1      50%       I get some fish                                          48%       50%
2      50%       I get some fish                                          55%       50%
3      50%       I get some fish                                          63%       50%
4      100%      I get all the fish since this is the last period         100%      100%

rms = 0   U = 0

Comments: The motivation given by the subject is not self-explanatory. In this case, given the fact that the subject was in a group with several free riders, it is not unlikely that he/she reacted (softly) to the behavior of the other subjects.
 
case :	
  percentage = 0.0 %;	
  if (total_percentage > 0.0 %) {	
    percentage = 50.0 %;	
  }	
  if (last_period) {	
    percentage = 100.0 %;	
  }	
break;

4. Subject A7: (Soft Adaptive with Delayed Reply Committed agent in a group with free-rider)

Year   % catch   Motivation                                               Exp. Pop.   Act. Pop.   % simul.
0      0%        I let the fish population increase as much as it can     500         500         0%
1      0%        I let the fish population increase as much as it can     600         396         0%
2      50%       The fish population is decreasing so I get some fish     475.2       247.104     50%
3      50%       The fish population is decreasing so I get some fish     296.5248    133.4362    50%
4      100%      I get all the fish since this is the last period         160.1234    59.24566    100%

rms = 0   U = 0

Comments: Actually some individuals in this subject's group started to fish right from the first period. A possible solution is to implement their behavior by simply introducing a delay in the Soft Adaptive Behavior code and considering expected population.
 
case :
  if (first_period) {     /* the 'first_period' guard is our addition: the  */
    counter = 0;          /* counter must persist across years              */
  }
  percentage = 0.0 %;
  if (population < expected_population) {
    counter++;
    if (counter > threshold) {
      percentage = 50.0 %;
    };
  }
  if (last_period) {
    percentage = 100.0 %;
  }
break;

5. Subjects A4, A5 and A10: (Risk-averse)

Year   % catch   Motivation                                                           % simul.
0      100%      I do not know other players' strategies so I get as much as I can    100%
1      100%      I do not know other players' strategies so I get as much as I can    100%
2      100%      I do not know other players' strategies so I get as much as I can    100%
3      100%      I do not know other players' strategies so I get as much as I can    100%
4      100%      I get all the fish since this is the last period                     100%

rms = 0   U = 0

Comments: These agents did not commit and/or did not believe in the pre-game commitment. This is another example of behavior close to the model of selfish player presented in Fehr and Schmidt (1999).
 
case :	
  percentage=100.0%;	
break;

6. Subject C10: (Free-rider/Trigger)

Year   % catch   Motivation                                          % simul.
0      0%        To maximize last year fish population               0%
1      75%       Somebody is fishing so I fish as well               75%
2      75%       I keep on fishing                                   75%
3      75%       I keep on fishing                                   75%
4      100%      I get all the fish since this is the last period    100%

rms = 0   U = 0

Comments: Actually, nobody in this subject's group caught any fish at time 0. The motivation given by the subject does not reveal whether this misperception was deliberate. As a consequence, we have introduced a random error term. Finally, this behavior, even if exhibited by a single agent, is likely to trigger a fishing war in a population with adaptive agents. This is a further example of the relevance of heterogeneity in social phenomena.
 
case :	
  percentage = 0.0 %;	
  if (total_percentage + error > 0.0 %){	
    percentage = 75.0 %;	
  }	
  if (last_period) {	
    percentage = 100.0 %;	
  }
break;

7. Subject A3: ('Fixed Population Goal' heuristic with adaptation)

Year   % catch   Motivation                                                       Exp. Pop.   Act. Pop.   % simul.
0      30%       To keep fish population constant                                 500         500         2%
1      30%       Even if the population decreases I keep on with my strategy      500         396         2%
2      30%       Even if the population decreases I keep on with my strategy      500         247.104     2%
3      90%       I change strategy since the fish population decreases too much   500         133.4362    90%
4      100%      I get all the fish since this is the last period                 500         59.24566    100%

rms = 0.219469   U = 0.176028

Comments: This subject uses a personal heuristic; in a group of committed players this behavior can trigger a reaction.
 
case :
  if (first_period) {     /* the 'first_period' guard is our addition: the  */
    counter = 0;          /* counter must persist across years              */
  }
  percentage = (alpha-1.0) / (alpha*#_of_players);
  if (population < expected_population) {
    counter++;
    if (counter > threshold) {
      percentage = 90.0 %;
    };
  }
  if (last_period) {
    percentage = 100.0 %;
  }
break;

In this case, the subject miscalculates the catch percentage to keep the population constant. In the simulation we used the right value. This explains the differences between simulated and actual values. However, the patterns of the behaviors are close. This is confirmed by the relatively low value for Theil's inequality coefficient.


8. Subject A8: (Non Trivial Adaptive behavior)

Year   % catch   Motivation                                               Exp. Pop.   Act. Pop.   Total %   % simul.
0      10%       To get some fish without depleting the population        500         500         34%       1%
1      0%        The population decreased too much                        500         396         48%       0%
2      20%       People are still fishing so I have to get some fish      500         247.104     55%       48%
3      40%       I got too little fish, I need to increase my quantity    -           -           63%       55%
4      100%      I get all the fish since this is the last period         -           -           100%      100%

rms = 0.147853   U = 0.141399

Comments: This subject is similar to the previous one, even though in this case he/she shows some concern about not depleting the fish population too soon.
 
case :
  if (first_period) {     /* the 'first_period' guard is our addition: the  */
    counter = 0;          /* counter must persist across years              */
  }
  if (counter < threshold) {
    percentage = (alpha-1.0) / (alpha*#_of_players) * 0.5;
    if (population < expected_population) {
      counter++;
      percentage = 0.0 %;
    }
  }
  else {
    percentage = total_percentage;
  }
  if (last_period) {
    percentage = 100.0 %;
  }
break;

The same sort of miscalculation has affected this subject too. As a consequence, there are differences between simulated and actual values. In this case, too, the patterns of the behaviors are similar. This is confirmed by the relatively low value of Theil's inequality coefficient.


9. Subject C3: (Adaptive - with miscalculations - behavior)

Year   % catch   Motivation                                                   Exp. Pop.   Act. Pop.   % simul.
0      0%        Group committed to the no fishing strategy                   500         500         0%
1      0%        Group followed the strategy, so I keep on                    600         600         0%
2      0%        Even if there was some deviation I commit to the strategy    720         666         0%
3      2%        I deviate as much as my estimation of others' deviation      799.2       739.26      8%
4      100%      I get all the fish since this is the last period             887.112     752.271     100%

rms = 0.024597   U = 0.054846

Comments: This subject does not react immediately to somebody deviating from the commitment; subsequently, he/she tries to play a Tit-for-Tat strategy.
 
case :
  if (first_period) {     /* the 'first_period' guard is our addition: the  */
    counter = 0;          /* counter must persist across years              */
  }
  percentage = 0.0%;
  if (population < expected_population) {
    counter++;
    if (counter > threshold) {
      percentage = others'_percentage;
    };
  }
  if (last_period) {
    percentage = 100.0%;
  }
break;

In this subject we find another example of miscalculation. In this case its effects are smaller[6] than in the previous subjects, as can also be seen from the goodness-of-fit measures we have taken into account.


10. Subject C9: (Lack of Comprehension of the game)

Year   % catch   Motivation                               % simul.
0      0%        To increase the fish population          0%
1      0%        To increase the fish population          0%
2      0%        To increase the fish population          0%
3      75%       I catch 3/4 of my fish                   75%
4      25%       I catch the remaining 1/4 of my fish     25%

rms = 0   U = 0

Comments: This subject clearly did not understand the game. Nevertheless, it is important to realize that this sort of cognitive mistake can potentially trigger unwanted reactions in other subjects.
case :
  percentage = 0.0 %;
  if (next_to_last_period) {   /* flag name is ours: the year before the last one */
    percentage = some_value %;
  }
  if (last_period) {
    percentage = 100.0 % - sum_of_my_prev_percent;
  }
break;

* Comments about the results

4.1
It turns out that some behaviors are not dependent on the game being played (which was, to some extent, to be expected). This gives us a degree of objectivity for the patterns which emerged. For example, social learning and strategies such as "do-what-the-majority-do" and "do-what-the-successful-do" appear in some of our subjects' motivations. Laland (2001) reports examples of these heuristics even in experiments with animals and it is well known (Mellers et al. 2001) that imitation can be a successful heuristic with humans, especially regarding social dilemmas. Finally, the behaviors we observe are consistent with Macy (1998), where "the players rarely calculate the strategic consequences of alternative courses of action but simply look ahead by holding a mirror to the past". Our subjects are clearly not fully rational, yet most of them are in some way sophisticated. Moreover, although they show some learning features when updating their behavior, the process comes nowhere near the model of learning provided in Fudenberg and Levine (1998).

4.2
Before 1950 economists assumed that people were motivated by self-interest, and the term rationality was used only occasionally (Arrow 1986). In the same way, psychologists assumed that thinking could be understood without considering optimization processes. After 1950 the situation changed and the notion of rationality as optimization became entrenched in theory and research across many disciplines.

4.3
Only some of our subjects seem to optimize. Most of them seem to use heuristics or social norms. Furthermore, it is evident that none of them uses rationality in the sense of Stigler (1961): at most, while seeming aware of the main issue, they ignore the meta-level problem. By contrast, they adopt heuristics that may be more or less efficient. Gigerenzer and Selten (2001) compare the optimizing and the bounded rationality approaches in programming an artificial soccer player. Our approach is obviously closer to the latter. Nevertheless, an important distinction is in order: for our purposes the focus is not limited to finding the most efficient heuristic. It is, rather, on observing as many strategies as possible. The behaviors that emerged in the first experiment were used to implement some of the artificial agents' behavioral strategies used to study the emergence of corporate culture and personnel turnover in a model of organization (Dal Forno and Merlone 2002, 2004).

4.4
Finally, using the software platform described in Dal Forno and Merlone (2002), we performed some simulations of the prisoner's dilemma experiment. In the two series of experiments the results differed consistently; an initial difference was that in the first series the efforts exerted were consistently higher than in the second. Our explanation was the different composition of the observed behaviors in the two populations. Therefore, in our simulations we examine two different populations consisting of individual behaviors in proportions similar to those observed in the laboratory experiments.

4.5
As can be observed in Figures 8 and 9, when motivations were not taken into consideration, cluster analysis of the subjects' behavior did not provide significant insight for partitioning the subjects.

Fig 8
Figure 8. Clustering of subjects' behavior in the first series of experiments

Fig 9
Figure 9. Clustering of subjects' behavior in the second series of experiments
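The paper does not specify which clustering procedure was used; as a purely illustrative sketch (our code and our own choice of subjects), one could compute pairwise Euclidean distances between the subjects' effort series, one row per subject and one column per turn, and feed the resulting distance matrix to any standard clustering routine.

#include <math.h>
#include <stdio.h>

#define N_SUBJECTS 4   /* illustrative, not the actual sample size */
#define N_TURNS    8

/* Euclidean distance between two effort series */
double series_distance(const double *a, const double *b, int len)
{
    double sum = 0.0;
    for (int t = 0; t < len; t++)
        sum += (a[t] - b[t]) * (a[t] - b[t]);
    return sqrt(sum);
}

int main(void)
{
    /* actual effort series of four of the subjects discussed in Section 3.20 */
    double efforts[N_SUBJECTS][N_TURNS] = {
        {1.462, 0.921, 0.9, 0.92, 0.92, 0.92, 0.92, 0.92},   /* #17A */
        {1.5,   1.5,   1.5, 1.5,  1.4,  1.4,  1.4,  1.4 },   /* #49A */
        {1.5,   1.5,   1.5, 1.39, 1.07, 0.91, 0.9,  0.88},   /* #44A */
        {1.462, 1.462, 0.8, 1.1,  1.0,  0.8,  0.85, 0.85}    /* #43A */
    };
    for (int i = 0; i < N_SUBJECTS; i++)
        for (int j = i + 1; j < N_SUBJECTS; j++)
            printf("d(%d,%d) = %.3f\n", i, j,
                   series_distance(efforts[i], efforts[j], N_TURNS));
    return 0;
}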

4.6
Since no clustering was evident, we partitioned our subjects according to their given motivations. Seven classes of behavior (some of which have been analyzed thoroughly in Section 3) were identified in the first series of experiments and are presented here below.
  1. Random. Subjects in this class chose their effort randomizing over the range [0.01- 2.5].
  2. Nash. These subjects converged quickly to the Nash effort eN (Subject #17A).
  3. Shrinking. Subjects started with a high level of effort (very close to the cooperative effort eC) and then decreased it dramatically for different reasons. Some of them took into account the average effort, others also the winning or the modal effort (Subjects #26A, #43A and #44A).
  4. Low random. Subjects in this class chose their effort randomizing over the range [0.01- 0.92].
  5. High random. Subjects in this class chose their effort randomizing over the range [0.92- 2.0]. (Subject #25A).
  6. Fixed high effort. These subjects decided to exert a fixed high level of effort. (Subject #49A).
  7. Average effort. Subjects in this class played the previous period population average effort.

4.7
The platform we used allowed for the introduction of up to six different classes of agents; for this reason we chose to merge some classes. For instance, since subjects in classes 5 and 6 exhibited similar behaviors, those classes were merged. Finally, class 3 summarizes the behaviors with a progressive reduction of effort. The population composition is reported in Table 10.

Table 10: Population composition for first series in prisoner's dilemma experiment

%      Behavior
19%    Random
19%    Nash effort
20%    Shrinking effort
32%    Hi Random
5%     Low Random
5%     Average effort
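As an illustration, the following sketch (our code, with identifiers that are not those of the original platform) shows how an artificial population of 55 agents could be assembled according to the proportions of Table 10, converting each percentage into a number of agents.

#include <stdio.h>

enum behavior { RANDOM, NASH, SHRINKING, HI_RANDOM, LOW_RANDOM, AVG_EFFORT, N_BEHAVIORS };

int main(void)
{
    const char  *names[N_BEHAVIORS] = {"Random", "Nash effort", "Shrinking effort",
                                       "Hi Random", "Low Random", "Average effort"};
    const double share[N_BEHAVIORS] = {0.19, 0.19, 0.20, 0.32, 0.05, 0.05};
    const int population_size = 55;                /* size of a first-series session */

    for (int b = 0; b < N_BEHAVIORS; b++) {
        int n_agents = (int)(share[b] * population_size + 0.5);   /* rounded */
        printf("%-17s %2d agents\n", names[b], n_agents);
    }
    return 0;
}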

4.8
In the second series of the experiment we noticed some differences. The first regards the classes of behavior: classes 5 (High random effort) and 6 (Fixed high effort) are completely absent, while a new class (Fixed low effort) occurs. The second difference is the effort level: in this series the majority of the population consisted of Shrinking, Low Random and Fixed Low subjects. The population composition is reported in Table 11.

Table 11: Population composition for second series in prisoner's dilemma experiment

%      Behavior
16%    Nash effort
5%     Average effort
37%    Low Random
10%    Fixed low effort = 0.01
16%    Random
16%    Shrinking effort

4.9
For each simulation we compare the average effort of the artificial population to the actual average effort observed in the lab. For the artificial population we report, for each turn, the mean and the standard deviation over ten independent replications. The results are reported in Tables 12 and 13.

Table 12: Simulation results for first series in prisoner's dilemma experiment

Turn              1      2      3      4      5      6      7      8
Actual Avg. Eff.  1.354  1.395  1.200  0.807  0.931  1.010  0.940  0.853
Simul. Avg. Eff.  1.332  1.169  1.098  1.033  0.962  0.936  0.915  0.930
Std. Dev.         0.07   0.06   0.1    0.08   0.13   0.07   0.08   0.08

rms = 0.134   U = 0.059


Table 13: Simulation results for second series in prisoner's dilemma experiment

Turn              1      2      3      4      5      6      7
Actual Avg. Eff.  0.709  0.609  0.634  0.565  0.657  0.639  0.690
Simul. Avg. Eff.  1.193  1.010  0.923  0.863  0.779  0.725  0.785
Std. Dev.         0.04   0.09   0.06   0.07   0.04   0.04   0.07

rms = 0.29   U = 0.19

The simulated behavior is quite close to the actual behavior and, at least in these examples, it is possible to observe the consequences of different compositions of populations.

* Conclusions

5.1
In this paper we show how experiments can be used to model artificial agents. While experiments are usually used to test single theoretical models (and do not allow for the disaggregation of different models of behavior), our approach employs subjects' heterogeneity to model different schemata of behavior.

5.2
Behaviors can be modeled by just looking at the data. Our approach, however, emphasizes how an examination of the motivations given by subjects can be extremely important. For example, it is possible to derive ideas about variables and other aspects of the problem that neither theoretical approaches nor reliance on the crude data alone would afford (subjects #17A and #13B). Furthermore, the motivations given by subjects and their actual behaviors must be closely compared in order to identify contradictions and inconsistencies.

5.3
Our approach to modeling artificial agents relies on the observation of real behavior in human subjects undergoing the experiment. To this end the subjects in question were presented with a problem very similar to the focus of our study. During and after the experiment, we asked the subjects to provide explanations of their behavior. Both the behaviors and their motivations were collected and analyzed. In both experiments the analysis of behavior was carried out very carefully, as this enabled both a better understanding of the behavior rationale and the identification of any potential incoherence of the subjects. The next step was partitioning the behaviors into broad classes, and the last was inferring stylized behaviors to be modeled in artificial agents.

5.4
To check the goodness of fit of inferred behaviors with real ones, a quantitative analysis is required. This can also be useful in detecting further minor inconsistencies in the behaviors and motivations provided by subjects.

5.5
Finally, through simulation the experimental results can, to a certain extent, be further developed. A well-documented feature in repeated experiments is, for instance, the boredom of subjects (see Duffy 2001). Artificial agent simulations provide, in part, a way of overcoming this problem. Once the human subject behavior has been coded, any number of simulations can be run in a way that would be impossible with human subjects. In addition, with artificial agents, the stability of results can be studied and sufficient conditions (in the sense of McCain 2000) obtained for experiment extensions such as changing the composition of the population. Finally, modeling artificial agents also enables us to study situations where the less successful subjects are far more unlikely to return for more sessions (a well known feature in auctions experiments, see Garvin and Kagel 1994).

5.6
There are, however, some limitations to this approach; for example, while human subjects may inadvertently change their behavior, any such change cannot be replicated by single artificial agents, unless taken into account and coded accordingly, beforehand. Drawbacks of this nature must be borne in mind, as several hefty assumptions are necessarily implicit when extending the number of simulations or population compositions.

5.7
Nevertheless, the approach and exemplifications we propose can be extremely useful as they provide a means for modeling the behavior of artificial agents which takes into account aspects that would otherwise be quite easily overlooked. In addition, with this approach, instead of simply modeling heterogeneity abstractly, it can be observed and explored 'in the field'.

5.8
In conclusion, the approach we propose can also be reiterated and refined. In our discussion of behaviors (e.g. the first in the harvesting dilemma), we mentioned that different behavioral rules may lead to the same observed patterns of behavior in certain situations. This should not be interpreted as a limit of the inductive approach but simply as a starting point from which to focus on particular aspects that otherwise may be neglected. In circumstances such as these a possible approach may be that of devising further experiments carefully designed to disaggregate the apparently identical behaviors. Another approach could be that of extending the number of sessions and increasing the number of subjects involved in the experiment so as to better observe the differences in strategies. In our opinion, our approach is a first step in integrating both experimental and simulative literature and in shedding light on the different aspects of individual interactions.

* Acknowledgements

We wish to thank Fabrizio Bogli and Luigi Bollani for their helpful suggestions and the participants of ESSA 2003 for comments on an earlier version of this paper. Furthermore we are grateful to the anonymous referees whose comments helped to improve the quality of the paper. The usual caveats apply.

* Notes

1 The first task requires a higher degree of abstraction and, in a certain sense, is a meta-task.

2 The ranking of performance may have consequences on strategies. This may be relevant in the first experiment we examine (see Dal Forno and Merlone 2001), even though subjects did not seem aware of this.

3 Of course, any subject could have made clear their commitments but, given the number of subjects in each experiment, this was unlikely to be relevant. Furthermore, some subjects mentioned they failed to make a credible commitment.

4 We are well aware that asking the subject to motivate their choices after each session could have resulted in different motivations. Both the experimental examples we report here are just examples of the application of our approach.

5 They wrote and circulated some sheets referring to the ‘Tragedy of the Commons’ and strongly suggested a proposal of commitment.

6 Miscalculation occurs only in the last but one year and the error is lower than in previous cases.


* References

ARROW K. J. (1986). "Rationality of the Self and Others in an Economic System". Journal of Business, 59, pp. S385-S399.

AXELROD R. (1980a). "Effective Choice in the Iterated Prisoner's Dilemma". Journal of Conflict Resolution. 24:3-25.

AXELROD R. (1980b). "More Effective Choice in the Prisoner's Dilemma". Journal of Conflict Resolution. 24:379-403.

AXELROD R. (1984). The Evolution of Cooperation. New York: Basic Books.

BANDURA A. (1999). "Social learning". In: The Blackwell Encyclopedia of Social Psychology. Edited by Manstead A.S.R., Hewstone M. Blackwell Publishers. Oxford UK.

BOERO R. (2003). "Inferire comportamenti economici da dati sperimentali: un nuovo approccio alla microfondazione dei modelli basati su agenti" [Inferring economic behaviors from experimental data: a new approach to the micro-foundation of agent-based models]. Draft.

BOLTON G.E., Ockenfels A., (2000). "ERC: A Theory of Equity, Reciprocity, and Competition". The American Economic Review. 90(1), pp. 166-193.

BURLANDO R.B., Guala F., (2003). "Overcontribution and Decay in Public Goods Experiments: a Test of the Heterogeneous Agents Hypothesis". Working paper.

DAL FORNO A., Merlone U., (2001), "Incentive Policy and Optimal Effort: Equilibria in Heterogeneous Agents Populations", Quaderni del Dipartimento di Statistica e Matematica Applicata, n.10.

DAL FORNO A., Merlone U. (2002). "A Multi-agent Simulation Platform for Modeling Perfectly Rational and Bounded-rational Agents in Organizations". Journal of Artificial Societies and Social Simulation, Vol. 5, no. 2. https://www.jasss.org/5/2/3.html

DAL FORNO A., Merlone U. (2004). "Personnel Turnover in Organizations: an Agent-Based Simulation Model". Nonlinear Dynamics, Psychology & Life Sciences, 8(2), pp. 205-230

DAWES R.J., McTavish J. and Shaklee H. (1977). "Behavior, Communication and Assumptions about other People's Behavior in a Commons Dilemma Situation". Journal of Personality and Social Psychology, 35(1), pp. 1-11.

DUFFY J. (2001). "Learning to Speculate: Experiments with Artificial and Real Agents". Journal of Economic Dynamics and Control, 25, 295-319.

DUFFY J., Engle-Warnick J., (2002). "Using Symbolic Regression to Infer Strategies from Experimental Data" in S-H. Chen eds., Evolutionary Computation in Economics and Finance, Physica-Verlag. New York

FEHR E., Schmidt K.M., (1999). "A Theory of Fairness, Competition, and Cooperation". The Quarterly Journal of Economics, 114 (3), pp. 817-868.

FUDENBERG D., Levine D.K., (1998). The Theory of Learning in Games. The MIT Press. Cambridge, Mass.

GARVIN S., Kagel J.H., (1994). "Learning in Common Value Auctions: some Initial Observations". Journal of Economic Behavior and Organization, 25, pp. 351-372.

GIGERENZER G., Selten R., (2001). "Rethinking Rationality". Report of the 84th Dahlem Workshop on Bounded Rationality: The Adaptive Toolbox. Berlin, March 1999. Edited by G. Gigerenzer and R. Selten. The MIT Press. Cambridge MA.

LALAND K.L., (2001). "Imitation, Social Learning and Preparedness as Mechanisms of Bounded Rationality". Report of the 84th Dahlem Workshop on Bounded Rationality: The Adaptive Toolbox. Berlin, March 1999. Edited by G. Gigerenzer and R. Selten. The MIT Press. Cambridge MA.

LEDYARD J.O., (1995). "Public Goods: a Survey of Experimental Research". In: The Handbook of Experimental Economics. Kagel J.H., Roth A.E. (Ed.). Princeton University Press. Princeton, NJ.

MACY M.W., (1998). "Social Order in Artificial Worlds". Journal of Artificial Societies and Social Simulation, 1, no. 1, https://www.jasss.org/JASSS/1/1/4.html

MARWELL G., Ames R., (1979). "Experiments on the Provision of Public Goods I: Resources, Interest, Group size, and the Free-rider Problem". American Journal of Sociology, 84 (6), pp.1335-60.

MCCAIN R.A., (2000). Agent-based Computer Simulation of Dichotomous Economic Growth. Kluwer Academic Publisher.

MELLERS B.A., Erev I., Fessler D.M.T., Hemelrijk C.K., Hertwig R., Laland K.N., Scherer K.R., Seeley T.D., Selten R., Tetlock P. E.. (2001). "Group Report: Effects of Emotions and Social Processes on Bounded Rationality". Report of the 84th Dahlem Workshop on Bounded Rationality: The Adaptive Toolbox. Berlin, March 1999. Edited by G. Gigerenzer and R. Selten. The MIT Press. Cambridge MA.

PINDYCK R.S., Rubinfeld D.L., (1991). Econometric Models & Economic Forecast. Third Edition. McGraw-Hill, Inc.

PRUITT D.G., (1999). "Experimental Games". In: The Blackwell Encyclopedia of Social Psychology. Edited by Manstead A.S.R., Hewstone M. Blackwell Publishers. Oxford UK.

RABIN M., (1993). "Incorporating Fairness into Game Theory and Economics". The American Economic Review, 83(5), pp.1281-1302.

ROTH A.E., (1995a). "Introduction to Experimental Economics". In: The Handbook of Experimental Economics. Kagel J.H., Roth A.E. (Ed.). Princeton University Press. Princeton, NJ.

ROTH A.E., (1995b). "Bargaining Experiments". In: The Handbook of Experimental Economics. Kagel J.H., Roth A.E. (Ed.). Princeton University Press. Princeton, NJ.

SMITH V. L., (1979a). "An Experimental Comparison of Three Public Good Decision Mechanisms". Scandinavian Journal of Economics, 81(1), pp.198-215.

SMITH V. L., (1979b). Research in Experimental Economics, Vol.1. Greenwich, Conn.: JAI Press.

STERMAN J. D. (2000). Business Dynamics: Systems Thinking and Modeling for a Complex World. Boston, MA: Irwin McGraw-Hill.

STIGLER G.J., (1961). "The Economics of Information". Journal of Political Economy, 69, pp. 213-225.
