©Copyright JASSS


Riccardo Boero and Flaminio Squazzoni (2005)

Does Empirical Embeddedness Matter? Methodological Issues on Agent-Based Models for Analytical Social Science

Journal of Artificial Societies and Social Simulation vol. 8, no. 4
<https://www.jasss.org/8/4/6.html>


Received: 02-Oct-2005    Accepted: 02-Oct-2005    Published: 31-Oct-2005



* Abstract

The paper deals with the use of empirical data in social science agent-based models. Agent-based models are too often viewed merely as highly abstract thought experiments conducted in artificial worlds, whose purpose is to generate, rather than empirically test, theoretical hypotheses. On the contrary, they should be viewed as models that need to be embedded in empirical data, both to calibrate them and to validate their findings. As a consequence, the search for strategies to find and extract data from reality, and to integrate agent-based models with other traditional empirical social science methods, such as qualitative, quantitative, experimental and participatory methods, becomes a fundamental step of the modelling process. The paper argues that the characteristics of the empirical target matter. According to these characteristics, ABMs can be differentiated into case-based models, typifications and theoretical abstractions. These differences pose different challenges for empirical data gathering and imply the use of different validation strategies.

Keywords:
Agent-Based Models, Empirical Calibration and Validation, Taxonomy of Models

* Introduction

1.1
The paper deals with the question of the empirical validation of agent-based models (ABMs) from a methodological point of view. Why do computational social scientists need to take the use of empirical data more seriously? What kinds of empirical data are needed? What strategies are available for extracting empirical data from reality? Are all models of the same type? Do differences in the modelling target matter for empirical calibration and validation strategies? These are the questions the paper aims to address.

1.2
Our starting point is the widespread belief that ABMs are just highly abstract "thought experiments" conducted in artificial worlds, whose purpose is to generate, but not to empirically test, theoretical hypotheses (Prietula, Carley and Gasser 1998). ABMs are often tacitly viewed as a new branch of the experimental sciences, in which the computer is conceived as a laboratory that compensates for the unavoidable weakness of empirical and experimental knowledge in social science. This belief often implies a self-referential, purely theoretical usage of these models.

1.3
Of course, such an attitude is not restricted to computational social scientists. The weakness of the link between empirical reality, modelling and theory is nothing new in social science (Merton 1949; Hedström and Swedberg 1998). In social science, theories teem, theoretical debates are lively, and more or less grand theories emerge once in a while, sometimes to be left aside and sometimes to suddenly re-emerge. By contrast, empirical tests of theories are often lacking altogether, and the coherence of a theory with directly observable evidence does not seem to be one of the main imperatives of social scientists (Moss and Edmonds 2004). Furthermore, broadly speaking, the need to relate theories and empirical evidence through formalised models is not perceived as a focal point in social science.

1.4
The ABM literature has recently begun to recognise the importance of these issues. The debates on applied evolutionary economics, social simulation and history-friendly models are examples of this, to remain within the social science domain (Pyka and Ahrweiler 2004; Eliasson and Taymaz 2000; Brenner and Murmann 2003). In the ecological sciences, biology and social insect studies, the question of the empirical validation of models has already been under discussion for many years (see, for example, Carlson et al. 2002; Jackson, Holcombe and Ratnieks 2004).

1.5
In any case, within the computational social science community, most progress has been made on the question of the internal verification of ABMs, model-to-model alignment or docking methods, replication, and so forth (see Axtell, Axelrod, Epstein and Cohen 1996; Axelrod 1998; Burton 1998; Edmonds and Hales 2003). Less attention has been devoted to the question of empirical calibration and validation and to the empirical extension of models.

1.6
The situation sketched above implies that ABMs are often conceived as a kind of self-referential, autonomous method of doing science, a new promise, something completely different, while little attention has been paid to the need to integrate ABMs (and simulation models in general) with methods for inferring data from empirical reality, such as qualitative, quantitative, experimental and participatory methods.

1.7
The first argument of the paper is that if empirical knowledge is a fundamental component of sound and interesting theoretical models, then model makers cannot use empirical data merely as a loose and indirect reference when modelling social phenomena. On the contrary, empirical knowledge needs to be appropriately embedded into modelling practices through specific strategies and methods.

1.8
The second argument is that speaking about the empirical validation of ABMs means taking into account problems of both model construction and model validation. The link between empirical data, model construction and validation needs to be conceived and practised as a circular process whose overall goal is not merely to validate simulation results, but to empirically test the theoretical mechanisms behind the model. Empirical data are needed both to build sound micro specifications of the model and to validate the macro results of the simulation. Models should be both empirically calibrated and empirically validated. This is why we often enlarge our analysis to the broader question of the use of empirical data in ABMs, rather than the narrower question of empirical validation alone.

1.9
We are aware that social scientists often have to deal with missing, incomplete or inconsistent empirical data and, at the same time, that theory is the most important added value of the scientific process. But our point is that models are theoretical constructs that need to be embedded as much as possible in empirical analysis if they are to have real analytical value.

1.10
The third argument is that there are different types of empirical data a model maker may need, and multiple possible strategies for extracting them from reality. We use the term "strategies" because no single method for empirically calibrating and validating ABMs exists yet. In this regard, ABMs should be fruitfully integrated, through a sort of creative bricolage, with other methods, such as qualitative, quantitative, experimental and participatory methods. There are some first examples of such creative bricolage in the ABM literature (see the empirical model of the Anasazi, the water demand model and the Fearlus model described in the fourth section). They should be used as "best practices" to improve our methodological knowledge about empirical validation.

1.11
The last argument is that the features of the model target definitely matter. We suggest a taxonomy according to which ABMs are differentiated into "case-based models", "typifications" and "theoretical abstractions". The difference lies in the target of the model, and it has a strong effect on strategies for finding empirical data.

1.12
It is worth saying that the subject of this paper would require taking broad epistemological issues seriously into account: for example, the relation between theories, models and reality, and the differences between description and explanation, deduction and induction, explanation and prediction, and so forth (see also Troitzsch 2004). We are firmly convinced that the innovative epistemological import of ABMs for the social sciences is far from being fully understood and systematised. Computational social science is still in its infancy, and computational social scientists are proceeding as if they were craftsmen of a new method. That is to say, computational social science has not yet reached the age of standards. We are in an innovation phase, not an exploitation one. This is why our argument, rather than focussing consciously on epistemological issues, stays as close as possible to methodological ones, although we are aware that such issues are in some sense also epistemological. To clarify some of these issues, we summarise our point of view in this introduction.

1.13
First of all, there are different kinds of ABMs in social science with regard to the goals they aim to reach. ABMs can be used for prediction, scenario analysis and forecasting, for entertainment or educational purposes, or to substitute for human capabilities (Gilbert and Troitzsch 1999). Here, we consider only the use of models to explain empirical social phenomena by means of a micro-macro causal theory.

1.14
We fully agree with the so-called "analytical sociology" approach: the goal of a social scientist is to explain an empirical phenomenon by referring to a set of entities and processes (agents, action and interaction) that are spatially and temporally organised in such a way that they regularly bring about the type of phenomenon the social scientist seeks to explain (Hedström and Swedberg 1998; Barbera 2004)[1]. Without going into detail, it is worth outlining here that explanation via social mechanisms differs both from common knowledge and descriptions, and from statistical and covering-law explanations. Mechanism explanation refers neither to general and deterministic causal laws, nor to statistical relationships among variables. It does not make use of explanations in terms of a linear cause-effect principle, according to which causality holds between a prior event (cause) and a consequent event (effect). Generally, in social science, causes and effects are not viewed just as events, but as attributes of agents or aggregates, which can also be viewed as non-events or as not directly observable (Mahoney 2001). A mechanism-based causal explanation refers to a social macro regularity as the target to be explained, action and interaction among agents as the objects to be modelled, and a causal micro-macro generative mechanism as a theoretical construct to be isolated and investigated, so that, under specific conditions repeatedly found in reality, such a construct can explain the target (Goldthorpe 2000; Barbera 2004). As Elster argues, mechanism-based explanation rests on a finer-grained logic than black-box explanations of the form "if A [the mechanism], then sometimes B, C, and D [social outcomes]". This is because of the role of specific empirical conditions and the possibility that mechanisms can be paired and mutually exclusive, as well as operating simultaneously with opposite effects on the target, as in Le Grand's example of the impact of taxes on the supply of labour reported by Elster (1998). According to the "Coleman boat" (Coleman 1990), a typology of mechanism-based explanation ideally includes the interlacing of situational mechanisms (macro-micro), action-formation mechanisms (micro-micro) and transformational mechanisms (micro-macro). Models need to be viewed as generative tools, because they allow the formalisation of a representation of the micro-macro mechanisms responsible for bringing social outcomes about (Hedström and Swedberg 1998; Barbera 2004).

1.15
ABMs imply the use of the computer to formalise social science generative models of this kind (Squazzoni and Boero 2005). In this respect, the role of formalisation is important, because it is often the only way to study the emergent properties that are thought to be the most important aspects of social reality. We argue that ABMs are tools for translating, formalising and testing theoretical hypotheses about empirical reality in a mechanism-based style of analysis. As is well known, ABMs have a fundamental property that makes a difference here: they allow aspects and mechanisms to be taken into account that other methods cannot capture. They allow the "complexification" of models from a theoretical and empirical point of view (Casti 1994). Such aspects and mechanisms include heterogeneity and adaptation at the micro level, non-linear relations, complicated interaction structures, emergent properties, and micro-macro links.

1.16
To sum up the structure of the paper and its main arguments: section 2 focuses on empirical data and strategies to collect them. We argue that empirical data serve both the specification-calibration of model components and the validation of simulation results. The output of this process is intended to test the explanatory theoretical mechanism behind the model.

1.17
Section 3 presents a taxonomy of models that can be useful for tackling the questions of both empirical data and theory generalisation. Models are differentiated into "case-based models" (the target is a specific empirical phenomenon with a circumscribed space-time nature, and the model is an ad hoc construct), "typifications" (the target is a specific class of empirical phenomena that share some idealised properties), and "theoretical abstractions" (the target is a wide range of general phenomena with no direct reference to reality). Differences in the target imply different empirical challenges and different possible validation strategies. These types are not discrete but belong to an ideal continuum. This allows us to draw some reflections on the problem of how the theoretical results of a model can be tested and generalised.

1.18
Section 4 brings the taxonomy to bear on strategies for finding empirical data. What empirical data are needed, and which validation strategies are available, given the specificity of the modelling target? We emphasise some available "best practices" in the field.

1.19
Finally, section 5 draws some conclusions on the entire set of issues the paper has dealt with.

* The Use of Empirical Data: Strategies for Model Calibration and Validation

2.1
Empirical validation is distinct from other important modelling processes, which are concerned instead with the internal verification of the model.

2.2
By internal verification we mean the analysis of the internal correctness of the computer implementation of the theoretical model (Manson 2003) and model alignment or docking, which compares the same model translated into different platforms (Axelrod 1998). Internal verification focuses on the theory and its implementation as a computer-programmed model.

2.3
By contrast, from a micro-macro analytical perspective, the use of empirical data implies different methods for establishing fruitful relations between the model and the data. Empirical data can be used for two purposes: to specify and calibrate model components at the micro level, and to validate simulation results at the macro level.

2.4
By specification and calibration of model components, we mean the use of empirical data to choose the appropriate set of model components and their respective values, as well as the appropriate level of detail of the micro foundations to be included in the model. By empirical validation of simulation results, we mean the use of empirical data to test the artificial data produced by the simulation, through intensive analysis and comparison with data on empirical reality.

2.5
To clarify the point, let us suppose that a model maker needs to explain kr, a phenomenon or macro behaviour. As argued before, the model maker aims to translate kr into an ABM M because kr is perceived as a complex phenomenon that can be understood neither directly nor with other kinds of models. The model maker translates kr into a theoretical system T and then into an ABM M, assuming some premises, definitions and logical sentences, mostly influenced by the empirical and theoretical knowledge already available (Werker and Brenner 2004). Let us suppose that A, B, C, …, are all the possible model components that ideally allow the model maker to translate T into the model M in an appropriate way. They are, for example, the number and type of agents, the rules of behaviour, the type of interaction structure, the structure of information flow, and so on. Let us suppose that A1, A2, A3, …, B1, B2, B3, …, C1, C2, C3, …, are all the possible features of the model components. The model maker is ideally called upon to choose the right components and to select their right features for inclusion in the model, because they are potential sources of the generation of the macro behaviour kr.

2.6
Empirically specifying the model components means using empirical evidence to choose the appropriate components, for instance A+C+D+N. Empirically calibrating the model components means using empirical evidence to select the features of those components, for instance A2+C1+D3+N5.

2.7
Now, let us suppose that the empirically specified and calibrated model produces ka as the simulation result. By empirical validation, we mean an intensive analysis and comparison between ka (the artificial data) and kr (the real macro behaviour). If A2+C1+D3+N5 generate ka, and ka is closely comparable with kr, it follows that A2+C1+D3+N5 can be considered a causal mechanism sufficient to generate kr (Epstein and Axtell 1996; Epstein 1999; Axtell 1999).
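
To make the cycle concrete, the following is a minimal Python sketch of the specify-calibrate-simulate-validate loop described above. The component names, the toy dynamics and the distance measure are our own illustrative assumptions, not part of any actual model:

    # Illustrative sketch of the specify-calibrate-validate cycle.
    # Component names (A, C, D, N), the toy dynamics and the numbers
    # are invented for illustration.
    import random
    from statistics import mean

    def run_model(features, steps=100):
        """Toy ABM: returns an artificial macro series k_a generated
        from a micro specification (a dict of component features)."""
        state = features["A"]                      # e.g. initial macro state
        series = []
        for _ in range(steps):
            shock = features["C"] * random.uniform(-1, 1)
            drift = features["D"] + features["N"]  # N is the unknown component
            state += shock + drift
            series.append(state)
        return series

    def distance(k_a, k_r):
        """Crude macro-level validation: mean absolute deviation."""
        return mean(abs(a - r) for a, r in zip(k_a, k_r))

    k_r = [10 + 0.4 * t for t in range(100)]       # stand-in for real macro data
    calibrated = {"A": 10.0, "C": 0.5, "D": 0.3}   # empirically calibrated (A2+C1+D3)

    for n_feature in [0.0, 0.1, 0.5]:              # candidate features of N (N5, N2, ...)
        spec = dict(calibrated, N=n_feature)
        k_a = run_model(spec)
        print(n_feature, round(distance(k_a, k_r), 2))

The sweep over candidate features of N mirrors the situation discussed below, in which one component lacks empirical data and its plausible features themselves become objects of investigation.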

2.8
Three points call for our attention: possible gaps in the available empirical data, the sequential order from specification-calibration to validation, and the conditions under which a causal mechanism can be considered a valid theoretical statement.

2.9
As is well known, obtaining empirical data to fully specify and calibrate all the model components and their features is no easy task. In the worst-case scenario, the model maker is forced simply to formulate hypotheses about them. In the best-case scenario, the model maker can find empirical data about some model components and some of their features, but not about other components and features that are just as relevant.

2.10
Returning to the example above, let us suppose that the model maker has access to empirical data about the specification of the A+C+D model components but not about N, and thus only about the A2+C1+D3 component features. The consequence is that the model maker will introduce plausible model components and features, which may then become important objects of investigation within the model. For instance, the model maker will test different features of N (N5, N2, and so on) and their effects on the other components in generating ka. That is to say, empirical data can usually be found on structural components (number of agents, types of agents, and so on) and on the macro behaviour, but not on rules of behaviour and interaction structures. The effect of these two components is often the real object of theoretical investigation.

2.11
The second point concerns the sequential order from specification-calibration to validation. It is natural to approach the order the opposite way, from a top-down perspective. Returning to the example, it is natural to use the simulation model to take advantage of a prior selection of model components (A+C+D+N) and features (A2+C1+D3+N5), and to study their generative power with respect to the macro behaviour ka. In the best-case scenario, once good generative power and a good macro validation are found, the specification-calibration step is carried out as an empirical test of the micro foundations. It is worth saying that the argument for such an empirical test of micro foundations does not have many supporters. The widespread approach holds that once a macro empirical validation is found (a good fit between ka and kr), the micro foundations can be considered validated even if they are not empirically based (Epstein 1999; Axtell 1999).

2.12
Here we come to a focal point. We argued before that if A2+C1+D3+N5 (being empirically based) generate ka, and if ka is closely comparable with kr, it follows that A2+C1+D3+N5 can be considered a causal mechanism able to generate kr. But now let us suppose that the model maker lacks empirical data for the micro specification. The consequence is that it is always possible to find that not only A2+C1+D3+N5 but also other combinations of components and features, for instance A1+R2+H3+L5, can generate the same ka the model maker is trying to understand.

2.13
Put in other words, the point is the following: given that a potentially infinite number of micro specifications (and, consequently, of possible explanations!) can be found capable of generating a ka close to the kr of interest, what else, if not empirical data and knowledge about the micro level, is indispensable for understanding which causal mechanism lies behind the phenomenon of interest?

2.14
To sum up our argument so far, we stress that empirical data are a fundamental ingredient in supporting mechanism-based theoretical explanations and that they have a twofold input-output function: supporting model building and extracting sound theoretical outcomes from the model.

2.15
Evaluating the explanatory theoretical mechanism behind the model is the general intended output of the validation process: it means using empirical evidence to support the heuristic value of the theory in understanding the phenomenon that is the modelling target. To achieve this, model construction and model validation, rather than being considered different stages of scientific knowledge development, need to be considered a single process with strong mutual influences. In the middle of this input-output process lies the mechanism-based theoretical model, which is the overall goal of the process itself. This is why we do not separate the question of the empirical basis of model construction from that of validation.

2.16
But what kinds of empirical data are useful for obtaining an empirically based model and a validated ABM? Depending on the aim of the model maker, common approaches to collecting empirical data can be used. The need of ABMs for empirical data does not require the development of ad hoc collection strategies. On the contrary, it requires the effective exploitation of available techniques, considering the evolution of the field and the fact that different approaches suit the different kinds of issues to be measured, as in the case of sociometric tools for understanding the structure of social networks. Instead of surveying the available techniques, we focus on direct and indirect strategies for gathering empirical data.

2.17
By direct strategies, we mean strategies for collecting first-hand empirical data directly from the target. This can be done with different tools, or with a mix of them:
  1. experimental methods (experiments with real agents, or mixed experiments with real and artificial agents);
  2. the stakeholder approach (direct involvement of stakeholders as knowledge bearers) (Moss 1998; Bousquet et al. 2003; Edmonds and Moss 2005);
  3. qualitative methods (interviews or questionnaires with the agents involved, archival data, empirical case studies);
  4. quantitative methods (statistical surveys of the target).
In this respect, a quite intensive debate about experimental and stakeholder approaches to empirical data is in progress.

2.18
Experimental methods are particularly useful when environmental data are already available. Experimental data differ from field data in that the former are collected in a laboratory, that is, in a controlled and fixed environment. To conduct an experiment and obtain useful data, the researcher must already know which environmental settings to choose for the experiment (settings which will generally also be used in the ABM). When enough data about the environment are available, it is thus possible to design an experiment capable of mimicking that environment, and then to focus on other issues of interest, such as the interaction among subjects or their behaviour, and to collect data about them (for a survey of the links between experiments and ABMs in economics, see Duffy 2004; for an example of a technique for gathering behavioural data in experiments, see Dal Forno and Merlone 2004).

2.19
The stakeholder approach is a participatory method of gathering empirical data. It is based on the idea of setting up a dense cross-fertilisation in which the theoretical knowledge of model makers and the empirical knowledge of the agents involved enrich each other directly on the ground. It is often used in the so-called "action research" approach and in the literature on evaluation. The principle of "action research" is that knowledge relevant to model building and validation can be generated by an intensive dialogue between planners, practitioners and stakeholders, all involved in the analysis of specific problems in specific areas (Pawson and Tilley 1997; Stame 2004). Unlike the previous case, environmental data here are not already available. Rather, they are the target of the action and the output of a multidisciplinary dialogue (Moss 1998; Barreteau, Bousquet and Attonaty 2001; Etienne, Le Page and Cohen 2003; Bousquet et al. 2003; Moss and Edmonds 2005). The direct involvement of stakeholders allows the model maker to exploit the agents involved as knowledge holders and bearers, who can bring relevant empirical knowledge about agents, rules of behaviour and the target domain into the model, and to reduce information asymmetries and the risk of theoretical biases. An example of such a strategy is given by Moss and Edmonds (2005) in the water demand model described in the fourth section.

2.20
By indirect strategies, we mean strategies that exploit second-hand empirical data, that is, empirical analyses and evidence already available in the field. As in the previous case, these data may have been produced through different methods (e.g., statistical surveys or qualitative case studies). Second-hand data are used in all those cases in which it is impossible to obtain direct data, when it is possible to exploit institutions or agencies specialised in data production, or when the model maker is constrained by budget or time. As is well known, it is often difficult to find second-hand empirical data that are genuinely useful and complete enough for creating and testing an ABM.

2.21
The data to be used are both quantitative and qualitative in nature; that is to say, they can be "hard" or "soft". The former allow the parameterisation of variables such as the number of agents, the size of the system, the features of the environment, and the dimensions and characteristics of the interaction structure. They refer to everything that can be quantified in the model. The latter allow the introduction of realistic rules of behaviour or cognitive aspects at the micro level of individual action. They refer to everything that cannot always be quantified, but can be expressed in a logical language.

2.22
Data should cover the entire parameter space of the model. In ABMs, it is natural to consider qualitative aspects, such as rules of behaviour, as parameters themselves, making "parameter set" a synonym of "model micro specification". Moreover, it is worth clarifying that when one speaks about the quantitative data of a model, one often simply means a numerical expression of the qualitative aspects of a given phenomenon.
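
In code, this means that a behavioural rule can sit in the parameter set alongside numerical values. A hedged Python sketch, with all names invented for illustration:

    # Illustrative only: a "parameter set" mixing hard (quantitative) and
    # soft (qualitative) elements, with a rule of behaviour as a parameter.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        strategy: str
        payoff: float

    def imitate_best_neighbour(agent, neighbours):
        """Soft parameter: a rule of behaviour expressed in logic, not numbers."""
        best = max(neighbours, key=lambda n: n.payoff, default=agent)
        return best.strategy

    micro_specification = {
        "n_agents": 500,                           # hard: from statistical surveys
        "network_degree": 8,                       # hard: e.g. from sociometric tools
        "behaviour_rule": imitate_best_neighbour,  # soft: from interviews, case studies
    }

    a = Agent("cooperate", 1.0)
    print(micro_specification["behaviour_rule"](a, [Agent("defect", 2.0)]))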

2.23
Data also differ in their analytic level of reference: they can refer to the micro or the macro level. To evaluate simulation results, a model maker needs aggregate data about the macro dynamics of the system under investigation, making it possible to compare artificial and empirical data. To specify and calibrate the micro specification, a model maker needs data at a lower level of aggregation, such as data on the micro components of the system itself.

* A Taxonomy of ABMs in Social Science from a Model Maker's Perspective

3.1
From the empirical validation point of view, ABMs can be differentiated into "case-based models", "typifications" and "theoretical abstractions". The difference among them lies in the characteristics of the modelling target.

3.2
To sum up: "case-based models" are models of empirically circumscribed phenomena, with specificity and "individuality" in their time-space dimensions; "typifications" refer to specific classes of empirical phenomena and are intended to investigate theoretical properties that apply to a more or less wide range of empirical phenomena; "theoretical abstractions" are "pure" theoretical models with reference neither to specific circumscribed empirical phenomena nor to specific classes of empirical phenomena.

3.3
This section discusses both the differences among these types and their position along an ideal continuum. As we stress below, these types are not conceived as discrete. This allows us to reflect upon the question of the empirical generalisation of theoretical findings.

Case-Based Models

3.4
Case-based models have an empirically space-time circumscribed target domain. The phenomenon is characterised by idiosyncratic and individual features: what Max Weber called "a historical individual" (Weber 1904). The model is often built as an ad hoc model, a theoretically thick representation, in which theoretical hypotheses on micro foundations, in terms of individuals and interaction structures, are introduced to investigate empirical macro properties, which are the target domain of the modelling. The goal of the model maker is to find a micro-macro generative mechanism that can explain the specificity of the case, and sometimes to build realistic scenarios for policy making upon it.

3.5
These models can achieve a good level of richness and detail, because they are usually built with a view to accuracy, detail, precision, veridicality, and sometimes prediction. As Ragin (1987) argues, case-based models aim at "appreciating complexity" rather than "achieving generality". Even if there are methodological traditions, such as ethnomethodology, which overemphasise the difference between theoretical knowledge models and "a-theoretical descriptions", the latter intended to grasp the subjectivity and direct experience of the agents involved, it is clear that case-based models cannot be conceived as "a-theoretical" models. They are built upon theoretical hypotheses and general modelling frameworks. Often, pieces of theoretical evidence or well-known theories are used to approach the problem, as well as to build the model.

3.6
In any case, the point is that a case-based model taken per se can tell nothing more than a "particular story". As Weber (1904) argues, the relevance of a case-based model, as well as the condition of its possibility, depends on its relation to a theoretical typification. For instance, a local theoretical explanation, to be generalisable, needs to be extended to other similar phenomena and abstracted to a higher theoretical level. In our terms, this means relating a case-based model to a typification.

3.7
But from the empirical validation point of view, what matters is that, in the case of case-based models, the model maker is confronted with a specific, time-space circumscribed phenomenon.

Typifications

3.8
Typifications are theoretical constructs intended to investigate properties that apply to a wide range of empirical phenomena sharing some common features. They are abstracted models in the Weberian sense, namely heuristic models that allow the understanding of mechanisms operating within a specific class of empirical phenomena. Because of their heuristic and pragmatic value, typifications are theoretical constructs that do not fully correspond to the empirical reality they aim to understand (Willer and Webster 1970). They are not a representation of all the possible empirical declinations of the class one can find in reality. In line with the idea of the Weberian "ideal type", typifications are a synthesis of "a great many diffuse, discrete, more or less present and occasionally absent concrete individual phenomena, which are arranged according to those one-sidedly emphasized viewpoints into a unified analytical construct" (Weber 1904).

3.9
The principle is that the greater the distance of the typification from all the empirical precipitates of the class it refers to, the more convincing are the theoretical roots of the model with respect to the empirical components of the class, and the greater its heuristic value for scientific inquiry.

3.10
This is basically what Max Weber, before others, rightly emphasised when he wrote about the heuristic value of ideal types (Weber 1904)[2]. Weber argued that such value does not come from the positive properties of case-based models briefly described above, namely richness and detail, accuracy, precision, and veridicality. It comes from theoretical and pragmatic reasons.

3.11
In this sense, the possibility of building a good typification has a fundamental prerequisite: a large amount of empirical observation and tentative theoretical categorisation, as well as a good empirical literature in the field, must already be exploitable. This empirical and theoretical knowledge can be used to build the model and to choose the specific ingredients of the class to be included in it.

3.12
Here the point is twofold, as we clarify further in the next sections. First, typifications imply different empirical validation strategies with respect to case-based models. The fact that the model maker is confronted not with a time-space circumscribed empirical phenomenon but with a particular class of empirical phenomena implies a deeply different empirical validation challenge. Often, as we argue in the fourth section, case-based models can be an important part of the validation of a typification. Second, the reference to a specific class of empirical phenomena distinguishes typifications from "pure" theoretical abstractions. The latter deal with abstract theories about social phenomena that have no specific empirical reference, but a potential application to a wide range of different empirical situations and contexts.

3.13
There are several examples of typifications in social science, and a few among ABMs, too. An ABM example is the industrial district model we have worked on in recent years (see Squazzoni and Boero 2002; Boero, Castellani and Squazzoni 2004). The model refers to industrial districts as a class of phenomena and incorporates a set of features that connote the class itself, such as types of firms, a complementarity-based division of functional labour, sector specialisation, production segmentation and coordination mechanisms, geographical proximity relations, and so forth.

3.14
This model is not a typification of an industrial system or an industrial cluster, that is, of theoretical constructs that can be considered quite close to industrial districts. This is because the model incorporates features that do not apply to those other cases. For instance, a complementarity-based division of labour among firms based on their geographical and social proximity is a feature of industrial districts as a class, but not of industrial systems or industrial clusters as a class.

3.15
At the same time, the model does not refer to an empirically circumscribed industrial district, such as, for example, the Prato textile industrial district or the Silicon Valley industrial district. It is not a case-based model. Rather, it synthesises some general features of the class, without aiming to represent a particular precipitate of the class itself.

3.16
Finally, the model does not aim to reproduce a general model of competition-collaboration among agents, which could shed light on an issue that applies to industrial districts, industrial clusters, network firms, and many other social contexts alike.

Theoretical Abstractions

3.17
Abstractions focus on general social phenomena. An abstraction is neither a representation of a circumscribed empirical phenomenon, nor a typification of a specific class of empirical phenomena. Rather, it is a metaphor for a general social reality, often expressed in the form of a typical social dilemma or situation. It works only if it is general and abstract enough to be distinct from any specific empirical situation or class of empirical phenomena. According to the definition given by Carley (2002), if case-based models are "veridicality"-based models, aiming at accuracy and empirical description, theoretical abstractions are "transparency"-based models, aiming at simplicity and generalisation.

3.18
They often serve purely theoretical aims, trying, for instance, to find new intuitions and suggestions for theoretical debates. They often build upon previous modelling frameworks and are used to overcome limitations of earlier theoretical models, as in the case of game-theoretic ABMs.

3.19
Abstractions expressed by means of ABMs abound in social science. Examples can be found in game-theory-based ABMs (see, e.g., Axelrod 1997; Axelrod, Riolo and Cohen 2002; for an extensive review, see Gotts, Polhill and Law 2003a), or in the "artificial societies" tradition (see, e.g., the reputation model described in Conte and Paolucci 2002). Recently, some interesting reviews of this type of model have become available in social science journals (Macy and Willer 2002; Sawyer 2003).

3.20
The reason for such abundance is that some mechanisms, such as the relation between selfish individual behaviour and sub-optimal collective efficiency in contexts of social interaction, have been studied for a long time, and a huge tradition of formalised models already exists. It is evident, and often useful, that social science proceeds path-dependently, gradually and incrementally developing formalised models that are already established. Another reason for this abundance is that the mechanisms studied by means of these models can be found in many different empirical social situations.

Types in a Continuum

3.21
As said before, the different types of ABMs should be thought of not as discrete forms, but rather as a continuum. This implies taking into account the linkages between the model types and, consequently, the question of generalisation, as argued in the next section.

3.22
To give a representation of the continuum between types, let us depict the taxonomy on a Cartesian plane, as in the left part of figure 1 (where C stands for case-based models, T for typifications and A for theoretical abstractions). The two axes of the plane ideally represent the match between the richness of empirical detail of the target and the richness of detail reproduced in the model. Case-based models ideally show the highest level of target and model detail, because they refer to a rich empirical reality that the model aims to incorporate. Typifications are all the models in the grey area between case-based models and theoretical abstractions, because their reference to a class of empirical phenomena implies the loss of the empirical details of all the possible sub-classes and empirical precipitates the class subsumes.

3.23
To clarify the point, let us consider an example, as in the right part of figure 1. Suppose that the model maker is interested in studying fish markets. A first possibility is to study a particular real one, such as the market of Marseille, France (M in the figure; detailed work on that market is reported in Kirman and Vriend 2000; 2001). In this case, the model would be a case-based model, aiming at reproducing the functioning of that market with a rich level of detail, at the level of both model and target, so that the idiosyncratic features of that market can be deeply understood.

3.24
A second possibility is that, on the basis of the theoretical literature and previous empirical case studies in the field, which provide empirical evidence or well-known stylised facts, the model maker supposes that some fish markets belong to the same class of phenomena, that is, that all those markets share some similar features. In this case, the model would be a typification, aiming at capturing, say, the common features of all the fish markets of the French Riviera (FR in the figure), the Mediterranean Sea (MS) or the world (W). The passage from M to W, through FR and MS, implies an increasing generalisation of the contents of the model, as well as a loss of richness of target and model detail. For instance, the specific features of the M model would not be wholly found in the FR model, those of the FR model would not be wholly found in the MS model, and so forth. Moreover, the W model would contain the common features of all fish markets as a class, that is to say something more, something less or something else with respect to M, a specific empirical precipitate of the class, or with respect to FR or MS, two sub-classes of the class itself with different degrees of empirical extension.

3.25
Finally, a third possibility is that the model maker decides to work on a more abstract model, one that allows the understanding of the characteristics of the auction mechanism embedded in most fish markets. This institutional setting, the Dutch auction (DA in the figure), operates in many other social contexts. It can be thought of as a wide-ranging social institution with generalised properties and extensions. In this case, the model would aim at studying such an institution to show, for example, its excellent performance in quickly allocating prices and quantities of perishable goods such as fish. It is evident that the model would be kept as simplified and theoretically pure as possible, with no direct reference at all to concrete empirical situations.
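
The allocation logic of such an auction is simple enough to sketch in a few lines of Python; the starting price, decrement and buyer valuations below are invented for illustration:

    # Minimal Dutch (descending-price) auction sketch. All numbers are
    # invented; real fish-market auction clocks differ in detail.
    def dutch_auction(start_price, decrement, reserve, buyers):
        """The price falls until the first buyer accepts it: allocation is
        immediate, which suits perishable goods such as fish."""
        price = start_price
        while price >= reserve:
            for name, valuation in buyers.items():
                if valuation >= price:    # the first taker stops the clock
                    return name, price
            price -= decrement
        return None, None                 # unsold: the price hit the reserve

    winner, price = dutch_auction(
        start_price=100, decrement=5, reserve=40,
        buyers={"buyer_a": 62, "buyer_b": 78},
    )
    print(winner, price)                  # -> buyer_b 75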

3.26
It is worth underlining that the previous example does not imply the adoption of a particular fixed research path[3]. A model maker can build theoretical abstractions without having previously built any typification or case-based model, and vice versa. It is also possible to build a typification that does not refer to a class of time-space circumscribed phenomena, as in the famous example of the Weberian ideal type of bureaucracy. Finally, it is obvious that empirical cases are not randomly selected, but are the product of the model maker's choice.

Figure 1. A representation of the ABM taxonomy according to target and model richness of empirical detail (on the left), and the example of fish markets (on the right)

3.27
As a clarification, and supposing we are following a path towards generalisation, let us come back to the previous example. Suppose that a model maker, after building a case-based model of the Marseille fish market, tries to generalise some theoretical findings from that case. As the literature on the generalisation of case studies suggests, this is a difficult undertaking for which there is no general method.

3.28
One traditional way of generalising empirical case studies is to use "methods of scientific inference also to study systematic patterns in similar parallel events" (King, Verba and Keohane 1994). This is what is done in statistical research: generalising from the sample to the universe, trying to test the significance of particular findings with respect to the universe. But empirical case studies differ profoundly from statistical surveys. The problem of the heterogeneity of similar cases in reality, and of the relation between well-known and unknown cases, is usually tackled by a careful selection of cases with respect to the entire reference universe. In fact, from a scientific point of view, as Weber (1904) rightly argues, cases are nothing but a synonym for instances of a broader class. The selection can only be made on the basis of prior empirical and theoretical knowledge and of theoretical hypotheses. This is why typification models can be useful. As we are going to suggest, the broader class that is the reference universe of a case-based model can be intended both as a class that groups together time-space circumscribed empirical phenomena of the same type (as in the example of fish markets) and as a class that groups together empirical phenomena of different types that share some common properties.

3.29
To clarify this last point, let us suppose that, instead of attempting generalisation by considering other seaside fish markets to find features similar to those of the Marseille case-based model, the model maker decides to study fresh food markets (e.g., markets selling fruit, vegetables and meat, whatever their location), or markets for completely different perishable goods (for instance, markets where certain chemical compounds are sold), and so forth. This generalisation strategy has a different empirical reference target with respect to the first example we began with.

3.30
This further example supports the two following conclusions: the choice of the classes of phenomena to be considered is determined not by the classification, but by the model maker's research path; and the classification is not a constraint on research but a concept useful for dealing with empirical data, as better explained in the following section.

* Empirical Calibration and Validation Strategies

4.1
As outlined in the introduction, the question of empirical calibration and validation can only be approached in terms of multiple possible strategies, because no single method exists yet. There are only some examples that can be used as best practices to be extended, or as a tentative to-do list.

Case-based Calibration and Validation

4.2
As said before, case-based models are empirical models in the first instance. Usually, finding aggregate data about a specific time-space circumscribed empirical phenomenon is not a difficult undertaking. It is more difficult to work out a good strategy for gathering micro-level data. As argued in the second section, there are different tools for obtaining first-hand empirical data on a target, such as experimental, stakeholder, qualitative and quantitative methods.

4.3
We have selected two good examples of finding and using empirical data in the ABM literature. The first is the Anasazi model (Dean et al. 2000), a simulation of a historical phenomenon created by a multidisciplinary research team at the Santa Fe Institute. It is a good example of how to reproduce historical phenomena with case-based models, creating realistic representations of environment and populations and mixing different types (quantitative and qualitative) and sources of empirical data. The second is the model of domestic water demand recently described by Moss and Edmonds (2005), a good example of what a stakeholder approach to empirical data means. Both can be viewed as first examples of possible best practices to be broadened further. They are discussed below.

The Anasazi Example

4.4
The model is the outcome of the Artificial Anasazi Project, the first exploratory multidisciplinary project on the use of ABMs in archaeology, widely considered a best practice in the field (Ware 1995; Gumerman and Kohler 1996). The overall goal of the project was to use ABMs as analytical tools to overcome some traditional problems in the evolutionary study of prehistoric societies, such as the tendency to adopt a "social systems" theoretical perspective, which implies an overemphasis on and a reification of the systemic properties of these societies; the exclusion of the role of space-time as a fundamental evolutionary variable; and the tendency to conceive of culture as a homogeneous variable, without paying due attention to the evolutionary and institutional mechanisms of transmission and inheritance of cultural traits.

4.5
The background is a multidisciplinary study of a valley in north-eastern Arizona where an ancient people, the Anasazi[4], lived until 1300 A.D. The Anasazi were the ancestors of the modern Pueblo Indians, and they inhabited the famous Four Corners (between southern Utah, south-western Colorado, north-western New Mexico and northern Arizona). In the period between the last century B.C. and 1300 A.D., they supplemented their food gathering with maize horticulture and evolved a culture whose ruins and debris we can still appreciate today. Houses, villages and artefacts (ceramics, and so on) are today's testimony to their culture. Modern archaeological studies, based on the many different sites left in the area, stress the mysterious decline of that people. The ruins testify to the evolution of an advanced culture that was halted and erased within a few years, without violent events such as enemy invasions.

4.6
The goal of the model is to shed light on the following question: why did the Anasazi community, after a longue durée evolution characterised by stability, growth and development, disappear in a few years?

4.7
The research focused on a particular area inhabited by the Kayenta Anasazi, the so-called Long House Valley in north-eastern Arizona. That area was chosen by the model makers because of its representativeness, its topographical bounds, and the quantity and quality of the available scientific data on socio-cultural, demographic and environmental aspects.

4.8
An ABM allows the building of a "realistic" environment, based on detailed data, with anthropologically coherent agent rules[5]. The model aims to reproduce a complex socio-cultural empirical reality and to check whether "the agents' repeated interactions with their social and physical landscapes reveal ways in which they respond to changing environmental and social conditions" (Dean et al. 2000). As Gumerman et al. (2002) suggest, by "systematically altering demographic, social, and environmental conditions, as well as the rules of interaction, we expect that a clearer picture will emerge as to why Anasazi followed the evolutionary trajectory we recognize from archaeological investigation". To use a well-known reference (Gould and Eldredge 1972), the analytical challenge of the Anasazi evolutionary trajectory can be conceptually condensed into the idea of "punctuated equilibria".

4.9
To sum up the empirical data used, it is worth noting that the introduction of maize in the Long House Valley, around 2000 B.C., coincided with the beginning of the Anasazi presence. The area comprises 180 km² of land. For each hectare, and for each year in the period from 382 A.D. to 1450 A.D., a quantitative index representing the annual potential production of maize in kilograms was extracted from the data.

4.10
The process of finding a realistic "fertility" index to include in the model was quite challenging. The index was the main building block the model makers used to create a realistic "production landscape". It was calibrated on the different geographical areas within the valley and created using a standard method to infer production data from climate data (the so-called Palmer Drought Severity Indices), complemented with data on other elements, such as the effects of the hydrologic and aggradation curves. Among the elements the paleoenvironmental index considers are soil composition, the amount of rain received, and the productivity of the species of maize available in the valley at that time. Obviously the process, carried out for each hectare and each year, involved many sources, from dendroclimatic, soil, dendroagricultural and geomorphological surveys, using high-level technologies (for a detailed description, see Dean et al. 2000)[6].
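
A hedged sketch of how such an index might be assembled per hectare and per year is given below; the functional form and constants are our invention, while the actual reconstruction in Dean et al. (2000) is far more elaborate:

    # Illustrative only: combining climate and soil inputs into a potential
    # maize yield index (kg per hectare per year). The functional form and
    # constants are invented; see Dean et al. (2000) for the real procedure.
    def potential_maize_kg(pdsi, soil_quality, hydrology, aggradation,
                           base_yield=250.0):
        """pdsi: Palmer Drought Severity Index (negative = drought);
        soil_quality, hydrology, aggradation: normalised 0..1 adjustments."""
        climate_factor = max(0.0, 1.0 + 0.1 * pdsi)   # drought cuts the yield
        return base_yield * climate_factor * soil_quality * hydrology * aggradation

    # One cell of the reconstructed "production landscape":
    print(potential_maize_kg(pdsi=-2.0, soil_quality=0.8,
                             hydrology=0.9, aggradation=0.95))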

4.11
Following this approach, a description of the whole valley was created and reproduced in the model, so that the production opportunities that actually occurred and a realistic environment were mimicked[7], ready for testing hypotheses about agents.

4.12
Agents were then introduced, following different hypotheses about their attributes. Here, we focus on the smallest social unit, the individual household, which has heterogeneous and independent characteristics such as age, location and grain stocks, while sharing with all others the values for death age and nutritional needs. Demographic variables, nutritional needs, and the attributes and rules of household formation were taken from previous empirical bio-anthropological, agricultural and ethnographic analyses (for details, see Dean et al. 2000). Moreover, households identify both a "residential" settlement and farming land. Residential settlements are modelled according to empirical evidence on the famous "pithouses" that can still be seen today in different areas of New Mexico. They include five rooms, five individuals, and a matrilineal regulatory institution.

4.13
Household consumption is fixed at 800 kilograms per year, a proxy based on data on individual consumption (160 kilograms per year for each of the five household members). Maize not consumed is assumed to be storable for at most two years. Households can move, can be created, and can die. To model the possible fission of households, the model makers assume that a household can last at most 30 years and that, once a household member is 16 years old, there is a 0.125 probability of a new household being created through marriage. This probability synthesises several conditions: the probability that a household contains children, the time needed for children to grow up, and the possibility that a female meets a partner, has a child and gives rise to a new household.

4.14
Households have the capacity to calculate the potential harvest of a plot of farmland, to identify other possible plots, and to select them, checking whether the selected hectare is unfarmed and uninhabited and whether it can produce at least 160 kilograms per year for each household member. Similarly, residential sites are chosen if they are unfarmed, less than 2 kilometres away, and less productive than the selected farmland. Finally, if several residential sites match the criteria, the one with the closest access to domestic water is chosen.
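
These rules translate almost directly into code. In the sketch below, the constants (the 800 kg annual need, the 160 kg per-member threshold, the 30-year household limit, the 0.125 fission probability) come from the paper; the data structures and the relocation loop are simplified stand-ins, and the residential-site rules (the 2 km radius, water access) are omitted:

    # Sketch of the Anasazi household rules reported by Dean et al. (2000).
    import random

    CONSUMPTION_PER_MEMBER = 160          # kg of maize per person per year
    HOUSEHOLD_SIZE = 5
    HOUSEHOLD_NEED = CONSUMPTION_PER_MEMBER * HOUSEHOLD_SIZE   # 800 kg/year
    MAX_HOUSEHOLD_AGE = 30
    FISSION_AGE = 16
    FISSION_PROB = 0.125

    def viable_farmland(plot):
        """A plot qualifies if unfarmed, uninhabited and productive enough."""
        return (not plot["farmed"] and not plot["inhabited"]
                and plot["yield_kg"] >= HOUSEHOLD_NEED)

    def step_household(household, plots):
        household["age"] += 1
        if household["age"] > MAX_HOUSEHOLD_AGE:
            return None                    # the household dissolves
        if (household["oldest_member_age"] >= FISSION_AGE
                and random.random() < FISSION_PROB):
            household["fissioned"] = True  # a new household forms by marriage
        if plots[household["plot"]]["yield_kg"] < HOUSEHOLD_NEED:
            for i, plot in enumerate(plots):   # relocate to a viable plot
                if viable_farmland(plot):
                    household["plot"] = i
                    break
        return household

    plots = [{"farmed": False, "inhabited": False, "yield_kg": y}
             for y in (600, 900, 1100)]
    h = {"age": 10, "oldest_member_age": 17, "plot": 0, "fissioned": False}
    print(step_household(h, plots))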

4.15
Nutrition determines fertility and thus population dynamics. The environmental landscape allows the reproduction of the main periods in the valley's history: a sharp increase in productivity around A.D. 1000, a deterioration around A.D. 1150, and an improvement until the end of the 1200s, when the so-called "Great Drought" begins.

4.16
To sum up, the question is: "can we explain all or part of local Anasazi history - including the departure - with agents that recognize no social institutions or property rights (rule of land inheritance) or must such factors be built into the model?" (Dean et al. 2000).

4.17
The simulation starts in A.D. 400 with the historical number of households, randomly positioned. It shows great similarity with the real data: the simulation is able to replicate the location and size of the real settlements. Moreover, in the archaeological record, hierarchy and clustering are strictly correlated; in the simulation, hierarchy, even if not directly modelled, can be inferred from clustering. An interesting finding is that the aggregation of households into concentrated clusters emerges when environmental fluctuations are of low intensity. Conversely, in periods with higher levels of rain, plenty of streams and higher ground moisture, households tend to disperse, exploiting new opportunities for maize horticulture. In sum, the simulation shows that the Anasazi population was able to generate a robust equilibrium at the edge between concentration and dispersion of household settlements and between low and high frequencies of environmental variability.

4.18
In the period between 1170 and 1270, the Anasazi population begins to move to the southern area of the valley, because of erosion and the lowering of the phreatic surface (an empirical fact introduced into the model). Despite the empirical evidence that the Anasazi departed around 1270, in the simulated environment the Anasazi leave the valley completely only around 1305. During the simulation, various low-density settlements withstand the environmental challenge and even grow in the meantime. The implication is that, despite the worsening of environmental conditions in the period of the so-called "Great Drought", the Anasazi were still within a sustainable regime of environmental possibilities and constraints: moving towards the northern areas and dis-aggregating clustered settlements would have been enough to survive in the valley.

4.19
This solution, relocating production and creating more small settlements with a much smaller population, was actually adopted in other Anasazi areas, as suggested by Stuart (2000). But history teaches us that the Anasazi completely left the valley in those critical few years. Perhaps social ties, or complex reasons related to the power and social structure of the Anasazi community, not yet considered in the model, explain that choice. The model makers in fact conclude that "the fact that in the real Long House Valley, the fraction of the population chose not to stay behind but to participate in the exodus from the area, supports the assertion that socio-cultural 'pull' factors were drawing them away from their homeland […] The simple agents posited here explain important aspects of Anasazi history while leaving other important aspects unaccounted for. Our future research will attempt to extend and improve the modelling, and we invite colleagues to posit alternative rules, suggest different system parameters, or recommend operational improvements" (Dean et al. 2000).

4.20
In conclusion, this case-based model is a good example of an empirically grounded ABM. The empirical target is circumscribed in time and space; the goal is to understand a particular history; the means is a realistic model able to mimic historical evolution. Qualitative and quantitative, direct and indirect empirical data are used to build the model as accurately as possible with respect to the target. There is no typification behind it: the model makers proceed on the ground, trying to exploit the available empirical and theoretical knowledge. The theoretical findings of the model show that adaptive settlements, movement across space, relocating production and creating small settlements were a possible way of tackling environmental challenges in the case of the Anasazi in the Long House Valley. To generalise these findings, the model makers would need to compare this particular history of the Anasazi in the Long House Valley with other histories of the same kind, either Anasazi cases in other areas or other populations in similar environmental conditions, as suggested by Stuart (2000).
The Water Demand Model

4.21
The second example we focus on is the water demand model described by Moss and Edmonds (2005). It is a model intended to deal directly with some methodological issues, such as the importance of empirically calibrating and validating a simulation via a stakeholder approach, and the need for a generative model in order to understand empirical statistical properties.

4.22
The example concerns the role of social influence in water demand patterns. From a theoretical point of view, such a role can be investigated only if heterogeneity at the micro level is assumed; as we have argued before, such a property can be formalised only with ABMs.

4.23
From a methodological perspective, the point is that such a model can take into account empirical data on behaviour, so that aggregate macro empirical time series can be appropriately generated by the simulation. The goal of the authors was clearly methodological: to demonstrate how the explanation of the macro statistical properties of an outcome can be greatly improved by a model that explicitly takes into account a social generative process. As the authors argue, while the model generates leptokurtic time series with clustered volatility that can be compared with empirical data on domestic water consumption, only a causal model can explain the emerging aggregate statistics.
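
To make the comparison operational, here is a hedged sketch of how the two statistical signatures mentioned above, excess kurtosis and clustered volatility, might be measured on a simulated aggregate series; the helper functions are ours, not part of the Moss and Edmonds model.

    import numpy as np

    def excess_kurtosis(series):
        """Positive excess kurtosis indicates fatter tails than a Gaussian."""
        x = np.asarray(series, dtype=float)
        z = (x - x.mean()) / x.std()
        return float((z ** 4).mean() - 3.0)

    def volatility_clustering(series, lag=1):
        """Autocorrelation of absolute changes: positive values suggest that
        large changes tend to be followed by large changes."""
        dx = np.abs(np.diff(np.asarray(series, dtype=float)))
        return float(np.corrcoef(dx[:-lag], dx[lag:])[0, 1])

Validation in this spirit then amounts to checking that both statistics, computed on the simulated aggregate consumption, fall within the range observed in the empirical series.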

4.24
The model has been intentionally designed to capture the empirical knowledge of stakeholders, in particular the "common perceptions and judgements of the representatives - the water industry and its regulators - regarding the determinants of domestic water consumption both during and between droughts in the UK"[8].

4.25
The first version of the model was constructed with little feedback from stakeholders. It was intended to demonstrate the role of social influence in reducing domestic water consumption during droughts. Stakeholders criticised this first version, pointing out that, when a drought ended, aggregate water consumption immediately returned to its pre-drought level. The second version was designed to address this deficiency, introducing sounder neighbourhood-based social influence mechanisms and, following the evidence, a decay of such influence over time.

4.26
The cognitive, behavioural and social aspects of the model can be summarised in the idea that agents decide what to do about water consumption by learning over time and by being influenced by neighbouring agents and by institutional agents, who issue suggestions by monitoring aggregate data. These aspects are all modelled according to both theoretical hypotheses and empirical evidence. Moreover, the model embodies a hydrological sub-model, built on empirical knowledge, so that the occurrence of droughts can be simulated from real precipitation and temperature data.
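
A minimal sketch of a neighbourhood-based influence rule with decaying weight, in the spirit of the mechanism described above; the functional form, weights and decay rate are our illustrative assumptions, not the actual Moss and Edmonds specification.

    def influenced_demand(own_habit, neighbour_demands, policy_signal,
                          years_since_drought, decay=0.5, weight=0.3):
        """Water demand under decaying social and institutional influence.

        Influence is strongest during or just after a drought and decays
        geometrically with the years elapsed since it ended, so aggregate
        consumption drifts back towards habit instead of snapping back
        immediately (the stakeholders' criticism of the first version).
        """
        strength = weight * decay ** years_since_drought
        social = sum(neighbour_demands) / len(neighbour_demands)
        target = min(social, policy_signal)  # follow the more frugal of the two cues
        return (1 - strength) * own_habit + strength * target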

4.27
The simulation data show some statistically interesting features, which can be explained on the basis of the underlying ABM. As the authors point out, most of the shapes the time-series data take are the result of underlying social processes and a consequence of the generative mechanisms put into the model, such as social embeddedness, the prevalence of social norms and individual behaviour. As Coleman (1990) argues in his famous critique of 'parameters and variables' sociology, this is the main difference between descriptive statistics and generative models. As Moss and Edmonds (2005) accordingly argue, "conflating the two can be misleading".

4.28
In this view, participatory models are an important way to bridge the gap between statistical empirical data and generative theoretical models. Empirical observation of "how processes actually occur should take precedence over assumptions about the aggregate nature of the time series that they produce. That is, generalization and abstraction are only warranted by the ability to capture the evidence. Simply conflating descriptive statistics with a (statistical) model of the underlying processes does not render the result more scientific but simply more quantitative".

4.29
In conclusion, this second case-based model differs from the first one. Its goal is not to shed light on a particular historical evolution, but to support a methodological argument about the importance of empirically founding generative models able to account for macro empirical statistical properties. What is important here is that the empirical foundation is achieved by mixing statistical data and a participatory method, the latter being a "direct strategy" for empirical data gathering that was manifestly unavailable in the Anasazi case.

The Case of Typifications

4.30
As we stressed before, typifications are theoretical artefacts focused on a particular class of phenomena. In typifications, the relationship between the model and the empirical data is even more difficult to establish than in case-based models, particularly when it comes to data for the micro calibration of the model.

4.31
The main issue here is that the modelling target is a class of phenomena, not just an instance of the class. As in the fish market example mentioned before, aggregate data on all the French Riviera fish markets are clearly less likely to be found than data for a single case (e.g. Marseille), and more expensive to collect. The issue becomes even more challenging when what is needed is not aggregate data for macro validation (e.g., the average weekly price dynamics), but micro data for calibrating model components (e.g., the average percentage of buyers who are restaurant managers). The difficulty grows further when qualitative data are required: it is easier to find and collect a description of subjects' behaviour for a single case than for a whole class.

4.32
Such bounds on data availability and collection costs are the reason why typifications mostly rest upon theoretical analyses and second-order (indirect) empirical data, which are available for some well-known classes of phenomena.

4.33
Despite those difficulties, typifications are useful for understanding widespread phenomena, and they can often be empirically calibrated and validated. To support the first claim (i.e., the usefulness of typifications), we report the Fearlus model as an example, showing how a typification can address several different questions related to a class of phenomena and how its flexibility can be exploited to analyse similar classes of phenomena. To show a possible procedure for empirically calibrating a typification, we then return to the example of industrial districts cited before.
The Fearlus Model

4.34
The Fearlus (Framework for Evaluation and Assessment of Regional Land Use Scenarios) model has been developed at the Macaulay Institute in Aberdeen to simulate issues related to land use management.

4.35
With the aim of answering research questions related to rural land use change, the model is composed of a two-dimensional space divided into land parcels, a set of land uses with different yields, and a set of decision makers, that is to say land managers able to carry on social interactions (for instance, information sharing).

4.36
The model should be considered a typification because it captures the main mechanisms and actors that determine land use change and, at the same time, retains a high degree of flexibility, so that it can be locally adapted to undertake particular case studies.

4.37
In fact, the model components and their features can be fixed according either to theoretical hypotheses or to empirically grounded knowledge. For instance, treating the land space as a flat grid, where each parcel has the same size, climate and soil composition, and assuming randomly determined yield dynamics and land prices, makes it possible to study land managers' behaviour and its impact on land use scenarios in a general way, unbounded by spatially determined peculiarities.
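
A minimal sketch of such a deliberately featureless setting, with managers on a toroidal grid imitating their best-yielding neighbour; grid size, the number of land uses and the imitation rule are our illustrative assumptions, not the actual Fearlus code.

    import random

    SIZE, USES, YEARS = 10, 3, 50  # illustrative parameters
    rng = random.Random(0)

    # every parcel is identical; each land use's yield is redrawn at random
    # yearly, so no spatial peculiarity can drive the results
    use = [[rng.randrange(USES) for _ in range(SIZE)] for _ in range(SIZE)]

    for year in range(YEARS):
        y = [rng.random() for _ in range(USES)]  # this year's yield per land use
        new = [row[:] for row in use]            # synchronous update
        for i in range(SIZE):
            for j in range(SIZE):
                neigh = [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
                         (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]
                bi, bj = max(neigh, key=lambda p: y[use[p[0]][p[1]]])
                if y[use[bi][bj]] > y[use[i][j]]:  # imitate a better neighbour
                    new[i][j] = use[bi][bj]
        use = new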

4.38
The target is therefore a class of phenomena: rural land parcels owned and managed by small landowners who yearly face the choice of how to use them. The typification is exploited as a means of showing the environmental conditions under which non-imitative or imitative behaviour is preferable for land managers (see Polhill, Gotts and Law 2001), of comparing the outcomes of different imitative strategies (Gotts, Polhill, Law and Izquierdo 2003), and, finally, of investigating the relationship between land managers' aspiration thresholds and environmental circumstances (Gotts, Polhill and Law 2003b).

4.39
As the reader can note, the kind of questions such a model allows one to address concerns the whole class of phenomena considered. This calls for a typification, because a case-based model would answer those questions only under very bounded and specific conditions.

4.40
Moreover, scholars working with Fearlus have moved along the continuum of the typification space. Moving in the direction of case-based models, they have adapted Fearlus to a more specific, though still not case-based, problem, as explained in Izquierdo, Gotts and Polhill (2003): the model has been adapted to consider water management and pollution together with land use management. The result is a model of river basin land use and water management that considers social ties among actors, water flows across the spatial dimension, and so forth. Validated with stakeholders, the new version of Fearlus (called FEARLUS-W) is used to "increase our understanding of these complex interactions and explore how common-pool resource problems in river basin management might be tamed through socio-economic interactions between stakeholders (primarily rural land managers), and through management strategies aimed at shaping these interactions" (Izquierdo, Gotts and Polhill 2003). In other words, the general case of rural land use scenarios has been narrowed to the case of river basin scenarios.

4.41
Finally, it is worth noting that Fearlus has also been used for more theoretical and methodological purposes. Its being a typification has made it possible to compare its features and results with GeoSim, a model of military conflicts among states: the idea was to compare these two models, coming from different fields, in order to understand their structural similarities and differences and to allow cross-fertilisation between them (Cioffi-Revilla and Gotts 2003). Furthermore, the typification has allowed deep analyses of its internal structure, as in Polhill, Izquierdo and Gotts (2005), where the effects of the floating point arithmetic used by programming languages are critically examined in connection with model results.
The Industrial District Model

4.42
Coming back to the empirical calibration and validation of typifications, it is useful to reflect again upon the case of industrial districts, which has recently attracted growing attention from ABM scholars. Apart from our own work in recent years (Squazzoni and Boero 2002; Boero, Castellani, Squazzoni 2004), it is worth recalling: the model of the Italian textile industrial district of Prato by Fioretti (2001), an example of a "case-based model" built to understand the historical change in the district's competition strategies over recent decades; the Silicon Valley model by Zhang (2003); Brenner (2001); Albino, Carbonara and Giannoccaro (2003); Brusco et al. (2002), who used a cellular automata approach to study interaction patterns among localised firms through a typification; and, more recently, Borrelli et al. (2005). These are examples of a growing literature on computational approaches to industrial districts, which has found a theoretical systematisation in an important contribution by Lane (2002).

4.43
In this case, the starting point was the huge body of empirical and theoretical studies conducted in the field over the last 30 years. For instance, it is known that industrial districts are evolutionary networks of heterogeneous, functionally integrated, specialised and complementary firms, clustered in the same territory and within the same industry. As is well known from both empirical and statistical surveys, they constitute a fundamental backbone of the Italian manufacturing system. An extensive empirical case-based and statistical literature allowed us to identify a set of building blocks, that is, ingredients that belong to the class (Squazzoni and Boero 2002; Boero, Castellani, Squazzoni 2004).

4.44
Let us recall the first four building blocks, just as an example:
  1. a huge number of small firms clustered in the same territory;
  2. different types of firms according to the division of labour;
  3. specialised complementary-based production chains that link firms together;
  4. informal coordination and hierarchical/horizontal information flow among firms.
These building blocks can be supported by empirical data and calibrated accordingly. The first (1) can be inferred from statistical surveys on the agglomeration of firms across space: for the Italian economy, there is considerable statistical evidence from which the number of old and new industrial districts is inferred and monitored over time (also for policy-making reasons). The evidence shows that agglomeration is a typical ingredient of the industrial district formula. The second (2) can be empirically inferred from quantitative surveys that classify firms according to the type of goods they produce (final, intermediate, phase, raw materials, and so on); a great variety of firm types is the second typical ingredient of industrial districts. The third (3) is usually inferred by empirically reconstructing the production flow through interviews with entrepreneurs or managers. The last (4) can be derived from empirical surveys on the absence of formal registered contracts and protocols and on the predominant use of traditional communication tools as coordination scaffolds.

4.45
Some of these data can be acquired from second-hand sources (statistical surveys by government institutions, foundations, or local entrepreneurs' associations). Others, often the more qualitative ones, must be acquired through first-hand sources.

4.46
It is also possible to infer whether there are different morphologies within the same class and to identify different representative empirical cases, according to the absence of some typification building blocks or to differences among the representative cases. For instance, in the Italian case, it is usual to consider the Prato industrial district and the Northeast districts as different morphologies of the class. The first shows a huge number of small firms, flattened inter-firm networks, Marshallian externalities-based growth, and so on, while the second shows the presence of medium-to-large firms, internal paths of growth, and hierarchical, more formalised networks (Belussi and Gottardi 2000; Belussi, Gottardi and Rullani 2003). These two examples can be considered, and usually are, extreme morphologies of the class.

4.47
A particular characteristic of this class is that empirical case studies and theories abound, while less attention has been paid to formalising models to test theoretical hypotheses. In this respect, a typification can provide ways of testing theories developed in the field, or deepen our understanding of the basic properties and relations among the mechanisms that lie behind the class. For instance, one question concerns the relation between the different representative morphologies of the class and the features of the environment in which they are embedded: is the Marshallian-like sub-class of industrial districts (such as Prato) an appropriate organisational formula for coping with a stable technology and market environment, and a network-centred district a good formula for coping with unstable environments?

4.48
In conclusion, one of the most challenging questions here is precisely the relation between case-based models and typifications. As Weber (1904) rightly argued, typifications are needed to do scientific research with case-based models. But, for a case-based model, they are theoretical means used to build the model, whereas, in the case of a typification, they are the model target itself.

4.49
Typifications can be used to embed a case-based model into a wider theoretical frame, so that a possible generalisation of the case is supported; or, vice versa, the case, if selected as representative of the class or of some important sub-classes, can be used as an empirical test of the typification, deepening the typification's theoretical completeness.

4.50
This second route was explored, for example, by Norbert Elias in his famous study of court society (Elias 1969). The case of French court society is chosen as an instance of the typification of the civilising process in Western societies developed in Elias (1939). The theoretical mechanism under investigation is the role of social interdependence, behavioural habitus and new forms of power competition in shaping particular configurations of modern social life. The case is chosen because of its representativeness with respect to the entire class: the idea is that what happened in French court society is also reflected in the other Ancien Régime court societies of Europe.

4.51
In the best-case scenario, the outcome of such a process, whether it begins with a case-based model or a typification, is what Merton called a "middle range theory" (Merton 1949), that is to say an empirically grounded theory that allows the organisation and possibly the generalisation of theoretical knowledge about specific social mechanisms operating in empirical reality.

The Case of Theoretical Abstractions

4.52
Theoretical abstractions refer to general social mechanisms with no reference to an empirical reality circumscribed in space and time. They are mostly used for theoretical tests and implication analyses, or as extensions of previous theoretical or modelling frameworks, as in the case of game-theoretic ABMs. They are tools to shed light on theoretical hypotheses, illustrate new intuitions or ideas, develop modelling frameworks, and test the consistency of theoretical hypotheses.

4.53
That is to say, theoretical abstractions can have a value per se. They often allow us to address topics that cannot be understood empirically. For instance, ABMs of cooperation and social order are used to support a theoretical understanding and explanation of the role of long-run evolution and complex interaction structures in the emergence of robust cooperation regimes over time, or to understand the minimal conditions, in terms of social contexts and institutional frameworks, for cooperation to be generated and protected (Axelrod 1997); it is often impossible to study the role of evolution in controlled empirical experiments.

4.54
At the same time, it is worth remembering that the most famous theoretical abstractions in the ABM literature were built upon a strong foundation of empirical knowledge. Consider the game-theoretic ABMs popularised by Axelrod (1984; 1997) and the segregation model by Schelling (1971): the first is still the reference for models of social order, the second for models of emergent micro-macro dynamics (Macy and Willer 2002), and both usually serve as general references, frameworks and sources of inspiration for other model makers.

4.55
The models and theoretical foundations summarised in Axelrod (1984) were cumulatively built upon an extensive empirical analysis of behavioural strategies in interaction contexts, as well as on a search for empirical salience in different fields of research, from biology to political science. These were the source of the discovery of the famous TIT-FOR-TAT strategy and a theoretical and methodological heuristic for subsequent modelling experiments and theoretical extensions. It was in the famous round-robin computer tournament among social scientists (enlarged in the second round to include non-specialist entrants) that TIT-FOR-TAT, originally submitted by Anatol Rapoport, emerged as the most successful strategy, its robust success being due to its combination of niceness, retaliation, forgiveness and clarity (Hoffmann 2000).
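
As a reminder of how simple the winning entry was, here is a minimal sketch of TIT-FOR-TAT in an iterated Prisoner's Dilemma, using the canonical tournament payoffs (T=5, R=3, P=1, S=0); the harness is our own illustration, not Axelrod's tournament code.

    # canonical Prisoner's Dilemma payoffs for the row player: T > R > P > S
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def tit_for_tat(own_history, their_history):
        """Cooperate on the first move, then mirror the opponent's last move."""
        return 'C' if not their_history else their_history[-1]

    def play(strategy_a, strategy_b, rounds=200):
        """Iterated game; returns the cumulative payoffs of the two players."""
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)
            b = strategy_b(hist_b, hist_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b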

4.56
As Hoffmann (2000) reminds us in a recent revisitation of the Axelrod-inspired debates, after delimiting some empirically grounded theoretical findings, Axelrod simulated a learning process by allowing a replicator dynamic to change the representation of the tournament strategies between successive generations according to their relative payoffs. The result was that, after one thousand generations, reciprocating cooperators accounted for about 75% of the total population, with TIT-FOR-TAT displaying the highest representation of all. Axelrod then used a genetic algorithm to simulate learning and evolution, generating the emergence of strategies that closely resembled TIT-FOR-TAT (Axelrod 1997). Axelrod's subsequent work has focussed on the analysis of the emergence and robustness of cooperation regimes, on the role of social structures in preserving them, and on a deepening of the quest for the minimal conditions for the emergence of cooperation regimes through theoretical abstractions via ABMs (e.g., Axelrod, Riolo and Cohen 2002).
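
The replicator step mentioned above can be written in a few lines. The discrete-time update below is the textbook form of the dynamic, not Axelrod's own code; iterating it from the tournament's observed strategy mix is, in essence, the "ecological" experiment Hoffmann describes.

    def replicator_step(shares, payoff):
        """One generation of discrete-time replicator dynamics.

        shares[i] is the population share of strategy i (summing to 1);
        payoff[i][j] is the (assumed positive) payoff of strategy i
        against strategy j. Each share grows in proportion to its
        fitness relative to the population average.
        """
        n = len(shares)
        fitness = [sum(payoff[i][j] * shares[j] for j in range(n))
                   for i in range(n)]
        avg = sum(s * f for s, f in zip(shares, fitness))
        return [s * f / avg for s, f in zip(shares, fitness)]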

4.57
The point here is not to discuss the theoretical appropriateness and generalisability of Axelrod's findings, but rather that Axelrod's work demonstrates the utility of experimental data and empirical knowledge in supporting theoretical abstractions.

4.58
Much the same applies to Schelling's segregation model. Schelling started with an intriguing theoretical challenge that emerged from sound empirical evidence: are the segregation patterns we observe empirically in most American urban contexts an emergent property of simple and relatively tolerant threshold preference functions at the micro level? If we assume that people tend to respond and adapt locally to the choices of their neighbours, that is, if we assume a local interaction structure characterised by social interdependence, can a qualitatively different macro outcome emerge over time? A formalised model was used to address the question, embedding the empirical example of segregation in a set of other similar examples of the complex relation between micro motives and macro behaviour, now summarised under the category of "tipping point" mechanisms.
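
The canonical mechanism can be reproduced in a few lines. A minimal sketch of the threshold rule on a toroidal grid follows (grid size, threshold and vacancy rate are illustrative parameters of ours, not Schelling's original values); markedly segregated clusters emerge even though each agent tolerates being in a local minority.

    import random

    SIZE, THRESHOLD = 20, 0.3  # illustrative: agents want >= 30% like neighbours
    rng = random.Random(1)
    # about 10% of cells vacant, the rest split between two groups
    grid = [[rng.choice(['A', 'B']) if rng.random() > 0.1 else None
             for _ in range(SIZE)] for _ in range(SIZE)]

    def unhappy(i, j):
        """True if the share of like neighbours falls below the threshold."""
        me = grid[i][j]
        neigh = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        occupied = [n for n in neigh if n is not None]
        return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

    for _ in range(50000):
        i, j = rng.randrange(SIZE), rng.randrange(SIZE)
        if grid[i][j] is not None and unhappy(i, j):
            empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
                       if grid[x][y] is None]
            if empties:  # move the unhappy agent to a random vacant cell
                x, y = rng.choice(empties)
                grid[x][y], grid[i][j] = grid[i][j], None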

4.59
Subsequent works have revised the traditional Schelling model. Epstein and Axtell (1996) introduced some modifications to the preference functions and interaction structures, further corroborating the results of the canonical model. Gilbert (2002) modified the canonical model to allow a theoretical analysis of the role of micro-level heterogeneity and second-order emergence, while Pancs and Vriend (2003) explored preference functions oriented towards intentionally rejecting racial segregation. Bruch and Mare (2005) focussed on an implication analysis of Schelling's assumptions, discovering that the emergence of tipping-point segregation dynamics closely depends on the introduction of a threshold (non-continuous) function at the micro level, which does not seem supported by careful empirical evidence, and that, by allowing agents to respond in a continuous and accurate way to neighbourhood change, as empirical evidence suggests people do, a trend towards integration rather than segregation emerges. At the same time, they introduce other relevant empirical issues, such as the role of income constraints (Bruch and Mare 2005).
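
The distinction at stake can be stated compactly. A hedged sketch of the two response functions, expressed as probabilities of moving (the functional forms are our illustration of the contrast, not Bruch and Mare's estimated models):

    def threshold_response(like_share, threshold=0.5):
        """Schelling-style step function: move with certainty below the
        threshold, never above it."""
        return 1.0 if like_share < threshold else 0.0

    def continuous_response(like_share, slope=2.0):
        """Graded response: the probability of moving declines smoothly
        as the share of like neighbours rises."""
        return max(0.0, 1.0 - slope * like_share)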

4.60
Coming back to the question of the use of empirical data, abstractions have the advantage of being applicable and testable with respect to a wide range of possible concrete empirical situations, and of being simple and transparent to use (Carley 2002). At the same time, however, the level of abstraction implies the need for strong and extensive empirical validation: the empirical reference cannot be limited to a few empirical realities. Good practice requires that the data used for calibrating and validating an abstraction be gathered in many empirical situations, in order to find support for a theory that seeks to be as general as possible. Such data can be obtained, for instance, by surveying very different populations or, in laboratory experiments, with a proper randomisation of subjects.

4.61
For example, suppose one would like to study, via abstractions, the role of reputation in the emergence and evolutionary robustness of social order, as in the interesting book recently written by Conte and Paolucci (2002). There, to summarise, the authors formulate a general theory of reputation that would apply to a wide range of different empirical phenomena, from infosocieties to on-line communities, from social clubs to corporate markets. The theory arises from theoretical debates, reviews and discussions, above all from the understandable dissatisfaction expressed by the authors with the way standard game theorists approach the subject. The argument is thoroughly examined via abstractions, with different simulation settings created and compared in order to focus in close detail on every aspect of the theory itself.

4.62
The point is that empirical evidence on reputation as an efficient decentralised mechanism of social control, composed, as the authors argue, of "image" formation and "reputation" circulation, abounds in different social contexts. The main evidence comes from laboratory experiments and from social artefacts such as infosocieties and on-line communities; consider, for example, the cases of eBay, Sporas and Histos (Zacharia and Maes 2000). But even if macro empirical evidence about reputation and social order abounds, this does not automatically imply that the mechanism-based theoretical explanation behind the reputation model is the appropriate one. To test it, collecting data to calibrate the theory at the micro level is the only available means. This could be done both by collecting data on the functioning of reputation-based social artefacts and by running laboratory experiments on social dilemmas, in order to gather empirical evidence on the micro theory behind the model.
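
To fix ideas about the image/reputation distinction the authors draw, here is a minimal sketch in which an agent's "image" of a partner comes from direct experience while "reputation" is what circulates through gossip; the update rules and weights are our illustrative assumptions, not Conte and Paolucci's specification.

    from collections import defaultdict

    class Evaluator:
        """Keeps a private image of partners and circulates reputation reports."""

        def __init__(self, lr=0.2):
            self.image = defaultdict(lambda: 0.5)  # direct evaluation, in [0, 1]
            self.heard = defaultdict(lambda: 0.5)  # circulated reputation
            self.lr = lr

        def observe(self, partner, cooperated):
            """Update the image after a direct interaction."""
            target = 1.0 if cooperated else 0.0
            self.image[partner] += self.lr * (target - self.image[partner])

        def hear(self, partner, report, weight=0.3):
            """Blend a gossiped report into the circulated reputation."""
            self.heard[partner] += weight * (report - self.heard[partner])

        def trusts(self, partner, threshold=0.5):
            """Decision rule mixing direct image and circulated reputation."""
            return 0.5 * self.image[partner] + 0.5 * self.heard[partner] >= threshold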

4.63
In conclusion, the relation between the types of models leads to a great irony (Carley 2002). Abstractions are usually simple models, perceived as transparent and as requiring no empirical data for validation, but they generate only generic knowledge "with a plethora of interpretations", which is difficult both to falsify and to apply. As we have argued, without an empirical foundation the theory cannot be validated. Conversely, case-based models and typifications are perceived as difficult to validate theoretically and to generalise, but they actually generate more knowledge and more specific understandings, so that they are paradoxically easier to falsify.

* Conclusions

5.1
We have emphasised that the quest for empirical validation is important for the development of ABMs in social science. As we argued, the standard view is to consider ABMs either as experimental and 'empirical' methods in their own right, or just as models for the implication analysis of theoretical hypotheses. This is why, for most computational social scientists, validation does not refer to empirical issues.

5.2
Obviously, as we have outlined, internal verification is an important issue for the growth, cumulativeness, standardisation and communicability of ABM results in social science. It is the first leg on which the development of ABMs in the social sciences stands; the quest for empirical calibration and validation is the other. If the two legs are not coordinated, we run the risk of unconsciously generating a limping development.

5.3
Accordingly, the goal of this paper has been to take some steps towards a greater awareness of the importance of methodology and empirical validation in computational social science, arguing for the fruitfulness of embedding ABMs within the entire set of empirical methods for social science. Looking at the ABM literature in the social sciences, we have seen some enlightening examples and some potential "best practices" that have recently emerged on the ground; we have simply tried to give them a classification and an ordered point of reference.

5.4
In conclusion, it is worth remembering once more what Merton (1949) suggested some decades ago: the challenge within the reach of social science is neither to produce big, broad, general theories of everything, nor to spend time on empirical accounts per se, but to formalise, test, use and extend theoretical models able to shed light on the causal mechanisms behind the complexity of empirical phenomena.

* Acknowledgements

For some enriching discussions on the issues addressed in the paper, we would like to thank Nigel Gilbert and the participants in EPOS 2004, in particular Scott Moss, Klaus G. Troitzsch, Nuno David, and Bernd Oliver Heine. Their useful remarks have allowed us to further clarify our understanding of the matter in several respects. Finally, we would like to thank two anonymous referees, whose challenging remarks gave us the chance to revise and further improve the paper. The usual disclaimers apply.


* Notes

1Analytical sociology rests upon a strong and sound scientific tradition. Apart from the traditional reference to Max Weber, the most important influences have been, to name just a few: the “middle range theory” approach suggested by Merton (1949), the theory of social action put forward by Boudon (1979) and Elster (1979, 2000), the tipping point models and the idea of micro-macro emergence popularised by Schelling (1971), and the famous “Coleman boat” (Coleman 1990), which is increasingly conceived as a general theoretical framework for explaining social phenomena. For a good introduction, a summary of the state of the 'analytical' art and some examples of mechanism-based generative models in sociology, see Hedström and Swedberg 1998.

2As Coser (1977) stressed, there are at least three kinds of “ideal types” in the Weberian sense. The first is the historically rooted ideal type, such as the well-known cases of “the protestant ethic” or “capitalism”. The second refers to abstract concepts of social reality, such as “bureaucracy”, while the third refers to a rationalised typology of social action; this last is the case of economic theory and rational choice theory. These are the different possible meanings of the term “ideal type”. In our view, the first two meanings refer to heuristic theoretical constructs that aim at understanding empirical reality, while the third refers to “pure” theoretical (as well as normative) aims. Such redundancy in the meaning of the term has been strongly criticised. According to our taxonomy, typifications include just the first two meanings of the Weberian “ideal type”, while the third meaning corresponds to what we call abstractions.

3The choice of the research path for effectively addressing the scholar's question, and the choice of the kind of ABM to exploit in that attempt, are outside the scope of the present work. We simply want to underline further that the ABM taxonomy is introduced to shed light on the relationship between models and empirical data, and that such a classification does not constrain the flexibility of research paths.

4For a good introduction to the Anasazi, see Stuart 2000 and Morrow and Price 1997.

5The model is based on the Sugarscape platform developed by Epstein and Axtell 1996.

6Most of these efforts have been supported by the findings of a previous field survey, the “Long House Valley Project”, realised by a multidisciplinary team from the Museum of Northern Arizona, the Laboratory of Tree-Ring Research of the University of Arizona, and the Southwestern Anthropological Research Group. The result was a database which was translated and integrated into the Anasazi model.

7It is also important to consider that, in the area, the only technological innovation introduced in the period from A.D. 200 to A.D. 1450 was a more efficient way of grinding maize.

8Even if the case of the water demand model shows how stakeholder involvement brought relevant qualitative data into the model, data previously unknowable to the model maker, it is worth noting that, in many cases, such involvement may also give the model maker access to quantitative data that would otherwise be unknowable, for instance because they are not common knowledge or are protected from outside access. This may be a common situation when, for example, the model target includes a firm or a corporate actor.


* References

ALBINO V, CARBONARA N and GIANNOCCARO I (2003) Coordination Mechanisms Based on Cooperation and Competition within Industrial Districts: An Agent-Based Computational Approach. Journal of Artificial Societies and Social Simulation, Vol. 6, No. 4 https://www.jasss.org/6/4/3.html.

AXELROD R (1984) The Evolution of Cooperation. New York: Basic Books.

AXELROD R (1997) The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton, New Jersey: Princeton University Press.

AXELROD R (1998) Advancing the Art of Simulation in the Social Sciences. Complexity, 3, 2. pp. 16-22.

AXELROD R, RIOLO R L and COHEN M D (2002) Beyond Geography: Cooperation with Persistent Links in the Absence of Clustered Neighborhoods. Personality and Social Psychology Review, 6, 4. pp. 341-346.

AXTELL R (1999) Why Agents? On the Varied Motivations for Agent Computing in the Social Sciences. Center on Social and Economic Dynamics, Working Paper No. 17. http://www.brookings.edu/es/dynamics/papers/agents/agents.

AXTELL R, AXELROD R, EPSTEIN J M, and COHEN M D (1996) Aligning Simulation Models: A Case Study and Results. Computational and Mathematical Organization Theory, 1, 2. pp. 123-141.

BARBERA F (2004) Meccanismi sociali. Elementi di sociologia analitica. Bologna: il Mulino.

BARRETEAU O, BOUSQUET F, ATTONATY J M (2001) Role-Playing Games for Opening the Black Box of Multi-Agent Systems: Method and Lessons of Its Application to Senegal River Valley Irrigated Systems. Journal of Artificial Societies and Social Simulation, Vol. 4, No. 2 . https://www.jasss.org/4/2/5.html.

BELUSSI F and GOTTARDI G (Eds.) (2000) Evolutionary Patterns of Local Industrial Systems: Towards a Cognitive Approach to Industrial Districts. Aldershot Brookfield Singapore Sidney: Ashgate.

BELUSSI F, GOTTARDI G, RULLANI E (2003) The Technological Evolution of Industrial Districts. Boston/Dordrecht/New York/London: Kluwer Academic Publishers.

BOERO R, CASTELLANI M, SQUAZZONI F (2004) "Cognitive Identity and Social Reflexivity of the Industrial District Firms. Going Beyond the 'Complexity Effect' with an Agent-Based Computational Prototype". In (Eds) Lindemann G, Moldt D, Paolucci M, Regulated Agent-Based Social Systems. Berlin Heidelberg: Springer Verlag. pp. 48-69.

BORRELLI F, PONSIGLIONE C, IANDOLI L and ZOLLO G (2005) Inter-Organizational Learning and Collective Memory in Small Firms Clusters: An Agent-Based Model. Journal of Artificial Societies and Social Simulation, Vol. 8, No. 3. https://www.jasss.org/8/3/4.html.

BOUDON R (1979) "Generative Models as a Research Strategy". In (Eds) Merton R K, Coleman J S, and Rossi P H, Qualitative and Quantitative Social Research: Papers in Honor of Paul F. Lazarsfeld. New York: The Free Press. pp. 51-64.

BOUSQUET F et al. (2003) "Multi-Agent Systems and Role Games: Collective Learning Processes for Ecosystem Management". In Janssen M A (Ed.), Complexity and EcoSystem Management. The Theory and the Practice of Multi-Agent Systems. Cheltenham UK, Northampton, Massachusetts: Edward Elgar. pp. 243-280.

BRENNER T (2001) Simulating the Evolution of Localized Industrial Clusters-An Identification of the Basic Mechanisms. Journal of Artificial Societies and Social Simulation, Vol. 4, No. 3. http://www-soc.surrey.ac.uk/JASSS/4/3/4.html.

BRENNER T and MURMANN J P (2003) The Use of Simulations in Developing Robust Knowledge about Causal Processes: Methodological Considerations and an Application to Industrial Evolution. Papers on Economics and Evolution 0303. Max Planck Institute.

BRUCH E E and MARE R D (2005), Neighborhood Choice and Neighborhood Change. California Center for Population Research, Working Paper Series, CCPR-007-04.

BRUSCO S, MINERVA T, POLI I and SOLINAS G (2002) Un automa cellulare per lo studio del distretto industriale. Politica Economica, 18. pp. 147-192.

BURTON R M (1998) "Validating and Docking: An Overview, Summary and Challenge". In Prietula M, Carley K and Gasser L (Eds.), Simulating Societies: Computational Models of Institutions and Groups. Cambridge, Massachusetts: AAAI/MIT Press.

CARLEY K M (2002) "Simulating Society: The Tension between Transparency and Veridicality". In Macal C and Sallach D (Eds.), Proceedings of the Agent 2002 Conference on Social Agents: Ecology, Exchange and Evolution. Argonne, Illinois: Argonne National Laboratory. pp. 103-114.

CARLSON L et al. (2002) "Empirical Foundations for Agent-Based Modeling: How Do Institutions Affect Agents' Land-Use Decision Processes in Indiana?". In Macal C and Sallach D (Eds.), Proceedings of the Agent 2002 Conference on Social Agents: Ecology, Exchange and Evolution. Argonne, Illinois: Argonne National Laboratory. pp. 133-148.

CASTI J (1994) Complexification. Explaining a Paradoxical World through the Science of Surprise. New York: Harper Collins.

CIOFFI-REVILLA C and GOTTS N M (2003) Comparative Analysis of Agent-Based Social Simulations: GeoSim and FEARLUS models. In Proceedings, International Workshop M2M: "Model to Model" Workshop on Comparing Multi-Agent Based Simulation Models. Marseille, France, March 31-April 1, 2003, pp. 91-110.

COLEMAN J (1990) Foundations of Social Theory. Cambridge, MA: Harvard University Press.

CONTE R and PAOLUCCI M (2002) Reputation in Artificial Societies. Social Beliefs for Social Order. Dordrecht: Kluwer Academic Publishers.

COSER L A (1977) Masters of Sociological Thought: Ideas in Historical and Social Context. New York: Harcourt Brace Jovanovich.

DAL FORNO A and MERLONE U (2004) From Classroom Experiments to Computer Code. Journal of Artificial Societies and Social Simulation, Vol. 7, No. 3. https://www.jasss.org/7/3/2.html.

DEAN J S, GUMERMAN G J, EPSTEIN J M, AXTELL R, SWEDLUND A C, PARKER M T, MCCARROLL S (2000) Understanding Anasazi Culture Change Through Agent-Based Modeling. In (Eds.) Kohler T A, Gumerman G J, Dynamics in Human and Primate Societies. Agent-Based Modeling of Social and Spatial Processes. Santa Fe Institute Studies in the Sciences of Complexity, New York: Oxford University Press. pp. 179-205.

DUFFY J (2004) Agent-Based Models and Human Subject Experiments, in K.L. Judd and L. Tesfatsion (Eds.) Handbook of Computational Economics vol. 2, Elsevier, Amsterdam, forthcoming.

EDMONDS B and HALES D (2003) Replication, Replication, Replication: Some Hard Lessons from Model Alignment. Journal of Artificial Societies and Social Simulation, Vol. 6, No. 4. https://www.jasss.org/6/4/11.html.

EDMONDS B and MOSS S (2005) From KISS to KIDS: An 'Anti-Simplistic' Modelling Approach. In (Eds) Davidsson P et al., Multi Agent Based Simulation, Lectures Notes in Artificial Intelligence, 3415, Heidelberg: Springer-Verlag. pp. 130-144.

ELIAS N (1939) The Civilizing Process. Oxford, Blackwell, 1994.

ELIAS N (1969) The Court Society. New York, Pantheon, 1983.

ELIASSON G and TAYMAZ E (2000) Institutions, Entrepreneurship, Economic Flexibility and Growth - Experiments on an Evolutionary Micro-to-Macro Model. In (Eds) Cantner U, Hanusch H and Klepper S, Economic Evolution, Learning, and Complexity, Heidelberg: Springer-Verlag. pp. 265-286.

ELSTER J (1979) Ulysses and the Sirens: Studies in Rationality and Irrationality. Cambridge: Cambridge University Press.

ELSTER J (1998) "A Plea for Mechanisms". In (Eds.) Hedstrom P and Swedberg R, Social Mechanisms. An Analytical Approach to Social Theory. Cambridge: Cambridge University Press. pp. 45-73.

ELSTER J (2000) Ulysses Unbound. Studies in Rationality, Precommitment and Constraints. Cambridge: Cambridge University Press.

EPSTEIN J M (1999) Agent-Based Models and Generative Social Science. Complexity, 4, 5. pp. 41-60.

EPSTEIN J M and AXTELL R (1996) Growing Artificial Societies. Social Science from the Bottom-Up. Cambridge, Massachusetts: The MIT Press.

ETIENNE M, LE PAGE C and COHEN M (2003) A Step-by-Step Approach to Building Land Management Scenarios Based on Multiple Viewpoints on Multi-Agent System Simulations. Journal of Artificial Societies and Social Simulation, Vol. 6, No. 2. https://www.jasss.org/6/2/2.html.

FIORETTI G (2001) Information Structure and Behaviour of a Textile Industrial District. Journal of Artificial Societies and Social Simulation, Vol. 4, No. 4. https://www.jasss.org/4/4/1.html.

GILBERT N (2002) "Varieties of Emergence". In (ed) Sallach D, Social Agents: Ecology, Exchange, and Evolution. Agent 2002 Conference. University of Chicago and Argonne National Laboratory. pp. 41-54.

GILBERT N and TROITZSCH K G (1999) Simulation for the Social Scientist. Buckingham Philadelphia: Open University Press.

GOLDTHORPE J H (2000) On Sociology. Oxford: Oxford University Press.

GOTTS N M, POLHILL J G and LAW A N R (2003a) Agent-Based Simulation in the Study of Social Dilemmas. Artificial Intelligence Review, 19. pp. 3-92.

GOTTS N M, POLHILL J G and LAW A N R (2003b) Aspiration levels in a land use simulation. Cybernetics & Systems, 34 (8). pp. 663-683

GOTTS N M, POLHILL J G, LAW A N R and IZQUIERDO L R (2003) Dynamics of Imitation in a Land Use Simulation. In Proceedings of the AISB '03 Second International Symposium on Imitation in Animals and Artifacts, 7-11 April 2003, University of Wales, Aberystwyth. pp. 39-46.

GOULD S J and ELDREDGE N (1972) Punctuated Equilibria: The Tempo and Mode of Evolution Reconsidered. Paleobiology, 3. pp. 115-151.

GUMERMAN G J, SWEDLUND A C, DEAN J S, EPSTEIN J M (2002) The Evolution of Social Behavior in the Prehistoric American Southwest. Santa Fe Institute, Working Paper.

GUMERMAN G J and KOHLER T A (1996) Creating Alternative Cultural Histories in the Prehistoric Southwest: Agent-Based Modeling in Archeology. Santa Fe Institute, Working Paper.

HEDSTRÖM P and SWEDBERG R (1998) (eds), Social Mechanisms: An Analytical Approach to Social Theory, Cambridge, Cambridge University Press.

HOFFMANN R (2000) Twenty Years on: The Evolution of Cooperation Revisited. Journal of Artificial Societies and Social Simulation, Vol. 3, No. 2 . https://www.jasss.org/3/2/forum/1.html.

IZQUIERDO L R, GOTTS N M and POLHILL J G (2003) FEARLUS-W: An Agent-Based Model of River Basin Land Use and Water Management. Paper presented at "Framing Land Use Dynamics: Integrating knowledge on spatial dynamics in socio-economic and environmental systems for spatial planning in western urbanized countries", International Conference 16-18 April 2003, Utrecht University, The Netherlands.

JACKSON D, HOLCOMBE M, RATNIEKS F (2004) Coupled Computational Simulation and Empirical Research into Foraging Systems of Pharaoh's Ant (Monomorium Pharaonis). Biosystems, 76. pp. 101-112.

KING G, VERBA S and KEOHANE R O (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.

KIRMAN A P and VRIEND N (2000) "Evolving Market Structure: A Model of Price Dispersion and Loyalty for the Marseille Fish Market". In Delli Gatti D, Gallegati M and Kirman A P (Eds.), Interaction and Market Structure. Berlin Heidelberg: Springer Verlag.

KIRMAN A P and VRIEND N (2001) Evolving market structure: an ACE model of price dispersion and loyalty. Journal of Economic Dynamics and Control, 25. pp. 459-502.

LANE D A (2002) "Complexity and local interactions: Towards a theory of industrial districts". In (Eds.) Quadrio Curzio A and Fortis M, Complexity and Industrial Clusters: Dynamics and Models in Theory and Practice. Berlin: Springer Verlag.

MACY M W and WILLER R (2002) From Factors to Actors: Computational Sociology and Agent-Based Modelling. Annual Review of Sociology, 28. pp. 143-166.

MAHONEY J (2001) Beyond Correlational Analysis: Recent Innovations in Theory and Method. Sociological Forum, 16, 3. pp. 575-593.

MANSON S M (2003) "Validation and Verification of Multi-Agent Systems". In Janssen M A (Ed.), Complexity and EcoSystem Management. The Theory and the Practice of Multi-Agent Systems. Cheltenham UK, Northampton, Massachusetts: Edward Elgar. pp. 58-69.

MERTON R K (1949) Social Theory and Social Structure. New York: Free Press.

MORROW B H and PRICE V B (1997) Anasazi Architecture and American Design. Albuquerque: University of New Mexico Press.

MOSS S (1998) Critical Incident Management: An Empirically Derived Computational Model. Journal of Artificial Societies and Social Simulation, Vol. 1, No. 4. https://www.jasss.org/1/4/1.html.

MOSS S and EDMONDS B (2004) Towards Good Social Science. Centre for Policy Modelling Report.

MOSS S and EDMONDS B (2005) Sociology and Simulation: Statistical and Qualitative Cross-Validation. American Journal of Sociology, 110, 4. pp. 1095-1131.

PANCS R and VRIEND N J (2003) Schelling's Spatial Proximity Model of Segregation Revisited. University of London, Department of Economics, Working Paper No. 487.

PAWSON R and TILLEY N (1997) Realistic Evaluation. London: Sage.

POLHILL J G, GOTTS N M and LAW A N R (2001) Imitative and nonimitative strategies in a land use simulation. Cybernetics & Systems, 32 (1-2). pp. 285-307.

POLHILL J G, IZQUIERDO L R and GOTTS N M (2005) The Ghost in the Model (and Other Effects of Floating Point Arithmetic). Journal of Artificial Societies and Social Simulation, Vol. 8, No. 1. https://www.jasss.org/8/1/5.html.

PRIETULA M K, CARLEY K and GASSER L (1998) Simulating Organizations: Computational Models of Institutions and Groups. Cambridge, Massachusetts: The MIT Press.

PYKA A and AHRWEILER P (2004) "Applied Evolutionary Economics and Social Simulation", Journal of Artificial Societies and Social Simulation, Vol. 7, No. 2. https://www.jasss.org/7/2/6.html.

RAGIN C C (1987) The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkley: University of California Press.

SAWYER R K (2003) Artificial Societies. Multi-Agent Systems and the Micro-Macro Link in Sociological Theory. Sociological Methods and Research, Vol. 31, No. 3. pp. 325-363.

SCHELLING T (1971) Dynamic Models of Segregation. Journal of Mathematical Sociology, 1. pp.143-186.

SQUAZZONI F and BOERO R (2002) Economic Performance, Inter-Firm Relations and "Local Institutional Engineering" in an Agent-Based Computational Prototype of Industrial Districts. Journal of Artificial Societies and Social Simulation, Vol. 5, No. 1. https://www.jasss.org/5/1/1.html.

SQUAZZONI F and BOERO R (2005) "Towards an Agent-Based Computational Sociology. Good Reasons to Strengthen a Cross Fertilization Between Complexity and Sociology". In Stoneham (Ed.), Advance in Sociology Research II, New York: NovaScience Publishers. pp. 103-133.

STAME N (2004) Theory-Based Evaluation and Types of Complexity. Evaluation, vol. 10, 1. pp. 58-76.

STUART D E (2000) Anasazi America. Albuquerque: University of New Mexico Press.

TROITZSCH K G (2004) "Validating Simulation Models". Proceedings of 18th European Simulation Multiconference on Networked Simulation and Simulation Networks, SCS Publishing House. pp. 265-270.

WARE J (1995) George Gumerman: The Long View. Santa Fe Institute Bulletin, Winter.

WEBER M (1904) Objectivity of Social Science and Social Policy. In The Methodology of the Social Sciences. Edward Shils and Henry Finch (Eds.), 1949, New York: Free Press.

WERKER C and BRENNER T (2004) Empirical Calibration of Simulation Models. ECIS Working Paper 04.013.

WILLER D and WEBSTER M (1970) Theoretical Concepts and Observables. American Sociological Review, 35. pp. 748-757.

ZACHARIA G and MAES P (2000) Trust Management through Reputation Mechanisms. Applied Artificial Intelligence, 14. pp. 881-907.

ZHANG J (2003) Growing Silicon Valley on a Landscape: an Agent-Based Approach to High-Tech Industrial Clusters. Journal of Evolutionary Economics, 13. pp. 529-548.

----


© Copyright Journal of Artificial Societies and Social Simulation, [2005]