* Abstract

Agent Based Modelling (ABM), a promising scientific toolset, has received criticism from some, in part, due to a claimed lack of scientific rigour, especially in the communication of its methods and results. To test the veracity of these claims, we conduct a structured analysis of over 900 scientific objects (figures, tables, or equations) that arose from 128 ABM papers published in the Journal of Artificial Societies and Social Simulation (JASSS), during the period 2001 to 2012 inclusive. Regrettably, we find considerable evidence in support of the detractors of ABM as a scientific enterprise: elementary plotting attributes are left off more often than not; basic information, such as the number of replicates or the basis behind a particular statistic, is not included; and few, if any, established methodological communication standards are apparent. In short, 'anarchy reigns'. Whilst the study was confined to ABM papers of JASSS, we conclude that if the ABM community wishes its approach to be accepted further afield, authors, reviewers, and editors should take the results of our work as a wake-up call.

Agent Based Modelling, Social Sciences, Simulation, Publishing

* Introduction

"While the theoretical and experimental foundations of agent-based systems are becoming increasingly well understood, comparatively little effort has been devoted to understanding the pragmatics of (multi-)agent systems development — the everyday reality of carrying out an agent-based development project. As a result, agent system developers are needlessly repeating the same mistakes, with the result that, at best, resources are wasted — at worst, projects fail."

"It is widely acknowledged that … agent-based models, can play an important role in fostering understanding of the dynamics of complex systems. … However, current [agent-based] modelling practice has two substantial shortcomings: (1) The reasoning behind the choice of a certain human decision model is often not well documented; insufficient empirical or theoretical foundations are given; or the decision model is only assumed on an ad-hoc basis … (2) Often the model is not described in a transparent manner (clear and complete) that would allow for reproducibility and facilitate the communication of the model and its results."

The two quotations above concisely describe a tragedy in the storied history of Agent-Based Modelling (ABM). The tragedy is that, whilst describing essentially the same paradox – the promise of ABM approaches in the social sciences juxtaposed against the lack of well-developed practices in ABM science – the first quote (Wooldridge & Jennings 1998) predates the second (Müller et al. 2013) by 15 years.[1]

Of course, during this period – one which has seen a new global financial crisis (GFC), the emergence of several potentially pandemic infectious diseases, and the rise and rise of network-based social media platforms – the champions of ABM methodologies from all relevant fields have re-emphasised, in prominent publishing platforms, the need for, and renewed relevance of, ABM methods: Doyne Farmer and Duncan Foley (2009), Mark Buchanan (2009), and Jean-Philippe Bouchaud (2008) all leveraged the 'meltdown' of the 2007–2009 GFC to call for ABMs in Economics at Nature; whilst Joshua Epstein, also writing in Nature, pointed to the network complexity inherent in the emerging H1N1 outbreak of 2009 to argue how suited ABMs are to modelling infectious diseases (Epstein 2009). In short, the promise of the methodology seems, during this period, alive and well.

Meanwhile timely and compelling articles of the generic form "Field X, meet ABM …," were written for, amongst others, the 'human systems' modelling community by Eric Bonabeau in the US flagship, Proceedings of the National Academy of Sciences (Bonabeau 2002), the mind–action socio-economic community by Nigel Gilbert and Pietro Terna (2000) in the (then brand new) Mind and Society journal, and for the sociology community by Michael Macy and Robert Willer in the Annual Review of Sociology (2002), each arriving at the start of this crucial period.

In terms of results, a few ABM practitioners have found a receptive audience for their insightful works. Creative, unashamedly ABM, papers have found their way into top field journals (e.g. in the Economic sciences: American Economic Review (albeit, Papers & Proceedings of the AEA), Journal of Economic Dynamics and Control, Journal of Economic Behavior & Organization) and generalist journals (e.g. Science) during this period (Geanakoplos et al. 2012; Howitt & Clower 2000; Dosi et al. 2010; Lim et al. 2007).

However, outside of these few high-points of the ABM social sciences, ABM studies in general have fared rather less well. Leombruni et al.'s (2005) survey of the top 20 Economic and 10 Social sciences journals found only a handful of published studies (7 in the former, 11 in the latter) up to that point in time which used an ABM methodology. Recent publishing practices seem little different.

Various explanations are given for ABM's difficult reception in field journals. Chief amongst these seems to be a perceived lack of what might be called 'intuitive transparency' relative to the closed-form, deductive toolsets available. Joshua M Epstein (2006), in his classic contribution to the Handbook of Computational Economics (vol 2), writes (ch 34),
"The real reason some mathematical social scientists don't like computational agent-based modelling is not that the approach is empirically weak (in notable areas, it's empirically stronger than the neoclassical approach). It's that it isn't beautiful."

In a similar direction, Leombruni and Richiardi (2005) consider that the two key perceived problems 'Economists' have with ABM studies (lack of generalisability, and identification or estimation problems) stand behind the headline comment that ABMs 'don't prove anything'. (Case in point: Leombruni and Richiardi's paper was published not by a progressive, forward-looking Economics field journal, but rather by Physica A: Statistical Mechanics and its Applications – not, one assumes, a journal commonly read by most economists.)

Whilst the 'proof' (or 'beauty') problem is rather over-stated by field editors and reviewers (and is handled very well by the authors just mentioned), a lingering problem remains: the communication of ABM methods and results – this is an ABM science practice issue.

The simple fact is that ABM studies often draw their heritage not from the deductive mathematical sciences, but from the sciences of software design and numerical simulation. And here, ABM practitioners in the social sciences seem to have suffered heavily from a lack of well-established communication tools and standards. Richiardi et al. (2006) capture this point well (paragraph 1.5),
"Agent-based models have solid methodological foundations. However, the greater freedom they have granted to researchers (in terms of model design) has often degenerated in a sort of anarchy (in terms of design, analysis and presentation)." [emphasis added]

They go on to elaborate this anarchy as follows (paragraph 1.5),
"a) There is no clear classification of the different ways in which agents can exchange and communicate: every model proposes its own interaction structure.

"b) There is not a standard way to treat the artificial data stemming from the simulation runs, in order to provide a description of the dynamics of the system, and many articles seem to ignore the basics of experimental design. Often, the comparison between artificial and real data is overly naïf, and the parameters' values are chosen without proper discussion; and

"c) Too often, it is not possible to understand the details of the implementation of an agent-based simulation. This makes replication a difficult, sometimes impossible task, thus violating the basic principle of scientific practice and confining the knowledge generated by agent-based simulations to no more than anecdotal evidence." [enumeration added]

… Anarchy indeed. Sadly, and bringing us back to the opening quotation from Wooldridge and Jennings' (1998) article, this exact problem was identified (understandably then) for the 'new field' of ABM eight years prior (section 8.2),
"In a field as new as agent systems, there are few established standards that a developer can make use of when building the agent specific components of an application."

Since this time, there have, of course, been real attempts to order the 'anarchy' of ABM development and communication. As early as 2000 the powerful Unified Modeling Language (UML) was articulated to include agents (Odell et al. 2000) and so offered a potential standardisation candidate (at least of one aspect of ABM development), whilst the Overview, Design concepts, and Details (ODD) (Grimm et al. 2006) and ODD+D (ODD + 'Decision', Müller et al. 2013) protocols of 2006 and 2013 respectively arose from an initiative which sought to respond directly to the 'anarchy' problem of ABM development. Meanwhile, other authors have focussed their contributions on specific areas of ABM practice such as model validation (Law 2005; Windrum et al. 2007; Moss 2008), validation and replication (Heath et al. 2009), and two-dimensional, spatial visualisation (Kornhauser et al. 2009). Whilst these approaches are all highly relevant and important (and in some focussed areas of the literature appear to be having an impact on the quality of ABM practice, Grimm et al. 2010), we wish to provide the community with a slightly more quantitative perspective on the state of affairs.

Specifically, in this study, we focus on the third (c) aspect of Richiardi et al.'s (2006) 'anarchy' enumeration – that of the basic communication of methodologies and results via a study's visual short-hand – the tables, figures and equations that it employs to describe its science (what we shall call a paper's 'objects'). We consider that this simple aspect has received little attention in the standardisation efforts to date. Where previous authors have spent their time on the development and design aspects of ABMs, it would appear that the community has largely assumed that the quality of basic components of ABM communication was in good order. We contend that even if a model is described accurately and helpfully, perhaps using the ODD or ODD+D protocol, and even if the model has been well validated, it will still fail as a piece of science if its key methodological and results objects are of poor quality. This distinction matters perhaps more so in our visually-orientated, short-attention-span era of scientific publishing. Indeed, the so-called 'mega-journal' PLOS ONE exemplifies this trend by asking authors to nominate a 'striking image' (for our study: 'object') during submission. They presumably know well the value of such an image for their social media and marketing platform integrations.

We conduct a structured analysis of over 900 scientific objects (figures, tables, or equations) that arose from 128 ABM papers published in JASSS, during the period 2001 to 2012 inclusive. By focussing on a single outlet, we reduce any inter-journal variance on publishing and editorial standards and allow authors' own practices to come to the fore.

The journal JASSS was chosen for the study for two principal reasons: first, it is well-regarded as a general social sciences ABM study 'clearing-house' with an active and engaged readership, so stands well for cross-disciplinary social sciences ABM publishing trends; and second, JASSS, being an open-access, online, HTML-based journal, lends itself readily to the multi-year study we propose – papers could be identified, harvested and analysed with ease. We note also that JASSS acts as a crucial 'reverberation board' for the transmission of relevant ideas between the social and physical sciences (Squazzoni & Casnici 2013).

Finally, the study period 2001-2012 covers this crucial 'early decade' of social sciences ABM science. As mentioned earlier, not only were multiple, key contributions made to introduce ABM methods to various social sciences fields at the start of our study period, but also the period covers almost all of the major contributions to the standardisation program of ABM social science model development and communication which has been ongoing since at least 2000. If these efforts have had any early impacts on ABM communication practices, these should be evident in our study.

* The ABM Paper Sample

Enrolling JASSS Publications into the Primary Database

World-wide-web (WWW) links to all scientific articles (not reviews, forum etc.) published in JASSS from volume 1 to 15 (1998-2012) were obtained from the index page (e.g. https://www.jasss.org/1/1/1.html), producing 487 links.

A script was written to download the .html file of each article from the JASSS site and then process the meta-data of the HTML files to obtain the paper's fields (title, authors, keywords, abstract, volume, issue, date of publication). In seven cases, the meta-data format deviated from the JASSS standard, causing the articles to be dropped from the sample (e.g. https://www.jasss.org/11/4/3.html). This produced 480 articles with successfully harvested meta-data.
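The harvesting step can be sketched as follows. This is a minimal illustration only, using Python's standard-library HTML parser; the original script is not reproduced in this work, and the meta-tag names, required fields, and drop rule shown here are assumptions for the sketch.

```python
from html.parser import HTMLParser

class MetaHarvester(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from an article page."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name, content = attrs.get("name"), attrs.get("content")
            if name and content:
                self.fields.setdefault(name.lower(), []).append(content)

def harvest(html_text, required=("title", "author", "subject")):
    """Return the harvested meta-data fields, or None where the page
    deviates from the expected template (such articles were dropped)."""
    parser = MetaHarvester()
    parser.feed(html_text)
    if not all(key in parser.fields for key in required):
        return None  # deviant meta-data format: drop the article
    return parser.fields

# A hypothetical page fragment in the assumed template
page = """<html><head>
<meta name="title" content="An Example ABM Paper">
<meta name="author" content="A. Author">
<meta name="subject" content="Agent Based Modelling; Social Simulation">
</head><body></body></html>"""
fields = harvest(page)
```

In practice the page text would be fetched from each article URL before parsing; articles whose meta-data could not be harvested in this way were dropped.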

Next, a corpus of unique keywords (the 'Subject' meta-data content in the JASSS template, for an example, see Appendix A) (note, keywords can be phrases) was built, taking care of non-material keyword variations such as the presence or absence of a hyphen. In all, 1501 keywords were gathered in this step.

Finally, an article was enrolled into the primary database if it contained one of the following 'ABM' keywords chosen by the authors after reviewing the unique keyword set, either exactly, or as a keyword root (e.g. 'multiagents', or 'multiagent systems' would match with 'multiagent') (number of matches in parentheses): 'agent based' (151), 'multi agent' (39), 'social simulation' (31), 'individual based' (9), 'multiagent' (8), 'agent simulation' (2), and 'artificial agents' (1). Of the 480 papers, 220 unique papers contained a matching keyword and were enrolled in the primary database. The majority of papers (202) had a single ABM keyword match, 16 matched two ABM keywords, and two matched three.
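The enrolment rule – normalisation of non-material variation followed by exact or root matching against the ABM keyword list – can be sketched as below. The function names and the paper-record structure are illustrative assumptions, not the original code.

```python
def normalise(kw):
    """Collapse non-material variation: case, and hyphens vs. spaces."""
    return kw.lower().replace("-", " ").strip()

# The seven ABM keyword roots chosen after reviewing the keyword corpus
ABM_ROOTS = ["agent based", "multi agent", "social simulation",
             "individual based", "multiagent", "agent simulation",
             "artificial agents"]

def is_abm_keyword(kw):
    """True if the keyword matches a root exactly or as a keyword root
    (e.g. 'multiagents' and 'multiagent systems' match 'multiagent')."""
    kw = normalise(kw)
    return any(kw == root or kw.startswith(root) for root in ABM_ROOTS)

def enrol(papers):
    """Enrol papers whose keyword list contains at least one ABM keyword."""
    return [p for p in papers if any(is_abm_keyword(k) for k in p["keywords"])]

papers = [{"id": 1, "keywords": ["agent-based model", "trust"]},
          {"id": 2, "keywords": ["game theory"]}]
enrolled = enrol(papers)
```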

To assist with replication, a full listing of the resulting 937 objects, their home paper ID, and basic descriptors is given in a comma-separated value file online with this work.

* Methods

Defining & Validating the Object Taxonomy

Since, to our knowledge, no clear taxonomy of objects exists in the literature, the authors set about building a useful and easily applied taxonomy to describe the publication practices in each ABM paper. Our methodology has been guided by experience arising from related social sciences taxonomy/encoding exercises, albeit of a textual nature (Hara et al. 2000; Rourke & Anderson 2004). Building the taxonomy organically proceeded in six cyclical steps:
  1. Two authors reviewed the methods and results objects contained within a randomly selected 20 paper sub-sample of the primary database (Sample A).
  2. A draft taxonomy was compiled collaboratively, including hierarchical descriptions.
  3. The draft taxonomy was then applied, independently by two authors (SDA, BH-M) to Sample A.
  4. The same authors then met to discuss disagreements and imperfections in the draft taxonomy leading to a refined taxonomy.
  5. The refined taxonomy was then applied independently by two authors (SDA, BH-M) to a further 20 paper random sub-sample from the primary database (Sample B).
  6. A second meeting was then convened between the two authors to validate and further refine the taxonomy leading to the final taxonomy, and clarify its application to the pooled (Samples A + B) 40 paper sample used in previous steps.

Note, since our focus of analysis is on the decisions of authors as to how they present the methods and results of ABM studies, we skip duplicate object types found in any article. That is, only the first instance of a given object type is studied; second and subsequent objects having the same general attributes as the first are not included in the analysis. Typically, subsequent objects of the same kind were presented with identical features (or lack of features) as the first object, presumably since authors create figures, or tables, via 'templates'.

Summary of the Object Taxonomy

Table 1: Summary of the Object Taxonomy. Counts of objects at each level of the hierarchy are given in the cells.

table 1

Table 1 provides a summary presentation of the Object Taxonomy. As can be seen, all the objects are first categorised based on whether they are used in the Methods or Results sections. A given object is then labelled as Figure, Table or Equation based on what the authors of the original papers have named them. Finally, for each object, depending on where it has appeared and its type, we collected a range of further information relevant to the object as presented in the table.

Approach to an Object's Surrounding Context

If specific characteristics of the object's nature (e.g. the 'Target' of a Methods Figure or Table, or the Granularity of a Results Figure or Table; refer to Table 1) were not obvious from the contents of the object, or its caption, the information was sought from the surrounding text. However, in the specific case where the simulation results figures were studied for their quality (see final Results section, 'Quality of ABM results plotting over time' below) a more stringent test was applied requiring the key information we were looking for to be present in the figure itself or figure caption only. We are of the view that a results figure should be, as much as possible, self-contained. Here, we are influenced by the widely used (e.g. by the editors of the Proceedings of the National Academy of Sciences of the USA) "Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers" (8th edition) (Council of Science Editors 2014, section 30.3) (see especially section 30.2.2 of this guide, 'Figure Caption').

Examples of Taxonomic Objects

Note: in all cases, except where stated otherwise, example objects from papers are provided without explicit attribution. Our intention is not to single out particular authors, but rather to point to general trends in the discipline. Source JASSS article citation information for any example provided can be sought from the corresponding author.

In Figures 1 to 7 we provide examples of the most frequently used (by incidence in the database) objects employed to communicate methodologies.

figure 1
Figure 1. Example Methods object 'Look-up table'. In this case, each row of the table describes features of a 'resource' used in the model.

figure 2
Figure 2. Example Methods object 'Schematic diagram'. The figure indicates relationships between example agents in the model ('Seller 001', 'Buyer 001') and functions, without adhering to UML formalism, or a 'Flow-chart' presentation.

figure 3
Figure 3. Example Methods object 'Screen-shot'. A typical screen-shot, in this case, annotated and taken from the NetLogo software platform.

In Figures 1, 2 and 3, examples of the 'Look-up Table', 'Schematic Diagram', and 'Screen-shot' objects are presented. In classifying objects, we were directed by the nomenclature of the authors: where 'Figure' was used, the 'Figure' taxonomy object classes were employed, whilst 'Table' induced the 'Table' taxonomy classes. 'Schematic Diagram', as in the example (Figure 2), was used to code any figure which conveyed the relationship between multiple aspects of the model (agents, procedures) but did not conform to either a flow-diagram or UML formalism. We have not sought to classify these figures further, owing to the rich variety of symbolic and relational elements employed by authors.

figure 4
Figure 4. Example Methods object 'Parameter initialisation'. Each row of the table provides parameter values for three settings in the model.

figure 5
Figure 5. Example Methods object 'Processes (behaviours, rules)'. Each row of the table provides definitions and thresholds for various rules and quantities in the model.

figure 6
Figure 6. Example Methods object 'Experimental setup'. Each row of the table presents the choices of decision rules and parameters tested in each experiment.

figure 7
Figure 7. Example Equation objects. a: 'A-temporal', b: 'Discrete-time', and c: 'If-else'.

Figures 4, 5 and 6 provide examples of frequently used 'Table' class objects, being, unsurprisingly, focussed on parameter initialisation, rule definitions, and (numerical simulation) experimental conditions. Figure 7 (a, b and c) provides examples of the prominent equation types used by authors in the methods section.

Training & Encoding the entire Primary Database

Next, a research assistant, familiar with ABM science, was trained in applying the final taxonomy, before applying it to Sample A. A review and training meeting was held with one of the authors (BH-M) to provide guidance and clarification to the research assistant, before they applied the final taxonomy to Sample B. A final training meeting was held with one of the authors (BH-M) to complete the training and ensure strong coherence with the encoding approach of the authors.

Finally, the research assistant went on to encode the entire Primary Database, recording decisions with an online form to facilitate data entry. Validation of the research assistant's application of the final taxonomy to the Primary Database included: on-going, ad-hoc conferring with one of the authors (BH-M); random checking by the same author of the encoding (around 10% of the coded objects in all); and then author checking of any residual 'flagged' objects. A 'flag' was used wherever the research assistant was hesitant for any reason about the correct encoding to use. Finally, a different research assistant reviewed all 'flagged' objects to confirm that the correct taxonomy had been applied.

During coding, a number of papers were found not to be ABM in nature and were dropped from the database following a simple rule: if the paper did not convey the methods and results of a scientific enquiry using an ABM, it was dropped. For example, some papers were found to discuss ABM theory or practice (such as Deichsel and Pyka's (2009) 'A Pragmatic Reading of Friedman's Methodological Essay and What It Tells Us for the Discussion of ABMS') or were intended as 'position papers' for alternative disciplines (such as Ahrweiler's (2011), 'Modelling Theory Communities in Science'). Additionally, papers published prior to 2001 (1998–2000) were dropped due to the small number of resultant ABM papers in these years.

In the end, the Study Database contained 937 objects (139 equations, 599 figures, 199 tables) drawn from 128 papers, spanning the years 2001–2012.

Quantitative analysis

Quantitative analysis was carried out with a combination of tools including Microsoft Excel (pivot tables) and MATLAB (R2014b) (visualisations).

* Results

Summary of the data

Table 2: Unique papers, by year, in the Study Database.

Year Count
2001 4
2002 9
2003 13
2004 7
2005 11
2006 11
2007 14
2008 13
2009 11
2010 18
2011 8
2012 9
Total 128

Table 3: Object counts, by Method or Results, and by Object Type, by year, in the Study Database.

Year   Methods                 Results                 Total
       Eqtn.  Figure  Table    Eqtn.  Figure  Table
2001   0      10      2        0      20      2       34
2002   6      21      6        0      22      5       60
2003   16     32      20       0      29      8       105
2004   8      19      14       0      20      2       63
2005   10     33      6        0      23      7       79
2006   16     15      11       0      29      9       80
2007   17     20      14       0      37      10      98
2008   21     29      10       0      37      6       103
2009   9      26      9        0      27      8       79
2010   15     36      16       1      39      9       116
2011   5      22      9        0      12      3       51
2012   15     20      7        0      21      6       69
Total  138    283     124      1      316     75      937

Tables 2 and 3 present summary statistics for papers and objects, respectively, in the Study Database. Due to the relatively small number of papers enrolled each year over the period, communication practices over time will be studied at a multi-year aggregate level below. Perhaps unsurprisingly, Table 3 demonstrates the tendency to convey the methodological aspects of a study with a balance of equations, figures and tables, whilst results are seldom communicated (or analysed) in equation form (a point we return to later) and largely find expression in graphical (figure) form.

To analyse publishing practices over time, we study the incidence of papers which have at least one object of a given type. As presented in Tables 2 and 3, it is obvious that some years have a small (<5) number of papers or instances of a given object type. Thus, to draw meaningful conclusions, we aggregate over four three-year periods: 2001–2003, 2004–2006, 2007–2009, and 2010–2012 (inclusive).
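The aggregation into three-year bands is a simple mapping from publication year to period label, which can be sketched as (band labels follow the tables below; the function itself is illustrative):

```python
def period(year):
    """Map a publication year (2001-2012) to its three-year band."""
    bands = {0: "'01-'03", 1: "'04-'06", 2: "'07-'09", 3: "'10-'12"}
    return bands[(year - 2001) // 3]
```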

Temporal patterns in methodological presentation

Table 4: ABM methods object use over time. Percent of papers in the database published within the given period, having at least one methods object of a given type, sorted by descending incidence in final period.

Type '01-'03 '04-'06 '07-'09 '10-'12
Look-up-table 50 59 58 66
Schematic diagram 58 45 58 57
Screen-shot 35 31 32 37
Pseudo-code 0 17 16 26
XY plot of various type 15 3 11 14
Raw-code 15 7 11 11
Flow-chart 19 28 18 9
Map 12 3 5 9
Matrix 0 10 11 6
UML 8 10 11 3
Example-agent 12 17 0 0
Table (other) 12 0 0 0
Misc (all others) 23 21 11 20
Relevant papers in period (count) 26 29 38 35

figure 8
Figure 8. Patterns in ABM methods object use over time. Each bubble represents the percentage of papers in the dataset published during the given three year band which included at least one of the objects indicated. Note: sizing of the bubbles is non-linear.

In Table 4, and visualised in Figure 8, trends in publishing practices for methodological presentation are studied. We use the incidence-rate of an object, defined as the fraction of relevant papers (i.e. those including methodological objects) in the given period in the Study Database which utilised the object at least once, as the summary measure of the practices undertaken by authors. We note that percentages need not sum to 100 within a column, as one paper may exhibit more than one object type.

For the sake of the analysis, we pool Figure and Table object types, which together give 25 unique methods objects. Further, we focus on only those objects which obtain an incidence-rate of at least 10% in at least one of the four periods. Twelve object types (Table 4) fit this criterion, covering between 80% and 89% of all relevant papers in each period. Prominent object types included in 'Misc (all others)' include 'Algorithm', '3D model of environment', 'Agent type histogram', and 'Video' (recall, JASSS is wholly published online and multi-media is encouraged, though apparently used sparingly).
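The incidence-rate measure – the percentage of relevant papers in a period using an object type at least once, with duplicates within a paper counted once – can be sketched as follows. The paper-record structure is an illustrative assumption.

```python
from collections import defaultdict

def incidence_rates(papers):
    """papers: iterable of dicts {'id': ..., 'period': ..., 'objects': set
    of methods-object types}. Returns {(period, type): percent of papers
    in that period with at least one object of that type}."""
    papers_in_period = defaultdict(set)   # period -> paper ids
    users = defaultdict(set)              # (period, type) -> paper ids
    for p in papers:
        papers_in_period[p["period"]].add(p["id"])
        for t in p["objects"]:            # a set, so duplicates count once
            users[(p["period"], t)].add(p["id"])
    return {key: 100.0 * len(ids) / len(papers_in_period[key[0]])
            for key, ids in users.items()}

sample = [
    {"id": "p1", "period": "'01-'03", "objects": {"Screen-shot", "UML"}},
    {"id": "p2", "period": "'01-'03", "objects": {"Screen-shot"}},
]
rates = incidence_rates(sample)
```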

As can be seen in Figure 8, the top object choices – 'Look-up-table', 'Schematic diagram' and 'Screen-shot' – are very stable over the study period. What is perhaps surprising is that the more formal, structured model description tool of UML appears to have all but disappeared as a form of communication over time. Conversely, and perhaps more encouragingly, raw-code and pseudo-code (arguably the most informative and detailed descriptions of the mechanics of a model) have retained, or slightly increased, their incidence-rate over time.

Temporal patterns in results presentation

Table 5: ABM results objects over time. Percent of papers in the database published within the given period, having at least one results object of a given type, sorted by descending incidence in final period.

Type '01-'03 '04-'06 '07-'09 '10-'12
Simulation-only 96 100 100 97
Empirical-only 0 8 8 10
Mixed: Empirical – simulation 0 0 8 13
Theory-only 8 12 3 3
Mixed: Theory – simulation 8 4 3 3
Not reported 4 0 0 0
Relevant papers in period (count) 24 26 38 31

figure 9
Figure 9. Patterns in ABM results object use over time. Each bubble represents the percentage of papers in the dataset published during the given three year band which included at least one of the objects indicated. Note: sizing of the bubbles is non-linear.

In Table 5, and visualised in Figure 9, we present a similar analysis for results objects, again pooling Figure and Table object types. In this analysis, we focus on the basis of the results presentation – whether the results are drawn from simulation results only (i.e. quantities drawn from artificial datasets), empirical results only (i.e. quantities drawn from survey, or measured, datasets), theoretical outcomes (i.e. numerical calculations of parametric systems, typically without stochastic sources), or some combination of the above.

What the data in the table and the figure show is that the predominant practice, by a margin of approximately nine to one, is to draw results from simulation data only. That is, authors of identified ABM studies in JASSS almost exclusively support their key results (either as tables or figures) with the use of simulation data only. Authors are not, therefore, often found to be presenting (for verification, or comparison) other quantity types along with their simulation data. There is perhaps some evidence of a trend towards comparison of simulation quantities to empirical quantities ('Mixed: empirical – simulation'), coupled with a tendency away from theoretical comparison, but the sample is too small to assert these as significant trends.

Quality of ABM results plotting over time

We further analyse the presentation of results in the Study Database by assessing whether a given results (figures-only) object exhibits one of five quality attributes. The five attributes were:
  1. (XY1) X and Y axes labels included (textual or pro-numeral in nature);
  2. (XY2) X and Y scale included (requires numeric max/min or ticks, plus units);
  3. (TYP) The basis of the plotted results indicated clearly on the figure, in the caption, or in the surrounding text (e.g. simulation-only, theory-only, mixed … etc.);
  4. (PAR) The parameters used to generate the data included (either on the figure, in the caption, or in reference to a table);
  5. (NUM) For simulation results, the count of simulations (N) used to create the summary data included.

Each figure was given one point for each of the above five attributes it displayed, leading to a maximum score of five. It should be stressed that we consider the above five features to be the absolute bare minimum for effective communication of scientific results, following the principles laid out in the Scientific Style and Format manual (8th edition) (Council of Science Editors 2014, section 30.3).
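The scoring rule is deliberately simple – one point per attribute present – and can be sketched as below, where each coded figure is represented as a mapping from attribute code to a boolean (an illustrative data structure, not the coding form actually used):

```python
# The five quality attributes defined above
ATTRIBUTES = ("XY1", "XY2", "TYP", "PAR", "NUM")

def quality_score(figure):
    """Score a simulation results figure: one point per attribute present.
    `figure` maps attribute codes to booleans, as coded by the assessor."""
    return sum(1 for a in ATTRIBUTES if figure.get(a, False))
```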

figure 10
Figure 10. Example simulation results figure from a 2012 JASSS article scoring poorly (score 1/5) under the quality scoring tool. In this case, the figure received one point for including the X and Y scale. All other features (axes labelling, results basis, parameters used, simulation N) were missing.

figure 11
Figure 11. Example simulation results figure from a 2012 JASSS article receiving an average quality score (score 3/5). In this case, the figure received one point each for including the X and Y scale, axes labels and results basis. Other features (parameters used, simulation N) were missing.

figure 12
Figure 12. Example simulation results figure from a 2012 JASSS article receiving a high quality score (score 4/5). In this case, the figure received one point each for including the X and Y scale, axes labels, results basis and parameters used. However, no record of the number of simulations used was provided.

In Figures 10, 11 and 12, we provide (anonymous) examples, taken from the Study Database, of a low (score: 1/5), medium (score: 3/5) and high (score: 4/5) quality results figure as assessed by our attribute method.

Table 6: The quality of simulation results objects. Data for 229 results figure objects which were labelled as, or appeared to be, simulation-backed in nature.

Attribute  Count  Exists (%)  Missing (%)  Description
XY2        210    91.7        8.3          X and Y scale included
XY1        177    77.3        22.7         X and Y axes labelled
TYP        126    55.0        45.0         Simulation basis clearly indicated
PAR        92     40.2        59.8         Parameters included
NUM        75     32.8        67.2         Count of simulation runs
Total objects: 229

In all, 229 result plots, drawn from 106 unique papers, were analysed using the quality metric. These objects all claimed, or appeared, to be simulation-based in nature – the predominant choice for presenting results. By a clear margin (Table 6), the most prevalent attributes displayed were the XY2 and XY1 attributes, with over 91% and 77% of objects exhibiting X and Y scales, and X and Y axes labelling, respectively. To aid the reader's understanding, we provide in Figure 13 a rare 'negative' example in which a simulation results object failed the 'XY2' (X and Y scale) attribute. However, perhaps alarmingly, each of the other three attributes was missing from over 40% of objects.

figure 13
Figure 13. Example simulation results figure from a 2007 JASSS article in which the XY2 (X and Y scale) attribute was missing. Cases of this specific attribute being missing were rare, occurring in less than 10% of relevant objects.

figure 14
Figure 14. Patterns in ABM results quality over time. Bars represent the percentiles of the distribution of average object scores, by paper, for the given period. The total count of relevant papers in each period was ('01–'03) 18, ('04–'06) 27, ('07–'09) 34, and ('10–'12) 27 respectively. Object quality scoring is explained in the text.

If one assumes that authors are ultimately responsible for the presentation of result figures, then one can take an average score across all qualifying results objects by paper (author). In Figure 14, we present the percentiles, by aggregate time period, of average paper scores, where each paper is represented by the average of the scores of the results objects within it, coded in our study.
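This aggregation can be sketched as follows (the records are hypothetical; we assume only that each coded results object carries a paper identifier, a period label, and a 0–5 score):

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical coded objects: (paper_id, period, quality score 0-5)
objects = [
    ("p1", "01-03", 2), ("p1", "01-03", 3),
    ("p2", "01-03", 4),
    ("p3", "04-06", 5), ("p3", "04-06", 3),
    ("p4", "04-06", 1),
]

# Step 1: average score across all qualifying results objects, by paper.
by_paper = defaultdict(list)
for paper, period, score in objects:
    by_paper[(paper, period)].append(score)
paper_means = {key: mean(scores) for key, scores in by_paper.items()}

# Step 2: distribution of paper-level means within each period
# (Figure 14 reports percentiles of these distributions).
by_period = defaultdict(list)
for (paper, period), m in paper_means.items():
    by_period[period].append(m)

for period, means in sorted(by_period.items()):
    print(period, "median paper score:", median(means))
```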

Again, stressing that we consider the five results figure attributes as basic scientific requirements, the results are alarming, both in terms of the average quality and the trend over time. For instance, the median paper score in 2001–2003 of 2.50 is only marginally improved upon (3.00) by 2004–2006, and then not again in the subsequent years. This indicates that around 50% of papers present results figures which miss two or more of the basic five attributes we identify. At the top end, a high-point paper score of 4.62 is obtained for the 95th percentile in 2004–2006, but this slides to 4.00 by 2010–2012.

These data indicate that, for whatever reason, the quality of simulation results presented by ABM authors in JASSS over the last decade is generally poor, and standards are not improving. We discuss potential reasons for these problems below.

* Discussion & Conclusions

This study set out to examine an under-reported aspect of ABM science: the communication practices and quality of methodological and results objects in ABM studies over the last decade. As noted in the introduction, whilst other aspects of ABM development and communication have received academic attention and proposals, we are not aware of any prior review of the kind we carry out here.

Against a background and history of early 'anarchy' on the one hand, and attempts at formalisation on the other, we were interested to see if there were any trends towards order or disorder in ABM publishing practices, and in the general quality of these practices for scientific ends.

Below, we draw together three key conclusions from our work, and provide some tentative reflections on their possible causes.

Conclusion 1 ABM science's methodological 'anarchy' shows no signs of submission to formalism

We are indebted to Richiardi et al.'s (2006) 'anarchy' descriptor. Looking at the practices revealed in this study, we see no evidence that the state of affairs is changing. Three dominant modes of visual communication were in the ascendancy in 2001–2003 (Look-up-tables, 'Schematic' diagrams, and Screen-shots), and the same three sat in the same position a decade later (Figure 8). Whilst look-up-tables and screen-shots could perhaps be left aside from this analysis, as they serve their own, specific, communicative purpose, the use of relational diagrams not fitting any particular formalism (which we call 'schematics') is interesting. Indeed, when one notes that the incidence rate of UML and flow-chart diagrams declined during the decade, we can only conclude that the social sciences community has turned its back on, or perhaps has never properly engaged with, the use of more formal visual languages for conveying to the reader the core relationships amongst model components and agents.

A potential explanation for this lack of engagement is offered by Heath et al.'s (2009) 'many fields of study' conclusion to their survey of 297 ABM papers,
"ABM is connecting diverse fields. The fields of biology, business, ecology, economics, the military, public policy, social science and traffic, among others, all use ABM. These diverse fields are trying to understand complex systems and are using ABM as a one common tool. … after reviewing the surveyed articles it is clear that each field has developed their own ABM terminology to describe techniques, applications and results, have their own ABM standards and their own ABM philosophies." (paragraph 4.4)

Whilst the papers surveyed here are all drawn from the 'social sciences' community as they were published by JASSS, a similar driver could be conjectured. One of the powerful hallmarks of JASSS is its wide embrace of agent-based simulation papers from all across the social sciences. However, this diversity will bring with it a diversity of philosophies of knowledge, and a diversity of expertise with quantitative and computational methods. In particular, Wooldridge and Jennings' (1998) 'Agent-oriented Development' pitfall #4.3 comes to mind, "You forget you are developing software" [emphasis retained]. Wooldridge and Jennings write,
"Unfortunately, because the process [of developing any agent system] is experimental, it encourages the developer to forget that they are actually developing software. Project plans tend to be pre-occupied with investigating agent architectures, developing cooperation protocols, and improving coordination and coherence of multi-agent activity. Mundane software engineering processes — requirements analysis, specification, design, verification, and testing — become forgotten. The result of this neglect is a foregone conclusion: the project flounders, not because of agent-specific problems, but because basic software engineering good practice was ignored." [emphasis retained]

Does the anthropologist, writing their first ever ABM in (say) NetLogo, gripped by the notion that their theoretical hunch can, for the first time, be modelled and visualised in real-time before their eyes, think that they are now a 'software engineer'? (Do they even know what that is?) The answer is obviously 'no'. Our anthropologist gets on with using the ABM tool to support their scientific conclusion. Along the way, however, the development, validation, and communication of their methodology and results will very likely be 'home-spun'. And so, another bespoke daughter of the 'schematic' class is born.

Conclusion 2 The scientific presentation of ABM results needs immediate, remedial, attention

The second conclusion rests on the alarming patterns uncovered by our simple '5 attributes' quality survey of simulation-based results plots (Table 6, Figure 14). That 60% of these 'results' figures did not clearly indicate the parameters used to generate them, nor the number of simulations behind them, strikes something of a mortal blow to any hope of successful replication. At the paper level, the fact that the 75th-percentile average results-figure quality score is below 3.5 in the most recent period (2010–2012) of the Study Database suggests that the problem of inadequate results presentation practice is widespread and continues today.

At face value, it would seem that either the interpretation or enforcement of JASSS' own author guidelines is at fault as they appear clear on the matter of replication,
"Authors are strongly encouraged to include sufficient information to enable readers to replicate reported simulation experiments."

However, the remainder of these guidelines focuses more on the request that full model code be made available through a third-party site, rather than on the details needed to actually use the model to replicate the results. Indeed, point (4) of JASSS's stated 'Referee guidelines' continues in this 'algorithmic-centric' view, asking referees to comment explicitly on,
"If the article describes a simulation model, is there enough detail provided for the relevant output from the model to be replicated by a reader (the description might be in the form of an algorithm, pseudo-code, or access to the simulation program itself)?"

Whilst the question is 'replication', the apparently sufficient answer, according to JASSS' guidelines, is that authors provide an indication of the algorithm used to generate the results. Of course, if the 'simulation program itself' is provided, then it is possible, with a suitably written code-base (and runfiles), to identify the exact parameters used for every result in the paper. However, anything short of this will not suffice: 'an algorithm', or 'pseudo-code', on its own, will not leap into action and produce the results in the paper.

Here, there seems a clear and simple opportunity for JASSS (at least) to tighten its guidelines around results, broadening its 'algorithmic-centric' view of replication to include all the information required to replicate the results, including, for example, parameters, initialisation settings, random-number stream definitions, and perhaps even hardware architecture. Again, such considerations no doubt arise immediately to the mind of the software engineer, but not to our anthropologist (or economist, or sociologist, or …) colleague (we include ourselves here).

Conclusion 3 Parametric interpretation by estimation of ABM simulation data appears to be missing in action

Close inspection of Table 3 highlights one further pattern in the practices related to analysing ABM results: the lack of estimation of ABM simulation data. The 'smoking-gun' for this conclusion is the almost complete lack of equation objects within the results sections of the 128 papers reviewed; indeed, just one 'results' equation was identified by our methods. Whilst we readily acknowledge that there are many cases where parametric estimation of ABM outputs is unnecessary to justify scientific conclusions, due to the inherent unpredictability or emergent complexity of classes of ABM models, there are many other cases (for example, where an ABM model is to be compared to empirical economic data) where estimation is necessary. That just one paper appeared to go down this path suggests that estimation, as an analysis technique, or as a parameter-tuning tool, is effectively unknown in the social sciences ABM community.

To be fair, there are likely cultural and technical reasons at play behind Conclusion 3. Culturally, we refer to the 'many fields of study' argument proposed above in reflection on Conclusion 1. ABM social science demands, at times, a varied and reasonably technical skill-set: not only must social scientists have their own field-specific knowledge and novelty of contribution, as we have seen, they should also ideally be aware of some basic software engineering principles, and here, it would now appear, they need further to be familiar with time-series analysis and estimation techniques. Supposing again that JASSS welcomes a higher proportion of 'many fields' authors who are exploring ABM as a tool to illuminate their discipline than other journals do, it is understandable that a large proportion of papers do not go down the path of estimation.

However, there is a second, technical reason that could be advanced. Suppose that we are wrong, and that JASSS authors predominantly would like to use estimation techniques to assess the artificial data their models produce, and are well versed in 'standard' approaches; they would quickly discover that estimation of ABMs can present some unique challenges. Two features of a large number of ABMs cause trouble – first, the normally large number of parameters, and second, the existence of non-linear dynamics (often the reason for choosing ABM techniques in the first place). Together, these attributes cause enormous difficulty for estimation. There is some 'hope' on the horizon, however: very recently, Grazzini and Richiardi (2014, 2015) have begun to provide some credible options for the estimation of ergodic and non-ergodic ABM time-series in the presence of these attributes. Grazzini's earlier, but still relatively recent, study (2012) in JASSS should also be highlighted again to the ABM social science community, as it enables an author to identify whether their ABM data exhibit stationarity and ergodicity in the first place (crucial questions for the choice of estimation technique).

Hence, Conclusion 3 should not be seen as all that surprising, and should be read more as a confirmation of the state of ABM social science in general. However, with the efforts now being expended on this problem (see especially the companion work of Lee et al. 2015), if a similar paucity of equation-based estimation analysis of ABM outputs was discovered over the next decade, different conclusions (and prescriptions) would need to be made.

Before drawing final conclusions we return to the most obvious limitation of our study: the use only of JASSS ABM papers for our analysis. We acknowledge several problems here. First, selection bias: by confining ourselves to JASSS we have no vision of the quality of ABM studies published elsewhere: papers submitted to JASSS could be of higher or lower quality than the 'field median'; or papers could express an unusual mix of social science, artificial life, and ABM methodologies, impacting the way that they are presented towards a bespoke 'JASSS' style. Whilst such biases (and others like them) are important, they do not, in our opinion, diminish the responsibility of JASSS towards building up its publication standards: JASSS can (and does) play a critical educational role in the preparation, presentation and prosecution of ABM science, a point we return to below.

Second, editorial flux: it is possible that during our study period, editorial policies weakened or tightened, or followed some other functional form, perhaps to pursue other (reasonable) motivations such as expanding the 'reach' or 'inclusiveness' of the JASSS community. Indeed, JASSS itself published current JASSS Editor Squazzoni and Casnici's (2013) study, which advocated specific editorial policy prescriptions for JASSS to better serve its aims, such as keeping a closer watch on JASSS's quantifiable inter-disciplinary impact and potentially targeting specific, un-tapped, domains through special issues. Whilst it is hard to obtain a measure of such policy dynamics, our study's main conclusions encourage adoption of minimum practices, generic to all ABM papers, regardless of field or specific editorial focus; hence, we again submit that any nuanced editorial movements apply at a layer above the one we are studying. In any case, the editorial oversight of JASSS during this period was as stable as one could hope for any journal: a single (foundational) editor oversaw the first 17 years of the journal's life, generously overlapping with our study period. The editor, as founder, built up the journal from the ground, and has rightly received enormous gratitude from the social simulation community for their contributions to JASSS's success (Elsenbroich & Badham 2015).

One further limitation is worth making clear. We acknowledge that the overall clarity of a paper's scientific contribution may or may not rest on the formalism of its presentation style. In this work, we have merely tabulated the trends in the ABM publishing community, expressed through JASSS. What we cannot conclude is whether the papers which adopt one or other formalism in the communication of methods and results are actually any clearer than those which do not. The answer to such a question would require an altogether different methodology, presumably involving trained human readers and/or replication attempts of the science that a given paper presents. We see such considerations as a natural extension of this work and would encourage others to imagine creative experimental designs which could identify the 'best' communication formalism of methods and results for ABM works.

In one sense, our study concludes without a particularly novel contribution: we appear merely to have found new ways to catalogue the anarchy of ABM social sciences publication practices, or alternatively, the lack of uptake of the various proposals to 'order' the realm. On this question, it would seem that the ABM social sciences community has a choice: it can go on permitting the 'anarchy', choosing to apply laissez-faire publication practice standards to the communication of methodologies and results, or, it can attempt to apply order by enforcing one or more standards.

Affinity or dissatisfaction with 'anarchy' will likely turn on one's stance towards the merits of diversity. On the one hand, proponents of anarchic publishing practices could point to the benefits of enhanced academic freedom, creative expression, and the notion of fitting the authors' personal sense of the 'right' methodological or results object presentation to the scientific claim at hand, with little reference to norms or standards. As mentioned above in Conclusion 1, given the diversity of generating fields (each with their own field-norms) who seek to publish their ABM works in JASSS, there is no need to encourage such diversity; it will arise organically due to each article's provenance. Furthermore, who is to say that one author's experimentation or innovation in presenting their ABM method or results will not be self-evidently brilliant, and so receive wider adoption amongst ABM authors via imitation for the benefit of all? There are indeed potential merits to anarchy.

That said, let us advance an alternative perspective on the merits of anarchy. If our reading of JASSS is right, that it is a platform of choice for many first-time ABM authors from the social sciences, then JASSS must be seen as far more than simply a 'social sciences simulation publication'; JASSS is also a powerful educational tool: the articles in JASSS implicitly form a corpus of ideas, methods and practices to learn from, extend, and ultimately imitate. This matters because if the ABM methodology is to make progress outside of simulation-specific journals like JASSS and into the 'main-stream' top field journals of our respective disciplines, then JASSS can play an important role in developing (enforcing) the best standards of the ABM discipline 'in house', before authors take their ideas to editors and referees for whom ABM will (still) be an entirely new methodological approach.
If, over time, social scientists are made to develop standard approaches, then that standard will be learned and understood by field editors and referees, annulling an easy (rejection) charge that ABM papers lack intelligibility or transparency based on the communication of the methodology or results alone. In summary, there could be strong long-run benefits to finding (and continuously refining) social science ABM's formal voice on the presentation of methods and results.

But our study has done more than confirm this anarchy: it has identified a basic problem with a vast number of ABM results published in JASSS over a decade, namely the quality of presented results. This situation cannot continue. It is a matter of fundamental scientific practice and reproducibility – a supposedly cherished feature of ABM science. We stress that we are not suggesting that the works studied in our survey do not have good scientific points to make; these works have all passed the peer-review process and as such must convey important scientific findings. However, the ABM community cannot on the one hand grumble about the slow uptake of ABM science in top field journals, whilst on the other, fail to practise basic scientific hygiene when it comes to presenting its results. Again, viewed through an educational lens, JASSS has a real opportunity, by its instructions to referees and its submission requirements, to set minimum standards for replication and results presentation.

We look forward to contributing to the further development and enforcement of best-practice standards, and call on the social sciences ABM community to do the same.

* Acknowledgements

We thank Taya Annable and Penelope Mealy for their generous research assistance and contributions in preparing this study, and Ju-Sung Lee and Matteo Richiardi, who provided helpful comments on an earlier version of this manuscript. We also thank the four anonymous referees who provided extensive, interesting, and helpful perspectives on our work, which served to improve the clarity of its message. All errors are the authors'.

* Notes

1 One could go further back still. For example, Starbuck's fascinating (1983) paper reviewing the prospects for simulation in the social sciences offers the modern ABM practitioner much material for sober reflection.

* Appendix A – Example JASSS HTML header used for term extraction, keywords from the 'Subject' identifier highlighted

Paper: Huet, S., Edwards, M. and Deffuant, G. (2007), "Taking into Account the Variations of Neighbourhood Sizes in the Mean-Field Approximation of the Threshold Model on a Random Network", Journal of Artificial Societies and Social Simulation, 10(1), 10 https://www.jasss.org/10/1/10.html.


* References

AHRWEILER, P. (2011). Modelling Theory Communities in Science. Journal of Artificial Societies and Social Simulation, 14(4), 8 <https://www.jasss.org/14/4/8.html>.

BONABEAU, E. (2002). Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences of the United States of America, 99(suppl 3), 7280–7287. [doi:10.1073/pnas.082080899]

BOUCHAUD, J.-P. (2008). Economics needs a scientific revolution. Nature, 455(7217), 1181. [doi:10.1038/4551181a]

BUCHANAN, M. (2009). Economics: Meltdown modelling. Nature, 460(7256), 680. [doi:10.1038/460680a]

COUNCIL OF SCIENCE EDITORS (2014), Scientific Style and Format, 8th Edition, Scientific Style and Format online (URL: http://www.scientificstyleandformat.org/Home.html), University of Chicago Press.

DEICHSEL, S., & Pyka, A. (2009). A Pragmatic Reading of Friedman's Methodological Essay and What It Tells Us for the Discussion of ABMs. Journal of Artificial Societies and Social Simulation, 12(4), 6 <https://www.jasss.org/12/4/6.html>

DOSI, G., Fagiolo, G., & Roventini, A. (2010). Schumpeter meeting Keynes: A policy-friendly model of endogenous growth and business cycles. Journal of Economic Dynamics & Control, 34(9), 1748–1767. [doi:10.1016/j.jedc.2010.06.018]

ELSENBROICH, C. & Badham, J. (2015). A Standing Ovation for Nigel: An Informal Study. Journal of Artificial Societies and Social Simulation, 18(1), 16 <https://www.jasss.org/18/1/16.html>.

EPSTEIN, J. M. (2006). "Remarks on the Foundations of Agent-Based Generative Social Science." Chapter 34 in Handbook of Computational Economics II: Agent-based Computational Economics, edited by Leigh Tesfatsion and K Judd, North-holland.

EPSTEIN, J. M. (2009). A model approach. Nature, 460(7256), 667.

FARMER, J. D., & Foley, D. (2009). The economy needs agent-based modelling. Nature, 460(7256), 685. [doi:10.1038/460685a]

GEANAKOPLOS, J., Axtell, R., Farmer, D. J., Howitt, P., Conlee, B., Goldstein, J., et al. (2012). Getting at Systemic Risk via an Agent-Based Model. American Economic Review, Papers and Proceedings, 102(3), 53–58. [doi:10.1257/aer.102.3.53]

GILBERT, N., & Terna, P. (2000). How to build and use agent-based models in social science. Mind & Society, 1, 57–72. [doi:10.1007/BF02512229]

GRAZZINI, J. (2012). Analysis of the Emergent Properties: Stationarity and Ergodicity. Journal of Artificial Societies and Social Simulation 15(2), 7 <https://www.jasss.org/15/2/7.html>.

GRAZZINI, J., & Richiardi, M. (2014). Partial Identification in Non-ergodic Models. Technical Report, iNET, Oxford Martin School, University of Oxford.

GRAZZINI, J., & Richiardi, M. (2015). Estimation of ergodic agent-based models by simulated minimum distance. Journal of Economic Dynamics & Control, 51, 148–165. [doi:10.1016/j.jedc.2014.10.006]

GRIMM, V., Berger, U., Bastiansen, F., Eliassen, S., Ginot, V., Giske, J., et al. (2006). A standard protocol for describing individual-based and agent-based models. Ecological Modelling, 198(1–2), 115–126. [doi:10.1016/j.ecolmodel.2006.04.023]

GRIMM, V., Berger, U., Deangelis, D. L., Polhill, J. G., Giske, J., & Railsback, S. F. (2010). The ODD protocol: A review and first update. Ecological Modelling, 221(23), 2760–2768. [doi:10.1016/j.ecolmodel.2010.08.019]

HARA, N., Bonk, C. J., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115–152. [doi:10.1023/A:1003764722829]

HEATH, B., Hill, R. and Ciarallo, F. (2009). A Survey of Agent-Based Modeling Practices (January 1998 to July 2008). Journal of Artificial Societies and Social Simulation 12(4), 9 <https://www.jasss.org/12/4/9.html>.

HOWITT, P., & Clower, R. (2000). The emergence of economic organization. Journal of Economic Behavior & Organization, 41, 55–84.

KORNHAUSER, D., Wilensky, U. & Rand, W. (2009). Design guidelines for agent based model visualization. Journal of Artificial Societies and Social Simulation 12(2), 1 <https://www.jasss.org/12/2/1.html>.

LAW, A. (2005). How to build valid and credible simulation models. Proceedings of the 37th Conference on Winter Simulation, 24–32. [doi:10.1109/wsc.2005.1574236]

LEE, J., Filatova, T., Ligmann-Zielinska, A., Hassani-Mahmooei, B., Stonedahl, F., Lorscheid, I., Voinov, A., Polhill, J. G., Sun, Z. & Parker, D. C. (2015). The Complexities of Agent-Based Modeling Output Analysis. Journal of Artificial Societies and Social Simulation 18(4), 4 <https://www.jasss.org/18/4/4.html>.

LEOMBRUNI, R., & Richiardi, M. (2005). Why are economists sceptical about agent-based simulations? Physica A: Statistical Mechanics and Its Applications, 335, 103–109.

LIM, M., Metzler, R., & Bar-Yam, Y. (2007). Global Pattern Formation and Ethnic/Cultural Violence. Science, 317(5844), 1540–1544.

MACY, M. W., & Willer, R. (2002). From Factors to Actors: Computational Sociology and Agent-Based Modeling. Annual Review of Sociology, 28(1), 143–166. [doi:10.1146/annurev.soc.28.110601.141117]

MOSS, S. (2008). Alternative approaches to the empirical validation of agent-based models. Journal of Artificial Societies and Social Simulation, 11(1), 5 <https://www.jasss.org/11/1/5.html>.

MÜLLER, B., Bohn, F., Dreßler, G., Groeneveld, J., Klassert, C., Martin, R., et al. (2013). Describing human decisions in agent-based models – ODD+D, an extension of the ODD protocol. Environmental Modelling & Software, 48, 37–48. [doi:10.1016/j.envsoft.2013.06.003]

ODELL, J., Parunak, H., & Bauer, B. (2000). Extending UML for agents. In G. Wagner, Y. Lesperance, & E. Yu (Eds.), Proceedings of the Agent-Oriented Information Systems Workshop at the National Conference on Artificial Intelligence, Austin, TX.

RICHIARDI, M., R. Leombruni, N. Saam, and M. Sonnessa. (2006). A Common Protocol for Agent-Based Social Simulation. Journal of Artificial Societies and Social Simulation 9(1), 15 <https://www.jasss.org/9/1/15.html>.

ROURKE, L., & Anderson, T. (2004). Validity in quantitative content analysis. Educational Technology Research and Development, 52(1), 5–18. [doi:10.1007/BF02504769]

SQUAZZONI, F. & Casnici, N. (2013). Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS. Journal of Artificial Societies and Social Simulation 16(1), 10 <https://www.jasss.org/16/1/10.html>.

STARBUCK, W. H. (1983). Computer simulation of human behavior. Behavioral Science, 28(2), 154–165. [doi:10.1002/bs.3830280207]

WINDRUM, P., Fagiolo, G., & Moneta, A. (2007). Empirical validation of agent-based models: Alternatives and prospects. Journal of Artificial Societies and Social Simulation, 10(2), 8 <https://www.jasss.org/10/2/8.html>.

WOOLDRIDGE, M., & Jennings, N. (1998). Pitfalls of Agent-Oriented Development. In Proceedings of the Second International Conference on Autonomous Agents (pp. 385–391). [doi:10.1145/280765.280867]