
Magda Fontana (2006)

Simulation in Economics: Evidence on Diffusion and Communication

Journal of Artificial Societies and Social Simulation vol. 9, no. 2


Received: 05-Dec-2005    Accepted: 16-Jan-2006    Published: 31-Mar-2006


* Abstract

This paper presents the analysis of a dataset of publications in economics that makes use of simulations. Data are explored in order to obtain information about diffusion of simulation techniques in time and across sub-disciplines. Moreover, following Robert Axelrod's concerns about the difficulties in sharing simulation models and their outputs, some peculiarities in the communication process among 'simulators' are highlighted.

Keywords: Social Simulation, Economic Theory, User Community

* Introduction

In recent decades, as computers have improved, the use of simulation has grown rapidly in the field of economics. This paper sketches a picture of this evolution over time and across areas of research and presents some evidence concerning the state of communication among the users of simulation within the field. The paper deals with the following issues:

Under the label of "simulation" fall a great number of techniques which imply different ways of using computers and of interpreting social phenomena. They range from statistical or numerical devices to proper theories in silico which compete with other kinds of (mathematical or verbal) formalisation. Merely knowing that 'the use of simulation has increased recently' does not provide insightful information on how it is employed within economics. To gain some purchase on this topic, I have built a database of publications in economics in which different kinds of simulation are tracked over a long period of time.

The history of simulation in economics is a peculiar one. Although it was pioneered by scientists such as Norbert Wiener and John von Neumann, who exerted a great influence on economic theory and methodology, simulation started to spread through the sciences just as economics was becoming addicted to theorems and proofs. While simulation found steady employment in physics and biology, the accomplishment of the Samuelsonian project relegated it to a secondary role in economics (mainly, finding numerical solutions and conducting statistical analysis) with respect to qualitative mathematics. From the 1990s onwards, however, there has been a systematic attempt to promote simulation as a methodology in its own right (Axelrod 1997) and as a tool to embody and experiment with theories (Epstein and Axtell 1996). The distinction between the "instrumental" and the "theoretical" use of simulation is helpful in tracing its role in economics and in looking for evidence for the view that simulation represents an ongoing methodological "revolution". In addition, the spread of the interpretation of economic phenomena as complex adaptive systems has provided the (micro)foundations for a wider use of simulation as a modelling method. Thus, the analysis can also be read as an approximation of the relative position of the complexity approach as compared to the traditional one.

As for communication, the multiplicity of available techniques, the difficulty of summarising algorithms and results in the space usually allotted by conferences or journals, and the lack of shared tools for illustrating results jointly play a major role in explaining the limited circulation of both simulation methods and outputs (see Axelrod 1997). In what follows, a different source of friction in communication will be highlighted: the behaviour of "simulators" themselves, who often seem reluctant to specify which kind of simulation has been run.

* Methodology and Database

To investigate both diffusion and communication within economics I have built a database of publications covering the years 1969 to 2004. I have focused on titles and abstracts, which seem to me to be the signals that first catch the attention of readers and which, through indexes, citations, and the internet, are most likely to be accessible to the scientific community.

The database is composed of 7,260 publications that contain, either in the title or in the abstract, the noun simulation(s) or the verb to simulate (in its various conjugations). In addition, I have also extracted records concerning specific simulation techniques. Namely, following Gilbert and Troitzsch (1999), I have included system dynamics, microanalytical simulation, microsimulation, queuing models, multilevel simulation, agent-based (computational) models, and learning algorithms (neural networks, genetic algorithms, classifier systems)[2].

Publications are drawn from Econlit, a database of economic publications which covers about 400 journals, with abstracts systematically included from 1985 onwards, plus collected articles, books, working papers, and PhD dissertations. Even though Econlit does not represent the entire universe of economic research (in particular, it does not include journals dealing exclusively with simulation), it is fairly representative for the purpose of this analysis, since the community of economists considers it a significant index of the focus and methods of the discipline. For each record I have coded title, author, place of publication, kind of publication, year of publication and, where available, abstract. Moreover, all records have been classified according to the area of economic research and to the simulation technique.

A similar study was conducted by Robert Axelrod in 1997. Searching the Social Science Citation Index for 1995, he found 107 articles with simulation in the title scattered among 74 journals. He discovered a transversal use of simulation in the social sciences but also noted that few journals had more than one article concerning simulation. In his words: "the dispersion of these articles demonstrates one of the great strengths as well as one of the great weaknesses of this young field. The strength of simulation is that it is applicable in virtually all of the social sciences. The weakness of simulation is that it has little identity as a field in its own right" (1997, 23).

Axelrod, however, restricted his search to articles published in journals, while I believe that a "young" methodology (within economics, simulation is new with respect to the traditional approach based on theorems and proofs and on econometric analysis) may find it difficult to gain a place in mainstream journals and will thus circulate by means of other research outlets. Moreover, the type of publication embeds another relevant piece of information: articles published in journals are likely to have a more general impact than works appearing in other media. Therefore, in order to gain a deeper understanding, the present analysis takes a wider perspective that also encompasses books, collected articles, working papers, and PhD dissertations. Going beyond Axelrod's work, publications have also been mapped according to the sub-disciplines of economics and to the kind of simulation, in order to find out in which areas and in which form simulation has found the warmest reception.

* Simulations in Nutshells

As far as its scientific applications are concerned, simulation first developed as an aid to finding numerical solutions to complicated mathematical models. However, as more sophisticated and powerful programming languages were devised, it assumed a multifaceted role: it can now be a way to explore the implications of a theory and to achieve a high level of richness in description.

Among simulation techniques, however, such properties are present to different degrees. It is therefore necessary to draw a distinction between simulation as a mere instrument that extends the reach of mathematical modelling or statistical analysis and simulation as a theoretical statement. In addition, when focusing on simulation as an autonomous way of conducting research, it must be borne in mind that each technique implies a different view of the functioning of the modelled phenomena.

Research about the use of simulation within economics has to deal with this variegated scenario. A detailed investigation of the properties of each simulation technique is beyond the scope of this paper; it is nevertheless important to describe them briefly. The following table, which relies on Gilbert and Troitzsch (1999, p. 13), summarises the most important features of the kinds of simulation that will be the object of the analysis. Statistical and econometric techniques are not included since they are atheoretical and, therefore, pertain to the instrumental side of the story[3].

System dynamics (Forrester 1958) derives from the cybernetics approach (Wiener 1956), which tends to replace linear causation with circular causation. The system is modelled as a whole and its dynamics is expressed through levels and rates. Discrete event simulation (e.g. queuing models) takes a similar angle: it models the dynamics of the system as discrete time passes, with each unit of the system carrying out an activity and changing its state according to time.
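The level-and-rate logic of system dynamics can be sketched in a few lines of code. The following is a minimal illustration, not drawn from the paper: all names and parameter values are hypothetical, with a single stock ("level") whose outflow feeds back on its own level.

```python
# Minimal system-dynamics sketch: one stock ("level") updated by an
# inflow and an outflow "rate" in discrete time steps (Euler integration).
def simulate_stock(initial_level, inflow_rate, outflow_fraction, steps):
    """Evolve one level variable; the outflow is proportional to the level."""
    level = initial_level
    history = [level]
    for _ in range(steps):
        outflow = outflow_fraction * level   # circular causation: the
        level += inflow_rate - outflow       # level feeds back on itself
        history.append(level)
    return history

levels = simulate_stock(initial_level=0.0, inflow_rate=10.0,
                        outflow_fraction=0.1, steps=50)
```

Starting from an empty stock, the level rises and flattens out as the outflow grows with the level itself, which is precisely the circular causation the approach emphasises.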

Microsimulation is due to Guy Orcutt (1957) and was born out of his dissatisfaction with the ways in which economic data were gathered and interpreted in order to serve as a guide for policies. He felt that policies could have a varying impact across different groups of people (according, for instance, to gender, age, income…) and that such dissimilarities should be accounted for. Orcutt's endeavour resulted in a view of economic facts that heavily depends on the features of micro units and in a modelling style that allows strong disaggregation of behaviours. Microsimulation generates hypothetical panel data in which the history of each unit of a given population unfolds in time according to a set of transition probabilities. While innovative in modelling phenomena from the individual level upwards and in its focus on the heterogeneity of individuals, microsimulation involves no actual interaction and, above all, has no underlying theory of the phenomenon under study. Multilevel simulation is similar in spirit to microsimulation but adds indirect interaction, that is to say, the individual interacts with the entire population.
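As a rough sketch of the microsimulation idea, a set of transition probabilities (the states and values below are invented purely for illustration) can be used to generate synthetic panel data, one history per individual:

```python
import random

# Microsimulation sketch: each individual in a hypothetical population
# moves between labour-market states according to fixed transition
# probabilities, producing synthetic panel data over time.
TRANSITIONS = {                                             # illustrative
    "employed":   {"employed": 0.95, "unemployed": 0.05},   # values, not
    "unemployed": {"employed": 0.30, "unemployed": 0.70},   # empirical
}

def step(state, rng):
    """Draw the next state for one individual."""
    probs = TRANSITIONS[state]
    return rng.choices(list(probs), weights=probs.values())[0]

def simulate_panel(n_individuals, n_periods, seed=0):
    rng = random.Random(seed)
    panel = []
    for _ in range(n_individuals):
        history = ["employed"]
        for _ in range(n_periods):
            history.append(step(history[-1], rng))
        panel.append(history)
    return panel  # note: no interaction between individuals

panel = simulate_panel(n_individuals=1000, n_periods=20)
```

Note that the individuals never interact: each history is drawn independently, which mirrors the limitation of the technique noted above.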

Agent-based models involve theory making and actual decentralised interaction, and therefore propose an interpretation of economic phenomena which closely recalls the Spontaneist view and also incorporates the criticisms of the idea of economic agents as faultless maximisers (Epstein 1999).
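A toy example may help fix ideas. The sketch below (entirely hypothetical in its names and parameters) shows the two ingredients just mentioned: decentralised interaction, since each agent observes only a small sample of others, and heterogeneity, since adoption thresholds differ across agents; there is no central equation and no maximisation.

```python
import random

# Agent-based sketch: heterogeneous agents decide whether to adopt a new
# behaviour by observing a few randomly met peers.
class Agent:
    def __init__(self, threshold):
        self.threshold = threshold   # heterogeneity: agents differ
        self.adopted = False

def run(n_agents=100, n_rounds=50, seed=1):
    rng = random.Random(seed)
    agents = [Agent(rng.random()) for _ in range(n_agents)]
    agents[0].adopted = True                     # seed one early adopter
    for _ in range(n_rounds):
        for a in agents:
            neighbours = rng.sample(agents, 5)   # local, decentralised
            share = sum(n.adopted for n in neighbours) / 5
            if share >= a.threshold:             # simple behavioural rule,
                a.adopted = True                 # not faultless maximisation
    return sum(a.adopted for a in agents)        # aggregate adoption
```

The aggregate outcome (how far adoption spreads) emerges from the local rules rather than being solved for analytically, which is the sense in which such models "grow" their explananda.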

Finally, learning algorithms are problem-solving devices borrowed from biological (Holland 1975) or connectionist theories (McClelland and Rumelhart 1986) that evolve models by generating and selecting rules that tend to accommodate environmental changeability. Such an approach is incompatible with most traditional economic modelling, in which the behaviour of actors, being described by means of equations, is fixed.
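To make the contrast concrete, here is a minimal genetic-algorithm sketch in the spirit of Holland (1975): candidate "rules" (bit strings) are selected, recombined, and mutated toward a fitness criterion rather than being fixed in advance. The fitness function and all parameters are purely illustrative.

```python
import random

# Genetic-algorithm sketch: a population of bit-string "rules" evolves by
# tournament selection, one-point crossover, and mutation toward a simple
# fitness target (the number of ones in the string).
def evolve(n_bits=20, pop_size=30, generations=100, seed=2):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    fitness = sum                                # fitness = count of ones
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)  # selection
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                     # mutation
                i = rng.randrange(n_bits)
                child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(fitness(ind) for ind in pop)

best = evolve()   # best fitness approaches n_bits as the rules adapt
```

The behavioural rule is thus an outcome of the run, not a premise, which is what sets this modelling style apart from equation-based descriptions of fixed behaviour.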

Table 1: Description of the 'theoretical' simulation techniques

Technique | Descriptive ability | Analytical treatment | Simulation vs. analytical solutions | Interaction among agents | Agents' heterogeneity | Number of agents
System Dynamics | High | Differential equations | Discrete time; discontinuous and non-differentiable functions | | |
Microsimulation | Low | Mathematical solutions exist only for the simplest cases | Covers the entire range of probability distributions | No | High | Many
Discrete Events | Low | Mathematical solutions exist only for the simplest cases | Covers the entire range of probability distributions | No | Low | Many
Multilevel Simulation | High | Mathematical solutions exist only for the simplest cases | Deals with individual probabilities of transition | Yes | Low | Many
Agent-based models | High | (so far) explored only within the computer | | Yes | High or Low | Many
Learning models | High | (so far) explored only within the computer | | Possible | High | Many

* Analysis

Diffusion: Figure 1 shows the entire dataset plotted by year. It is worth noting that the publication of works concerning simulation has increased especially after 1989. This datum is partly spurious, since abstracts were systematically available in the database only from 1985 onwards. Nevertheless, the increase is large and the trend remains constant in the following years. A tentative interpretation of this peak is that in the 1990s discontent with the standard approach to economic modelling was growing while, at the same time, the cost of computing was rapidly decreasing.

Figure 1. Trend of publications (1969-2004)

In Figure 2, the population is disaggregated according to the simulation technique, where specified. The trend over time is positive for all techniques, with an overwhelming majority of statistical and econometric ones. A second consideration that emerges from this plot is that economics has adopted almost the whole spectrum of simulation techniques, apart from multilevel simulation, which is represented by only seven items (mainly queuing models in the field of organisation of production and industrial organisation).

Figure 2. Trend of publications (1969-2004) disaggregated according to the kind of simulation

The population has also been analysed according to the specific branch of research (sector). I have isolated 19 sub-disciplines of economic theory plus a residual category. Figure 3 shows the distribution of simulations in the different areas, re-aggregated into a few general categories.

Figure 3. Diffusion of techniques across sectors

The sectors with the highest density of simulation are public economics, industrial economics, and finance, while the sectors with the lowest density are economic growth and international economics[4].

As for the techniques, system dynamics (SD) is mostly used in public economics (and especially in environmental economics), in industrial economics, and in policy analysis, while microsimulations (M) are prevalently adopted in studies regarding the public sector. Learning (L) models are present in all sectors but most consistently in industrial economics (production management), finance, and consumer theory. In the first sector, the prevailing learning device is the genetic algorithm whereas, in the second, learning is simulated mainly by means of neural networks. Discrete event simulation (DE) is represented almost equally in public, industrial, and urban economics and in the residual category. Finally, agent-based simulations (AB) are present in consumer theory, industrial organisation, public economics, and the residual category.

Recalling the discussion in the introduction, it can be said that while economics has been permeable to simulation, the sense in which it has been received is quite limited. The prevalence of statistical and econometric techniques implies that simulation has been received as an aid in testing and confirming theories that are modelled according to the traditional methodology. Moreover, to the extent that simulation (obviously excluding the above-mentioned statistical and numerical simulations) can be taken as representing the stance of economists who adhere to the complexity view, it must be said that it still accounts for quite a limited portion of researchers' work.

Comparing the relevance of the different techniques, it also emerges that the most successful models are those mimicking learning. The interpretation of this finding is controversial. On the one side, one can think of genetic algorithms and neural networks as alternatives to the faultless-rationality paradigm and conclude that their prevalence witnesses a shift of attention towards a type of economic actor which differs from homo oeconomicus. On the other side, a learning algorithm is often used as an optimisation technique under variable conditions and can therefore, to some extent, be embedded in the mainstream view.

Table 2 shows the relative importance of the techniques which imply a theoretical use of simulation. In aggregate, the six listed techniques account for 17% of the observations, while the categories 'statistics, econometrics, numerical' and 'unexplained' - which gathers all the publications concerning simulation for which it has been impossible to determine the kind of technique adopted - account for 39% and 44% of the observations, respectively.

Table 2: Theoretical simulations in the population

Simulation Technique | Percentage
System Dynamics | 6%
Discrete Event | 5%
Multilevel Simulation | 1%
Learning Algorithms | 47%

Communication: Having analysed the population from the viewpoint of the diffusion of simulation techniques, I turn to the matter of communication. In the first part of the paper I formulated the hypothesis that the multiplicity of simulation techniques might have hindered efficacious communication among researchers, but from the above discussion a different nuance of the problem emerges. For about half of the items in the database it is impossible (observing only the title and the abstract) to know which kind of technique has been used, since authors simply state that they have run a simulation or use statements that are compatible with more than one kind of simulation. Therefore, in addition to the theoretical problems listed by Axelrod, we also encounter a convention of communication that does not help in diffusing concepts concerning simulations.

As Table 3 shows, dividing the records by year reveals that the share of unexplained works is decreasing. However, given that abstracts are systematically available only from 1985, only the last three periods (1987-2004) are comparable and the reduction of unexplained records must be reappraised accordingly.

Table 3: Trend of unexplained (1969-2004)

Years | %

It would be naïve to think that all authors who do not relate their work to a given category are following the same behavioural pattern. Surely this has to do, to some extent, with mental models, habits, and conventions[5]; however, some insight can be gained by a closer look at the data.

For example, Table 4 matches the percentage of explained records with the importance of each sector in the population. The most important and most 'explained' category is that of quantitative methods. The latter is not a proper sector in that it includes methodological works[6]. Its importance in the sample is due to the fact that most of the statistical and econometric publications (which constitute 39% of all observations) fall under this label.

Table 4: Degree of explained works and relative importance of sectors

Sector | Explained | Number of publications | Percentage
Quantitative Methods | 98% | 1821 | 25%
Consumer Theory | 53% | 295 | 4%
Fiscal and Monetary Policy | 47% | 617 | 8%
Urban and Regional | 43% | 175 | 2%
Growth | 42% | 168 | 2%
Public Economics | 42% | 413 | 6%
Public Policy | 41% | 379 | 5%
Population Economics | 40% | 135 | 2%
Industrial Organisation | 38% | 590 | 8%
Labour | 37% | 221 | 3%
Technological Change | 36% | 48 | 1%
International | 33% | 354 | 5%
Teaching | 33% | 53 | 1%
Environmental Economics | 32% | 282 | 4%
Health Economics | 31% | 140 | 2%
Other | 27% | 499 | 7%
Development | 12% | 10 | 0%
History | 11% | 35 | 0%

This evidence confirms Axelrod's (1997) idea that the presence of a consolidated jargon facilitates communication; it is rather peculiar, however, that outside very technical disciplines such as econometrics and statistics there is no trend towards explaining the nature of the work. It is worth noting that the percentage of explained works is virtually always below 50%; moreover, reading the data in the light of what emerged from Figure 3, we can observe that among the sectors which display the highest levels of explanation are those in which agent-based economics is most represented.

A possible interpretation of the high incidence of unexplained works could be related to their outlet. If their circulation is limited to a specialised audience, then the signal concerning the kind of technique might come through media different from those we have already looked at: it could suffice, for instance, to know the journal in which the paper is published, the editor of a book, or the topic of a particular series of books. If this were the case, information would circulate through informal channels such as reputation and would be directed to a community already built and identified. It follows that the failure in communication should be judged as more momentous the greater the generality of the outlet (and the heterogeneity of the readers).

Table 5 presents different aggregations of the population, ordered according to the outlet of the publications.

Table 5: Population sorted by outlet

Outlet | Population | Explained items | Sample without statistics and econometrics | Agent-based
Working Paper | 23% | 25% | 18% | 13%
Collected Articles | 9% | 9% | 12% | 8%
PhD Dissertation | 1% | 2% | 1% | 1%

The majority of the publications appeared in journals, even when the statistics and econometrics sector is excluded. The same holds when the different techniques are considered individually: as an example, the last column reports data concerning agent-based simulations (224 items in the population). It emerges, then, that the publications included in the sample circulate mainly through journals, which are a general medium.

In order to obtain more fine-grained information I have checked the incidence of unexplained works in each kind of publication. Table 6 sorts observations according to the generality of sources and shows the percentage of unexplained works.

Table 6: Population sorted by generality of sources

Source | Unexplained works
Collected Articles | 50%
Working Paper | 84%
PhD Dissertation | 58%

The data show that this percentage does not change dramatically across kinds of sources, with the exception of working papers. It therefore seems that the medium does not affect the authors' behaviour.

Another important point to check is whether the journals represented in the sample are "specialised" or can be considered suitable for a wide range of readers. If the former is the case, then the "generality" of journals must be reconsidered and the differences among at least the first three kinds of publication nuanced. In other words, if the journals are specialised then there is no substantial difference among kinds of publication, and the data in Table 6 would depend not on the absence of correlation between the authors' behaviour and the kind of publication but simply on this homogeneity.

I have tried to assess the generality of journals by using the index built by Kalaitzidakis et al. (2003), who rank journals according to their scientific impact. Table 7 shows the percentage of articles about simulation appearing in the first eighty positions of this ranking.

Table 7: Percentage of articles appearing in the first positions of the ranking

Ranking | Percentage of articles

It is worth noting that a significant share of the publications appeared in journals that have a high impact on the scientific community and, therefore, a wider diffusion. The table reports only the top-ranked journals, but the trend is similar for the lower positions. I therefore conclude that simulation is not a methodology reserved for niche journals[7] and that the publication outlets reveal a genuine difference in generality.

* Concluding remarks

Two sets of considerations emerge from the present analysis. The literature on the topic agrees that simulation is a very special way of conceiving and developing scientific theories. With more advanced techniques such as agent-based models it is possible to obtain a richness of description, together with rigour, that is not attainable through mathematical and verbal modelling. However, these properties, in combination with the recent origin of simulation as a methodology and with the multiplicity of techniques, render the literature quite clumsy and vague concerning the role and the different potentialities of simulations.

In the field of economics, this situation is worsened by the habit of authors, who often do not clarify the nature of their simulations, and by the fact that simulation is in stark disagreement with the received way of modelling. On the theoretical side, since economics took the route traced by Samuelson, mainstream economists have devoted their efforts to proving theorems and finding stability conditions. Simulation, with its experimental vocation, seems ill-suited to the standards of rigour set after the mathematisation of economic theory[8].

Simulation in most cases proposes a candidate explanation of the phenomenon under study, often in numerical or graphical terms. The regularities and measurements that derive from simulated models lack the generality that is pursued by economists, often at the price of sacrificing the realism and applicability of their endeavours[9]. A very important contribution in raising this issue comes from the works of Deirdre McCloskey (1985; 1998), who defends the use of simulation in economics, referring especially to microsimulations. From the perspective of the history of economic thought, Colander (2003) reports a positive change of attitude towards simulations, mainly those borrowed from physics[10]. Finally, simulation is a kind of fil rouge that links a great portion of the works related to the complex adaptive systems paradigm[11]. This relationship is unavoidable, since the interpretation of economic phenomena in terms of decentralisation and heterogeneity leads to non-linearities that are not amenable to the traditional analytical tools (Axelrod 1997).

Returning to the analysis conducted in this paper, however, in the face of this growing attention, it seems that such debate has not yet affected the researchers' work since the theoretical use of simulation is still negligible with respect to the instrumental one.

* Notes

1I would like to thank Pietro Terna and two anonymous referees for extremely perceptive comments. The usual caveat applies.

2Book reviews and references to simulation conducted in other publications are excluded.

3For a thorough discussion about the epistemology of simulation in social science see David et al. (2005).

4In this representation econometric and statistical techniques are not included.

5For instance, an anonymous referee suggested that authors could be more interested in the problems they are analysing than in clarifying the methods they are using.

6It might be the case that a work that - from the title or the abstract - appears to be as purely methodological contains an application to a given sector. This would imply a bias in the relative weight of the sectors listed in table 4. However, since there is no reason to think that the bias is systematic, the shares of explained works across sectors should not be dramatically affected by it. In any case, ambiguity generated by the lack of details is precisely what repeatedly emerges from this analysis.

7As stressed above, in the population there are no journals devoted to simulation while there are some field journals especially in environmental economics, agricultural and health economics.

8On the impossibility of interpreting economics as an experimental science see Debreu (1991, p. 2).

9Hahn (1991) believes that simulation will become more common in economics as the complexity view gains more consensus. However, he complains about the loss of generality of results and of elegance with respect to the axiomatic approach.

10On the approach of physicists to complex systems see Goldenfeld and Kadanoff (1999).

11For a survey of the most recent areas of interest in this field see Markose (2005) and the other papers in the special issue of The Economic Journal, vol. 115, published in June 2005.

* References

AXELROD R (1997), "Advancing the Art of Simulation in the Social Sciences". In Conte R, Hegselmann R and Terna P (Eds.), Simulating Social Phenomena, Lecture Notes in Economics and Mathematical Systems, Berlin, Springer-Verlag, pp. 21-40.

COLANDER D (2003), "The Complexity Revolution and The Future of Economics", Middlebury College Discussion Paper No. 03/19.

DAVID N, Sichman JS and Coelho H (2005), "The Logic of the Method of Agent-Based Simulation in Social Sciences: Empirical and Intentional Adequacy of Computer Programs", Journal of Artificial Societies and Social Simulation, 8(4), https://www.jasss.org/8/4/2.html.

DEBREU G (1991), "The Mathematization of Economic Theory", The American Economic Review, 81(1), pp. 1-7.

EPSTEIN JM (1999), "Agent-based Computational Models and Generative Social Science", Complexity, 4(5), pp. 41-60.

EPSTEIN JM and Axtell R (1996), Growing Artificial Societies, Cambridge (Mass.), MIT Press.

FORRESTER JW (1958), "Industrial Dynamics - A Major Breakthrough for Decision Makers", Harvard Business Review, 36(4), pp. 37-66.

GILBERT N and Troitzsch KG (1999), Simulation for the Social Scientist, Buckingham, Open University Press.

GOLDENFELD N and Kadanoff LP (1999), "Simple Lessons from Complexity", Science, 284, pp. 87-89.

HAHN F (1991), "The Next Hundred Years", The Economic Journal, 101, pp. 47-50.

HOLLAND JH (1975), Adaptation in Natural and Artificial Systems, Ann Arbor, University of Michigan Press.

KALAITZIDAKIS P, Mamuneas T and Stengos T (2003), "Ranking of Academic Journals and Institutions in Economics", Journal of the European Economic Association, 1, pp. 1346-1366.

MARKOSE S (2005), "Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems", The Economic Journal, 115, pp.159-192.

MCCLELLAND J and Rumelhart D (1986), Parallel Distributed Processing, Cambridge (Mass.), MIT Press.

MCCLOSKEY D (1985), The Rhetoric of Economics, Madison, University of Wisconsin Press.

MCCLOSKEY D (1998), "Simulating Barbara", Feminist Economics, 4(3), pp. 181-186.

ORCUTT GH (1957), "A New Type of Socio-Economic System", Review of Economics and Statistics, 58, pp. 773-97.

WIENER N (1956), I am a Mathematician, New York, Doubleday.



© Copyright Journal of Artificial Societies and Social Simulation, 2006