
Computational Techniques for Modelling Learning in Economics

Edited by Thomas Brenner
Dordrecht: Kluwer Academic Publishers
1999
Cloth: ISBN 0-7923-8503-9

This review is reprinted from The Journal of Evolutionary Economics, 10(5), October, pages 585-591. It is copyright Springer-Verlag 2000 and must not be reproduced without permission.


Reviewed by
Edmund Chattoe
Department of Sociology, University of Oxford, UK.


It is always difficult to review an edited volume without simply describing each of the contributions. Fortunately, the editor of this collection provides a standard by which it can be judged:

"The main focus [of the book] is to describe these [learning] techniques, give some examples of applications, and discuss the advantages and disadvantages of their use." (p. vii.)

The book meets all three objectives, but with differing success. The strengths and weaknesses of the book are revealing, both for what they say generally about economic methods and also about the relationship between evolutionary economics and other approaches.

In this review, I shall draw out three distinctions that are relevant to the research described in the book and use them to organise my discussion of individual contributions.

The first distinction, which can be dealt with relatively quickly, is that between instrumental and descriptive simulation. It is still a widely held belief among economists that the only purpose of computers is to tackle mathematical models that are too complex to be solved analytically. This is instrumental simulation and it is to be judged by pragmatic criteria. Does the use of computers permit a given model to be solved more quickly, more accurately, more easily or for a wider range of parameter values? The value of the model itself is not to be questioned. This approach is distinct from descriptive simulation, which asserts that the capabilities of programming languages allow different kinds of models to be developed (Ostrom 1988). Simulation is essential, for example, when economic agents are believed to have systematically different models of their environment or when the environment operates autonomously. Both of these features are important to evolutionary models. The first is essential to explain "real" novelty - the generation and propagation of new models in a population - while the second is needed to explain irreversible effects. Machinery in a factory, for example, will continue to deteriorate unless maintenance is carried out (Chattoe 1996). The standards by which descriptive simulations should be judged are different. Are they valid, reproducible, robust and interesting?

The first two chapters in the book provide an overview of descriptive simulation and some justification for its use in learning models. Like several others in the book, the chapter by Kwasnicki ("Evolutionary Economics and Simulation") is interesting but suffers from unclear objectives. It presumes too much to serve as an accessible introduction, while covering such diverse ground that it will not satisfy the expert in any one area. It also suffers from another difficulty shared with other contributions: an idiosyncratic view of the literature. Specifically, it makes no mention at all of the "evolutionary algorithm" strand of learning simulation, involving work by Arifovic, Axelrod, Curzon Price, Dawid, Edmonds, Marengo, Vriend and several others. Obviously, restricted space makes complete coverage impossible, but the chapter makes no analytic or pragmatic statement about why some models were described in detail, while others were omitted altogether. By contrast, Troitzsch ("Simulation as a Tool to Model Stochastic Processes in Complex Systems") limits himself to two objectives: a schematic history of simulation techniques and an examination of some of the implications of simulation methodology, discussed in the context of "worked examples". In consequence, he is able to take less for granted and the novice reader will get a better feeling for the simulation approach from this chapter.

Given that descriptive simulations are still relatively rare in economics, it is heartening that a high proportion of the remaining chapters fall into this class. Of the two that definitely do not, one is much more satisfactory than the other. The better chapter ("Neural Networks in Economics: Background, Applications and New Developments" by Herbrich et al.) makes it quite clear from the outset that it is going to deal purely with the instrumental use of neural networks as pattern recognisers and does so effectively. However, the paper also acknowledges descriptive simulations of boundedly rational agents in its literature review. The less satisfactory paper ("Bayesian Learning in Optimal Growth Models Under Uncertainty" by Islam) uses simulation to analyse a stochastic version of an existing model in the literature. Unfortunately, as is common with papers of this type, nothing is said at the outset about the (real or stylised) set of facts that the model seeks to explain.[1] Although the new model "relaxes" the assumption of certainty in the previous model, it does so with the addition of so many further assumptions and caveats that any progress made is solely in economic technique and not in understanding the original problem. It is particularly revealing that page 290 (which discusses the chosen modelling techniques) says nothing at all about human behaviour or real data, only about technical issues and prevailing traditions amongst economists! The conclusion of this paper is an eloquent testimony to the value sometimes added by rigorous economic theory:

"The future of the economy becomes unsustainable as pollution increases and environmental degradation accelerates." (p. 299)

One reason for making the distinction between instrumental and descriptive simulations is that the chapters which fail to do so suffer in terms of coherence. It is also hard for the reader to know how to evaluate them. This is true of the papers by Curzon Price ("Can Learning-Agent Simulations Be Used for Computer Assisted Design in Economics?") and Shubik and Vriend ("A Behavioral Approach to a Strategic Market Game"). Curzon Price introduces the concept of Economic Computer Assisted Design (Eco-CAD) and illustrates it using a simulation of price setting in electricity markets. He is interested in building simulations that are instrumentally "adequate" for policy explorations into the effects of different institutional arrangements in markets. The simulation model he presents is particularly innovative because it is co-evolutionary: competing strategies are evaluated not against a fixed "fitness function" (which would be hard to justify in most social situations) but against other strategies, just as they are in the real world. However, as he rightly points out, we do not know anywhere near enough to build simulations adequate for market design. Unfortunately, the import of this conclusion (and its connection to the discussion of his model) is not clear. The "orthodox" models also suffer from exactly the same weakness, although they are typically less humble about it. The evolutionary model could be well defended on either instrumental or descriptive grounds in detailed comparison with the existing models. Instead, it ends up being used as inadequate support for a general methodological point.
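
To make the co-evolutionary idea concrete, here is a minimal Python sketch of my own (not Curzon Price's code; the duopoly payoff function and all parameters are invented): each strategy is scored against the current population of rival strategies rather than against a fixed fitness function.

```python
import random

def payoff(price_a, price_b, cost=10):
    """Toy duopoly payoff: the lower-priced firm captures the whole market."""
    if price_a < price_b:
        return (price_a - cost) * 100
    if price_a > price_b:
        return 0
    return (price_a - cost) * 50  # the market is split on a tie

def coevolved_fitness(strategy, population):
    """Fitness is the average payoff against every current rival,
    not the score on a fixed, exogenous fitness function."""
    rivals = [s for s in population if s is not strategy]
    return sum(payoff(strategy, r) for r in rivals) / len(rivals)

population = [random.uniform(10, 30) for _ in range(20)]  # prices as strategies
fitnesses = [coevolved_fitness(s, population) for s in population]
```

Because the rivals themselves change from generation to generation, the fitness landscape moves under the strategies' feet, which is exactly why a fixed fitness function would be hard to justify in most social situations.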

The same difficulty arises in the Shubik and Vriend paper. They prove analytical results for a simple strategic market game using dynamic programming and then use an evolutionary approach (a Classifier System) to explore the results for a broader class of such games. A descriptive version of this paper might claim that real agents facing a complex decision are acting "something like" the Classifier System and, furthermore, that this approach requires less extreme assumptions about knowledge and rationality. In this case, we would expect a behavioural rationale for using a Classifier System, rather than some other technique (Chattoe 1998). In particular, there would be issues about interpreting the evolved collection of classifiers as a "world model" held by the agent(s) rather than simply a set of unconnected cases or a data mining mechanism. An instrumental approach might suggest that the Classifier System was a technically effective method for identifying equilibria in games. In this case, we would expect the paper to report repeated runs of the simulation demonstrating this contention for a variety of parameter values. Unfortunately, the paper takes neither line clearly and the reader is left wondering what the relationship between theory, simulation and human behaviour is supposed to be.
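
For readers who have not met the technique, the following Python fragment is a minimal sketch of the ingredients of a Classifier System, not Shubik and Vriend's implementation: the state encoding, the actions and the strength update are all invented illustrations. Whether the evolved rule set should be read as a "world model" held by the agent or as a bag of unconnected cases is precisely the interpretive question just raised.

```python
def matches(condition, state):
    """A condition such as '1#0' matches a state string bit by bit,
    with '#' as a wildcard standing for either bit."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

# Each classifier is a condition-action rule with an evolving strength.
classifiers = [
    {"condition": "1#0", "action": "bid_high", "strength": 1.0},
    {"condition": "0##", "action": "bid_low", "strength": 1.0},
]

def act(state):
    """Fire the strongest classifier whose condition matches the state."""
    matching = [c for c in classifiers if matches(c["condition"], state)]
    return max(matching, key=lambda c: c["strength"]) if matching else None

def reinforce(classifier, reward, rate=0.1):
    """Move a fired classifier's strength towards the payoff it earned."""
    classifier["strength"] += rate * (reward - classifier["strength"])
```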

The next important distinction underpinning the papers in this book is between models assuming that agents have a common model of their world, which I shall call parameter learning, and models which do not, which I shall refer to as model learning. The obvious weakness of parameter learning is that it begs the question. There has to be some higher level mechanism or organising principle ensuring that everyone has somehow acquired the same world model.[2] In fact, the distinction between the two types of learning is not completely sharp. Several of the papers in the book represent learning as involving populations of candidate solutions using techniques like Genetic Algorithms (Beckenbach, Curzon Price, Marks and Schnabl), Genetic Programming (Edmonds) and Classifier Systems (Shubik and Vriend). Whether this corresponds to model learning or parameter learning depends on the assumptions made about behavioural mechanisms by which new candidate solutions are generated. At one extreme, each binary string in a Genetic Algorithm could represent a possible market price set by a firm (Arifovic 1990). In this case, the observation and imitation of a fragment of a string belonging to another firm requires a common representation and awareness of the "position" from which the fragment originated. (The meaning of a "1" depends both on its position and the encoding procedure.) This is effectively parameter learning since it requires common knowledge of all but the numerical values. By contrast, candidate solutions based on Genetic Programming only require the imitator to understand the meaning of each symbol and its syntactic relation to a small number of other symbols in order to generate meaningful candidate solutions. Despite taking familiarity with the approach for granted, Beckenbach ("Learning by Genetic Algorithms in Economics?") provides a useful discussion of the role of Genetic Algorithms in economic learning. He also advances this research with some exploratory simulations. Marks and Schnabl ("Genetic Algorithms and Neural Networks: A Comparison Based on the Repeated Prisoner's Dilemma") provide a comparison of behaviour in the repeated Prisoner's Dilemma when adaptive agents are represented by different learning mechanisms. Although the results are not striking, this paper is useful because the comparative approach is badly neglected in learning simulation.[3] Very few researchers attempt to justify their choice of learning mechanisms in terms of a plausible spectrum of alternatives.
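
The point about positional encoding can be made concrete with a short illustrative sketch in Python (my own, loosely in the spirit of Arifovic 1990; the string length and fragment positions are invented). Imitating a fragment of a rival's string only produces a meaningful price because both firms share one and the same encoding.

```python
import random

STRING_LENGTH = 8  # an 8-bit price in the range 0 to 255

def decode_price(bits):
    """The meaning of a '1' depends on its position: bit i from the
    right contributes 2**i to the price."""
    return sum(bit << i for i, bit in enumerate(reversed(bits)))

def imitate_fragment(own, other, start, end):
    """Copy bits start..end-1 from another firm's string. This is only
    meaningful because both firms share a common encoding: common
    knowledge of everything except the numerical values themselves."""
    return own[:start] + other[start:end] + own[end:]

firm_a = [random.randint(0, 1) for _ in range(STRING_LENGTH)]
firm_b = [random.randint(0, 1) for _ in range(STRING_LENGTH)]
new_a = imitate_fragment(firm_a, firm_b, 2, 5)
print(decode_price(firm_a), decode_price(new_a))
```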

By contrast, the unsatisfactory nature [4] of the paper by Huck, Müller and Strobel ("On the Emergence of Attitudes Towards Risk: Some Simulation Results") illustrates the logical problems of parameter learning models. Agents play repeated games "rationally", based on payoffs involving subjective utility functions with varying degrees of risk aversion. However, the environment selects agent types (with different risk aversions) on the basis of money payoffs rather than utilities. The result is that risk aversion "evolves" in the population. This finding is extremely problematic. Not only is it assumed that all agents have the same (highly rational) way of playing games, but also that this way of playing remains systematically out of tune with environmental circumstances by assumption. Risk aversion does not co-evolve as part of a more effective representation of the problem of game play. Instead, it is a desperate environmental attempt to rescue the agents from the fact that they have been saddled with the wrong model of the world and are unable to exercise any control over their own decision mechanisms! Surely this is not a very plausible state of affairs to study?
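
The mismatch can be stated in a few lines of illustrative Python (mine, not the authors'; the lottery, the utility function and all parameters are invented): agents choose by expected utility, but the environment reproduces them in proportion to realised money.

```python
import random

def utility(money, risk_aversion):
    """Concave subjective utility; larger risk_aversion = more risk-averse."""
    return money ** (1 - risk_aversion)

def choose(risk_aversion):
    """Agents decide by maximising expected utility..."""
    safe = utility(10, risk_aversion)          # 10 for certain
    risky = 0.5 * utility(25, risk_aversion)   # 25 or 0 on a fair coin
    return "safe" if safe >= risky else "risky"

def realised_money(choice):
    """...but selection operates on realised money, not on utility."""
    return 10 if choice == "safe" else random.choice([25, 0])

population = [random.uniform(0.0, 0.9) for _ in range(50)]  # risk aversions
payoffs = [realised_money(choose(r)) for r in population]
# Reproduction is proportional to money payoff, however the agents felt about it.
population = random.choices(population, weights=payoffs, k=len(population))
```

The agents' decision rule and the selection criterion never come into contact: risk aversion "evolves" only because the environment keeps correcting for a decision mechanism the agents are never allowed to revise.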

The final section of the book is devoted to "Cognitive Learning Models". Edmonds ("Modelling Bounded Rationality in Agent-Based Simulations Using the Evolution of Mental Models") represents agent strategies ("mental models") as Genetic Programs. This approach is satisfactory in generating relevant aggregate behaviour, but the interpretation of individual strategies is very hard, so one cannot tell whether these are particularly plausible. (This is a continuing problem with both neural networks and Genetic Programming. For instrumental applications, interpretation does not matter as long as the candidate solution does the job. For plausible descriptive simulations it does matter, of course.) One important feature of this approach is that the Genetic Programs have a syntax that allows for different types of learning (time series projection, "rules of thumb", social and individualistic data input). This is a definitive example of model learning in that the type of adaptation mechanism is not tightly defined by the researcher at the outset but evolves in response to the environment and behaviour of other agents. Another strength of this paper is that it begins (pp. 306-307) with a characterisation of the learning task facing real individuals and a discussion of the way that the chosen method (Genetic Programming) represents that task. You may not agree with the characterisation of individuals put forward by Edmonds, but at least you are clear about what it is.
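
A hypothetical sketch (with invented primitives, not Edmonds's actual function set) shows how the kind of learning can be left open in this way: the primitive set mixes time-series terms, social observation and rules of thumb, and which of these an evolved tree actually uses is an outcome of the simulation rather than a decision by the modeller.

```python
import random

# Illustrative primitive set mixing different kinds of input.
PRIMITIVES = {
    "lag":    lambda env, t: env["my_price"][t - 1],                    # time series
    "trend":  lambda env, t: env["my_price"][t - 1] - env["my_price"][t - 2],
    "rival":  lambda env, t: env["rival_price"][t - 1],                 # social input
    "markup": lambda env, t: 1.1 * env["cost"],                         # rule of thumb
}

def random_tree(depth=2):
    """Grow a random mental model: leaves are primitives, internal
    nodes average their two subtrees."""
    if depth == 0:
        return random.choice(list(PRIMITIVES))
    return ("avg", random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env, t):
    """Evaluate a mental model at time t."""
    if isinstance(tree, str):
        return PRIMITIVES[tree](env, t)
    _, left, right = tree
    return 0.5 * (evaluate(left, env, t) + evaluate(right, env, t))

env = {"my_price": [12, 13, 14], "rival_price": [11, 12, 13], "cost": 10}
print(evaluate(random_tree(), env, t=2))
```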

Brenner ("Cognitive Learning in Prisoner's Dilemma Situations") faces the difficult task of incorporating attributions into game theory. He suggests that individuals identify players as "types" and then interact with them accordingly. This approach, while interesting, faces two serious problems. Firstly, the information available to game players is extremely "thin" (only the game outcomes). This means that outcomes involved in attribution and outcomes involved in play cannot be separated by an outside observer.[5] This leads to the second problem: the consistency of models, "types" of players and attributions. If I am a "type" who always defects in play, I cannot do experiments to establish the type of my opponent without compromising my observed status as a "defecting type". If I am allowed to experiment, I should be aware that co-operation from opponents does not necessarily signify that they are not defecting types, since they could also be experimenting. Agents obliged to reason like this will rapidly end up in regress and circularity. Finally, only very simple types (all C and all D) are identifiable in play. Most patterns of C and D will be hopelessly ambiguous.

The problem facing Brenner is a revealing one. In fact, all learning models, even the most traditional economic ones, are implicitly cognitive. If the assumptions they make about individual behaviour do not correspond to that behaviour in some sense, then their interest is purely theoretical. It is true that the papers by Brenner and Edmonds are linked by more detailed discussions of cognitive process than traditional models usually bother with, but what divides them is more significant. In the paper by Edmonds, the models which agents have of the world are made up of relatively "everyday" concepts. By contrast, in Brenner's model, the agents are actually hampered by the fact that they are trying to operate models more appropriate to theorists than everyday actors, particularly in an environment that is apparently too "thin" to support this level of sophisticated reasoning.

Different assumptions about the kinds of models which individual agents have also bear on the distinction between parameter and model learning. Model learning not only allows for the fact that individuals may have different models of the world, but also recognises that the kind of models developed by theorists may be different again: particularly since they have different goals and tools. (In particular, the theorist's model may consist of a simulation in which agents are allowed to have different models of the world. Even here the theorist will be obliged to leave out the impact of their theorising on the world at large.) By contrast, parameter based learning requires, but does not explain, congruence between models of the world in individuals and also between those models and that of the theorist.[6] In some cases, as suggested by Brenner's paper, what is an effective aggregate theory from the theorist's point of view may not be consistent for each agent in the simulation to apply to the behaviour of the others.

In conclusion, the book succeeds fully in the first goal of providing a typical cross-section of computational approaches to learning.[7] To this extent, a researcher in the field cannot really afford to be without it. More than half of the chapters in the book also satisfy the second goal, by providing a useful guide to the techniques presented. Thus novices will also find useful material here if they read cautiously. (In particular, they should be aware that the coverage of the literature by individual chapters is idiosyncratic.) The goal that is least well served is in a sense the most important for the future of research in this field. Surprisingly little is said about the choice of techniques for particular problems, the systematic strengths and weaknesses of different techniques (as opposed to the technical difficulties of using them), the relevant dimensions for any kind of typology and the kind of overall role that learning models are supposed to be serving in economics. The book conveys an impression of creative contributors, each busily working in their own way, but without enough reflection on the implications of what others are doing. Put another way, although the book shows well what is happening in the field, it does not say much about what should be happening, and the weaknesses of some papers suggest that this lacuna needs addressing. Finally, I have two passing observations. Firstly, the book needed more editing. There were a surprising number of spelling and grammatical errors, sentences that were hard to understand and references that were missing or incorrect. Secondly, some of the less satisfactory chapters show that technical sophistication is no substitute for a clear grasp of a practical problem and its internal logic. With very few exceptions, it was the most formal papers that had the least to say. All in all, this is a useful but not outstanding contribution to the literature.

* Notes

1 This is also true of the paper "Memory, Learning and the Selection of Equilibria in a Model with Non-Uniqueness" by Barucci which starts from previous theory and not from any correspondence with an empirical problem.

2 The obvious advantage in terms of theoretical "tidiness" is that common models are likely to lead to consistent expectations and hence relatively "learnable" environments.

3 Another contribution to the comparative approach is the chapter by Frenken, Marengo and Valente ("Interdependencies, Nearly-Decomposability and Adaptation"). This paper considers (in a formal way) the classes of problems that can be handled by different learning mechanisms.

4 The authors remark that their results differ from those of To and Ely and Yilankaya but do not explain this difference.

5 Of course, it can be assumed that observers can distinguish the two types of play through introspected common knowledge of behaviour patterns but this isn't very plausible.

6 One mechanism to ensure congruence between individual models and that of the theorist is good science. Unfortunately another is the normative and rhetorical task of convincing individuals that your scientific model is "rational" and should therefore be adopted.

7 Two additional chapters in the book ("Local Interaction as a Model of Social Interaction?" by Herreiner and "A Cognitively Rich Methodology for Modelling Emergent Socio-economic Phenomena" by Moss) enhance the coverage still further. The fact that they have not been discussed in this review is no reflection on their quality.

* References

ARIFOVIC J. 1990. Learning by Genetic Algorithms in Economic Environments. Working Paper Number 90-001, Economics Research Program, Santa Fe Institute, Santa Fe, NM, October.

CHATTOE E. 1996. Why are we simulating anyway? Some answers from economics. In K. G. Troitzsch, U. Mueller, G. N. Gilbert and J. E. Doran, editors, Social Science Microsimulation. Springer-Verlag, Berlin.

CHATTOE E. 1998. Just how (un)realistic are evolutionary algorithms as representations of social processes? Journal of Artificial Societies and Social Simulation, 1(3), <https://www.jasss.org/1/3/2.html>.

OSTROM T. M. 1988. Computer simulation: The third symbol system. Journal of Experimental Social Psychology, 24:381-392.


© Copyright Springer-Verlag, 2000