© Copyright JASSS


Gérard Ballot and Erol Taymaz (1999)

Technological change, learning and macro-economic coordination:
An evolutionary model

Journal of Artificial Societies and Social Simulation vol. 2, no. 2, <https://www.jasss.org/2/2/3.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 29-Dec-98      Accepted: 8-Apr-99      Published: 14-Apr-99


* Abstract

The purpose of this paper is to model the process of rule generation by firms that must allocate their resources between physical assets, training, and R&D, and to study microeconomic performances as well as aggregate outcomes. The framework is a complete micro-macroeconomic Leontieff-Keynesian model initialised with Swedish firms, and it provides one of the first applications of the "artificial world" methodology to a complete economic system. The model also displays detailed features of technological change and firms' human capital. In this complex and evolving Schumpeterian environment, firms are boundedly rational and use rules. They learn better rules in order to survive, and we model this process with classifiers. We are able to show that the diversity of rules is sustained over time, as is the heterogeneity of firms' performances. Simple rules appear to secure larger market shares than complex rules. The learning process improves macroeconomic performance to a large extent, whereas barriers to entry are detrimental to it.

Technological change, human capital, endogenous growth, artificial intelligence, artificial worlds, classifier systems, microsimulation, evolutionary theory

* Introduction

Explaining the successes and the failures of coordination of economic systems is one of the main tasks of an economist. Coordination successes seem to prevail, yet there are different levels of success. General equilibrium theory has provided a rigorous explanation of why coordination holds. However, this result is obtained in a world that is static and timeless. It lacks some essential features of real economic systems, among them technological change, which affects both the conditions of production and the set of goods available for consumption. A very important characteristic of technological change is the discrete nature of innovations, which has been emphasised by Schumpeter (1934). It cannot be captured by the trend of technical progress in the neo-classical growth models.

Technological change takes many forms: learning-by-doing, learning-by-using, learning-by-learning, incremental innovations, and radical innovations are popular distinctions. Firms compete in such an environment. It can no longer be assumed that they know the true model of the economy because it is too complex. However, since their decisions are taken by human beings, they learn from experience, and try to avoid making the same mistakes, provided they have not been kicked out of existence before. This knowledge is embodied in the rules of behaviour. Firms use rules, and they modify these rules in response to their failure or success, or to their rivals' performance, to the extent that they get the information. Coordination should be studied in that dynamic environment. Evolutionary theory offers a challenging framework to the (general) equilibrium models that assume perfect rationality[1]. However there is little value in criticising models built on standard theory if alternative models are not proposed.

Our first objective in this paper is to model the processes of rule generation in an environment in which technological innovation is already studied in a fairly detailed way. It could throw some light on very intriguing questions. Enquiries show the importance of the diversity of the rules used by firms[2]. The model should obtain such a diversity, but is such a diversity sustained by the same firms having their own rules, or is it due to a Darwinian selection of firms? Neo-classical theory would attribute diversity only to the selection of firms. Firms' performances are very heterogeneous in the real world, even in the long run (Mueller 1990). If the diversity of rules is sustained in the long run, do the firms keep rules that do not yield very good performances or do they continuously try new rules? How do technological waves affect the rate of generation of new rules?

The second objective is to study the emerging coordination, or the failure of coordination at the aggregate level[3]. Is the diversity of rules important for the growth of the economy, or is uniformity preferable?[4] Does it affect stability? Are some rules better for aggregate outcomes?[5] Does learning by heterogeneous firms lead to an unpredictable future, with the possibility of lock-in into inefficient situations? If there are lock-ins, is it possible to lock out? If such is the case, can the evolution be characterised by punctuated equilibria, i.e. long periods of coordination separated by short periods of disorderly transition (Somit and Peterson 1989)?

The two objectives and levels of analysis are intertwined, since rules and innovations are likely to have reciprocal influences, at both the micro and the macro level. Institutions and technologies co-evolve, as explained by Nelson (1994). Modelling these important interactions may yield important insights. Fortunately, we are able to use a model that already has three features that are necessary for work on our agenda. First, the model, named MOSES, is a complete micro-to-macro simulation model, in which firms and markets are explicitly represented. The model is initialised with data on real firms and calibrated on the Swedish economy. Second, process innovations and technological change are incorporated in a fairly detailed manner in the model (product innovation is not taken into account at present). Third, firms' decision procedures are already modelled as boundedly rational due to the complexity of the environment, but firms learn albeit with stable rules and parameters.

The basic model is ready for the next step that we undertake in this paper, namely, modelling the processes of generation and modification of rules of behaviour in firms that learn and try to improve their performance in the evolving world in which they compete with one another. This step is in the logic of a Schumpeterian framework, in which entrepreneurs strive to capture quasi-rents. Technological innovations are one device, but good decision procedures, or to put it in the ambitious language of contemporary economists, organisational innovations, are another device. The two types of innovations entertain relations that may have important distributional and macroeconomic effects that this paper will start investigating.

Important concepts and tools for evolutionary theorising and modelling have been designed, yet extremely few applications have been made to the evolution of rules in a macroeconomic environment[6]. Classifier systems, initiated by Holland (1976) and developed as a tool for Artificial Intelligence, constitute a flexible tool that we will use to model decision making, and genetic algorithms allow us to create new rules. A classifier system is a list of 'if-then' statements (or rules) called classifiers that map conditions into actions, and a set of algorithms that evaluate the performance of the rules and generate new ones. Rule discovery is not random but directed by the system's experience. A classifier system does not require that the firm build a causal theory of the economy in order to take decisions, as the rational expectations principle would require; this would be an impossible task in a very complex world. It does not even require consistency between rules.

Some other tools exist, such as neural networks, but the tools mentioned above perfectly fit our needs. However, in spite of Holland and Miller's (1991) plea for the construction of complex adaptive systems with these tools, we are not aware of the existence of models of a complete economy, i.e. with individual decision units, markets for goods, labour and credit, and macroeconomic accounts. One of the reasons for the scarcity of such models is the need to model precisely the institutional environment and the flows of information in addition to the firm and market specifications, and these complete models are costly to build in terms of time resources.

The paper is organised as follows. Section 2 presents the structure of the model. Section 3 describes the rule generation and selection module. The experiments and simulation results are discussed in Section 4. Section 5 offers some conclusions.

* A micro-to-macro model with human capital and technological change

MOSES (Model of the Swedish Economic System) has been constructed primarily to analyse industrial development, and is designed to reproduce the consequences on the macro accounts as well as on some distributional patterns concerning firms. The first version was published in 1977 (Eliasson 1977), and it has been continuously improved and updated[7]. Manufacturing is modelled both at the level of the individual firm and at the sectoral level, whereas elsewhere the sector is the basic unit. The manufacturing sector is divided into four industries (raw material processing, intermediate goods, durable goods, and consumer non-durables). Each industry produces a homogeneous product. There are 225 firms in the manufacturing industry in the base year, of which 154 are real and the others synthetic, so that the totals match the national accounts. The firms take decisions on all markets (products, labour, and capital), and these decisions are based on adaptive expectations. Markets are explicitly represented, and transactions occur without iteration of decisions, hence markets do not clear in general. Firms then find themselves with unused capacity, undesired stocks, and unfilled vacancies. The labour supply is assumed to be exogenous and there may be unemployment. The time period is the quarter. Firms revise decisions, but sometimes they are unable to avoid losses, and they are eliminated by competition. However, profits in an industry lead to the birth of new firms.

At the aggregate level the sectors interact through an eleven-sector Leontieff-type input-output structure which evolves with the relative weight of the firms and with the (endogenous) technological progress. However, in manufacturing, firms individually buy from other sectors and sell to other sectors. There is an aggregate household sector with a Keynesian consumption and savings function. There is also a Government that levies taxes and makes expenditures. The economy is open. Foreign product prices and the foreign interest rate are determined exogenously. Finally, the model is calibrated with real Swedish macroeconomic data to ensure its internal consistency, reproducing in a coarse way the evolution of the main Swedish macroeconomic variables over the calibration period, 1982-1990. Simulation results are used to understand and analyse the causal relations between various economic variables. We do not, of course, use the model to analyse the development trajectories of the real Swedish economy.

Firms' production and investment activities influence households' nominal income through wages (wage offers are formed endogenously; a profitable firm tends to expand its activities by hiring more workers, and may offer higher wages to attract the workers it needs), dividend payments to shareholders, and tax payments that can be used by the State to finance transfer payments. Households also influence firms' profitability through the demand they create in product markets and, indirectly, by supplying funds for investment to expanding firms through the banking system (there is only one bank in the model). There are, of course, complex secondary effects that link all firms and markets in the economy. Since firms produce homogeneous products (i.e., there is one price for all firms operating in a given industry), a firm's supply in the product market determines its market share. Firms, of course, cannot increase their output indefinitely because of decreasing returns. They can lower production costs at high output volumes only if they invest in expanding production capacity. The growth rate of a firm is limited by its ability to invest in new production capacity (which depends on the firm's net cash flows and its ability to borrow from the bank), and by the growth rate of demand for its products.

The base model appears already as embodying some of the most important ideas of Schumpeter and the more recent evolutionary contributions[8]. The disequilibrium framework allows for the existence of profits, but some firms drive other firms out of the market by taking better decisions, or having better luck. Firms try to obtain profits by searching for best positions in the employment-production plan, since they do not know the shape of their short-run production function. Usually they operate below the production frontier. In other words, there is X-inefficiency (see Eliasson (1991) for a detailed economic presentation of this search process).

Recent developments of the model by Ballot and Taymaz (1994; 1996; 1997; 1998) have introduced different forms of process innovation and, simultaneously, expenditures on training by firms. This module is summarised below because the rules and the rule generation system we introduce in this paper relate to the allocation of resources by each firm between expenditures on R&D, training, and fixed assets. In the base MOSES model, technological change is exogenous and the same for all firms. A firm improves its technological level by investment, and it is assumed that new investment embodies the best practice technology. The allocation of resources to the various activities is controlled by behavioural equations that have the same parameters for all firms. The modules we have introduced to the base model make technological change and firms' behaviour endogenous. Firms have to invest in R&D and training activities to learn about the technology through incremental and radical innovations. There are several decision types for allocating resources. The parameter values for each decision type are firm-specific and endogenously modified as a result of learning. Since the model is very complex, we refer the reader to the manual (see footnote 7 for details).

Modelling innovation in MOSES differs in an important way from the standard approach followed in both game theoretical models and in other evolutionary models based on bounded rationality[9]. The standard approach focuses on R&D and, sometimes, learning-by-doing as the determinant for innovation. Basic models consider the current flow of R&D expenditures, whereas more sophisticated models take into account the stock constituted by cumulated R&D expenditures, which is labelled as technological experience, knowledge, or competence. Learning-by-doing à la Arrow (1962) is exogenously introduced in some models such as in Chiaramonte, Dosi and Orsenigo (1993).

The fundamental idea in our module is that the level of human capital of the labour force in the firm is crucial for the discovery of innovations, the imitation of innovations of other firms, and the economic exploitation of these innovations. In our view human capital cannot be reduced to technological knowledge. It is broader. For instance it may help to choose between different technological options, some of which may involve a loss in the firm's technological knowledge, in favour of a more promising paradigm.

Treating human capital as a distinct variable is also useful for the study of economic policies[10]. Education policies are quite distinct from science and technology policies, and if consistency between them improves the efficiency of Government policy, the model should show it. Finally, endogenous growth theory considers either human capital or R&D as the crucial variable for growth; a synthesis would be important, since the two interact (Fagerberg 1994, p.1171).

As far as the treatment of human capital is concerned, the focus in the model is presently on training expenditures. A usual and useful distinction is made between general human capital and specific human capital. Specific human capital is simply non-transferable to other firms, whereas general human capital is embodied in the worker and moves with him. The two types of human capital have a hierarchical relation in our model. Specific human capital simply improves the efficiency of production. It is partly lost when a radically new technology is adopted. General human capital has four roles. First, it favours the discovery of new technologies, for a given level of R&D. Second, it enables the firm to imitate and improve the technologies of other firms: it raises the firm's absorptive capacity (Cohen and Levinthal 1990) or receiver competence (Eliasson 1994). Third, it decreases the cost of the training investment necessary to acquire specific human capital. Fourth, it improves the performance of the rule generation module of the classifier system.

Human capital theory in Becker's tradition (1964) shows that under perfect competition firms will not sponsor general human capital, since other firms would lure away the trained manpower. However, empirical studies show that firms do spend resources on general training. Our own response to Becker's argument is that a firm will rationally spend its resources on general training in order to win the race for an innovation and the associated quasi-rent (Ballot 1994). The expectation of the quasi-rent allows the firm to decide to pay both for the training and for the higher wages required. Ex post, some firms will not innovate first and may not earn the quasi-rent[11]. They may incur losses, while some others may still make profits after sharing the rent, to deter the trained workers from quitting. In a Schumpeterian world, Becker's theorem does not hold. Firms do not know exactly their chances of winning or losing, and more than one but not all will enter the 'race'. The training-for-the-rent hypothesis is consistent with the central role of technological innovation in the model, and has received empirical support (see Ballot and Taymaz 1993). Recent studies point to some determinants of the decision to allocate resources to general training, such as the quit rate and the chances of winning the race (the present endowment in human capital, the existing capital stock).

Innovations are embodied in investment, if we set aside learning-by-doing, which is disembodied. They are of two types. Incremental innovation allows an improvement in capital and labour productivity within the limits of an existing technology, which we label the "global technology". The essential point is that firms do not know the global technology. Radical innovation leads to a change in the global technology. Global technologies are also ranked by their potential productivity levels, and all the technologies that have the same limiting global technology belong to the same technological paradigm. Such a paradigm, called a techno-economic paradigm by Freeman and Perez (1988), corresponds to a cluster of inter-related innovations that affect most of the sectors[12]. We have introduced user-producer learning (Von Hippel 1988), which stimulates the diffusion of innovations between sectors.

The move towards the global technology within a paradigm is favoured by the increasing returns to adoption (Arthur 1988). This means that better paradigms might not develop if the returns to the present paradigm are satisfactory. Firms that have tried a better paradigm, but have found themselves isolated may even revisit worse paradigms. Several paradigms may be adopted in the manufacturing sector for a long time, involving lock-in effects. One not so good paradigm may also block the development of a better paradigm.

A technology is represented in the model by a set of techniques, and each technique is assumed to take only two alternative values, 0 or 1[13]. The global technology shows the best combination of techniques. The technological level of the firm is measured by its closeness to the global technology.
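This representation can be sketched in a few lines of code. The sketch below is an illustrative reading, assuming a fixed number of techniques and measuring closeness as the share of techniques that match the global technology; the paper does not specify the exact metric, so the names and the formula are assumptions.

```python
import random

N_TECHNIQUES = 16  # illustrative length; the paper does not fix it here

def technological_level(firm_tech, global_tech):
    """Closeness of a firm's technology to the global technology,
    measured here as the share of matching techniques (an assumption)."""
    matches = sum(f == g for f, g in zip(firm_tech, global_tech))
    return matches / len(global_tech)

# A technology is a set of techniques, each taking the value 0 or 1.
global_tech = [random.randint(0, 1) for _ in range(N_TECHNIQUES)]
firm_tech = [random.randint(0, 1) for _ in range(N_TECHNIQUES)]
level = technological_level(firm_tech, global_tech)  # in [0, 1]
```

A firm whose techniques all coincide with the global technology would have level 1; in the model firms do not observe `global_tech` and can only infer progress from productivity.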

Genetic algorithms are used as a tool to generate new technologies within a paradigm[14]. Firms recombine their own sets of techniques to obtain new ones (experimentation), recombine their sets with other firms' sets (imitation, but with modification), or invent some technique which they combine with the others (mutation). Only the innovations that improve productivity are adopted (selection operator). R&D for incremental innovations determines the extent of the recombinations, i.e. the proportion of the techniques that will be changed. General human capital determines the probabilities of out-search (imitation) and mutation. Out-search is more effective in improving the technological level of the firm because the variety is larger in the industry than in the present set of technologies known to the firm.
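The three operators and the selection step can be sketched as follows. The function names, the uniform recombination scheme, and the explicit productivity test are illustrative assumptions rather than the model's exact specification.

```python
import random

def recombine(own_tech, other_tech, n_swaps):
    """Experimentation or imitation: copy n_swaps randomly chosen
    techniques from other_tech into a copy of own_tech.  n_swaps plays
    the role of the extent of recombination set by incremental R&D."""
    child = list(own_tech)
    for i in random.sample(range(len(own_tech)), n_swaps):
        child[i] = other_tech[i]
    return child

def mutate(tech, p_mut):
    """Invention of techniques: flip each technique with probability
    p_mut (tied to general human capital in the model)."""
    return [1 - t if random.random() < p_mut else t for t in tech]

def select(current, candidate, productivity):
    """Selection operator: adopt the candidate technology only if it
    improves productivity."""
    return candidate if productivity(candidate) > productivity(current) else current
```

For instance, `select([0, 0], [1, 1], sum)` with a toy productivity function returns the candidate, while a worse candidate leaves the current technology unchanged.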

A radical innovation is made more probable by R&D expenditures aimed at this type of innovation, as well as by the level of general human capital, and knowledge spillovers from other firms. The paradigm in which the firm innovates is randomly drawn from an exponential distribution, so that the paradigms with the highest potential are less likely to be generated. The techniques of the new technology are also drawn randomly. The firm however adopts the new paradigm only if the prospects for learning are good enough to make it more efficient than the present paradigm after five years. The probability of a radical imitation depends on the same variables, and on the distance between the paradigm used by the firm and the paradigm to be imitated, and on the level of the paradigm used by the firm. The algorithms are otherwise similar.

To summarise, firms must allocate resources to different items. First, they must invest in physical assets since they embody new technologies, and also simply because physical assets (machinery and equipment) depreciate. Second, they should spend on training, general as well as specific. Finally they might benefit from spending on R&D, but they must then choose or find a balance between incremental and radical R&D. Profits result from the market process, and the precise relation with any of the mentioned expenditures is far too complex to be fully understood by a firm. Consequently the firms' decisions must be modelled as boundedly rational rules, but learning must also be integrated.

* Firms' allocation of resources and rule generation

We have designed a decision module which sets the rules used to determine desired expenditures on R&D, total training, and fixed assets[15], as well as planned output. In the permanent disequilibrium situation in which the firms and the economy stand, actual expenditures may be lower if the firm needs a loan from a bank and gets rationed. Once the firm has obtained its resources (net cash flow plus net borrowing), it allocates them among four assets: the three mentioned, plus liquid assets, in proportion to the desired levels. We have designed a two-level decision system. The first and higher level concerns the decision type (simple, informed, optimiser, and follower). The second and lower level concerns the setting of the parameters within a given type. Decision types represent different concepts of behaviour. In a complex environment, it is impossible to design perfect decisions. It is not only a problem of uncertainty; it is also a question of fully understanding the mechanisms at work, as well as of the feasibility of computing the solutions of the decision problem (Simon 1976). In this context economists disagree about how to model the behaviour of the firm, and offer different hypotheses. Students of management observe different decision processes. We have chosen a natural but admittedly arbitrary set of four types: simple (or rule of thumb, in the Cyert and March (1963) approach), short-run optimiser in the neo-classical tradition, informed, which incorporates the influence of a set of variables considered important by economists, and follower (imitation), a frequently observed behaviour.

These decision processes are radically different and cannot be generated by any learning process we know. They have to be specified exogenously. However firms may abandon a type that is not satisfactory and adopt another one.

Rules determine the parameters of the function within a given type. Classifier systems are used to generate new rules, and rules that perform badly are discarded. Rule generation is therefore endogenous.

When the amounts of expenditures on R&D, training and physical assets have been set, the firm decides on the allocation of total R&D and total training expenditures between their components: radical and incremental R&D, and general and specific training. The share of radical R&D is a decreasing function of the paradigm level of the firm, but an increasing function of the average paradigm level of the sector. This corresponds to a desire to catch up with other firms. However, it also increases with the past successes of the firm in radical innovations, and decreases with the rate of improvement in labour and capital productivity within the existing paradigm. This second set of determinants takes into account the (dis)incentives for a firm to enter a radical innovation race.

The budget for investment in training is allocated between general and specific training on the basis of a distribution parameter, and the share of specific training increases with the ratio of the productive capacity attainable with infinite specific human capital (QTOPFR) to the maximum output attainable with the actual stock of specific human capital (QTOP). This ratio indicates the potential for an increase in output if the specific human capital stock is raised. The firms modify these parameters on the basis of their experience by using classifier systems.
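A minimal sketch of this allocation is given below, assuming a linear dependence of the specific-training share on the QTOPFR/QTOP ratio; the paper does not give the functional form, so the `base_share` and `sensitivity` parameters are hypothetical.

```python
def training_allocation(budget, base_share, qtopfr, qtop, sensitivity=0.5):
    """Split the training budget between specific and general training.
    The specific share rises with QTOPFR/QTOP; the linear form and the
    sensitivity parameter are illustrative assumptions."""
    ratio = qtopfr / qtop  # >= 1: potential output gain from more specific HC
    specific_share = min(1.0, base_share + sensitivity * (ratio - 1.0))
    specific = specific_share * budget
    general = budget - specific
    return specific, general
```

When QTOPFR equals QTOP there is nothing to gain from more specific human capital and the split stays at the base distribution parameter; a larger ratio tilts the budget towards specific training.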

We present the decision types first and then rule generation and selection processes.

Decision types

We have designed four alternative decision types (Table 1).

Table 1: Decision types and determinants of the four actions

Change in planned output
- Simple: past change (+); past expectation errors (-); learning rate (+)
- Informed: same as simple type
- Optimiser: short-run profit maximisation problem
- Follower: same as simple type

Desired investment in R&D
- Simple: a percentage of sales revenue
- Informed: a function of general human capital stock (+), radical R&D (+), learning rate (+), sales revenue (+)
- Optimiser: same as informed type, but modified by the ratio of the rate of return to R&D to the interest rate
- Follower: imitate the average of the top 50 per cent of firms

Desired investment in training
- Simple: same as above
- Informed: a function of human capital stock (+), separation rate (-), best practice/average practice output (+), sales revenue (+)
- Optimiser: modified by the ratio of the rate of return to training to the interest rate
- Follower: same as above

Desired investment in physical capital
- Simple: same as above
- Informed: a function of capacity utilisation rate (+), fixed capital stock (+), rate of return minus interest rate (+)
- Optimiser: same as informed type
- Follower: same as above

Note: all parameter values are modified by the rules of the classifier system

Simple type

A simple decision type represents a rule-of-thumb. Firm i simply spends a certain (and different) percentage of its sales revenue on R&D, on training, and on fixed assets.
RD(i) = a(i1)* S(i)

TR(i) = a(i2)* S(i)

PK(i) = a(i3) * S(i)

where RD(i), TR(i), and PK(i) are firm i's desired expenditures on total R&D, total training, and fixed assets, and S(i) is its sales revenue. Through the classifier system, the firm learns which values of the parameters a(i1), a(i2), and a(i3) improve profitability and market share.
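The three equations above translate directly into code; the sketch below is a direct transcription, with purely illustrative parameter values in the usage line.

```python
def simple_type_budgets(sales, a1, a2, a3):
    """Rule-of-thumb decision type: desired expenditures are firm-specific
    fractions of sales revenue.  a1, a2, a3 correspond to a(i1), a(i2),
    a(i3), the parameters the classifier system later adjusts."""
    rd = a1 * sales  # desired total R&D expenditure, RD(i)
    tr = a2 * sales  # desired total training expenditure, TR(i)
    pk = a3 * sales  # desired fixed-asset expenditure, PK(i)
    return rd, tr, pk

# e.g. a firm devoting 2% of sales to R&D, 1% to training, 10% to fixed assets
rd, tr, pk = simple_type_budgets(1_000_000.0, 0.02, 0.01, 0.10)
```

The point of the type is precisely its poverty of information: only sales enter, and all adaptation happens through the learned parameters.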

Planned output is determined as in the standard version of the MOSES model (Eliasson 1985, p.154), with one modification. The planned change in output is computed from the difference between the planned change in sales and the planned change in the firm's price. The latter depends on a weighted average of past price changes and the actual change in prices last year, but also on the size of the underestimation of the actual change in past periods. The planned change in sales is determined in the same way, but we also introduce a positive influence of the average rate of increase in productivity.
Informed behaviour type

The informed behaviour type represents the decision process of managers who think they can pursue consistent policies, taking into account some information they have. It captures some important variables that our theoretical framework suggests, but it should not be forgotten that the parameters are also modified under the influence of information on economic variables (see 3.19 below).

Desired investment in R&D increases with general human capital, the past share of radical R&D in total R&D, the increase in productivity (PR), and sales revenue (S(i)).

The desired investment in total training is a decreasing function of the separation rate, and an increasing function of the ratio (QTOPFR/QTOP). Training expenditures also increase in proportion to sales and to total human capital. The latter effect corresponds to an assumption of coherence in manpower policy. The firm either cares about having a skilled labour force or does not care.

The desired investment in fixed assets increases as a function of the capacity utilisation rate (Qi/QMAXi(L)), the capital stock, and the net rate of return. The latter is the sum of the capital gains (the increase in the price of capital goods) and the rate of return, minus the borrowing rate of interest.
The optimiser type

The firm estimates its own production function and computes the optimal short-run output, and then derives its desired R&D and training investments. It should be clear that this behaviour is just another form of bounded rationality, since the world is too complex for exact optimisation. The firm estimates its (Cobb-Douglas) production function using input and output data for the last 20 quarters. However, a firm may not be very confident in this estimate, especially if the quantities have not varied much. Therefore it relies on the help of a Statistical Bureau, which estimates a sectoral production function on the aggregate time series of all firms. The firm then combines its own estimates with the Statistical Bureau's estimates of the sectoral production function to obtain "average" values.

The firm determines the optimum short-run output level from the maximisation of operational profits (net sales minus labour and material costs).

Then the firm determines, in a first step, the desired R&D investment as in the informed type. It then multiplies this desired investment by a factor based on the ratio of the rate of return on R&D to the rate of interest, RI. The rate of return on R&D is obtained by regressing the learning rate on R&D and training intensities. The desired training investment is determined in the same way.
The follower type

Finally the follower decision type represents a behaviour which seems common, and is more and more often introduced in economic models, since it may have important aggregate consequences: imitation. The firm desires to spend a certain share of its sales revenue on different assets, but this share is now a function of the average share spent by successful firms, i.e. the top 50 per cent of firms ranked by their profit margin.
RD(i) = e(i1)*S(i)*RDB

TR(i) = e(i2)*S(i)*TRB

PK(i) = e(i3)*S(i)*PKB

where RDB is the average share of R&D expenditures in sales of the successful firms, TRB is the average share of training expenditures in sales of those firms, and PKB is the average share of their fixed assets expenditures.
Changes in decision type

At the beginning of a simulation a decision type is randomly assigned to each firm. When a firm is dissatisfied with its performance, it will try to change its decision type. The change in the rate of return and the change in its market share are used as performance criteria. If the rate of return declines, the firm gets a warning for that quarter. If the firm loses some of its market share, it gets another warning. The firm will attempt to change the decision type when it gets twenty warnings in total. However the probability of changing the decision type increases with the general human capital of the firm, as a consequence of a higher economic competence of the managers.

The new type is selected randomly in the base simulation. Then, for a given decision type, the firm sets the rules randomly, and learns new rules by using classifier systems.
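The warning mechanism can be sketched as follows. The twenty-warning threshold and the two performance criteria are from the text, while the functional link between general human capital and the switch probability, and the list of type names, are assumptions for the sketch.

```python
# Sketch of the warning-driven change of decision type.
import random

DECISION_TYPES = ["informed", "optimiser", "simple", "follower"]

def update_warnings(firm):
    """Add one warning per failed performance criterion this quarter."""
    if firm["rate_of_return"] < firm["prev_rate_of_return"]:
        firm["warnings"] += 1
    if firm["market_share"] < firm["prev_market_share"]:
        firm["warnings"] += 1

def maybe_change_type(firm, rng=random):
    if firm["warnings"] >= 20:
        # Assumed form: higher general human capital -> higher switch probability.
        p = min(1.0, 0.5 + firm["ghc"])
        if rng.random() < p:
            firm["decision_type"] = rng.choice(DECISION_TYPES)
            firm["warnings"] = 0

firm = {"rate_of_return": 0.10, "prev_rate_of_return": 0.20,
        "market_share": 0.05, "prev_market_share": 0.06,
        "warnings": 18, "ghc": 1.0, "decision_type": "simple"}
update_warnings(firm)    # both criteria failed: two more warnings, total 20
maybe_change_type(firm)  # threshold reached; with ghc = 1.0 the switch is certain
```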

Rule generation and selection

It should be made clear that there are really two parts in the rules. The broad design of the rules of allocation has been set with the decision type. However the parameters can be changed according to the experience of the firm through a process which is described now. This process is not imposed by us in a naïve way such as "if variable x rises, parameter ai increases by y per cent". The learning process determines both the identities of the influential variables and their effects.

For each decision type, the firm uses a classifier system [16]. Such a system has three levels of activity: a performance activity, a credit assignment activity, and a discovery activity (Figure 1). At the lowest level is the performance activity. It is composed of a set of rules or, to use the technical term, classifiers. A classifier is a bit string of fixed length written over the ternary alphabet {0, 1, #}, where # is interpreted as "do not care". The first part of each bit string encodes a condition statement and the second part an action statement. Each firm keeps a certain number of rules in memory (32 rules in our study). Here is a possible list of the first four rules of a given firm, at some moment in time:
Figure 1: The classifier system

In the current specification, the condition part has five elements and the action part has four elements. In the action part, we find the decisions on the four investment variables that we have listed in Table 1. The first concerns the parameters of RD(i), the investment in total R&D, the second the parameters of TR(i), the total investment in training, the third the parameters of PK(i), the investment in fixed assets, and the fourth the planned output level EXPDQ(i). For each decision, the corresponding element takes the value 1 if it suggests to the firm to increase the values of the parameters, 0 if it suggests to lower them, and # if it does not suggest any decision, which, ceteris paribus, means keeping the parameter values unchanged.

In the condition part, we have assumed that the firm collects information on five variables or elements:

The # symbol means that the value taken by the variable has no effect on any of the actions that can be taken. When following a rule with a # element in the condition part, the firm does not pay attention to that variable.

For instance the first rule in Figure 1 states that, if the market share has risen (condition 1), if the relative rate of return has decreased (condition 2), if the human capital of the firm per employee is higher than market average (condition 4), and if the rate of learning is higher than market average (condition 5), then the firm will raise desired investment in R&D (action 1) and decrease desired investment in physical capital (action 3). It does not change its parameters concerning the investment in training (action 2) nor planned output (action 4).

The classifier system is used in the following way. At the end of each quarter, the firm observes the environment made of the five elements of the condition vector. It checks if it matches the condition parts of one or more rules in the classifier system.
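Matching an observation against condition parts with "do not care" elements can be sketched in a few lines; the rule strings below are illustrative.

```python
def matches(condition, observation):
    """A condition element '#' ("do not care") matches any observed value."""
    return all(c == "#" or c == o for c, o in zip(condition, observation))

observation = "01001"                  # five observed binary elements
assert matches("01#01", observation)   # '#' ignores the third element
assert matches("#####", observation)   # the maximally general rule matches anything
assert not matches("11#01", observation)
```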

To determine which rule will be applied if conditions match more than one rule, an auction takes place, and rules are ranked by bid values. A bid value rises with the strength value of a rule and with its specificity. The strength of a rule is a cardinal number that is positively related to its past successes. The specificity of a rule is negatively related to the number of #'s in the condition part. Specificity has the highest value when all conditions must be met to take actions. Higher specificity brings more relevance, since it conveys more information on the situation. Generality is just the reverse. The maximum generality is found in a rule with only # in the conditions part, since it brings no information.

The bid value of a rule is calculated as follows:
BID(j) = ST(j) /(1+GEN(j))
where ST(j) is the strength of the rule j, and where GEN(j) is equal to the number of # elements in the condition part of the jth rule. In our example, a 01#01 observation vector matches rules 3 and 4. Rules 3 and 4 lead to contradictory actions, since rule 3 suggests to increase training, while rule 4 suggests to decrease it. The rule with the highest bid will be selected.
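The auction can be sketched directly from this formula. The strengths and condition strings below are illustrative; we simply assume that rule 4 is more specific (fewer #'s) than rule 3.

```python
def bid(strength, condition):
    """BID(j) = ST(j) / (1 + GEN(j)), where GEN(j) counts the '#' elements."""
    return strength / (1 + condition.count("#"))

# With equal strengths, the more specific rule wins the auction.
rules = {"rule 3": (1.0, "0##01"), "rule 4": (1.0, "01#01")}
winner = max(rules, key=lambda r: bid(*rules[r]))
assert winner == "rule 4"
assert bid(1.0, "01#01") == 0.5     # one '#': BID = 1 / (1 + 1)
```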

If two (or more) rules suggest the same action, all of them will be applied for that action. For instance, rules 3 and 4 match the observation vector. Both suggest more R&D. Applying several rules involves an increase in the intensity of the change of the parameters for the concerned variable as follows. An intensity variable is defined for each action variable in a given firm. Rules are applied sequentially, by descending bid value.
INT(k) = INT(k-1) * (1 + P1*ACT)/(1 + P2*abs(lnINT(k-1)))
where INT(k) is the intensity variable for the application of the kth rule, and INT(0) = 1. P1 and P2 are parameters (with values .5 and 5 respectively in the present sets of experiments). ACT is the action (1 for increase, -1 for decrease, 0 for no change), and abs(.) the absolute value function. For instance, if two rules suggest increasing R&D, the first application gives INT(1) = 1.5, so that the parameters b(i1) and b(i2) are increased by 50 per cent. One sees that there are decreasing returns in INT. All parameters are changed in the same way except ci1, which relates the training expenditures to the rate of separation, and is assumed fixed, to keep the effects of changes proportional for all expenditures.
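A minimal sketch of the intensity recursion, using the parameter values P1 = .5 and P2 = 5 given above:

```python
# INT(k) = INT(k-1)*(1 + P1*ACT)/(1 + P2*|ln INT(k-1)|), INT(0) = 1.
import math

P1, P2 = 0.5, 5.0

def next_intensity(prev, act):
    # act: 1 for increase, -1 for decrease, 0 for no change
    return prev * (1 + P1 * act) / (1 + P2 * abs(math.log(prev)))

i1 = next_intensity(1.0, 1)   # first concordant rule: 1.5, i.e. +50 per cent
i2 = next_intensity(i1, 1)    # second concordant rule: damped by the log term
assert i1 == 1.5
assert 0 < i2 < i1            # decreasing returns in INT
```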

The credit assignment system evaluates the rules to set their strength values. This is a difficult task for several reasons. Overt rewards are rare: firms may get profits for reasons which have nothing to do with a decision they have recently taken on R&D. Moreover, the environment is perpetually changing, so that a rule may have been successful in the previous environment but no longer be successful under current conditions. The type of algorithm involved is called the bucket brigade algorithm. It treats each rule as a kind of middleman [17]. A rule deals only with its suppliers - the rules sending messages satisfying its conditions - and its consumers - the rules with conditions satisfied by the messages the middleman sends. Whenever a rule wins a bidding competition, it initiates a transaction wherein it pays out part of its strength to its suppliers. As one of the winners of the competition, the rule serves as a supplier to its consumers, and receives payments from them in turn. The rule's strength is then a sort of capital that measures its ability to provide a profit. If it receives more than it has paid out, it has made a profit and its strength is increased. Presently we use a very simple scheme: if the two performance criteria are decreasing, the strength values of all rules applied are decreased. Here all rules that have been winners of a bid in the past are considered as suppliers to be rewarded or punished.
ST(j,T) = ST(j,T-1) + r*PST*PER(T)*(APP(j,T-1) + (1-r)*APP(j,T-2) + (1-r)*(1-r)*APP(j,T-3) + (1-r)*(1-r)*(1-r)*APP(j,T-4) + ... )
where PER(T) is the performance of the firm at time T (1 if both criteria are satisfied, -1 if neither is satisfied, 0 if only one is satisfied). APP(j,t) takes the value 1 if rule j was applied in quarter t, and 0 otherwise. PST and r are parameters (.5 and .1 respectively), r being a discounting factor. For example, if the rule was applied 1, 2, and 4 quarters ago, then:
ST(j,T) = ST(j,T-1) + r*PST*PER(T)*(1 + (1-r) + (1-r)*(1-r)*(1-r))

If the rule has been applied every quarter since the Big Bang, then the value in parentheses approaches the geometric sum 1/r, and the strength change reaches its upper limit PST*PER(T).
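The strength update and its upper bound can be checked numerically; the function below is a sketch using the parameter values PST = .5 and r = .1 given above.

```python
# ST(j,T) = ST(j,T-1) + r*PST*PER(T) * sum_k (1-r)^k * APP(j,T-1-k).
PST, R = 0.5, 0.1

def strength_update(prev_strength, performance, applied_history):
    """applied_history[k] is 1 if the rule was applied k+1 quarters ago, else 0."""
    weighted = sum(app * (1 - R) ** k for k, app in enumerate(applied_history))
    return prev_strength + R * PST * performance * weighted

# A rule applied every quarter: the geometric sum approaches 1/r = 10,
# so the increment approaches the upper bound PST*PER(T).
bump = strength_update(0.0, 1, [1] * 1000)
assert abs(bump - PST) < 1e-6
```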

The third activity of the classifier system is the discovery activity. Genetic algorithms are used to generate new rules. Two standard methods are used. In mutation, an element of a rule is changed randomly. For instance a "1" becomes "0" or "#". Managers of firms with a high general human capital are considered to experiment more, and the probability of mutation is then increased in proportion to general human capital. In crossover, a part of a rule (a part of the bit string) is replaced by the corresponding part of another rule, which is either in the memory of the firm or in the memory of another firm. The probability of crossover is increased as a function of the firm's general human capital. If another firm is imitated, this firm is determined randomly, but the probability depends on the relative profit margin of the firm that is to be imitated. Once a firm has been selected for imitation, the rule in that firm is selected randomly, but the probability depends on the rules' relative strength value. Finally the crossover operation is performed. Imitation involves modification here, as is very often the case with real firms, which adapt the rules they imitate to their own corporate culture. Rules with the lowest strengths are discarded to keep no more than 32 rules in a firm's memory.
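The two discovery operators can be sketched on the ternary rule strings. The segment-based crossover form and the way human capital would scale the probabilities are assumptions for the sketch.

```python
# Sketch of mutation and crossover on rules over the alphabet {0, 1, #}.
import random

ALPHABET = "01#"

def mutate(rule, p_mut, rng=random):
    """Change each element to a different symbol with probability p_mut."""
    return "".join(
        rng.choice([a for a in ALPHABET if a != ch]) if rng.random() < p_mut else ch
        for ch in rule)

def crossover(rule, donor, rng=random):
    """Replace a random segment of `rule` by the corresponding part of `donor`."""
    i, j = sorted(rng.sample(range(len(rule) + 1), 2))
    return rule[:i] + donor[i:j] + rule[j:]

# A 5-element condition part plus a 4-element action part, as in the text.
parent, donor = "01#01" + "10#1", "11111" + "0000"
child = crossover(parent, donor)
assert len(child) == len(parent)
assert all(ch in ALPHABET for ch in mutate(parent, 1.0))
```

In the model the donor rule may come from the firm's own memory or from another firm, which is how imitation-with-modification enters the discovery activity.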

One sees that the selection process is biased towards the efficient combinations of elements at the firm and rule levels.

* Experiments

We have run four sets of experiments. 20 simulations were run in each set, differing only by the seed number of the pseudo-random number generator. This enables us to compute average results that are little affected by the stochastic nature of the model. After presenting the reference set (BASE), we describe an experiment in which the firms do not change their rules (NOLEARN). The experiment NODIV differs from the preceding only by assuming that all the firms use the same type (informed behaviour) and have the same rules randomly determined at the beginning of simulation. Finally we block entry in the BASE experiment (NOENTRY). We will present the aggregate results first, then the firms' decision types and their performances. Finally we will offer some analysis of rule generation in relation to technological change.

The effect of learning on aggregate coordination

Although it is not the focus of this paper, it must be mentioned that the experiments are able to reproduce changes of technological paradigms endogenously [18]. Figure 2 shows the diffusion of technological paradigms 2 and 3 over a fifty-year period, and the disappearance of paradigm 1. It looks like a smooth process: paradigm 2 is the first to invade the market, but it starts to be competed out by paradigm 3 after some time. Yet this figure represents the share of the paradigms averaged over 20 simulations, and a variety of patterns may occur in individual simulations, including the possibility of a lock-in into paradigm 2, which is less efficient than paradigm 3.
Figure 2

Investment in radical R&D and in general human capital favour radical innovations and the jump into a higher technological paradigm. The diffusion of such a paradigm will depend on the non-radical innovators' decisions. The learning process of firms, which determines the allocation of resources between R&D, training and other expenditures, and the further allocations between incremental and radical R&D, and general and specific training, will influence the rate of adoption of better paradigms.

Table 2 displays the average terminal aggregate variables for all experiments. The first result for the reference experiment, BASE, is that productivity growth, averaged over the 20 simulations, is reasonable when compared with the Swedish economy (see also Figure 3). For instance the existence of followers has not entailed behavioural bubbles with booms and collapses. Hence we can argue that some "bounded rationality" sets of rules, with learning possibilities for the agents, are compatible with an orderly aggregate outcome, at least in probabilistic terms.

Table 2: Macroeconomic performance

                             BASE     NOLEARN  NODIV    NOENTRY
G.N.P. (BILLIONS OF SKR)     3628     3219*    2913*    3407*
TECHNOLOGY LEVEL             2.38     1.83*    2.30     2.49*

* Significantly different from BASE at the 5 per cent level.

Table 3: Entry and exit rates and the Herfindahl index of concentration

                                 BASE     NOLEARN  NODIV    NOENTRY
NUMBER OF FIRMS                  231      231      246*     125*
SECTOR 1: RAW MATERIAL           0.102    0.129    0.170    0.316*
SECTOR 2: INTERMEDIATE GOODS     0.134    0.209    0.168    0.205
SECTOR 3: CAPITAL GOODS          0.097    0.166    0.226*   0.232*
SECTOR 4: CONSUMER GOODS         0.397    0.417    0.397    0.639*
ALL SECTORS                      0.183    0.231    0.241*   0.349*

* Significantly different from BASE at the 5 per cent level.

The BASE experiment exhibits the highest GNP. The NOLEARN experiment yields a lower GNP. This result shows clearly that learning is not a zero-sum game at the aggregate level. Firms that have been given bad rules by the random generator of rules, or badly matched types, may not have a chance to change them. In a model with Schumpeterian competition, it could happen that the selection of firms replaces the selection of rules. This is not the case, since the rates of entry and exit are almost identical in the BASE and NOLEARN experiments (Table 3). It seems that the NOLEARN experiment leads firms initially endowed with well-fitting rules to have badly fitting rules later on. Let us then state the first result.
Proposition 1
When firms learn rules of allocation of their resources, the economy has a higher GNP, a higher productivity growth rate, and a higher technological level in the long run.
Figure 3

The NODIV experiment imposes one decision type, namely informed behaviour, and the same rules on all firms. Productivity growth and the terminal GNP are lower than in NOLEARN. In NOLEARN some firms are endowed with good types and rules and can increase their output, partly at the expense of the market shares of the badly endowed firms, whose decline then has little aggregate effect; in NODIV, by contrast, all firms have the same competence. This result may show the importance of variety for macroeconomic performance, a distinctive evolutionary result, yet one that has not often been obtained in a complete macroeconomic model, since the required micro-to-macro models do not exist. However we cannot dismiss the possibility that the choice of the informed type as the unique type is responsible for the bad performance, since we have not experimented with the other types.

The experiments differ even more in the variation of their aggregate results. BASE has a low variance, since firms badly endowed in the initial conditions are able to learn better rules. NOLEARN has a higher variance since the random assignment of types and rules in the initial conditions cannot be changed, and is more or less satisfactory from one simulation to the other. NODIV has a still higher variance since, if the rule is bad (or good), it is given to all firms.

NOENTRY differs from BASE only through the blockage of entry. The aggregate outcome is significantly lower than in BASE, although better than in NOLEARN. Several mechanisms may be at work. As the new firms are generated so as to be just like the incumbent firms on average, some may be very good in their rules and in their technology, and may pull the economy. An additional mechanism is that they put competitive pressure on the incumbent firms that must search for better rules and types. This typical evolutionary result can be stated:
Proposition 2
Entry is an important determinant of macroeconomic performance.

Table 4: Distribution of firms by decision type (end of simulation)


* Significantly different from BASE at the 5 per cent level.

Table 5: Market shares by decision type

                      BASE    NOLEARN  NODIV   NOENTRY
INFORMED BEHAVIOR     24.7    16.1     100     19.8

* Significantly different from BASE at the 5 per cent level.

Firms' learning, decision types and microeconomic performance

Table 4 displays the distribution of firms by decision type at the end of the simulation. When diversity is allowed, the four types obtain almost equal shares in the different experiments. However the market shares in Table 5 differ much more. In BASE the simple type obtains a somewhat higher market share than the other types. When no learning takes place, and hence no change of type occurs, the simple type obtains almost half of the total market. In the case of no entry, it obtains a little less than the follower type. However these differences, although usually statistically significant, do not correspond to differences in the rates of return (Table 6), which are more diverse when no learning occurs. Hence we are able to state tentative but provocative results:
Proposition 3
No rule type eliminates the others in the long run.
Proposition 4
Simple rules (including imitation) are no worse than complex rules.

Table 6: Rates of return by decision type


In the complex and changing world that we have designed, it seems that the firms have difficulties in finding permanently satisfactory types. They change types 9 times on average in BASE. Since they change decision types when they are not satisfied, it is logical that the rates of return are very similar. It is the result of an arbitrage mechanism. When it does not exist, as in NOLEARN, the rates of return are more diverse.

Simple types seem fairly efficient for obtaining high market shares. Again, the world seems too complex to be efficiently understood by the informed and optimiser decision types. We cannot exclude that better designed decision types would do better, but this remains to be proved. This makes a strong case for the rationality of the simple rules which are so much in use in real firms. The result is deeper than a proof that boundedly rational rules cannot be competed out simply because a long-run optimisation programme taking into account all the information cannot be solved in a complex environment. The complex rules, which take into account some of the information, may then be biased and inefficient, and no better than simple rules. Finally it may be the case that the important learning processes take place at the level of the parameters of the decision functions, as shown below.

Technological change and rules generation

Table 7 displays the averages of the rules' strength values for all firms and the 20 simulations. The changes of rules in BASE and NOENTRY allow the firms to discard the bad rules, so that the average strength of the best rule or the mean rule of a firm is higher. When learning takes place, the rules are more successful, and this may explain why macroeconomic coordination is better.

Table 7: Rules' strength values at the end of simulation

                                            BASE    NOLEARN  NODIV   NOENTRY
MEAN STRENGTH VALUE PER FIRM (ON 32 RULES)  0.758   0.240    0.237   0.826

Figure 4 displays the average rate of change in the technological paradigms and the maximum rule strength in the BASE experiment. An interesting inverse relation appears. The following proposition can be stated:
Proposition 5
Paradigm changes entail a reshuffling in firms in terms of market shares and rates of return, and this reshuffling yields a decrease in the usefulness of existing rules, and a search for better rules and decision types.

In the complex and evolving environment that we have built, there is always the need for firms to experiment and select new rules and decision types. It is a striking result for instance that some rules in the memory of the best performing firm, at the end of a BASE simulation, are contradictory (however we are not certain that they have been used).
Figure 4

* Conclusions

The present paper offers four main results concerning rule learning and aggregate evolutions. The first result is that learning with boundedly rational agents is compatible with successful coordination in the economy, and aggregate growth. Moreover learning improves macroeconomic performance. When radical innovation takes place, rule learning is more intensive and this may be important for the success of the jump into a higher technological paradigm.

The second result is that entry stimulates the aggregate growth. Competitive pressure and diversity, which entails some very efficient firms, are the mechanisms at work. This evolutionary theorem is here obtained in a complete micro-to-macro model.

The third result is that diversity of rules is self-sustained (both at the level of decision types and of the rules in the classifiers). In spite of learning, there is no tendency towards uniformity. A fairly stable, steady state level of diversity occurs. This diversity seems compatible with the existence of a somewhat more efficient decision type, contradicting standard equilibrium models.

The fourth result, more tentative but stimulating, is that the "simple" decision type is no worse than complex types. One possible explanation is that the complex rules we have designed are not good ones. A better conjecture is that the factors for performance are very hard to discover in our complex and evolving economy, and that no decision model fits, whatever its sophistication, due to the state of flux of the economy. The simple decision type is compatible with heuristic learning on parameters (approximated by classifiers), and this may be a more efficient mode of behaviour.

A long agenda of research is in order. The speed of learning, though depending on general human capital, can also be modified to see if economic competence leads to more or less stability and aggregate activity. Previous results with the MOSES base model do not favour very high speeds of adaptation to markets (Eliasson 1991).

The model opens also a path to modelling learning with different categories of agents in the firm. General human capital gives economic competence to managers and therefore increases experimentation with new rules. It also gives more technological competence, in interaction with R&D, to engineers and researchers. Specific human capital and learning-by-doing determine the productivity of the production workers in a given paradigm, and general human capital the capacity to adapt to a new paradigm. These categories of agents have not been distinguished for the time being, but a disaggregation would allow the introduction of incentives, which we feel are important for workers' efficiency and firm performance, as well as for educational policy issues. The interactions between individual learning and organisational learning also constitute a major key for understanding the success or failure of rules and firms [19].


All decision types
Share of radical R&D in total R&D:


where μm is the average paradigm level of sector m and μi the paradigm level of the firm, SCi the number of past successes of the firm in radical innovations, νK the rate of improvement of productivity (or learning rate) of capital, and νl the learning rate for labour. ζ1m, ζ2m, ζ3m, and ζ5m are sector-specific parameters (m = 1, ..., 4); ζ4p is paradigm specific (p = 1, 2, 3).

Informed behaviour type
Desired R&D:

RDi = (bi1*GHCi*Li) + (bi2*rpi*wi*Si)  [2]

where GHCi is the general human capital stock, rpi the past share of radical R&D in total R&D, and wi the increase in productivity.

Desired total investment in training:

TRi = (1-ci1*s)*(QTOPFRi/QTOPi)*(ci2*Si) + (ci3*Li*(GHCi + SHCi))  [3]

where s is the quit rate, and GHCi and SHCi the general and specific human capital stocks.

Desired investment in fixed assets:

IKi = (Qi/QMAXi)*Ki*[di1 + di2*(QDPK + RRi - RIFi)]  [4]

where QDPK corresponds to the capital gains, RRi to the rate of return, and RIFi to the borrowing rate of interest.
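As an illustration, equation [2] for the informed type's desired R&D can be evaluated directly; all parameter values below are invented for the example.

```python
# Equation [2]: RDi = bi1*GHCi*Li + bi2*rpi*wi*Si, with illustrative numbers.
def desired_rd(b1, b2, ghc, labour, r_past, prod_growth, sales):
    return b1 * ghc * labour + b2 * r_past * prod_growth * sales

rd = desired_rd(b1=0.01, b2=0.1, ghc=2.0, labour=100.0,
                r_past=0.3, prod_growth=0.02, sales=500.0)
# First term 0.01*2*100 = 2.0 scales with the human-capital-weighted labour
# force; second term 0.1*0.3*0.02*500 = 0.3 scales with sales.
assert abs(rd - 2.3) < 1e-9
```

The parameters bi1 and bi2 are precisely the ones the classifier system raises or lowers when a rule's action element is 1 or 0.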


The authors are very grateful to Gunnar Eliasson for many stimulating ideas, and the three referees of the Journal for their very helpful comments.

Earlier versions of the paper were presented at the conference "Economic Evolution, Learning and Complexity" (Augsburg, Germany, May 22-25, 1997) and the SIMSOC Conference (Cortona, Italy, September 22-25, 1998).

A grant from the French Commissariat Général du Plan (subsidy 25.95) for research on firms' human capital is gratefully acknowledged. Erol Taymaz thanks the University Paris II for the invitation as a Guest Professor, during which part of the research was done.


1Dosi and Orsenigo (1994) provide a very thorough synthesis of what evolutionary theory has to offer that "neoclassical" theory (i.e. equilibrium plus unbounded rationality) cannot offer. Our present extension of MOSES is concurrent with their views.

2See Patel and Pavitt (1991) for some evidence.

3See Gilbert and Doran (1994) for concepts and simulation models of emergence in social sciences.

4Chiaramonte and Dosi (1992) supply an evolutionary model which shows the importance of diversity.

5Akerlof and Yellen (1985) have shown that small deviations from rationality may matter little for firms, but a lot for aggregate variables.

6Sargent's (1993) survey testifies this.

7A set of manuals give a full description of MOSES before the introduction of human capital and technological change: Albrecht et al. (1989; 1992), Taymaz (1991). A synthesis is given by Eliasson (1991). The model with a synthetic database is available for all researchers.

8See Dosi and Marengo (1994) for an excellent synthesis.

9See Reinganum (1989) for the first approach, which focuses on the determination of R&D expenditures in the patent race. Silverberg, Dosi and Orsenigo (1988) is an example of the second approach.

10Eliasson (1994) provides a very detailed and thought provoking analysis of human capital issues with the same intellectual focus, i.e. economic competence of firms and aggregate growth.

11However they may imitate fast and reap some rent.

12We prefer the term "technological paradigm" coined by Dosi (1982) for the sake of linguistic elegance.

13This assumption does not reduce generality. It corresponds to the use of a binary alphabet, which is very flexible.

14See Goldberg (1989) for an excellent introduction to genetic algorithms.

15For a prior modelling enterprise, dealing only with the R&D expenditures as a learning process, see Silverberg and Verspagen (1994). Another model, developed by Merlateau and Langrognet (1994), introduces learning of the best proportions of workers in the R&D department, and of trained workers in production, with the tool of neural networks.

16Booker, Goldberg and Holland (1990) provide a thorough introduction.

17We borrow the image from Booker, Goldberg and Holland (1990) , p. 255.

18Ballot and Taymaz (1998) provide a detailed analysis, in a version with fixed rules and parameters of allocation.

19For a recent survey of issues, see Cohendet, Llerena and Marengo (1994).


* References

AKERLOF G.A. and YELLEN J. (1985) Can small deviations from rationality make significant differences to economic equilibria? American Economic Review,75, 708-720.

ALBRECHT JW et al. (1989) MOSES Code. Stockholm: IUI.

ALBRECHT JW et al. (1992) MOSES Database. Stockholm: IUI.

ARROW K (1962) The economic implications of learning-by-doing. Review of Economic Studies, 29(2), pp 155-173.

ARTHUR WB (1988) Competing technologies: an overview. In Dosi G et al. (eds) Technical Change and Economic Theory, Pinter: London.

BALLOT G (1994) Continuing education and Schumpeterian competition. Elements for a theoretical framework. In Asplund R (ed.), Human Capital Creation in an Economic Perspective, Berlin: Physica Verlag/Springer-Verlag.

BALLOT G and TAYMAZ E (1993) Firms sponsored training and performance. A comparison between France and Sweden based on firms data. 5th EALE Conference, Maastricht Netherlands, Oct.1-3 (ERMES Working Paper n. 93-09).

BALLOT G and TAYMAZ E (1994) Training, learning and innovation. A micro-to-macro model of evolutionary growth. The International J.A.Schumpeter Society Conference on Economic Dynamism: Analysis and Policy, Münster, Germany, August 17-20.

BALLOT G and TAYMAZ E (1996) Firm sponsored training, technical change and aggregate performance in a micro-macro model. In Harding A (ed.), Microsimulation and Public Policy, Amsterdam: North Holland.

BALLOT G and TAYMAZ E (1997) The dynamics of firms in a micro-to-macro model with training, learning, and innovation. Journal of Evolutionary Economics, 7, 435-457.

BALLOT G and TAYMAZ E (1998) Human Capital, Technological Lock-in and Evolutionary Dynamics. In Eliasson G and Green C (eds.), Microfoundations of Economic Growth: A Schumpeterian Perspective, Ann Arbor: The University of Michigan Press, 301-330.

BECKER GS (1964) Human Capital. New York: NBER.

BOOKER LB, GOLDBERG DE, and HOLLAND JH (1990) Classifier systems and genetic algorithms. In Carbonnel J (ed), Machine Learning: Paradigms and Methods, Cambridge: MIT Press.

CHIARAMONTE F and DOSI G (1992), The microfoundations of competitiveness and their macroeconomic implications. In Foray D and Freeman C (eds.), Technology and the Wealth of Nations. London: Francis Pinter.

CHIARAMONTE F, DOSI G, and ORSENIGO L (1993) Innovative learning and institutions in the process of development: on the microfoundation of growth regimes. In Thomson R (ed.), Learning and Technological Change, New York: St.Martin's Press.

COHEN WM and LEVINTHAL DA (1990) Absorptive capacity: a new perspective on learning and innovation. Administrative Science Quarterly, 35(1), 128-152.

COHENDET P, LLERENA P, and MARENGO L (1994) Learning and organisational structure in evolutionary models of the firm, Eunetic Conference, Strasbourg, France, October 6-8.

CYERT RM and MARCH JG (1963) A Behavioural Theory of the Firm, Cambridge: Blackwell.

DOSI G (1982) Technological paradigms and technological trajectories. Research Policy, 11(3), 147-162.

DOSI G and MARENGO L (1994) Some elements of evolutionary theory of organisational competences. In England RW (ed), Evolutionary Concepts in Contemporary Economics. Ann Arbor: The University of Michigan Press.

DOSI G and ORSENIGO L (1994) Macrodynamics and microfoundations: an evolutionary perspective. In Granstrand O (ed), Economics of Technology, Amsterdam: North Holland.

ELIASSON G (1977) Competition and market processes in a simulation model of the Swedish economy. American Economic Review, 67, 277-281.

ELIASSON G (1985) The Firm and Financial Markets in the Swedish Micro-to-Macro Model. Stockholm: IUI and Almqvist and Wiksell International.

ELIASSON G (1991) Modelling the experimentally organised economy. Journal of Economic Behaviour and Organization,16, 163-182.

ELIASSON G (1994) General purpose technologies, industrial competence and economic growth, working paper, Department of Industrial Economics and Management Working Paper, The Royal Institute of Technology, Stockholm.

FAGERBERG J (1994) Technology and international differences in growth rates. Journal of Economic Literature, XXXII, 1147-1175.

FREEMAN C and PEREZ C (1988) Structural crises of adjustment: business cycles and investment behaviour. In Dosi G et al. (eds), Technical Change and Economic Theory, London: Pinter, 38-66.

GILBERT N and DORAN J (eds) (1994) Simulating Societies, London: UCL Press.

GOLDBERG DE (1989) Genetic Algorithms in Search, Optimisation and Machine Learning, Reading, Mass: Addison-Wesley.

HOLLAND JH (1976) Adaptation. In Rosen R and Snell FM (eds), Progress in Theoretical Biology IV, New York: Academic Press, 263-293.

HOLLAND JH and MILLER JH (1991) Artificial adaptive agents in economic theory. American Economic Review, 81(2), 365-370.

MERLATEAU M-P and LANGROGNET E (1994) An evolutionary model with human capital, Eunetic Conference, Strasbourg, France, October 3-6 (ERMES discussion paper 94-07).

MUELLER DC (1990) The Dynamics of Company Profits. An International Comparison. Cambridge: Cambridge University Press.

NELSON RR (1994) The coevolution of technologies and institutions. In England RW (ed.), Evolutionary Concepts in Contemporary Economics, Ann Arbor: The University of Michigan Press.

PATEL P and PAVITT K (1991) Large firms in Western Europe's technological competitiveness. In Mattson LG and Stymme B (eds.), Corporate and Industry Strategies for Europe. Amsterdam: Elsevier.

REINGANUM J (1989) The timing of innovation: research, development and diffusion. In Schmalensee R and Willig R (eds), Handbook of Industrial Organisation, vol 1, Amsterdam: North Holland.

SARGENT TJ (1993) Bounded Rationality in Macroeconomics, Oxford: Clarendon Press.

SCHUMPETER J (1934) The Theory of Economic Development, Cambridge, Mass.: Harvard University Press.

SILVERBERG G, DOSI G, and ORSENIGO L (1988) Innovation, diversity and diffusion: a self-organisation model. The Economic Journal, 98, 212-221.

SILVERBERG G and VERSPAGEN B (1994) Collective learning, innovation and growth in a boundedly rational, evolutionary world. Journal of Evolutionary Economics, 4 (3), 207-226.

SIMON HA (1976) From substantive to procedural rationality. In Latsis SJ (ed), Method and Appraisal in Economics, Cambridge: Cambridge University Press.

SOMIT A and PETERSON SA (eds) (1989) The Dynamics of Evolution. The Punctuated Equilibrium Debate in the Natural and Social Sciences. Ithaca: Cornell University Press.

TAYMAZ E (1991) MOSES on PC: Manual, Initialization and Calibration. Stockholm: IUI.

VON HIPPEL E (1988) The Sources of Innovation, New York: Oxford University Press.

