©Copyright JASSS


Simon Deichsel and Andreas Pyka (2009)

A Pragmatic Reading of Friedman's Methodological Essay and What It Tells Us for the Discussion of ABMs

Journal of Artificial Societies and Social Simulation 12 (4) 6
<https://www.jasss.org/12/4/6.html>


Received: 04-Aug-2009    Accepted: 20-Aug-2009    Published: 31-Oct-2009



* Abstract

The issue of the empirical calibration of parameter values and of the functional relationships describing the interactions between the various actors plays an important role in agent-based modelling. Agent-based models range from purely theoretical exercises focussing on the patterns in the dynamics of interaction processes to modelling frameworks which are oriented closely towards the replication of empirical cases. ABMs are classified in terms of their generality and their use of empirical data. In the literature the recommendation can be found to aim at maximising both criteria by building so-called 'abductive models'. This is almost the direct opposite of Milton Friedman's famous and provocative methodological credo 'the more significant a theory, the more unrealistic the assumptions'. Most methodologists and philosophers of science have harshly criticised Friedman's essay as inconsistent, wrong and misleading. By presenting arguments for a pragmatic reinterpretation of Friedman's essay, we will show why most of the philosophical critique misses the point. We claim that good simulations have to rely on assumptions that are adequate for the purpose in hand, and those are not necessarily the descriptively accurate ones.

Keywords:
Methodology, Agent-Based Modelling, Assumptions, Calibration

* Introduction

1.1
In the most recent literature on agent-based modelling, the issues of the empirical calibration of parameter values and of the functional relationships describing the interactions between various actors play an increasingly important role. Agent-based models (ABMs) range from purely theoretical exercises focussing on the patterns in the dynamics of interaction processes to modelling frameworks which are oriented closely towards the replication of empirical cases (e.g. history-friendly models, Malerba, Nelson, Orsenigo and Winter 1999). In this discussion Brenner and Werker (2007) propose a taxonomy that classifies ABMs in terms of their generality and their use of empirical data. In their conclusion they recommend aiming at maximising both criteria by building what they call "abductive models" when complex problems are at issue. Interestingly, they explicitly recommend that modellers "include as much data as possible when setting up the assumptions".[1]

1.2
This seems to be almost the direct opposite of Milton Friedman's famous and provocative methodological credo "the more significant a theory, the more unrealistic the assumptions".[2] Most methodologists and philosophers of science have harshly criticised Friedman's essay as inconsistent, wrong and misleading. By presenting arguments for a pragmatic reinterpretation of Friedman's essay, we will show why most of the philosophical criticism misses the point. After that, we will use these arguments to contest the claim that good simulations have to rely on descriptively adequate assumptions.

* A Pragmatic Reinterpretation of Friedman's Methodology

2.1
Even more than 55 years after its original publication, Friedman's methodological essay is still the classic among all methodological texts for economists. As Daniel Hausman has stated, it is probably the only methodological work that a fair number of economists have ever read.[3] More philosophically minded readers have usually rejected it as inconsistent, vague or false.[4] The most commonly held view reduces Friedman's essay to the point that the assumptions of a theory do not matter because all we should expect from economics is good predictions. This is a grossly misleading interpretation, as we will show.[5]

2.2
Before constructively developing our pragmatic interpretation of Friedman's methodology, we will deal with some of the best-known criticisms of his approach.

2.3
Daniel Hausman's essay "Why look under the hood" is a paradigmatic example of the critique of Friedman's essay.[6] According to Hausman, Friedman claims that the assumptions underlying a model are irrelevant and all that is relevant is predictive success.[7] Hausman tries to spot an error in this claim by providing an analogy: Suppose you want to buy a used car. Friedman would say that the only relevant test for assessing the quality of the car is checking whether the car drives safely, economically and comfortably. Looking under the hood and checking the status of the components is not necessary. Hausman claims it is obvious that no one would buy a car without looking under the hood. By analogy, we should check the assumptions of theories as well and not merely rely on predictive success as the only criterion. Hausman takes this to be an argument against Friedman's position.

2.4
This criticism is typical of a class of accusations made against Friedman's case. Such accusations, however, attack a straw man, because a more thorough reading of Friedman's essay easily shows that he does not hold the position that the assumptions of a theory are irrelevant. The error in Hausman's argumentation can be made clear by the following comparison: Hausman grants later in the text that modelling always involves simplification, which is why the assumptions do not need to be perfectly true, but can be "adequate approximations [...] for particular purposes".[8] Ironically, there is a passage in Friedman's essay that states just that: "the relevant question to ask about the "assumptions" of a theory is not whether they are descriptively "realistic," for they never are, but whether they are sufficiently good approximations for the purpose in hand."[9] This should make clear that Friedman does not consider the assumptions of a theory irrelevant. He rather aims at pointing to the deficits of the naïve demand for more realistic[10] assumptions in economics. Friedman takes the contrary position: Model building necessarily requires simplification, i.e. it is inevitably based on unrealistic assumptions. This point will be applied to our discussion of ABMs in section 3.4 et seq.

2.5
Additionally, Friedman expects good models to "explain much by little".[11] In section 3.8 et seq. we apply this point to the evaluation of ABMs.

2.6
Both of these points provide arguments why interesting models must rely on assumptions that are descriptively unrealistic and why making them more realistic does not automatically lead to better models. Modelling is different from mere abstraction; it necessarily involves construction and is not just extracting parts from reality. These are all well-accepted arguments supporting the view that it can be all right to use unrealistic abstractions. What neither the arguments above nor Friedman imply is the view that all unrealistic assumptions lead to good models.[12] His point is rather that some unrealism is necessary and that it is even an advantage if it is unrealism of the right kind.

2.7
Friedman's talk of unrealistic assumptions has provoked much contradiction and considerable confusion. It is indeed Friedman's fault that he did not formulate his thesis very carefully. He speaks in a very general way of "unrealistic assumptions", which is problematic because both terms, "unrealistic" and "assumption", can be understood in many different ways.

2.8
In a famous article Alan Musgrave criticised Friedman's position by distinguishing three kinds of assumptions and trying to show that in all three cases assumptions must be realistic rather than unrealistic.[13] For negligibility and domain assumptions Musgrave's argument seems convincing at first sight: The colours of the traders' eyes are negligible, at least in the strict economic domain of analysing the stock market. In this sense, a model that "assumes away" the influence of eye colour on stock prices can be called realistic rather than unrealistic. However, Musgrave's argument is more a play on words than a real refutation of Friedman's position. The negligibility of eye colour can be called realistic, because it really has no effect on the stock market, or unrealistic, because the traders do have coloured eyes in reality. Friedman tends to the latter view, but stresses the point that it is not the "realisticness" of the assumptions that matters, but the implications they yield. From this perspective, it does not matter whether a model that assumes away eye colour is called realistic or unrealistic, because, again, it is not the realisticness[14] of the assumptions that counts, but whether they are adequate for the modelling purpose. This is probably the main point of misunderstanding between Friedman and many critics (take Hausman (1992b) again as an example): Friedman recommends evaluating the assumptions only for specific purposes, whereas many of his critics aim at broad predictive success. The question is, however, whether broad predictive success is achievable at all. Friedman holds the view that it is not: there is no "theory of everything", so narrowing the domain of theories is always necessary.

2.9
The third class of assumptions Musgrave distinguishes he calls "heuristic assumptions"; these are employed when there is no domain in which a factor that is assumed away is negligible for the outcome. Heuristic assumptions are rules for simplification that guide researchers and tell them how to proceed if a theory does not fit the data. Musgrave states: "At any rate, his central thesis 'the more significant the theory, the more unrealistic the assumptions' is not true of 'heuristic assumptions' either."[15] It is hard to see how Musgrave wants to judge the realisticness of a heuristic assumption if he accepts that they are untestable. Heuristic assumptions are rules for simplification; they are intentionally unrealistic and they define the focus of a research programme. From this perspective, it seems hard to decide whether they are realistic before looking at the specific models (and their implications) that are based on them. With heuristic assumptions, the question is not whether they are realistic or unrealistic, but whether they are able to generate fruitful lines of research.[16] Musgrave's distinction between three classes of assumptions is certainly a brilliant addition to the assumption debate, but it does not refute Friedman's position, as it fails to show why seemingly implausible heuristic assumptions such as the rationality assumption are always nothing but an error of a theory.[17] As far as the other two types of assumptions go, Musgrave seems to attack a straw man rather than Friedman's position (at least in the pragmatic interpretation): Friedman states nowhere that making false negligibility or domain assumptions helps to generate significant theories. He says that significant theories are mostly based on unrealistic assumptions, not that any unrealistic assumption creates a significant theory! When Musgrave stresses that wrong negligibility and domain assumptions usually lead to bad theories, this can be interpreted as stressing Friedman's point that the empirical correctness of the implications is relevant for judging assumptions: As soon as wrong negligibility and domain assumptions lead to wrong predictions (which they most probably do quite fast), they are immediately ruled out. If one is serious about the ability to predict, one cannot come up with wrong negligibility or domain assumptions.[18] The case is different with heuristic assumptions, as they are neither directly comparable to reality nor do they lead directly to empirical implications. Here it still seems correct to allow for assumptions that seem prima facie implausible or unrealistic. Musgrave has done a great job of pointing out what Friedman cannot mean by unrealistic assumptions, but, as we see it, he has failed to refute him. At no point in his essay does Musgrave deliver an argument why it should be false to claim that significant theories often rely on seemingly unrealistic heuristic assumptions.[19]

2.10
After having discussed the term "assumption" at some length, let us turn to the other term "realistic". As we have seen, Friedman claims that unrealistic assumptions are not a disadvantage of a theory per se. So there must be assumptions that are unrealistic in some sense, but still good ones for the purpose in hand. The following three cases show different interpretations of "unrealistic" that meet this criterion:
  1. In a trivial sense all assumptions are wrong, because they are necessarily incomplete. It is not possible to deliver an objective and complete description of the observable world.
  2. Apart from their incompleteness, assumptions can be "unrealistic" in a different sense: As models propose hypotheses about causality, they must contain more than what is directly observable, since causal relations themselves are not observable. This is why assumptions cannot be descriptively realistic in the sense of photographic depictions of the observable world.[20]
  3. In a third sense assumptions could be called unrealistic, when they contradict common sense. This is the case e.g. with economic models that make heuristic assumptions such as constant preferences.

2.11
In none of these three cases of "unrealisticness" would Friedman see a disadvantage for economic modelling. In the first case, unrealistic assumptions are unavoidable; in the second case, going beyond observable reality is necessary for interesting models. The third case leaves open whether the "unrealistic" heuristic assumptions are fruitful or not. The fact that an assumption seems implausible, however, is no good argument against it before its implications have been explored,[21] otherwise many scientific discoveries, such as Galileo's law of falling bodies, would never have been made.

2.12
All this shows that there is indeed support for Friedman's view that the most important thing about assumptions is not their (seeming) realism, but the predictive success of the models that rely on them, because it is hard to judge the adequacy of assumptions before their implications have been checked. Realists like Uskali Mäki argue that it is reasonable to assume that the heuristic assumptions isolate factors or mechanisms that are "out there" in the world. In this view, the assumptions are strategic falsehoods that serve the purpose of isolating mechanisms from the rest of the world.[22] Friedman did not respond to this point, but a pragmatic interpretation would suggest that we should rather suspend judgement on ontological questions like this, because, due to the underdetermination of theory, it is impossible to know whether the mechanisms of models are in fact true of the world or whether they lead to successful predictions without being literally true.[23]

2.13
Even though Friedman emphasises the relevance of predictive power, he is fully conscious that prediction with a high degree of precision is unachievable in economics. When he stresses predictive success as a quality criterion, he has pattern prediction or conditional predictions in mind rather than the precise forecasting of stock prices.[24]

2.14
Besides that, Friedman stresses more than once that science is about explanation, which is why he allows for "retrodictions" as well, i.e. theories that "predict" past phenomena.[25] This is one reason why the standard instrumentalist interpretation, which assumes that Friedman would be forced to recommend theory-free correlation processing if it led to precise predictions, is inconsistent with the text: such analyses cannot offer explanations at all.

2.15
The foregoing interpretation should have made clear that Friedman is supporting a normative position that can best be characterised negatively at this stage of the argument: The wholesale demand for more realistic assumptions misses the point of many economic models and does not automatically lead to better modelling.

2.16
The central argumentative twist of Friedman's approach is accepting that economic models are made for solving certain problems and not for finding "the truth".[26] To be sure, this does not prevent economics from dealing with highly abstract and theoretical issues. If a specified problem is given, it is far easier to discuss the means for solving it than to discuss "normativity in general", as was done in classical philosophy of science, which tried to demarcate science from non-science in general or aimed at developing universal criteria for progress. The analysis of aims and means, i.e. finding the right models for a given aim, does not lead to a loss of normativity, but rather to the opposite: Only if aims are set that can be defined more precisely than in general terms such as "true", "scientific" or "progressive" can theory evaluation be discussed in a meaningful way.

2.17
In contrast to the common view, Friedman is fully aware that there are many competing aims in science and that predictive accuracy is not the only point of scientific enquiry: Depending on the problem, even a less precise theory can be preferable, e.g. if it is easier to apply. Even theories that have already been (constructively) falsified, such as Newtonian mechanics, are still used today for this reason. This shows that Friedman's argumentation is essentially an economic one when it comes to theory evaluation: He asks what we gain from a new theory or from more realistic assumptions compared to their costs relative to a given problem.[27] This economic argument will be discussed further and applied to ABMs in section 3.15 et seq.

2.18
Of course, pragmatic elements can be found in Friedman's essay only in a vague and unsystematic form, which is probably the reason why the "right" interpretation of his essay is still under debate. Our pragmatic interpretation seems convincing, however, for several reasons:[28]
  1. Friedman carefully avoids speaking of "truth". He tries to focus on solutions and explanations that work; whether they are ontologically true is not relevant for him and is probably impossible for anyone to discover. His reluctance to speak about truth also shows that he does not claim that theories cannot be true and are nothing but tools for solving problems. He simply rejects the relevance of the truth question for his project of theory evaluation.[29]
  2. Friedman's approach strictly focuses on problem solving. He is far more radical in this than Popper, who used the term "problem-orientation" as well. In contrast to Popper, Friedman allows employing already falsified theories when they are useful for a certain class of problems.
  3. Friedman's talk of comparing costs and benefits of theories fits well into the pragmatic/economic line of interpretation.
  4. Friedman does not exclusively concentrate on predictive success as a quality criterion but underlines the importance of pragmatic criteria such as explanatory power, fruitfulness and simplicity.

2.19
To sum up, Friedman pragmatically argues against judging the assumptions of a theory by their "realisticness", because it is often hard to assess this independently of the rest of the theory, and, where it can be assessed, realistic assumptions are neither a necessary nor a sufficient condition for the construction of good economic theories. They are not necessary because all theories rely on idealised rather than realistic assumptions, and they are not sufficient because more realism on the assumption side does not automatically lead to better theorising.

* Application: Descriptive Assumptions, Yes or No?

3.1
How can we use this pragmatic interpretation of Friedman's methodology for the discussion of ABMs? In the following, we employ our interpretation of Friedman to deliver a critique of simulations that rely, in our opinion, too heavily on empirical data. Following a suggestion by Moss and Edmonds we distinguish between two antagonistic simulation approaches called KISS (Keep it simple, stupid!) and KIDS (Keep it descriptive, stupid!).[30] For the sake of a more focused discussion we equate KISS with Friedman's view that the realism of the basic assumptions of a model is not a good criterion for judging it, and KIDS with the opposite view, which claims that only models that are as descriptive as possible on the assumption side are likely to generate useful scientific insight.[31]

3.2
Among the first examples of simulation in the social sciences are the checkerboard segregation models by Thomas Schelling, and they constitute a paradigmatic example of KISS.[32] A short summary is sufficient here to introduce the main idea: In Schelling's segregation models, black and white stones are distributed on a checkerboard, symbolising the black and white inhabitants of a (North American) city. A threshold share of stones in the neighbourhood that have a different colour from the stone under consideration is then defined (e.g. 70% have a different colour). If this threshold share is reached for an individual stone, the stone is said "to feel uncomfortable" and as a result is moved away from its original position to the next free spot available. The astonishing result of such a simplistic model was that even if each stone requires only 30% of the stones in its neighbourhood to have the same colour, nearly complete segregation of the colours on the checkerboard results after only a few rounds of moving stones.[33]
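To make the mechanism concrete, the following minimal sketch implements a Schelling-style neighbourhood rule in Python. It is an illustration only: the grid size, the share of free spots, the wrap-around neighbourhood and the 30% threshold are our own choices, not a reconstruction of Schelling's original hand-run checkerboard experiments.

```python
# Minimal Schelling-style segregation sketch (illustrative parameter choices only).
import random

SIZE = 20          # side length of the grid
EMPTY_RATIO = 0.1  # share of free spots
THRESHOLD = 0.3    # minimum share of same-coloured neighbours a stone tolerates

def build_grid():
    agents = ['B', 'W'] * int(SIZE * SIZE * (1 - EMPTY_RATIO) / 2)
    cells = agents + [None] * (SIZE * SIZE - len(agents))
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(grid, x, y):
    """Colours of the (up to) eight occupied neighbouring cells, wrapping at the edges."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
                if grid[nx][ny] is not None:
                    yield grid[nx][ny]

def uncomfortable(grid, x, y):
    nbs = list(neighbours(grid, x, y))
    return bool(nbs) and sum(n == grid[x][y] for n in nbs) / len(nbs) < THRESHOLD

def step(grid):
    """Move every uncomfortable stone to a randomly chosen free spot; return number moved."""
    moved = 0
    free = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is not None and uncomfortable(grid, x, y) and free:
                tx, ty = free.pop(random.randrange(len(free)))
                grid[tx][ty], grid[x][y] = grid[x][y], None
                free.append((x, y))
                moved += 1
    return moved

grid = build_grid()
for _ in range(50):
    if step(grid) == 0:  # stop once no stone wants to move
        break
```

Even with a tolerance threshold as mild as 30%, runs of such a sketch typically end in strongly clustered colours, which is the counterintuitive effect referred to above and in section 3.8.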

3.3
Now modelling in such a simplistic way is clearly against the KIDS suggestion to include as much empirical data as possible in the assumptions. However, in accordance with our interpretation of Friedman we hold that descriptively unrealistic assumptions (such as checkerboards as representations of cities) are not necessarily a disadvantage of a model. In the following, we will broaden Friedman's arguments by extending them to a defence of KISS modelling against the KIDS approach.

Simplification is necessary and inevitable.

3.4
Full realism is neither achievable nor desirable.[34] Stressing the need to include as much data as possible seems to suggest that we can come close to full realism by using empirical data where it is available and keeping the model general where it is not. However, it remains largely unclear how a model can be "left general" at all. ABMs are by definition models that assume a certain behaviour for the agents in a simulation. Leaving an aspect of behaviour completely general would imply the inclusion of theory-free aspects of agent behaviour, which means nothing more than introducing random elements into the model (or resorting to ad hoc assumptions without being aware of it).

3.5
One possibility for dealing with this issue is to calibrate the aspects where a model is "left general" by running the simulation numerous times while varying the parameters by means of a Monte Carlo approach. This implies the belief that the right assumptions can be found in a quasi-automated manner. It seems, however, far too optimistic to believe that descriptively adequate models can be generated in this way. No matter whether we use empirical data for setting up the assumptions or let them emerge from calibration, theoretical considerations heavily influence the process of modelling. First, every observation involves theoretical pre-assumptions; there is no pure observation. Even worse, translating the observations into program code, as is done in ABM modelling, involves even more decisions, which sets limits to an accurate reproduction of observable reality. If the model is fitted to observation by comparing the results of several runs with the observed patterns in reality, there is even a third layer of theory involved, namely the standard by which the model results are compared with reality: Often the results of a model need heavy interpretation or statistical analysis, so that a comparison with reality is far from straightforward.
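As an illustration of what such a calibration loop might look like, here is a minimal sketch. The run_model function, the parameter ranges and the observed series are hypothetical placeholders of our own, not parts of any particular ABM discussed in this paper.

```python
# Hypothetical Monte Carlo calibration sketch: draw parameter sets at random,
# run the model, and keep the set whose output is closest to the observed data.
import random

def run_model(params, steps=10):
    # Placeholder for an ABM run; here just a dummy process depending on two parameters.
    return [params["a"] * t + params["b"] for t in range(steps)]

observed = [0.5 * t + 1.0 for t in range(10)]  # assumed empirical series

def distance(simulated, observed):
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

best_params, best_error = None, float("inf")
for _ in range(1000):
    candidate = {"a": random.uniform(0.0, 1.0), "b": random.uniform(0.0, 2.0)}
    error = distance(run_model(candidate), observed)
    if error < best_error:
        best_params, best_error = candidate, error

print("best-fitting parameters:", best_params, "squared error:", best_error)
```

Note that every step of this loop already embodies theoretical choices: the form of run_model, the parameter ranges that are sampled and the distance measure by which "fit" is judged, which is precisely the point made above.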

3.6
All this shows that it can be misleading to demand highly descriptive assumptions, because the proposed ideal is never achievable, due to several inevitable layers of theory-ladenness. A really high level of descriptive accuracy in ABMs would require a thorough understanding of all the processes involved, so that it becomes hard to see what is left to be learned from actually building the model. In practice, it is more likely that we do not understand the processes under investigation to a high degree, which makes approximation and estimation inevitable. In a highly complex model this probably leads to compounding errors rather than to accurate predictions.[35]

3.7
Therefore, demanding a high level of descriptive accuracy provides no guideline at all for judging exactly how high this level should be. The KISS modelling approach is clearer in this respect, because it does not judge the assumptions in terms of descriptive accuracy at all. Besides, admitting to the use of "heroic" simplifications is surely more honest than making necessary simplifications while still claiming high descriptive accuracy.

Good models explain much by little

3.8
Even if we grant that it might be possible to build descriptively accurate models, there are other arguments why strategically unrealistic assumptions are sometimes preferable to highly accurate ones: Very fundamental and counterintuitive effects (such as complete segregation being caused by only a mild preference for one's own colour in the Schelling models) can easily be lost sight of if a much more realistic and less schematic model is used. The main argument of the KISS advocates points to the core of the idea of modelling: we make models in order to reduce the complexity of the real world, not to mirror it. Of course, good models do not neglect the complexities of the systems they try to represent, but striving towards realism in every aspect means nothing but the rejection of theorising, which can result in a mere collection of facts that may be descriptively highly accurate but rarely helps to explain matters.[36] This is an argument why the call for more realism cannot be sustained as a per se argument. Whether the level of abstraction is rightly chosen depends on the aim of the model. When the understanding of fundamental mechanisms is the aim, the KISS method still seems the approach of choice. Highly complex models may accurately generate output, but they do not enable scientists to understand how it comes about. This happens because complex models often develop a life of their own and produce artefacts, which can make them difficult to interpret and understand.

3.9
This point needs some substantiation, which is why we shall illustrate it in some detail by explicitly discussing an agent-based model. To make our critique as strong as possible, we chose a model of aggregate water demand that Edmonds and Moss use to underline the strengths of their KIDS modelling approach.[37] Even though this model is far from being descriptively accurate, which shows once again how difficult this "standard" is to achieve in practice, it produces dynamics in which the only observable regularities are caused by the external shocks programmed into the model, which is why it is not very helpful for explaining the observed dynamics.

3.10
Here is a short summary of the model: Agents are distributed at random on a grid. Each agent represents a household and is allocated a set of water-consuming devices in such a way that the distribution resembles empirical data from the mid-Thames region. The households are influenced in their usage of water-consuming devices by several sources: "their neighbours and particularly the neighbour most similar to themselves (for publicly observable appliances); the policy agent[38]; what they themselves did in the past; and occasionally the new kinds appliances that are available (in this case power showers, or watersaving washing machines). The individual household's demands are summed to give the aggregate demand."[39] There is no need to explain the model in full detail here; our main methodological point against this model can be made by looking at the outcome of many runs of the model starting with the same initial conditions:

Figure 1. Aggregate water demand from model runs specified in Edmonds and Moss (2004), p. 10.
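To make the quoted mechanism easier to follow, here is a loose, purely illustrative paraphrase in code. It is not the Edmonds and Moss implementation; the number of households, the imitation and compliance rules and all numerical values below are our own placeholders.

```python
# Illustrative paraphrase of the aggregation logic: households imitate the
# most similar household, occasionally follow the policy agent's advice in dry
# months, and their individual demands are summed to an aggregate series.
import random

N_HOUSEHOLDS = 100
MONTHS = 120

def most_similar_demand(demands, i):
    """Placeholder for 'the neighbour most similar to themselves': closest current demand."""
    return min((d for j, d in enumerate(demands) if j != i), key=lambda d: abs(d - demands[i]))

demands = [random.uniform(80.0, 120.0) for _ in range(N_HOUSEHOLDS)]  # litres per day
aggregate = []

for month in range(MONTHS):
    rainfall = random.uniform(0.0, 100.0)   # stand-in for monthly rainfall
    drought_advice = rainfall < 20.0        # policy agent advises saving water
    for i in range(N_HOUSEHOLDS):
        target = most_similar_demand(demands, i)
        demands[i] += 0.1 * (target - demands[i])  # partial imitation of the similar household
        if drought_advice and random.random() < 0.5:
            demands[i] *= 0.95                     # some households comply with the advice
    aggregate.append(sum(demands))
```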

3.11
Moss and Edmonds note that "significant events include the droughts of 1976 and 1990, which often show up in a (temporarily) reduced water demand, due to agents taking the advice from the policy agent to use less water. Power showers become available in early 1988 and water-saving washing machines in late 1992 which can cause a sudden increase or decrease respectively."[40]

3.12
For us, it is hard to see how this simulation solves any problem at all. The only recognisable effects result from the external shocks that are programmed into the model, and even those are hard to identify. It is even doubtful to us whether the authors achieved their own goal of setting up descriptively adequate assumptions, as there are certainly more factors influencing people's demand for water than those integrated into the model. Additionally, the runs depicted above all start from the same specification and yet diverge widely in their outcomes. It is hard to learn anything from the model, as anything is possible, from very low water demand to very high, and from great changes in water demand to nearly constant consumption. Edmonds and Moss write that the model is made to capture the range of water demand responses.[41] Indeed they show that there is a very wide range, but they fail to explain how the different results come about. By trying to create a descriptively adequate model, Moss and Edmonds arrive at "a complex model whose behaviour is not fully understood"[42] and which therefore neither explains matters nor succeeds in being descriptively adequate on both the assumption and the implication side.

3.13
For us, it seems hard to learn anything from models whose dynamics are not fully understood. The advice given by KIDS advocates to build models that are as descriptively accurate as possible can lead to models that can neither predict nor explain in any meaningful sense.

3.14
The ability to explain much by little is therefore not only a pragmatic value, but has epistemic relevance as well. Therefore, the KISS approach has an advantage over KIDS in this respect.

Simplicity is an economic value

3.15
Even from a much more down-to-earth point of view, there are advantages to keeping the assumptions simple instead of trying to make them descriptively adequate. Simpler models are not only easier to understand, they are more tractable as well: errors are easier to trace, the model is easier and cheaper to validate, it is probably easier to adapt to new situations, and it leads more quickly to solutions.[43]

3.16
From an economic/pragmatic point of view, there is no such thing as truth; models are tools for solving problems. Seen like this, simple models are clearly preferable to complex ones if (and only if!) they achieve the same quality of solution for a given problem. Advocates of highly descriptive models thus bear the burden of explaining the advantages of their models in terms of predictive and explanatory power once descriptive accuracy or truth are rejected as valid criteria by a pragmatic methodology like Friedman's.

3.17
It is highly important to note that this does not mean that all models should be as simple as possible. They clearly should not. As we have stressed throughout the paper, models need the right level of complexity for the problems they tackle. While simplicity is a value for models, it is surely secondary compared to the model's ability to contribute to a better understanding of the phenomena under scrutiny. Nonetheless, taking the value of simplicity seriously means that starting with descriptive accuracy as the first criterion is the wrong way to build helpful models.

* Conclusions

4.1
  1. The quarrel about the truth of assumptions is highly misleading. No theory and no model rests on realistic assumptions. Modelling is always centred on a specific problem. Whether the right level of abstraction was chosen can be properly assessed only with respect to the problem one wants to solve. For example, neoclassical equilibrium theorising should be criticised along these lines: Its abstractions assume away the interesting points of many problems.
  2. When models aim at predictive accuracy, more refined assumptions are probably needed than in models which aim at reproducing stylised facts. The advocates of a more descriptive modelling approach are right to point at the difficulty of comparing simple models to reality. However, this task does not necessarily become easier when models are based to a high degree on empirical data.
  3. For economic reasons, it is more useful to start by building a simple model and to refine it by increasing its complexity if it is not successful.
  4. Theories do not emerge out of empirical description. Modelling is necessarily a creative process that involves construction and hypothesising. Therefore, theoretical elements must be included in ABMs as well, otherwise they are not likely to improve our understanding of the way the world works.

4.2
In accordance with the authors who emphasise a high empirical orientation, we hold that high generality is incompatible with models that make massive use of empirical data. We agree that abduction is the best way to characterise model building, but we contest the view that this requires the modeller to include as much data as possible when setting up the first version of the model.

4.3
As Brenner and Werker repeatedly underline, one advantage of simulations is the possibility to go back and forth easily between assumptions and their implications.[44] This is exactly the point of abductive modelling, but it does not lead to the conclusion that one should start with a highly complex model; in our opinion, rather the opposite: one should start with a simple model and proceed by refining and calibrating it, going back and forth between assumptions and implications.[45] In this way, models are based on simple and abstract assumptions, but are nonetheless robust to various changes in parameter settings. These models are best justified by a kind of reflective equilibrium: the assumptions are justified by balancing their prima facie plausibility and simplicity on the one hand against the implications they yield on the other. If such an equilibrium between simplicity and complexity is reached, the resulting model can surely count as the best explanation for the phenomena under scrutiny, which allows for the abductive step of accepting it. Abduction, in the words of Charles Sanders Peirce as cited by Brenner and Werker, means "studying the facts and devising a theory to explain them."[46]

4.4
This shows that explanation by theory is the main point of abduction. Abduction does not imply using assumptions that are as realistic as possible, but merely studying the data in as much detail as possible in order to arrive at well-established stylised facts against which one's theories can be checked.[47] Using the empirical data everywhere it is available and keeping the model maximally general where it is not will mostly not lead to models that are simple enough to offer mechanisms that can be called theoretical explanations of social processes. Trying to build models out of empirical descriptions is to use the data on the wrong side. As argued above, the usage of well-trained theoretical intuitions seems unavoidable for building models that offer explanations. Accepting this, the main point about assumptions is not whether they are realistic or unrealistic, but whether they are unrealistic in the right way.[48]

4.5
Brenner and Werker surely agree with us on all this. For our part, we completely agree with the taxonomy they offer.[49] The relevant difference between our positions seems to lie in the starting point of abductive modelling: While Brenner and Werker recommend starting with descriptive assumptions, we recommend starting with simple assumptions and hence entering the process of going back and forth between assumptions and implications at a much earlier stage of model building. We also believe that this is the essential message of Friedman's methodological essay: Assumptions should not be judged on their own, but by looking at their implications as early as possible, making model building a process of continuous revision.


* Notes

1 Brenner and Werker (2007), p.242.

2 Friedman (1953), p.14.

3 See Hausman (1992a), p.162.

4 We do not deliver an introductory summary of Friedman's essay as it is hard to summarize it in a neutral way due to the huge amount of different interpretations available in the literature. The pragmatic interpretation which we are going to develop in this text is therefore not the only one consistent with the text. Of course, in our view, it is the one that fits best Friedman's general point, even if it has to live with some inconsistencies.

5 See Schliesser (2005), Schröder (2004) and Hoover (2004) for some other recent interpretations of Friedman's classic that agree on this point.

6 See Hausman (1992b), p.70-73.

7 See Hausman (1992b), p.71.

8 Hausman (1992b), p.72.

9 Friedman (1953), p.15.

10 Note that Friedman equates "realistic" assumptions with descriptively accurate ones. His thesis is therefore not an ontological but a methodological one. We stick to this notion of realism throughout the paper.

11 Friedman (1953), p.14.

12 Sometimes it seems that this view is attributed to Friedman, even though it is obviously absurd (see e.g. Samuelson (1963), p.233). Such critics seem to forget that Friedman accepts only those assumptions that lead to correct predictions. Besides, it is simply a logical error to infer from Friedman's "the more significant a theory, the more unrealistic the assumptions" the statement that unrealistic assumptions imply significant theories.

13 See Musgrave (1981).

14 "Realisticness" is a term introduced by Uskali Mäki in order to distinguish descriptively accurate assumptions from the philosophical position of realism. See e.g. Mäki (1998). This paper equates "realistic assumptions" with "empirically adequate assumptions" and hence avoids the philosophical discussion about realism.

15 Musgrave (1981), p.385.

16 See Mäki (2000), p.326.

17 Of course, it is sometimes not fruitful to assume rational behaviour. E.g. modelling situations that involve decisions under uncertainty or trying to analyse innovation processes that require creativity are not likely to be fruitfully reconstructed by rational choice models. However, as there are situations that can be reconstructed by rational choice models, we cannot judge this methodological principle as such on grounds of its (im)plausibility, but only the aptness of a concrete application of the principle.

18 Marcel Boumans adds that Friedman encourages empirically exploring the domain of a negligibility assumption. See Boumans (2003), p.320.

19 Musgrave takes Newton's reduction of planets to mass points as an example for a heuristic assumption, but instead of refuting Friedman's claim this seems rather to confirm his view that significant theories are often based on unrealistic heuristic assumptions.

20 This statement does not touch the philosophical position of scientific realism, which is a theory about the truth status of causal connections in scientific theories. The above argument is headed against a more naïve form of realism, which identifies realism with a one-to-one correspondence to observation.

21 The rational choice assumption often leads to false implications and cannot be fruitfully adapted to some economic problem fields such as innovation economics, but as there are many problems that can be tackled (if only to a certain degree) by the (implausible) rationality assumption (see e.g. the works of Gary Becker), it should be clear that this assumption cannot be rejected beforehand by calling it unrealistic.

22 See e.g. Mäki (2008), p.14.

23 The current paper largely ignores the philosophical discussion about realism and anti-realism because this can be separated completely from questions of theory evaluation.

24 See Friedman (1953), p.40.

25 See Friedman (1953), p.9.

26 The essential difference between Hausman's and Friedman's position boils down to this point: Hausman favours general models with broad predictive success whereas Friedman would argue that this is neither feasible nor appropriate.

27 See Friedman (1953), p.17.

28 See Hirsch and DeMarchi (1990) for a detailed analysis of the pragmatic elements in Friedman's methodology. In particular, they argue for the thesis that Friedman was heavily influenced by John Dewey's views. Hence, the term "pragmatic" is to be understood in Dewey's sense in this text. However, it is not to be understood in the sense of a pragmatic theory of truth, but is meant to shift the focus to the truth-independent usefulness of theories.

29 In that sense Friedman's essay is completely beyond the traditional distinction between realism (theories can be true) and instrumentalism (theories are neither true nor false).

30 See Edmonds and Moss (2004).

31 The distinction between KISS and KIDS does not deny that there may be cases in which descriptively adequate assumptions are very simple. In these cases, there is no dissent between advocates of KISS and KIDS. In the vast majority of cases, however, there is a huge difference between building models that are as descriptively accurate as possible and building models that are as simple as possible.

32 See Schelling (1971) for the locus classicus.

33 This result is robust under various changes, see e.g. Flache and Hegselmann (2001).

34 Note that even the "toughest" sciences like physics make heavy use of idealisations or unrealistic assumptions. Just think of planets as "mass points" or laws that apply only in a vacuum. Without radical simplification, many of the basic laws of physics would never have been found.

35 Note, however, that we do not draw the conclusion that simple models are more realistic than complex ones. For advocates of the KISS modelling approach it is crucially important to keep in mind how incomplete such models are and how difficult it is to transfer their results into the real world.

36 The term "explanation" is itself under philosophical discussion. The covering law model of explanation is generally considered out-dated due to several difficulties. We do not enter in this philosophical discussion here, but stick to a commonsense notion of explanation.

37 See Edmonds and Moss (2004).

38 The policy agent suggests a lower usage of water if there is less than a critical amount of rain during a month. This influences the agents to a certain degree.

39 Edmonds and Moss (2004), p. 8.

40 Edmonds and Moss (2004), p.10.

41 See Edmonds and Moss (2004), p.10.

42 Edmonds and Moss (2004), p.8.

43 See Chwif, Barretto and Paul (2000), p.452.

44 See e.g. Brenner and Werker (2007), p.230.

45 A nice example of this can be found in the literature on the evolution of cooperation, which started with a simple tit-for-tat model that was thereafter refined in various respects. See Ball (2004), chapters 17-18 for an overview.

46 See Peirce (1867), 5, p. 145. Again, we suspend judgement on whether accepting an explanation implies accepting its (ontological) truth.

47 See Kaldor (1978), p. 2.

48 See Kaldor (1978), p.202.

49 See Brenner and Werker (2007), p.233.


* References

BALL, Philip (2004): Critical Mass: How One Thing Leads to Another; London, Random House, Edition of 2004.

BOUMANS, Marcel (2003): How to Design Galilean Fall Experiments in Economics; in: Philosophy of Science 70, p. 308-329.

BRENNER, Thomas and Werker, Claudia (2007): A Taxonomy of Inference in Simulation Models; in: Computational Economics 30, p. 227-244.

CHWIF, Leonard, Barretto, Marcos Ribeiro Pereira and Paul, Ray J. (2000): On simulation model complexity; in: Proceedings of the 32nd Conference on Winter Simulation, p. 449-455.

EDMONDS, Bruce and Moss, Scott J. (2004): From KISS to KIDS - an 'anti-simplistic' modelling approach; in: P. Davidsson et al. (eds.): Multi Agent Based Simulation; Springer, Lecture Notes in Artificial Intelligence, 3415: p.130-144. http://bruce.edmonds.name/kiss2kids/kiss2kids.pdf, Download Date: 23.9.2008.

FLACHE, Andreas and Hegselmann, Rainer (2001): Do Irregular Grids make a Difference? Relaxing the Spatial Regularity Assumption in Cellular Models of Social Dynamics; in: Journal of Artificial Societies and Social Simulation vol. 4 no. 4, https://www.jasss.org/4/4/6.html, Download Date: 10.10.2008.

FRIEDMAN, Milton (1953): "The Methodology of Positive Economics"; in: Friedman, Milton (1953): Essays in Positive Economics, University of Chicago Press, p. 3-43.

HAUSMAN, Daniel M. (1992a): The Inexact and Separate Science of Economics; Cambridge University Press, Edition of 1992.

HAUSMAN, Daniel M. (1992b): Essays on Philosophy and Economic Methodology; Cambridge University Press, Edition of 1992.

HIRSCH, Abraham and DeMarchi, Neil (1990): Milton Friedman - Economics in Theory and Practice; Harvester Wheatsheaf, Edition of 1990.

HOOVER, Kevin D. (2004): Milton Friedman's Stance: The Methodology of Causal Realism; in: Working Paper University of California, Davis 06-6 http://www.econ.ucdavis.edu/working_papers/06-6.pdf, Download Date: 12.02.2008.

KALDOR, Nicholas (1978): Further Essays on Economic Theory; Gerald Duckworth & Co Ltd., Edition of 1981.

MÄKI, Uskali (1998): "Entry 'As If'"; in: Davis, John B. et al. (1998): The Handbook of Economic Methodology, Cheltenham Northampton: p. 25-27.

MÄKI, Uskali (2000): Kinds of Assumptions and Their Truth: Shaking an Untwisted F-Twist; in: Kyklos 53/3, p. 317-335.

MÄKI, Uskali (2008): Realistic realism about unrealistic models (to appear in the Oxford Handbook of the Philosophy of Economics); in: Personal Homepage http://www.helsinki.fi/filosofia/tint/maki/materials/MyPhilosophyAlabama8b.pdf, Download Date: 14.11.2008.

MALERBA, F., Nelson, R.R., Orsenigo, L. and Winter, S.G. (1999): History friendly models of industry evolution: the computer industry; in: Industrial and Corporate Change 8, p. 3-40.

MUSGRAVE, Alan (1981): 'Unreal Assumptions' in Economic Theory: The F-Twist Untwisted; in: Kyklos Vol.34/3, p. 377-387.

PEIRCE, Charles S. (1867): Collected Papers of Charles Sanders Peirce; Harvard University Press, Edition of 1965.

SAMUELSON, Paul (1963): Problems of Methodology - Discussion; in: American Economic Review 54, p. 232-236.

SCHELLING, Thomas C. (1971): Dynamic Models of Segregation; in: Journal of Mathematical Sociology 1, p. 143-186.

SCHLIESSER, Eric (2005): Galilean Reflections on Milton Friedman's "Methodology of Positive Economics", with Thoughts on Vernon Smith's "Economics in the Laboratory"; in: Philosophy of the Social Sciences Vol. 35, p. 50-74.

SCHRÖDER, Guido (2004): "Zwischen Instrumentalismus und kritischem Rationalismus?—Milton Friedmans Methodologie als Basis einer Ökonomik der Wissenschaftstheorie"; in: Pies, Ingo/Leschke, Martin (2004): Milton Friedmans Liberalismus, Tübingen: Mohr Siebeck, p. 169-201.
