©Copyright JASSS


Wendelin Reich (2004)

Reasoning About Other Agents: a Plea for Logic-Based Methods

Journal of Artificial Societies and Social Simulation vol. 7, no. 4

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 28-Dec-2003    Accepted: 05-Mar-2004    Published: 31-Oct-2004

* Abstract

Formal logic has become an invaluable tool for research on multi-agent systems, but it plays a minor role in the more applied field of agent-based social simulation (ABSS). We argue that logical languages are particularly useful for representing social meta-reasoning, that is, agents' reasoning about the reasoning of other agents. After arguing that social meta-reasoning is a frequent and important social phenomenon, we present a set of general criteria (functional completeness, understandability, changeability, and implementability/executability) to compare logic to two alternative formal methods: black box techniques (e.g., neural networks) and decision-theoretical models (e.g., game theory). We then argue that in terms of functional completeness, understandability and changeability, logical representations of social meta-reasoning compare favorably to these two alternatives.

Keywords: Formal Logic, Social Interaction, Social Simulation, Agents, Social Meta-Reasoning, Reasoning About Reasoning

* Introduction

In simulation-oriented modeling, it is necessary to distinguish models from modeling techniques. In the classical sense, a model is an abstract and simplified or idealized description of a subject matter. A modeling technique, on the other hand, is a specific formalism that allows a model to be implemented on currently available computing platforms. In agent-based social simulation (ABSS), and probably any other research field that makes use of simulation, the choice of a specific modeling technique is (ideally) subordinate to the choice of a specific model. However, from a practical point of view, the fact that many mathematically well-understood techniques (examples being formal logic, game theory, neural networks and genetic algorithms) are now implemented and available for ABSS makes it often justifiable to begin the development of an adequate model with the selection of a technique. In general, such a selection involves a host of theoretical, methodological and practical stipulations and should be made with great care.

This paper presents arguments in favor of selecting logic-based techniques to represent and to execute a well-defined subclass of agent-based models, specifically, models comprising agents that perform social meta-reasoning. Whereas "meta-reasoning" is already a widely used term for reasoning about reasoning (or second-order reasoning), we use the term "social meta-reasoning" to denote the special case of reasoning about other agents' reasoning. Social-psychological and psycholinguistic studies suggest that this form of reasoning occurs in virtually all instances of social interaction (Gumperz 1995; Levinson 2000; Reich 2003). We will show in Section 2 that it can be disregarded by some but not all models of interactions between autonomous agents. Specifically, whenever it is implausible to assume that the behavior of agents is completely determined by the objective and commonly known structure of the social situation, model agents should be allowed to reason about each other's reasoning.

Social meta-reasoning has been discussed and applied extensively in previous work on ABSS, multi-agent systems, social coordination, knowledge representation and user modeling. The aim of Sections 3 - 6 is to defend the use of logic for modeling social meta-reasoning by means of conceptual and methodological considerations rather than by a literature review. Section 3 introduces a set of general criteria according to which different methods of representing social meta-reasoning may be evaluated and compared. Sections 4 - 6 use these criteria in order to compare three candidate techniques. We shall argue that neither black box techniques (Section 4) nor decision-theoretical approaches (Section 5) can represent social meta-reasoning adequately in the general case, whereas logical calculi (Section 6) are adequate for this purpose. Embedded in ABSS-systems and examined in a series of careful and systematic simulation experiments (Macy and Willer 2002: p. 162f), such logical representations do not need to be instances of "armchair theorizing" as criticized by Edmonds (2004b, see also Dignum and Sonenberg 2004; Edmonds 2004a).

* Social meta-reasoning in social interaction

For the following, "social interaction" shall denote a social situation comprising two or more agents where the selection as well as the outcome of any behavior chosen by any agent depends potentially on the previous and expected future behavior performed by all agents in the same situation. It is safe to say that in real life, most instances of social interaction involve communication, that is, production and understanding of verbal or written messages. As linguists and philosophers of language have argued forcefully (Bach 2001; Grice 1989; Gumperz 1982), both message production and understanding require agents to ascribe each other mental states such as intentions and beliefs. For instance, in order to understand a verbal message, a hearer must be able to infer what the speaker meant by saying it, in other words, what beliefs he intended the hearer to acquire in response to the message. Although this involves second-order reasoning, it is a task at which human beings are very good (Harris 1996) and which does not appear to strain their cognitive resources to a relevant extent. Thus, from a social-psychological or psycholinguistic point of view, social meta-reasoning is both frequent and normal.

Since modeling is based on abstraction, hence selection, the observation that something is a property of reality does not imply that it must be represented in a model. However, an example involving a social situation with which practitioners of social simulation tend to be familiar - strategic negotiation between rational agents - may serve to illustrate that social meta-reasoning is sometimes the key ordering principle of an interactive exchange and, therefore, cannot always be ignored by modelers. Such strategic negotiations constitute a relatively simple but well-understood form of social interaction, and all three types of modeling techniques that are considered in this paper have been applied to them (e.g., Carpenter 2000; Faratin et al. 2002; Maudet 2003; Thoyer et al. 2001). Compare the following two made-up transcripts of verbal behavior during a (successful) price negotiation between a professional buyer (e.g., a jeweler or a pawnbroker; agent 2) and a lay seller of a moderately valuable, non-scarce good (agent 1).

Transcript 1
Agent 1: "(My price is) $500"
Agent 2: "(I offer) $300"
Agent 1: "(My price is) $350"
Agent 2: "(I offer) $300"
Agent 1: "(My price is) $300"
Transcript 2
Agent 1: "(My price is) $500"
Agent 2: "(I offer) $300"
Agent 1: "(My price is) $450"
Agent 2: "(I offer) $320"
Agent 1: "(My price is) $320"

Both agents begin both interactions with identical offers, and in both cases, they eventually agree on a price. However, the bidding history and the final agreement differ slightly. If we assume that the two situations were informationally identical in all other respects, it becomes impossible to explain this difference in terms of the agents' rational inferences alone. If, however, we supplement this information with adequate hypotheses about the social meta-reasoning carried out by the agents in the course of the interaction, the difference between the two situations can be explained causally in terms of fully rational inferences.

For each transcript, we propose an adequate (that is, consistent and explanatorily efficient but nonetheless hypothetical) set of such hypotheses. Without specifying a technique that implements them (later sections will consider three different alternatives), it is safe to say that the following two sets of hypotheses about the agents' social meta-reasoning are consistent with the transcripts.

Hypotheses for Transcript 1
  • Agent 1 believes that the good is worth not more than $350. She takes agent 2's initial offer as evidence that he believes the same and readjusts her initial price drastically, unaware that this is going to reveal her low valuation to him. Finally, since he does not give in, she accepts his offer.
  • Agent 2 believes that the good is worth not more than $350. He takes agent 1's drastic readjustment of her price as evidence that she believes this, too, and that she believes that he believes that. This leads him (correctly) to expect that she will eventually accept his initial offer.
Hypotheses for Transcript 2
  • Agent 1 believes that the good is worth at least $450. She takes agent 2's initial offer as evidence that he believes that it is worth much less and readjusts her initial price hesitatingly. Finally, she sees his moderately adjusted offer as a nonrecurring display of goodwill and accepts it.
  • Agent 2 believes that the good is worth not more than $350. He takes agent 1's moderate readjustment of her price as evidence that she believes that it has a much higher value. This leads him to make a small concession.

According to this explanation, the original reason that the two transcripts differ from turn three onward is that agent 1 assesses the good's value higher in the second situation. Both agents carry out social meta-reasoning in response to observable behavior and both agents assume that the respective other carries it out (e.g., agent 1 when she interprets agent 2's small adjustment as "goodwill"; agent 2 when he reconstructs agent 1's reconstruction of his own subjective valuation of the good). Thus, the initial difference in the covert cognitive configuration of one agent eventually affects the behavior of both agents. The overall result of the interaction is highly ordered because the simple fact that the agents carry out social meta-reasoning actually constrains their ensuing behavior in a way that renders the respective other's attempts at guessing their mental states (partially) adequate (Reich 2003, chapter 3). For instance, in Transcript 1 and according to the hypotheses stated above, agent 1 sees agent 2's low initial offer as an indicator of agent 2's low assessment of the good. Because this leads her to readjust her price drastically, agent 2 is right in assuming that this readjustment reveals that she herself assesses the value of the good as low. Far from being unique to strategic price negotiations, this connection between social meta-reasoning and (partially) expectable self-constrainment is a property of virtually all real-world instances of social interaction that involve communication (Reich 2003, chapter 1.4).

As it has become obvious that social meta-reasoning cannot always be disregarded in social interaction models and ABSS, we need to ask under what conditions it can be disregarded. If we apply the pragmatic criterion that a property of a simulation model can be removed if it is unable to produce variations in the output of the modeling process, the minimal set of simulation models that can ignore social meta-reasoning may be constructed quite easily. It simply includes all models in which (1) the rationality/irrationality of any possible course of behavior depends exclusively on those structural properties of the situation that are common knowledge (mutually known, mutually known to be mutually known, and so on, see Fagin et al. 1995: p. 23ff) and in which (2) the criterion of rationality determines a unique course of behavior at each agent's turn. Such models are built around the assumption that the reasoning process of other agents is completely determined by the objective structure of the situation. In other words, agents' reasoning never renders their own behavior contingent. This means that observing agents cannot improve their own behavior by reasoning about the way in which observed agents reason about their own behavior. Rather, by observing the structural properties of the situation, they can infer directly what constitutes a rational course of behavior for these agents and adjust their own behavior accordingly.

It is obvious that a social situation where agents cannot improve their own behavior by carrying out social meta-reasoning can be represented by much simpler models than a situation where they can improve their behavior in this way. Possibly because of this relative simplicity, social situations where social meta-reasoning is unnecessary have received a disproportionate amount of attention in fields such as game theory or exchange theory. In the case of game theory, this bias is reflected by the fact that most research focuses on games with complete information (which, in our terminology, are social situations where social meta-reasoning is unnecessary) rather than games with incomplete information (which are situations where social meta-reasoning is unavoidable; game theorists are well aware of this, see, for example, the theory of "signaling games", Owen 1995: p. 119ff). The correspondence between games with complete information and social situations where social meta-reasoning is unnecessary can be illustrated by the situation modeled by the prisoner's dilemma, which leaves rational participants no choice but to defect and, therefore, determines their behavior entirely. However, many real-world situations do not constrain participants' strategies and behavior to this degree. In these situations, rational agents have no choice but to reason about the (potentially contingent) reasoning of co-agents.
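The point about the prisoner's dilemma can be made concrete in a few lines of Python. The payoff values below are the standard textbook ones, chosen by us for illustration; the code merely shows that the commonly known payoff structure alone fixes rational behavior:

```python
# Row player's payoffs in a standard prisoner's dilemma ("C" = cooperate,
# "D" = defect); the values are conventional, illustrative choices.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(opponent_move: str) -> str:
    """Return the move maximizing the row player's payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, opponent_move)])

# Defection is a best response to either move, so common knowledge of the
# payoff structure determines behavior entirely -- no social meta-reasoning
# about the opponent's contingent reasoning is required.
assert best_response("C") == "D" and best_response("D") == "D"
```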

* Criteria for the comparison of representing methods

In order to compare different ways of formalizing social meta-reasoning in agent-based models and simulations, we need a set of criteria that tells us along which dimensions the comparison should proceed. Because the techniques we consider later on are vastly different, we will only consider general criteria, letting criteria that pertain specifically to social meta-reasoning evolve from the discussion. For the sake of technical comparability, we assume that all agent models we have to consider fit into the concept of an agent function (Russell and Norvig 2003), that is, a function that transforms the current state of the environment plus a history of previous internal states into a course of behavior plus a new internal state. Thus, for the following, an agent is essentially a particular input-output model.
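To fix ideas, the agent-function concept can be sketched in a few lines of Python. The names and the toy echo behavior are ours, purely illustrative; the point is only the input-output signature: current percept plus a history of internal states in, course of behavior plus a new internal state out:

```python
from typing import List, Tuple

# Illustrative type aliases for the agent-function signature.
Percept = str
Action = str
State = dict

def agent_function(percept: Percept, history: List[State]) -> Tuple[Action, State]:
    """Toy agent: responds to the percept and records it in its new internal state."""
    new_state = {"last_percept": percept, "turn": len(history)}
    action = f"respond-to:{percept}"
    return action, new_state

# Driving the agent through two turns of an interaction:
history: List[State] = []
action, state = agent_function("offer $300", history)
history.append(state)
action2, state2 = agent_function("offer $320", history)
```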

Most likely, social meta-reasoning will be only one of several components of such a model. From a conceptual point of view, it will comprise the mechanism that serves both to represent and to redetermine the contingency caused by the assumption that the behavior of co-agents is co-determined by their unobservable mental states (e.g., existing beliefs) and inferences. In the history of social-scientific modeling, probably the most common, most successful and best-understood way of representing externally redeterminable mental contingency is to assume the behavior of agents to be rational. A rational co-agent can be attributed cognitive autonomy because the rules according to which he will make use of it are known by the observing agent or modeler. Therefore, we will expect each of the techniques under scrutiny either to define what makes the behavior of co-agents rational or to provide a conceptually equivalent alternative to the concept of rationality as such.

In the development of a formal model, the choice of a particular technique or calculus commits the modeler to several theoretically, methodologically and practically momentous decisions. Already in the fundamental case of an input-output model, we can distinguish at least six areas where the choice of a specific formal approach imposes constraints on the model.

(1) Parametric input sensitivity. Values that the model does or does not process in the form of parameters sensitize or desensitize it to variations in its application or execution context. The exact mathematical form of the parameters defines the shape, the range and the complexity these variations may take.

(2) Range and mathematical structure of output values. The internal logic of the model and the specific details of its implementation constrain the range of, and the possible or enforced mathematical relationships between, its output values. In principle, it is always possible to construct input-output models that are functionally equivalent (i.e., mapping identical input to identical output) yet based on different transformative mechanisms. The remaining four criteria provide guidance in the process of selecting between such models.

(3*) Degree of realism. The implementation details of the transformative mechanism defined by a model represent the model's ontology (its "worldview"). It is sometimes argued that, other things being equal, a tight isomorphism between the transformative mechanism and the modeled domain is preferable. In practice, two alternative techniques almost never fulfill the other-things-being-equal-clause; therefore, we consider this criterion too weak to warrant application in subsequent sections. Specifically, as some readers may expect that we will criticize non-logic-based representations of social meta-reasoning for their "lack of realism", we state beforehand that this is not the case. From a psychological point of view, such criticism would be debatable at any rate.

(3) Understandability. As an alternative to evaluating the degree of realism built into a model, it does seem reasonable to assess whether the transformative mechanism of the model is easily or at least generally understandable by the modeler. Whatever else "understanding" a model's transformative mechanism may mean, it is clear that it involves being able to explain in broad outline how variations in the modeled domain translate into variations in the output of the model. Thus, a model is only understandable if it allows us to establish a (possibly complex or far-fetched) isomorphism between model and modeled domain. This means that the criterion of understandability is actually a subjectivized form of the criterion of realism. However, in contrast to this latter criterion, understandability affects directly how suitable the model is for explaining empirical observations with reference to the modeled domain.

(4) Changeability. As mentioned, any model is arbitrarily changeable in a trivial sense because it is always possible to construct an alternative but functionally equivalent model. Nonetheless, two equivalent models may well diverge with respect to the ease with which the modeler can make smaller changes to the model's internal structure in order to better understand this structure, in order to experiment with the model's behavior, in order to correct the model or in order to adapt it to variations in (or new information about) the modeled domain. Some models may support certain types of changes whereas alternative models support other types - in short, changeability is not a property of the model as such but relative to the specific adaptation that needs to be carried out. Thus, for the comparisons that follow, we will keep in mind that in order to compare two models with respect to their changeability, one must have an idea about what changes are most likely to occur.

(5) Implementability and executability. In ABSS, we generally want to implement the model in an available programming language or modeling environment and run it through a series of simulations - both for demonstrating the correctness of the model and for experimenting with its behavior (Macy and Willer 2002: p. 149). Although modern personal computers are powerful enough, in practice, to fulfill virtually all the computational needs arising in ABSS, some formal representations are too complex to warrant tailor-made implementations of suitable software. Similarly, off-the-peg programs are usually not available for every technique or calculus. Therefore, available or easily implementable software is a pragmatic advantage that can be difficult to disregard when choosing among alternative techniques.

* Alternative 1: Black box techniques

We begin our comparison by considering approaches to social meta-reasoning that represent the agent's model of another reasoner as a black box, that is, as an essentially opaque process of transforming stimuli into responses. Here, the other reasoner is treated as an autonomous entity whose behavior is initially contingent but can be predicted by looking at its past record of transforming perceptions into behavior. Perhaps the main reason for using black box approaches is that they support the application of sophisticated probabilistic methods for pattern recognition - most importantly, neural networks. Bayesian networks and genetic algorithms can also be used for this purpose, but such applications seem to be rare (cf. Huber et al. 1994, who apply Bayesian networks[1]). All these methods share the property of focusing on learning instead of representing. In the case of neural networks, a set of elaborate and (more or less) well-understood learning algorithms is available (Haykin 1999). For this reason, but also because of their high practical relevance, we will focus on neural networks in the following discussion.

A few examples may show how neural networks are used to model and simulate social meta-reasoning. Billings et al. (2002) have implemented a poker-playing agent that attempts, among other things, to predict the opponent's next course of behavior. For this component of the agent, they use a standard feed-forward neural network with a single hidden layer. Given a set of parameters about the current game context, the network produces a probability distribution over the types of behavior that the opponent could choose in his next move. The authors report that after being trained on several hundred hands played by a relatively weak opponent, the agent is able to play fairly well against this opponent. Suryadi and Gmytrasiewicz (1999) discuss social meta-reasoning as applied to a set of agents that cooperate in order to attack a set of objects (e.g., the agents represent predators or anti-air defense units). Each agent makes assumptions about which objects are likely to be attacked by co-agents, thus deducing what objects fall under its own responsibility. The authors utilize an influence diagram and a neural network in order to represent each agent's model of other agents. The neural network is trained on information about co-agents' history of physical movements and attacks in order to be able to predict this behavior in future turns. Edmonds and Moss (2001) apply social meta-reasoning for a larger number of agents and more schematically. Rather than devising a separate component for this task, they represent each agent in a multi-agent system by a neural network and parameterize it with assumptions about the personalities of co-agents. These assumptions are generated statically by evaluating the past behavior of co-agents along dimensions such as "trustworthiness" or "reliability". Thus, "reasoning" about co-agents' reasoning occurs only insofar as it is implied by an agent's (opaque) selection of a strategy based on such assumptions.
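As a rough sketch of the kind of predictor used by Billings et al., the following Python code runs a forward pass through a single-hidden-layer feed-forward network that maps two context features to a probability distribution over three opponent moves. The weights are made up and untrained, so only the mechanics, not the predictions, are meaningful:

```python
import math

# Made-up, untrained weights: 2 hidden units over 2 input features,
# 3 output units (one per candidate opponent move).
W1 = [[0.8, -0.5], [0.2, 0.9]]
W2 = [[1.0, -1.0], [0.3, 0.7], [-0.6, 0.4]]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict(features):
    """Forward pass: features -> sigmoid hidden layer -> softmax over moves."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features))) for row in W1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    # Probabilities for the opponent's next move, e.g. ["fold", "call", "raise"].
    return [e / total for e in exps]
```

Training would then consist of adjusting W1 and W2 (e.g., by backpropagation) against recorded opponent behavior, which is precisely where the demand for large amounts of training data discussed below arises.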

Concrete applications such as these cannot give us a full idea of the general fitness of black box techniques for modeling social meta-reasoning. To evaluate this fitness somewhat more systematically, we will apply the five criteria of Section 3. At least in the case of neural networks, neither parametric input sensitivity nor range and mathematical structure of output values seem principally limited. As mentioned before, we only consider agents that can be squeezed into the concept of an agent function, and neural networks are formally suitable to represent any such function.[2] In fact, because feedback loops can be used to make neural networks stateful (Haykin 1999: p. 732ff), the possibilities of neural network modeling go effectively beyond the concept of an agent function. Furthermore, the implementability of models based on neural networks is, at least in the general case, excellent. Single neurons (the building blocks of neural networks) can be represented by simple functions or procedures, and large and complex networks of neurons can often be assembled automatically from these building blocks. Likewise, many if not most learning algorithms (no matter whether efficient or inefficient) have a direct computational representation. However, the executability of neural network models already faces an important limitation. Almost all nontrivial networks need to be trained, thus requiring substantial amounts of high-quality training data. In social situations that are composed of a large number of highly constrained courses of behavior (e.g., the game of poker, as modeled by Billings et al. 2002), training data may be available or easy to produce.
But many other interesting social situations occur only rarely (e.g., political summits), are difficult to observe (business mergers), comprise only a small number of turns (trading low-priced goods, as discussed in Section 2) or allow participants to choose from a dauntingly large number of available courses of behavior (polite conversation). It will be interesting to see how future applications deal with this problem.

On a less pragmatic and more fundamental level, the limited understandability of neural networks constitutes a much more severe constraint, at least in the domain of social meta-reasoning. Today, no implementation of a virtual agent that incorporates social meta-reasoning (whatever the implementing technique) performs "well" in comparison to human agents. Until an implemented agent model exists that convinces by its sheer performance, models that do not at least increase our understanding of social meta-reasoning will be of highly restricted and fleeting use. Neural networks, being black box models, undermine almost all attempts to locate representations of the problem domain in their internal structure. For instance, although it would be straightforward to create and train a network to participate in the price negotiation of Section 2, it would be impossible or at least extremely difficult to correlate differences in the performance of the model with understandable variations in the problem domain, such as: the agent assumes/does not assume that the co-agent prefers to close the deal at any cost; the agent sees/does not see an early price adjustment as a sign of "giving in"; the agent is more/less skilled at social meta-reasoning than the co-agent; and so forth. Arguably, such properties are distinctly symbolic in flavor, but it is worth mentioning that this follows from focusing on understandable variations in the problem domain.

Closely related to the limited understandability of neural network models is their restricted changeability. In ABSS, applying conceptual changes (as opposed to technical changes) is usually an integral part of the development of a virtual agent as well as the exploration of its behavior. Such changes need to be driven by theoretical assumptions or empirical information about the problem domain; hence, they also require modelers to create and manipulate isomorphisms between this domain and its representation. For example, in the negotiation setting of Section 2, it would be extremely difficult to discern exactly what distinguishes a network that sees the co-agent as expecting a display of "goodwill" from a network that does not. Because this psychological distinction is not accessible from observations of real-world negotiations, training data would need to be generated artificially, which makes it rather pointless to train a network merely to follow this artificially generated distinction. Similarly, improvements or adaptations of the model would be highly problematic. For instance, it may be interesting to explore the effects of various "cultural" constraints (e.g., a constraint to begin each negotiation by attributing trustworthiness or lack thereof to the other agent) on the outcome - but again, highly selective changes such as these are extremely difficult to carry out in black box models.

* Alternative 2: Decision-theoretical approaches

Understood broadly, decision theory encompasses the study of procedures of rational decision making, usually under conditions of risk or uncertainty. Practically all decision-theoretical models share a set of minimal assumptions about the structural conditions under which a decision is to take place. Given are a set of alternative actions and a set of consequences that can result probabilistically from the actions. In a situation of risk, a utility function describes the decision-maker's preferences and assigns each action an expected utility. A rational decision-maker is expected to choose the action that maximizes his expected utility.
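This decision rule is easy to state as code. The following Python sketch uses invented actions and payoffs loosely inspired by the negotiation setting of Section 2; the names and numbers are our own assumptions:

```python
# Minimal expected-utility chooser. Each action maps to a list of
# (probability, utility) consequence pairs; probabilities per action sum to 1.
def expected_utility(consequences):
    return sum(p * u for p, u in consequences)

def rational_choice(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Invented example: accept a certain $300, or hold out for $350 with a
# 40% risk of no deal at all.
actions = {
    "accept_offer": [(1.0, 300.0)],
    "hold_out":     [(0.6, 350.0), (0.4, 0.0)],
}
# EU(accept_offer) = 300, EU(hold_out) = 210, so a risk-neutral agent accepts.
```

Note that all substantive beliefs about the co-agent are packed into the probability and utility numbers before the rule runs, which is exactly the limitation discussed below.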

Decision-theoretical calculi are an essential component of formal models in fields such as game theory, sociological rational choice theory and public choice theory. Within "pure" and applied game theory, they have been used to provide a criterion of rationality that guides players who perform social meta-reasoning. To give an example from pure game theory, the theory of dynamic games claims that, in order to select a strategy that constitutes a subgame perfect Nash equilibrium, a player makes use of backward induction (Romp 1997: p. 33ff). Specifically, a player uses backward induction to determine whether a final state (i.e., a final node in the extensive form of the game) can be ruled out from the set of possible outcomes. Traversing backwards from the final node over all nodes up to the first, the player asks himself whether the action that leads to the final node would be the most rational choice for the respective player. Whenever he crosses a node that represents an opponent's turn, he needs to take the perspective of the opponent and evaluate the opponent's payoffs in terms of decision theory, rather than his own payoffs. In this way, players reason about each other's reasoning and arrive or fail to arrive at a subgame perfect equilibrium. As an example from applied game theory, the "Recursive Modeling Method" developed by Gmytrasiewicz and Durfee (1995) specifies a decision-theoretical procedure that agents in a multi-agent system can apply to reason about the reasoning of co-agents. The procedure operates on a recursive model of the agent's own payoffs, co-agents' presumptive payoffs, co-agents' presumptive models of other co-agents' presumptive payoffs, and so on. With this information, and constrained by computational resources (obviously, the nesting of levels cannot go on forever), the procedure is able to determine an optimal or almost optimal strategy. 
In essence, the Recursive Modeling Method can be seen as a formally more rigorous and computationally more realistic, though conceptually similar version of the procedure of backward induction discussed above.
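The backward-induction procedure described above can be sketched in Python as follows. The two-player game tree and payoffs are our own toy example, not drawn from the cited works; the perspective-taking step appears where the mover maximizes their own component of the continuation payoffs:

```python
# A game tree node is either a terminal payoff pair (u_player0, u_player1)
# or a pair (player, {move: child}) marking whose turn it is.

def backward_induct(node):
    """Return the payoff vector reached when every player best-responds."""
    if all(isinstance(x, (int, float)) for x in node):
        return node  # terminal node: payoffs for players 0 and 1
    player, children = node
    # The mover evaluates each continuation and maximizes their OWN payoff
    # component -- at an opponent's node, this is the perspective-taking
    # step described in the text.
    return max((backward_induct(child) for child in children.values()),
               key=lambda payoffs: payoffs[player])

# Toy ultimatum-like game: player 0 proposes a split, player 1 responds.
game = (0, {
    "fair":   (1, {"accept": (5, 5), "reject": (0, 0)}),
    "greedy": (1, {"accept": (8, 2), "reject": (0, 0)}),
})
# Player 1 accepts either proposal (5 > 0 and 2 > 0), so player 0,
# reasoning about player 1's reasoning, proposes "greedy".
```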

Regarded under the set of criteria of Section 3, decision-theoretical models of social meta-reasoning face somewhat different problems than neural network models. Because their psychological interpretation is straightforward, they are not particularly difficult to understand or to change (within the small range of changes warranted by decision theory - more on that below). Their implementation in virtual agents and their execution in ABSS-experiments are even simpler than in the case of neural networks, as only the agent, not an additional learning algorithm and training environment, has to be implemented. For these reasons, it is not surprising that decision-theoretical models are used frequently in multi-agent systems.

However, both the parametric input sensitivity and the range and mathematical structure of output values of decision-theoretical approaches are severely limited for representing social meta-reasoning. To give an example, consider the classical SEU-model (subjectively expected utility), which is still at the core of many decision models. It accepts a set of initial "states of the world", a set of alternative actions that transform these states into consequences, and a set of probabilities (one for each initial "state of the world"). On the output side, it assigns each action a cardinal utility value. The "states of the world" do not contain descriptive information; they only serve as indices for the probability values. This means that all qualitative information about the world and all substantive assumptions about the reasoning of co-agents have to be hardcoded beforehand in the probability values and the values of the potential consequences. Social meta-reasoning actually takes place outside of the model, that is, when the modeler defines a parameterization of the decision model. In less restrictive models, such as the procedure of backward induction and the procedure devised by Gmytrasiewicz and Durfee, social meta-reasoning is once more implied rather than represented. Instead of being entirely hardcoded into the parameters, it is partly hardwired into the mechanics of the decision procedure. Specifically, both procedures stipulate that all players know most of each other's beliefs in advance and are able to reveal each other's preferences by assuming what game theorists call "common knowledge of rationality". This means simply that players know each other's interests (deterministically in games of complete information, probabilistically in games of incomplete information), know that they can expect each other to do what is in their interests, and know that all other players know this, too.
Given all this information, it is not exactly difficult to determine how co-players will respond to one's moves.
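The bookkeeping of the SEU-model can be sketched in a few lines. The states, probabilities and utility values below are invented for illustration; note how, exactly as argued above, the "states" are bare indices and every substantive assumption lives in the hardcoded numbers.

```python
# Minimal SEU sketch: states index probabilities, actions map to cardinal
# utilities of their consequences. All values here are hypothetical.
states = ["s1", "s2"]
prob = {"s1": 0.7, "s2": 0.3}          # subjective probabilities of the states

# utility of the consequence of performing `action` when `state` obtains
utility = {
    ("buy",  "s1"): 10, ("buy",  "s2"): -5,
    ("wait", "s1"):  2, ("wait", "s2"):  2,
}

def seu(action):
    """Subjectively expected utility: probability-weighted sum over states."""
    return sum(prob[s] * utility[(action, s)] for s in states)

best = max(["buy", "wait"], key=seu)   # the SEU-maximal action
print(best, seu("buy"), seu("wait"))
```

Any reasoning about a co-agent ("she will probably lower her price") must be compressed by the modeler into `prob` and `utility` before the model runs; nothing in the formalism itself represents that reasoning.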

It has often been noted that decision theory is built on a set of psychologically questionable assumptions (see Gigerenzer and Todd 1999, for a comprehensive overview). Game theory adds its share of social-psychologically implausible postulates, such as the postulate of "common knowledge of rationality" (Heap and Varoufakis 1995: p. 80; Moss 2001). However, we have just seen that decision-theoretical models are far from able to represent all forms of reasoning and meta-reasoning of even unboundedly rational agents. To exemplify this point, consider once again the price negotiation of Section 2. Social meta-reasoning such as, "The co-agent may accept my offer immediately if I make a small concession" cannot be represented by decision theory or game theory because it would not be "rational" (according to these theories) for the reasoner to stay committed to the adjusted price. Likewise, reasoning such as, "A drastically adjusted price reveals that the agent believes her good is worth little" cannot be represented in classical game theory because it involves incomplete information, and it cannot be represented as a game of incomplete information (e.g., a signaling game, Owen 1995: p. 119ff) because (according to our assumptions in Section 2) the agent who adjusts her price may not be aware that she reveals private information. Thus, we see here that it is not just lack of realism that troubles decision theory and game theory. Rather, only a very small portion of all input-output models can be represented by these methods, and rich social meta-reasoning, whether unboundedly or boundedly rational, has almost no place in the represented portion.

* Alternative 3: Formal logic

It is safe to say that in research on virtual agents and multi-agent systems, formal logic is currently the dominant method for representing low-level models as well as assumptions about, and manipulations of, such models (Fagin et al. 1995; Hoek 2001; Wooldridge 2000). Logic makes it easy to characterize arbitrary properties of agents, including properties of their reasoning processes. It is particularly straightforward to find explicit representations for agents' knowledge and beliefs. Such representations have been studied extensively by means of epistemic logic - not just for single agents, but also for the case of multiple agents, where new properties such as shared beliefs, common knowledge and distributed knowledge can emerge (Fagin et al. 1995). Logic makes it equally easy to represent arbitrary relationships between properties in the form of rules. Thus, agents can be characterized as processes that apply rules to their own properties according to the laws of a specific logical calculus, thereby transforming a set of initial states (e.g., perceptions) into a set of final states (e.g., actions).

Compared with other formal approaches, it is a particularly interesting feature of logical agent-models that they render all assumptions about an agent's mental properties and their relationships explicit without imposing a qualitative distinction between reasoning and meta-reasoning. Thus, reasoning about a certain domain and reasoning about other agents' reasoning about this domain can be handled without difficulty within the same framework, even within the same formal language and with arbitrary transitions between levels. This means that the modeler is free to specify explicit relationships between the agent's beliefs about other agents' beliefs, preferences, intentions etc. and the agent's own beliefs etc. For instance, a logical model of the situation described in Section 2 would state in a separate rule that agent 1 of Transcript 1 responds to the fact that agent 2 does not "give in" by ascribing him the belief that the traded good is of low value. For the sake of precision, it must be mentioned that there are two technically different approaches to describing relationships between reasoning and meta-reasoning. In standard first-order logic and related languages, believed entities must be treated as atomic entities (e.g., there is no way to represent a general rule such as "(believes(agent1, p) and believes(agent1, p → q)) → believes(agent1, q)" in first-order logic). Modal logics, on the other hand, are more flexible because they can treat believed entities as decomposable formulae. In ABSS, the decision for one of the two approaches is likely to be guided by chiefly practical criteria (see below).
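The meta-level rule just mentioned - believes(agent1, p) and believes(agent1, p → q) entail believes(agent1, q) - can be sketched directly. The representation below (beliefs as (agent, formula) pairs, implications as tagged tuples) and the agent and proposition names are all invented for illustration, not a proposal for a full modal calculus.

```python
# Sketch: deductive closure of a belief set under belief modus ponens,
# the kind of general rule that modal logics can state but standard
# first-order logic, treating believed entities as atomic, cannot.
from itertools import product

def close_beliefs(beliefs):
    """beliefs: set of (agent, formula) pairs, where an implication is
    represented as ("->", antecedent, consequent). Mutates and returns
    the set, closed under: believes(a, p) & believes(a, p -> q)
    => believes(a, q)."""
    changed = True
    while changed:
        changed = False
        for (a1, f1), (a2, f2) in product(list(beliefs), repeat=2):
            if (a1 == a2 and isinstance(f2, tuple)
                    and f2[0] == "->" and f2[1] == f1):
                derived = (a1, f2[2])
                if derived not in beliefs:
                    beliefs.add(derived)
                    changed = True
    return beliefs

b = {("agent1", "p"), ("agent1", ("->", "p", "q"))}
close_beliefs(b)
print(("agent1", "q") in b)  # the belief in q has been derived
```

Because formulae are ordinary nested values, the same machinery applies unchanged when `p` is itself a belief ascription, giving the arbitrary transitions between reasoning levels described above.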

Conceptually, logical languages describe relationships between propositions, that is, statements that can be true or false (or take further truth values, in the case of many-valued logics). Because virtually all adult human beings are able to reason propositionally, but also because logical agent models require all mental properties to be specified explicitly, such models tend to be highly understandable. The data (i.e., the propositions) built into these models are also highly changeable, as each proposition represents an atomic and exchangeable piece of knowledge. The parametric input sensitivity and the range and mathematical structure of output values of logical agent models are not inherently limited. Similar to the case of neural network models, there exist logical calculi that are computationally complete - in fact, a subset of first-order logic will do (such a subset is at the core of the general-purpose programming language Prolog). Thus, any computable agent function can be represented by a logical agent model.
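The view of an agent as a rule-applier transforming perceptions into actions can be illustrated with a minimal forward-chaining engine over ground Horn rules. The facts and rules below are invented (a real logic programming system such as Prolog would, of course, also handle variables and backward chaining); they loosely echo the negotiation example.

```python
# Sketch: an agent as a set of Horn rules applied to its own properties,
# transforming initial states (perceptions) into final states (actions).
# Facts are tuples; a rule is (body, head) with body a tuple of facts.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose bodies are satisfied until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if all(f in facts for f in body) and head not in facts:
                facts.add(head)
                changed = True
    return facts

rules = [
    # perceiving a lowered offer -> ascribing a belief to the co-agent
    ((("perceives", "offer_lowered"),), ("believes", "good_low_value")),
    # that ascribed belief -> an action of one's own
    ((("believes", "good_low_value"),), ("action", "reduce_counteroffer")),
]

derived = forward_chain({("perceives", "offer_lowered")}, rules)
print(derived)
```

Note that the rule linking a perception to a belief ascription and the rule linking that ascription to an action sit in the same rule set: reasoning and meta-reasoning are handled by one and the same mechanism.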

If formal logic compares well with respect to four of our five criteria, it seems that the criterion of implementability and executability, easily fulfilled by neural networks and decision theory, is its weakest spot. The (perceived) need to implement a full theorem-prover for a chosen or even a new logical language seems to explain why formal logic plays only a minor role in contemporary ABSS - despite its importance to research on multi-agent systems. A notable exception is SDML, the Strictly Declarative Modeling Language developed at the Centre for Policy Modelling at MMU, a declarative programming language that is "close to being a logic" (Edmonds et al. 1996) and that forms the core of a complete and freely available system for simulating social and economic processes involving multiple agents. However, the many existing logical languages and calculi (from standard first-order logic to specialized polymodal logics) that would be suitable for diverse areas of ABSS are not readily translated into special-purpose languages such as SDML. Even worse, implementing them in standard programming languages is almost always a challenging problem and subject to ongoing research in the area of automated theorem proving. The fact that even first-order logic allows well-formed but mathematically undecidable propositions entails that all general-purpose reasoners based on this calculus (and others) are formally incomplete - certainly not a very appealing characteristic for the practitioner.

However, it appears to me that the practical relevance of these fundamental limitations to ABSS should not be overestimated - an argument that I can only hope to make plausible, as both evidence and counterevidence are currently lacking. First of all, free, efficient and usable theorem-provers exist for first-order logic, for higher-order logic and for several modal logics (among others) - examples include LeanTAP, ModLeanTAP, Otter, RACER, SPASS, Bliksem and Isabelle.[3] The former two are Prolog modules, thus easily reusable in one's own software, whereas the remaining ones are full-fledged theorem-proving environments.[4] Second, the example of SDML shows that many applications in ABSS simply do not require the immense representational and computational power of a complete logical language. Many properties of virtual agents, including sophisticated social meta-reasoning, can be encoded directly in a logical programming language such as Prolog. Similarly, it is practically irrelevant that all sufficiently complex logical languages permit representations of problems that are at once well-formed and mathematically undecidable, as this is a technical constraint that applies equally to automated reasoners and to human beings. Any of the aforementioned theorem-provers is able to handle deductions that are far more complex than anything we could imagine occurring in ABSS (within the boundaries of the calculi implemented by the respective theorem-provers).

* Conclusion

As should be clear from our evaluation, we find that logical methods are particularly suitable for modeling social meta-reasoning in ABSS. Logical representations are more understandable and more changeable than neural network models. These are properties that may be irrelevant in areas such as visual object detection (where neural networks have been applied with great success) but vital in a field that is as elusive and ill-understood as social interaction. Likewise, logical representations are more general than decision-theoretical approaches, which are actually unable to formalize all agent functions. In addition, because it is unclear if and how reasoning about a maximization process can be encoded as a maximization process, decision-theoretical models are forced to represent practically all social meta-reasoning statically in the model parameters and the mechanics of the decision procedure. In sum, it seems that only logical methods are able to provide an adequate, elegant and transparent representation of the nesting of reasoning-levels that is so characteristic of social meta-reasoning. Of course, it may be objected that our comparison was partial. However, we hope that our use of a set of comparative criteria that were defined in advance reduced the partiality of our comparison to a sufficient extent.

With the availability (and ongoing development) of general-purpose automated theorem-provers for a range of logical languages, such languages become a practicable method for representing the cognitive structure and features of agents in ABSS. Packaged with these tools comes the prospect of using social simulation in order to explore the interactive and social relevance of a fundamental social-psychological phenomenon - agents' reasoning about the reasoning of other agents. Boosted by new technical and methodological possibilities, social simulation has the opportunity of becoming an indispensable aid for studying this phenomenon.

* Notes

1 I am indebted to an anonymous referee for this reference.

2 Any continuous function can be approximated with arbitrary precision by a multilayer perceptron with a single hidden layer; two hidden layers are sufficient to approximate any discontinuous function (Haykin 1999: p. 209).

3 Instead of providing links that may soon become invalid, we suggest that the interested reader search for any of these names on one of the Internet's better search engines - it is likely that the first hit will be the right link.

4 We have used LeanTAP for developing software for ABSS and will be most happy to share our experiences with readers.

* Acknowledgements

I am indebted to three anonymous referees for their valuable comments.

* References

BACH K (2001). You Don't Say? Synthese, 128. pp. 15-44.

BILLINGS D, DAVIDSON A, SCHAEFFER J and SZAFRON D (2002). The Challenge of Poker. Artificial Intelligence, 134. pp. 201-240.

CARPENTER J P (2000). Evolutionary Models of Bargaining: Comparing Agent-based Computational and Analytical Approaches to Understanding Convention Evolution. Computational Economics, 19. pp. 25-49.

DIGNUM F and SONENBERG L (2004). A Dialogical Argument for the Usefulness of Logic in MAS, RASTA 2003. Berlin: Springer.

EDMONDS B (2004a). Comments on "A Dialogical Argument for the Usefulness of Logic in MAS", RASTA 2003. Berlin: Springer.

EDMONDS B (2004b). How Formal Logic Can Fail to be Useful for Modelling or Designing MAS, RASTA 2003. Berlin: Springer.

EDMONDS B and MOSS S (2001). The Importance of Representing Cognitive Processes in Multi-agent Models. In DORFFNER G, BISCHOF H and HORNIK K (Eds.), ICANN 2001 (pp. 759-766). Berlin: Springer.

EDMONDS B, MOSS S and WALLIS S (1996). Logic, Reasoning and A Programming Language for Simulating Economic and Business Processes with Artificially Intelligent Agents. In EIN-DOR P (Ed.), Artificial Intelligence in Economics and Management (pp. 221-230). Boston: Kluwer.

FAGIN R, HALPERN J Y, MOSES Y and VARDI M Y (1995). Reasoning about Knowledge. Cambridge, MA: MIT Press.

FARATIN P, SIERRA C and JENNINGS N R (2002). Using Similarity Criteria to Make Issue Trade-offs in Automated Negotiations. Artificial Intelligence, 142. pp. 205-237.

GIGERENZER G and TODD P M (1999). Simple Heuristics That Make Us Smart. New York/Oxford: Oxford Univ. Press.

GMYTRASIEWICZ P J and DURFEE E H (1995). A Rigorous, Operational Formalization of Recursive Modeling, Proceedings of the First International Conference on Multi-Agent Systems (pp. 125-132). Menlo Park: AAAI Press/The MIT Press.

GRICE H P (1989). Meaning. In GRICE H P (Ed.), Studies in the Way of Words (pp. 213-223). Cambridge, MA: Harvard Univ. Press.

GUMPERZ J J (1982). Discourse Strategies. Cambridge: Cambridge Univ. Press.

GUMPERZ J J (1995). Mutual Inferencing in Conversation. In MARKOV I, GRAUMANN C F and FOPPA K (Eds.), Mutualities in Dialogue (pp. 101-123). Cambridge: Cambridge Univ. Press.

HARRIS P (1996). Desires, Beliefs, and Language. In CARRUTHERS P and SMITH P K (Eds.), Theories of Theories of Mind (pp. 200-220). Cambridge: Cambridge Univ. Press.

HAYKIN S (1999). Neural Networks: A Comprehensive Foundation. Englewood Cliffs, NJ: Prentice-Hall.

HEAP S H and VAROUFAKIS Y (1995). Game Theory: A Critical Introduction. London: Routledge.

HOEK W V D (2001). Logical Foundations of Agent-based Computing. In LUCK M (Ed.), ACAI 2001 (pp. 50-73). Berlin: Springer.

HUBER M J, DURFEE E H and WELLMAN M P (1994). The Automated Mapping of Plans for Plan Recognition. In MNTARAS R L D and POOLE D (Eds.), UAI '94: Proceedings of the Tenth Annual Conference on Uncertainty in Artificial Intelligence (pp. 344-351). San Francisco: Morgan Kaufmann.

LEVINSON S C (2000). Presumptive Meanings: The Theory of Generalized Conversational Implicature. Cambridge, MA: MIT Press.

MACY M and WILLER R (2002). From Factors to Actors: Computational Sociology and Agent-based Modeling. Annual Review of Sociology, 28. pp. 143-166.

MAUDET N (2003). Negotiating Dialogue Games. Autonomous Agents and Multi-Agent Systems, 7. pp. 229-233.

MOSS S (2001). Game Theory: Limitations and An Alternative. Journal of Artificial Societies and Social Simulation, 4(2) http://jasss.soc.surrey.ac.uk/4/2/2.html

OWEN G (1995). Game Theory. San Diego: Academic Press.

REICH W (2003). Dialogue and Shared Knowledge: How Verbal Interaction Renders Mental States Socially Observable, Uppsala University.

ROMP G (1997). Game Theory: Introduction and Applications. New York/Oxford: Oxford Univ. Press.

RUSSELL S J and NORVIG P (2003). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.

SURYADI D and GMYTRASIEWICZ P J (1999). Learning Models of Other Agents using Influence Diagrams, Proceedings of the Seventh International Conference on User Modeling (pp. 223-232). Wien/New York: Springer.

THOYER S, MORARDET S, RIO P, SIMON L, GOODHUE R and RAUSSER G (2001). A Bargaining Model to Simulate Negotiations between Water Users. Journal of Artificial Societies and Social Simulation, 4(2) http://jasss.soc.surrey.ac.uk/4/2/6.html

WOOLDRIDGE M (2000). Reasoning About Rational Agents. Cambridge, MA: MIT Press.


