© Copyright JASSS


Reasoning about Rational Agents

Michael Wooldridge
Cambridge, MA: The MIT Press
2000
Cloth: ISBN 0-262-23213-8


Reviewed by
Bruce Edmonds
Centre for Policy Modelling, Manchester Metropolitan University, Manchester, UK.


This book is an archetypal product of the Belief-Desire-Intention (BDI) school of multi-agent systems. It presents what is now the mainstream view as to the best way forward in the dream of engineering reliable software systems out of autonomous agents: the use of formal logics to specify, implement and verify distributed systems of interacting units, with beliefs, desires and intentions as the guiding analogy. The implicit message behind the book is this: Distributed Artificial Intelligence (DAI) can be a respectable engineering science. It says: we use sound formal systems; we can cite established philosophical foundations; and we will be able to build reliable and flexible software systems.

Thus this is not a radical or deep book but an attempt to present a convincing synthesis. It does not seek to closely question or examine its assumptions, it does not present novel material and it does not seriously consider alternative approaches. Rather it aims for a clear, coherent and focused account of a particular way forward - the BDI-logic way. As an introductory book for students of the BDI way it is excellent: clearly written, at the right level, well structured and with a reasonable number of examples. In this role the only criticism I would have is that, instead of clearly flagging the underlying biases, assumptions, difficulties and unfinished work, it has a tendency to obscure these in a way that might mislead a student as to the status of the claims made. One could easily be misled into concluding that the vision the book presents is close to being achieved.

I will briefly discuss two examples of this.

The book rightly differentiates the three strands behind the BDI approach: a philosophical analysis of human practical reasoning (almost entirely taken from a single philosopher: Bratman); the formal BDI logics and close relatives; and implemented systems using BDI-like agents. What it does not do is make clear that these three strands do not correspond closely. The BDI logics are only generally inspired by the outline of Bratman's analysis and the formal work on BDI logics is only weakly related to the implemented systems using the BDI analogy (a fact that Wooldridge himself bewails in a recent paper). In fact the only thing that really unites these three strands is the BDI analogy itself.

A second example is in the important question of validation. Wooldridge points out the difficulty of validating the BDI framework but follows this with the phrase "Fortunately we have powerful tools to help us in our investigation" and goes on to discuss BDI logics (and one, called LORA, in particular). This gives the impression that the clarity and precision of formal logic somehow helps solve the problem of validation, which it does not.

Thus the book simply passes over the problem of validation - there are no sections on validation in this book, which is a major omission. Similarly it mentions and then passes over the whole business of learning and updating the beliefs of the agent. Sadly these two omissions are characteristic of much work in AI at the moment. In the former case the paucity of implemented systems that actually scale up to cope with real world problems is an embarrassment to the field. (This, one suspects, is the reason why any lengthy discussion about it is omitted.) Rather, the field seems to be retreating into a self-consistent but ultimately irrelevant approach (thus following the path of economics). In the second case the processes of learning and induction are simply seen as the business of another field: machine learning (ML). This would be fine if the AI and ML fields were not so notoriously disjoint - there seems to be almost nobody considering the interaction or integration of learning and deductive processes. Thus almost all aspects of learning, even those that might have a bearing on the other issues mentioned, are omitted.

If the book is not just supposed to be a summary of the present position but also to present arguments for the approach, then it is weaker. The fact is that the techniques described are far from established engineering science and more in the realm of work in progress. There are elements of coherence in this field but, as a whole, it is riddled with holes. Wooldridge does not make this clear in a way that an outsider or novice would understand.

The majority of this book follows the style of many papers in this field. It uses a formal logic to express issues about agent design, giving the impression that the subject is an analytic one. However, a closer inspection reveals that the formal machinery is not actually used to gain any significant results. The extensively introduced and complex formal apparatus gives no benefit apart from being a clear means of expression. There are but a few proofs, and all of these are simple consequences of the definitions. There are no useful results about the behaviour of agents at all. Further, the book (to its credit) systematically rules out each and every possible use of the formal system presented on the grounds of its impracticality. Thus, in successive sections, the use of LORA for specification, for verification and for implementation is ruled out for any system of a size or complexity suitable for tackling real problems.

What we are left with is a vision of how agent-based software systems might be developed. The vision is: the functions of the desired system are divided up into separate roles; these roles will be independently executed by separate agents; the behaviour of each of these agents will be formally specified at an abstract level in terms of goals and beliefs; this specification will be implemented (either by direct execution, incremental implementation or compilation); the implementation and resulting whole system behaviour will be checkable using the original formal specification. Thus we would have a way of engineering complex, adaptive, distributed and interactive systems in a provably reliable way.
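The deliberation cycle implied by this vision can be caricatured in a few lines. Purely as an illustration (every name below is invented for this sketch; nothing here comes from the book, from LORA, or from any real BDI platform), an agent whose behaviour is specified abstractly in terms of beliefs, desires and plans might run a loop like this:

```python
# A minimal, illustrative BDI-style perceive-deliberate-act loop.
# This is a caricature for discussion, not an implementation of any
# real BDI system; all names and structures are invented.

def bdi_loop(beliefs, desires, plan_library, perceive, execute, max_steps=10):
    """Repeatedly revise beliefs, pick an unachieved desire, and run its plan."""
    for _ in range(max_steps):
        beliefs |= perceive()                              # (trivial) belief revision
        options = [d for d in desires if d not in beliefs]  # unachieved desires
        if not options:
            break                                          # nothing left to pursue
        intention = options[0]                             # commit to one desire
        for action in plan_library.get(intention, []):     # means-ends reasoning
            beliefs |= execute(action)                     # acting changes the world
    return beliefs

# A toy run: one desire, one two-step plan, deterministic action effects.
effects = {"boil_water": {"water_hot"}, "brew": {"have_coffee"}}
result = bdi_loop(
    beliefs={"at_home"},
    desires={"have_coffee"},
    plan_library={"have_coffee": ["boil_water", "brew"]},
    perceive=lambda: set(),
    execute=lambda a: effects[a],
)
# result now includes "water_hot" and "have_coffee"
```

Even this toy exposes where the hard problems hide: belief revision, learning and plan failure are all waved away in the one-line comments, much as the review argues they are in the literature.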

The vision is remarkably reminiscent of Hilbert's vision for mathematics at the start of the 20th century. Hilbert envisioned the formal mechanisation of mathematics - Wooldridge envisions the formal logicisation of software engineering. Gödel and others proved much of Hilbert's programme impossible - his dream was attractive but unworkable. So far all the signs are that this is also the case for Wooldridge's vision. As far as I am aware there is no multi-agent system based purely upon a priori principles and design that does not suffer from the scaling problem.

What is the case is that analogies like the BDI framework can be helpful for guiding programmers. Thus having a specification and programming languages that facilitate this analogy might help in the production of software. If all that Wooldridge is suggesting is that this analogy can be helpful in three separate areas (philosophy, logic and agent design) then nobody will disagree with him. However he is suggesting more: namely that the philosophical thought of one philosopher (Bratman) does provide some sort of foundation for the BDI formalisms and that these formalisms can be used in the implementation of agent-based systems. He is, unfortunately, totally unconvincing in his efforts to show that the connections between the three strands are substantial enough to support a methodology capable of engineering software systems suitable for the real world.

So what does the book hold for the social simulator? It raises and clearly expresses a number of issues to do with agents and their interaction, but these issues are distorted and simplified to fit into a logicist framework. For example, an agent's goals are reduced to a set of statements describing what would be true in a hypothetical, desirable state of affairs. As you can imagine, this greatly eases the task of expressing goals - since one can cleanly separate out reasoning about what is feasible from the planning of how to achieve it! This trend is fairly general in the book: one finds only impoverished accounts of key ideas including communication, co-operation and situatedness. Other key aspects of agents are deliberately passed over: no social interaction more complex than a simple one-to-one interaction is discussed.
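The reduction described above can be made concrete. In a toy encoding of my own (not the book's formalism), a goal is just a set of propositions, and a goal "holds" exactly when those propositions are all true in the current state - note that nothing in this representation says whether the goal is reachable or how to reach it:

```python
# Goals as sets of propositions true in a desired state of affairs.
# A toy encoding for illustration only, not the LORA formalism.

def achieved(goal, state):
    """A goal holds exactly when every one of its propositions is in the state."""
    return goal <= state  # subset test on sets of proposition strings

state = {"door_open", "light_on"}

achieved({"light_on"}, state)    # True: the desired proposition holds
achieved({"heating_on"}, state)  # False: and nothing says how to make it hold
```

The simplification is visible in the code: expressing a goal costs one set literal, while everything difficult (feasibility, planning, conflicting goals) lives outside the representation entirely.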

We are left with a book that presents an accessible account of a current orthodoxy in agent systems. This makes it ideal for those who wish to understand this literature. Unfortunately, the book also inherits the content and achievements of this orthodoxy, which at the present time is mostly just a vision.


© Copyright Journal of Artificial Societies and Social Simulation, 2002