Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents

Edited by Robert Trappl and Paolo Petta
Berlin: Springer-Verlag
1997
Paper: ISBN 3-540-62735-9

Reviewed by
Rosaria Conte, Division of AI, Cognitive and Interaction Modelling, PSS (Project on Social Simulation), IP/CNR, Viale Marx 15, 00137 Rome.

One good thing about the study of synthetic actors is that it stimulates novel research with a highly multi-disciplinary approach and audience. If this were the sole merit of this collection, it would still repay careful reading. However, multi- and inter-disciplinarity are not the only virtues of the book. Many other objectives of the volume, and a good share of the contributions included, merit detailed consideration and commentary, which must inevitably include some reservations and critiques.

In this review, I will begin by describing the main goals which the editors have explicitly addressed and the main contents of the book as I have understood them. Next, I will turn to what appear to me to be its most relevant contributions. Finally, I will make some critical remarks, express some doubts and formulate some urgent questions which, in my view, should be answered before those explicitly addressed in the volume under review.

* Synthetic Agents: A Promising Field

The volume edited by Trappl and Petta gathers contributions in a rather innovative field, the design and implementation of personalities for synthetic agents, which received a strong impulse from a workshop held in Vienna in 1997. From this workshop, the volume in question collects the most significant presentations. However, as is dutifully announced in the introduction, the book is something more than a volume of proceedings, since both the contributors and the editors have successfully achieved consistency of presentation, providing a substantial and comprehensive picture of the field. The guidelines for orienting the non-expert reader in the relevant literature (included at the end of the volume) are an example of the efforts made to produce a book that is a useful research tool. Furthermore, the introduction to the volume presents the reader with arguments justifying interest in the field.

Indeed, why create personalities for synthetic actors at all? Both the introduction by the editors and many of the contributors present answers to this question. These include not only applications in the field of animation and the role of characters in dramatisation, but also actual and potential uses in several fields of science, such as virtual life (Thalmann et al.) and theories of personality, cognition and emotions. (See, for example, the chapters by Moffat and Sloman.) Below, the contents of most of the papers will be briefly described according to the main scientific themes addressed in the collection.

* Current Results

The issue of reproduction is crucial in virtual life, although not in the evolutionary sense intended in the fields of biology and Artificial Life. Instead, reproduction is intended in the sense of faithful representation. As described in the chapter by Thalmann et al. (Autonomous Virtual Actors Based on Virtual Sensors), virtual life is a field at the intersection between virtual reality and Artificial Intelligence, and is based on concepts from real-time computer animation, autonomous agents and mobile robotics. The objective is to provide autonomous virtual humans with the skills necessary to perform, say, stand-alone roles in films. By autonomous, they mean that the actor (in both senses) does not require continued intervention from a user or controller. Other approaches require the actor to have pre-programmed knowledge about the surrounding environment. The approach proposed in this work requires (more realistically) that the actors be equipped with visual, tactile and auditory sensors which allow this knowledge to be acquired. The chapter is devoted to showing how synthetic actors can receive and integrate relevant information from different sensors.
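To make the idea concrete, the following is a minimal sketch (in Python, of my own devising and not taken from the chapter) of an actor that acquires knowledge of its environment by querying simulated sensors rather than being given it in advance; the class names, sensing channels and world representation are all invented for illustration.

    from typing import Callable, Dict, List

    class VirtualSensor:
        """A simulated sensor the actor can query (illustrative, not the authors' code)."""
        def __init__(self, name: str, reading_fn: Callable[[Dict], Dict]):
            self.name = name
            self.reading_fn = reading_fn

        def sense(self, world: Dict) -> Dict:
            return self.reading_fn(world)

    # Three hypothetical sensing channels standing in for vision, touch and hearing.
    def visual(world):   return {"obstacles": world.get("visible_obstacles", [])}
    def tactile(world):  return {"contact": world.get("in_contact", False)}
    def auditory(world): return {"sounds": world.get("sounds", [])}

    class SyntheticActor:
        def __init__(self, sensors: List[VirtualSensor]):
            self.sensors = sensors
            self.world_model: Dict = {}

        def perceive(self, world: Dict) -> Dict:
            # Integrate the readings from all sensors into a single internal model,
            # instead of pre-programming the actor with knowledge of its environment.
            for sensor in self.sensors:
                self.world_model.update(sensor.sense(world))
            return self.world_model

    actor = SyntheticActor([VirtualSensor("eyes", visual),
                            VirtualSensor("skin", tactile),
                            VirtualSensor("ears", auditory)])
    print(actor.perceive({"visible_obstacles": ["chair"], "sounds": ["door slam"]}))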

The implementation of realistic human capacities in animats is at the core of several chapters. Realism is somehow pursued through the development of enhanced individuality for synthetic actors. An example is the chapter by Badler et al. (Towards Personalities for Animated Agents with Reactive and Planning Behaviors), where realism is achieved at a higher and more abstract level of complexity than in the contribution previously mentioned, namely at the level of decision-making and planning. For Badler et al., the achievement of realistic animation is thought to result from the implementation of different "styles" of sensing, decision-making and planning. Implementing personalities for animats is therefore intended as a means of enhancing the realism of the animation. The term personality is used by the authors to refer to different cognitive styles, including (for example) the agent's commitment to its current goals. However, in the work presented in this chapter, personalities are implemented at the level of behavioural control of locomotion. This is achieved by setting several parameters (such as speed and inertia) to different values. Different combinations of parameter values give rise to different styles of locomotion. The authors believe such an operational definition of personality could be expanded to higher-level performance and include non-locomotor traits. The actual results presented in this paper are based on a rather simple agent architecture, in which lower-level control is achieved through a Sense-Control-Act loop, while control at a higher level is realised through parallel state machines called Parallel Transition Networks with hierarchically ordered schemata. These PaT-Nets sequence actions based on the current state of the environment, of the goal, or of the system itself. Special purpose reasoners and planners are associated with specific states of a network.
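As a rough illustration of this operational notion of personality (a sketch of my own, not code from the chapter), the same locomotion controller can be shared by all actors, with a "personality" reduced to a particular setting of its control parameters; the parameter names and values below are invented.

    from dataclasses import dataclass

    @dataclass
    class LocomotionStyle:
        # A "personality" as a point in parameter space (illustrative parameters only).
        speed: float     # preferred walking speed
        inertia: float   # resistance to changes of direction, between 0 and 1

    BRISK    = LocomotionStyle(speed=1.8, inertia=0.2)
    SLUGGISH = LocomotionStyle(speed=0.7, inertia=0.8)

    def step_towards(position, target, style, dt=0.04):
        # One tick of a Sense-Control-Act loop: the controller is identical for
        # every actor; only the style parameters differ, yielding different gaits.
        dx, dy = target[0] - position[0], target[1] - position[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        gain = (1.0 - style.inertia) * style.speed * dt
        return (position[0] + gain * dx / dist, position[1] + gain * dy / dist)

    print(step_towards((0.0, 0.0), (10.0, 0.0), BRISK))     # larger step per tick
    print(step_towards((0.0, 0.0), (10.0, 0.0), SLUGGISH))  # smaller step per tick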

As stated above, other contributions included in the collection are also devoted to the implementation of personality and character in synthetic actors. Some authors (like Hayes-Roth et al. in their chapter Acting in Character) explicitly declare their main interest to lie in dramatisation and consequently turn to the literature on theatre rather than to psychological or psycho-social research. In contrast, the contributions by Moffat (Personality Parameters and Programs) and Sloman (What Sort of Control System is Able to Have a Personality?) use the design of personality and character for synthetic actors to explore aspects of virtual cognition including sentiment, emotion and personality. This purpose is commendable, especially since, as Moffat shows in his short but substantial review of the major psychological theories of personality, none of these theories is able to provide a computational model of those phenomena. For those readers, including the present reviewer, who share the view that computation is a fundamental means for creating, testing and applying theories, this is a major defect of psychological theories of personality, whatever their virtues may be. Moreover, as Sloman points out in his contribution, the common theme underlying the study of the artificial is to explore the frontiers of agenthood, intelligence and autonomy, and to answer questions about how a system can be designed to display any given level and type of intelligence or autonomy. The same line of reasoning is applied by both authors to the exploration of personality and affect. Moffat justifies his desire to design an artificial personality not only as a method for investigating natural personality, but also as a way of making artificial agents like robots "better" and of simulating characters for dramatic purposes. Why should a robot have a personality? Moffat answers by means of examples. Suppose you need a robot for a rather delicate and dangerous task, such as controlling a nuclear power station. Obviously, you need it to be reliable, responsible, thoughtful and conscientious, whilst you may need it to have different personality traits in the accomplishment of other tasks. An entertainment robot might need to be cheerful, for example. In seeking a computational basis for personality, Moffat draws ideas from theories of social learning (put forward by Rotter, Bandura and Mischel) rather than the better known psychological approaches: trait theory and the views of Freud, Skinner and Maslow. The author presents Frijda's model of emotion and provides the architectural details for a computational model where modules for planning, perceiving and executing communicate through the central memory in a sort of blackboard structure. For his part, Sloman defines emotions as socially important states involving cognitive processes (page 192). He develops the concept of a "niche" as a set of (environmental) problems and that of "design" as the set of characteristics developed for answering those problems. An example of an artificial personality is presented in a specific context, that of nursing. An agent, called "minder", is defined in terms of the problems a baby can encounter and which a nursemaid is expected to deal with. A quite detailed design for the minder is presented and the corresponding architecture is illustrated.
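A minimal sketch of the kind of blackboard-style coordination Moffat describes (my own illustration under stated assumptions, not his implementation): perceiving, planning and executing modules that communicate only through a shared central memory; the module and key names are invented.

    class Blackboard:
        """Shared central memory through which the modules communicate (illustrative)."""
        def __init__(self):
            self.store = {}

        def post(self, key, value):
            self.store[key] = value

        def read(self, key, default=None):
            return self.store.get(key, default)

    def perceive(bb):
        bb.post("percepts", {"obstacle_ahead": True})

    def plan(bb):
        if bb.read("percepts", {}).get("obstacle_ahead"):
            bb.post("plan", ["turn_left", "move_forward"])

    def execute(bb):
        for action in bb.read("plan", []):
            print("executing:", action)

    bb = Blackboard()
    for module in (perceive, plan, execute):  # one cycle of the control loop
        module(bb)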

A concern for realism is also at the root of research on Believable Agents. (See IMPROV: A System for Real-time Animation of Behavior-Based Interactive Synthetic Actors by Goldberg or Loyall's chapter Some Requirements and Approaches for Natural Language in a Believable Agent for examples.) Agent believability, sociability and responsiveness are considered as important qualities in problem-solving, even more than intelligence and competence (Loyall, page 113). In the study by Goldberg, believability is intended to mean the agent's capacity to perform certain activities simultaneously; for example, walking and chewing gum (page 58). Now, as the author warns us, there are many ways in which activities can be combined. The rationale chosen by the author refers to the personality of an agent. A character's personality may be found in a certain "... tendency to behave in a certain manner that is consistent from one situation to the next." This is achieved by assigning each actor a set of behavioural attributes to "... give the impression ..." of a unique personality. The same attributes can also be used to describe the relationships between actors, objects and behaviours. Furthermore, a character's behaviour toward another character may be influenced by their sympathy or hostility for that character. How does this relate to autonomy and believability? Agents are provided with script-like structures for plan execution, which leave the actors with a high degree of freedom concerning the specific ways that they may apply the script: to go to work, should I take a bus, a car or a bike? Goldberg suggests that such a decision be taken by feeding the decision-maker with a specific criterion (in the context of the previous example: spare time) which each actor will interpret differently according to their individual propensities: a lazy character might decide to take a cab.
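To illustrate the idea behind Goldberg's example (in a sketch of my own devising, not code from the chapter), the script fixes what must be done, while each actor's behavioural attributes bias how the open choices are resolved; the attribute names, options and weights below are invented.

    # Cost of each transport option along two hypothetical criteria.
    OPTIONS = {
        "bus":  {"effort": 0.3, "delay": 0.6},
        "bike": {"effort": 0.8, "delay": 0.2},
        "cab":  {"effort": 0.1, "delay": 0.4},
    }

    def choose_transport(personality):
        # Each actor interprets the same decision criterion differently:
        # a lazy actor weights effort heavily, a punctual one weights delay.
        def score(costs):
            return (personality["laziness"] * costs["effort"]
                    + personality["punctuality"] * costs["delay"])
        return min(OPTIONS, key=lambda option: score(OPTIONS[option]))

    lazy_actor     = {"laziness": 0.9, "punctuality": 0.2}
    punctual_actor = {"laziness": 0.1, "punctuality": 0.9}
    print(choose_transport(lazy_actor))      # "cab"
    print(choose_transport(punctual_actor))  # "bike"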

The issue of external and internal control is raised by Blumberg and Galyean in the following paper (Multi-Level Control for Animated Autonomous Agents: Do the Right Thing ... Oh, Not That ...). The authors propose that we should distinguish four levels of control: motor, behavioural, motivational and environmental. They propose a hybrid architecture in which controlling mechanisms are layered. This is explicitly intended by the authors who feel, for example, that the action-selection process (responsible for choosing which behaviour should be active at any given time) should be accessible directly by an external entity.
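The following sketch (mine, not the authors') illustrates the general idea of layered control with an externally accessible action-selection process: internal motivations normally drive behaviour selection, but a "director" can override it directly; all names and values are invented.

    class Actor:
        def __init__(self):
            self.motivations = {"hunger": 0.7, "fatigue": 0.3}  # motivational level
            self.external_command = None                         # environmental level

        def direct(self, behaviour):
            # External control: bypasses the internal action-selection process.
            self.external_command = behaviour

        def select_behaviour(self):
            # Behavioural level: normally chosen by the strongest motivation,
            # unless an external command has been injected.
            if self.external_command is not None:
                chosen, self.external_command = self.external_command, None
                return chosen
            return "satisfy_" + max(self.motivations, key=self.motivations.get)

        def act(self):
            print("motor level executes:", self.select_behaviour())  # motor level (stubbed)

    actor = Actor()
    actor.act()               # internally selected: satisfy_hunger
    actor.direct("sit_down")
    actor.act()               # externally imposed: sit_down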

* Why Create Personalities?

So, why create personalities for synthetic actors? This question is relatively easy to answer from the perspective of applications. Undoubtedly, entertainment and art products have a significant appeal not only for animation specialists, but also for students of Artificial Intelligence, Artificial Life, Cognitive Science, psychology and philosophy. To see why the entertainment industry and educational agencies are likely to profit from advances in the field of synthetic agents with personalities, consider the increasing investment which the film industry has made in animated "characters". The generation of animats which are ever better approximations to real-life agents is of great practical appeal to the world of fiction. But what about scientific outcomes? The relationship between science and application is never an easy one. Scientists are well known for their hypocritical snobbery about applicable science. (Keynes could not resist the affectation of claiming that he despised money, and technology was evil in the eyes of the great mathematician Hardy.) Lately, scientists have become rather more sensitive to this issue. In many fields, a scientific work which is not applicable to some non-scientific problem or aim is no longer worthy of consideration or, more specifically, publication. However, it should be recalled that the relationship between science and application is scientifically relevant only if and to the extent that it is bi-directional, which is to say, only if applications feed back into scientific development. To give but one example, consider Artificial Intelligence. Whilst many of its applications have had no scientific import, applications to problem solving and planning, or to natural language processing and human-computer interaction, whether successful or not from the point of view of their applications, have had the merit of stimulating an enormous amount of good inter-disciplinary and cutting-edge research on the implementation of complex integrated architectures for (multiple) agents (Russell and Norvig 1995, O'Hare and Jennings 1996, Wooldridge and Jennings 1995, Wooldridge and Jennings 1998 and Wooldridge et al. 1996).

* Three Good Reasons ...

Undoubtedly, the field of synthetic actors is an important area of technological application which is likely to expand in the immediate future. Two social factors contribute to the evident success of the field:

  • The application to fiction and virtual drama make a significant contribution to its popularity. As we all know, in recent decades growing resources have been invested in the industries of the ephemeral and this tendency is not likely to reverse in the immediate future.
  • It facilitates and stimulates the use of computation to explore issues of traditional fascination, such as the implementation of artificial personalities. To synthesise personalities or "implement individuals" appeals to specialists' notions of omnipotence and also seduces a non-expert audience.

However, more "scientific" reasons also contribute to this interest:

  • The development of personality agents occurs at the intersection of several interesting fields. This is true at the level of the discipline (from Artificial Intelligence to biology and from psychology to the humanities), at the level of theories (agent theory, social theory, theories of personality and emotions) and at the level of computational techniques: the development of synthetic actors involves visualisation and simulation techniques, but would still profit from a closer interaction with established and novel Artificial Intelligence methodologies.
  • This field interacts with other (more established) fields in the study of some problems, like those associated with autonomous situated agents (Georgeff 1991) while at the same time opening up novel directions like the study of artificial personality and emotions.

However, the study of personality agents also runs the risk of early corruption. This may result from overlooking good opportunities for cross fertilisation with other fields of research, while wasting both potential and resources on tackling a number of difficult questions prematurely. Let us examine this risk and its indicators.

* ... And Missed Opportunities

In the opinion of this reviewer, although rather appealing, almost all the chapters included in the collection present research built on infirm theoretical grounds. Several factors may contribute to this.

Premature Questions

Applications occasionally demand premature investments in avenues of research for which the available theoretical, conceptual and methodological instruments are still inadequate. For example, the features most necessary for animats to impress the human imagination are realism and individuality. Animated agents must reproduce reality in order to create an effective illusion. Therefore, they must resemble natural, individual agents. The magnitude of these requirements is both a challenge and a stumbling block to research. On the one hand, it stimulates cutting edge research on agent modelling and design. The need for realistic animation gives impetus to the study of believable, responsive and sociable agents (Loyall) and to research on autonomous effective perception and realistic movement (Thalmann et al.). The requirement for individualised characters pushes forward research on the implementation of emotions (Moffat, Sloman) and personality (Moffat, Hayes-Roth et al.). On the other hand, the ambitiousness of the task may cause:

  • terminological and conceptual confusions. For example, the question of the semantic relationships between character and personality is almost never explicitly addressed, with the exception of Hayes-Roth et al. who conclude (page 111) that the difference between these notions is less than is commonly believed!
  • theoretical confusion. For example, the theories of social learning are generally considered cognitive-behavioural theories (as Moffat reminds us) because they include motivations and expectations. However, the role played by these notions in the theories in question is far from cognitive: social learning theories pay no attention to agents' internal representation and processing. Motivations are control mechanisms which may operate independently of cognition. They need no system of symbolic representation and manipulation. Analogously, expectations are effects of reinforcement, at least according to a strictly behavioural model. After an adequate number of positive reinforcements, the organism behaves as if it were expecting that the behaviour reinforced so far will be reinforced again.
  • confusion at the level of the questions raised, as happens when unexplained or incompletely understood phenomena (like autonomy and intelligence) are replaced by new ones (believability and personality) that do not help to clarify the previous ones but, conversely, refer to them implicitly. The chapter by Hayes-Roth et al. (Acting in Character) presents work with a rather impressive impact on dramatisation. However, the treatments of improvisation and role-reversal (around which the chapter revolves) would greatly profit from existing research on autonomous agents. Similarly, no attention is paid by the authors of this chapter to the agents' autonomous adoption of complementary roles such as "master" and "servant", nor to the agents' representation of these roles, nor finally to the generation of internal control mechanisms as an effect of role-adoption, such as obligation and commitment (Jennings 1995), responsibility (Jennings 1992) and the possible conditions for interrupting or abandoning a role (Kinny and Georgeff 1994). Without modelling these internal processes, improvisation and role-reversal are not indicators of autonomy. The dramatic effect may even be convincing, but the model underneath is scientifically irrelevant.

The considerations made with regard to improvisation may be extended to believability, personality and character. How can one render agents more believable without addressing the issue of their autonomy? How can one address the problem of personality without investigating the building blocks of personality, namely goals, motivations, dispositions and preferences? Tackling the issue of personality is premature, especially if it is defined as the behavioural correlate of motivation (Moffat). In Moffat's view, personality is behaviour that must be explained in terms of deep motivations. Rather than investigating the difficult (non-observable) side of the question, it appears more convenient to concentrate on the more tractable (behavioural) one. However, personality is not the behavioural correlate of motivation: a personality is characterised not only by consistent behaviours, but also by consistent actions, decisions, preferences and interpretations. While these aspects of personality remain in need of a deeper explanation, they may be as poorly observable as the factors explaining them. In some sense, Moffat seems to contradict himself. He begins by saying he wants to design artificial personalities in order to investigate natural ones. But later on in his chapter (page 130), we are told that personality is "... behaviour to be explained; what explains behaviour is psychological structure ...". Therefore, it is unclear whether the author wants to explain personality or simply describe it. Finally, one of the features that Moffat explicitly demands from a theory of personality is that it be biologically grounded. So, why are certain personalities, and prior to them, certain motivations and dispositions, needed? Thus formulated, the problem again suggests ideal-type (non-idiosyncratic) constructs. This approach is echoed by Sloman's concept of the mapping of designs onto niches. But his is perhaps too big a jump. Why don't we simply ask what the uses of given strategies, styles and types of motivations or preferences are? Whether or not they allow us to describe individuals, they facilitate our understanding of the "... different regions of design space ..." (Sloman, page 168).

The last example of a premature research direction is offered by Goldberg's contribution. Here, agent believability is said to be enhanced by allowing simultaneous involvement in different actions. How can we provide a useful answer to the question of which actions should be combined without resorting to a general theory of the links among actions? Obviously, the application of script-like representations of plans may provide a partial answer to this question, telling us which actions can be accomplished simultaneously. Perhaps this will increase, to some extent, the believability of the actor implemented. However, it does not tell us which actions cannot be done simultaneously and why. Moreover, such scripts cannot be used to implement agents which create and execute new plans in a believable way. Thus, defining possible differences of personality requires a prior general theory of action compatibility.

Poor Effective Interdisciplinarity

Despite the evident links between synthetic actor research and other fields in the sciences of the artificial, cross-fertilisation is rather poor. While reading the book under review, the reader who is somewhat familiar with Artificial Intelligence (and particularly Agent and Multi-agent theory) will be struck by the scarcity of references to the huge amount of literature which has already been produced dealing with phenomena like commitment, co-ordination and especially autonomous action.

The notion of an autonomous agent is not consistent between the various contributions included in the volume. Sometimes, an autonomous actor is one which does not require the continued intervention of a user. Alternatively, it may be defined as a system endowed with internal motivations. But the relationship between these two definitions is never inquired into. Furthermore, the notion of integrated levels of control in autonomous architectures (Blumberg and Galyean) is handled in a rather unsatisfactory way. What these authors propose is a partially autonomous agent, in which external and internal controls are combined but not integrated, since each module can be accessed directly from the outside. In what sense can we say that such an actor is autonomous? The crucial question of how it is possible for an autonomous agent to accept external requests autonomously, and thus react (or not react) to external pressures, is not addressed here.

Analogously, in Goldberg's contribution (previously described), what is the use of external criteria for decision-making? Either the internal criteria or the external ones seem to be superfluous. Furthermore, how are external criteria generated? In terms of autonomy, what is gained by the internal criteria is lost by the external ones.

Autonomy is not the only contribution of agent theory that has been overlooked. Other relevant formal and computational work is also being missed. For example, why are references to research on commitment, and especially work by Georgeff and his collaborators, omitted? Moffat's attempt to justify research on artificial personalities in terms of different task-dependent requirements of artificial agents is not fully satisfactory. What is needed is autonomous, flexible agents which at the same time exhibit robust and responsive performance. This issue has received considerable attention from Artificial Intelligence and Multi Agent System research. (To give only one example, see Kinny and Georgeff 1994.) Technical solutions for "simulating" artificial personality are essentially ineffectual in the absence of the kinds of theories developed in the works just mentioned. A conscientious personality cannot fulfil its duties if it has no awareness of what its duties are, no representation of the consequences of their violation, and no sense of what would happen if it did not commit to them.

* Conclusions

The field of synthetic actors is destined to produce impressive and profitable applications. Whether it will also achieve important scientific results and contribute to an increased understanding of the agent "design space" is an interesting question. The answer largely depends on increased interaction between animation and other fields in the sciences of the artificial, even if this interaction implies ignoring or delaying responses to the requests posed by the world of applications. From its inception, Artificial Intelligence has repeatedly had to pay the longer-term price of making over-confident promises.

* References

GEORGEFF M. P. 1991. Situated Reasoning and Rational Behaviour. Technical Report 21, Australian Artificial Intelligence Institute, Melbourne, Australia, April.

JENNINGS N. R. 1992. On Being Responsible. In Y. Demazeau and E. Werner, editors, Decentralised Artificial Intelligence 3, Elsevier, Amsterdam.

JENNINGS N. R. 1995. Commitment and Conventions: The Foundation of Coordination in Multi-agent Systems. The Knowledge Engineering Review, 8:223-250.

KINNY D. and M. Georgeff 1994. Commitment and Effectiveness of Situated Agents. Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), Sydney, Morgan Kaufmann, San Francisco, CA, 82-88.

O'HARE G. and N. R. Jennings, editors, 1996. Foundations of Distributed Artificial Intelligence. Wiley Interscience, New York, NY.

RUSSELL S. and P. Norvig 1995. Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ.

WOOLDRIDGE M. J. and N. R. Jennings, editors, 1995. Intelligent Agents, Lecture Notes in Artificial Intelligence 890, Springer-Verlag, Berlin.

WOOLDRIDGE M. J. and N. R. Jennings 1998. Formalizing the Cooperative Problem Solving Process. In M. N. Huhns and M. P. Singh, editors, Readings in Agents, Morgan Kaufmann, San Francisco, CA.

WOOLDRIDGE M. J., J. P. Muller and M. Tambe, editors, 1996. Intelligent Agents II, Lecture Notes in Artificial Intelligence 1037, Springer-Verlag, Berlin.

© Copyright Journal of Artificial Societies and Social Simulation, 1998