IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 31(5), September, Special Issue on Socially Intelligent Agents

Edited by Kerstin Dautenhahn
Piscataway, NJ: IEEE Press
2001
ISSN 1083-4427

Reviewed by
Bruce Edmonds
Centre for Policy Modelling, Manchester Metropolitan University, UK.

Artificial Intelligence has been (and to a large extent still is) dominated by the individualist perspective. That is to say, individual cognition has been the focus of research in AI, with social interaction arising (if at all) simply as a result of the interaction of such individuals. In practice this has meant that agents (or, more generally, programs) have been developed in isolation and only later deployed in order to interact. What social intelligence these agents have comes solely from the intuitions and experience of their programmers.

This special issue is one of a number of events and publications indicating that this might be beginning to change; in other words, that the importance of society and social interaction to cognition (and more generally to adaptive behaviour) is being recognised. One of the aspects being re-evaluated is the importance of social signals to the process of interaction. In this issue the focus is on human social signals. In order to obtain full and meaningful interaction between computers and humans, it is important that the humans can attribute human traits to the computers. This overcomes one of the barriers to human-computer interaction and allows the interaction to go beyond the use of computers simply as tools.

In some ways, this is particularly easy to do - humans seem very able, even predisposed, to anthropomorphise phenomena. Thus it is eminently feasible to introduce computers into the human half of the interactive loop - making the computer's actions humanly interpretable allows the computer to use the resulting human reaction as input. If one could complete the loop so that humans and computers interact in the same way as humans with humans then one would have true social interaction.

Thus many of the papers in this issue concentrate on how to enable robots and programs to trigger anthropomorphism in the humans that are interacting with them. The aim is not to fool the humans (as in the Turing test) but to enable social interaction between humans and robots. Thus it has become important to find out how we can facilitate the attribution of human emotions, intentions, plans and so on to computers, programs and robots.

Some researchers go further than this. They claim that facilitating anthropomorphism is not only necessary for social interaction but also sufficient. The first paper in this issue, by Persson, Laaksolahti and Lönnqvist ("Understanding Socially Intelligent Agents - A Multi-Layered Phenomenon", pp. 349-360), makes this claim explicitly. They put forward what they call a constructivist thesis: namely that we do not have to develop 'actual' social intelligence in computers, but that the consistent appearance of social intelligence is sufficient. They suggest that a multi-layered approach to engineering this appearance, based on psychology and folk psychology, could provide it.

This seems to me to make two fundamental errors. Firstly, that enough general psychology is accessible to exterior study and interior reflection to make the deliberate design and construction of such an agent possible. Secondly, that it is possible (in general) to be ultimately convincing to humans without implementing a 'deeper' social intelligence. I will consider these points in turn.

There are (at least) two main difficulties with attempting to apply a design stance to social intelligence. Firstly, much human social activity is not accessible to introspection because much of our reaction is "subcognitive" (French 1990), that is, unconsciously learned behaviour. The second difficulty is that much of our social behaviour seems to be local and contingent. That is, we have learned our social reactions with and for those immediately around us; in other words, our developed social abilities are highly context-dependent. This fits in with an evolutionary story that attributes the advantage our intelligence gives us to the ability to develop and pass on a culture, allowing different groups of humans to exploit different ecological niches. This means that such social behaviour is not easily generalisable, which makes a general design stance infeasible: social behaviour would have to be crafted for each and every social context. Despite these potential difficulties, the engineering design stance pervades many of the papers in this issue.

The reason why a "shallow" approach to sociality is likely to be inadequate (or, to put it another way, forced to be as deep as the alternative) is the reflectivity of human relations. The "like me" test (Dautenhahn 1997) is easy to pass during a single brief encounter but extremely difficult to pass over extended periods that include interaction with the rest of the world, critically including the social world (Edmonds 2000). We expect not only the immediate signals to be "like me", but also: their reaction to our social signals; their anticipation of our reaction to their social signals; their social signals made in anticipation of our reaction to their social signals; and so on. In short, part of human social intelligence seems to be a deep empathy (and expectation of deep empathy), so that we can use our own "internal" reactions as a useful model of others' reactions and vice versa.

I and others (Burns and Engdahl 1998) have argued that the human self is fundamentally socially developed. My take on this can be summarised as follows: we do not have direct conscious access to our basic decision-making processes; it is therefore to our advantage to learn to model these processes so that we can predict our own decisions; we develop this self-model (at least partly) via a social process, namely by using our models of others' actions as a model of our own and vice versa; and much social communication depends on this fact.

But in any case, why would evolution (or social processes) result in redundant internal social processing? In the long run, it is unlikely that we would acquire and maintain any process that did not confer an evolutionary advantage. On the other hand, it would seem (at face value) to be of great advantage to be able to predict (or understand) others' social behaviour. (It would also be of considerable advantage to us if other people could predict our behaviour, as when we want people to know that certain things will make us angry.) Between the twin pressures of redundancy and deep empathy lies any 'gap' between a shallow approach to social understanding and a deep one. In fact, the existence of these pressures suggests that the two approaches will be nearly identical or, to put it more exactly, that the gap will be no greater than that between how we make decisions and our own understanding of our decision-making mechanisms.

For the above reasons I think that the first paper in this special issue (by Persson et al.) is mistaken. They argue for the adequacy of the shallow approach, saying: "SIAs need to model the folk-theory reasoning, not the real thing" (original italics). They survey many results on the layers of social interaction and intelligence, implying (without proof or even demonstration) that this will be adequate to the task. I think that they mistake the ease with which you can get people to anthropomorphise some phenomena for what might be necessary for full social interaction. This mistake leads them to jump from the plausible premise that much of our social signalling is socially constructed to the over-optimistic assumption that something much more tractable than our total cognitive capacities will be sufficient to implement such signalling. It is common folk knowledge that the best way to fool someone into thinking you are innocent is to believe it yourself - no shallow tactic for convincing others is as effective.

In their paper, Hogg and Jennings ("Socially Intelligent Reasoning for Autonomous Agents", pp. 381-393) squarely take the engineering approach allied with an individualist perspective. Their opening sentences read: "Socially intelligent agents are autonomous problem solvers that have to achieve their objectives by interacting with other similarly autonomous entities ... When designing and building such agents, a major concern is, therefore, with the decision-making apparatus that they should use ..." (p. 381). To achieve their aim they resurrect the idea of joint (or social) utility and implement it within a system of interacting agents with a common goal. To their credit they do actually build the system and test it against a problem which, though artificial, is not trivial. They find that the ability to swap strategies depending on the context, together with the ability to learn from the past, improves the performance of the system taken as a whole. In other words, even in the limited social situation they consider, the designer cannot anticipate which strategy is better for each agent in each situation, but needs to delegate at least some of this to the agent so it can learn and decide for itself. Although this may seem obvious to the reader, the inclusion of learning as an essential part of autonomy has long been resisted (or simply passed over) by much of the agent community.
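To give a concrete flavour of what such a decision-making apparatus involves, here is a minimal sketch (my own illustration, not the authors' code; all names and parameters are hypothetical) of an agent that scores actions by a weighted mix of individual and joint utility, and switches strategies according to the average payoff each has produced so far:

    import random

    # Hypothetical sketch of joint-utility decision making with strategy
    # switching; not the mechanism implemented by Hogg and Jennings.
    STRATEGIES = {
        "selfish": 0.0,    # weight given to the joint (group) utility
        "balanced": 0.5,
        "social": 1.0,
    }

    class SocialAgent:
        def __init__(self):
            # Payoffs obtained so far under each strategy.
            self.history = {name: [] for name in STRATEGIES}
            self.strategy = "balanced"

        def choose_action(self, actions, own_utility, joint_utility):
            """Pick the action maximising a weighted mix of own and joint utility."""
            w = STRATEGIES[self.strategy]
            return max(actions,
                       key=lambda a: (1 - w) * own_utility(a) + w * joint_utility(a))

        def update(self, payoff):
            """Record the payoff, then adopt the strategy with the best average so far."""
            self.history[self.strategy].append(payoff)
            mean = lambda xs: sum(xs) / len(xs) if xs else random.random()
            self.strategy = max(STRATEGIES, key=lambda s: mean(self.history[s]))

The point of the sketch is only that the best weighting cannot be fixed at design time; the agent has to discover it from its own history of payoffs.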

Several of the papers look into the practicalities of usefully exploiting human anthropomorphism in particular situations. These are important not only for their practical conclusions but also for the insight they provide into the properties of this facility. Human social abilities are immensely complex and context-dependent, which is why such applied work can often reveal far more about social intelligence than armchair theorising that relies on judgements of plausibility. Thus Paiva, Machado and Prada ("The Child Behind the Character", pp. 361-368) describe an interactive story-creation tool which allows manipulation of characters, actions and expressions. They find that prompting the designer-children to make decisions about the justification for the characters' behaviour, in terms of the story and the characters' emotions, gives them more control.

Helen McBreen and Mervyn Jack ("Evaluating Humanoid Synthetic Agents in E-Retail Applications", pp. 394-405) evaluated a range of different styles of synthetic agent within the interfaces of some e-retail applications. These included: video, a 3-D talking head, a photo-realistic image with simple facial expressions, a still image and a disembodied voice. A series of cartoon-like agents were also tested at a range of levels of realism (e.g. with or without a body, 2-D or 3-D). Human subjects used the interfaces with the synthetic agent and then filled in a questionnaire to evaluate their responses. The two applications were a home furnishings service and a personalised CD service. Overall, the more realistic and consistent the agent, the better it was liked, although a small minority sometimes found the agents annoying. An aspect I found interesting was that the customers seemed to evaluate the agents somewhat as a whole. So, for example, the appearance of the agent affected the perception of its voice, and if the agent inhabited a scene it was expected to be in proportion to it.

Breazeal et al. ("Active Vision for Sociable Robots", pp. 443-453) chart another part of the Kismet project at MIT. Behind this project is a strong paradigm: that the physical and social embedding of a robot will enable the 'bootstrapping' of meaningful sociality through extensive interaction with humans. The project therefore takes seriously the particularities of human perception and action, since these might enable real robot-human interaction. This article concentrates on the active vision component, documenting how the team have designed a system that implements human-like focus and attention. This matters particularly because of the social messages that gaze-indicated attention sends to others.
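As a rough illustration only (this is my own sketch, not the Kismet implementation, and every name in it is an assumption), an attention system of this general kind might combine bottom-up feature salience with top-down weights set by the robot's current motivational state, plus habituation so that gaze does not lock onto a single stimulus:

    # Hypothetical sketch of salience-driven attention selection; not
    # Breazeal et al.'s implementation.
    def attention_target(stimuli, motive_weights, habituation):
        """Return the stimulus with the highest weighted, habituation-discounted salience.

        stimuli        : stimulus id -> feature saliences, e.g. {"face": 0.9, "motion": 0.1}
        motive_weights : feature -> top-down gain set by the current motivational state
        habituation    : stimulus id -> discount in [0, 1] that grows with attention time
        """
        def score(sid):
            raw = sum(motive_weights.get(f, 1.0) * s for f, s in stimuli[sid].items())
            return raw * (1.0 - habituation.get(sid, 0.0))
        return max(stimuli, key=score)

    stimuli = {"face": {"face": 0.9, "motion": 0.1},
               "toy":  {"colour": 0.8, "motion": 0.6}}
    print(attention_target(stimuli, {"face": 2.0}, {}))             # -> "face"
    print(attention_target(stimuli, {"face": 2.0}, {"face": 0.3}))  # -> "toy" (habituated)

The social significance follows directly: whatever such a mechanism selects is also what the robot's eyes appear to attend to, and that is read by the human partner as a signal.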

Two of the papers report on mechanisms for learning a sequence of actions by demonstration and observation. The first, by Nicolescu and Matarić ("Learning and Interacting in Human-Robot Domains", pp. 419-430), uses a high-level representation of the robot's capabilities to learn sequences of actions, expressed in terms of the perceptible objects in its domain, after a single demonstration. They then use this to experiment with getting the robot to indicate problems to a human who is present, using only its movements. The second paper, by Andry et al. ("Learning and Communication via Imitation: An Autonomous Robot Perspective", pp. 431-442), proposes a mechanism which uses pattern (and hence novelty) detection to implement a fairly low-level imitation mechanism. This does not require any feedback from a teacher as to how well it is doing; rather it uses the pattern learning of a neural network to predict future input and then uses errors in this prediction as a signal to correct the learnt sequence. It is claimed that this sort of process is what the human hippocampus might be doing. This is a very interesting proposal and the authors have done some simulations of the mechanism to test it, but comprehensive results are not described, so one has to take the success of the method on trust.
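To make the general idea of prediction error as a teaching signal concrete, here is a deliberately crude sketch under strong simplifying assumptions (the real mechanism in Andry et al. uses a neural network rather than a lookup table, and all names here are mine):

    # Hypothetical sketch of prediction-error-driven sequence learning by
    # observation; not the neural architecture proposed by Andry et al.
    def imitate(observed_sequence, model=None):
        """Learn a next-step model of a demonstrated sequence from prediction errors.

        The model maps each observation to the observation the learner expects next.
        A mismatch (prediction error, i.e. novelty) is the only correction signal,
        so no explicit feedback from a teacher is needed.
        """
        model = dict(model or {})
        for current, following in zip(observed_sequence, observed_sequence[1:]):
            if model.get(current) != following:   # novelty detected
                model[current] = following        # correct the learnt transition
        return model

    # After a single demonstration the learner can replay the sequence.
    demo = ["reach", "grasp", "lift", "place"]
    model = imitate(demo)
    state, replay = demo[0], [demo[0]]
    while state in model:
        state = model[state]
        replay.append(state)
    print(replay)   # ['reach', 'grasp', 'lift', 'place']

Note the design choice this illustrates: the teacher never scores the learner; the only 'error' available is the learner's own failed prediction of what it observes next.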

Other papers in this issue include:

In conclusion, this issue provides a good cross-section of the sort of work being done on socially intelligent agents. This work is only just past its inception, but it does illustrate how the practicalities of getting agents and robots to interact successfully with humans and each other are highlighting the importance of their social embedding. It is an area where one has strong intuitions to guide research. However, which of these intuitions turn out to be helpful and which misleading will only become clear as implemented systems are tested in non-trivial tasks and environments. This is a task which abstract theorising will not help with, however elaborate it is.


* References

BURNS T. R. and E. Engdahl 1998. The social construction of consciousness part 2: Individual selves, self-awareness, and reflectivity. Journal of Consciousness Studies, 5(2):166-184.

DAUTENHAHN K. 1997. I could be you: The phenomenological dimension of social understanding. Cybernetics and Systems, 28:417-453.

EDMONDS B. 2000. The constructability of Artificial Intelligence (as defined by the Turing Test). Journal of Logic, Language and Information, 9:419-424.

FRENCH R. M. 1990. Subcognition and the limits of the Turing Test. Mind, 99:53-65.

© Copyright Journal of Artificial Societies and Social Simulation, 2002