
Jim Doran (1998) 'Simulating Collective Misbelief'

Journal of Artificial Societies and Social Simulation vol. 1, no. 1, <https://www.jasss.org/1/1/3.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 29-Oct-1997      Accepted: 13-Dec-1997      Published: 3-Jan-1998

----

* Abstract

It appears that what the agents in a multiple agent system believe is typically partial, often wrong and often inconsistent, but that this may not be damaging to the system as a whole. Beliefs which are demonstrably wrong I call misbeliefs. Experiments are reported which have been designed to investigate the phenomenon of collective misbelief in artificial societies, and it is suggested that their results help us to understand important human social phenomena, notably ideologies.


Keywords:
Multiagent system, collective belief, ideology

* Introduction

1.1
Using computers to simulate naturally occurring societies is a rapidly growing area of research (Gilbert and Doran, 1994; Gilbert and Conte, 1995). Much of the ongoing work is set at the level of societies in which each individual is relatively low on the cognitive scale, but some aims to gain insight into human society, addressing such issues as social action, cognized models, group formation, planned cooperation and "macro-level emergence". The techniques of distributed AI (O'Hare and Jennings,1996) may be used to support this latter work.

1.2
Some explanation and justification of methodology is appropriate. By working in the computational domain we are able to study multiple agent systems independently of any particular modeling interpretation. We can establish rigorously and objectively what consequences, including non-obvious consequences, flow from what assumptions. We can therefore hope to develop an abstract theory of multiple agent systems and then to transfer its insights to human social systems, without an a priori commitment to existing particular social theory. Of course, at a certain level assumptions must be built into whatever systems we create and experiment with, but the assumptions may be relatively low level, and their consequences (including emergent properties) may be discovered, not guessed.

* Agents And Artificial Societies

2.1
Agents may be characterized as computational mechanisms situated in an environment that repeatedly select and perform actions in the light of their current input from the environment (perception) and their current internal state. In general, the actions that agents may perform include the sending of messages to other agents.

2.2
A variety of different agent "architectures" have been designed in recent years and their properties partially explored (see, for example, Wooldridge and Jennings, 1995; Wooldridge, Muller and Tambe, 1996). One simple type of agent architecture, often called a reflex agent (Russell and Norvig, 1995), comprises the following main components: perceptual processes, which deposit symbolic tokens in a working memory; the working memory itself; a set of condition-action rules, whose conditions are matched against the tokens in the working memory; and effector processes, which carry out the actions of whichever rules fire. Thus the essential functioning of the agent is that the results of perception are deposited as tokens in the working memory, and matching rules are "fired", with the result that actions are performed by the agent in its environment. The heart of such an agent is thus the set of condition-action rules which it contains.
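
A minimal sketch of this perceive-match-act cycle is given below. It is written in C (as is the SCENARIO-3 testbed described later) but is purely illustrative: the token names, the rule set and the function names are assumptions of mine, not the testbed's code.

    /* Illustrative sketch of a reflex agent: NOT the SCENARIO-3 source.
       Perception deposits tokens in a working memory; any rule whose
       condition token is present "fires" and performs its action. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_TOKENS 16

    typedef struct {
        const char *condition;      /* token that must be in working memory */
        void (*action)(void);       /* effector routine to run when it is   */
    } Rule;

    static void flee(void)    { printf("action: flee\n"); }
    static void harvest(void) { printf("action: harvest\n"); }

    static const char *working_memory[MAX_TOKENS]; /* tokens from perception */
    static int n_tokens = 0;

    static void perceive(const char *token)        /* deposit a token */
    {
        if (n_tokens < MAX_TOKENS) working_memory[n_tokens++] = token;
    }

    static void run_rules(const Rule *rules, int n_rules)
    {
        for (int r = 0; r < n_rules; r++)
            for (int t = 0; t < n_tokens; t++)
                if (strcmp(rules[r].condition, working_memory[t]) == 0)
                    rules[r].action();  /* condition matched: rule fires */
    }

    int main(void)
    {
        const Rule rules[] = {
            { "predator-seen", flee    },
            { "resource-here", harvest },
        };
        perceive("resource-here");      /* one cycle of the perceive-act loop */
        run_rules(rules, 2);
        return 0;
    }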

2.3
A standard extension to this design is to add a semi-permanent body of (symbolically coded) information to the working memory, an agent memory, and to include rules which update the contents of both the working memory and, from time to time, the agent memory. The agent is then, in effect, able to maintain and update some sort of representation of and beliefs about its environment. This type of agent, which may be called an extended-agent, will be that at issue for the remainder of this paper.
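
The extension to an extended-agent can be sketched in the same illustrative spirit: a semi-permanent store of coded propositions which rules may add to, delete from and consult. Again the names, and the flat string encoding of beliefs, are assumptions made only for the example.

    /* Sketch (not the SCENARIO-3 code) of the "extended-agent" idea:
       alongside the transient working memory there is a semi-permanent
       agent memory of coded propositions (beliefs), which rules may
       consult and update from time to time. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_BELIEFS 32

    typedef struct {
        char beliefs[MAX_BELIEFS][64];   /* semi-permanent agent memory */
        int  n_beliefs;
    } AgentMemory;

    /* Add a proposition unless an identical one is already believed. */
    static void assert_belief(AgentMemory *m, const char *p)
    {
        for (int i = 0; i < m->n_beliefs; i++)
            if (strcmp(m->beliefs[i], p) == 0) return;
        if (m->n_beliefs < MAX_BELIEFS)
            strcpy(m->beliefs[m->n_beliefs++], p);
    }

    /* Retract a proposition, e.g. when perception contradicts it. */
    static void retract_belief(AgentMemory *m, const char *p)
    {
        for (int i = 0; i < m->n_beliefs; i++)
            if (strcmp(m->beliefs[i], p) == 0) {
                m->beliefs[i][0] = '\0';     /* mark the slot empty */
                return;
            }
    }

    int main(void)
    {
        AgentMemory mem = { .n_beliefs = 0 };
        assert_belief(&mem, "resource-at 0.2 0.7");   /* from perception   */
        assert_belief(&mem, "agent-at 0.5 0.5");      /* possibly mistaken */
        retract_belief(&mem, "agent-at 0.5 0.5");     /* later revision    */
        printf("%d belief slots used\n", mem.n_beliefs);
        return 0;
    }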

2.4
A computational multiple agent system is then a set of agents (here extended-agents) which share an environment and which interact and possibly pass messages in some form one to another. The agents in a computational multiple agent system are sometimes referred to as an artificial society (Gilbert and Conte, 1995; Epstein and Axtell, 1996).

* Beliefs And Collective Beliefs

3.1
I take a simple but sufficient view of what it means for an extended-agent to believe p. This is that p may be "decoded" from the agent's memory -- which implies that there are systematic processes by which suitably coded propositions are added to and deleted from and manipulated in the agent's memory and which, as viewed by an outside observer, are consistent with the working of its perceptual and effector processes. I also limit consideration to beliefs which are essentially descriptive, and ignore the matter of degrees of belief. For a much more developed notion of belief see Mack (1994).

3.2
By a collective belief p of a group of agents, I mean merely that the great majority of agents in the group in question have the belief p. This definition is about the simplest possible. There is no requirement, for example, for agents to believe that others hold the belief p, still less any notion of mutual belief or belief associated with a role. The collective beliefs (or collective belief system) of a group of agents are then those beliefs that occur widely amongst the agents. Again, this simple concept of collective belief, imprecise though it is, is sufficient for our purposes. For more developed theories of group belief see Tuomela (1992) and Wooldridge and Jennings (1998).
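
Operationally this definition amounts to no more than a majority count, as the fragment below illustrates; the 90% threshold standing in for "the great majority" is my choice, not a figure taken from the definition.

    /* Minimal sketch of the paper's definition: a belief p is collective
       in a group if the great majority of its agents hold p.  The 0.9
       threshold is an illustrative choice. */
    #include <stdio.h>

    static int is_collective(int holders, int group_size, double majority)
    {
        return group_size > 0 && (double)holders / group_size >= majority;
    }

    int main(void)
    {
        printf("%d\n", is_collective(27, 30, 0.9));  /* 1: held by 90%+    */
        printf("%d\n", is_collective(16, 30, 0.9));  /* 0: a bare majority */
        return 0;
    }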

The Impact Of Belief Upon Behaviour

3.3
An extended-agent's behaviour is determined by a combination of its beliefs and its rules. At first sight, it may seem somewhat counter-intuitive that to change an agent's behaviour one may change either its rules or its beliefs, but this is clearly so. If the beliefs in an extended-agent's memory are changed then, assuming a fixed rule set, the actions it performs will change. In human terms, I will behave differently when I awake in the morning if I believe it is Sunday rather than Monday. More dramatically you can perhaps make me "commit suicide" if you can make me believe that a glass of nitric acid is a glass of water -- or that the dead go to a "better world".

Misbelief

3.4
The examples of the foregoing section raise the issue of the accuracy of beliefs. When an agent believes something which is (from the point of view of an external observer or experimenter) demonstrably NOT the case in its environment, I speak of the agent's misbelief. There is an analogous notion of collective misbelief. Of course, an agent may have beliefs whose soundness is difficult to determine. An agent may, for example, believe that there are two types of agent in its world, those that should be attacked if possible and those that should not. How does one assess the soundness of such a belief?

3.5
In general, sources of limited or mistaken belief in a multiple agent system may easily be identified. They include: the limited perceptual range of agents, so that each has direct knowledge of only part of its environment; limited memory capacity, so that information once held may be lost; beliefs which become out of date as the environment changes; and inference from existing beliefs which is heuristic rather than fully reliable.

3.6
To the extent that these factors are at work within the agent community, partial, inconsistent and errorful belief may occur and may well be the norm. Further, to the extent that there is belief harmonization between spatially neighbouring agents, for example by agents communicating possibly erroneous beliefs one to another ("Have you heard, the Martians have landed?"), collective misbelief will also be the norm. Belief harmonization may be achieved in many ways. All that is required is some process which tends to bring the beliefs that neighbouring agents hold into agreement, irrespective of their soundness.

3.7
It is natural to suppose that collective misbelief must be detrimental to the society of agents in question. However, it is easy to see that this is not always the case. To take a human example, consider a group who (incorrectly) believe that a certain stream is holy so that water may not be taken from it. This may be to their detriment. But it may also be to their benefit if the stream is dangerously contaminated in some non-obvious way. Indeed it has been argued that collective misbelief may often be necessary for the survival of those who hold it. Thus the anthropologist Roy Rappaport, discussing the importance of the Tsembaga people's cognized model (which loosely corresponds to what is here called "collective belief") remarks: "It can thus be argued that the cognized model is not only not likely to conform in all respects to the real world... but that it must not" (Rappaport, 1984, page 239). He is referring particularly, but not exclusively, to the way in which socially beneficial ritual truces between different sections of the population are reinforced by mutual fear of the anger of their (dead) ancestors should a truce be violated.

* Experimental Scenarios

4.1
Now I discuss experimental work intended to explore the genesis and impact of collective (mis)belief. A typical experiment involves the "evolution" of (the beliefs of) a population of agents in a specified environment, so that the emergent properties of the population may be related to its initial properties and to the properties of the environment. The evolutionary process itself is akin to the techniques of genetic algorithms and genetic programming, and the emphasis on the evolution of beliefs is similar to Dawkins' (1989) notion of evolving populations of self-propagating ideas or memes (to which I shall return later).

4.2
In Doran (1994) I reported the properties of an artificial society set "on a line" which was simple enough to be given a full mathematical description, and whose simulation properties demonstrated the potential impact of a certain type of collective misbelief which may be called attribute error. Attribute error is present where agents do not hold false beliefs about the existence of entities in their world, or the types of those entities, but are in error about their more detailed properties -- such attributes as colour or dimension. In the experiments reported it was shown, in brief, that in certain circumstances the population of agents would typically evolve to a steady state in which their collective beliefs about their own precise locations on the line were inaccurate, but had the effect of enhancing individual agent survival and hence increasing average population size. Here I report experiments with more complex computational systems which are no longer capable of simple mathematical description, but which nevertheless enable the investigation into collective misbelief to be taken further.

The Scenario-3 Testbed

4.3
The experiments to be described used the software testbed SCENARIO-3 (written in the programming language C) which supports in simulation: a two-dimensional spatial environment; agents located and moving within it; self-renewing energy resources which agents may "harvest"; inter-agent communication; agent "killing", reproduction and death; and the maintenance by agents of representations (not necessarily accurate) of other agents and of resources.

The passing of time is simulated within the testbed as a sequence of "time units", within each of which events at different locations (e.g. the movements of agents) take place simultaneously.
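
One common way to realise such within-time-unit simultaneity in a sequential program is to let all agents decide on the basis of the same world state before any action is applied. The skeleton below illustrates that two-phase scheme; it is an assumption about how such a loop might be organised, not a description of the actual SCENARIO-3 scheduler.

    /* Sketch of a discrete time-unit loop (hypothetical names).  To make
       events within a time unit effectively simultaneous, all agents
       decide against the same world state, and only then are their
       actions applied. */
    #include <stdio.h>

    #define N_AGENTS 4

    typedef struct { double x, y; } Move;

    static Move decide(int agent, int t)       /* placeholder decision rule */
    {
        Move m = { 0.01 * agent, 0.01 * t };
        return m;
    }

    int main(void)
    {
        Move pending[N_AGENTS];
        for (int t = 0; t < 3; t++) {              /* three time units */
            for (int a = 0; a < N_AGENTS; a++)     /* phase 1: decide  */
                pending[a] = decide(a, t);
            for (int a = 0; a < N_AGENTS; a++)     /* phase 2: apply   */
                printf("t=%d agent=%d moves by (%.2f,%.2f)\n",
                       t, a, pending[a].x, pending[a].y);
        }
        return 0;
    }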

4.4
Words such as "harvesting" and "killing", used to aid intuitive understanding, denote relatively simple events within the testbed. For example, "harvesting" is said to occur when an agent located at a resource reduces the energy level of the resource to zero, and increments its own internal energy store by a corresponding amount. Energy is used up by an agent (reducing its energy store) as an agent moves around in the testbed. "Killing" involves two agents meeting and one possibly becoming "dead" (and deleted from the world) and the killer acquiring the killee's energy store. One agent maintains a representation of another when it holds information (not necessarily accurate) about the other and about some of the other's characteristics.
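
The energy bookkeeping implied by these definitions can be sketched as follows; the structure and function names are illustrative only, not the testbed's.

    /* Sketch of the energy accounting described above: harvesting zeroes
       a resource and credits the agent; "killing" transfers the victim's
       store to the killer and marks the victim dead; movement costs
       energy. */
    #include <stdio.h>

    typedef struct { double energy; int alive; } Agent;
    typedef struct { double energy; } Resource;

    static void harvest(Agent *a, Resource *r)
    {
        a->energy += r->energy;     /* agent gains the resource's energy   */
        r->energy  = 0.0;           /* resource is exhausted until renewal */
    }

    static void kill(Agent *killer, Agent *victim)
    {
        killer->energy += victim->energy;   /* killer acquires the store */
        victim->energy  = 0.0;
        victim->alive   = 0;                /* victim removed from world */
    }

    static void move_cost(Agent *a, double cost) { a->energy -= cost; }

    int main(void)
    {
        Agent a = { 5.0, 1 }, b = { 3.0, 1 };
        Resource r = { 2.0 };
        harvest(&a, &r);
        move_cost(&a, 0.5);
        kill(&a, &b);
        printf("a=%.1f b=%.1f b alive=%d\n", a.energy, b.energy, b.alive);
        return 0;
    }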

4.5
In the SCENARIO-3 testbed, many important parameters are under the control of the experimenter. These include the perceptual range of the agents, their rate of movement, the range over which agents can communicate one with another, and their memory capacity. And many potentially complex sequences of sub-events, for example those which in principle would determine each particular agent reproduction, are bypassed by an appeal to suitably chosen probability distributions.
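
Such experimenter-controlled settings might be gathered into a single parameter record, as in the hypothetical sketch below. The field names and the default values shown are mine, chosen only to illustrate the idea; they are not the testbed's actual parameters or the settings used in the experiments.

    /* Hypothetical record of experimenter-controlled parameters. */
    #include <stdio.h>

    typedef struct {
        double perception_range;     /* how far an agent can "see"           */
        double movement_rate;        /* distance moved per time unit         */
        double communication_range;  /* how far messages can be passed       */
        int    memory_capacity;      /* number of representations retained   */
        double p_reproduce;          /* per-time-unit reproduction chance    */
        double p_new_friendship;     /* chance a meeting yields a friendship */
        double p_agentify_resource;  /* chance of "seeing" a resource agent  */
    } ScenarioParams;

    int main(void)
    {
        ScenarioParams p = { 0.10, 0.02, 0.15, 20, 0.01, 0.05, 0.001 };
        printf("perception range: %.2f\n", p.perception_range);
        return 0;
    }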

Experiments I: The Impact Of Pseudo-Agents

4.6
The set of experiments I shall now describe focuses on awareness by agents of other agents and of self-renewing energy resources, and on action to harvest them. It particularly concerns the impact of agents (mis)believing in non-existent agents -- what may be called existence errors (in contrast to the attribute errors of the work briefly outlined earlier). The experimental scenario includes simple forms of agent death and reproduction.

The First Experimental Scenario in Detail
4.7
N agents and M resources are located on a plane. They are initially randomly distributed in the unit square with corner points (0,0), (0,1), (1,1), (1,0) in rectangular coordinates, but agents may leave this square as the experiment proceeds. Resources have fixed locations. Each agent has an energy level, which declines with time and which is restored by resource consumption. Resources renew periodically, which implies that there is a maximum carrying capacity for the environment.
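
A sketch of this set-up phase and of periodic resource renewal follows. The values of N and M, the initial energy levels and the renewal period are illustrative, not those used in the reported experiments.

    /* Sketch of initialisation and resource renewal (hypothetical names). */
    #include <stdio.h>
    #include <stdlib.h>

    #define N_AGENTS     50
    #define M_RESOURCES  20
    #define RENEW_PERIOD 10

    typedef struct { double x, y, energy; } Entity;

    static double unit_rand(void) { return (double)rand() / RAND_MAX; }

    /* Scatter agents and resources uniformly over the unit square. */
    static void init_world(Entity *agents, Entity *resources)
    {
        for (int i = 0; i < N_AGENTS; i++)
            agents[i] = (Entity){ unit_rand(), unit_rand(), 1.0 };
        for (int j = 0; j < M_RESOURCES; j++)
            resources[j] = (Entity){ unit_rand(), unit_rand(), 1.0 };
    }

    /* Restore every resource to its full energy level periodically. */
    static void renew_resources(Entity *resources, int t)
    {
        if (t % RENEW_PERIOD == 0)
            for (int j = 0; j < M_RESOURCES; j++)
                resources[j].energy = 1.0;
    }

    int main(void)
    {
        Entity agents[N_AGENTS], resources[M_RESOURCES];
        init_world(agents, resources);
        renew_resources(resources, RENEW_PERIOD);
        printf("first agent at (%.2f, %.2f)\n", agents[0].x, agents[0].y);
        return 0;
    }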

4.8
If an agent's energy level declines to zero, the agent dies and is removed. Further, if an agent enters a particular "fatal" zone (the circle centered at [0.5,0.5] and with radius 0.25) it immediately dies. In each time unit, there is a small probability that any given agent may reproduce (that is, create an exact copy of itself located at an adjacent point).
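
These mortality and reproduction rules are simple enough to state directly in code. The sketch below is illustrative only; the 0.01 reproduction probability and the offset of the child's location are placeholders, not experimental settings.

    /* Sketch of per-time-unit death and reproduction tests.  An agent
       dies when its energy reaches zero or when it enters the fatal
       circle of radius 0.25 centred at (0.5, 0.5); with small
       probability it copies itself to an adjacent point. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y, energy; int alive; } Agent;

    static int in_fatal_zone(const Agent *a)
    {
        double dx = a->x - 0.5, dy = a->y - 0.5;
        return sqrt(dx * dx + dy * dy) < 0.25;
    }

    static void update_survival(Agent *a)
    {
        if (a->energy <= 0.0 || in_fatal_zone(a))
            a->alive = 0;                   /* died: removed from the world */
    }

    static int maybe_reproduce(const Agent *a, Agent *child, double p)
    {
        if (!a->alive || (double)rand() / RAND_MAX >= p) return 0;
        *child = *a;                        /* exact copy of the parent     */
        child->x += 0.01;                   /* placed at an adjacent point  */
        return 1;
    }

    int main(void)
    {
        Agent a = { 0.55, 0.52, 3.0, 1 }, child;
        update_survival(&a);                /* inside the fatal zone */
        printf("alive=%d reproduced=%d\n",
               a.alive, maybe_reproduce(&a, &child, 0.01));
        return 0;
    }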

4.9
The actions taken by an agent in each time unit are, in order: to update its beliefs in the light of perception (and, where the option is enabled, of harmonization with its nearest neighbour); to choose and move towards a resource on the basis of those beliefs; and to harvest any resource at its current location.

4.10
In these experiments, agents are aware of the current locations of all resources and agents in the environment. Additionally, they may (mis)believe in the existence (essentially, the location) of a small number of agents that do not, in fact, exist in the environment. I refer to these as pseudo-agents. Pseudo-agents do not move and cannot, of course, consume resources. It follows that an agent may pass up a good opportunity to harvest a resource because of its belief in a pseudo-agent.

4.11
Initially, each agent independently (mis)believes in a small set of randomly generated pseudo-agents. Different agents believe in different pseudo-agents. Such beliefs are typically handed on from 'parent' to 'child', with a possibility of random variation when agents' energy levels are low. In addition, and importantly, the testbed may be set so that agents tend to harmonize their beliefs with their nearest spatial neighbours. Specifically, agents are scanned in turn in each time unit, and each takes over the beliefs (about the locations of pseudo-agents) of its nearest neighbour where they differ from its own. If the circumstances are right, therefore, a particular pattern of (mis)beliefs may spread through the population.
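
The harmonization step can be sketched as follows: agents are scanned in turn, and each copies its nearest neighbour's pseudo-agent beliefs over its own, as described above. The data layout and names are my assumptions, not the testbed's.

    /* Illustrative harmonization step, not the SCENARIO-3 source. */
    #include <stdio.h>

    #define N_AGENTS 3
    #define N_PSEUDO 2

    typedef struct {
        double x, y;
        double pseudo[N_PSEUDO][2];   /* believed pseudo-agent locations */
    } Agent;

    static double dist2(const Agent *a, const Agent *b)
    {
        double dx = a->x - b->x, dy = a->y - b->y;
        return dx * dx + dy * dy;
    }

    /* Scan agents in turn; each takes over its nearest neighbour's
       beliefs about pseudo-agent locations. */
    static void harmonize(Agent *agents, int n)
    {
        for (int i = 0; i < n; i++) {
            int nearest = -1;
            for (int j = 0; j < n; j++)
                if (j != i && (nearest < 0 ||
                               dist2(&agents[i], &agents[j]) <
                               dist2(&agents[i], &agents[nearest])))
                    nearest = j;
            if (nearest >= 0)
                for (int k = 0; k < N_PSEUDO; k++) {
                    agents[i].pseudo[k][0] = agents[nearest].pseudo[k][0];
                    agents[i].pseudo[k][1] = agents[nearest].pseudo[k][1];
                }
        }
    }

    int main(void)
    {
        Agent a[N_AGENTS] = {
            { 0.1, 0.1, {{0.5, 0.5}, {0.6, 0.5}} },
            { 0.2, 0.1, {{0.9, 0.9}, {0.8, 0.9}} },
            { 0.9, 0.9, {{0.3, 0.3}, {0.2, 0.3}} },
        };
        harmonize(a, N_AGENTS);
        printf("agent 0 now believes pseudo-agent 0 is at (%.1f,%.1f)\n",
               a[0].pseudo[0][0], a[0].pseudo[0][1]);
        return 0;
    }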

4.12
Each agent decides its movements by reference to all its current beliefs, including beliefs about pseudo-agents, not by reference merely to the actual state of the environment. The agents in no way distinguish between real and pseudo-agents when deciding their actions.
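
The movement rule itself is not spelt out above, so the fragment below should be read only as one plausible illustration of the key point: the decision procedure consults believed agent locations without asking whether they are real. The particular rule used here, preferring the nearest resource that no believed agent is closer to, is my assumption.

    /* Illustrative target selection: real and pseudo-agents are treated
       alike in the list of believed agent locations. */
    #include <stdio.h>

    #define N_RES      2
    #define N_BELIEVED 2      /* real and pseudo-agents, undistinguished */

    typedef struct { double x, y; } Point;

    static double d2(Point a, Point b)
    {
        double dx = a.x - b.x, dy = a.y - b.y;
        return dx * dx + dy * dy;
    }

    static int choose_resource(Point self, const Point *res, int n_res,
                               const Point *others, int n_others)
    {
        int best = -1;
        for (int r = 0; r < n_res; r++) {
            int contested = 0;
            for (int o = 0; o < n_others; o++)
                if (d2(others[o], res[r]) < d2(self, res[r]))
                    contested = 1;   /* a believed agent is nearer than us */
            if (!contested &&
                (best < 0 || d2(self, res[r]) < d2(self, res[best])))
                best = r;
        }
        return best;                 /* -1 if every resource seems contested */
    }

    int main(void)
    {
        Point self = { 0.1, 0.1 };
        Point res[N_RES] = { { 0.5, 0.5 }, { 0.2, 0.8 } };
        Point believed[N_BELIEVED] = { { 0.45, 0.45 }, { 0.9, 0.9 } };
        printf("chosen resource index: %d\n",
               choose_resource(self, res, N_RES, believed, N_BELIEVED));
        return 0;
    }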

Results and Discussion

4.13
In systematic experiments it has been found that the agent society enters semi-stable states in which agent survivability is demonstrably enhanced by collective misbelief in small sets of pseudo-agents. Typically, most of the agents in the society come to share (mis)beliefs in a small number of pseudo-agents in the fatal zone. This causes these agents to seek resources elsewhere, away from the fatal zone and therefore more safely. The effect is to maintain a substantially (about 50%) greater agent population than would otherwise be the case (as determined in control trials). In evolutionary terms, what happens is that groups of agents with particular collective beliefs "compete", and those groups with beliefs which help them to survive tend to increase their numbers at the expense of the remainder.

4.14
Simple though these experiments are, they do illustrate the socially beneficial impact of what is perhaps the most straightforward type of misbelief that can occur in a multi-agent system -- existence errors.

Experiments II: "Cults"

4.15
The second set of experiments involves a significantly different type of misbelief, category error, where an agent assigns an entity to the wrong type category and reacts to it accordingly. Thus in this particular experimental scenario, agents can come to believe that a non-agent, which does indeed exist, is an agent. There is also a more complex interaction between misbelief and behaviour.

4.16
These experiments again use the SCENARIO-3 testbed, but now exploit that feature of the testbed which enables agents, in certain circumstances, to "kill" one another. Also new are a particular type of agreement, friendship, and, crucially, the notion of a resource agent.

Friendship
4.17
An agent may decide that another agent, which it happens to meet, is its friend. The chance of such an outcome to a meeting is determined by the experimenter. If an agent X does come to "think of" an agent Y as a friend, then X passes information about resources (e.g. their locations) to Y whenever Y is within message passing range AND X never attempts to kill Y. Note that friendship is not necessarily symmetric. X may treat Y as a friend whilst Y does not so treat X.

4.18
There is another important aspect of friendship. An agent X will not attempt to kill an agent Y (even though X does not view Y as a friend) if it is the case that X and Y have a believed friend, say agent Z, in common. This latter aspect of friendship is crucial to what follows.
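
Both friendship constraints can be captured by a single predicate over a directed friendship relation, as in the following sketch; the names and the array encoding are illustrative.

    /* Sketch of the two friendship constraints: X never attacks Y if X
       regards Y as a friend, nor if X and Y have a believed friend in
       common.  The relation is directed, since friendship need not be
       symmetric. */
    #include <stdio.h>

    #define N 4                       /* agents 0..N-1 */

    static int is_friend[N][N];       /* is_friend[x][y]: X treats Y as friend */

    static int may_attack(int x, int y)
    {
        if (is_friend[x][y]) return 0;                /* Y is X's friend        */
        for (int z = 0; z < N; z++)
            if (is_friend[x][z] && is_friend[y][z])   /* common believed friend */
                return 0;
        return 1;
    }

    int main(void)
    {
        is_friend[0][2] = 1;          /* agents 0 and 1 both befriend agent 2 */
        is_friend[1][2] = 1;
        printf("0 may attack 1? %d\n", may_attack(0, 1));   /* 0 */
        printf("0 may attack 3? %d\n", may_attack(0, 3));   /* 1 */
        return 0;
    }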

Resource Agents and Cults
4.19
The testbed may be set so that from time to time an agent "agentifies" a resource, that is, wrongly comes to think of a resource as if it were an agent. We may call a pseudo-agent like this a resource agent. In its thinking an agent does not distinguish between resource agents and real agents, so an agent may even regard a resource agent as a friend and act towards it accordingly. Of course, messages sent to a resource agent go nowhere.

4.20
Once an agent forms a representation of a resource agent, that representation may be passed to other agents by inter-agent communication. It may also be passed from one agent to its offspring. If the circumstances are right, therefore, the representation may spread. When a set of agents all come to believe in the same resource agent, and each regards that resource agent as a friend, we may call the set a cult. The resource agent in question may be called the cult head. It follows immediately from the properties of the friendship relation given earlier that the members of a cult will not kill one another.
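
Given agents' beliefs, cult membership is then a straightforward test, as the fragment below illustrates with hypothetical names: an agent belongs to the cult of a given resource agent if it both holds a representation of that resource agent and regards it as a friend.

    /* Sketch of cult membership as defined above.  By the common-friend
       rule, the agents counted here will not kill one another. */
    #include <stdio.h>

    #define N_AGENTS 5

    typedef struct {
        int believes_in_head;   /* holds a representation of the resource agent */
        int head_is_friend;     /* regards that resource agent as a friend      */
    } AgentBelief;

    static int cult_size(const AgentBelief *a, int n)
    {
        int size = 0;
        for (int i = 0; i < n; i++)
            if (a[i].believes_in_head && a[i].head_is_friend)
                size++;
        return size;
    }

    int main(void)
    {
        AgentBelief pop[N_AGENTS] = { {1,1}, {1,1}, {1,0}, {0,0}, {1,1} };
        printf("cult size: %d of %d agents\n",
               cult_size(pop, N_AGENTS), N_AGENTS);
        return 0;
    }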

Results and Discussion
4.21
What is found experimentally is that even a very low-frequency possibility of agents coming to believe in resource agents can regularly lead to the formation of large and enduring cults and that, all other things being equal, killing in the society is then greatly reduced and the average population of the society over time increased.

4.22
To give the reader a feel for these experimental trials and what happens within them, there follows a summary account of key events in one typical trial (and see Figure 1). The agent and resource identifiers are exactly as they appear in the testbed and its output:

Resource number 11 was initially 'conceived' as a friendly agent, 110000000rrr, by agent number 248 in time unit 251.

At time 268 this resource agent had just one 'host' agent in a population of only 3 agents - not agent 248, which died before time 268, but agent 8000254. A cult around 110000000rrr built up thereafter (comprising descendants of 8000254), typically containing about 30 member agents, and lasted for hundreds of time units.

Initially the cult also formed around a 'dead' real agent, 20000267, which itself had lived for only one time unit but was also regarded as a friend by agent 8000254. However, memory of 20000267 was lost in time unit 287.

4.23
Note the appearance of a "dead" agent in the account. The potential of dead agents as cult heads was not anticipated, though obvious enough in hindsight. In fact, a dead agent is not as effective a cult head as a resource agent because the latter, unlike the former, can be "seen" and awareness of it thereby refreshed. What matters most for the formation and indefinite survival of the cult is the effective "immortality" of its head.

4.24
It should be made clear that these experimentally observed phenomena are not entirely straightforward to obtain. As was indicated earlier, the agent society and world embody many parameters (e.g. dimensions, the rate of renewal of resources, agent perception range, agent rate of movement, agent life span, agent memory span, the probability of a new friendship in any time unit, the probability of an agent seeing a resource as a resource agent in any time unit), and different combinations of settings for these parameters often lead to very different outcomes. A heuristic is that the chance of cults is greater the greater the contrast between the "immortality" of the potential cult head and the "mortality" of the potential cult members. Thus relatively short agent life and memory spans make the formation of cults more likely.

4.25
Current experiments are focussing on the dynamics of multiple competing cults each exploiting several resources, and with each agent possibly a member of several cults. These experiments raise such issues as competition between cults, and also the formation of hierarchies topped by a cult head.

Future Work

4.26
The two sets of experiments reported have demonstrated that it is possible to evolve collective misbelief systems in an artificial society which interact in such a way with prior rules governing agent behaviour that the society as a whole benefits. Some of the relevant conditions have been established. But much more experimental work needs to be done to approach a full understanding. Thus the connection between different patterns of misbelief and their impact upon different types of environment needs further examination. Is it the case that in more realistic scenarios the possible benefits of collective misbelief tend to be outweighed by disadvantages?

4.27
For any particular collective belief system, two different questions may be asked: how effective is it in context, and how may it come into existence? These two questions are not as well separated as they should be in the experiments reported here. And the answer to the latter question surely involves more than the ad hoc random processes incorporated in SCENARIO-3 (see next section).

4.28
Finally, what may be called biased belief systems merit investigation. These are belief systems whose effect is to favour one subset of the agents in the system against another. It might be, for example, that agents with certain characteristics are rewarded by the belief system in a way that would not be so were another belief system in place.

* A Computational Theory Of Ideologies?

5.1
The foregoing experiments and associated discussion related to agents in computer based artificial societies. But any discussion of collective belief and misbelief in a system of non-trivial agents, and its functional significance, prompts consideration of a connection with the "social construction of reality" and with "ideologies" in the sense that they are studied in social science. By "ideologies" social scientists typically mean systems of belief held in a society which enable and validate action (especially group motivated action) and/or erroneous belief systems which enable or support domination of one human group or class by another (e.g. Thompson, 1984). It seems reasonable to suppose that, insofar as ideologies may be regarded as observable features of human society and may be objectively studied (which is admittedly a matter of some controversy), studies of collective belief and misbelief in DAI agent systems must be one way to further our understanding of their dynamics.

5.2
Approaching ideologies from an AI and artificial societies standpoint, with a consequent emphasis on the connection between micro and macro social phenomena, makes it natural to suggest that the origins of ideological beliefs lie in the inherent cognitive ability of individuals to manipulate beliefs, by adaptation, generalisation or processes of "generate and test", in a manner which is heuristic rather than fully reliable. Manipulation of kin beliefs and concepts may be especially significant (compare Read, 1995), a possibility explored in the context of the EOS project by Mayers (1995). It also seems likely that a significant impact of ideologies is to coordinate the actions of people and to limit, in effect, their individual autonomy by creating "social norms", so enhancing survival. This is the view argued by Rappaport (1971). Conte and Castelfranchi (1995) and Ephrati et al. (1995) have recently reported computer based studies from a similar standpoint.

5.3
In all this work there is a close relationship with memes (Dawkins, 1989), that is, ideas or beliefs which propagate and evolve in the noosphere, the collective mental space of a population. Computer based experimental work by Bura (1994) and Hales (1997) has studied in detail the dynamics and functional impact of memes and meta-memes in simple artificial worlds. Further, the work reported here should also be compared with that of Reynolds (1995). The collective beliefs studied by Reynolds, however, are differently represented, and are primarily directed to effective problem solving.

* Conclusions

6.1
It is clear that belief and collective misbelief are inevitably at the heart of the behaviour of natural and artificial societies. Further, collective misbelief is not necessarily something to be avoided -- it may be functional if matched to the agents' environment. This observation is not in itself new. What is new and demonstrated here is our emerging ability to address particular questions and conjectures about such issues by way of precise and targeted computer-based experiments in which functional collective misbelief (involving attribute, existence and category errors) is evolved in an agent society. This has important implications for the study of ideologies in human societies.
----

* Acknowledgements

Much of the experimental work described here has used supercomputer facilities made available at the Centro di Calcolo Interuniversitario dell'Italia Nord-Orientale, under the EC CINECA ICARUS programme.

----

* References

BURA S (1994) MINIMEME: of Life and Death in the Noosphere. In From Animals to Animats 3, Proceedings of the Third International Conference on Simulation of Adaptive Behavior (eds D Cliff, P Husbands, J-A Meyer & S W Wilson) MIT Press: Cambridge Mass. & London, England. pp 479-486.

CONTE R and Castelfranchi C (1995) Understanding the Functioning of Norms in Social Groups through Simulation. In Artificial Societies (eds N Gilbert and R Conte). pp. 252-267. UCL Press: London.

DAWKINS R (1989) The Selfish Gene Oxford University Press: Oxford & New York (new edition).

DORAN J (1994) Modelling Collective Belief and Misbelief. In AI and Cognitive Science '94 (eds M Keane et al). Dublin University Press. pp 89-102.

EPHRATI E, Pollack M E and Ur S (1995) Deriving Multi-Agent Coordination through Filtering Strategies, Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), Montreal, August 20-25th 1995. pp. 679-685.

EPSTEIN J M and Axtell R L (1996) Growing Artificial Societies: Social Science from the Bottom Up. The Brookings Institution Press: Washington D.C. & The MIT Press: Cambridge, Mass.

GILBERT N and Doran J eds. (1994) Simulating Societies: the Computer Simulation of Social Phenomena UCL Press: London.

GILBERT N and Conte R eds (1995) Artificial Societies: the Computer Simulation of Social Life. UCL Press: London.

HALES D A P (1997) Modelling Meta-Memes. In Simulating Social Phenomena (eds R Conte, R Hegselman, P Terna). Lecture Notes in Economics and Mathematical Systems, 456. pp. 365-384. Springer: Berlin.

MACK D (1994) A New Formal Model of Belief. Proceedings of the Eleventh European Conference on AI (ECAI-94). John Wiley.

MAYERS S D (1995) Modelling the Emergence of Social Complexity using Distributed Artificial Intelligence, Masters dissertation, Department of Computer Science, University of Essex, Colchester, UK.

O'HARE G and Jennings N eds. (1996) Foundations of Distributed AI Wiley Inter-Science.

RAPPAPORT R A (1971) The Sacred in Human Evolution. Annual Review of Ecology and Systematics 2:23-44.

RAPPAPORT R A (1984) Pigs for the Ancestors (2nd edition) Yale University Press. New Haven and London.

READ D W (1995) Kin Based Demographic Simulation of Societal Processes. Journal of Artificial Societies and Social Simulation vol. 1, no. 1, <https://www.jasss.org/1/1/1.html>

REYNOLDS R G (1995). Simulating the Development of Social Complexity in the Valley of Oaxaca using Cultural Algorithms. Pre-Proceedings of Simulating Societies '95 Symposium (SIMSOC'95). Boca-Raton, Florida, September 1995.

RUSSELL S & Norvig P (1995) Artificial Intelligence: a Modern Approach. Prentice-Hall.

THOMPSON J B (1984) Studies in the Theory of Ideology. Polity Press: Cambridge.

TUOMELA R (1992) Group Beliefs. Synthese 91:285-318.

WOOLDRIDGE M J and Jennings N R eds (1995) Intelligent Agents LNAI 890, Springer: Berlin.

WOOLDRIDGE M J and Jennings N R (1998) Formalizing the Cooperative Problem Solving Process. Readings in Agents. (eds M N Huhns and M P Singh) pp. 430-440. Morgan Kaufmann: San Francisco, California.

WOOLDRIDGE M J, Muller J P, Tambe M eds (1996) Intelligent Agents II, LNAI 1037, Springer: Berlin.

----


© Copyright Journal of Artificial Societies and Social Simulation, 1998