Cristiano Castelfranchi (1998) 'Through the Minds of the Agents'
Journal of Artificial Societies and Social Simulation vol. 1, no. 1, <http://jasss.soc.surrey.ac.uk/1/1/5.html>
To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary
Received: 28-Dec-1997 Published: 3-Jan-1998
This is a contribution to the Forum section, which features debates, controversies and work in progress. There are responses to this article.
[in Game Theory] a positive social action ("Cooperation") is an action which leads the agent to sustain an intrinsically social cost. In other words, there is no cooperation without a penalty for the cooperative agent. This implicit conceptualisation of social action ties it to the paradigm of bargaining. It characterises sociality as a costly and dangerous move, in which agents punish and reward each other at the same time, in which there can be no benefit without costs, in which agents face each other, each trying to get away scot-free. It is a view of sociality as a necessary evil, where agents are fundamentally opponents (as game theorists indeed define them). Such a view is deeply ingrained in utilitarian philosophy. Actually, it is the only possible view of sociality if one takes a fundamentally utilitarian, that is to say formal in the sense previously defined, notion of interdependence. But it is by no means the only possible view of sociality. Why should a "cooperative" move be by definition less convenient than other moves? Why should it necessarily have an additional "social" cost? (Castelfranchi and Conte, 1997)
Functions are just effects of the agents' behavior that go beyond the intended effects (i.e. are not intended) and succeed in reproducing themselves because they reinforce the beliefs and the goals of the agents whose behavior caused them. Then:
- First, behavior is goal-directed and reason-based, i.e. it is intentional action. The agent bases its goal-adoption, its preferences and decisions, and its actions on its Beliefs (this is the definition of "cognitive agents").
- Second, there is some effect of those actions that is unknown or at least unintended by the agent.
- Third, there is circular causality: a feedback loop from those unintended effects that increments and reinforces the Beliefs or the Goals which generated those actions.
- Fourth, this "reinforcement" increases the probability that in similar circumstances (activating the same Beliefs and Goals) the agent will produce the same behavior, thus "reproducing" those effects.
- Fifth, at this point such effects are no longer "accidental" or unimportant: although remaining unintended, they are teleonomically produced (Conte and Castelfranchi, 1995, ch. 8): that behavior exists (also) thanks to its unintended effects; it was selected by those effects, and it is functional to them. Even if these effects could be negative for the goals or the interests of (some of) the involved agents, their behavior is "goal-oriented" to these effects.
Cooperation emerges from the reflex of the past in the mirror of the future. The action remains anticipatory and goal-directed, but is influenced by (possibly not understood) lessons of the past.
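To make this loop concrete, here is a minimal simulation sketch in Python. It is purely illustrative, not a model from the paper: the names and parameters (a single `belief_strength` value standing in for the agent's Beliefs, a fixed reinforcement `rate`) are assumptions chosen only to exhibit the five steps above.

```python
import random

class CognitiveAgent:
    """Illustrative agent: one belief whose strength drives one behavior."""

    def __init__(self, belief_strength=0.5):
        # (1) The belief on which the agent bases its goal-directed action.
        self.belief_strength = belief_strength

    def act(self):
        # (1) Belief-based, goal-directed behavior: the stronger the belief,
        # the more likely the agent performs the action.
        return random.random() < self.belief_strength

    def reinforce(self, rate=0.05):
        # (3)-(4) Circular causality: the unintended effect feeds back,
        # strengthening the belief and hence the future probability of acting.
        self.belief_strength = min(1.0, self.belief_strength + rate)


agent = CognitiveAgent()
for episode in range(30):
    if agent.act():
        # (2) The action produces an effect the agent neither knows nor intends.
        # (5) Over repeated episodes the behavior is "selected" by this effect:
        # it persists (also) thanks to what it unintentionally produces.
        agent.reinforce()
print(round(agent.belief_strength, 2))  # typically close to 1.0 by now
```

Note that the agent in the sketch never represents the effect itself; only the feedback through its own Beliefs changes, which is exactly what keeps the effect unintended yet teleonomically produced.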
2 Just a couple of examples: consider, in economics, the literature on the "irrational" bias due to sunk costs (for example, Bazerman, 1990); in game-theoretic approaches to the social sciences, consider Olson's claim about the irrationality of participation in social movements (Olson, 1965). More generally, one should recall Keynes' criticism of economists as "Benthamite".
3 By "sub-cognitive" agents I mean agents whose behaviour is not regulated by an internal explicit representation of its purpose and by explicit beliefs. Sub-cognitive agents are for example simple neural-net agents, or mere reactive agents.
4 Cognitive agents are agents whose actions are internally regulated by goals (goal-directed) and whose goals, decisions, and plans are based on beliefs. Both goals and beliefs are cognitive representations that can be internally generated, manipulated, and subjected to inferences and reasoning. Since a cognitive agent may have more than one goal active in the same situation, it must have some form of choice/decision, based on some "reason", i.e. on some belief and evaluation.
Notice that we use "goal" as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions, etc.
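As a purely illustrative sketch of the choice/decision just described (the belief labels, goal names, and scoring rule are invented for the example, not taken from the paper), a cognitive agent might rank its active goals by an evaluation derived from its beliefs:

```python
# Illustrative only: choosing among several active goals on the basis of
# belief-derived evaluations, i.e. deciding "based on some reason".

def choose_goal(active_goals, beliefs):
    """Pick the goal whose supporting beliefs score highest."""
    def evaluation(goal):
        # Sum the agent's confidence in the beliefs supporting the goal.
        return sum(beliefs.get(b, 0.0) for b in goal["supporting_beliefs"])
    return max(active_goals, key=evaluation)

beliefs = {"partner_is_reliable": 0.9, "time_is_short": 0.4}
active_goals = [
    {"name": "cooperate", "supporting_beliefs": ["partner_is_reliable"]},
    {"name": "act_alone", "supporting_beliefs": ["time_is_short"]},
]
print(choose_goal(active_goals, beliefs)["name"])  # -> "cooperate"
```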
5 Of course one might consider the relation between the beliefs and goals generating and controlling behaviours as just a complex and flexible form of condition/action rule, where the condition is a configuration of beliefs to be checked and the action is a configuration of activated goals to be possibly planned. I don't think this is wrong; I just claim that cognitive representations must be analytically modelled to understand cooperation.
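Under that reading, the belief-goal link can be sketched as a rule whose condition tests a configuration of beliefs and whose action activates a configuration of goals. The fragment below is only an illustration of this reading (the rule structure, belief labels, and goal names are assumptions made for the example):

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    condition: callable  # predicate over the agent's beliefs
    goals: list          # goals to activate when the condition holds

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    active_goals: list = field(default_factory=list)
    rules: list = field(default_factory=list)

    def deliberate(self):
        # "Condition": a configuration of beliefs to be checked;
        # "action": a configuration of goals to be activated and planned.
        for rule in self.rules:
            if rule.condition(self.beliefs):
                self.active_goals.extend(rule.goals)

agent = Agent(beliefs={"partner_is_reliable", "task_too_big_for_one"})
agent.rules.append(Rule(
    condition=lambda b: {"partner_is_reliable", "task_too_big_for_one"} <= b,
    goals=["adopt_partner_goal", "propose_cooperation"],
))
agent.deliberate()
print(agent.active_goals)  # ['adopt_partner_goal', 'propose_cooperation']
```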
CASTELFRANCHI, C. 1997. Challenges for agent-based social simulation. The theory of social functions. IP-CNR, TR. Sett.97; invited talk at SimSoc'97, Cortona, Italy.
CASTELFRANCHI, C. and Conte, R. 1992. Emergent functionality among intelligent systems: Cooperation within and without minds. AI & Society, 6, 78-93.
CASTELFRANCHI, C. and Conte, R. 1997. Limits of Strategic Rationality for Agents and M-A Systems. In A. Cesta and P.-Y. Schobbens (eds.) Proceedings of the 4th ModelAge Workshop on "Formal Models of Agents", pp. 59-70.
CONTE, R. and Castelfranchi, C. 1995. Cognitive and Social Action. London: UCL Press.
CONTE, R., Miceli, M. and Castelfranchi, C. 1991. Limits and Levels of Cooperation. Disentangling various types of prosocial interaction. In Y. Demazeau and J.-P. Müller (eds.) Decentralized AI - 2. Amsterdam: Elsevier.
ELSTER, J. 1982. Marxism, functionalism and game theory: the case for methodological individualism. Theory and Society, 11, 453-481.
MATARIC, M. 1992. Designing Emergent Behaviors: From Local Interactions to Collective Intelligence. In Simulation of Adaptive Behavior 2. Cambridge, MA: MIT Press.
OLSON, M. 1965. The Logic of Collective Action. Cambridge, Mass.: Harvard University Press.
STEELS, L. 1990. Cooperation between distributed agents through self-organization. In Y. Demazeau and J.-P. Müller (eds.) Decentralized AI. Amsterdam: Elsevier/North-Holland.
TUOMELA, R. and Bonnevier-Tuomela, M. 1997. From social imitation to teamwork. In G. Holmström-Hintikka and R. Tuomela (eds.) Contemporary Action Theory, Vol. II. Dordrecht: Kluwer, 1-47.
VAN PARIJS, P. 1982. Functionalist Marxism rehabilitated. A comment on Elster. Theory and Society, 11, 497-511.
© Copyright Journal of Artificial Societies and Social Simulation, 1998