(8 articles matched your search)

Normative Reputation and the Costs of Compliance

Cristiano Castelfranchi, Rosaria Conte and Mario Paolucci
Journal of Artificial Societies and Social Simulation 1 (3) 3

Abstract: In this paper, the role of normative reputation in reducing the costs of complying with norms is explored. In previous simulations (Conte & Castelfranchi 1995), in contrast to a traditional view of norms as means of increasing co-ordination among agents, the effects of normative and non-normative strategies in the control of aggression among agents in a common environment were compared. Normative strategies were found to reduce aggression to a much greater extent than non-normative strategies, and also to afford the highest average strength and the lowest polarisation of strength among the agents. The present study explores the effects of the interaction between populations following different criteria for aggression control. In such a situation the normative agents alone bear the cost of norms, owing to their less aggressive behaviour, while other agents benefit from their presence. Equity is then restored by raising the cost of aggression through the introduction of agents' reputation. This allows normative agents to stop respecting the cheaters' private property and to impose a price on transgression. The relevance of knowledge communication is then emphasised by allowing neighbouring normative agents to communicate. In particular, the spreading of agents' reputation via communication allows normative agents to co-operate without deliberation at the expense of non-normative agents, thereby redistributing the costs of normative strategies.
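The mechanism the abstract describes can be illustrated with a minimal, hypothetical sketch (all class names and rules below are illustrative, not the authors' code): normative agents respect others' property unless the owner is reputed to be a cheater, non-normative agents attack at will, and communicated reputation is what raises the cost of transgression.

```python
# Illustrative sketch, assuming a simple two-strategy population:
# normative agents attack only known cheaters; non-normative agents
# always attack. Reputation spreads by communication among neighbours.

class Agent:
    def __init__(self, name, normative):
        self.name = name
        self.normative = normative
        self.strength = 10
        self.known_cheaters = set()  # reputation knowledge

    def will_attack(self, owner):
        if not self.normative:
            return True  # non-normative: always aggress
        # normative: aggress only against reputed cheaters
        return owner.name in self.known_cheaters


def spread_reputation(neighbours, cheater_name):
    # communication step: neighbours learn a cheater's bad reputation
    for agent in neighbours:
        if agent.normative:
            agent.known_cheaters.add(cheater_name)


norm = Agent("N1", normative=True)
cheat = Agent("C1", normative=False)

assert not norm.will_attack(cheat)     # normative agent bears the cost alone
spread_reputation([norm], cheat.name)  # reputation circulates
assert norm.will_attack(cheat)         # transgression now has a price
```

The sketch shows only the decision rule; the original simulations additionally track strength, food seeking, and aggression outcomes over repeated rounds.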

Special Interest Group on Agent-Based Social Simulation

Rosaria Conte and Scott Moss
Journal of Artificial Societies and Social Simulation 2 (1) 4


Intelligent Social Learning

Rosaria Conte and Mario Paolucci
Journal of Artificial Societies and Social Simulation 4 (1) 3

Abstract: One of the cognitive processes responsible for social propagation is social learning, broadly meant as the process by means of which agents' acquisition of new information is caused or favoured by their being exposed to one another in a common environment. Social learning results from one or another of a number of social phenomena, the most important of which are social facilitation and imitation. In this paper, a general notion of social learning is defined and the main processes responsible for it, namely social facilitation and imitation, are analysed in terms of the social mental processes they require. A brief analysis of classical definitions of social learning is carried out, showing that a systematic and consistent treatment of this notion is still missing. A general notion of social learning is then introduced, and the two main processes that may lead to it, social facilitation and imitation, are defined as different steps on a continuum of cognitive complexity. Finally, the utility of the present approach is discussed. The analysis presented in this paper draws upon a cognitive model of social action (cf. Conte & Castelfranchi 1995; Conte 1999). The agent model referred to throughout the paper is a cognitive model, endowed with mental properties for pursuing goals and intentions and for knowledge-based action. Note that a cognitive agent need not be a natural system, although many examples examined in the paper are drawn from the real social life of humans. Cognitive agents may also be artificial systems endowed with the capacity for reasoning, planning and decision-making about both world and mental states. Finally, some advantages of intelligent social learning in agent systems applications are discussed.

From Social Monitoring to Normative Influence

Rosaria Conte and Frank Dignum
Journal of Artificial Societies and Social Simulation 4 (2) 7

Abstract: This paper is intended to analyse the concepts involved in the phenomena of social monitoring and norm-based social influence for systems of normative agents. These are here defined as deliberative agents, representing norms and deciding upon them. Normative agents can use the norms to evaluate others' behaviours and, possibly, convince them to comply with norms. Normative agents contribute to the social dynamics of norms, and more specifically, of norm-based social control and influence. In fact, normative intelligence allows agents to check the efficacy of the norms (the extent to which a norm is applied in the system in which it is in force) and, possibly, to urge their fellows to obey the norms. The following issues are addressed: What is norm-based control? Why and how do agents exercise control on one another? What role does it play in the spread of norms?

Responsibility for Societies of Agents

Rosaria Conte and Mario Paolucci
Journal of Artificial Societies and Social Simulation 7 (4) 3

Abstract: This paper presents a pre-formal social cognitive model of social responsibility as implying the deliberative capacity of the bearer, but not necessarily her decision to act or not. Responsibility is also defined as an objective property of agents, which they cannot remit at will. Two specific aspects are analysed: (a) the action of "counting upon" given agents as responsible entities, and (b) the consequent property of accountability: responsibility makes it possible to identify the locus of accountability, that is, which agents are accountable for which events and to what extent. Agents responsible for certain events, and upon whom others count, are asked to account or answer for these events. Two types of responsibility are distinguished and their commonalities pointed out: (a) a primary form of responsibility, which is a consequence of mere deliberative power, and (b) a task-based form, which is a consequence of task commitment. Primary responsibility is a relation between deliberative agents and social harms, whether these are intended and believed or not, and whether they are actually caused by the agent or not. The boundaries of responsibility are investigated, and the conceptual links of responsibility with obligation and guilt are examined. Task-based responsibility implies task- or role-commitment. Furthermore, individual vs. shared vs. collective responsibility are distinguished. The potential benefits and utility of the proposed analysis in the field of e-governance are highlighted. Concluding remarks and ideas for future work are discussed in the final section.

Introduction to the Special Section on Reputation in Agent Societies

Mario Paolucci and Jordi Sabater-Mir
Journal of Artificial Societies and Social Simulation 9 (1) 16

Abstract: This special section includes papers from the 'Reputation in Agent Societies' workshop held as part of the 2004 IEEE/WIC/ACM International Joint Conference on Intelligent Agent Technology (IAT'04) and Web Intelligence (WI'04), September 20, 2004 in Beijing, China. The purpose of this workshop was to promote multidisciplinary collaboration on the modelling and implementation of reputation systems. Reputation is increasingly at the centre of attention in many fields of science and domains of application, including economics, organisation science, policy-making, (e-)governance, cultural evolution, social dilemmas, socio-dynamics and innofusion. However, the result of all this attention is a great number of ad hoc models and little integration of instruments for the implementation, management and optimisation of reputation. On the one hand, entrepreneurs and administrators manage corporate and firm reputation without contributing to or accessing a solid, general and integrated body of scientific knowledge on the subject matter. On the other hand, software designers believe they can design and implement online reputation reporting systems without investigating the properties, requirements and dynamics of reputation in natural societies and why it evolved. We promoted the workshop and this special section with the hope of taking the first steps towards a new, cross-disciplinary approach to reputation, accounting for the social cognitive mechanisms and processes that support it and working towards a consensus on essential guidelines for designing or shaping reputation technologies.

Repage: REPutation and ImAGE Among Limited Autonomous Partners

Jordi Sabater-Mir, Mario Paolucci and Rosaria Conte
Journal of Artificial Societies and Social Simulation 9 (2) 3

Abstract: This paper introduces Repage, a computational system that adopts a cognitive theory of reputation. We propose a fundamental difference between image and reputation, which suggests a way out of the paradox of sociality, i.e. the trade-off between agents' autonomy and their need to adapt to the social environment. On the one hand, agents are autonomous if they select partners based on their own social evaluations (images). On the other, they need to update those evaluations by taking others' into account. Hence, social evaluations must circulate and be represented as "reported evaluations" (reputation) before, and in order for, agents to decide whether or not to accept them. To represent this level of cognitive detail in the design of artificial agents, a specialised subsystem is needed, which we are in the course of developing for the public domain. In the paper, after a short presentation of the cognitive theory of reputation and its motivations, we describe the implementation of Repage.
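The image/reputation distinction at the heart of the abstract can be sketched as two separate stores with an explicit acceptance step (this is an illustrative sketch, not the Repage implementation; all names are assumptions): direct experience updates the agent's image, while gossip is held as reputation until the agent autonomously decides to adopt it.

```python
# Illustrative sketch of the image-vs-reputation distinction:
# an image is the agent's OWN evaluation of a partner; a reputation
# is a REPORTED evaluation that circulates and is not adopted
# automatically, preserving the agent's autonomy.

from dataclasses import dataclass, field


@dataclass
class Evaluator:
    images: dict = field(default_factory=dict)       # own evaluations
    reputations: dict = field(default_factory=dict)  # reported evaluations

    def observe(self, target, score):
        # direct experience updates the image
        self.images[target] = score

    def hear(self, target, reported_score):
        # gossip is stored as reputation, not adopted automatically
        self.reputations[target] = reported_score

    def accept_reputation(self, target):
        # autonomous decision: promote hearsay into one's own image
        if target in self.reputations:
            self.images[target] = self.reputations[target]


a = Evaluator()
a.hear("seller42", 0.2)
assert "seller42" not in a.images   # hearsay alone is not an image
a.accept_reputation("seller42")
assert a.images["seller42"] == 0.2  # adopted only by explicit decision
```

Keeping the two stores separate is what lets evaluations circulate as reported information while each agent remains free to accept or reject them.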

What Do Agent-Based and Equation-Based Modelling Tell Us About Social Conventions: The Clash Between ABM and EBM in a Congestion Game Framework

Federico Cecconi, Marco Campenni, Giulia Andrighetto and Rosaria Conte
Journal of Artificial Societies and Social Simulation 13 (1) 6

Abstract: In this work, simulation-based and analytical results on the emergence of steady states in traffic-like interactions are presented and discussed. The objective of the paper is twofold: i) to investigate the role of social conventions in coordination problems, and more specifically in congestion games; ii) to compare simulation-based and analytical results in order to establish what these methodologies can tell us about the subject matter. Our main claim is that Agent-Based Modelling (ABM) and Equation-Based Modelling (EBM) are not alternatives but, in some circumstances, complementary, and we suggest some features distinguishing these two ways of modelling that go beyond the practical considerations provided by Parunak H.V.D., Robert Savit and Rick L. Riolo. Our model is based on the interaction of strategies of heterogeneous agents who have to cross a junction. Each junction has only four inputs, each of which is passable only in the direction of the intersection and can be occupied by only one agent at a time. The results generated by ABM simulations provide structured data for developing the analytical model, through which the simulation results can be generalised and predictions made. ABM simulations are artifacts that generate empirical data on the basis of the variables, properties, local rules and critical factors the modeller decides to implement in the model; in this way, simulations generate controlled data, useful for testing the theory and reducing complexity, while EBM allows the models to be closed, thus making it possible to falsify them.
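A toy version of the junction setting just described can make the role of a convention concrete (this is an illustrative sketch under assumed rules, not the authors' model): four inputs to one intersection, each holding at most one agent, with a "yield to the agent on your right" convention resolving who crosses first.

```python
# Illustrative toy model: four inputs (N, E, S, W) to one junction,
# each occupied by at most one agent. The convention "yield to the
# agent on your right" determines the crossing order.

APPROACHES = ["N", "E", "S", "W"]
RIGHT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}  # who is on your right


def crossing_order(occupied):
    """Resolve crossings iteratively: an agent may cross when the
    input on its right is empty; a fully symmetric deadlock (all
    four occupied) is broken arbitrarily."""
    order, waiting = [], set(occupied)
    while waiting:
        movers = [a for a in waiting if RIGHT_OF[a] not in waiting]
        if not movers:                      # symmetric deadlock
            movers = [sorted(waiting)[0]]   # arbitrary tie-break
        for a in sorted(movers):
            order.append(a)
            waiting.discard(a)
    return order


# E yields to N (N is on E's right), so N crosses first:
assert crossing_order(["N", "E"]) == ["N", "E"]
# With all four inputs occupied, everyone still gets through:
assert sorted(crossing_order(APPROACHES)) == ["E", "N", "S", "W"]
```

The point of the comparison in the paper is that the steady states such local rules produce in simulation can then be captured and generalised by an equation-based model of the same game.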