BEN: An Architecture for the Behavior of Social Agents

Abstract: Over the last few years, the use of agent-based simulations to study social systems has spread to many domains (e.g., geography, ecology, sociology, economy). These simulations aim to reproduce real-life situations involving human beings and thus need to integrate complex agents to match the behavior of the simulated people. Therefore, notions such as cognition, emotions, personality, social relationships or norms have to be taken into account, but there is currently no agent architecture that incorporates all these features and can be used by the majority of modelers, including those with little programming skill. In this paper, the BEN (Behavior with Emotions and Norms) architecture is introduced to tackle this issue. It is a modular architecture based on the BDI model of cognition, featuring modules that add emotions, emotional contagion, personality, social relationships and norms to agent behavior. This architecture is integrated into the GAMA simulation platform. An application of BEN to the simulation of the evacuation of a nightclub on fire is presented and shows the complexity of behaviors that may be developed with this architecture to create credible and expressive simulations.


Introduction
BDI architectures have also been extended to integrate norms and obligations, creating the BOID (Belief Obligation Intention Desire) architecture (Broersen et al.). Obligations are added to the classical BDI, challenging the agent's desires. BOID has been extended to BRIDGE (Dignum et al.) to include a "social awareness" based on social norms. Another approach consists in creating normative systems, which do not need any cognition, as is the case with EMIL-A (Andrighetto et al.). This model describes all the possible states of the agent in terms of norms which may be fulfilled or violated, leading an agent to make decisions in conformity with the norms of the system. This approach is extended by the NoA architecture (Kollingbaum). However, to our knowledge, these last two works remain theoretical.
Other works have tried to integrate emotional contagion (Hatfield et al.) into simulation, leading to the ASCRIBE (Agent-based Social Contagion Regarding Intentions, Beliefs and Emotions) model (Bosse et al.), which describes the dynamic evolution of emotions in a population. This model was improved to take socio-cultural parameters into account in evacuation simulations and became the IMPACT model (van der Wal et al.). The concepts developed by the authors involve computing the intensity of a mental state (an emotion, a belief or an intention) by taking into account the intensities of these values in other agents.
With the same idea of combining social features, different architectures have been proposed in simulation by mixing cognition either with emotions, personality and social relations (Ochs et al.) or with emotions, emotional contagion and personality (Lhommet et al.). These approaches use the OCEAN model (McCrae & John) of personality to link the integrated social features.

Commentary and synthesis
There are two main problems when trying to simulate real-case scenarios featuring people: handling a large number of agents in a reasonable computation time and creating credible behavior. The HiDAC architecture (Pelechano et al.) takes psychological notions such as stress into account to simulate a dense crowd evacuating a building. This work offers good results in terms of computation time but is only applied to the specific case of indoor evacuations. The MASSIS architecture (Pax & Pavón) proposes another approach to the same problem, creating the agent's behavior with a set of plans triggered by perceptions. However, this work is also dependent on its case study and is not designed to be easily adapted to social simulations in general.
The EROS principle (Jager) fosters the use of psychological theories in the definition of social agents to gain credibility and explainability from the obtained results. This section has shown that various efforts have been made to create general decision-making architectures integrating cognitive, affective, and social dimensions based on psychological theories. However, none of them covers all these notions at the same time, which would be useful for a modeler with little programming skill who wants to find the dimensions best suited to their own case.
The BEN architecture aims to fill this void by integrating cognition, emotions, emotional contagion, management of norms and social relations into a single agent architecture for social simulation. A personality feature helps combine these dimensions so that an agent makes a decision based on its own perception of the world and its own overall mental state. All these dimensions enable a modeler to define expressive and complex agent behavior while being able to explain the observations in common language (e.g., "this agent is performing this action because it feels fear about something and, at the same time, has a social relation with this other agent").
Besides, BEN is independent from any specific application and is built to be modular. Thus, a modeler does not have to use all the components if they are not necessary. The objective is to be as accessible as possible to the whole social simulation community. This means compromises have been made to enable the computation of thousands of agents in a reasonable time, while ensuring a decision-making process that takes into account all the elements present in the agent's mental state.

Representing Social Features
The BEN architecture features notions such as cognition, personality, emotions, emotional contagion, norms and social relations to describe the agents' behavior in the context of a social simulation. In order to link these features together in the architecture, each of these components has to be represented using a formalism that ensures its compatibility with the others. In this section, the formalization of each social feature is described in detail.
The main part of BEN is the agent's cognition. A cognitive agent may reason over a set of perceptions of its environment and a set of previously acquired knowledge. In BEN, this environment is represented through the concept of predicates.
A predicate represents information about the world. This means it may represent a situation, an event or an action, depending on the context. As the goal is to create behaviors for agents in a social environment, that is to say taking into account actions performed by other agents along with facts from the environment in the decision-making process, a piece of information P caused by an agent j with an associated list of values V is represented by P j (V). A predicate P alone represents information caused by any agent or no agent, with no particular values associated. The opposite of a predicate P is defined as not P.
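The predicate formalism above can be sketched in code. This is an illustrative data structure, not BEN's actual implementation (BEN is written for the GAMA platform); the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sketch of the predicate formalism: P_j(V) is information P
# caused by agent j with associated values V; `positive` encodes negation.
@dataclass(frozen=True)
class Predicate:
    name: str                       # P: the information about the world
    agent: Optional[str] = None     # j: the causing agent (None = any/none)
    values: Tuple = ()              # V: associated values
    positive: bool = True           # False encodes "not P"

    def negated(self) -> "Predicate":
        # The opposite of a predicate P is defined as "not P"
        return Predicate(self.name, self.agent, self.values, not self.positive)

p = Predicate("door_open", agent="j", values=("room_3",))
assert p.negated().negated() == p   # double negation restores P
```

Making the class frozen lets predicates be stored in sets and used as dictionary keys, which matches how knowledge bases are queried later in the architecture.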
The rest of the section introduces how notions related to cognition, personality, emotions, emotional contagion, norms and social relations are represented in BEN.

Cognition about the environment
Reasoning with cognitive mental states
Through the architecture, an agent manipulates cognitive mental states to make a decision; they constitute the agent's mind. A cognitive mental state possessed by the agent i is represented by M i (PMEm,Val,Li) with the following meaning: • M: the modality indicating the type of the cognitive mental state (e.g. a belief, a desire, etc.).
• PMEm: the object with which the cognitive mental state relates. It can be a predicate, another cognitive mental state, or an emotion.
• Val: a real value whose meaning depends on the modality.

The cognitive part of BEN is based on the BDI paradigm (Bratman), in which agents have a belief base, a desire base and an intention base to store the cognitive mental states about the world. In order to connect cognition with other social features, the architecture outlines a total of six different modalities, defined as follows:

• Belief: represents what the agent knows about the world. The value attached to this mental state indicates the strength of the belief.
• Uncertainty: represents uncertain information about the world. The value attached to this mental state indicates the importance of the uncertainty.
• Desire: represents a state of the world the agent wants to achieve. The value attached to this mental state indicates the priority of the desire.
• Intention: represents a state of the world the agent is committed to achieve. The value attached to this mental state indicates the priority of the intention.
• Ideal: represents information socially judged by the agent. The value attached to this mental state indicates the praiseworthiness value of the ideal about P. It can be positive (the ideal about P is praiseworthy) or negative (the ideal about P is blameworthy).
• Obligation: represents a state of the world the agent has to achieve. The value attached to this mental state indicates the priority of the obligation.
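The six modalities and their value semantics can be summarized in a small sketch. Names here are illustrative assumptions, not BEN's API:

```python
from dataclasses import dataclass
from enum import Enum

# The six modalities of a cognitive mental state M_i(PMEm, Val), with the
# meaning of Val noted for each, as described in the list above.
class Modality(Enum):
    BELIEF = "belief"            # Val = strength of the belief
    UNCERTAINTY = "uncertainty"  # Val = importance of the uncertainty
    DESIRE = "desire"            # Val = priority of the desire
    INTENTION = "intention"      # Val = priority of the intention
    IDEAL = "ideal"              # Val = praiseworthiness (may be negative)
    OBLIGATION = "obligation"    # Val = priority of the obligation

@dataclass
class MentalState:
    modality: Modality
    about: object    # PMEm: a predicate, another mental state, or an emotion
    value: float     # Val: meaning depends on the modality

ms = MentalState(Modality.BELIEF, "fire_in_room", 0.8)
```

Note that `about` is deliberately untyped: per the formalism, a mental state may relate to a predicate, to another cognitive mental state, or to an emotion.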
Acting on the world through plans
To act upon the world according to its intentions, an agent needs a plan of actions, that is to say a set of behaviors executed in a certain context in response to an intention. In BEN, a plan owned by agent i is represented by Pl i (Int,Cont,Pr,B) with: • Pl: the name of the plan.
• Int: the intention triggering this plan.
• Cont: the context in which this plan may be applied.
• Pr: a priority value used to choose between multiple plans relevant at the same time. If two plans are relevant with the same priority, one is chosen at random.
• B: the behavior, as a sequence of instructions, to execute if the plan is chosen by the agent.
The context of a plan is a particular state of the world in which this plan should be considered by the agent making a decision. This feature makes it possible to define multiple plans answering the same intention but activated in different contexts.
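The selection rule described above (plans filtered by intention and context, highest priority wins, ties broken at random) can be sketched as follows; the function and field names are assumptions for illustration:

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# Sketch of a plan Pl_i(Int, Cont, Pr, B) and of the selection rule.
@dataclass
class Plan:
    name: str
    intention: str                    # Int: intention triggering this plan
    context: Callable[[Dict], bool]   # Cont: applicability test on the world
    priority: float                   # Pr
    behavior: List[str]               # B: sequence of instructions

def select_plan(plans: List[Plan], intention: str,
                world: Dict) -> Optional[Plan]:
    # Keep plans answering the intention whose context currently holds
    candidates = [p for p in plans
                  if p.intention == intention and p.context(world)]
    if not candidates:
        return None
    # Highest priority wins; ties are resolved at random
    best = max(p.priority for p in candidates)
    return random.choice([p for p in candidates if p.priority == best])

plans = [Plan("walk_out", "escape", lambda w: True, 1.0, ["move"]),
         Plan("run_out", "escape", lambda w: w["fire"], 2.0, ["sprint"])]
chosen = select_plan(plans, "escape", {"fire": True})  # "run_out" wins
```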

Personality
In order to define personality traits, BEN relies on the OCEAN model (McCrae & John), also known as the big five factors model. In the BEN architecture, this model is represented through a vector of five values between 0 and 1, with 0.5 as the neutral value. The five personality traits are: • O: represents the openness of someone. A value of 0 stands for someone narrow-minded, a value of 1 stands for someone open-minded.
• C: represents the conscientiousness of someone. A value of 0 stands for someone impulsive, a value of 1 stands for someone who acts with preparation.
• E: represents the extroversion of someone. A value of 0 stands for someone shy, a value of 1 stands for someone extroverted.
• A: represents the agreeableness of someone. A value of 0 stands for someone hostile, a value of 1 stands for someone friendly.
• N: represents the degree of control someone has over his/her emotions, called neuroticism. A value of 0 stands for someone neurotic, a value of 1 stands for someone calm.
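The OCEAN vector can be sketched as a small validated structure (an illustrative assumption, with the [0, 1] range and 0.5 neutral value taken from the description above):

```python
from dataclasses import dataclass

# Sketch of the OCEAN personality vector: five values in [0, 1], 0.5 neutral.
@dataclass
class Personality:
    O: float = 0.5   # openness:          0 narrow-minded .. 1 open-minded
    C: float = 0.5   # conscientiousness: 0 impulsive .. 1 acts with preparation
    E: float = 0.5   # extroversion:      0 shy .. 1 extroverted
    A: float = 0.5   # agreeableness:     0 hostile .. 1 friendly
    N: float = 0.5   # neuroticism:       0 neurotic .. 1 calm

    def __post_init__(self):
        for trait in "OCEAN":
            v = getattr(self, trait)
            if not 0.0 <= v <= 1.0:
                raise ValueError(f"{trait} must lie in [0, 1], got {v}")

extrovert = Personality(E=0.9)   # otherwise neutral
```

Validating at construction time is useful because, later in the architecture, several internal parameters (obedience, charisma, receptivity) are derived from these five values and assume they stay in [0, 1].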

Emotions
Representing emotions
In BEN, the definition of emotions is based on the OCC theory of emotions (Ortony et al.). According to this theory, an emotion is a valued answer to the appraisal of a situation. Once again, as the agents are considered in the context of a society and should act depending on it, the definition of an emotion needs to contain the agent causing it. Thus, an emotion is represented by Em i (P,Ag,I,De) with the following elements: • Em i : the name of the emotion felt by agent i.
• P: the predicate representing the fact about which the emotion is expressed.
• Ag: the agent causing the emotion.
• I: the intensity of the emotion.
• De: the decay withdrawn from the emotion's intensity at each time step.

An emotion with no intensity and no decay is represented by Em i (P,Ag) and an emotion that is not caused by any agent is written Em i (P). I[Em i (P,Ag)] stands for the intensity of a particular emotion and De[Em i (P,Ag)] stands for its decay value.
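A minimal sketch of Em i (P,Ag,I,De), with the decay De withdrawn from the intensity I at each step as described above (names are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of an emotion Em_i(P, Ag, I, De).
@dataclass
class Emotion:
    name: str                    # Em_i: name of the emotion felt by agent i
    about: str                   # P: the fact the emotion is expressed about
    cause: Optional[str] = None  # Ag: agent causing it (None = no agent)
    intensity: float = 0.0       # I
    decay: float = 0.0           # De: withdrawn from I at each time step

    def step(self) -> bool:
        """Apply one decay step; return False once the emotion vanishes."""
        self.intensity = max(0.0, self.intensity - self.decay)
        return self.intensity > 0.0

fear = Emotion("fear", "fire", cause="j", intensity=0.75, decay=0.25)
# intensity per step: 0.75 -> 0.5 -> 0.25 -> 0.0 (emotion gone)
```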

Emotional contagion
Emotional contagion is the process whereby the emotions of an agent are influenced by the perception of emotions of agents nearby (Hatfield et al.). This influence can lead to the perceived emotion being copied or to the creation of a new emotion. In BEN, the formalism of this process is based on a simplified version of the ASCRIBE model (Bosse et al.), represented by (Em i ,Em j ,Ch i ,R j ,Th j ) for a contagion from agent i to agent j with the following meanings: • Em i : the emotion from i that triggers the contagion if it is perceived by j.
• Em j : the emotion created in j. It can be a copy of Em i (with different intensity and decay values) or a new emotion.
• Ch i : the charisma value of i, indicating its power to express its emotions.
• R j : the receptivity value of j, expressing its capacity to be influenced by other agents.
• Th: a threshold value. The contagion is executed only if charisma × receptivity is greater than this threshold.
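The triggering condition is a single comparison, sketched here directly from the definition above:

```python
# A contagion from i to j occurs only when charisma(i) x receptivity(j)
# reaches the threshold Th.
def contagion_occurs(charisma_i: float, receptivity_j: float,
                     threshold: float) -> bool:
    return charisma_i * receptivity_j >= threshold

assert contagion_occurs(0.8, 0.5, 0.25)       # 0.40 >= 0.25: contagion
assert not contagion_occurs(0.3, 0.4, 0.25)   # 0.12 <  0.25: no contagion
```

Because the product of two values in [0, 1] stays in [0, 1], the threshold directly controls how expressive a sender and how receptive a perceiver must jointly be for an emotion to spread.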

Norms and obligations
The definition of a normative system in BEN is based on the theoretical definition provided by Tuomela, the BOID architecture (Broersen et al.) and the framework proposed by López y López et al. This means that a norm is considered to be a behavior, active under certain conditions, that an agent may choose to obey in order to answer one of its intentions. In BEN, only the concepts of social norms and obligations are encompassed under the notion of norms as described below. A social norm is a convention adopted implicitly by a social group, while an obligation is an explicit rule imposed by an authority. In the BEN architecture, a norm possessed by agent i is represented by No i (Int,Cont,Ob,Pr,B,Vi) with: • No i : the name of the norm owned by agent i.
• Int: the intention which triggers this norm.
• Cont: the context in which this norm can be applied.
• Ob: an obedience value that serves as a threshold to determine whether or not the norm is applied depending on the agent's obedience value.
• Pr: a priority value used to choose between multiple norms applicable at the same time.
• B: the behavior, as a sequence of instructions, to execute if the norm is followed by the agent.
• Vi: a violation time indicating how long the norm is considered violated once it has been violated.
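An applicability test for a norm No i (Int,Cont,Ob,Pr,B,Vi) can be sketched from the elements above: the norm must answer the current intention, its context must hold, and the agent's obedience value must reach the norm's threshold Ob (names below are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Sketch of a norm and of its applicability test.
@dataclass
class Norm:
    name: str
    intention: str                    # Int: intention triggering this norm
    context: Callable[[Dict], bool]   # Cont
    obedience_threshold: float        # Ob
    priority: float                   # Pr
    behavior: List[str]               # B
    violation_time: float             # Vi: how long a violation is remembered

def norm_applicable(norm: Norm, intention: str, world: Dict,
                    agent_obedience: float) -> bool:
    return (norm.intention == intention
            and norm.context(world)
            and agent_obedience >= norm.obedience_threshold)

queue_up = Norm("queue", "exit", lambda w: w["crowded"], 0.4, 1.0,
                ["wait_in_line"], violation_time=10.0)
# An obedient agent follows the norm; a disobedient one does not.
assert norm_applicable(queue_up, "exit", {"crowded": True}, 0.7)
assert not norm_applicable(queue_up, "exit", {"crowded": True}, 0.2)
```

When the context holds but the obedience test fails, the agent is in exactly the situation later described as a violation: it chose another norm or plan because it decided to disobey.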
This definition of norms entirely covers the concept of social norms, but it is not enough to fully represent an obligation. To do so, the concept of laws is introduced. A law is an explicit rule, imposed upon the agent, that creates an obligation, as a cognitive mental state, under certain conditions. Once again, a law may be violated. Using all these elements, a law is represented by La(Cont,Obl,Ob) with: • La: the name of the law.
• Cont: the context in which this law can be applied.
• Obl: the obligation created by the law.
• Ob: an obedience value that serves as a threshold to determine whether or not the law is to be executed depending on the agent's obedience value.
Finally, as norms and laws may be violated, the architecture needs an enforcement system to apply sanctions against agents violating norms or laws. A sanction is a sequence of instructions triggered by enforcement. Enforcement done by agent i on agent j is represented by (Me j ,Sa i ,Re i ) with the following elements: • Me j : the modality of agent j that needs to be enforced. It can be a norm, a law or an obligation (the agent applied the law but did not execute the norm corresponding to its obligation).
• Sa i : the sanction the agent i applies if the modality enforced is violated.
• Re i : the sanction the agent i applies if the modality enforced is fulfilled, called the reward.
An enforcement works depending on its modality. Enforcing a norm means checking whether, given that its context was fulfilled, the enforced agent applied it or not. Enforcing a law means checking whether, given that its context was fulfilled, the law created the given obligation in the enforced agent. Finally, the enforcement of obligations enables modelers to create systems where an agent may fulfill a law (the law is "accepted" by the agent) while the corresponding obligation (i.e., the actions implied) may not be followed by the agent.
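The common shape of all three enforcement cases can be sketched as one decision: nothing happens if the modality was never activated, the reward Re i applies if it was fulfilled, and the sanction Sa i applies if it was violated (function names are illustrative assumptions):

```python
from typing import Callable, Optional

# Sketch of enforcement (Me_j, Sa_i, Re_i): agent i checks a modality of a
# perceived agent j and applies the sanction or the reward accordingly.
def enforce(context_held: bool, modality_fulfilled: bool,
            sanction: Callable[[], str],
            reward: Callable[[], str]) -> Optional[str]:
    if not context_held:
        return None                  # modality not activated: nothing to do
    return reward() if modality_fulfilled else sanction()

# j's norm context held but j disobeyed: the sanction is applied.
result = enforce(True, False,
                 sanction=lambda: "fine issued",
                 reward=lambda: "praise")
assert result == "fine issued"
```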

Social relations
As people create social relations when living with other people and change their behavior based on these relationships, the BEN architecture makes it possible to describe social relations in order to use them in agents' behavior. Based on the research carried out by Svennevig, a social relation is described using a finite set of variables. Svennevig identifies a minimal set of four variables: liking, dominance, solidarity and familiarity. A trust variable is added to interact with the enforcement of social norms. Therefore, in BEN, a social relation between agent i and agent j is expressed as R i,j (L,D,S,F,T) with the following elements: • R: the identifier of the social relation.
• L: a real value between -1 and 1 representing the degree of liking with the agent concerned by the link. A value of -1 indicates that agent j is hated, a value of 1 indicates that agent j is liked.
• D: a real value between -1 and 1 representing the degree of power exerted on the agent concerned by the link. A value of -1 indicates that agent j is dominating, a value of 1 indicates that agent j is dominated.
• S: a real value between 0 and 1 representing the degree of solidarity with the agent concerned by the link. A value of 0 indicates that there is no solidarity with agent j, a value of 1 indicates complete solidarity with agent j.
• F: a real value between 0 and 1 representing the degree of familiarity with the agent concerned by the link. A value of 0 indicates that there is no familiarity with agent j, a value of 1 indicates complete familiarity with agent j.
• T: a real value between -1 and 1 representing the degree of trust in agent j. A value of -1 indicates doubts about agent j, while a value of 1 indicates complete trust in agent j. The trust value does not evolve automatically in accordance with emotions.
With this definition, a social relation is not necessarily symmetric, which means R i,j (L,D,S,F,T) is not by definition equal to R j,i (L,D,S,F,T). L[R i,j ] stands for the liking value of the social relation between agent i and agent j, D[R i,j ] stands for its dominance value, S[R i,j ] for its solidarity value, F[R i,j ] for its familiarity value and T[R i,j ] for its trust value.
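The asymmetry of R i,j can be made concrete by storing relations per ordered pair of agents, so that R i,j and R j,i are independent entries (an illustrative sketch; the value ranges follow the list above):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Sketch of R_i,j(L, D, S, F, T), stored per ordered pair (i, j).
@dataclass
class SocialRelation:
    liking: float = 0.0       # L in [-1, 1]
    dominance: float = 0.0    # D in [-1, 1]
    solidarity: float = 0.0   # S in [0, 1]
    familiarity: float = 0.0  # F in [0, 1]
    trust: float = 0.0        # T in [-1, 1], not updated automatically

relations: Dict[Tuple[str, str], SocialRelation] = {}
relations[("i", "j")] = SocialRelation(liking=0.8, dominance=-0.2)
relations[("j", "i")] = SocialRelation(liking=-0.1)  # j may dislike i back

# The relation is not symmetric: R_i,j and R_j,i differ.
assert relations[("i", "j")].liking != relations[("j", "i")].liking
```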

Integrating Social Features into an Agent Architecture
The BEN architecture, represented in Figure , provides cognition, emotions, emotional contagion, social relations, personality and norms to agents for social simulation. All these features evolve together during the simulation in order to give a dynamic behavior to the agent, which may react to a change in its environment.
The architecture, whose execution is detailed in Algorithm , is composed of four main parts connected to the agent's knowledge bases, seated on the agent's personality. Each part is made up of processes that are either automatically computed (in blue) or need to be manually defined by the modeler (in pink). Some of these processes are mandatory (in solid line) and some others are optional (in dotted line). This modularity enables each modeler to use only the components that seem pertinent to the studied situation, without creating heavy and useless computations.

Figure : Diagram of the BEN architecture providing an agent with cognition, emotions, emotional contagion, personality, social relations and norms
In this section, each part of the architecture is explained; each dynamic concept developed hereafter is based on the static representation proposed in Section .

Knowledge of the agent
The agent's knowledge, which constitutes the core of the architecture, is composed of knowledge bases and variables.
The cognitive bases store all cognitive mental states of the agent as outlined in Section ; the emotional base stores emotions; the social relations base contains all the social relations the agent has with other agents; and the normative base contains all the norms of the agent as exposed in Section . The agent also has a base that stores the sanctions and a base for the action plans which are triggered by the cognitive engine. Finally, a personality based on the formalism presented in Section is used by the overall architecture for global parameterization. These last three bases sit apart from the central block, as the architecture's processes cannot modify them during the simulation.
In addition to these knowledge bases, the agent also has variables related to some of the social features. The idea behind the BEN architecture is to connect these variables to the personality module and in particular to the five dimensions of the OCEAN model in order to reduce the number of parameters which need to be entered by the user. These additional variables are the probability to keep the current plan, the probability to keep the current intention, a charisma value linked to the emotional contagion process, an emotional receptivity value linked to the emotional contagion, and an obedience value used by the normative engine.
For cognition, the agent has two parameters representing the probability to randomly remove the current plan or the current intention, in order to check whether there could be a better plan or a better intention in the current context. These two values are connected to the conscientiousness component (C) of the OCEAN model, as it describes the tendency of the agent to prepare its actions (with a high value) or act impulsively (with a low value).
For the emotional contagion, the formalism proposed in Section requires charisma (Ch) and emotional receptivity (R) to be defined for each agent. In BEN, charisma is related to the capacity of expression, which is linked to the extroversion value of the OCEAN model, while emotional receptivity is related to the capacity to control one's emotions, which is expressed by the neuroticism value of OCEAN.
With the concept of norms, the agent has an obedience value between 0 and 1 indicating its tendency to follow laws, obligations and norms. According to research in psychology which tried to explain the behavior of people participating in a recreation of Milgram's experiment (Bègue et al.), obedience is linked with the notions of conscientiousness and agreeableness, which yields an equation computing obedience from these two dimensions. In the same spirit, all the parameters required by each process are linked to the OCEAN model, as explained in the rest of this section.
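The exact equation is not reproduced in this excerpt. As a purely illustrative assumption (not BEN's formula), the dependency can be sketched as an increasing function of conscientiousness (C) and agreeableness (A), clamped to [0, 1]:

```python
# HYPOTHETICAL sketch only: obedience increases with conscientiousness (C)
# and agreeableness (A). The actual BEN equation may differ.
def obedience(C: float, A: float) -> float:
    return min(1.0, max(0.0, (C + A) / 2.0))

assert obedience(1.0, 1.0) == 1.0   # highly conscientious and agreeable
assert obedience(0.0, 0.0) == 0.0   # impulsive and hostile
```

Any monotone combination of C and A clamped to [0, 1] has the property the text requires: more conscientious and more agreeable agents obey more.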
Perceiving the environment
The first step of BEN, corresponding to module number of Figure , is the perception of the environment. This module connects the environment to the knowledge of the agent, transforming information from the world into cognitive mental states, emotions or social links; it is also used to apply sanctions during the enforcement of other agents' norms.
The first process in this perception consists in adding beliefs about the world. During this phase, information from the environment is transformed into predicates which are included in beliefs or uncertainties and then added to the agent's knowledge bases. This process enables the agent to update its knowledge about the world. From the modeler's point of view, it is only necessary to specify which information is transformed into which predicate. The addition of a belief Belief A (X) triggers multiple processes related to belief revision: it removes Belief A (not X), it removes Intention A (X), it removes Desire A (X) if Intention A (X) has just been removed, it removes Uncertainty A (X) or Uncertainty A (not X), and it removes Obligation A (X).
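The belief-revision side effects listed above can be sketched over simple per-modality sets of predicates (a deliberate simplification of BEN's bases; names are assumptions):

```python
# Sketch of belief revision on adding Belief_A(X): each step below mirrors
# one removal rule from the description above.
def add_belief(bases: dict, x: str, neg) -> None:
    bases["belief"].discard(neg(x))        # remove Belief(not X)
    had_intention = x in bases["intention"]
    bases["intention"].discard(x)          # remove Intention(X)
    if had_intention:
        bases["desire"].discard(x)         # remove Desire(X) if the
                                           # intention was just removed
    bases["uncertainty"].discard(x)        # remove Uncertainty(X)
    bases["uncertainty"].discard(neg(x))   # ... or Uncertainty(not X)
    bases["obligation"].discard(x)         # remove Obligation(X)
    bases["belief"].add(x)

neg = lambda p: p[4:] if p.startswith("not ") else "not " + p
bases = {"belief": {"not fire"}, "intention": {"fire"},
         "desire": {"fire"}, "uncertainty": {"fire"}, "obligation": {"fire"}}
add_belief(bases, "fire", neg)
# The new belief replaces the contradictory belief and clears the
# intention, desire, uncertainty and obligation about "fire".
```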
The emotional contagion enables the agent to update its emotions according to the emotions of other perceived agents. Based on the formalism exposed in Section , the modeler has to indicate the emotion triggering the contagion, the emotion created in the perceiving agent and the threshold of this contagion; the charisma (Ch) and receptivity (R) values are automatically computed as explained in Section . The contagion from agent i to agent j occurs only if Ch i × R j is greater than or equal to the threshold (Th), whose value is . by default. The presence of the trigger emotion in the perceived agent is checked in order to create the indicated emotion. The equations determining the intensity and the decay of the new emotion are expressed with Equation and Equation .
If Em j (P) already exists, its intensity and decay are updated accordingly. Thereafter, the agent has the possibility of creating social relations with other perceived agents. The modeler indicates the initial value for each component of the social link, as explained in Section . By default, a neutral relation is created, with each value of the link at its neutral value. Social relations can also be defined before the start of the simulation, to indicate that an agent has links with other agents at the start of the simulation, such as links with friends or family members.
Finally, the agent may apply sanctions through the norm enforcement of other perceived agents. The modeler needs to indicate which modality is enforced and the sanction and reward used in the process. Then, the agent checks whether the norm, the obligation or the law is violated, applied or not activated by the perceived agent. To do so, each agent has to have access to other agents' normative bases.
A norm is considered violated when its context is verified, and yet the agent chose another norm or another plan to execute because it decided to disobey. A law is considered violated when its context is verified, but the agent disobeyed it, not creating the corresponding obligation. Finally, an obligation is considered violated if the agent did not execute the corresponding norm because it chose to disobey.
Updating the knowledge
The second step of the architecture, corresponding to module number on Figure , consists in managing the agent's knowledge. This means updating the knowledge bases according to the latest perceptions: adding new desires, new obligations, new emotions or updating social relations, for example.
Modelers have to use inference rules for this purpose. These rules are triggered by a new belief, a new uncertainty or a new emotion, in a certain context, and may add or remove any cognitive mental state or emotion indicated by the user. Using multiple inference rules helps the agent adapt its mind to the perceived situation without removing all its older cognitive mental states or emotions, thus enabling the creation of a cognitive behavior. These inference rules make it possible to manually link the various dimensions of an agent, for example creating desires depending on emotions, social relations and personality.
Using the same idea, modelers can define laws, based on the formalism defined in Section . These laws enable the creation of obligations in a given context based on the newest beliefs created by the agent through its perception or its inference rules. Modelers also need to indicate an obedience threshold: if the agent's obedience value is below that threshold, the law is violated. If the law is activated, the obligation is added to the agent's cognitive mental state bases. The definition of laws makes it possible to create a behavior based on obligations imposed upon the agent.
The other two processes of this module are the automatic computations of the agent's emotions and social relations. The following subsections indicate which models are used in the implementation of these processes.

Adding emotions automatically
BEN enables the agent to get emotions about its cognitive mental states. This addition of emotions is based on the OCC model (Ortony et al.) and its logical formalism (Adam), which was proposed to integrate the OCC model into a BDI formalism.

According to the OCC theory, emotions can be split into three groups: emotions linked to events, emotions linked to people and actions performed by people, and emotions linked to objects. In BEN, as the focus is on relations between social agents, only the first two groups of emotions (emotions linked to events and people) are considered.
The emotions about beliefs are joy and sadness. Their initial intensity is computed according to Equation , with N the neuroticism component of the OCEAN model.
The emotions about uncertainties are fear and hope. Combined emotions about uncertainties are built upon fear and hope: they appear when an uncertainty is replaced by a belief, transforming fear and hope into satisfaction, disappointment, relief or fear confirmed. Their initial intensity is computed according to Equation , with Em' i (P) the emotion of fear/hope.
On top of this, according to the logical formalism (Adam), four inference rules are triggered by these emotions: the creation of fear confirmed or of relief replaces the emotion of fear; the creation of satisfaction or of disappointment replaces a hope emotion; the creation of satisfaction or relief leads to the creation of joy; and the creation of disappointment or fear confirmed leads to the creation of sadness.
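The four inference rules above can be sketched as two lookup tables, one for which emotion is replaced and one for which emotion is derived (a simplification operating on emotion names only):

```python
# Sketch of the four OCC-derived inference rules: combined emotions replace
# the fear/hope they are built upon, then derive joy or sadness.
REPLACES = {"fear_confirmed": "fear", "relief": "fear",
            "satisfaction": "hope", "disappointment": "hope"}
DERIVES = {"satisfaction": "joy", "relief": "joy",
           "disappointment": "sadness", "fear_confirmed": "sadness"}

def apply_emotion_rules(emotions: set, new_emotion: str) -> set:
    emotions = set(emotions) | {new_emotion}
    emotions.discard(REPLACES.get(new_emotion))   # replace fear or hope
    if new_emotion in DERIVES:
        emotions.add(DERIVES[new_emotion])        # create joy or sadness
    return emotions

# Hope resolved positively: satisfaction replaces hope and creates joy.
assert apply_emotion_rules({"hope"}, "satisfaction") == {"satisfaction", "joy"}
```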
The emotions about other agents with a positive liking value are related to the emotions of other agents with which there is a social relation with a positive liking value on that link. These are the emotions called "happy for" and "sorry for". Their initial intensity is computed according to Equation , with A the agreeableness value from the OCEAN model.
Emotions about other agents with a negative liking value are close to the previous definitions; however, they are related to the emotions of other agents with which the social relation has a negative liking value. These emotions are resentment and gloating. Their initial intensity is computed according to Equation . This equation can be seen as the inverse of Equation : the intensity of resentment or gloating is greater if the agent has a low level of agreeableness, contrary to the intensity of "happy for" and "sorry for".
Emotions about ideals are related to the agent's ideal base, which contains, at the start of the simulation, all the actions about which the agent has a praiseworthiness value to give. These ideals can be praiseworthy (their praiseworthiness value is positive) or blameworthy (their praiseworthiness value is negative). The emotions coming from these ideals are pride, shame, admiration and reproach. Their initial intensity is computed according to Equation , with O the openness value from the OCEAN model.
Finally, combined emotions about ideals are built upon pride, shame, admiration and reproach. They appear when joy or sadness appears together with an emotion about ideals. They are gratification, remorse, gratitude and anger. Their initial intensity is computed according to Equation , with Em' i (P) the emotion about ideals and Em" i (P) the emotion about beliefs. In order to keep the initial intensity of each emotion between 0 and 1, each equation is truncated to that interval if necessary.

The initial decay value for each of these twenty emotions is computed according to the same Equation , with ∆t a time step ensuring that an emotion does not last longer than a given time.

Updating social relations
When an already known agent is perceived (i.e., there is already a social link with it), the social relation with this agent is updated automatically by BEN. This update is based on the work of Ochs et al. and takes the agent's cognitive mental states and emotions into account. In this section, the automatic update of each variable of a social link R i,j (L,D,S,F,T) by the architecture is described in detail; the trust variable of the link is, however, not updated automatically.

According to Ortony ( ), the degree of liking between two agents depends on the valence (positive or negative) of the emotions induced by the corresponding agent. In the emotional model of the architecture, joy and hope are considered as positive emotions (satisfaction and relief automatically raise joy with the emotional engine) while sadness and fear are considered as negative emotions (fear confirmed and disappointment automatically raise sadness with the emotional engine). So, if an agent i has a positive (resp. negative) emotion caused by an agent j, this will increase (resp. decrease) the value of liking in the social link from i concerning j.

Moreover, research has shown that the degree of liking is influenced by the solidarity value (Smith et al. ). This may be explained by the fact that people tend to appreciate people similar to them.

The computation formula is described by Equation , with mPos the mean value of all positive emotions caused by agent j, mNeg the mean value of all negative emotions caused by agent j, and α_L a coefficient depending on the agent's personality, indicating the importance of emotions in the process, which is described by Equation .

Keltner & Haidt ( ) and Shiota et al. ( ) explain that an emotion of fear or sadness caused by another agent represents an inferior status. But Knutson ( ) explains that perceiving fear and sadness in others increases the sensation of power over those persons.

The computation formula is described by Equation , with mSE the mean value of all negative emotions caused by agent i to agent j, mOE the mean value of all negative emotions caused by agent j to agent i, and α_D a coefficient depending on the agent's personality, indicating the importance of emotions in the process, which is described by Equation .
As explained in the formalism exposed in section . , the solidarity represents the degree of similarity of desires, beliefs and uncertainties between two agents. In BEN, the evolution of the solidarity value depends on the ratio of similarity between the desires, beliefs and uncertainties of agent i and those of agent j. To compute the similarities and oppositions between agent i and agent j, agent i needs to have beliefs about agent j's cognitive mental states. Then it compares these cognitive mental states with its own to detect similar or opposite knowledge.
On top of that, according to de Rivera & Grinkis ( ), negative emotions tend to decrease the value of solidarity between two people. The computation formula is described by Equation , with sim the number of cognitive mental states shared by agent i and agent j, opp the number of opposite cognitive mental states between agent i and agent j, NbKnow the number of cognitive mental states in common between agent i and agent j, mNeg the mean value of all negative emotions caused by agent j, α_S1 a coefficient depending on the agent's personality, indicating the importance of similarities and oppositions in the process, described by Equation , and α_S2 a coefficient depending on the agent's personality, indicating the importance of emotions in the process, described by Equation .

In psychology, emotions and cognition do not seem to impact familiarity. However, Collins & Miller ( ) explain that people tend to be more familiar with people they appreciate. This notion is modeled by grounding the evolution of the familiarity value on the liking value between two agents. The computation formula for the evolution of the familiarity value is defined by Equation .
All the equations have been elaborated such that the values remain between − and for liking and dominance and between and for solidarity and familiarity, in accordance with the formalism exposed in Section . .
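The shape of these updates can be sketched in a few lines of Python. This is a minimal sketch: the exact coefficients and equations in BEN differ, and the function names, the linear form and the handling of negative liking in the familiarity update are illustrative assumptions:

```python
def clamp(x, lo, hi):
    """Keep a value within the bounds imposed by the formalism."""
    return max(lo, min(hi, x))

def update_liking(liking, m_pos, m_neg, alpha_l):
    """Liking grows with the mean positive emotions caused by the other
    agent and shrinks with the negative ones; alpha_l is a
    personality-dependent coefficient. Kept in [-1, 1]."""
    return clamp(liking + alpha_l * (m_pos - m_neg), -1.0, 1.0)

def update_familiarity(familiarity, liking):
    """Familiarity grows with appreciation (Collins & Miller); only
    positive liking contributes in this sketch. Kept in [0, 1]."""
    return clamp(familiarity * (1.0 + max(0.0, liking)), 0.0, 1.0)
```

The clamping at the end of each update is what keeps the values within the bounds required by the formalism, whatever the emotions involved.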

The trust value does not evolve automatically in BEN, as there is no clear and automatic link with cognition or emotions. However, this value can evolve manually, in particular through the sanctions and rewards attached to social norms, where the modeler can indicate a modification of the trust value during the enforcement process described in Section . .

Making decisions .
The third part of the architecture, number on Figure , is the only mandatory one, as it is where the agent makes a decision. A cognitive engine can be coupled with a normative engine to choose an intention and a plan to execute. The complete engine is summed up in Figure and described by Algorithm .
This decision-making process may be divided into seven steps:
• Step : the engine checks the current intention. If it is still valid, the intention is kept so the agent may continue to carry out its current plan.
• Step : the engine checks whether the current plan or norm is still usable, depending on its context.
• Step : the engine checks whether the agent obeys an obligation, taken from the obligations corresponding to a norm with a valid context in the current situation and with a threshold level lower than the agent's obedience value as computed in Section . .
• Step : the obligation with the highest priority is taken as the current intention.
• Step : the desire with the highest priority is taken as the current intention.
• Step : among the plans or norms corresponding to the current intention with a valid context, the one with the highest priority is selected as the current plan/norm.
• Step : the behavior associated with the current plan/norm is executed.

Algorithm : Decision making process in BEN

Steps , and do not have to be deterministic; they may be probabilistic. In this case, the priority value associated with obligations, desires, plans and norms serves as a probability.
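The steps above can be summed up in a short Python sketch. The agent is represented as a plain dict; all field names, and the deterministic max-priority choice, are illustrative assumptions rather than BEN's actual implementation:

```python
def decide(agent):
    """One decision cycle following the seven steps of the engine."""
    # Steps 1-2: keep the current plan if the intention and the
    # plan/norm context are both still valid.
    if agent.get("intention_valid") and agent.get("plan_valid"):
        return agent["current_plan"]
    # Step 3: gather obligations with a valid context whose threshold is
    # below the agent's obedience value.
    obeyed = [o for o in agent["obligations"]
              if o["context"] and o["threshold"] < agent["obedience"]]
    if obeyed:
        # Step 4: the highest-priority obligation becomes the intention.
        intention = max(obeyed, key=lambda o: o["priority"])["name"]
    else:
        # Step 5: otherwise the highest-priority desire becomes the intention.
        intention = max(agent["desires"], key=lambda d: d["priority"])["name"]
    # Step 6: select the plan or norm answering that intention with a
    # valid context and the highest priority.
    candidates = [p for p in agent["plans"]
                  if p["intention"] == intention and p["context"]]
    chosen = max(candidates, key=lambda p: p["priority"])
    # Step 7: the behavior of the chosen plan/norm would be executed here.
    return chosen["name"]

# A hypothetical agent with one applicable obligation and one desire:
agent = {
    "intention_valid": False, "plan_valid": False, "obedience": 0.5,
    "obligations": [{"name": "follow_signs", "context": True,
                     "threshold": 0.3, "priority": 2.0}],
    "desires": [{"name": "flee", "priority": 1.0}],
    "plans": [
        {"name": "go_to_restrooms", "intention": "follow_signs",
         "context": True, "priority": 1.0},
        {"name": "run_to_exit", "intention": "flee",
         "context": True, "priority": 2.0},
    ],
}
```

With an obedience value above the obligation's threshold, the agent obeys the norm; lowering its obedience makes it fall back to its own desires, which is where the probabilistic variant would draw according to priorities instead of taking the maximum.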
Creating a temporal dynamic .
The final part of the architecture, number on Figure , is used to give a temporal dynamic to the agent's behavior, which is useful in a simulation context. To do so, this module automatically degrades mental states and emotions and updates the status of each norm.
The degradation of mental states consists in reducing their lifetime. When the lifetime reaches zero, the mental state is removed from its base. The degradation of emotions consists in reducing the intensity of each stored emotion by its decay value. When the intensity of an emotion reaches zero, the emotion is removed from the emotional base.
Finally, the status of each norm is updated to indicate whether the norm was activated (whether its context held) and whether it was violated (the norm was activated but the agent disobeyed it). Also, a norm can stay violated for a certain time; this duration is updated and, when it reaches zero, the norm is not considered violated anymore.
These last steps enable the agent's behavioral components to evolve automatically through time, leading agents to forget a piece of knowledge after a certain amount of time and creating dynamics in their behavior.
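This degradation mechanism can be sketched as follows (a minimal Python sketch; field names and the one-unit lifetime decrement are illustrative assumptions):

```python
def step_time(agent):
    """Temporal-dynamics sketch: each mental state loses one unit of
    lifetime and is forgotten at zero; each emotion loses its decay
    value and is removed once its intensity reaches zero."""
    for m in agent["mental_states"]:
        m["lifetime"] -= 1
    agent["mental_states"] = [m for m in agent["mental_states"]
                              if m["lifetime"] > 0]
    for e in agent["emotions"]:
        e["intensity"] -= e["decay"]
    agent["emotions"] = [e for e in agent["emotions"]
                         if e["intensity"] > 0]
```

Calling this once per simulation step is what makes beliefs fade and emotions cool down without any explicit action from the modeler.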
The contribution presented in this paper is articulated around three points: a global architecture, the aggregation of known social and affective theories, and an implementation of the global architecture into a behavior model to use BEN on a real case study.
A general behavioral architecture .
The BEN architecture shown in Figure represents a global and general behavioral architecture for the development of agents simulating human actors. BEN is a proposition that connects multiple affective and social features of the agent's behavior, making it possible for these social features to interact with each other. Nevertheless, the architecture can easily be adapted to a specific context or refined to integrate new elements without modifying the overall structure.
The main advantage of BEN is its modularity; even if all the processes are explained in detail in this section, they are not all mandatory. A modeler may unplug any optional part of the architecture without preventing the rest from working properly. For example, if someone estimates that norms and obligations have nothing to do with the case being studied, the processes linked to norms and obligations can be unplugged without stopping the architecture from functioning.
This modularity means that all the social features are related but not dependent on each other. For example, emotions can be used in the context definition of plans or norms, linking all these notions together and thus creating a richer behavior.

Assumptions about social and affective theories
In order to implement the general architecture into a behavioral model, we had to choose particular theories to support the various processes. As our objective is to build an architecture usable by social scientists and experts of various fields, and not only by computer researchers, we integrated theories already used by the social simulation community (Bourgais et al. ). .
In detail, the cognitive engine used to make a decision is based on BDI (Bratman ), the representation and automatic creation of emotions is based on the OCC model (Ortony et al. ) and its formalization with BDI (Adam ), the emotional contagion process is based on the ASCRIBE model (Bosse et al. ), and the representation and manipulation of social relations is based on the work of Svennevig ( ). Finally, the personality of the agent, used for the parametrization of the overall architecture, is based on the OCEAN model (McCrae & John ), which is consensual in the community (Eysenck ).
As experts are familiar with these social and affective theories, we assume it is easier for them to use BEN to define an agent simulating a human actor, as they already manipulate the concepts encompassed and used by the architecture.

Implementation of a behavioral model .
To use BEN on a real life scenario, we have to instantiate the theories used with computation formulas where they were missing. To do so, we mostly rely on existing works (Adam ; Ochs et al. ; Lhommet et al. ), adapting them when needed, which is discussed in detail in this section.

The various equations proposed for computing the parameters, intensity and decay of emotions or the evolution of social relations were all developed to be linked with the fewest dimensions of OCEAN in the simplest way, to respect the principle of parsimony. Different people do not have the same value for these parameters, which is explained by the various personalities observed in a population. Also, one of the objectives was to reduce the number of parameters entered by the modeler to ease the parametrization phase of a model using BEN.
Most of the relations with OCEAN dimensions are linear to stay simple. However, linear relations were not satisfactory for parameters such as obedience, as people tend to have a high value of obedience even with an average personality. This is why the proposed equation uses square roots.
The equations for the computation of initial intensities of emotions presented in Section . could be written in a simpler form; they were presented this way to highlight their construction. These equations are composed of a term directly related to the cognitive mental states involved, weighted by a personality dimension balanced around its neutral value of . , as explained in Section . . Also, for each emotion, only one personality dimension was retained: the one most closely related to the meaning of the emotion, once again keeping the parsimony principle in mind. For example, the agreeableness dimension of personality is used with emotions related to the emotions of others (happy for, sorry for, resentment and gloating).
The equations for the evolution of the dimensions of social relations were developed to respect the underlying psychological notions, but also with the intent to keep them within their limits (− and for liking and dominance, and for solidarity and familiarity). Personality dimensions were included once again to reproduce the fact that these evolutions differ for persons with different personalities. The neuroticism dimension is used for the part related to emotions, while the openness dimension is used for the part related to the similarities and differences of mental states between two agents (someone open-minded will give less importance to this aspect compared to someone narrow-minded).
The implementation we made of the chosen theories may be discussed or changed by any expert user. For example, the emotional engine can be replaced with one based on another theory, and the same goes for the update of social relations or the cognitive engine. However, these modifications would not affect the overall structure of the general architecture.

The Kiss Nightclub Case
The BEN architecture has been implemented and integrated in the GAMA modeling and simulation platform (Taillandier et al. ). This platform, which has met a growing success over the last few years, aims to support the development of complex models while ensuring that non-computer scientists get easy access to high-level, otherwise complex, operations.
In this section, one of the case studies using the GAMA implementation of BEN to model human behavior is presented. The case study concerns the evacuation of a nightclub in Brazil, the Kiss Nightclub. The main goal of this example is to show the richness of behavior made possible by BEN, while keeping high-level explanations. The complete model is available on OpenABM (https://www.comses.net/codebases/ ca fbb -e a-ea -a fd ba f d /releases/ . . /).

Presentation of the case .
On January th of , a fire started inside the Kiss Nightclub in Santa Maria, Rio Grande do Sul, Brazil, causing the death of people. Many factors caused this tragedy: between and persons were in the nightclub while it had a maximum capacity of people, there was only one exit door, there was no alarm, and the exit signals were broken, indicating the restrooms instead of the exit. The vast majority of people who died were found in the restrooms, killed by the smoke (Atiyeh ).
This case has been studied before (Silva et al. ) with a simple model for the agents' behavior. The aim of the authors was to show that respecting the safety measures could have helped reduce the number of casualties. In this paper, the goal is to show how this case can be modeled with the BEN architecture and how BEN can help in getting a result closer to the real events. BEN also makes it possible to incorporate more complex behaviors, closer to human reactions in this situation, thanks to its affective and social features. Another goal is to show that BEN runs on a simulation platform with an acceptable computation time.
In order to be as close as possible to the real case, the club's blueprint at the time of the tragedy was reproduced. The environment of the simulation is shown in Figure  .
In this case, the focus is on the propagation of the smoke, as it caused more than per cent of the casualties. The spread of this smoke has been based on a study made by the French government to model hazards due to fires (Chivas & Cescon ). The main idea is that smoke spreads from its initial point at a constant pace, filling the entire nightclub within to minutes. The floor of the club is divided into square cells, which all have a percentage representing how much smoke is in the volume above them. The visual result of the spread of smoke, two minutes after the start of the fire, is shown in Figure . According to the same report (Chivas & Cescon ), it takes about seconds to faint because of this kind of smoke. The simulation was configured to respect this time, so agents are not automatically killed when touched by the smoke.
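A minimal version of such a cell-based spread can be sketched as follows. The rate and the four-neighbour seeding are illustrative assumptions for this sketch, not the values or the exact propagation model of the report:

```python
def spread_smoke(grid, rate=0.1):
    """One spread step on a grid of cells holding a smoke percentage in
    [0, 1]: every smoky cell thickens at a constant pace and seeds its
    four neighbours, so the smoke front advances one cell per step."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > 0:
                new[r][c] = min(1.0, grid[r][c] + rate)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        new[nr][nc] = max(new[nr][nc], rate)
    return new
```

Iterating this step from the fire's initial cell produces a front that fills the whole grid at a constant pace, which is the behavior the model needs to calibrate against the report's timing.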

Creating agents' behavior using BEN .
Evacuating a nightclub on fire is not a common situation. It involves not only cognition to evaluate the context and make a decision but also emotions, as people react according to their fear of fire. Given that a nightclub gathers a lot of people who act together, emotional contagion and social relations have to be taken into account. Finally, the evacuation plan imposed by the authorities can be modeled through the notions of laws, obligations and norms.
The main focus of this example is the behavior of people evacuating the nightclub, given that the objective is to mimic the real situation. Nevertheless, there are not enough testimonials from people who lived through this tragedy about their behavior during the evacuation, so their actions are hypothetical. This means that the agents' behavior is based on hypotheses about the behavior of real people in this situation.
At the beginning of the simulation, the agents' initial knowledge is of three kinds: beliefs about the world, initial desires and social relations with friends. Also, each agent has a personality.
Once the agent is up to date with its environment, its overall knowledge has to adapt to what was perceived. Each agent i is likely to have a social relation with agent j, representing its friend. .
With the execution of inference rules and laws, each agent creates emotions through the emotional engine. In this case, the presence of an uncertainty about the fire (added through the inference rule concerning the belief about smoke), combined with the initial desire that there is no fire, produces an emotion of fear whose intensity is computed depending on the quantity of smoke perceived.
Once the agent acquires the desire to flee (because it perceived the fire or because its fear of a fire reached a great enough intensity), it follows action plans and norms to answer its intention to flee, depending on the context perceived. Table gives examples of laws and inference rules; for instance, an inference rule states that an agent which has a fear emotion about the fire with an intensity greater than a given threshold adds the desire to flee.

Conditions | Actions | Comments
The agent has good visibility and a belief about the exact location of the exit door | The agent runs to the exit door | In this plan, the agent runs to the exit door following the shortest path.
The agent has good visibility but no belief about the location of the door | The agent follows the agent in its field of view with the highest trust value among its social relations | This norm works with the trust value of social relations created during the simulation.
The agent has bad visibility and the obligation to follow signs | The agent goes to the restrooms | In this norm, the agent complies with the law that indicates to follow exit signs.
The agent has bad visibility and a belief that the exit signs are wrong | The agent moves randomly | In this plan, the agent moves randomly in the smoke.

Table : Action plans and norms answering the fleeing intention
The social relation defined with a friend may also be used to define plans to help one's friend if it is lost in the smoke. This plan consists in finding the friend and telling it the location of the exit door.
The variety in agent personalities makes it possible to obtain heterogeneous behaviors, as the computed intensities of the fear emotion are different. This means that two agents placed in the same situation will not decide to flee at the same time, some going out early and some waiting a little longer.
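This heterogeneity can be illustrated with a toy computation; the linear scaling by neuroticism and the threshold value are assumptions made for this example, not BEN's actual fear equation:

```python
def flee_step(smoke_levels, neuroticism, threshold=0.5):
    """Return the first time step at which the agent's fear, taken here
    as the perceived smoke scaled by a neuroticism term balanced around
    0.5, crosses the fleeing threshold; None if it never does."""
    for step, smoke in enumerate(smoke_levels):
        fear = min(1.0, smoke * (1.0 + (neuroticism - 0.5)))
        if fear >= threshold:
            return step
    return None

# Same smoke exposure, two personalities: the anxious agent flees first.
smoke = [0.1 * t for t in range(10)]
```

With identical smoke perception, a high-neuroticism agent crosses the threshold several steps before a low-neuroticism one, producing exactly the staggered departures described above.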
By multiplying perceptions, inference rules and plans or norms, it is possible to create a wide variety of behaviors, from agents fleeing when seeing the fire to agents lost in smoke or going back into the smoke to help their friends. Also, BEN enables agents to react to a change in their environment, instead of continuing with a behavior that makes no sense in the current context.

Results and discussion
At the start of the simulation, agents are placed randomly inside the nightclub. Indeed, there is no information available about the precise location of each and every occupant of the nightclub. However, since the place was overcrowded, using a random location for them seems to be an acceptable approximation. Also, it is assumed that people were going to the club with friends. The precise initial number of people inside the club at the moment of the tragedy is not known, but reports indicate there were between and people (Atiyeh ). Thus, three cases were simulated: , and people initially in the club.
For each case, simulations were run with a new random location for each agent at each start. The simulations were run on an Intel Core i with Gb of RAM. Figure shows a visual result of the simulation after a minute and a half of simulated time. The different colors of triangles represent the various behaviors adopted by the agents: white agents are not aware of the danger, green agents go to the exit, yellow agents go in the exit direction, blue agents comply with the law, red agents are lost in smoke, brown agents follow another agent and purple agents are going through the smoke as they remember the location of the exit. The results are summed up in Table : for each case, the mean number of agents dead because of the smoke as well as a standard deviation was computed. The OCEAN parameters of each agent were randomly initialized with a Gaussian distribution centered on . , the mean value for each personality dimension. The perception of each agent has been based on real values for the field of view and the vision amplitude. The only parameters tuned in the model are the thresholds representing the quantity of smoke that starts the evacuation and the quantity of smoke that decreases the field of view and forces the agent to obey the law.
Table shows that the simulation is able to get statistically close to the real case, in which there were victims. However, the main result concerns the credibility of the simulation, which may be observed in Figure or on the video of the simulation (at the following address: https://github.com/mathieuBourgais/ExempleThese). Agents show various behaviors in almost similar situations, which is explained by their various personalities, as in real life where two persons in the same situation will not necessarily make the same decision, depending on their personalities. It is also possible to observe behavior patterns: some people leave early because they perceived the fire while others are still dancing at the beginning of the simulation; the first persons evacuating are joined by people between them and the exit thanks to emotional contagion; people still in the nightclub late are trapped in smoke and have to follow the signs which, in this particular case, point to the toilets, leading to the vast majority of deaths in the simulations; people getting out of the smoke and perceiving the exit change their action and flee from the nightclub. BEN makes it possible to explain and express these behaviors with high-level concepts such as the personality, which enables the creation of emotions with different intensities, different thresholds of obedience, and so on.
Table shows the results obtained with initial agents. This table shows that personality has an important impact on the results: in one case, most of the people flee at the beginning because of a huge emotional contagion; in the other case, most of the people flee late because of low levels of emotions and emotional contagion, which simulates running the model with the modules related to emotions and emotional contagion switched off.
Table indicates the mean computation time for each step in ms. This means that, for people initially in the club, it takes approximately ms to compute the behavior of all the agents still in the simulation at each step, representing second of simulated time. This result shows that BEN enables the simulation of hundreds of agents with almost all the features possible, within a reasonable computation time. .
As explained previously, the agents' behavior developed in this example is based on hypotheses, but only obvious hypotheses were retained. The BEN architecture makes it possible to translate these obvious hypotheses directly into behaviors while keeping their high-level descriptions. These high-level concepts are then supported by low-level plans which describe simple actions, retaining the high-level explanation of each agent's behavior.
The evacuation of the Kiss Nightclub has previously been studied by other researchers with an agent-based simulation (Silva et al. ) featuring a simpler behavior model: agents are either happy and dance, or they are scared by the start of the fire and flee. However, the results obtained do not statistically reproduce the real case, with more than deaths in the simulation against the real casualties. Our approach seems better on this particular case, with results at least closer to the real events.

Other approaches already exist for the simulation of crowd evacuations with agent-based models, describing the behavior with social forces (Pelechano et al. ) or social contagion (Bosse et al. ) with promising results. There are also other ways to model emergency evacuations without using agent-based simulations (Bakar et al. ). However, as BEN is grounded on folk psychology concepts (Norling ), we assume it may define a more credible and more explainable behavior than the cited approaches.
Another strength of BEN is its capability to define a large variety of simulated behaviors. Although we are not experts in emergency evacuation, we were able to create variations so the fleeing behavior seems more believable. This point is supported by previous works where BEN has been compared to a finite state machine (Adam et al. ) and has been used by non-programming researchers to test its ease of use (Taillandier et al. b).
The main contribution of BEN is the explainability gained for social simulations in general. The behavior of each agent is expressed with psychological terms instead of mathematical equations, which is easier to understand and to explain, from the modeler's point of view (Broekens et al. ; Kaptein et al. ). Besides, the definition of a credible behavior is eased thanks to the modularity of the architecture where only useful processes in the context of the studied case have to be implemented.

Conclusion
This paper presents BEN, an agent architecture featuring cognition, emotions, emotional contagion, personality, social relations and norms for social simulations. All these features act together to reproduce the human decision-making process. This architecture was built to be as modular as possible in order to let modelers easily adapt it to their case studies.
This architecture relies on a formalization of all its features, developed according to psychological theories with the aim of standardizing the representation of the concepts as much as possible. BEN has then been implemented in the GAMA platform to ensure its usability. An example concerning the evacuation of a nightclub showed that, even with a simple model, BEN can produce a great variety of behaviors while remaining credible. This example also showed that using BEN only requires one to translate high-level hypotheses into behavior with the same high-level concepts.
BEN also represents a solid base, which may be extended in the future. Other social features such as culture or experience may be integrated into the architecture. One could even imagine a future architecture based on BEN, offering multiple theories and social features and asking users to select their own combinations, which could be used to test psychological hypotheses.
Finally, as BEN is a modular architecture, it has already been used in various projects where only a few parts of the architecture were put into use. The cognitive engine has helped study land use change in the Mekong delta (Truong et al. ), cognition and emotions are used by the SWIFT project, which studies evacuation during bushfires in Australia (Adam et al. ), cognition and social relations are used by the Li-BIM project (Taillandier et al. a), which is about household energy consumption, and BEN has been used in a more complete way for the evacuation of a nightclub on fire in the USA (Valette & Gaudou ). These different examples show that BEN has already been used partially in various contexts, demonstrating its modularity and independence from the case studied.