©Copyright JASSS


Deborah Vakas Duong and John Grefenstette (2005)

SISTER: a Symbolic Interactionist Simulation of Trade and Emergent Roles

Journal of Artificial Societies and Social Simulation vol. 8, no. 1
<http://jasss.soc.surrey.ac.uk/8/1/1.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 20-May-2004 Accepted: 13-Sep-2004 Published: 31-Jan-2005


* Abstract

SISTER, a Symbolic Interactionist Simulation of Trade and Emergent Roles, captures a fundamental social process by which macro level roles emerge from micro level symbolic interaction. The knowledge in a SISTER society is held culturally, suspended in the mutual expectations agents have of each other based on signs (tags) that they read and display. In this study, this knowledge includes how to create composite goods. The knowledge of coordinating their creation arises endogenously, and a symbol system emerges to denote these tasks. In terms of information theory, the degree of mutual information between the agents' signs (tags) and their behavior increases over time. The SISTER society of this study is an economic simulation, in which agents have the choice of growing all the goods they need by themselves, or concentrating their efforts on making more of fewer goods and trading them for the others. Each agent induces the sign of an agent to trade with while, at the same time, inducing a sign to display. Through this double induction, the signs come to mean sets of behaviors, or roles. A system of roles emerges, holding the knowledge of social coordination needed to distribute tasks among the agents.
Keywords:
Agent-Based Model, Computational Social Theory, Economics Simulation, Symbolic Interactionism, Emergent Language, Sociological Roles

* Motivation

1.1
The motivation of this study is to distill fundamental social processes of society through a computational integration of social theory. The specific process that this study seeks to distill computationally is the formation of roles. According to Kluver, role formation is a fundamental process: "the motor of socio-cultural evolution is this: creative individuals generate new knowledge that becomes institutionalized in certain roles" (Kluever and Stoica 2003). The computer simulation of this study uses principles of social science to distill this process, and then demonstrates that these roles serve to create and coordinate knowledge in society, whether the society is a human society or a society of intelligent agents. The principles of society are mainly drawn from the field of interpretive social science, particularly from the field of symbolic interactionism in microsociology (Berger and Luckman 1966). Thus the title of this computer simulation is SISTER, a Symbolic Interactionist Simulation of Trade and Emergent Roles.

* Purpose

2.1
The primary goal of this study is to demonstrate that the emergence of a system of roles is a sufficient condition for the creation and coordination of knowledge in an artificial society. In particular, this study demonstrates the emergence of a system of roles reflecting a division of labor in a bartering society. Each agent in this multi-agent simulation comes to perform a set of tasks that other agents expect it to perform based on an endogenously developed role. Each role is part of a system of roles that agents perform for the good of the whole. A system of roles is to be differentiated from a system of tasks allocated on an individual level: roles are sets of tasks that are independent of the particular individuals performing them. Agents can recognize other agents on the basis of their role, or on an individual basis. The hypothesis of this study is that agent societies in which agents recognize each other based on roles have a higher capacity to create and coordinate knowledge than agent societies in which agents recognize each other only as individuals. The particular type of knowledge that they create, coordinate and preserve is the knowledge of how to make composite goods, or goods that are composed of other goods. The amount of knowledge that a society holds is measured as mutual information, a concept from information theory. It is shown that societies with role-based recognition achieve greater utilities from trade than societies with individual-based recognition, and that this is correlated with the amount of knowledge in the society.
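The mutual-information measure used above can be sketched as a simple estimator over observed (sign, behavior) pairs. This is an illustrative sketch, not the paper's actual instrumentation; the function name and the representation of signs and behaviors as discrete labels are assumptions:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(Sign; Behavior) in bits from observed (sign, behavior) pairs."""
    n = len(pairs)
    joint = Counter(pairs)                    # empirical joint distribution
    signs = Counter(s for s, _ in pairs)      # marginal over signs
    behaviors = Counter(b for _, b in pairs)  # marginal over behaviors
    mi = 0.0
    for (s, b), count in joint.items():
        p_sb = count / n
        mi += p_sb * log2(p_sb / ((signs[s] / n) * (behaviors[b] / n)))
    return mi

# When each sign perfectly predicts a behavior, mutual information is maximal:
print(mutual_information([("A", "farm"), ("A", "farm"),
                          ("B", "cook"), ("B", "cook")]))  # -> 1.0
```

As signs and behaviors become statistically dependent over the course of a run, this quantity rises; if signs are displayed at random with respect to behavior, it stays near zero.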

* The Basis of SISTER in Social Theory

3.1
Parsons theorizes that the source of order in society is the "double contingency" (Parsons 1951). That is, each agent in society is simultaneously a subject of interpretation and an interpreter: it is at once an interpreter of symbols and itself a symbol. Luhmann also explains social order with Parsons' double contingency (Luhmann 1984). He theorizes that social structures are structures of mutual expectations, or what he called "expectation expectation" (Luhmann 1984). "Expectation expectation" is the concept of agents taking into account what other agents expect of them in their behaviors. Luhmann claims that all macro level social order ultimately comes from the expectation expectations of individuals. Social order emerges from this double contingency by means of a shared symbol system (Parsons 1951).
3.2
When Parsons and Luhmann theorize that social order ultimately comes from individual interactions, they infer a micro-macro link, with expectation expectations on the micro level, and social structure on the macro level. Micro-macro integration is the subfield of sociology which examines how phenomena on the micro level, such as expectation expectations, can emerge into macro level social structures. Our explorations of the process of social emergence are well informed by the theorists of micro-macro integration.
3.3
It is a common mistake to explain macro level phenomena with simple aggregation on the micro level in simulations of social emergence. According to Coleman, simple aggregation is not an adequate micro-macro link (Coleman 1994). For example, Coleman argues that Weber's The Protestant Ethic and the Spirit of Capitalism (Weber 2001) is a poor micro-macro theory because it tries to explain a social system, capitalism, with an aggregation of propensities to conduct derived from religious beliefs, such as thrift. Capitalism is not a property of a homogeneous group of people; rather it is a system of interdependencies between heterogeneous groups. Thus, an adequate micro-macro theory must address the process by which a homogeneous group of people differentiates into a system of relations between heterogeneous groups (Coleman 1994). Parsons believes that double contingency on the micro level results in social structures on the macro level by means of a shared symbol system. To combine this process with Coleman's micro-macro theory, a system of relations between classes of persons has to be included. These ideas may be combined if a symbol system that denotes relations between roles emerges from micro level double contingency. This makes sense from the standpoint of other theorists as well. Phenomenologist Peter Berger believes that roles and symbols of roles are important to the creation of social order (Berger and Luckman 1966). SISTER serves to integrate these disparate social theories into one simple socio-cultural engine.
3.4
George Ritzer is one of the major micro-macro integration theorists in sociology. He argues that micro-macro integration cannot be dealt with apart from the subjective-objective continuum (Ritzer 1999). That is, any theory of micro-macro integration in sociology must include both subjective and objective components. In the social sciences, theories of the subjective component of interaction are found in interpretive social science, or the subfield of symbolic interactionism in sociology. Theories of the objective component of interaction are found in microeconomics. It makes sense to include both as a way of grounding interactions based on symbols in the practicality of life, motivating their existence and giving them meaning. SISTER includes ideas from microeconomics and microsociology in its model of a fundamental social process.
3.5
Microsociology and interpretive social science in general have much to say about Parsons' double contingency and shared symbol systems. They describe processes of symbolic interaction and interpretation of symbols. Interpretive social science comes from the school of philosophy called Hermeneutics (Winograd 1997). It influences sociology in the symbolic interactionist and phenomenological schools, and economics in the Austrian school. Hermeneutics is a school of philosophy arising out of questions that translators of the Bible had about preserving the meaning of texts across different circumstances of life. Hermeneutic thinkers see a paradox about people. People can only understand the world through their own context: information cannot be directly copied from one person to another, but is understood differently by those in different contexts. Yet they still come to understand each other through institutions and language. People invent their own meanings, yet they share meaning. This hermeneutic paradox is essentially a restatement of Parsonian double contingency, or "expectation expectations" as Luhmann calls it. As the expectation expectations that people have of their symbol system turn into a consensus, the subjective interpretations of individuals become objective. In other words, the meanings of signs become less a matter of interpretation and more a coercive force that must be understood for interaction to take place.
3.6
In sociology, the symbolic interactionists look deeply into the process by which expectation expectations give rise to a solution to the hermeneutic paradox by means of a consensus on a shared symbol system. Symbolic interactionists claim that institutions and language are generated from the interplay of subjective perspectives on the individual level (Berger and Luckman 1966). In other words, shared meaning is an emergent property. The lower order process from which meanings emerge is the display and reading of signs. According to phenomenologist Peter Berger, this occurs in a three phase dialectic of externalization, objectivation and internalization. When we act upon our ideas, we externalize. Once we have done something in the public realm, it becomes separate from ourselves: this is the process of objectivation. Once it is separate from ourselves, it can come back and change ourselves and others: that is internalization. These objectivations are signs that tell us what to expect from each other. They become habitualized and interlocked into a web of meanings that make sense together. This is how we create our ideas. In the process, we create ourselves.
3.7
An important special case of this is our categorizations of people, called roles. When our ideas about people are objectified, and these objectivations change others, this is the process of self-fulfilling prophecy. The lower level process is that we change ourselves to meet others' expectations of us. From this emerge different types of people, or different roles. Roles are essential to establishing social order (Berger and Luckman 1966). SISTER models this process by which roles are formed and the hermeneutic paradox is resolved.

* Similar Work

4.1
The literature on emergent communication is relevant to symbolic interactionist ideas about the emergence of social order from the reading and display of signs. However, most work in lexicon development assumes that an ontology already exists, and agents try to map random signs into this ontology (Perfors 2002). Oliphant and Batali have a model in which agents try to send signals they hear most often sent or most often interpreted for a meaning (Oliphant and Batali 1997). However, since the ontology already exists, a meaningful symbol system itself does not emerge, only a mapping. In SISTER, the ontology, in this case the set of social categories, emerges endogenously. Other work is more grounded in meaningful activities. For example, Werner and Dyer have a model of blind males who move towards stationary female listeners by interpreting the meaning of the sounds they emit (Werner and Dyer 1992). The ones that reproduce are the ones that develop a shared lexicon. In the end, the entire population comes to share a lexicon. However, these agents reproduce in the same genetic algorithm population. Since convergence is a property of genetic algorithms, the method used ensures that they must eventually have the same lexicon. Agents are not closed with respect to meaning, but are forced to copy each other. It is not meaningful to ask why agents come to agree when the algorithm forces them to agree.
4.2
Luc Steels has a model of robot communication in which agents are closed with respect to meaning. In this model, robots with separate induction mechanisms develop shared lexicons by indicating objects on a table to each other and grunting (Steels 1999). They demonstrate a solution to the hermeneutic paradox in that they come to share meaning despite their differing perspectives, in this case, different views of objects on a table. However, consensus is not reached endogenously for large numbers of agents.
4.3
There is a body of work in which agents identify each other by means of tags. The original work which used tags in an artificial life program is John Holland's Echo model (Holland 1975). Echo agents identify each other by a tag. However, they do not incorporate true coevolution, in which every agent uses an entire genetic algorithm to evolve its tag and its other behaviors. When a single genetic algorithm is used, agents are all chromosomes of the same genetic algorithm. This is problematic for the same reason that lexicons evolved with a single genetic algorithm are problematic: agents are not closed with respect to meaning, but rather copy each other, and converge. This prevents the formation of a system in which agents are free to differ. Similarly, Epstein and Axtell have a model of the transmission of culture through tags, which basically involves the spreading of culture and behavior through the copying of tags of neighboring agents (Epstein and Axtell 1996). In Axelrod's model of cultural influence (Axelrod 1997), agents decide to act with other agents based on differences of cultural tags. If they act with each other, they become more alike. A model of social rules which forces agents to be the same does not explain the formation of a system in which agents are sometimes the same, and sometimes different, as micro-macro theorists advocate.
4.4
Hales used tags for grouping in an iterated prisoner's dilemma problem, showing that agents that are programmed to act with each other if they are part of the same group will cooperate (Hales 2002). Riolo, Cohen, and Axelrod extended this model to show that agents will donate to other agents that wear a sign similar to their own (Riolo, Cohen, and Axelrod 2001). This shows that signs come to indicate group identifications. However, the agents were still homogeneous in that they all behaved in the same way, with loyalty towards their own group. Hales and Edmonds have a model that extends Riolo, Cohen, and Axelrod's model of indirect reciprocity. Hales and Edmonds give agents different skills, and show that agents donate resources that they do not have the skill to use to agents with a similar tag (Hales and Edmonds 2003). These agents differ in that they have different skills; however, they were given these skills, and did not differentiate into them.
4.5
The same goes for the work of Conte and Castelfranchi (Conte and Castelfranchi 1995), which uses models of heterogeneous agents that are fixed throughout the simulation. These models use more coercion to cause social order, whether that coercion be through individual enforcement of order or through law enforcement of order. They are not as purely organic as order arising from Luhmann's "expectation expectations," in the natural symbolic interaction of everyday life (Luhmann 1984). These models do not emulate the interpretive paradigm, because they are based on imitation. According to Conte and Dignum, in these models, "conventions, norms, regularities, social structures and patterns of any sort (segregation, discrimination, opinion formation) are said to emerge from agent's imitation of either the most successful, the most frequent, or simply the closest behaviors" (Conte and Dignum 2003). This imitation is contrary to the interpretive paradigm, where agents' interpretations of signs are made solely from their individual experiences. If every agent interprets signs from its own experiences, then the signs carry every agent's contribution when they come to have shared meaning. In contrast, if only one agent has a behavior which is imitated, and then enforced, the chances for that behavior to be modified by each individual that uses it are fewer. To have every agent affect each other, a more coevolutionary model is needed. Coevolution of agent expectations upon agent expectations leads to an organic emergence both of convention and of heterogeneous agents.
4.6
Another ambitious model of basic social processes is Kluver's model of socio-cultural evolution through roles (Kluever 2002). In this theory, agents have a knowledge base and a position in society that represent their role in society. Agents coevolve their roles from interactions with each other, and through this process institutionalize their behaviors. Dittrich, Kron, and Banzhaf also have a model of the basic social processes of Parsons' double contingency and Luhmann's expectation expectations (Parsons 1951; Luhmann 1984; Dittrich, Kron and Banzhaf 2003). Dittrich et al., however, abandon Parsons' theory that a shared symbol system is responsible for the emergence of social order. Dittrich calls this "impossible" and asks "how can a dyad presuppose and develop a shared symbol system at the same time?". In Dittrich's model, agents take into account what they perceive as other agents' expectations of them in their decisions, and this, they say, predictably leads to social order. The agents read and display signs, each looking at what happened in the past and the predictability of one sign following another. The predictability is measured in entropy. They were able to obtain social order for didactic situations, but the meaning of signs rapidly decreases for numbers of agents above two. If agents are allowed to put themselves in each other's shoes in their predictions, order is improved, but only slightly.
4.7
Dittrich et al. leave the modeling of the basic social process as an open question. Dittrich et al.'s abandonment of the shared symbol system as the solution to social order is premature. Coevolution may be used to solve this type of "chicken and egg" problem, and SISTER uses coevolution to bring about a consensus on the symbol system. The symbol systems in Dittrich's model did not become widespread because agent expectations of each other were based on individual recognition. SISTER shows that role recognition yields a significantly higher mutual information level (less entropy), and thus the signs have significantly more meaning in larger groups. SISTER addresses both Dittrich's and Kluver's open questions by combining their models, basing the double contingency on roles (Duong 1995; Duong 1996; Duong 2004).
4.8
SISTER is related to another adaptive model of symbolic interaction: Duong and Reilly's model of status symbols, racial prejudice and social class (Duong 1991; Duong and Reilly 1995). In this model, a symbol system emerges as employee agents induce the meaning of the signs they display at the same time that employer agents induce the meanings of the signs they read. Some of these signs cannot be changed: they are the racial characteristics. Some must be bought with money from past employment, like a suit, and some signs can change freely, like a fad. Employees induce what sign they display on the basis of what gets them employed, while employers induce what signs they should hire on the basis of the degree of talent that employees with those signs had in the past. This amount of talent is hidden during the employment interview. Employees with less talent are laid off and put back in the population of the unemployed in greater numbers than the talented. Agents of this simulation each have a Hebbian neural network with which to interpret signs. An interesting result is that signs come to have the same meaning to all of the employers even if they are incorrect. Talent distributions are the same according to race, but race comes to mean a lack of talent in an employer's mind, if it becomes associated with a cheap suit. If one race gets caught in such a vicious circle of not being able to get hired because they as a group have cheap suits, then that race is a social class. If talented employees learn to buy expensive suits to differentiate themselves from those that lack talent, and employers also learn to hire those with more expensive suits, then those suits became status symbols (Duong 1991; Duong and Reilly 1995). Duong and Reilly's model explains a dynamic linguistic enforcement of social class, but does not contradict conflict theorists' ideas about the origins of classes.
4.9
In a later simulation, Axtell, Epstein and Young show emergent classes of agents based on race by having agents bargain based on what they expect other agents to bid. This model uses an induction based on fixed signs similar to the one in Duong and Reilly's model of racial prejudice: signs are induced by the reader only, and come to have meaning through coevolution of expectations (Duong 1991; Duong and Reilly 1995; Duong 1995; 1996; Axtell, Epstein, and Young 2001; Duong 2004). The individual recognition runs of SISTER use signs induced by the reader only, but the role recognition runs use a double induction of signs. That is, agents have to induce the sign they display in addition to inducing the signs they seek in trade (Duong 1995; 1996; 2004). This "double induction" of signs is the micro level of a Parsonian system and is the same thing as a double contingency. Coevolution facilitates the modeling of the double induction and the resultant emergent symbol system which Parsons theorizes as the source of social order.
4.10
SISTER simulates the emergence of institutions, with the upper and lower levels of the simulation based on social theory. The basis of the lower level is microsociological and microeconomic rules. In accordance with interpretive social science, agents see signs through their senses and are closed with respect to meaning, and yet still share meaning. They do not copy other agents' signs, but learn which signs to display through their own experiences. In accordance with micro-macro social theory, the upper level is a system of roles. These emergent roles are important to the continuity of a culture of agents. The emergence of a system of roles is shown to be one of a set of sufficient conditions for the creation and coordination of knowledge in an agent society. This knowledge is held in the mutual expectations that agents have of each other based on their roles, in line with Parsons' and Luhmann's theory of social systems (Parsons 1951; Luhmann 1984).

* Methods

5.1
SISTER is a simulation not of a modern economy, but of a simple barter economy, where no wealth may accumulate from day to day. Agents produce in the morning, trade in the afternoon, and consume at night, leaving nothing for the next day. It is assumed that agents seek to have eight goods in equal amounts, and as much as they can get of them. Agents can produce some or all of the goods as they please, but these activities cost effort. An agent has only limited efforts to spend, but by concentrating its efforts on fewer categories of goods it can make more total units of goods. This simulates the benefits of specialization, also known as economies of scale. By this design the agents will be happier if they make lots of one good and trade it away for the others; however, it is up to the agents to learn how to trade. They develop institutions in the process of learning how to trade, and their institutions are solutions to the problem of how to trade. They start the simulation completely ignorant of what to produce, what to trade, how much to trade, whom to trade it with, and what sign they should present to others to tell who they are. The knowledge they come to have to reach the answer, and the development of interlocked behaviors and shared meanings prerequisite to that answer, are the emergent institutions. This study focuses on just one of these institutions: the emergence of a division of labor. Other institutions that the agents develop are price and money.
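The produce-trade-consume daily cycle can be sketched as follows. The agent internals here are placeholder stubs (the paper's agents learn their plans with genetic algorithms and evaluate consumption with the Cobb-Douglas utility), so every method body is an assumption; only the shape of the day matches the text:

```python
class Agent:
    """Minimal stub agent; production, trade, and learning rules are placeholders."""
    def __init__(self):
        self.goods = {}     # goods held today; nothing carries over
        self.history = []   # utilities observed, fed back to the learner

    def produce(self):
        # Harvest according to the agent's (here fixed) production plan.
        self.goods["oats"] = self.goods.get("oats", 0) + 4

    def trade(self, agents):
        pass  # active/passive trade-plan matching would happen here

    def consume(self):
        return sum(self.goods.values())  # stand-in for the utility function

    def learn(self, utility):
        self.history.append(utility)  # selective pressure on the day's plan

def run_day(agents):
    """One simulated day: produce in the morning, trade in the afternoon,
    consume and evaluate at night."""
    for a in agents:
        a.produce()
    for a in agents:
        a.trade(agents)
    for a in agents:
        a.learn(a.consume())
        a.goods.clear()  # no wealth accumulates from day to day
```

The final `clear()` enforces the barter-economy constraint that each day's judgment of a plan depends only on that night's consumption.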
5.2
The subjective micro-level rules come from the principles of interpretive social science as found in phenomenological and symbolic interactionist sociology. The agents follow a basic principle of interpretive social science: they are closed with respect to meaning. That is, they cannot read the minds of other agents to learn how to interpret signs, but can only understand signs through associations with sensory phenomena that they experience in their individual lives. They each have their own private inductive mechanism with which they perform the symbolic interactionist task of inducing what signs they should display and the meaning of signs that they read. Their inductive mechanisms are autonomous in the sense that they are not seeded with information from other inductive mechanisms. With these inductive mechanisms, the agents interpret the signs they display solely from the context of their individual previous experiences, never copying another agent's sign, or another agent's interpretation of a sign. Yet despite this autonomy, the signs come to have shared meaning. This is in accordance with the hermeneutic paradox of our inventing our own meaning and yet sharing meaning. As in the symbolic interactionist paradigm, the signs come to have meaning as is pragmatic for the agents in the function of their "everyday life". The signs they learn to read and display are signs of whom to approach to make a trade and what to display to attract trade. The meanings that the signs come to have are roles in a division of labor. This is in accordance with phenomenological sociologist Peter Berger's emphasis on the importance of roles and symbols of roles as a mechanism of organizing behavior. To learn how to trade, every agent must learn his role in terms of what goods to produce or assemble, and have general knowledge of the other roles in the society so that he may know whom to trade with.
5.3
Tests of the importance of roles to the continuity of culture are performed. In the initial experiment, all goods are simple produce, harvested directly from the environment. In subsequent experiments, agents have the option of making composite goods, or goods that are made with more than one simple good. Agents learn to solve the more complex problem of which goods to put together to increase their utility. The ability of the agents to collectively learn how to make these composite goods is compared in agent societies where role symbols are used, and in societies where individual recognition is used.

* The Design of SISTER

6.1
SISTER is a simulation of daily habits of trade in a bartering society. Every agent learns a plan of harvesting, cooking, trade, and a sign to display for a single day of the simulation. Each agent has a limited number of "efforts" that it learns to allocate between harvesting, cooking and trading of goods. It allocates these efforts to specific goods to harvest or cook, and specific trade plans to perform. At the beginning of the day, agents harvest according to their production plans. They may harvest a few of each good, or more of some and less of others as their plans dictate.
6.2
Next, agents trade. All agents have both active and passive trade plans that are activated if agents devote the right amount of effort to them. Each trader follows its plans for active trading, by seeking out agents to trade with that display signs closest to the ones in their trade plans. If the passive trader has a corresponding trade plan and both the active and passive trader have the goods, then the trade takes place. See figure 1 for an illustration of corresponding trade plans. If the scenario simulates composite (cooked) goods, then an agent must have all the components of a good before it can trade that good (Lacobie 1994). An agent may trade to get the ingredients of a recipe, cook it, and trade it away.
Figure 1. Corresponding Trade Plans. Agents trade with agents who have corresponding trade plans and are wearing the correct sign
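The matching of corresponding trade plans described above and in Figure 1 can be sketched as follows. The field names and the exact matching rule are assumptions drawn from the text's description (mirrored goods and quantities, plus the sought sign matching the displayed sign), not the paper's data structures:

```python
from dataclasses import dataclass

@dataclass
class TradePlan:
    give: str       # good offered
    give_qty: int
    take: str       # good wanted
    take_qty: int
    sign: str       # sign sought (active plan) or displayed (passive plan)

def corresponds(active: TradePlan, passive: TradePlan) -> bool:
    """A passive plan corresponds to an active plan when the goods and
    quantities mirror each other and the passive trader wears the sought sign."""
    return (active.give == passive.take and active.give_qty == passive.take_qty
            and active.take == passive.give and active.take_qty == passive.give_qty
            and active.sign == passive.sign)

def try_trade(active, passive, active_goods, passive_goods):
    """Execute the trade only if the plans correspond and both sides hold the goods."""
    if not corresponds(active, passive):
        return False
    if active_goods.get(active.give, 0) < active.give_qty:
        return False
    if passive_goods.get(passive.give, 0) < passive.give_qty:
        return False
    active_goods[active.give] -= active.give_qty
    active_goods[active.take] = active_goods.get(active.take, 0) + active.take_qty
    passive_goods[passive.give] -= passive.give_qty
    passive_goods[passive.take] = passive_goods.get(passive.take, 0) + passive.take_qty
    return True
```

For composite goods, the guard on the offered good would additionally require that all of its components have been gathered and cooked before it can be traded away.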
6.3
In the evening, the agents consume all of their goods, and judge their trade plans for the day solely on their satisfaction with what they consume that night.
6.4
The agents' motivation for trade lies in the encoded microeconomic rules. This simulation encodes two concepts from microeconomics: the nature of the agent efforts and the nature of agent needs. The nature of agent efforts is that if agents concentrate their efforts on fewer activities, they are able to be more productive than if they spread their efforts out over more activities. This simulates the benefits of specialization, or economies of scale. The nature of agent needs is that agents seek an even spread of several goods, and as much as they can get of them. It is as though each of the goods were one of the four essential food groups, and they seek a balanced diet of those goods.
6.5
Economies of scale are encoded by setting the efforts devoted to an activity to an exponent, whether that activity is a particular trade, the harvesting of a good, or the combining of goods (cooking):
    Activities = K × e^c    (1)
6.6
The number of specific activities an agent may complete is equal to a constant for that type of activity, K, times the efforts designated to that particular activity, e, raised to an effort concentration factor for that type of activity, c. For example, if the trade constant is 0.5 and the trade concentration factor is 3, and an agent devotes 2 efforts to trade 4 units of oats for 3 units of peas, then there are 0.5 × 2^3 = 4 such trades it can make. If the activity is a trade, the concentration of effort might mean that the agent has invested in a cart to make an active trade, or in storage to make a passive trade. If the activity is harvesting, it might mean that the agent has put in the investment to harvest a particular good well. If the effort is composing goods (cooking), the agent may have invested in training to learn how to cook a particular type of food. Whatever activity it is, this equation means that putting a little more effort into the activity will make it much more productive.
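Equation 1 and the worked example above can be sketched directly; the function name and default parameter values (the text's trade constant and concentration factor) are illustrative:

```python
def activity_count(efforts, K=0.5, c=3):
    """Number of activities completed: K * e**c (Equation 1).

    With c > 1 the returns to concentrated effort are superlinear,
    which encodes the economies of scale described in the text."""
    return K * efforts ** c

# The worked example: 2 efforts at K=0.5, c=3 allow 0.5 * 2**3 = 4 trades.
print(activity_count(2))  # -> 4.0
```

Doubling the efforts from 2 to 4 yields 32 trades rather than 8, which is why spreading effort thinly across many activities is unattractive.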
6.7
The nature of an agent's desires is encoded with the Cobb-Douglas utility function. At the end of the day, each agent consumes all of its goods. How happy an agent is with its trade plans is judged solely by the Cobb-Douglas utility function of the goods an agent consumes:
    Utility = Good_1^weight_1 × Good_2^weight_2 × … × Good_n^weight_n    (2)

"Good" is the amount of a good that an agent consumes, n represents the number of different goods, and weight is a measure of how much each individual good is desired. All of the weights add up to one. Each agent has a minimum of one of each good given to it. This is a standard utility function in economics. If all of the weights are the same, agents want a spread of all different types of goods, and as much as they can get of them. The agents want a spread of goods in the same sense that people need some of each of the four food groups. For example, if an agent has eight units each of two of the four food groups, his happiness is 8^0.25 × 8^0.25 × 1^0.25 × 1^0.25 = 2.83. If the goods are more spread among the food groups, and the agent has four units of each of the four food groups, then its happiness is 4^0.25 × 4^0.25 × 4^0.25 × 4^0.25 = 4. The agent would rather have four of four goods than eight of two goods. With this equation both the spread of goods and the amount of goods are important. In this study, the weight for each good is equal, so that differences in outcome cannot be attributed to uneven utility values for individual goods.
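The Cobb-Douglas utility of Equation 2 can be sketched as a small function; the function name is illustrative, and equal weights are assumed by default as in this study:

```python
def utility(goods, weights=None):
    """Cobb-Douglas utility (Equation 2): the product of good_i ** weight_i."""
    if weights is None:
        weights = [1.0 / len(goods)] * len(goods)  # equal weights, as in this study
    u = 1.0
    for g, w in zip(goods, weights):
        u *= g ** w
    return u

# The worked example: concentrating on two goods versus spreading over four.
print(round(utility([8, 8, 1, 1]), 2))  # -> 2.83
print(round(utility([4, 4, 4, 4]), 2))  # -> 4.0
```

Because the weights sum to one, the function rewards both the total amount of goods and an even spread across them, which is what drives the agents toward specialization plus trade rather than self-sufficiency.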

6.8
The equations for effort and utility make it to an agent's advantage to concentrate its efforts on making fewer goods so that it can make more of them, and then to trade them for the rest of the goods, so that it can have an even amount of many goods. Agents may choose to make all of their goods themselves until they learn how to trade. In order to learn to trade, the agents must learn to read each other's signs and to display the correct sign. This is done according to the rules of interpretive social science: an agent never copies another agent's interpretation of a sign, but interprets and displays signs solely according to its own utility. To simulate Parsons' double contingency, the sign is doubly induced: both the sign to seek in a trade partner and the sign to display on oneself are learned (Parsons 1951). These signs come to have common meaning as the agents differentiate themselves. As in Parsons' theory, the ordering of society comes through a shared symbol system, and as in Berger's and Coleman's theories, that ordering is a system of roles (Parsons 1951; Berger and Luckman 1966; Coleman 1994).
6.9
In the role recognition treatment, the passive trader displays a sign that its genetic algorithm has learned, but in the individual recognition treatment, the passive trader displays a unique identification tag. The ability to change the sign is the only difference between the two treatments. In the role recognition treatment, the sign that an agent displays for its passive trades comes to represent that agent's role. That sign starts out random, but comes to signify a role when all the agents that have learned to display the same sign have similar behaviors. Figure 2 illustrates agents that have differentiated into roles denoted by a learned tag that they display in common. In both the individual recognition treatments and the role recognition treatments, an endogenous differentiation of the agents occurs. However, when recognition is based on the role sign, several individuals become replacements for each other in their behavior, while if it is based on unique identification, agents learn to interact on an individual basis. In the role recognition scenario, an agent who wants to make an active trade can teach (that is, exert "selective pressure" on) many different agents who can replace each other to make a trade. In an individual recognition treatment, an individual can only teach one agent to make a trade.
taggedAgents
Figure 2. Agents differentiate into roles. Roles are designated by tags learned by each agent's individual GA. Different agents that display the same tag are said to be members of the same role if they also have the same behaviors. These tags are learned individually by each GA, but come to mean the same set of behaviors
6.10
For example, let's call the goods of a role recognition scenario oats, peas, beans, and barley. Suppose an agent in the simulation has produced more oats and fewer beans. Suppose also that he displays a random sign to attract trade, and another agent with more beans and fewer oats guesses the meaning of that sign by coincidence while trying to trade his beans for oats. Both agents are satisfied with the trade and the sign: they remember this, and repeat it. The more the trade is repeated, the more it becomes a stable thing in the environment for other agents to learn. Since an agent with an active trade plan is looking for any agent who displays a particular sign, then any agent can get in on the trade just by displaying the sign. The agents come to believe that the sign means "oats," in the sense that if an agent displays the sign accidentally, the other agents will ask him to sell oats. This will put selective pressure on that agent to make and sell oats. At the same time, other agents who produce oats will benefit from learning the oat sign so as to attract trade. After a while, the society divides into roles, with groups of agents displaying the same sign and having the same behavior. If a new agent comes into the simulation, then to participate in trade he must learn the sign system already present. The signs are a guide to his behavior: when he displays a sign, the other agents pressure him to have the corresponding behavior. Thus a sign creates expectations of behavior, in accordance with Luhmann's model of expectation expectations. A system of signs represents a compromise between the interests of the agents who have put selective pressure on each other to behave in ways beneficial to themselves. These signs enforce evolutionary stable strategies of agent behavior.
6.11
If composite goods or "cooking" is in the scenario, then an agent must have all of the goods that a composite good is made of before it can trade that good. In a scenario with composite goods, agents have to know more to make trades in that good than they do in simple scenarios. They have to know to either harvest or purchase the goods that a composite good is made of, in addition to knowing who would buy the composite good. If a newly inserted agent displays the sign of one who sells a composite good, then it learns the component parts of the good when other agents come to it to sell those components. An agent is thus trained in how to perform its role by the expectations of the agents in the roles that interact with that role. For example, suppose that in a SISTER society, the harvested goods were corn and lima beans, and the composite good was succotash, made from corn and lima beans. Suppose the agent who discovers succotash puts up a sign that she sells succotash. She buys lima beans and corn from lima bean and corn farmers, and finds that she has much business when she sells her succotash to the local diner, which uses her succotash to compose some of its dinner entrées. Through experience, our inventor of succotash has learned who sells the components of her good, lima beans and corn, as well as who buys her good: local diners. New agents who want to sell succotash now do not have to relearn all of that, because as soon as they put up a sign that says they sell succotash, lima bean and corn marketers start to call them. The new agent, because of the other agents' expectations, figures out how to make succotash if he did not know before. If he only knows about the lima beans when he starts to display his sign, he will quickly learn about the corn, because he will feel selective pressure to buy it: it will make the succotash better as far as the diners are concerned, and will give him more business. And it will be easy to buy, because corn agents are constantly asking him to buy it. If it comes to be to his advantage, he will learn it. This is how the knowledge of how to make composite goods is held in the mutual expectations of agents. The mutual expectations that agents have of the roles allow individuals to take advantage of what other individuals have learned in previous interactions. The knowledge of the society is held in the mutual expectations of the symbol system, as in Parsons' and Luhmann's theories (Parsons 1951; Luhmann 1984).
6.12
The reason that role systems can hold more information about how to make cultural products is that agents can replace one another and can learn from the expectations that other agents have of their replacement class. This is how they become trained to do their role. However, this training is not all-encompassing: what they do is ultimately connected to their utility. They can reject trades if a trade is not to their advantage, for example, if they find that succotash tastes better with tomatoes than with lima beans.

* The Implementation of SISTER

7.1
SISTER uses the method of coevolution of genetic algorithms to implement the double induction of the signs. In the double induction, the meanings of signs displayed by some agents are induced at the same time that the meanings of signs read are induced by other agents. The use of genetic algorithms is not essential: previous symbolic interactionist simulations made use of neural networks (Duong 1991; Duong and Reilly 1995). In those simulations, employers induced the meanings of the signs employees displayed in terms of the employees' talent, while at the same time employees induced the meanings of the signs they displayed in terms of their income. These meanings were "coevolved" and held in mutual expectations, as in Parsons' and Luhmann's theories, just as the signs in SISTER are.
7.2
Genetic algorithms are an inductive mechanism of artificial intelligence invented by John Holland (Holland 1975). They mimic the induction that occurs naturally in Darwinian evolution. A genetic algorithm consists of a population of chromosomes, which are usually strings of ones and zeros. These ones and zeros are "genes," and each chromosome represents a possible solution to an optimization problem. Each chromosome is judged by a fitness operator, which rates the solution it represents according to how well it solves the problem. Then, in accordance with Darwinian survival of the fittest, a reproduction operator is applied that lets the more fit contribute to the next generation in greater numbers than the less fit. One typical reproduction operator, and the one used in SISTER, is the roulette wheel, in which chromosomes contribute to the next generation roughly in proportion to their fitness. Parent chromosomes contribute to the next generation through a crossover operator, in which each parent contributes roughly half of its genes, the degree of mixture being determined by the number of "crossover points." The number of crossover points in SISTER, along with the other parameters of the genetic algorithm, is listed in Figure 4. Finally, a mutation operator is applied, which randomly flips a small percentage of the genes to introduce some diversity. The resulting offspring become the next generation, to be judged again by the fitness operator. As this cycle repeats, the genes on the chromosomes converge towards a common solution to the optimization problem. Genetic algorithms are a method of soft computing, along with neural networks and fuzzy logic, meaning that they are robust, can deal with uncertainty, and can give a good, satisficing answer to problems too difficult to solve optimally (Holland 1975).
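A minimal genetic algorithm with roulette-wheel selection, crossover, and mutation, of the kind described above, might look like the following sketch. It illustrates the general technique on a toy problem and is not SISTER's implementation; it uses single-point crossover for brevity, whereas SISTER uses four crossover points.

```python
import random

def roulette_select(pop, fitnesses):
    """Pick a parent with probability roughly proportional to fitness."""
    total = sum(fitnesses)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, fit in zip(pop, fitnesses):
        acc += fit
        if acc >= r:
            return chrom
    return pop[-1]

def crossover(a, b):
    """Single-point crossover of two bit-string parents."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chrom, rate=0.01):
    """Flip each gene with a small probability to maintain diversity."""
    return [1 - g if random.random() < rate else g for g in chrom]

def evolve(pop, fitness_fn, generations=50, rate=0.01):
    """Repeat the judge-select-recombine-mutate cycle."""
    for _ in range(generations):
        fits = [fitness_fn(c) for c in pop]
        pop = [mutate(crossover(roulette_select(pop, fits),
                                roulette_select(pop, fits)), rate)
               for _ in pop]
    return pop

# Toy fitness: the count of ones ("one-max"); the cycle should
# drive the chromosomes toward all-ones strings.
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
pop = evolve(pop, fitness_fn=sum)
```

In SISTER each agent runs its own such cycle privately, and the fitness of a chromosome is the Cobb-Douglas utility of the goods its trade plan brought in.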
7.3
If the fitness of one genetic algorithm population depends on the behaviors encoded in at least one other genetic algorithm population, then the populations are coevolving (Potter and De Jong 2000). SISTER is a program of coevolution, because an agent's signs and behaviors are only good for it if the agents it interacts with have corresponding expectations of signs and behaviors. In SISTER, each chromosome in an agent's genetic algorithm population represents a plan of trade for a single day. It interacts with one other chromosome from each agent for a single day of trading. All the first chromosomes of each agent's genetic algorithm interact for the first day, then all the second chromosomes interact for the second day independently of the first, and so on. In the experiments of this article, each agent has 1000 chromosomes, so there are 1000 days in which each agent's chromosomes interact with the other agents' chromosomes, and 1000 days in which their fitnesses are judged, before the agents reproduce. The agents' genetic algorithm populations reproduce separately and without seeding from each other. Then a new population of 1000 chromosomes is available for the next 1000 days of trade. The fitness function with which each chromosome is judged is the Cobb-Douglas utility function of the goods that trade plan brought the agent after a day of trading. Figure 3 shows the activity cycle for these coevolving agents.
                  For 1 to i generations  //reproduction occurs every n days
                    {
                      For 1 to n  chromosomes  //each chromosome represents a plan for one day
                        {
                          -  Harvest  goods in amounts according to plan.
                          -  Trade goods with traders displaying signs closest to those in the plan.
                          -  Consume goods and rate plan.  
                         }
                       -  Genetically recombine plans in private GA.
                     }
Figure 3. SISTER simulation loop for individual agents. Every day, an agent implements a plan of harvesting and trading on a single chromosome. After it has used all of its chromosomes, its genetic algorithm reproduces, and it has a new set of plans. If death is implemented, some small percentage of agents have the genes of their genetic algorithms randomized, and they are given a new identification tag
7.4
The number of bits in a single chromosome depends on the parameter values for the number of efforts and trade plans. See figure 4 for the parameters of the runs of SISTER in this study.

Figure 4: Parameters common to the experiments of this study

Number of Agents 16
Population Size of GA in each agent 1000
Number of Crossover Points 4
Number of Bits in a Chromosome 772
Mutation Rate 0.01
Number of Efforts 128
Number of Trade Sections 16
Number of Goods 8
Constant of Harvesting 0.001
Constant of Trade 1
Constant of Cooking 1
Harvesting Effort Concentration Factor 3
Trading Effort Concentration Factor 3
Cooking Effort Concentration Factor 3
Number of Possible Amounts to Trade 4
Cobb-Douglas weight 0.125

* Knowledge Representation of SISTER

8.1
In a single chromosome, the sections for efforts come first. The number of efforts each agent is given per day is a parameter of the simulation: for this study, agents have 128 efforts each. There are four bits per effort: the first bit tells whether the effort is devoted to trade or production. If it is production, the next three bits tell which good; otherwise they tell which one of a maximum of 16 trade plans is used. For this study, 8 trade plans are passive and 8 are active. The first bit is a gene switch, since it controls the expression of the trade plans in another part of the chromosome. It serves to maintain diversity: when a trade plan is inactive, its trade section gains diversity through mutation, while it tends to lose diversity when it is active, under the influence of natural selection. The 16 trade plans follow the effort sections. Each trade plan encodes a good to give, an amount to give, a good to receive, an amount to receive, and a sign to seek in a trade partner. Finally, the sign section encodes a sign to display to attract traders. We expect these signs to come to have meaning as the simulation progresses. Figure 5 shows the agent's knowledge representation. Figure 6 illustrates the meaning of a chromosomal trade plan.

Figure 5: Knowledge Representation in SISTER, in an 8 good Scenario. A single chromosome has bits which represent efforts, trade plans and a tag. How many effort sections and trade plans in a single chromosome is a parameter of the simulation. A chromosome represents a single day of trade

Chromosome section   Bits    Meaning
Efforts              1       Production or trade?
                     2-4     Which good to produce, or which trade plan to activate
Trade Plans          1-3     Good to give
                     4-6     Amount to give
                     7-9     Good to receive
                     10-12   Amount to receive
                     13-16   Sign of agent to seek trade with
Role Tag                     The meaning of the bits on the tag emerges

figure6
Figure 6. Illustration of a chromosomal trade plan section. One section on the chromosome of an agent encodes an active trade plan in the string, 0110010010001001, while a chromosome on the passive agent encodes a passive trade plan in the string 001000011001
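Figure 6's active trade plan string can be decoded according to the field layout of Figure 5. The sketch below is illustrative: the function and field names are not from SISTER's source, and reading each bit field as an unsigned big-endian integer is an assumption.

```python
def decode_active_trade_plan(bits: str) -> dict:
    """Decode a 16-bit active trade plan per the Figure 5 layout:
    3 bits good to give, 3 bits amount to give, 3 bits good to
    receive, 3 bits amount to receive, and 4 bits for the sign
    of the agent to seek a trade with."""
    assert len(bits) == 16
    field = lambda s: int(s, 2)  # assumed: unsigned, big-endian
    return {
        "good_to_give":      field(bits[0:3]),
        "amount_to_give":    field(bits[3:6]),
        "good_to_receive":   field(bits[6:9]),
        "amount_to_receive": field(bits[9:12]),
        "sign_to_seek":      field(bits[12:16]),
    }

# The active trade plan string shown in Figure 6:
plan = decode_active_trade_plan("0110010010001001")
```

A passive plan would omit the final sign-to-seek field, since the passive trader displays a sign rather than seeking one.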

* Roles and Mutual Information

9.1
To measure the amount of knowledge held in a society, the concept of mutual information, from information theory, is used. Mutual information is a measure of the information contained in a sign. If there is only one sign, and the agents that display it have several different behaviors, that sign indicates nothing. Conversely, if all sorts of signs all mean the same behavior, then those signs indicate nothing.
9.2
If p(x) is the probability (frequency) of occurrence of a sign and p(y) is the probability (frequency) of a behavior, then mutual information is a measure of the correspondence of symbols to behaviors in a system (Shannon 1993):
I(X;Y) = Σ_x Σ_y p(x,y) log₂ [ p(x,y) / ( p(x) p(y) ) ]    (3)
9.3
If there are many signs that mean many different behaviors, then the mutual information of a system is high. For example, if every agent that sells apples displays sign 1 and every agent that sells pears displays sign 2, then there is more information than if all agents display sign 1 and sell both pears and apples, or if some agents that sell apples display sign 1 and other agents that sell apples display sign 2. Mutual information is shown to correlate with utility.
9.4
Figure 7 shows one example of a trade scenario with high mutual information, and two examples of trade scenarios with low mutual information. Each table shows the number of trades made by all agents displaying a particular sign. In the high mutual information scenario, different signs indicate different behaviors. In the low mutual information scenarios, a sign can mean many different behaviors, and many signs can mean the same behavior.

Figure 7: Mutual Information of Example Trade Scenarios. The rows indicate the number of a type of good sold. P(y), or the frequency of trade in a good, is determined by dividing the number of times a goods is sold in a row by the total number of trades. The columns indicate the sign displayed. P(x), or the frequency of the display of a sign, is determined by dividing the number of times a sign was displayed for a trade in a column by the total number of trades. P(x,y) is the frequency of co-occurrence of a particular sign and trade

High Mutual Information Trade Scenario

            Sign Displayed
Good Sold   Sign 0   Sign 1   Sign 2   Sign 3
Oats          10
Peas                    7
Beans                            3
Barley                                    9

Zero Mutual Information Trade Scenario

            Sign Displayed
Good Sold   Sign 0   Sign 1   Sign 2   Sign 3
Oats
Peas
Beans         10       11        9        8
Barley

Zero Mutual Information Trade Scenario

            Sign Displayed
Good Sold   Sign 0   Sign 1   Sign 2   Sign 3
Oats           7
Peas           9
Beans          5
Barley        13
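Equation (3) can be applied directly to count tables like those in Figure 7. In this sketch the placement of the counts is assumed: each good falls under a distinct sign in the high-information scenario, and all goods fall under a single sign in the final zero-information scenario. All names are illustrative.

```python
from math import log2

def mutual_information(counts):
    """Mutual information I(X;Y) between displayed sign (column)
    and good sold (row), computed from a table of trade counts."""
    total = sum(sum(row) for row in counts)
    p_x = [sum(counts[r][c] for r in range(len(counts))) / total
           for c in range(len(counts[0]))]          # sign frequencies
    p_y = [sum(row) / total for row in counts]      # good frequencies
    mi = 0.0
    for r, row in enumerate(counts):
        for c, n in enumerate(row):
            if n:
                p_xy = n / total
                mi += p_xy * log2(p_xy / (p_x[c] * p_y[r]))
    return mi

# Figure 7's scenarios (rows: oats, peas, beans, barley; columns: signs 0-3).
high     = [[10, 0, 0, 0], [0, 7, 0, 0], [0, 0, 3, 0], [0, 0, 0, 9]]
one_good = [[0, 0, 0, 0], [0, 0, 0, 0], [10, 11, 9, 8], [0, 0, 0, 0]]
one_sign = [[7, 0, 0, 0], [9, 0, 0, 0], [5, 0, 0, 0], [13, 0, 0, 0]]
```

When every sign uniquely identifies a good, I(X;Y) equals the entropy of the goods traded; when one sign covers all goods, or one good hides behind all signs, I(X;Y) is zero.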

* Experiments

10.1
This article describes two experiments to test the sufficiency of roles as one of the ingredients for the creation and coordination of knowledge. These experiments were chosen because they demonstrate the formation of a system of roles, as advocated by micro-macro theorists, and show how such a system holds cultural knowledge, as posited by Kluever (2002). These experiments are designed to explore the following specific hypotheses:

Hypothesis of the simple goods experiment

10.2
Agent societies that employ role-recognition (but not individual-recognition) will learn how to trade at least as well as agent societies that employ individual-recognition.
10.3
In this experiment, SISTER is run with agents that display and read a freely changeable sign to use in seeking another agent for trade. It is shown that, in the role recognition treatment, a system of roles has formed in which signs are used for the purpose of trade, with different signs meaning different trades. The performance task is for agents to increase their utility of eight simple goods that can be farmed or traded. This experiment shows the evolution of bartering behavior of simple harvested goods. Agents in both the role recognition treatment and the individual recognition treatment learn to produce fewer kinds of goods and trade them away for other goods. However, the agents in the role recognition treatment learn to do it better, with higher utilities. Experiments are performed with 16 agents, eight goods, and 1000 chromosomes per agent over 1500 generations, for 20 runs for each treatment.
10.4
In both experiments, average utility is used to assess how well a society of agents trades. In SISTER, this is the same as average fitness. A t-test is done on the average utility over a run, as well as for every corresponding cycle in a run. This t-test compares the role recognition treatment to the individual recognition treatment. The t-test measures how unlikely it is that the individual recognition treatment is as good as the role recognition treatment given all of the data from the 20 experiments. If the probability is below 5%, then we have significant evidence that agents in societies that use role recognition generally have higher utilities than agents in societies that use individual recognition, for the given scenarios.
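The treatment comparison described above can be sketched with a two-sample t statistic. The sketch below uses Welch's form (which does not assume equal variances); the data are hypothetical, not the paper's runs, and the article does not state which variant of the t-test was used.

```python
from math import sqrt

def welch_t(a, b):
    """Welch's two-sample t statistic for a difference in means,
    the kind of comparison made between the role recognition and
    individual recognition treatments."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):  # sample variance, n - 1 denominator
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    return (mean(a) - mean(b)) / sqrt(var(a) / len(a) + var(b) / len(b))

# Hypothetical per-run average utilities for the two treatments:
role  = [165, 163, 166, 164, 165]
indiv = [149, 148, 150, 147, 149]
t = welch_t(role, indiv)  # a large positive t favors the role treatment
```

The statistic would then be compared against the t distribution to obtain the confidence levels reported below.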
10.5
Another test done on both experiments is a t-test for a difference in mutual information between the treatments, to check whether the increase in utility is due to an increase in mutual information. A correlation between mutual information and utility is taken to support the hypothesis that the reason behind improved trade is the better information of the role treatments.

Hypothesis of the composite goods experiment

10.6
Agent societies that employ role-recognition (but not individual-recognition) will learn how to assemble and trade more complex goods than agent societies with individual-recognition alone.
10.7
This is similar to the first experiment, except that two of the eight goods are composite goods. Good 6 is composed of good 0 and good 1, and good 7 is composed of good 2 and good 3. Since the Cobb-Douglas utility function is the fitness function, it is in the agents' best interest to include these composite goods in their spread of goods. A composite good is a particularly good deal because, under the parameters of this study, it costs less to make than harvesting an additional good would. However, composite goods require more coordination of social knowledge, and this requirement is a type of stress. This experiment shows that role recognition is more robust under this stress than individual recognition is.

Results for the Simple Scenario

10.8
In this simple scenario of 16 agents with 8 primitive goods, the role recognition treatment does better than the individual recognition treatment at a confidence level of over 99%. The average individual recognition utility is 148.5, as compared to 164.5 for role recognition agents. (For the parameters of this study, utility values are actually 1,485,000 and 1,645,000, but we report utility in units of 10,000.) For this set of parameters, 130 is the utility level of agents who make everything for themselves without trade. Mutual information does not differ significantly between the treatments: average mutual information is 0.36 for the individual recognition treatment as compared to 0.22 for the role recognition treatment. The confidence level for a correlation between mutual information and utility is above 99% for both treatments. The correlation between utility and mutual information in the individual recognition treatment is 0.68, as compared to 0.56 for the role treatment.
10.9
Figure 8 shows a graph of the average utility, in twenty averaged runs, for both treatments. There is a yellow vertical bar in every cycle when a t-test shows a significant difference between the mean of the individual recognition runs and a mean of the role recognition runs. It is completely yellow between the two treatments because in every corresponding cycle, there is over a 99% confidence level in a difference between means.
simpleScenarioAverageFitness
Figure 8. Simple scenario average fitness. The utility of the agents is averaged for the 20 runs of the role recognition treatment and for the individual recognition treatments. The yellow vertical lines indicate places where a t-test shows a significant difference between treatments, which is true for every 10 cycles, making the space between the role and individual lines completely yellow. There are 1000 days of trade per "cycle." Each cycle is one generation of the agent's genetic algorithms

Discussion of the Simple Scenario

10.10
It may seem counterintuitive that a role recognition society would do better than an individual recognition society, because the role recognition treatment seems to be the harder learning problem. Role recognition requires a double induction: agents have to induce what sign to display in addition to inducing what signs to seek in trade. Agents of the individual recognition treatment only have to induce what sign to seek in trade. However, trade spreads through the role recognition society more easily because an active agent may approach any agent of a role for a single trade plan, a one-to-many relation, whereas in the individual recognition society the active agent can only approach a single individual for a trade plan. In a system of roles, agents can substitute for each other; in an individual recognition system, they cannot. As a result, a single agent in a system of roles puts selective pressure on more agents, spreading its influence wider.
10.11
This simple experiment shows no significant difference between the mutual information of the individual treatment and the role treatment. The average mutual information of 0.36 for the individual recognition is not significantly different from the average mutual information of 0.22 for the role recognition treatment. When the complexities of a composite good are added in, the mutual information is significantly greater in the role treatment. Perhaps this is because this simple scenario is too "easy" for both treatments. An individual recognition scenario can have as much mutual information as a role recognition scenario if only one agent is required to sell each good, and the society learns the identity of that agent. The measure of mutual information does not take into account the number of trades. A system where only half of the agents are trading can have as much mutual information in it as a system where all of the agents are trading. Therefore, increases in the role recognition treatment over the individual recognition treatment can be the result of an increase in the volume of trade that occurs in the role recognition treatment. However, we are still certain at the 99% confidence level that there is a correlation between utility and mutual information in this simple scenario.
10.12
This experiment is partially successful at showing that a society with role recognition is better at creating and coordinating knowledge than an individual recognition society. We are sure that the role recognition treatment does better than the individual recognition treatment in trade, and have rejected the null hypothesis that agent societies that employ role recognition trade at the same average utilities as agent societies that employ individual recognition. However, because we are not sure of the difference in mutual information between the treatments, we cannot attribute this increase to the ability of the role treatment to hold more knowledge. The contrast between what we can show in the simple scenario and in the composite good scenario of the next section is instructive: we cannot show a difference in mutual information in the simple scenario, but we can show a strong difference in the composite good scenario. This is consistent with the hypothesis that role recognition is robust in the presence of complex knowledge requirements, while individual recognition fails in the presence of complexity.

Results for the Composite Good Scenario

10.13
In this scenario, two of the goods are composed of other goods: good 6 is composed of good 0 and good 1, and good 7 is composed of good 2 and good 3. In order to sell a composite good, an agent has to obtain the goods it is composed of through harvesting or through trade. If it is through trade, then more complex social networking is required than in the simple scenario. A t-test comparing the individual treatment's average utilities in the simple and the composite good scenarios shows that the composite good scenario is harder for the individual recognition agents, at the 92% confidence level. Average utility actually increases for the role recognition agents in the composite good scenario as compared to the simple one, meaning that the composite good scenario is easier for them; however, the difference is not significant. This is consistent with the hypothesis that the role recognition agents are better able to take on stress and hold more information than the individual recognition agents.
10.14
Figure 9 shows the average utilities of both treatments over twenty runs each; again, there are so many significant differences in utility that the area between the lines is completely yellow with vertical bars, each indicating that a 99% confidence level has been reached. A visual inspection shows the greater difference in utility. Individual recognition average utility decreases from 148 in the simple goods scenario to 140.5 in the composite goods scenario (significant at the 92% level), while role recognition average utility increases from 164.5 in the simple scenario to 166 in the composite goods scenario. There is certainty at the 99% confidence level that the role recognition utilities are better than the individual recognition utilities in the composite goods scenario.
complexSceanarioAverageFitness
Figure 9. Composite good scenario average fitness. The utility of the agents is averaged for the 20 runs of the role recognition treatment and for the individual recognition treatments. The yellow vertical lines indicate places where a t-test showed a significant difference between treatments, which is true for every 10 cycles, making the space between the role and individual lines completely yellow
10.15
The average mutual information in the role recognition treatment almost doubles, from 0.22 in the simple scenario to 0.40 in the composite good scenario, while the mutual information in the individual recognition treatment halves, from 0.36 in the simple to 0.17 in the composite good scenario. However, these differences across scenarios are not at significant levels of confidence. The difference between the average mutual information of the role recognition treatment and the individual recognition treatment is at the 92% confidence level. The correlation between utility and mutual information in the individual treatment is 0.37, and is significant at the 94% level, while the correlation in the role treatment is .72. The level of confidence in the correlation between utility and mutual information in the role treatment exceeds the 99% level.

Discussion of the Composite Good Scenario

10.16
The t-tests show that there is a significant difference between the role recognition treatment and the individual recognition treatment in terms of utility and in terms of mutual information. The agents in the role recognition scenarios were able to trade better than the agents in the individual recognition scenarios, and their symbol systems held more meaningful information than the symbol systems of the individual recognition scenarios. Furthermore, there is a significant correlation between the utility of the agents and the mutual information in the role recognition scenario, implying that the reason that the agents were able to trade better is because of their knowledge-rich symbol systems. A society with high mutual information would have many different roles. It has many agents displaying different tags, with those displaying the same tag having the same behavior, and those displaying different tags having different behaviors. Individual recognition societies can have high mutual information if every agent, each of which has a different sign, has a different behavior. High mutual information means a lot of knowledge, and role recognition societies use their greater amounts of knowledge to trade better.
10.17
This experiment shows that role recognition is superior to individual recognition in the creation and coordination of knowledge. The trades of role recognition agents have a higher utility than those of individual recognition agents in a more complex scenario, probably (above the 90% confidence level) because of the greater amount of knowledge, as measured by mutual information, that the role society can create. The knowledge that mutual information measures is knowledge of social coordination. A composite good scenario requires more social coordination than a simple scenario, because an agent has to know both whom to buy goods from to make a composite good and whom to sell the composite good to. In an individual recognition scenario, this must be learned anew for every individual involved with the composite good. In a role recognition scenario, this coordination becomes part of the agent culture for other agents to "get in on." In SISTER, as in real life, new interactions between individuals are informed by past interactions between other individuals through roles. In a role recognition society, the knowledge is held culturally: it is distributed in the mutual expectations that agents have of one another based on roles, and it transcends the individual member of a role. This experiment has shown that more knowledge is created and coordinated in a role recognition society than in an individual recognition society in a scenario involving composite goods.

* Conclusion

11.1
The consistent advantage that the role recognition agents have in trade over the individual recognition agents correlates with a consistently higher amount of mutual information in the role recognition agents' symbol systems. This result shows that more knowledge was created in the role recognition treatment than in the individual recognition treatment, and that this knowledge was useful in coordinating trade.
11.2
In addition to demonstrating a hypothesis about knowledge in society, this study of SISTER shows how tasking and communication among agents can develop endogenously. The emergent communication system facilitates a complex coordination of tasks among coevolving agents without being preprogrammed.
11.3
We have also shown the use of mutual information to measure the degree to which agents differentiate into a system of roles. Mutual information measures this systematization by measuring the amount of knowledge contained in the symbol system that coordinates agents based on role signs. The more complex the system of roles and interrelations between agents, the more information is contained in their symbol systems.
11.4
SISTER contributes to social science by integrating aspects of many different theories. SISTER implements Kluver's concept of a socio-cultural engine based on roles (Kluever 2002). It demonstrates how a system of roles can hold cultural knowledge, as Kluver posited. It models Berger's ideas about symbolic interaction and the importance of roles to social order (Berger and Luckmann 1966). Agents read and interpret symbols based on their private experiences with them, and are closed with respect to meaning, in that they do not copy another agent's sign or read another agent's mental interpretation of a sign, and yet come to share meaning, in accordance with interpretive social science. At the same time, it models Parsons' idea of a "double contingency" as the basis of social order, through consensus on a symbol system (Parsons 1951). This is modeled in SISTER agents in that they come to a consensus through a double induction of signs. Cultural knowledge exists in the mutual expectations agents have of each other because of the signs they display, as in Luhmann's model (Luhmann 1984). SISTER does this while adhering to the micro-macro integration sociologists' ideas of what an emergent phenomenon should be, including Coleman's idea that it should be a system of roles (Coleman 1994), and Ritzer's idea that it should include both microeconomics and microsociology in concert (Ritzer 1999). SISTER is one example of an integration of social theory into one socio-cultural engine.

* Acknowledgements

We would like to thank Claudio Cioffi-Revilla, Kenneth De Jong, Harold Morowitz, Larry Hunter, Tom Dietz, and the late Don Lavoie for their careful review, support, and guidance on this project. We would also like to thank the anonymous reviewers for their helpful comments.

* References

AXELROD, R (1997) The Complexity of Cooperation: Agent-Based Models of Conflict and Cooperation. Princeton: Princeton University Press

AXTELL, R, Epstein and Young (2001) "The Emergence of Class Norms in a Multi-Agent Model of Bargaining." In Durlauf and Young, eds. Social Dynamics. Cambridge: MIT Press

BERGER, P and Thomas Luckmann (1966) The Social Construction of Reality. New York: Anchor Books

COLEMAN, J (1994) Foundations of Social Theory. New York: Belknap

CONTE, R and Castelfranchi C (1995) Cognitive and Social Action. London: UCL Press

CONTE, R and Frank Dignum (2003) "From Social Monitoring to Normative Influence." Journal of Artificial Societies and Social Simulation, vol 4, no. 2 http://jasss.soc.surrey.ac.uk/4/2/7.html

DITTRICH, P, Thomas Kron and Wolfgang Banzhaf (2003) "On the Scalability of Social Order: Modeling the Problem of Double and Multi Contingency Following Luhmann." Journal of Artificial Societies and Social Simulation, vol 6, no. 1 http://jasss.soc.surrey.ac.uk/6/1/3.html

DUONG, D V (1991) "A System of IAC Neural Networks as the Basis for Self Organization in a Sociological Dynamical System Simulation." Masters Thesis, The University of Alabama at Birmingham http://www.scs.gmu.edu/~dduong/behavior.html

DUONG, D V and Kevin D. Reilly (1995) "A System of IAC Neural Networks as the Basis for Self Organization in a Sociological Dynamical System Simulation." Behavioral Science, 40,4, 275-303. http://www.scs.gmu.edu/~dduong/behavior.html

DUONG, D V (1995) "Computational Model of Social Learning" Virtual School ed. Brad Cox. http://www.virtualschool.edu/mon/Bionomics/TraderNetworkPaper.html

DUONG, D V (1996) "Symbolic Interactionist Modeling: The Coevolution of Symbols and Institutions." Intelligent Systems: A Semiotic Perspective Proceedings of the 1996 International Multidisciplinary Conference, Vol 2, pp. 349 - 354. http://www.scs.gmu.edu/~dduong/semiotic.html

DUONG, D V (2004) SISTER: A Symbolic Interactionist Simulation of Trade and Emergent Roles. Doctoral Dissertation, George Mason University, Spring.

EPSTEIN, J and Robert Axtell (1996) Growing Artificial Societies: Social Science from the Bottom Up, Boston: MIT Press

HALES, D (2002) "Group Reputation Supports Beneficent Norms." Journal of Artificial Societies and Social Simulation, vol 5, no. 4. http://jasss.soc.surrey.ac.uk/5/4/4.html

HALES, D and Bruce Edmonds (2003). Can Tags Build Working Systems? From MABS to ESOA. Working Paper, Center For Policy Modelling, Manchester, UK.

HOLLAND, J H (1975) Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press

KLUEVER, J and Christina Stoica (2003) "Simulations of Group Dynamics with Different Models." Journal of Artificial Societies and Social Simulation, vol 6, no. 4 http://jasss.soc.surrey.ac.uk/6/4/8.html

KLUEVER, J (2002) An Essay Concerning Sociocultural Evolution. Theoretical Principles and Mathematical Models. Dordrecht: Kluwer Academic Publishers

LACOBIE, K J (1994) Documentation for the Agora. Unpublished document

LUHMANN, N (1984) Social Systems. Frankfurt: Suhrkamp

OLIPHANT, M and J. Batali (1997) "Learning and the emergence of coordinated communication." Unpublished manuscript.

PARSONS, T (1951) The Social System. New York: Free Press

PERFORS, A (2003) "Simulated Evolution of Language: a Review of the Field." Journal of Artificial Societies and Social Simulation, vol 5, no. 2. http://jasss.soc.surrey.ac.uk/5/2/4.html

POTTER, M and Kenneth De Jong (2000) "Cooperative Coevolution: An Architecture for Evolving Coadapted Subcomponents." Evolutionary Computation, 8(1), pp. 1-29. MIT Press

RIOLO, R, Michael Cohen and Robert Axelrod (2001) "Evolution of Cooperation without Reciprocity." Nature, Vol 414, November

RITZER, G (1999) Sociological Theory. New York: McGraw-Hill

SHANNON, C E (1993) Collected Papers. New York: Wiley

STEELS, L (1999) "The Spontaneous Self-organization of an Adaptive Language." In Koichi Furukawa, Donald Michie and Stephen Muggleton, eds. Machine Intelligence 15, pp. 205-224. Oxford: Oxford University Press

WEBER, Max (2001) The Protestant Ethic and the Spirit of Capitalism. New York: Routledge

WERNER, G and Dyer M (1992) "Evolution of communication in artificial organisms." In Langton et al., eds. Artificial Life II. New York: Addison Wesley

WINOGRAD, T and Fernando Flores (1987) Understanding Computers and Cognition. New York: Addison-Wesley


