©Copyright JASSS


Josep M. Pujol, Andreas Flache, Jordi Delgado and Ramon Sangüesa (2005)

How Can Social Networks Ever Become Complex? Modelling the Emergence of Complex Networks from Local Social Exchanges

Journal of Artificial Societies and Social Simulation vol. 8, no. 4
<https://www.jasss.org/8/4/12.html>


Received: 20-Jun-2005    Accepted: 28-Sep-2005    Published: 31-Oct-2005



* Abstract

Small-world and power-law network structures have been prominently proposed as models of large networks. However, the assumptions of these models usually lack sociological grounding. We present a computational model grounded in social exchange theory. Agents search for attractive exchange partners in a diverse population. Agents use simple decision heuristics based on imperfect, local information. Computer simulations show that the topological structure of the emergent social network depends heavily upon two sets of conditions: the harshness of the exchange game and the learning capacities of the agents. Further analysis shows that a combination of these conditions affects whether star-like, small-world or power-law structures emerge.

Keywords:
Complex Networks, Power-Law, Scale-Free, Small-World, Agent-Based Modeling, Social Exchange Theory, Structural Emergence

* Macrostructures and Microprocesses in Complex Network Emergence

1.1
The celebrated small-world and power-law (also known as scale-free) network structures have recently been prominently proposed as models of large networks in a variety of substantive realms, including networks of social relationships. It has been claimed that empirical examples of complex network structures can be found in the global patterns of website references in the World Wide Web, acquaintance relationships between people, networks of sex contacts, networks of scientific collaborations, or contacts between terrorists. These so-called complex networks are of particular interest to social scientists, because theoretical research suggests that these structures are highly efficient in terms of certain global network features. Moreover, these networks may be the emergent, yet unintended result of simple processes of network change that operate at the level of the individual agents or nodes.

1.2
In particular, complex networks have been shown to be very efficient in terms of information propagation (e.g. Walsh 1999; Delgado 2002), and their connectedness is very robust against randomly distributed hazards that may destroy single nodes in the network, as Albert and Barabási (2000) show.

1.3
However, as we will argue below in more detail, previous modeling work has mainly relied on rather mechanistic and sociologically implausible assumptions about the processes that may generate complex networks. Accordingly, it remains an open question from the point of view of the social sciences whether, and if so how, complex networks can arise from the behavior of real social agents. If complex networks can plausibly be the product of social interactions, then their impressive efficiency and robustness may help to better answer some crucial questions in the social sciences. For example, how is trust possible between strangers (cf. Yamagishi et al. 1998; Macy and Sato 2002)? Some researchers have argued that the efficient diffusion of reputational information through networks may be a decisive mechanism for the emergence and stabilization of trust (e.g. Raub and Weesie 1990; Buskens 2002). Or why can some groups quickly and effectively organize collective actions, such as strikes or demonstrations, while others fail to do so? This has likewise been related to network structures that are efficient for information transfer and recruitment processes between group members (e.g. Marwell et al. 1988; Gould 1993ab). But why and under what conditions can societies or groups develop the networks they need to sustain trust and facilitate collective action in the first place? We believe that this question cannot be answered in a satisfactory manner as long as our models of network formation rely on behavioral assumptions that are implausible as models of human social agency.

1.4
The invariants (small world, power law, etc.) found in many empirically studied social networks suggest that humans tend to organize themselves in a very particular way at the macroscopic level. Physicists have provided many models that reproduce these empirical data with high accuracy. Nevertheless, those models rest on assumptions that are either unrealistic from a sociological point of view (global knowledge) or treat the agent as a mindless actor in the game. Is the emergence of structure in human societies sufficiently explained by a model of dummies interacting with other dummies? Or can this emergence be better explained as the consequence of plausible sociological microprocesses? We believe that sociologically plausible microassumptions will not only allow us to reproduce the results of previous models, but will also help to identify new sociologically meaningful hypotheses about the conditions and patterns of complex social networks.

1.5
Physicists' models of the emergence of complex networks have mainly been formulated in the tradition of network theories informed by graph theoretic concepts. This work reflects the predominant structuralist perspective in social network research: it mainly focuses on the effects of structure on individuals and largely leaves implicit the individual actions that bring structures about. In our research, we take a radically different view and emphasize individual agency as the driving force of network formation. Accordingly, in the following we use the term agent to refer to the individual actors in a social network, in contrast to graph theoretic approaches, which often refer to individuals as nodes in a graph. Our actors are agents in the sense of agent-based modeling. In this view, an agent has four defining characteristics: autonomy, reactivity, proactivity and social ability (Wooldridge and Jennings 1995; Gilbert and Troitzsch 1999). Autonomous agents have control over their own goals, behaviors, and internal states. Reactive agents perceive and react to their environment. Proactivity refers to an agent's ability to take the initiative to change aspects of its environment in order to attain individual goals. Finally, social ability refers to the capacity to influence other agents in response to the influences that are received.

1.6
The two most prominent models of the individual-level mechanisms that have been proposed to underlie the emergence of complex networks are the mechanism of stochastic rewiring, described in the seminal work by Watts and Strogatz (1998), and the mechanism of preferential attachment and uniform growth, proposed by Barabási and Albert (1999). This work has inspired a range of subsequent models, particularly of the emergence of power-law networks; for a recent overview cf. Albert and Barabási (2002). The model of stochastic rewiring was designed to show how relatively few random changes in a network can transform an initially fully locally clustered network with large (or even infinite, cf. Watts 1999) average path length into a small-world structure that is still locally clustered, yet characterized by short average distances between the nodes. Local clustering refers here to a large degree of overlap between the ego-networks of adjacent nodes, as in small villages where residents are mainly connected to other residents and have few external contacts. Watts and Strogatz (1998) showed that a small probability of rewiring a randomly selected tie to some randomly selected new target in the network is sufficient to bring about a radical change in the global features of the network, from a pattern with low speed of information propagation to a structure that is highly efficient for information diffusion through network ties. The main reason is that a single rewired tie does not fundamentally change the local structure of ego-networks, but it is sufficient to bridge the distance between two otherwise distant, or even disconnected, local units. Later elaborations proposed social processes such as geographical migration within or between countries as examples of this mechanism (e.g. Watts 1999). While the work by Watts and others was extremely important in demonstrating how efficient network structures may emerge from simple local processes, the authors have paid little attention to the question of why and under what conditions the mechanism of stochastic random rewiring may be a plausible result of social behavior, i.e. of the individual motives and cognitions that lead social actors to make or break relationships. It is, for example, questionable whether geographical migration produces stochastic random rewiring. Geographical migration is often characterized by locally correlated patterns of behavior, where the experience that one pioneer migrant makes at his new residence is communicated back to old acquaintances, potentially triggering a new wave of migration to the same location. By contrast, random migration would spread new ties across the entire population, presumably with very different consequences for the network structures that may evolve. Accordingly, we believe that the conditions under which small-world structures may plausibly arise in social networks can be better identified when network changes are explicitly derived from sociologically plausible assumptions about the individual goals and behavioral heuristics that underlie network changes at the microlevel.
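
For illustration, a minimal Python sketch of the stochastic rewiring mechanism (our own sketch, not code from Watts and Strogatz; the parameter names n, k and p are illustrative choices) could look as follows:

```python
import random

def watts_strogatz(n, k, p, seed=None):
    """Ring lattice of n nodes, each tied to its k nearest neighbours;
    every tie is rewired to a random new target with probability p."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):                      # regular, fully clustered ring lattice
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))
    rewired = set()
    for (i, j) in edges:
        if rng.random() < p:                # stochastic rewiring step
            new_j = rng.randrange(n)
            while new_j == i or (i, new_j) in rewired or (new_j, i) in rewired:
                new_j = rng.randrange(n)
            rewired.add((i, new_j))
        else:
            rewired.add((i, j))
    return rewired

# Even a small p creates 'shortcuts' that sharply reduce average path length
# while leaving most of the local clustering intact.
network = watts_strogatz(n=1000, k=10, p=0.01, seed=42)
```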

1.7
The model of preferential attachment also leaves the individual-level decision process largely implicit. Barabási and Albert developed their model to explain how in a growing network, like the network of page references in the World Wide Web, a power-law degree distribution emerges, where, broadly, the number of nodes that have a particular degree decreases with the degree to the power of some constant. The mechanism of preferential attachment assumes that new actors who enter the network prefer to relate to those existing nodes that already have a high degree. A plausible reason for preferential attachment, which generalizes beyond the example of the Internet, may be that actors strive for social status. New actors entering a group may expect that they maximize their own social status by relating to the most popular peers, generating preferential attachment. While the mechanism of preferential attachment may seem better motivated by social structures than random rewiring, it relies on two assumptions that seem implausible for a large range of social networks. First, the assumption that actors have public and full access to reliable knowledge about the structure of the whole network, in particular the in-degree of any other member of the network. Second, the assumption that actors are cognitively capable of processing this information accurately. These assumptions are required to obtain a parametric and analytical solution that can reproduce some empirically observed networks. However, we argue that due to the behavioral implausibility of the micro-mechanism, the model of preferential attachment reproduces rather than explains complex network structures. This criticism extends to recent, more sophisticated models of preferential attachment, like the one proposed by Walsh (2001). In this model, a new node that is added to the network is attached to a node in the existing network following a probability distribution that depends on the degrees of the nodes present in the network. While this model is very successful in generating graphs with the power-law property, it also assumes that agents know the complete degree distribution of the graph and process this information accurately when choosing to whom they want to relate.
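
By contrast with the rewiring sketch above, preferential attachment with uniform growth can be sketched as follows (again our own illustrative Python, not the authors' implementation); note how the degree-weighted urn presupposes global knowledge of every node's current degree:

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a network to n nodes: each newcomer attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = []
    urn = []   # degree-weighted urn: every node appears once per incident edge
    for new in range(m, n):
        if not urn:
            chosen = set(range(m))          # first newcomer links to the m seed nodes
        else:
            chosen = set()
            while len(chosen) < m:
                chosen.add(rng.choice(urn)) # preferential attachment step
        for target in chosen:
            edges.append((new, target))
            urn.extend([new, target])       # update degree weights
    return edges

edges = barabasi_albert(n=10000, m=5, seed=1)
```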

1.8
Several researchers have addressed the issue of behavioral models underlying complex network dynamics. Transitive linking in particular was proposed by Krapivsky and Redner (2001) and by Ebel et al. (2002) as a mechanism in which agents use solely information about their local ties. In a nutshell, transitive linking assumes that new ties are most likely formed between actors who have a common acquaintance in the network. This work could show that preferential attachment behavior may arise from the transitive linking process, even when individual actors have no global knowledge about the network. However, like the original preferential attachment model, transitive linking does not make the underlying social mechanism explicit. While transitive linking may be a plausible description for realms such as co-authorship networks, it seems, for example, less straightforward for the dynamics of help or advice exchange. Why and under what conditions should two actors who have exchanged professional advice with the same third party become more likely to help or advise each other? While their common acquaintance may have needed advice from both of them, the advice givers may not necessarily need each other's help. Clearly, the decision to enter an exchange relationship depends not only on having common acquaintances, but also on the proper match of demand and supply in exchange relations.
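
A hedged sketch of the transitive linking idea (our illustration of the general mechanism, not the exact formulation of Krapivsky and Redner or Ebel et al.): a randomly chosen agent links to a "friend of a friend" when one is available, so new ties close triangles using only locally available information.

```python
import random

def transitive_linking_step(neighbours, rng):
    """One update: a random agent i links to a friend of a friend;
    if no such candidate exists, a random tie is created instead."""
    nodes = list(neighbours)
    i = rng.choice(nodes)
    candidates = {k for j in neighbours[i] for k in neighbours[j]} - neighbours[i] - {i}
    if candidates:
        j = rng.choice(sorted(candidates))   # close a triangle (local information only)
    else:
        j = rng.choice([n for n in nodes if n != i])
    neighbours[i].add(j)
    neighbours[j].add(i)

rng = random.Random(0)
net = {n: set() for n in range(50)}
for _ in range(500):
    transitive_linking_step(net, rng)
```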

1.9
Recently, social network researchers have increasingly called for sociologically more plausible models of complex network emergence. For example, Robins et al. (2005) made important progress by showing how computational methods make it possible to simulate distributions of Markov random graphs, including small-world structures, based on assumptions about simple and strictly local processes. Their approach specifies local dynamics in terms of statistical interdependencies between ties in the network neighborhood of a node. While this work mainly aims at testing whether a given model of a local process can explain observed complex network structures, it does not explicitly address individual motivations or cognitions underlying local dynamics.

1.10
To derive the emergence of complex networks from more fundamental principles of individual behavior, a model of some optimization process is required in which actors make or break ties to attain specified individual goals, while agents are at the same time restricted by their current position in the network. With the approach of highly optimized tolerance (HOT), Carlson and Doyle (2000) proposed system optimization as an explanation of the emergence of complex system structures, including complex networks. This work shows that, in general, power-law distributions of certain individual features may emerge when a system is optimized by a selection process resembling survival of the fittest. An example is the problem of finding an optimal distribution of food suppliers in a town so that not every inhabitant depends on the same food supplier, food distribution is not too vulnerable to the loss of a particular supplier, and the overall number of shops is kept to a minimum for market efficiency. HOT models can be extended to network design, for example with the goal to make the network both efficient for information distribution and robust against randomly distributed risks of the destruction of single nodes. However, we argue that the HOT approach is problematic as a basis for a model of the emergence of unplanned social networks, because the locus of decision making in this model is not the individual agent, but a central network optimizer. Ferrer and Solé (2003) move a step further and explain complex network emergence from individual optimization. In their approach, individuals aim to optimize global structural measures of the network, such as density and average path length. Their model may be adequate to describe a situation in which individuals' interests perfectly coincide with collective goals of, e.g., a social group or organization to which actors belong. But Ferrer and Solé neglect the partial conflict between common goals and individuals' interests that characterizes network formation in a range of areas, such as inter-firm technological cooperation (e.g. Podolny and Page 1998) or social support exchange (e.g. Flache 2001). In these realms, network formation is at least partly driven by competition between agents for relationships with attractive exchange partners, while at the same time agents are restricted in the time and effort they can invest in network contacts. For example, Bonacich (1992) shows that in a research team a communication dilemma may arise when status gains or bonus payments reward exceptional individual performance, while at the same time the team as a whole may compete against other teams. The team as a whole may benefit when all members aim to maximize the global efficiency of the information exchange networks between them. At the same time, individual team members face incentives to withhold information and collude instead in dyadic exchanges or within small cliques to outperform their colleagues. Clearly, a model which assumes that agents optimize global network properties is a misleading template for this type of situation, where network change faces agents with a social dilemma (e.g. Dawes 1980).

1.11
To sum up, we argue that a sociologically plausible model of the emergence of complex exchange networks should have the following features. First, the locus of decision making about network changes is at the level of individual actors. Second, individual actors make or break ties to optimize their individual goals based on boundedly rational individual decision heuristics. Third, individual agents use for this optimization only local and imperfect knowledge of network characteristics. Finally, common goals and individual interests are, with respect to desirable network characteristics, at least partly in conflict with each other, i.e. network members face a social dilemma.

1.12
In the following, we propose a model that has the features listed above. We will show that our approach is not incompatible with the classical preferential attachment model, but can generate preferential attachment behavior under certain conditions. However, in our model relational change is just a consequence of optimization at the agent level. An example of this could be the emergence of scientific collaboration networks, where scientists do not choose their co-authors based on their connectedness in the network, but in terms of whom they perceive as the best partner to conduct research with. In this example, preferential attachment is not the underlying and driving mechanism, but it may be a consequence of the underlying local optimization process. A recent empirical study by Pujol et al. (2002), which compared the network of website references in a scientific community to scientists' publication scores, suggests this possibility. This research shows a substantive correlation between journal publication scores and scientists' centrality in the network of website references.

1.13
In the remainder of this paper we proceed as follows. In section 2, we describe our computational model of the emergence of complex networks. In section 3, we present computational experiments that systematically explore the effects of two key sets of conditions in the model on the topological structure of the emergent social exchange network. In particular, we study effects of the harshness of the exchange game in terms of the potential damage agents can suffer from exploitation and the restrictiveness of their access to potential exchange partners, and the learning capacities of the agents (memory size, exploration probability, knowledge exchange mechanism). Section 4 contains the discussion and conclusions of our study.

* A Model of Local Optimization in Emergent Social Exchange Networks

2.1
Our model is based on social exchange theory and assumptions of bounded rationality (e.g. Blau 1964; Homans 1974). As a baseline, we take a model of the dynamics of social exchange networks that was originally proposed by Hegselmann (1998) and later adapted by Flache (2001; Flache and Hegselmann 1999a; Flache and Hegselmann 1999b). Their model assumes that agents seek to find and keep attractive exchange partners in a population where agents are to some degree free to exit from ongoing exchange relations, differ in attractiveness as exchange partners, for example due to variation in the amount of material resources at their disposal, and have only limited access to and information about potential new partners. These assumptions reflect empirical results on social exchange, which show, for example, that in relations of gift exchange actors select exchange partners partly on the basis of the amount of resources they may expect to attain in the exchange (e.g. Komter 1996). Similarly, in certain industries technology cooperations between firms constitute network patterns in which firms seek to establish relationships with partners based both on technological attractiveness and on the status of potential partners in the industry, as Podolny and Page (1998) suggest. Finally, in networks of advice exchange, actors often differ in their degree of expertise and thus in their attractiveness as advice givers. For consistency of terminology, we will use advice or support in the remainder to refer to the exchange commodity, but we wish to point out that the scope of our analysis generalizes to most forms of social or professional help exchange and collaboration networks.

2.2
To model individual goals, we assume that agents gain from being supported but incur some loss if they provide support themselves. More precisely, the larger the amount of help an agent receives, the larger his gain is. Conversely, the more help an agent provides, the larger his loss. This implies in particular that agents try to avoid exchange with partners who are either not capable or not willing to give good advice. We assume that agents aim to optimize their exchanges in terms of these goals under imperfect, local information without initial knowledge about others' characteristics or knowledge about the global network structure. Moreover, agents in our model are strongly adaptive, i.e. they acquire knowledge only in the course of interaction and apply simple search heuristics to select potential new partners. As we will show, these assumptions do not preclude the emergence of complex networks.

Memory and Interaction

2.3
The network consists of a population of N agents. Each agent i is characterized by his individual attractiveness for other agents, ai, representing for example his expertise, material wealth or beauty (0 < ai < 1). However, before interaction, every agent is ignorant not only of others' attractiveness but also of his own. This expresses that people learn their market value only through interaction with exchange partners. For simplicity, we assume that attractiveness is initially distributed uniformly and remains constant throughout the simulation. However agents' perception of their own and others' attractiveness changes through interaction.

2.4
The second attribute in which individual agents differ from each other is their memory, representing the knowledge they acquired in previous interactions. More specifically, at any point in time the memory of agent i contains for a number of other agents j the recollection i has of the payoff oij that he attained in previous interactions with j. To express bounded rationality, we assume that agents remember only a small subset of all system members. Formally, memory is represented as

$M_i^t = \{(o_{ij}, t_{ij})\}_{j \in J_i^t}$,

where $t_{ij}$ represents the time point at which the memorized interaction took place and $J_i^t$ represents the subset of all agents that i memorizes, $J_i^t \subseteq \{1..N\}$. One time step corresponds to an interaction round in which all the agents are activated once in random order to make decisions (asynchronous random activation). The subset of network members which an agent can remember is limited in size by a maximum memory size $M_c$, i.e. $\#J_i^t \le M_c$ for all t and i. We model the maximum memory size $M_c$ as a parameter that is equal for all agents, because we are mainly interested in effects of structural conditions that shape individuals' learning capacities through their ability to store information. Such a condition is for example the capacity of the information storage technology that is available in a society (cf. Mark 1998). Obviously, agents in our model have access only to a partial view of the network, and their knowledge remains local, i.e. it depends exclusively on the individual experiences of agents.

2.5
To model the process through which agents gradually get to know other network members, we assume that the agents represented in the memory of an individual i fall into two distinct subsets, the known agents $K_i^t$ and the unknown agents $U_i^t$. An agent j is unknown to i when i does not know the characteristics of j but is at least aware of j's existence. Agents of whose existence i is not aware are not represented at all in i's memory. The total set of all network members of whose existence i is aware is denoted $J_i^t$, where $J_i^t = U_i^t \cup K_i^t$ and $U_i^t \cap K_i^t = \emptyset$. To have a well-defined baseline situation, we assume that at the beginning of the simulation no one is known to anybody else, but each agent is aware of $M_o$ randomly chosen network members, who are classified as unknown. Only in the course of the simulation will i learn about the existence of more agents, and they may even become known to i through interaction. Accordingly, the recollection $(o_{ij}, t_{ij})$ of the payoff that i received after interaction with j at time point t is only defined in i's memory when j is known, i.e. $j \in K_i^t$. Both the memory as a whole and its partition into subsets of known and unknown agents may change over time, but the memory always remains bounded by the maximum memory capacity $M_c$.
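
To make the bookkeeping concrete, the following Python sketch shows one possible representation of an agent's bounded memory with its known/unknown partition. Class and method names (AgentMemory, record) are our own illustrative choices, and the forgetting rule is a simplified stand-in for equation (4) introduced later.

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    outcome: float   # o_ij: remembered payoff from interacting with j
    time: int        # t_ij: time of last direct interaction (0 = referral only)

class AgentMemory:
    """Bounded memory M_i^t: 'unknown' agents are mere names (U_i^t), 'known'
    agents carry an (o_ij, t_ij) recollection (K_i^t); |J_i^t| <= max_size (Mc)."""

    def __init__(self, max_size, initially_aware_of):
        self.max_size = max_size
        self.unknown = set(initially_aware_of)   # U_i^t
        self.known = {}                          # K_i^t: j -> MemoryEntry

    def aware_of(self):
        """J_i^t = U_i^t union K_i^t."""
        return self.unknown | set(self.known)

    def record(self, j, payoff, now):
        """Store a direct experience with j, forgetting someone if memory is full."""
        self.unknown.discard(j)
        if j not in self.known and len(self.aware_of()) >= self.max_size:
            if self.known:
                # simplified forgetting: drop the least salient known partner
                worst = min(self.known, key=lambda n: abs(self.known[n].outcome))
                del self.known[worst]
            else:
                self.unknown.pop()               # no experiences yet: drop a name
        self.known[j] = MemoryEntry(payoff, now)

mem = AgentMemory(max_size=200, initially_aware_of=range(150))
mem.record(j=3, payoff=0.4, now=1)
```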

2.6
In the interaction process, the agents are activated in random sequence. Once activated, an agent tries to establish Q interactions with agents contained in his memory. Following previous models of complex network dynamics, for example Barabási and Albert (1999), Watts (1999) and Walsh (2001), we assume that the initiation capacity Q is a system-wide parameter that represents universal technological or social constraints (e.g. communication technology or politeness norms) on the number of interactions an agent can initiate simultaneously. This parameter implements one aspect of the harshness of the exchange situation actors face. The smaller the initiation capacity relative to the total number of exchanges in which an agent can be involved, the harder it is to find a sufficient number of attractive partners.

2.7
The agent selects the targets of his interaction initiations to optimize the outcomes he expects to attain from the interaction. To reflect bounded rationality, we assume that decision making about action initiation is entirely adaptive or backward-looking (e.g. Macy and Flache 2002). That is, agents derive their expectations solely from the experience represented in their memory. Details of the optimization procedure will be explained below, when we turn to individual decision heuristics.

2.8
Interactions are dyadic and both agents need to agree before an interaction actually happens. Correspondingly, agents also face a limitation of the number of interactions they can be simultaneously involved in. We call this limitation their interaction-capacity C. More specifically, the interaction-capacity C restricts the maximum number of interactions per agent per time unit. Agents' interaction-capacity must be equal to or smaller than their memory capacity, C ≤ Mc. We introduce the interaction-capacity parameter because in many social network dynamics actors have limited resources to build up and maintain social ties, particularly when an interaction implies a certain work load or time investment. Notice that most previous models in the literature on complex networks take it for granted that actors have a potentially unlimited capacity to entertain simultaneous relationships. This applies in particular to the model of preferential attachment. This assumption may be applicable to situations where an interaction does not imply a work load or time investment for both parties, such as the deployment of a link from one web page to another. However, we consider unlimited capacity an exception rather than the rule in social network dynamics, where interactions usually require the allocation of some limited resource.

Actions, Outcomes and Payoffs from Social Interaction

2.9
Following Flache and Hegselmann (1999a), we model the actual exchange between agents as a support game. For simplicity, both actors have only two decision options in the constituent (one-shot) game, to provide advice (Cooperate), and to not provide advice (Defect). To further simplify, we assume that the interacting agents make these decisions simultaneously and independently. Accordingly, the outcomes of one interaction between two agents i and j can be DD, mutual defection, unilateral support of i by j or vice versa, DC or CD, or mutual cooperation, CC. The effects of outcomes on agents' goal attainment depend on the attractiveness levels of both participants. The larger the attractiveness of the focal agent, ai, and the smaller the attractiveness of his interaction partner, aj, the more advice i will give (and j receive), if i decides to actually cooperate with j (C). This expresses that in advice exchange a recipient may maximize his profit by getting advice from a very knowledgeable person, say a guru, while the guru may receive little new knowledge from an average advice seeker.

2.10
Technically, we assume that i's cooperation benefits the recipient with ai(1-aj) units of advice. However, when the focal actor fails to support his interaction partner, he provides zero units of advice. Conversely, i receives (and j gives) (1-ai)aj units if j decides to support i (C). At the same time, i receives no advice, when j declines to give support (D).

2.11
The payoffs that accrue to both actors from the exchange, pij and pji, follow from the balance of their respective costs and benefits. To model payoffs, mutual defection, DD, is used as the baseline outcome that yields a payoff of zero to both participants. In the outcome DD, actors neither receive advice nor provide it. Technically, we model i's gain from receiving advice from j, Gij and i's loss as a result of giving advice to j, Lij, as follows:


$$G_{ij} = (1 - a_i)\, a_j\, B \qquad (1)$$

$$L_{ij} = a_i\, (1 - a_j)\, E \qquad (2)$$

2.12
The parameters B and E are positive constants that weigh the benefit, B, of receiving one unit of advice against the effort costs, E, of providing the unit. It is a central assumption in our analysis that self-interested individuals may in principle benefit from advice exchange. To ensure this, we assume that the benefits of receiving a unit of advice exceed the costs of producing it. The cost to benefit ratio, E/B, parameterizes a further aspect of the harshness of the exchange situation actors face. The higher this ratio, the more severe the loss agents incur if they are unilaterally exploited compared to the potential benefits of mutual cooperation. In technology cooperations, harshness may vary, for example, between sectors with different firm sizes, where in sectors with small firms, agents are more vulnerable to financial losses caused by others' defection than in sectors with large firms. Similarly, in academic environments with high emphasis on competition and individual publication scores, advice giving or collaboration are potentially more risky and costly than in more collaborative environments.

2.13
Figure 1 illustrates the incentive structure that ensues for the constituent support game. The first entry in a cell refers to the payoff of the row player i, the second entry indicates the payoff of the column player j. Notice that the game is not necessarily symmetrical. Players obtain different outcomes in symmetrical choice combinations, unless they are equally attractive.


                                      Agent j
                          C                                D
    Agent i    C    (G_ij - L_ij , G_ji - L_ji)      (-L_ij , G_ji)
               D    (G_ij , -L_ji)                   (0 , 0)

Figure 1: Payoff Matrix of the Support Exchange Game

2.14
Figure 1 shows that the support game may be a true social dilemma in which cooperation conflicts with self-interest. There is nothing that guarantees reciprocation within one iteration. Particularly when players expect a short-term relationship and conditions are harsh, they may be tempted to withhold support. Exploiting a partner who provides advice is in the short run the most profitable outcome for a selfish agent, and to be exploited by a partner who fails to give advice is least attractive, regardless of how attractive the players are. Clearly, these incentives may face self-interested actors with a particularly difficult social dilemma, the Prisoner's Dilemma (PD). At the same time, the game is not necessarily a PD. In a PD, both players prefer mutual cooperation (CC) to mutual defection (DD) despite incentives to defect unilaterally. In our support game, it is possible that only the less attractive player is interested in mutual support. More precisely, when the focal player is not attractive enough in comparison with his counterpart, it may be impossible for the focal player to receive enough advice to be compensated for the investment in supporting his partner ($G_{ij} < L_{ij}$). To be precise, the support game is a Prisoner's Dilemma if and only if it holds for both players that $(1-a_i)\,a_j\,B > a_i\,(1-a_j)\,E$ (cf. Hegselmann 1998).
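
The payoff structure of figure 1 and the Prisoner's Dilemma condition can be expressed compactly. The sketch below (our own, with arbitrary illustrative attractiveness values and a cost to benefit ratio of E/B = 8/16) computes the payoff matrix for a pair of agents and tests whether their constituent game is a PD.

```python
def gain(a_i, a_j, B):
    """G_ij: i's gain from receiving advice from j (equation 1)."""
    return (1 - a_i) * a_j * B

def loss(a_i, a_j, E):
    """L_ij: i's loss from giving advice to j (equation 2)."""
    return a_i * (1 - a_j) * E

def payoff_matrix(a_i, a_j, B, E):
    """Payoffs (to i, to j) for the four outcomes of the support game (figure 1)."""
    G_ij, L_ij = gain(a_i, a_j, B), loss(a_i, a_j, E)
    G_ji, L_ji = gain(a_j, a_i, B), loss(a_j, a_i, E)
    return {
        ("C", "C"): (G_ij - L_ij, G_ji - L_ji),
        ("C", "D"): (-L_ij, G_ji),
        ("D", "C"): (G_ij, -L_ji),
        ("D", "D"): (0.0, 0.0),
    }

def is_prisoners_dilemma(a_i, a_j, B, E):
    """PD iff both players prefer mutual cooperation to mutual defection,
    i.e. G_ij > L_ij and G_ji > L_ji (cf. Hegselmann 1998)."""
    return gain(a_i, a_j, B) > loss(a_i, a_j, E) and gain(a_j, a_i, B) > loss(a_j, a_i, E)

# An unequal pair under a fairly harsh cost to benefit ratio E/B = 8/16:
print(payoff_matrix(0.2, 0.8, B=16, E=8))
print(is_prisoners_dilemma(0.2, 0.8, B=16, E=8))   # False: only the less attractive player gains from CC
```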

2.15
Notice that our model also implies that the capability to provide advice and the need for advice are inversely related. The more advice an agent needs in a certain period of time, the less he can give to others in the same period. We feel that this assumption is plausible for many social exchange relations. For example, consider the effects of variation in expertise on the individual neediness for advice in an academic community. Expert members may both have more knowledge to share with others and less need for advice themselves, as compared to less knowledgeable members. For another example, in a rural village only some farmers may be wealthy enough to afford expensive farming machinery. These farmers do not need to borrow others' machines, but they might lend their machinery to less wealthy farmers.

2.16
To model local and imperfect knowledge of agents, we assume that B and E, as well as the attractiveness levels, are not explicitly known to the agents. That is, agents pursue their goals facing a high degree of uncertainty. We explain in the next subsection which knowledge agents actually do have once they have decided to enter into an interaction.

Agent's Decision Heuristics

2.17
When an agent i is activated, he first selects another agent k to whom to propose an interaction in the current simulation round t.

2.18
Following recent models of socially rational decision heuristics, such as the one proposed by Lindenberg (2001), we assume that agents both seek to optimize their expected returns and tend to search for improvement through new, untried courses of action. Technically, this is modeled with an exploration probability $e_i^t$ with which an agent within i's current memory $J_i^t$ is chosen randomly. When the focal agent optimizes rather than explores, the interaction partner with the best expected payoff is chosen from the memory of known others, $K_i^t$. If there is more than one known agent with the same maximum expected payoff, one of the optimal agents is chosen at random.

2.19
Initially, agents select interaction targets at random, i.e. the exploration probability is 1. We assume that the tendency to explore declines as agents get to know more other network members. This implies that over time the exploration probability differs between agents. In particular, popular agents quickly decrease their exploration probability because they receive many interaction proposals and, accordingly, learn faster about their environment than agents who receive fewer interaction proposals. Technically, we model the exploration probability at time point t as the squared ratio of the number of unknown agents in i's memory to the number of all agents i is aware of at that point in time.


$$e_i^t = \left(\frac{\#U_i^t}{\#J_i^t}\right)^2 \qquad (3)$$
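
Read literally as displayed above, equation (3) squares the whole ratio of unknown to aware agents; a one-line translation under that reading (our interpretation, with illustrative counts) is:

```python
def exploration_probability(n_unknown, n_aware):
    """Equation (3): e_i^t = (#U_i^t / #J_i^t) ** 2.
    Exploration fades as the share of unknown agents in memory shrinks."""
    if n_aware == 0:
        return 1.0          # no acquaintances yet: explore at random
    return (n_unknown / n_aware) ** 2

print(exploration_probability(150, 150))   # 1.0 at the start of the simulation
print(exploration_probability(15, 150))    # 0.01 once most contacts are known
```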

2.20
After an interaction has been initiated, the targeted agent j needs to decide whether to accept the proposal of the initiator i. We assume that j will only accept the interaction if j has not reached its interaction-capacity C of concurrent interactions and j is aware of i's existence. If, furthermore, the initiator is not known to j, then the proposal will be accepted. If the initiator i is known to j, then j will accept the proposal if and only if he expects the outcome to be non-negative.

2.21
If the interaction proposal is accepted, agents i and j play one round of the support game. For simplicity, we exclude strategic free riding from the present version of our model and assume instead that agents are benevolent. We plan to take strategically opportunistic behavior into account in future work. For the time being, our assumption implies that an agent is cooperative, but not altruistic. To be precise, i cooperates with j if and only if the expected net benefit from i's point of view is non-negative, $(G_{ij} - L_{ij}) \ge 0$. In particular, even when an agent has no prior experience with his counterpart, he can reliably assess the expected net benefit from mutual cooperation. This assumption reflects that usually in social exchange agents will learn about each other's characteristics after a mutual decision to interact with each other and before they enter into the actual exchange. Accordingly, agents may actually defect in the exchange despite their benevolence. This happens when they find out that their interaction partner seems to be less attractive than they may have thought before they entered the interaction. For example, when two scientists decide to talk for some time at a conference because they are interested in each other's work, one of them may find out in the course of their discussion that the other is less knowledgeable than expected. As a consequence, he may invest little effort in actually giving useful advice and rather utter some general and vague comments about the colleague's work.

2.22
After an interaction is completed, agents update their respective memories. If at that point in time an entry for the interaction partner does not yet exist in an agent's memory, then a free memory slot will be allocated for this purpose. If there is no free slot available because the memory is full, then the agent will forget some other previous interaction partner and replace the corresponding old memory entry with the information about the most recent interaction. We assume that agents are most likely to forget previous partners that were experienced as unattractive. More technically, the agent k who will be forgotten by i is selected according to the following rule:


$$k = \operatorname{argmin}_{n \in J_i^t} \left( |o_{in}| \mid t_{in} < t \right) \qquad (4)$$

where t represents the current point in time.

2.23
In case i has prior experience with his most recent interaction partner, the corresponding memory entry will be updated to the average of the most recent payoff and the payoff stored in the memory, provided that i cooperated in his interaction with j. Notice that this implies that the weight of old memories declines after every new interaction with the same partner. Conversely, when i defects in his interaction with j, the expected outcome will be set to $-p_{ij}$. This reflects our assumption that agents are not strategic. While a unilateral defection is actually a positive outcome for the agent, the negative memory entry after a unilateral defection implies that agents do not try to exploit that partner again, but instead avoid future interactions because they failed to establish a mutually profitable interaction with j. Technically,


$$(o_{ij}, t_{ij}) = \begin{cases} \left(\dfrac{o_{ij} + p_{ij}}{2},\ \text{time}\right) & \text{if } t_{ij} > 0 \\ (p_{ij},\ \text{time}) & \text{if } t_{ij} = 0 \\ (-p_{ij},\ \text{time}) & \text{if } G_{ij} - L_{ij} < 0 \end{cases} \qquad (5)$$

where time refers to the current simulation time and $p_{ij}$ denotes the payoff attained in the most recent interaction. When an interaction proposal is rejected, the memory is also updated according to equation (5), with the payoff $p_{ij}$ set to zero.
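
Equations (4) and (5) together govern forgetting and updating. The following sketch (illustrative only; the dictionary-based memory and the i_cooperated flag are our own simplifications) shows one possible implementation of a single memory update.

```python
def update_memory(memory, j, payoff, now, i_cooperated, max_size):
    """memory maps partner j to (o_ij, t_ij). Implements equations (4) and (5):
    average with old direct experience, store -p_ij after i's own defection,
    and forget the partner with the smallest |o| when the memory is full."""
    if j not in memory and len(memory) >= max_size:
        worst = min((n for n in memory if memory[n][1] < now),
                    key=lambda n: abs(memory[n][0]), default=None)
        if worst is not None:
            del memory[worst]                      # equation (4): forget agent k
    old_o, old_t = memory.get(j, (0.0, 0))
    if not i_cooperated:                           # G_ij - L_ij < 0: i defected
        memory[j] = (-payoff, now)
    elif old_t > 0:                                # prior direct experience exists
        memory[j] = ((old_o + payoff) / 2.0, now)
    else:                                          # first direct experience with j
        memory[j] = (payoff, now)
    return memory

# A rejected proposal is treated as an interaction with payoff zero:
mem = update_memory({}, j=7, payoff=0.0, now=3, i_cooperated=True, max_size=200)
```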

2.24
How can agents ever get to know new network members, when their exploration is restricted only to the people they are already aware of? In previous work on the diffusion of reputations in social networks, two different mechanisms are used or sometimes combined: social learning through observation (e.g. Younger 2004) and gossip (e.g. Raub and Weesie 1990; Buskens 2002). Gossip models assume that reputation information spreads as a byproduct of interactions. To model gossip, we assume in our analysis that agents exchange some knowledge from their respective memories as an act of deference or courtesy after they have experienced a mutually profitable support interaction, that is, $p_{ij} > 0$ and $p_{ji} > 0$. More precisely, in this explicit memory exchange, the interaction partners i and j each tell their counterpart which other agent in their current memory has the best expected outcome. More technically, the referred agent k is found as $k = \operatorname{argmax}_{n}\left(|o_{in}| \mid n \neq j \wedge t_{in} > 0\right)$. To distinguish effects of explicit knowledge exchange from those of social learning through observation, we use implicit knowledge exchange as an alternative assumption. We assume that implicit knowledge exchange provides less reliable information than explicit gossip. While i can observe with whom j is currently interacting, in implicit knowledge exchange i does not know the payoffs that j derives from this interaction. Hence, in this mode i picks a randomly chosen agent from j's memory with whom j is currently interacting.

2.25
To model bounded rationality and uncertainty also in the knowledge exchange process, we assume that the knowledge an agent acquires always reflects the subjective perceptions of the sender of the information, but not necessarily the interests of the recipient. Concretely, agent i gives to agent j the expected outcome of an interaction with a third agent k from i's perspective, $o_{ik}$. Neither the recipient nor the sender has the capability to assess which agent is most suitable from j's point of view. Finally, we assume that in future interactions the newly acquired knowledge is also subject to an updating process similar to the process described above, with the one difference that the time point $t_{ik}$ to which the information refers in the recipient's memory is set to 0, to express that the knowledge of i about k stems from referral or observation rather than direct experience. Technically,


$$(o_{ik}, t_{ik}) = \begin{cases} \left(\dfrac{o_{ik} + o_{jk}}{2},\ 0\right) & \text{if } t_{ik} > 0 \text{ and } \exists\, o_{ik} \\ (o_{jk},\ 0) & \text{if } t_{ik} = 0 \end{cases} \qquad (6)$$

Knowledge stemming from referrals or observation only applies when first-hand knowledge is not available (denoted by $t_{ik} = 0$). Once direct-experience knowledge about agent k exists ($t_{ik} > 0$), agent i stops updating the expected outcome $o_{ik}$ on the basis of third parties' experience.
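
A possible rendering of the explicit memory exchange and the referral update is sketched below. Function names are our own, and, where the extracted form of equation (6) and the verbal description diverge, the code follows the verbal description: referred knowledge is stamped with time 0 and never overrides direct experience.

```python
def best_referral(memory, exclude):
    """Explicit memory exchange: the sender refers the known partner with the
    best (largest |o|) remembered outcome, excluding the recipient themselves."""
    candidates = {n: (o, t) for n, (o, t) in memory.items() if n != exclude and t > 0}
    if not candidates:
        return None
    return max(candidates, key=lambda n: abs(candidates[n][0]))

def accept_referral(memory, k, o_jk):
    """Referral update in the spirit of equation (6): referred knowledge about k
    is stored with time stamp 0 and does not override direct experience."""
    if k in memory:
        o_ik, t_ik = memory[k]
        if t_ik > 0:
            return memory                      # first-hand knowledge wins
        memory[k] = ((o_ik + o_jk) / 2.0, 0)   # average with earlier hearsay
    else:
        memory[k] = (o_jk, 0)                  # new, purely referred entry
    return memory

sender = {4: (0.7, 2), 9: (0.1, 1)}
recipient = accept_referral({}, best_referral(sender, exclude=5), sender[4][0])
```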

* Results

3.1
We are mainly interested in the effects of two key sets of conditions in our model on the topological structure of emergent social exchange networks. The first set of conditions refers to the harshness of the exchange game, in particular the cost to benefit ratio in exchanges and the difficulty of accessing exchange partners. The other conditions pertain to the learning capability of the agents, notably their memory size, exploration probability and knowledge exchange mechanism. Due to limitations of space, we cannot fully explore all corresponding parameters in the present paper. In the following we present results from experiments that manipulated the cost to benefit ratio in exchanges, E/B, to vary the harshness of the game. Furthermore, we varied the type of memory exchange (henceforth ME) to study effects of learning capability. To explore robustness, we also varied the population size N, the capacity of agents' memory Mc, the interaction-capacity C and the initiation-capacity Q.

3.2
To have a well-defined baseline scenario, we fix the remaining parameters of the model to values that make the emergence of complex networks possible, but not trivial. The number of agents initially present in the memory, Mo, is 150. Finally, the attractiveness of agents is uniformly distributed in the range between 0.05 and 0.95 with a precision of $10^{-3}$.

3.3
Our main interest is whether our local interaction model (henceforth LO-model) can generate complex network structures similar to those produced by previously proposed mechanisms that assume global knowledge, perfect maximization, or both. In order to compare our results with other models, we use the original model of Barabási and Albert (1999). This model is based on preferential attachment and uniform growth, and it is representative of the family of preferential attachment models which assume that agents have complete and perfect knowledge about the structure of the network.

3.4
To compare structural features of networks, we always use in the following the network structures that evolved after 100 units of time. After this number of interaction rounds, the dynamics of agents' exploration probability specified in equation (3) ensure that there is practically no more exploration, so that the presence of an undirected edge (i,j) indicates a stable mutual support relationship between i and j. Exploration rates tend to zero over time, because the ratio between unknown agents and the total number of agents represented in individuals' memory decreases monotonically due to learning through direct experience or by referral.

Table 1: General characteristics of emergent networks. LO refers to our local optimization model. BA denotes the Barabási-Albert model. Random refers to a random network generated with the Erdős-Rényi model. The initiation-capacity Q, corresponding to half the average degree (k/2), is set to 5. The interaction-capacity C is set to 150. The memory capacity Mc is set to 200. The population size N varies across {1000, 5000, 10000}, and the cost to benefit ratio E/B varies across {0, 3/16, 8/16}. Finally, we vary memory exchange between explicit and implicit, indicated as {E, I} respectively. For each condition the table charts the average path length l and the clustering coefficient C of the emergent network, together with the average path length and clustering coefficient of the random and the Barabási-Albert networks.
Network                  N=1000            N=5000            N=10000
                         l      C          l      C          l      C
Random                   3.27   0.0085     3.97   0.0016     4.30   0.00093
BA                       2.98   0.037      3.52   0.010      3.76   0.0044
LO(E/B=0, I)             2.47   0.16       3.22   0.05       3.56   0.034
LO(E/B=0, E)             2.51   0.20       3.16   0.085      3.42   0.065
LO(E/B=3/16, I)          3.66   0.052      4.09   0.038      4.29   0.032
LO(E/B=3/16, E)          3.71   0.034      4.14   0.030      4.37   0.029
LO(E/B=8/16, I)          3.98   0.012      4.45   0.015      4.68   0.017
LO(E/B=8/16, E)          3.91   0.027      4.57   0.040      4.84   0.038

3.5
Table 1 gives an overview of the effects of a range of simultaneous manipulations. We varied the population size N across three levels {1000, 5000, 10000}. The cost to benefit ratio (E/B) changes from a game that is no social dilemma at all, with zero costs of effort, to a mild social dilemma where the ratio is 3/16, to a fairly harsh game in which the costs are half the benefit, i.e. 8/16. Moreover, the table illustrates effects of the variation between explicit and implicit memory exchange, indicated as {E, I} respectively. Table 1 shows for every condition two main characteristics of the simulated networks, their average path length l and their clustering coefficient C. These are compared in table 1 to the corresponding characteristics, $l_{rand}$ and $C_{rand}$, of random networks of equal size and equal average degree. The average degree k corresponds to twice the initiation-capacity Q, which is set to 5 in all conditions.

3.6
As Watts and Strogatz (1998) have argued, this comparison allows us to identify whether our networks have complex network features, particularly the small-world property. To explain, the average path length l indicates the average number of network ties that form the shortest path between two nodes in the network. The clustering coefficient C measures the extent of overlap between the ego-networks of related nodes. Complex networks are characterized by an average path length that is similar to that of graphs with a random structure, while at the same time the degree of clustering is much higher. Moreover, in complex networks the average path length increases logarithmically in network size, just as in random networks. A comparison of the columns for the average path lengths $l_{LO}$ and the clustering coefficients $C_{LO}$ generated by the local interaction model with the corresponding properties of random graphs, $l_{rand}$ and $C_{rand}$, shows that our model indeed generates networks with small-world properties. More specifically, we find that across all conditions the LO-networks have average path lengths similar to random graphs of the same size and average degree, $l_{LO} \approx l_{rand}$, and the clustering of LO-networks is much higher than in these random graphs, $C_{LO} \gg C_{rand}$.

3.7
Table 1 furthermore shows that the LO-networks also satisfy the small-world criterion proposed by Walsh (1999). According to this criterion, $(l_{rand}\, C)/(l\, C_{rand}) \gg 1$ in a small-world network. Naturally, the networks generated by the Walsh model are also small-world structures, since this model has been designed to generate power-law networks.
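
The comparison underlying table 1 can be reproduced with standard graph tooling. The sketch below uses the networkx library (our choice of tooling, not necessarily the authors'); it computes l and C for a given undirected graph, the same quantities for an Erdős-Rényi random graph of equal size and density, and Walsh's small-world ratio.

```python
import networkx as nx

def small_world_stats(G, seed=0):
    """Average path length l and clustering C of an undirected graph G, plus the
    same quantities for a random graph of equal size and density, and Walsh's
    small-world ratio (l_rand * C) / (l * C_rand)."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    l, C = nx.average_shortest_path_length(giant), nx.average_clustering(G)
    R = nx.gnm_random_graph(n, m, seed=seed)
    giant_r = R.subgraph(max(nx.connected_components(R), key=len))
    l_rand, C_rand = nx.average_shortest_path_length(giant_r), nx.average_clustering(R)
    walsh = (l_rand * C) / (l * C_rand) if C_rand > 0 else float("inf")
    return {"l": l, "C": C, "l_rand": l_rand, "C_rand": C_rand, "walsh": walsh}

# Example with a Watts-Strogatz graph standing in for an emergent LO network:
print(small_world_stats(nx.watts_strogatz_graph(1000, 10, 0.05, seed=1)))
```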

3.8
However, table 1 also highlights an important difference between the Barabási and Albert (1999) model (BA) and the local optimization model (LO). The clustering coefficient of networks generated by the local optimization model is much higher than that of the corresponding BA-networks, $C_{LO} \gg C_{BA}$. If we follow Albert and Barabási (2002), these high clustering coefficients can be seen as an indication that our model corresponds well to the structures that many social networks exhibit. Remarkably, like the Barabási-Albert model, most other models in the literature fail to reproduce clustering coefficients that are comparably high.

3.9
To shed light on the specific types of complex networks generated by the local optimization model, we studied the emergent degree distributions. Figure 2 shows the degree distribution of networks produced with the LO-model under varying cost to benefit ratios and knowledge exchange mechanisms. The horizontal axes in the graphs represent the degree, scaled logarithmically, and the vertical axes indicate the frequency with which the corresponding degree occurs in the network, also scaled logarithmically. The typical pattern of a power-law complex network in such a graph is a straight declining line, indicating that the number of nodes with a particular degree decreases with the degree to the power of a constant. Figure 2 shows that, depending on the exact conditions of the exchange, different kinds of distributions are observed. We can identify three clearly distinct types:

  1. star-like distribution (left sub-figures within figure 2, E/B=0): These networks can be classified as central-periphery networks, where a small subset of nodes forms a highly clustered core network, and the majority of remaining nodes connects solely to the core. In other words, most nodes have either a small or a high number of relationships, but nodes with intermediate degree are rare.
  2. potential distribution (central sub-figures within figure 2, E/B=6/16 and E/B=7/16): In these networks, low degrees are most frequent and high degrees are least frequent. More specifically, the characteristic property of a potential distribution is its power-law structure, i.e. $P(k) \propto k^{\gamma}$, where γ is the exponent that indicates the rate of decline of the degree frequency. These networks, also called power-law or scale-free networks, are seen by many authors as the paradigmatic case of complex networks, due to their remarkable robustness and efficiency, as Albert and Barabási (2002) report. Most of the complex network literature focuses heavily on this type of network. Power-law networks generally have small-world properties, but not all networks with small-world properties are necessarily power-law. Potential distributions allow nodes to have a very high degree; therefore, the degree distribution displays the heavy-tail effect. This is not a unique feature of power-law networks: it is also observable in star-like networks, although their degree distribution does not follow a straight line on the log-log scale.
  3. exponential distribution (right sub-figures in figure 2, E/B=9/16): For exponential distributions the probability density is $P(k) \propto \theta^{-k}$. In practice, exponential degree distributions are similar to power-law distributions, with the main difference that the frequency of degrees tends to decline to zero faster as the degree increases; therefore the characteristic heavy tail of the power-law distribution disappears. This is exemplified by the different scaling of the horizontal axes in figure 2 for power-law and exponential structures, respectively. As figure 2 shows, in the exponential networks the frequency of nodes with a certain degree tends to decline to zero before the degree exceeds 100, whereas degrees up to the maximum of 400 can still occur in the power-law structures. (A sketch of how such degree distributions can be computed is given directly after this list.)
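
As referenced in the list above, the degree distribution P(k) and the cumulative distribution Pc(k) used in figures 2 and 3 can be computed as follows (an illustrative sketch; the toy degree sequence is ours):

```python
import collections
import numpy as np

def degree_distributions(degrees):
    """P(k): frequency of nodes with degree k, and the cumulative distribution
    Pc(k) = P(K >= k), a more stable estimator for small samples."""
    counts = collections.Counter(degrees)
    ks = np.array(sorted(counts))
    pk = np.array([counts[k] for k in ks], dtype=float) / len(degrees)
    pc = np.cumsum(pk[::-1])[::-1]          # tail (cumulative) distribution
    return ks, pk, pc

# On log-log axes a power law P(k) ~ k**gamma appears as a straight line,
# while an exponential P(k) ~ theta**(-k) bends downwards and loses the heavy tail.
ks, pk, pc = degree_distributions([1, 1, 2, 2, 2, 3, 5, 5, 8, 13])
```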



Figure 2: Degree distribution obtained through Local Optimization. The dots are the degree distribution P(k) (frequency of nodes with degree k). The line with the plus sign is the cumulative degree distribution Pc(k). Parameters: Q, or k/2 = 5, N = 10000, M = 500, C = 400. First row: explicit memory exchange. Second row: implicit. Cost to benefit ratio: E/B = {0,6/16,9/16}, for explicit memory exchange, and E/B = {0,7/16,9/16} for implicit





Figure 3: Cumulative degree distribution obtained through Local Optimization. Parameters: Q, or k/2 = 5, N = 10000, Mc = 200, C = 150. Left figure: explicit memory exchange. Right figure: implicit. Cost to benefit ratio: E/B = {0,4/16,6/16,8/16,11/16}. The solid line is the analytical power-law degree distribution with γ = -3 in the left figure, and γ = -3.3 in the right figure

3.10
To further illustrate the qualitatively different network patterns shown in figures 2 and 3, we replicated the simulations for a set of conditions that allows easy graphical representation of the network topology (N=200, Mc=150, C=150, Mo=10, Q=1, ME=explicit). Figure 4 shows three different networks obtained by the LO-model for three different levels of E/B. The graphs confirm visually the three qualitatively different regimes. The star-like structure in the leftmost figure is exemplified by two big clusters that each center around a few highly central nodes. The power-law pattern (middle figure) shows a few highly connected nodes in the center of the graph, with layers of nodes surrounding the center that are increasingly less connected as they are more distant from the center. Finally, the exponential regime (rightmost figure) exhibits a more modest level of centralization than the power law graph, visually represented by a less clearly accentuated center-periphery structure. To make these pictures, we used the algorithm proposed by Kamada and Kawai (1989) for displaying undirected graphs.


Figure 4: Networks obtained by the LO model. Parameters: N=200, Mc=150, C=150, Mo=10, Q=1, ME=explicit. Graph a) is for E/B=0 and corresponds to a star-like network. Graph b) is for E/B=2/16 and corresponds to a power-law network. Graph c) is for E/B=8/16 and corresponds to an exponential network.

3.11
To summarize, the above figures show that the local interaction model can generate not only power-law structures, like many other models in the literature, but also central-periphery networks and small-world networks with exponential degree distributions. To explore model robustness, we repeat in figure 3 the experiment for different memory sizes Mc and capacities C. The figure shows that the same qualitatively different structures appear as in the previous experiments. We only graph the cumulative degree distribution Pc(k) in figure 3 and the following figures, because it is a better statistical estimator for small samples than the degree distribution.

3.12
To what extent do the results of the local interaction model correspond to structures generated by a model that is specifically designed to reproduce complex network structures? To answer this question, we used the model proposed by Krapivsky et al. (2000), who extended the original preferential attachment algorithm of Barabási and Albert (1999). Preferential attachment as proposed by Barabási and Albert can be formulated as the assumption that the likelihood of receiving a new relationship increases with the node's connectivity degree $k_s$. Formally, $\pi[k_s(t)] = k_s(t) / \sum_s k_s(t)$. Krapivsky et al. extended preferential attachment to be non-linear, that is, $\pi[k_s(t)] = k_s(t)^{\alpha} / \sum_s k_s(t)^{\alpha}$. It turns out that this model generates three distinct regimes depending on α that clearly correspond to the qualitatively different patterns generated by the LO-model:

  1. Linear case (α = 1.0): this is the original Barabási-Albert model, which produces the well-known power-law degree distribution P(k) ∝ k^-γ, where γ = 3.
  2. Sub-linear case (α < 1.0): the degree distribution is a stretched exponential of the form P(k) ∝ k^-α e^(-Ak^(1-α)). However, when α tends to 1.0 the power-law part dominates the exponential and the distribution becomes very similar to a power-law with an exponent γ > 3.
  3. Super-linear case (α > 1.0): in this regime there is no analytical solution, but for α > 2.0 a winner-takes-all phenomenon emerges, such that almost all nodes connect to a single node; compare the star-like network of figure 4a obtained with the LO-model.


Figure 5: Cumulative degree distribution of non-linear preferential attachment

3.13
Figure 5 illustrates these different regimes. The displayed data have been generated using the original Barabási-Albert algorithm with two modifications: 1) replacing the linear preferential attachment by the non-linear form tuned by α, and 2) setting a maximum degree in order to match the capacity constraint of our LO-model.
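
The sketch below gives one plausible reading of this modified growth process: an attachment weight proportional to k^α combined with a hard cap on the degree. The function and parameter names are ours, and details such as the seed network are assumptions, since they are not reported for the original implementation.

# Sketch (Python): Barabási-Albert growth with non-linear preferential
# attachment (Krapivsky et al. 2000) and a maximum-degree cap, in the
# spirit of the data shown in figure 5. Names and seed network are ours.
import numpy as np

def nonlinear_ba(n_nodes, m=5, alpha=1.0, max_degree=150, seed=None):
    rng = np.random.default_rng(seed)
    degree = np.zeros(n_nodes)
    # start from a small, fully connected seed of m+1 nodes
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n_nodes):
        w = degree[:new] ** alpha           # attachment weight k^alpha
        w[degree[:new] >= max_degree] = 0   # saturated nodes get no new links
        if w.sum() == 0:
            break                           # nothing left to attach to
        p = w / w.sum()
        n_links = min(m, np.count_nonzero(p))
        targets = rng.choice(new, size=n_links, replace=False, p=p)
        for t in targets:                   # connect the new node to targets
            degree[new] += 1
            degree[t] += 1
    return degree                           # degree sequence of the network

# Example: degrees = nonlinear_ba(10000, m=5, alpha=0.75, max_degree=150)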

3.14
We can observe that non-linear preferential attachment yields different network structures that match those obtained with the LO-model. For α > 1.0 we obtain a star-like structure. For α = 1.0 we obtain a power-law. For 0.75 < α < 1.0 we obtain a power-law with an exponential influence, and for α < 0.75 the exponential part clearly dominates the power-law one. In the limiting case α = 0 there is no preferential attachment at all: the probability of node i being chosen by a new node entering the network is P(i) = 1/n, where n is the current number of nodes, and without preferential attachment the degree distribution is exponential.

3.15
We now turn to a closer inspection of the conditions that shape the topological features of the emergent networks in our simulations. Figures 2 and 3 show the simultaneous effects of two conditions, the cost to benefit ratio E/B of exchanges in the support game and the variation between explicit and implicit knowledge exchange. Moreover, the comparison between these two figures shows the effect of memory size Mc and capacity C. The results demonstrate that particularly the ratio E/B, as one aspect of the harshness of the exchange game, strongly affects the macro-structure that arises from agents' micro-behavior. At the same time, network structures seem to be more robust against variation in agents' learning capacity. We discuss the effects of both conditions in turn, starting with the cost to benefit ratio.

3.16
First, we analyze the effect of the cost to benefit ratio, focusing on the left sub-figure of figure 3. A star-like structure appears when actors face no social dilemma or only a very mild one, because the cost of giving support is very low compared to the benefit; this is the situation of mild harshness of the game (E/B=[0..3/16]). When costs increase and the game consequently becomes harsher (E/B=[4/16..7/16]), the star-like structure is replaced by a power-law. This in turn is replaced by an exponential structure as soon as the cost to benefit ratio reaches E/B ≥ 8/16.

3.17
Why do we see these effects? Let us first consider the case of zero costs. In this situation, sooner or later all agents become acquainted with the most attractive system members and many of them actually exchange cooperatively with highly attractive agents. The reason is that, due to zero costs, these agents can benefit even from exchanges with the least attractive system members and thus do not reject them. As a consequence, the most attractive members of the population soon become known throughout the network as the most desirable interaction partners. They receive many proposals for interaction, which are accepted until these agents exhaust their interaction capacity. In the course of this learning process, highly attractive members tend to gradually optimize their network, so that eventually a core arises in which highly attractive agents mainly relate to each other. This leaves little room for less attractive agents to relate to the core. Many less attractive members are frustrated in their search for further improvement by the frequent rejections they receive. The graphs in the left column of figure 2 demonstrate a paradoxical consequence of these dynamics: despite the low costs of exchange, a large number of agents has a very sparse network in this condition, with only ten or fewer exchange partners. This combination of a large number of sparse ego networks with a highly connected core generates the star-like pattern.

3.18
When the costs of the exchange are moderate relative to the benefit, a power-law structure emerges. In this situation, the most attractive agents are more likely to reject interaction proposals from less attractive members even before their interaction capacity is exhausted, or they may defect once an interaction has been established. As a consequence, knowledge about the most attractive agents spreads more slowly and to fewer recipients than in the zero-cost condition. Agents throughout the network hold more diverse perceptions of who may be a desirable interaction partner than in the zero-cost condition. The reason is that now there is no absolute top that is optimal for everybody. Because the social dilemma is harder, less attractive agents are more often exploited than in the zero-cost condition. Accordingly, these agents become more conservative in their partner search. Exploration tends to diminish and the system settles sooner into a stable configuration than in the zero-cost condition. This explains why the resulting degree distribution is thicker for intermediate degrees than in the star-like structure and thinner in the right tail. Now agents with intermediate attractiveness are also often chosen as interaction partners, and there are fewer agents who become highly popular stars in the network.

3.19
Finally, when exchange costs are high, the conditions under which agents are willing to cooperate are very restrictive. A focal agent is now only willing to cooperate when his interaction partner has very similar or higher attractiveness. As a consequence, the large majority of network members finds that most of the interactions they attempt fail, or that they get exploited in these interactions. Gradually most network members stick to those partners with whom they have positive experiences and interact only with a small subset of all network members. Interaction networks tend to be less optimized than under milder social dilemma conditions. In particular, in many instances potential cooperation partners of the most attractive network members fail to find them before they stop exploration. Correspondingly, these most attractive members receive fewer choices than in milder circumstances. This explains why the resulting degree distribution is thinner in its right tail than the power-law pattern that we found for moderate social dilemmas.

3.20
However, the topology of the simulated networks is not only affected by costs and benefits of the exchange but also by conditions that shape the efficiency of agents' learning process. Figures 2 and 3 show that under most conditions the network structures for implicit and explicit knowledge exchange are very similar, yet in some cases there are differences. For instance, in figure 3 we can observe that when E/B=4/16 explicit memory exchange ends up in a power-law distribution, while implicit memory exchange yields a star-like distribution.

3.21
To explore the effects of memory exchange with more precision, we conducted a more fine-grained analysis of the interaction effect between the cost to benefit ratio and the mode of memory exchange upon network topology. We simulated both explicit and implicit memory exchange for cost-benefit ratios varying in the range E/B: [0..14/16]. Further parameters of the model are equal to those used for figure 3, that is: N=10000, Mc=200, C=150 and Q=5. To distinguish network structures quantitatively we use two different measures:
  1. the determination coefficient r² of the power-law regression on the cumulative distribution. This coefficient indicates how well the observed degree distribution can be fitted to the cumulative power-law function Pc(k) ∝ k^γ. A determination coefficient equal to or larger than r² = 0.95 implies a good fit of the regression (for N=1000 we lower the acceptance threshold to 0.9, because for this small number of nodes the statistics are very sensitive to random perturbations). In that case the observed degree distribution can be classified as a power-law distribution with exponent γ. Unfortunately, this measure is problematic when the statistics are poor. In this particular case the maximum degree is relatively small, which makes it difficult to discriminate between an exponential distribution and a power-law with an exponential cutoff. In order to overcome this problem we rely on an additional measure, the variance of the degree distribution.
  2. the degree variance ratio ⟨k²⟩/⟨k⟩². This ratio compares the second moment of the degree distribution with the square of the average degree. In the case of a random network this ratio would be close to 1.0, since nodes' degrees deviate little from the average. However, when a heavy tail is observed there are nodes whose degree is much higher than the average degree, and the ratio is much larger. Therefore, this measure is very useful to discriminate between power-law and exponential structures. On the other hand, this measure does not discriminate between power-law and star-like structures, since both distributions display heavy tails. In that case, we rely on the regression's determination coefficient.

3.22
More precisely, to determine the type of distribution we use an algorithm that works as follows:
If the degree variance ratio ⟨k²⟩/⟨k⟩² of the LO network is smaller than that of the non-linear BA network with α = 0.75, then the distribution is exponential. Otherwise, we check the determination coefficient r² of the power-law regression for the LO network. If r² < 0.95, then the distribution is star-like. Otherwise, it is power-law.
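
Under our reading of this procedure, the two measures and the decision rule could be computed as in the sketch below; the variable benchmark_ratio stands for the degree variance ratio of the non-linear BA benchmark run with α = 0.75, and all names are placeholders rather than part of the original implementation.

# Sketch (Python) of the classification rule described in 3.21 and 3.22.
# 'degrees' is the degree sequence of an LO-model network and
# 'benchmark_ratio' is <k^2>/<k>^2 of the alpha=0.75 BA benchmark.
import numpy as np

def variance_ratio(degrees):
    d = np.asarray(degrees, dtype=float)
    return (d ** 2).mean() / d.mean() ** 2          # <k^2>/<k>^2

def powerlaw_r2(degrees):
    # r^2 of a linear regression of log Pc(k) on log k
    d = np.asarray(degrees)
    ks = np.arange(1, d.max() + 1)
    pck = np.array([(d >= k).mean() for k in ks])
    mask = pck > 0
    x, y = np.log(ks[mask]), np.log(pck[mask])
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

def classify(degrees, benchmark_ratio, r2_threshold=0.95):
    if variance_ratio(degrees) < benchmark_ratio:
        return "exponential"
    if powerlaw_r2(degrees) < r2_threshold:
        return "star-like"
    return "power-law"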




Figure 6: Classification into the star-like, power-law and exponential regimes of the networks generated by the LO-model. Parameters: N=10000, Mc=200 and C=150. The cost to benefit ratio E/B of the support game varies from 0 to 14/16. Left sub-figure: explicit memory exchange. Right sub-figure: implicit memory exchange. r² is the determination coefficient of the regression over the cumulative degree distribution. The dashed line is the degree variance ratio ⟨k²⟩/⟨k⟩² for a network generated using the non-linear BA-model with α set to 0.75, maximum degree set to 150, number of nodes set to 10000 and m set to 5, so that ⟨k⟩ = 10. The dotted lines identify the boundaries between regimes.

3.23
Figure 6 reveals profound interaction effects between the ratio of costs to benefits in the exchange game and the type of memory exchange on the qualitative structure of emergent networks, indicated by the different sizes of the regions of star-like, power-law and exponential degree distributions in the two sub-figures. Broadly, the pattern that we find is that both the cost to benefit ratio and the efficiency of memory exchange determine how difficult it is for agents to find suitable exchange partners. The higher this difficulty, the closer the network matches the exponential distribution. Conversely, the easier suitable exchange partners can be found, the more similar the patterns become to a star-like central-periphery structure. In between these two extremes, we find networks with power-law degree distributions. To illustrate these interaction effects we focus on two specific scenarios.

3.24
Consider the case where E/B=6/16. With explicit memory exchange (left part of figure 6) the LO-model generates a power-law network, whereas we obtain an exponential structure under implicit memory exchange. Why this difference? In this condition, the costs of giving support are relatively high. Accordingly, agents often experience exploitation in their interactions, or rejection of their proposals to interact. As a consequence, agents' memory contains a lot of information about suboptimal or even non-profitable exchange partners. Implicit memory exchange exacerbates the difficulties of finding suitable partners in this condition, because agents receive only randomly selected references from their interaction partners. Thus, with implicit memory exchange it is very likely that after an interaction an agent receives a reference to at best a mediocre third party, so that the process of finding attractive partners is slow and inefficient in comparison with explicit memory exchange. With explicit exchange, the references agents receive are filtered, so that bad entries are not propagated. As a result, highly attractive agents are found by a larger number of advice seekers, so that relatively more highly attractive network members obtain a high degree. This explains why in this particular condition explicit memory exchange results in a power-law network while implicit memory exchange yields an exponential distribution.

3.25
Effects of memory exchange are strikingly different when we look at the other end of the spectrum, where the social dilemma actors face is mild. Consider the case E/B=3/16. Figure 6 shows that in this condition explicit memory exchange entails a power-law structure, while under implicit memory exchange the structure is closer to a star-like pattern than to a power-law network, as can be observed in the bottom-left sub-figure of figure 2. Notice that this is just the opposite of the effect that we observed before. This time, explicit memory exchange generates the pattern that corresponds to the less favorable social dilemma structure as compared to the result of implicit memory exchange, while in the previous case this was reversed. The reason is that under explicit memory exchange references to third parties are filtered. As a consequence, all agents recommend more or less the same very limited subset of potential interaction partners, who, in turn, are not capable of processing all the interaction proposals they receive. The frequent occurrence of rejections introduces a large amount of noise in the system. By noise we mean here that many agents get distorted perceptions of the real attractiveness of the other network members, so that they cannot find the best available interaction partner. By contrast, in implicit knowledge exchange the references are not filtered, but random. Here, not all agents try to get connected to the same subset of stars. The subset of potential partners that are perceived as highly attractive is wider, so that fewer interaction requests are rejected and more new interactions take place. In the long run, this fosters the efficiency of agents' learning process and highly attractive agents can be found and accessed by more network members. This, in turn, drives the system more towards a star-like structure than under the explicit memory exchange mechanism, where agents stop exploring sooner due to frequent rejections.

3.26
So far, we have shown the effect of the cost to benefit ratio E/B and the memory exchange process ME for a particular setting of the model, concretely, when the number of agents N is 10000, the memory size Mc is 200, and the capacity C is 150. In order to test the consistency of the model we conducted the same analysis for other settings, changing the parameters N, Mc, C and Q. The results are summarized in table 2.

Table 2: Regime boundaries for different settings in terms of the cost to benefit ratio (E/B). Settings: Mo=150 and A) N=10000, Mc=500, C=400, Q=5, B) N=10000, Mc=500, C=400, Q=10, C) N=10000, Mc=200, C=150, Q=5, D) N=5000, Mc=200, C=150, Q=5, E) N=1000, Mc=200, C=100, Q=5
Setting Mem. Exch. Star-like Reg. Power-law Reg. Exponential Reg.
A Explicit [0..5/16] [6/16] [7/16..14/16]
A Implicit [0..6/16] [7/16] [8/16..14/16]
B Explicit [0..3/16] [4/16..6/16] [7/16..14/16]
B Implicit [0..4/16] [5/16] [6/16..14/16]
C Explicit [0..2/16] [3/16..6/16] [7/16..14/16]
C Implicit [0..4/16] [5/16] [6/16..14/16]
D Explicit [0..2/16] [3/16..7/16] [8/16..14/16]
D Implicit [0..4/16] [5/16] [6/16..14/16]
E Explicit [0..2/16] [3/16..4/16] [5/16..14/16]
E Implicit [0..3/16] [4/16] [5/16..14/16]

3.27
Table 2 shows that, despite different population sizes N, memory sizes Mc, capacities C and initiation capacities Q, all the analyzed settings are consistent with the previous simulation runs. All settings display the three regimes, mainly distinguished in terms of the cost to benefit ratio, or support game harshness. Furthermore, the kind of memory exchange used affects the boundaries and the size of the power-law regime in a similar way as for setting C, which was the subject of the detailed analysis. Therefore, the detailed analysis carried out for setting C is also applicable to the other settings.

3.28
Finally, we wish to test explicitly to what extent our LO-model can match results obtained from the original Barabási and Albert (1999) model. For this purpose, we compared the results of both models with regard to the simultaneous effects of the population size N and the initiation capacity Q. We have already shown the effect of the population size in the previous results, but we did not yet compare the degree distributions obtained for the two models. Figure 7 shows that the networks generated by the LO-model scale similarly to those obtained from the BA-model, both for the effect of N and for the effect of Q on degree frequency. Moreover, the networks for both models seem to be very similar.



Figure 7: Comparison of degree distributions for the local optimization model and the BA-model. For the BA-model we use the original algorithm proposed by Barabási and Albert. In the left sub-figure we compare the graphs generated by both models for different network sizes N. In the right sub-figure we compare the graphs for different initiation-capacities Q, which correspond to the number of links that a node can deploy in the BA-model, that is to say, m. For the LO-model the memory exchange is always implicit and Mo is 150. See the legend for the rest of the parameters. Notice that the LO-model contains a capacity constraint C, which is set to C=150 and C=400 for the left and right sub-figures, respectively.

* Summary and Conclusion

4.1
We have argued that a sociologically plausible model of the micro-mechanism underlying complex network emergence needs to meet the following criteria. First, the locus of decision making about network changes should be at the level of individual actors. Second, individual decisions should be derived from the optimization of individual goals based on boundedly rational decision heuristics. Third, individual agents should use only local and imperfect knowledge of network characteristics to make decisions. Finally, the model should not trivialize conflicts between agents' interests with regard to the network changes they prefer. We proposed an agent-based computational model that satisfies these requirements, the local-interaction model (LO-model). The model describes network change as a consequence of a social exchange process in which agents differ in attractiveness, are initially unaware of their own and others' attractiveness, are free to change partners and gradually acquire, through interaction, local and partially distorted knowledge about others' attractiveness. In our model, agents seek to optimize their exchange relations based on simple backward-looking decision heuristics.

4.2
The model's assumptions rely on general theories of boundedly rational human decision making (e.g. Simon 1982; Todd and Gigerenzer 2003) that have partially been validated in experimental research, but have not been specifically tested for individual decision making in making and breaking relations in social networks. While we believe that this is a significant step forward with regard to previous models of complex network emergence, we readily admit that more careful empirical studies of social network dynamics are needed to test the underlying micro assumptions. One approach that promises to be particularly fruitful for this is the recently developed technique of 'actor oriented statistics' (Snijders 2001), which is specifically designed to disentangle, based on longitudinal network data, the various individual motives that may drive network change.

4.3
We conducted simulations to explore how social conditions identified by the local interaction model may shape the structure of emergent networks. First and foremost, we found that despite the minimal and sociologically plausible assumptions we made about agents' knowledge and cognition, our model can replicate the celebrated small-world and power-law network structures that have recently received much attention in the networks literature, due to their high robustness and efficiency for information propagation. We believe that for social network analysis, this is a significant improvement compared to previous models based on the mechanisms of preferential attachment and random rewiring. These models needed to employ implausible assumptions of globally available knowledge about structural positions or failed to explicate the individual goals and cognitions that motivate actors' decisions to make or break ties.

4.4
The second main result of our analysis was that the topological structure of the emergent social networks depends heavily upon the harshness of the exchange game, in particular the ratio of costs to benefits in a social exchange, and upon conditions that shape the learning capacities of agents, in particular the accuracy of the information they receive about attractive exchange partners from their network relations. We show that it depends on the combination of these conditions whether star-like, small-world or power-law network structures emerge. Broadly, the pattern that we find is that both the cost to benefit ratio and the efficiency of memory exchange determine how difficult it is for agents to find suitable exchange partners. The higher this difficulty, the closer the network matches the exponential distribution. Conversely, the easier suitable exchange partners can be found, the more similar the patterns become to a star-like central-periphery structure. In between these two extremes, we find networks with power-law degree distributions.

4.5
Our results not only show that complex networks can in principle arise from sociologically plausible behaviors of individual agents; we have also obtained substantive insights that could not be derived from previous models of complex network emergence. The main reason we could do this is that our model contains parameters that relate to the costs, benefits and risks of social exchange actions, something that was not present in the mechanistic micro-assumptions based on random rewiring or preferential attachment. Accordingly, we believe that our insights may be fruitful for fields of research such as the study of social support or of the emergence of trust. In a society that faces a high level of harshness, e.g. in terms of the scarcity of material resources or harsh climatic living conditions, social support is both particularly needed and prone to social dilemma problems. Our model suggests that this should also show up in the network structures that arise in such a society. We find in our simulations that, paradoxically, the harsher the conditions are, that is, the more costly it is for actors to support others, the sparser and the less optimized are the emergent structures of support exchange. Such a hypothesis might be tested in a cross-national comparison that addresses the relationship between economic wealth and the structure of social networks in different countries or regions. With regard to reputation and trust we find a similar paradoxical consequence of harshness. In harsh conditions actors are particularly vulnerable to exploitation by untrustworthy interaction partners. However, according to our analysis it is exactly here that emergent networks are least efficient for spreading the reputational information that actors need to protect themselves against exploitation. This, in turn, may imply the testable hypothesis that in poorer societies people need more time to develop trust in new potential interaction partners (e.g. immigrants), because it takes longer before they have information available that allows them to assess these strangers' reputation.

4.6
Our work also adds a new note to research studying the effects of social network structures on cooperation in social dilemmas. Cohen et al. (2001) have argued that it is mainly the stability of interaction structures in social networks and not so much their clustering that is needed to sustain cooperation. However, in their work they have treated interaction structures as an exogenous condition. We show instead how in the search for both attractive and cooperative partners agents may self-organize networks that are clustered and stable and sustain cooperative behavior in social dilemmas. Emergent complex networks may be important for cooperation in a heterogeneous setting, because they match those pairs of agents who find each other sufficiently attractive to cooperate with each other in an exchange. This result also resonates with another recent study which emphasized that cooperative role models may drive the self-organization of complex networks that sustain cooperation in social dilemmas (Eguiluz et al. 2005).

4.7
Our work can be improved in a number of respects. For one thing, the model employs a number of assumptions for which we do not yet know how robust the model results are against variation in specific details. This concerns, for example, details of agents' exploration of the search space or the way agents decide to accept or reject others' requests for interaction. Moreover, we make some simplifications that may seem unrealistic. In particular, we assume that agents are somehow capable of assessing others' attractiveness accurately once they have entered an interaction with these others. While this is not implausible, it certainly does not apply to all realms in which social exchange occurs. Alternative assumptions that can easily be integrated into our model are that agents learn each others' attractiveness only through the course of repeated interactions, or that they can only assess it with some error margin.

4.8
Finally, we assume that agents are benevolent in the sense that they always cooperate with exchange partners as long as mutual cooperation is preferable to mutual defection. The latter assumption in particular neglects the problem of opportunistic behavior that much of the social dilemma literature (e.g. Axelrod 1984) deals with. However, we wish to point out that we did not entirely trivialize the problem of opportunism. Exploitation and unilateral defection are possible and do occur in our model, but more sophisticated decision makers might exhibit these behaviors even more than our benevolent agents. Based on strategies already explored by Axelrod, a possible model extension here could be that agents vary in the extent to which they try to test the waters with occasional defections and then continue to defect unless the exchange partners retaliate. This obviously adds an extra dimension to the difficulties of finding an appropriate partner in our exchange game. We expect that the basic conclusion of our analysis will also hold under this complication: the more agents follow opportunistic strategies, the more difficult it is for them to find suitable partners, and the more the emergent network will exhibit an exponential degree distribution rather than a star-like structure.

4.9
Notwithstanding the need to explore in future work alternative, perhaps more realistic sets of assumptions, we believe we have offered an explicit model of sociologically plausible micro-processes that can generate a range of qualitatively different complex network structures.

* Pseudo-code of the Model

In order to facilitate the replication of our results we include a brief but comprehensive sketch of our model in pseudo-code. For the sake of clarity we focus only on the core of our model, mainly the agent behaviour. We do not cover the manipulation of data structures, graphical display or the collection of results.

Declarations

int N;                 // number of agents
int Mc;                // memory size
int Mo;                // initial number of agents in memory
int Q;                 // number of interactions that can be initiated
int C;                 // interaction capacity
double B;              // Benefit
double E;              // Effort
ME={Explicit,Implicit} // Memory exchange

Agent {
	int id;               // agent's id
	double alpha;         // expertise of the agent, known as 'a' in the model
	MemoryEntry memory;
}

MemoryEntry {
  int agent;      // agent in memory
  double expout;  // expected outcome of interacting with this agent
  int time;       // time of the last interaction.
                  //    if time<0  agent belongs to the unknown set
                  //    if time==0 agent belongs to the known set
                  //    if time>0  an interaction occurred at that time
}

Initialization

// initialization of the model
procedure init() {

  Agents agents = new Agent[N];
  for each agent in agents {
    // init expertise of the agent in the [0.05..0.95] range
    // with a 10^-3 precision
    agent.alpha = round((random()*0.90+0.05)*1000)/1000.0;
    agent.memory = new MemoryEntry[Mc];
    for each me in agent.memory up to Mo {
      me.time=-1;
      me.expout=0.0;
      me.agent = random(N); 
      // check that the me.agent is different than the agent itself 
      // and it's not already in the agent's memory.
    }
  }
}

Agent's Behaviour

procedure main() {
  init();
  time=1;

  do {
    createPermutationOfAgents();	
    do {
      agent=getAgentFromPermutation();
      run(agent,time);	
      removeAgentFromPermutation(agent);	
    } while(!isPermutationEmpty());
    time=time+1;
  } while(time<100);

} //end main

// agent's behaviour
procedure run(agent, time) {
  // choose a set of agents to interact with (<=Q)
  agentSet = chooseAgentsToInteract(agent);
  for each agentToInteract in agentSet {
   // is agentToInteract accepting the agent's proposal 
   //for interaction?
   if (acceptInteractionProposal(agentToInteract,agent)) {
     [ag1_outcome,ag2_outcome,ag1_netbenefit,ag2_netbenefit] 
                      = playGame(agent,agentToInteract);
      updateMemory(agent,agentToInteract,
                   ag1_outcome,ag1_netbenefit,time);
      updateMemory(agentToInteract,agent,
                   ag2_outcome,ag2_netbenefit,time);		
      // if interaction is positive for both agents, 
      //then exchange memories
      if (ag1_outcome>0 and ag2_outcome>0)
        exchangeMemories(agent,agentToInteract,time);
   }
   else {
     interactionIsRefused(agent,agentToInteract);
   }
  } 
}

// agent chooses a set of agents to interact with (up to Q). 
// The set of agent is chosen at random or maximizing the 
// expected outcome depending on the exploration 
// probability
function chooseAgentsToInteract(agent) {
  agentSet = new List();

  // we have to calculate the exploration probability. So we
  // must know how many agents in agent's memory are known or
  // unknown (K and U set in section 2.1)
  unknown=0;
  known=0;
  for each mem in agent.memory {
    if (mem.agent>=0) {
      if (mem.time<0) unknown=unknown+1;
      else known=known+1;
    }
  }

  // equation 3
  explorationProb = (unknown / (unknown+known))^2.0;

  i=0;
  do {
    if (explorationProb < random()) {
      // agent is not exploring, 
      mem = maximum(agent.memory);
      // mem is the memory entry with maximum expected outcome
      // (mem.expout) provided that mem.agent is not already 
      // in agentSet
      agentSet.add(mem.agent);
    }
    else {
      // agent is exploring
      // choose one agent at random from agent's memory. 
      // Check that mem.agent is not already in agentSet.
      mem = random(agent.memory); 
      agentSet.add(mem.agent); 
    }
    i=i+1;
  } while(i<Q);

  return agentSet;
}

// the agent must decide whether to accept the proposal
// made by initiatorAgent
function acceptInteractionProposal(agent,initiatorAgent) {
  mem=getMemoryEntry(agent,initiatorAgent);
  if (isNull(mem)) {
    if (getCurrentInteractions(agent,time)<C) return true;
    else return false;
  }
  else {
    if (mem.expout>=0.0 
         and getCurrentInteractions(agent,time)<C) return true;
    else return false;
  }
}

// the agent updates its memory after the interaction
// equation 5
procedure updateMemory(agent, partner, outcome, netBenefit, time) {
  mem=getMemoryEntry(agent,partner);
  if (isNull(mem)) {
	// the agent did not have partner in its memory
	memToR=getLessAttractiveMemEntry(agent,time);
	memToR.agent = partner;
	memToR.time = time;
	if (netBenefit>=0) memToR.expout = outcome;
	else memToR.expout = -outcome;	
  }
  else {
    // the agent did have partner in its memory
    if (netBenefit>=0) {
      // agent cooperated in the interaction with partner 
      if (mem.time>0) mem.expout=(mem.expout+outcome)/2.0;
      else mem.expout = outcome;
    }
    else {
      // agent defected in the interaction with partner 
      mem.expout = -outcome;
    }
    // update the interaction time, if t>0 means that the agent 
    // and partner have interacted, otherwise the knowledge 
    // about partner comes from the memory exchange process
    mem.time=time;
  }
}

// returns the memory entry from agent's memory whose 
// mem.expout (expected outcome) is minimum in 
// absolute value. Provided that mem.time < time
function getLessAttractiveMemEntry(agent,time);

// remove the memory entry memToRemove from agent's memory
procedure removeMemoryEntry(agent,memToRemove);

// add information about partner (with expected outcome expoutPartner)
// to agent's memory
procedure addMemoryEntry(agent,partner,time,expoutPartner) {
  mem = getMemoryEntry(agent,partner);
  if (!isNull(mem)) {
    // partner was already in agent's memory
    if (mem.time<=0) {
      if (mem.time<0) mem.expout=expoutPartner;
      else if (mem.time==0) 
            mem.expout=(mem.expout+expoutPartner)/2.0;
      mem.time=0;
    }
  }
  else {
    // partner was not in agent's memory
    memToR = getLessAttractiveMemEntry(agent,time);
    if (fabs(memToR.expout)<fabs(expoutPartner)) {
      // replace the old entry by the new one
      memToR.agent = partner;
      memToR.expout = expoutPartner;
      memToR.time = 0;
    }
  }
}

// agents exchange information about other agents
procedure exchangeMemories(agent, partner, time) { 
  if (ME==Explicit) {
    memFromAgent = chooseMEExplicit(agent);
    memFromPartner = chooseMEExplicit(partner);
  }
  else {
    memFromAgent = chooseMEImplicit(agent,time);
    memFromPartner = chooseMEImplicit(partner,time);
  }

  addMemoryEntry(agent,memFromPartner.agent,time,
                  memFromPartner.expout);
  addMemoryEntry(partner,memFromAgent.agent,time,
                  memFromAgent.expout);
}

// the agent reduces the expected outcome after 
// partner's rejection to interact
procedure interactionIsRefused(agent,partner) {
  mem = getMemoryEntry(agent,partner);
  mem.expout = (mem.expout+0.0)/2.0;
}

// return a memory entry whose mem.time is bigger than 0, which means
// that at least one interaction between agent and mem.agent has
// occurred. The chosen memory entry is the one with the maximum
// mem.expout in absolute value. In the case of a tie the returned
// memory entry is chosen at random among those with the maximum
// expected outcome.
function chooseMEExplicit(agent);

// return a memory entry whose mem.time is equal to the current time, 
// which means a current interaction of the agent. The memory entry 
// is chosen at random
function chooseMEImplicit(agent, time);

// returns the current number of interactions of the agent, which is 
// the number of memory entries in its memory with mem.time equal to
// the current time
function getCurrentInteractions(agent,time);

// play the game G
function playGame(agent1, agent2) {
  double Gij, Gji, Lij, Lji;
  double outcomeAg1, outcomeAg2;
  double netBenefitAg1, netBenefitAg2;

  Gij = (1.0-agent1.alpha)*(agent2.alpha)*B;
  Lij = (1.0-agent2.alpha)*(agent1.alpha)*E;

  Gji = (1.0-agent2.alpha)*(agent1.alpha)*B;
  Lji = (1.0-agent1.alpha)*(agent2.alpha)*E;

  netBenefitAg1 = Gij-Lij;
  netBenefitAg2 = Gji-Lji;

  if (netBenefitAg1>=0.0 and netBenefitAg2>=0.0) {
    // both agents cooperate
    outcomeAg1=Gij-Lij;
    outcomeAg2=Gji-Lji;
  }
  else if (netBenefitAg1>=0.0 and netBenefitAg2<0.0) {
    // agent2 defects and agent1 cooperates
    outcomeAg1=-Lij;
    outcomeAg2=Gji;
  }
  else if (netBenefitAg1<0.0 and netBenefitAg2>=0.0) {
    // agent 1 defects and agent2 cooperates
    outcomeAg1=Gij;
    outcomeAg2=-Lji;
  }
  else {
    // both agents defect
    outcomeAg1=0.0;
    outcomeAg2=0.0;

  }
 
  return [outcomeAg1,outcomeAg2,netBenefitAg1,netBenefitAg2];
}

// create a random permutation so that agents are executed only once per
// simulation step and their order is random
function createPermutationOfAgents(); 
// remove agent from the permutation
procedure removeAgentFromPermutation(agent)
// return when all the agents have been already chosen
function isPermutationEmpty();
// returns the memory entry from agent's memory
// corresponding to agentTo (agentTo==mem.agent).
// If it does not exist return null
function getMemoryEntry(agent, agentTo);


* Acknowledgements

The authors would like to thank Michael W. Macy for his careful and inspiring discussion of an earlier version of this paper, and the anonymous JASSS reviewers for their insightful comments and suggestions. The first and fourth authors wish to acknowledge the financial support of the Spanish Ministry of Education and Universities, project SIMA CICYT-TIC09279-C02-02, and the Catalan Autonomous Government, project e-catalunya. The second author wishes to acknowledge the financial support of the Netherlands Organization for Scientific Research (NWO) under the Innovational Research Incentives Scheme (VIDI).

* References

ALBERT, R., Barabási, A.-L., 2000. The internet's achilles' heel: Error and attack tolerance in complex networks. Nature 406, 378-382.

ALBERT, R., Barabási, A.-L., 2002. Statistical mechanics of complex networks. Review of Modern Physics 74, 47-97.

AXELROD, R., 1984. The Evolution of Cooperation. Basic Books, New York.

BARABASI, A.-L., Albert, R., 1999. Emergence of scaling in random networks. Science 286, 509-512.

BLAU, P., 1964. Exchange and Power in Social Life. New York: Wiley.

BONACICH, P., 1992. Social Dilemmas. Theoretical Issues and Research Findings. Pergamon Press, Oxford, Ch. Communication Networks and Collective Action, pp. 225-245.

BUSKENS, V., 2002. Social Networks and Trust. Kluwer Academic Publishers.

CARLSON, J., Doyle, J., 2000. Highly optimized tolerance: Robustness and design in complex systems. Physical Review Letters 84 (11), 2529-2532.

COHEN, M., Riolo, R., Axelrod, R., 2001. The role of social structure in the maintenance of cooperative regimes. Rationality and Society 13, 5-32.

DAWES, R., 1980. Social dilemmas. Annual Review of Psychology 31, 169-193.

DELGADO, J., 2002. Emergence of social conventions in complex networks. Artificial Intelligence 141, 171-185.

EBEL, H., Davidsen, J., Bornholdt, S., 2002. Dynamics of social networks. Complexity 8 (2), 24-27.

EGUILUZ, V. M., Zimmermann, M. G., Cela-Conde, C., San Miguel, M., 2005. Cooperation and emergence of role differentiation in the dynamics of social networks. American Journal of Sociology 110, 977-1008.

FERRER, R., Solé, R., 2003. Statistical Physics of Complex Networks, Lecture Notes in Physics. Springer, Ch. Optimization in Complex Networks.

FLACHE, A., 2001. Individual risk preferences and collective outcomes in the evolution of exchange networks. Rationality and Society 13 (3), 304-348.

FLACHE, A., Hegselmann, R., 1999a. Altruism vs. self-interest in social support. computer simulations of social support networks in cellular worlds. Advances in Group Processes 16, 61-97.

FLACHE, A., Hegselmann, R., 1999b. Rationality vs. learning in the evolution of solidarity networks: A theoretical comparison. Computational and Mathematical Organization Theory 5 (2), 97-127.

GILBERT, N., Troitzsch, K., 1999. Simulation for the Social Scientist. Open University Press.

GOULD, R., 1993a. Collective action and network structure. American Sociological Review 58, 182-196.

GOULD, R., 1993b. Trade cohesion, class unity and urban insurrection: Artisanal activism in the paris commune. American Journal of Sociology 98, 721-754.

HEGSELMANN, R., 1998. Experimental ethics. De Gruyter, Berlin, Ch. A Computer Simulation of Classes, Cliques and Solidarity, pp. 298-320.

HOMANS, G.C., 1974. Social Behavior. Its Elementary Forms. New York: Harcourt Brace Jovanovich.

KAMADA, T., Kawai, S., 1989. An algorithm for drawing general undirected graphs. Information Processing Letters 31, 7-15.

KOMTER, A., 1996. Reciprocity as a principle of exclusion. Sociology 30 (2), 229-316.

KRAPIVSKY, P., Redner, S., 2001. Organization of growing random networks. Physical Review E 63.

KRAPIVSKY, P., Redner, S., Leyvraz, F., 2000. Connectivity of growing random networks. Physical Review Letters 85, 4629-4632.

LINDENBERG, S., 2001. Handbook of Sociological Theory. Kluwer, New York, Ch. Social Rationality versus Rational Egoism, pp. 635-668.

MACY, M., Flache, A., 2002. Learning dynamics in social dilemmas. Proceedings of the National Academy of Sciences 99 (10), 7229-7236.

MACY, M. W., Sato, Y., 2002. Trust, cooperation, and market formation in the U.S. and Japan. Proceedings of the National Academy of Sciences 99, 7214-7220.

MARK, N., 1998. Beyond individual differences: Social differentiation from first principles. American Sociological Review 63, 309-330.

MARWELL, G., Oliver, P., Prahl, R., 1988. Social networks and collective action: a theory of the critical mass. American Journal of Sociology 94, 502-534.

PODOLNY, J., Page, L., 1998. Network forms of organizations. Annual Review of Sociology 24, 57-76.

PUJOL, J., Sangüesa, R., Delgado, J., July 2002. Extracting reputation in multi-agent systems by means of social network topology. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems AAMAS-02. ACM, Bologna, pp. 467-474.

RAUB, W., Weesie, J., 1990. Reputation and efficiency in social interaction: an example of network effects. American Journal of Sociology 96, 626-654.

ROBINS, G., Pattison, P., Woolcock, J., 2005. Small and other worlds: Global network structures from local processes. American Journal of Sociology 110, 894-936.

SIMON, H., 1982. Models of Bounded Rationality. MIT Press.

SNIJDERS, T. A., 2001. Sociological Methodology. Basil Blackwell, Boston and London, Ch. The Statistical Evaluation of Social Networks Dynamics, pp. 361-395.

TODD, P., Gigerenzer, G., 2003. Bounding rationality to the world. Journal of Economic Psychology 24, 143-165.

WALSH, T., July 1999. Search in a small world. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'99). San Francisco, pp. 1172-1177.

WALSH, T., August 2001. Search on high degree graphs. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'01). Seattle, pp. 266-271.

WATTS, D., 1999. Network dynamics and the small world phenomenon. American Journal of Sociology 105 (2), 493-527.

WATTS, D., Strogatz, S., 1998. Collective dynamics of 'small-world' networks. Nature 393, 440-442.

WOOLDRIDGE, M., Jennings, N., 1995. Intelligent agents: theory and practice. Knowledge Engineering Review 10, 115-152.

YAMAGISHI, T., Cook, K. S., Watabe, M., 1998. Uncertainty, trust, and commitment formation in the United States and Japan. American Journal of Sociology 104, 165-194.

YOUNGER, S., 2004. Reciprocity, normative reputation, and the development of mutual obligation in gift-giving societies. Journal of Artificial Societies and Social Simulation 7 (1).

----


© Copyright Journal of Artificial Societies and Social Simulation, [2005]