Introduction

As we navigate social media platforms, we are free to customise our networks according to our individual needs: we choose to connect with users we like while we ignore others, we follow ‘Influencers’ that inspire us, and we selectively share and repost content. Combined with curated News Feeds, selective attention to and sharing of content has been associated with the spreading of digital misinformation (Del Vicario et al. 2016a) and false news stories (Vosoughi et al. 2018). Over the past decade, several researchers have investigated the spreading and retention of misinformation and false news on social media platforms (Bakshy et al. 2015; Bessi et al. 2015; Del Vicario et al. 2016a; Starbird et al. 2014) and their implications for, e.g., the polarisation of opinions (Bessi et al. 2016; Sikder et al. 2020). However, the scientific community still lacks clear answers to fundamental questions relating to the general prevalence of misinformation and false news and their effects on individuals (Lazer et al. 2018).

To understand the spreading of misinformation and false news, recent work has investigated the impact of echo chambers on digital misinformation. Echo chambers are enclosed epistemic systems in which like-minded individuals reinforce their pre-existing beliefs (Madsen et al. 2018). The enclosing nature of echo chambers has been shown to induce intolerance towards opposing views (Takikawa & Nagayoshi 2017) and to mislead public and political discourse (Jasny et al. 2015; Jasny & Fisher 2019), and quantitative analyses suggest that echo chambers may contribute to the spread of misinformation (Del Vicario et al. 2016a; Törnberg 2018). Considering these findings, investigating how echo chambers emerge on social media might offer an important opportunity for understanding and potentially counteracting the occurrence of digital misinformation.

Importantly, although some social media users might ‘live’ in echo chambers, using social media does not necessarily imply restricted exposure to information. Compared to non-users, the average social media user experiences more diverse content (Newman et al. 2017a), and it has been suggested that the majority of social media users do not necessarily self-select into echo chambers (Haw 2020). However, the formation of social ties is influenced by social network algorithms, e.g., via selective user recommendations (Chen & Syu 2020), which can limit a user’s exposure to other people with different beliefs or preferences. Interestingly, recent theoretical work has suggested that echo chambers can also improve individual access to information via optimising the allocation of information resources (Jann & Schottmüller 2020). Overall, these findings further highlight the importance of clarifying how echo chambers emerge on social media and raise questions about the value-free nature of echo chambers: What factors lead to an echo chamber causing beneficial vs. unfavourable outcomes?

In the present work, we formally study echo chamber formation within simulated populations of social media users. Extending previous work (Madsen et al. 2017; Madsen et al. 2018; Sasahara et al. 2020), we specifically focus on echo chamber formation as a consequence of a single interaction between generations of network users (contribution 1). As a motivating example, consider a person who shares a tweet with their friends. Subsequently, each of their friends retweets the initial tweet. Assuming that each user has 100 friends who share no social ties with any other user’s friends, the ‘second generation’ of friends with access to the initial tweet already numbers 10,000. If this process is repeated for only a few generations of users, a single initial piece of information can permeate rapidly through a social network without requiring repeated communications between individual users. Considering this ability to rapidly spread information on social media, we see a single-interaction model of echo chamber formation as an important contribution expanding previous models focusing on repeated interaction and network pruning over time (see also Lorenz 2007; Sikder et al. 2020).
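
For readers who want to trace the arithmetic, here is a hypothetical back-of-the-envelope calculation (plain Python; an illustration of the example above, not part of the model itself):

```python
# Hypothetical reach per generation when every user has f friends and
# friend sets do not overlap across users (assumptions from the example above).
f = 100  # friends per user
for generation in range(1, 4):
    print(f"generation {generation}: {f ** generation:,} users can see the tweet")
# generation 2 alone already comprises 100**2 = 10,000 users
```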

In addition to focusing on a single interaction, our model takes into account that social media users might selectively integrate information from a communicating source based on their perception of that source’s credibility. As such, our belief updating mechanism (see Section 2) accounts for the influence of users’ credibility perceptions of each other during belief formation (contribution 2). To isolate the influence of credibility, we contrast two hypothetical populations of agents. The first one (‘social’ agents) evaluates the credibility of a source by comparing the source’s communication with the beliefs of their friends (i.e. social network connections / ties). The second control group (‘asocial’ agents) samples random credibility estimates for a communicating source during belief formation. We test the robustness of our network setup across a wide range of parameter combinations varying the number of social ties per user (i.e. the connectivity density of the network; robustness check 1), the epistemic authority of users (robustness check 2), and the percentage of users sharing their beliefs with their peers (robustness check 3). Overall, we hope that our model simulations will help to better understand potentially sufficient causes of echo chambers in social networks.

Background

Echo chambers

To investigate when and how echo chambers emerge, it is important to explore their causes. These might be rooted in psychological biases: previous analyses of echo chambers and their impact on digital misinformation identified confirmation bias—seeking information confirming one’s prior beliefs (Nickerson 1998)—and social influence—people’s tendency to align their behaviour with the demands of their social environment (Kelman 1958)—as key driving factors of echo chamber formation (Del Vicario et al. 2016a; Sikder et al. 2020; Starnini et al. 2016). For example, a recent quantitative analysis showed that social influence, combined with a preference for forming connections with similar peers and abandoning dissimilar social ties, results in rapid echo chamber formation after repeated interaction (Sasahara et al. 2020). Additionally, work in statistical physics has shown that confirmation bias induces clustering of like-minded individuals (i.e. echo chambers) and proliferation of opinions (Ngampruetikorn & Stephens 2016).

The above findings might be explained by the fact that social influence and confirmation bias lead to selective avoidance of information challenging one’s prior beliefs, and consequently, limited access to cross-cutting content on social media platforms such as Facebook (Bakshy et al. 2015; Henry et al. 2011; Ngampruetikorn & Stephens 2016). While selective rewiring can thus contribute to the formation of echo chambers, it is important to point out that it can also foster cooperative behaviours. For example, it has been shown that cooperative behaviours are more likely to evolve (Righi & Takács 2014; Yamagishi et al. 1984) and survive (Santos et al. 2006) in networks where individuals can self-select and prune their social ties. Along with psychological biases, it has been argued that cognitive differences between individuals might induce echo chambers (Barkun 2013). Overall, these findings suggest that both psychological variables and cognitive variability among individual agents might be important requirements for the formation of echo chambers.

Aiming to clarify the necessity of psychological variables and heterogeneity, recent simulation-based work has investigated echo chamber formation in an idealised population of homogeneous rational (i.e. Bayesian) agents engaging in repeated interaction and with a preference for like-minded others (Madsen et al. 2018). Results provided a formal argument for the inherent susceptibility of social networks towards echo chamber formation despite the absence of cognitive differences among agents. In other words, repeated interaction in conjunction with a bias against dissimilar peers was sufficient for the formation of echo chambers. These findings are in line with earlier work showing that echo chambers inevitably emerge if users engaging in repeated interaction preferentially rewire or prune their social ties to avoid contact with dissimilar peers (Henry et al. 2011).

Importantly, while previous work on echo chambers (Madsen et al. 2018; Sasahara et al. 2020; Sikder et al. 2020) and opinion dynamics (Hegselmann & Krause 2002; Lorenz 2007) shows that repeated interaction and pruning of social ties with dissimilar others solidifies echo chambers, the present work investigates whether the initial way in which information arrives into the network already ‘skews the pitch’ prior to repeated interaction. This is motivated by theoretical (Bikhchandani et al. 1992) and simulation-based (Pilditch 2017) work on information cascades, which has suggested that single interactions between agents can be sufficient for the formation of echo chambers and other maladaptive population outcomes.

Bayesian source credibility model

The credibility of a source plays an important role when integrating their communications with our own observations and prior expectations (Cuddy et al. 2011; Fiske et al. 2007). Moreover, source credibility plays a critical role in persuasion and argumentation theory, especially in the context of politics (Cialdini 1993; Housholder & LaMarre 2014; Robinson et al. 1999), which has become increasingly influenced by online communication systems (Bail 2016). Both heuristic accounts, such as the heuristic-systematic model (HSM; Chaiken 1999), and dual-process theories, including the elaboration-likelihood model (ELM; Petty & Cacioppo 1986), have been used to study the influence of credibility on persuasion, showing a generally positive impact (Chaiken & Maheswaran 1994).

Recently, research has investigated the influence of source credibility from a Bayesian perspective, meaning that credibility is modelled as an analytic cue that moderates belief updating (Bovens & Hartmann 2003; Hahn et al. 2009; Harris & Hahn 2009). This Bayesian source credibility model (BSCM) proposes that a person’s prior belief in a hypothesis is represented as a subjective probability \(P(h)\) taking values between 0 and 1. Upon observing a source’s communication \(rep\), the Bayesian source credibility framework posits that the posterior probability of a hypothesis, \(P(h|rep)\), is given by the normalised product of the likelihood \(P(rep|h)\) and the prior \(P(h)\):

\[P(h|rep) = \frac{P(h)P(rep|h)}{\sum_{h'\in \mathcal{H}}P(rep|h')P(h')} \] \[(1)\]
where \(\mathcal{H}\) corresponds to the set of hypotheses \(\{h, \neg h\}\). Selecting a binary set \(\mathcal{H}\) was an intended simplification allowing us to account for the dichotomous nature of several prominent topics affected by echo chamber formation (e.g., Brexit with two-party politics). BSCM extends previous heuristic accounts such as HSM or ELM by providing a quantitative, normative framework for modelling the impact of source credibility during belief updating, and it has been shown to provide a good account of people’s judgments in argumentation (Harris et al. 2016; Madsen 2016). Specifically, BSCM represents the credibility of a source by two orthogonal quantities: (1) the perceived probability that the source is an expert \(P(e)\) vs. not an expert \(P(\neg e) = 1 - P(e)\) and (2) the perceived probability of the source being trustworthy \(P(t)\) vs. not trustworthy \(P(\neg t) = 1 - P(t)\). In words, \(P(e)\) refers to the probability of the source having accurate information, whilst \(P(t)\) refers to the probability that the source intends to pass on accurate information (to the best of their ability). \(P(e)\) and \(P(t)\) moderate the influence of a source’s communication signal \(rep\) on a target’s initial belief in the hypothesis \(P(h)\) during updating (see Figure 1). Both expertise and trustworthiness have received independent support in recent work showing that people account for these properties of a source when updating their prior beliefs based on the source’s belief communications (Hawthorne-Madell & Goodman 2019).

In line with the initial introduction of the BSCM model (Hahn et al. 2009; Harris et al. 2016), we define the likelihood of a source’s communication \(rep\), assuming that the hypothesis is true (\(h\)) and under consideration of the target’s perceived expertise \(P(e)\) and trustworthiness \(P(t)\) of the communicating source, as:

\[P(rep|h) = \sum_{e'\in E}\sum_{t'\in T}P(rep|h, e',t')P(e')P(t') \] \[(2)\]
here, we marginalise over the presence vs. absence of a source’s expertise \(E\) = \(\{e,\neg e\}\) and trustworthiness \(T\) = \(\{t, \neg t\}\). Similarly, if the hypothesis is false (\(\neg h\)), the likelihood of a source communicating \(rep\) is given by:
\[P(rep|\neg h) = \sum_{e'\in E}\sum_{t'\in T}P(rep|\neg h, e',t')P(e')P(t') \] \[(3)\]

In our agent-based simulations, we adapted the established BSCM to furnish simulated network users with a cognitive architecture for belief formation that allows them to incorporate credibility perceptions of others (further details on the computation of likelihoods follow in Section 3). Importantly, by using BSCM as the mechanism for belief formation, our agent-based simulations depart from previous ‘bounded confidence’ models in which ties between agents were interrupted once the difference between the beliefs of any two paired agents surpassed a critical threshold (Bolletta & Pin 2020; Hegselmann & Krause 2002, 2005; Lorenz 2007; Madsen et al. 2018; Sasahara et al. 2020). While such bounded confidence models have been successful accounts of echo chamber formation following pruning of network ties between dissimilar agents, they have not yet provided a full description of how interactions between agents with very different beliefs can contribute to the formation of echo chambers. By including the perceived credibility of sources in our simulated network, we hope to further address this question, allowing for the exchange of information between agents irrespective of differences in their beliefs. The next section introduces the details of our agent-based model.
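
To make the updating scheme concrete, the following minimal Python sketch implements Equations 1-3 for the binary hypothesis space \(\{h, \neg h\}\). This is our own illustration (the model itself was implemented in NetLogo; see Model documentation); the conditional probabilities \(P(rep|h', e', t')\) are supplied as a lookup table, since their concrete values are only specified later (Table 2):

```python
def likelihood(cpt, h_val, p_e, p_t):
    """Equations 2-3: marginalise P(rep | h_val, e', t') over expertise and trust.

    cpt[(h_val, e, t)] holds P(rep | h_val, e, t); p_e and p_t are the perceived
    expertise P(e) and trustworthiness P(t) of the communicating source.
    """
    total = 0.0
    for e in (True, False):
        for t in (True, False):
            weight = (p_e if e else 1 - p_e) * (p_t if t else 1 - p_t)
            total += cpt[(h_val, e, t)] * weight
    return total


def bscm_posterior(prior_h, cpt, p_e, p_t):
    """Equation 1: posterior P(h | rep) over the binary set {h, not-h}."""
    numerator = prior_h * likelihood(cpt, True, p_e, p_t)
    denominator = numerator + (1 - prior_h) * likelihood(cpt, False, p_e, p_t)
    return numerator / denominator
```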

Agent-Based Model

Model setup

We built an idealised social network with \(N\) = 1000 agents. At the start of each run of the model (i.e. a single model simulation), agents were randomly assigned to x-y coordinates in a two-dimensional environment. An agent’s location did not change during subsequent interaction. Following spatial allocation, each agent formed \(n\) (static) social ties with their nearest network neighbours. Ties represent the social network connections between users and allowed for the communication of beliefs between agents. Ties persisted during a model simulation irrespective of changes in agents’ beliefs. Distance between agents was measured by Euclidean distance, a proxy for relational proximity in social networks (Duggins 2017; Pilditch 2017).

The number of social ties per agent (i.e. the network’s connectivity density) was manipulated between model simulations (robustness check 1; see Table 1). This allowed us to test whether the formation of echo chambers changes as a function of the number of ‘generations’ it takes for information to fully permeate a network (e.g., 10% connectivity means on average it will take 10 generations of communication to permeate the entire network). The more ties (connected friends/users) an agent has, the more immediately an agent will see information appearing on the network (and thus the shorter/faster the cascade).

Table 1: Robustness checks.
Name | Description | Levels
Connectivity density (%) | (Ties per Agent / Total Number of Agents) \(\times\) 100 | 0.5, 1.0, 1.5, ..., 50.0
Expertise strength \(\tau\) | Varying levels of a source’s expertise strength | 0.00, 0.20, 0.40
P(Declaration) | Probability of making a belief public | 0.10, 0.50, 1.00

In the BSCM framework, there are several ways of initialising prior beliefs \(P(h)\) in a hypothesis. Following previous work (Madsen et al. 2018; Madsen & Pilditch 2018), agents in our network sampled prior beliefs \(P(h)\) from univariate Gaussians with a neutral mean \(\mu\) = 0.5. We tested different values of \(\sigma^{2}\). To ensure that our simulation results were not a consequence of a Gaussian prior, we also explored settings in which beliefs were sampled from a uniform distribution, which is a common choice in the related complexity literature (Hegselmann & Krause 2002; Lorenz 2007). None of these variations significantly influenced our model results, and in the remainder of this paper we focus on a setting in which agents sampled \(P(h)\) from \(\mathcal{N}(\mu = 0.5, \sigma^{2} = 0.2)\).

In addition to sampling prior beliefs, each agent \(v\) had their own subjective expertise \(e^{v}_{subj.}\) and trustworthiness \(t^{v}_{subj.}\) scores. \(e^{v}_{subj.}\) and \(t^{v}_{subj.}\) were later used for the calculation of the perceived expertise \(P(e)\) and perceived trustworthiness \(P(t)\) estimates of a communicating source. For convenience, \(e^{v}_{subj.}\) and \(t^{v}_{subj.}\) were also sampled from univariate Gaussians with \(\mu\) = 0.5 and \(\sigma^2\) = 0.20. For all three quantities, distributions were truncated to \([0, 1]\) to ensure that agents sampled values in \(0 \leq x \leq 1\). Figure 2 shows an illustration of an example network setup prior to the start of a simulation, and Algorithm 1 further summarises the steps involved in setting up our network.

Algorithm 1 Setup network
1:  procedure PLACE AGENTS
2:      Create N agents
3:      for v = 1 to N do
4:          set v position random x-y coordinate
5:      end for
6:  end procedure
7:  procedure SETUP PRIORS AND TIES
8:      for v = 1 to N do
9:          \(P(h)[v] \gets X \sim N (\mu,\sigma^2)\)
10:         \(e_{subj.}[v]\gets X \sim N (\mu,\sigma^2)\)
11:         \(t_{subj.}[v]\gets X \sim N (\mu,\sigma^2)\)
12:         create links with n nearest neighbours       ⪧ based on Euclidean distance
13:     end for
14: end procedure
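
The following Python sketch mirrors Algorithm 1, assuming a unit square for agent placement and directed nearest-neighbour ties; function and variable names are ours, not the authors’ NetLogo code:

```python
import numpy as np


def sample_truncated_normal(rng, mu, sigma, size):
    """Rejection-sample from N(mu, sigma^2), truncated to [0, 1]."""
    samples = np.empty(size)
    filled = 0
    while filled < size:
        draws = rng.normal(mu, sigma, size)
        draws = draws[(draws >= 0.0) & (draws <= 1.0)]  # keep in-range draws only
        take = min(size - filled, draws.size)
        samples[filled:filled + take] = draws[:take]
        filled += take
    return samples


def setup_network(N=1000, n_ties=10, mu=0.5, sigma2=0.2, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(sigma2)
    positions = rng.uniform(0.0, 1.0, size=(N, 2))        # random x-y coordinates
    prior_h = sample_truncated_normal(rng, mu, sigma, N)  # prior beliefs P(h)
    e_subj = sample_truncated_normal(rng, mu, sigma, N)   # subjective expertise
    t_subj = sample_truncated_normal(rng, mu, sigma, N)   # subjective trustworthiness
    # Static ties: each agent links to its n nearest neighbours (Euclidean distance).
    distances = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(distances, np.inf)                   # exclude self-links
    ties = np.argsort(distances, axis=1)[:, :n_ties]
    return positions, prior_h, e_subj, t_subj, ties
```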

Each agent was furnished with the same possible behaviours and cognitive functions. At each time step of a model simulation, an agent (the communication target) first looked for a communication \(rep\) \(\in\) \(\{h_{support}, h_{reject}\}\) from another agent (the source), which could either communicate support of the hypothesis (\(h_{support}\)) or disapproval of the hypothesis (\(h_{reject}\)). Whether a source communicated support or disapproval was based on the source’s own belief declaration \(\in\) \(\{h, \neg h\}\) at the previous time step. Communication between source and target was only possible if both agents had formed a social tie prior to the start of a simulation. Considering the source’s communication, the communication target then revised their initial belief \(P(h)\) using the BSCM architecture outlined above. In standard BSCM, the orthogonal nature of a source’s expertise and trust is typically operationalised such that trust being high \((t)\) or low \((\neg t)\) determines the direction of belief revision (i.e. low trust makes you revise your beliefs in the opposite direction to high trust), whilst expertise moderates the strength (size) of the revision (Harris et al. 2016; Madsen & Pilditch 2018). To accommodate different levels of expertise, we included an additional parameter \(\tau\) (‘expertise strength’) determining how strongly the presence \((e)\) vs. absence \((\neg e)\) of a source’s expertise influenced a communication target during belief revision (robustness check 2; see Table 1). Conceptually, \(\tau\) can be thought of as capturing varying levels of expertise or epistemic authority (Walton 2010): a source with stronger expertise is going to exert larger influence on a communication target’s belief than a source with lower expertise. Here, we explored three different levels of expertise: no expertise (\(\tau\) = 0.00), medium expertise (\(\tau\) = 0.20), and high expertise (\(\tau\) = 0.40).

Table 2 shows the resulting conditional probability table, which specifies how the \(P(rep|h, e^{'},t^{'})\) and \(P(rep|\neg h, e^{'},t^{'})\) components of the likelihoods in Equations 2 and 3 were computed. To ensure that the direction of the influence of expertise strength \(\tau\) matched a communication target’s prior belief (i.e. towards 1 if \(P(h)\) > 0.5 and towards 0 if \(P(h)\) < 0.5), we flipped the impact of expertise strength based on the target’s \(P(h)\). Similar to an indicator function, \(\textbf{I}\) thus returned 1 for agents having a prior belief \(P(h)\) > 0.5 and -1 for agents having a prior belief \(P(h)\) < 0.5.

Table 2: Conditional Probability Table.
 | e, t | \(\neg\)e, t | e, \(\neg\)t | \(\neg\)e, \(\neg\)t
h | 0.5 + I\(_{P(h)>0.5}\) \(\times\) \(\tau\) | 0.5 | 0.5 - I\(_{P(h)>0.5}\) \(\times\) \(\tau\) | 0.5
\(\neg\)h | 1 - (0.5 + I\(_{P(h)>0.5}\) \(\times\) \(\tau\)) | 0.5 | 1 - (0.5 - I\(_{P(h)>0.5}\) \(\times\) \(\tau\)) | 0.5
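
Translating Table 2 into code, here is a sketch under the reading above (absent expertise yields an uninformative 0.5, consistent with the observation in the Results that \(\tau\) = 0.00 removes a source’s impact, while low trust flips the sign of the \(\tau\) effect). The final lines build the lookup table consumed by the bscm_posterior sketch shown earlier:

```python
def cpt_entry(h_true, expert, trust, tau, indicator):
    """P(rep | h', e', t') following Table 2.

    indicator is +1 if the target's prior P(h) > 0.5 and -1 otherwise.
    """
    if not expert:
        p_rep_given_h = 0.5                    # no expertise: uninformative signal
    elif trust:
        p_rep_given_h = 0.5 + indicator * tau  # expert and trustworthy
    else:
        p_rep_given_h = 0.5 - indicator * tau  # expert but untrustworthy: flipped
    # The not-h row of Table 2 is the complement of the h row.
    return p_rep_given_h if h_true else 1.0 - p_rep_given_h


tau, indicator = 0.20, +1  # example: medium expertise strength; target prior above 0.5
cpt = {(hv, e, t): cpt_entry(hv, e, t, tau, indicator)
       for hv in (True, False) for e in (True, False) for t in (True, False)}
```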

To complete BSCM revision, a communication target still needed to compute the perceived expertise \(P(e)\) and trustworthiness \(P(t)\) components in the likelihoods outlined in Equations 2 and 3. In our model, each agent computed \(P(e)\) and \(P(t)\) of a communicating source based on the beliefs of the \(n\) other agents with whom they formed social ties. Specifically, following observation of a source’s communication \(rep\) \(\in\) \(\{h_{support}, h_{reject}\}\), a target agent (receiver) computed the perceived expertise and trustworthiness of the communicating source based on the number of agents in the receiver’s network \(n\) that supported the same hypothesis as communicated by the source (e.g., \(h\) if the source communicated \(h_{support}\)) and the number of agents in the network entertaining the opposite hypothesis from the source’s communication (e.g., \(\neg h\) if the source communicated \(h_{support}\)). Formally, for a source communicating \(rep\) = \(h_{support}\), this can be written as:

\[P(e) = \frac{\sum_{v=1}^{n_h}e^{v}_{subj.}}{\sum_{v=1}^{n_h}e^{v}_{subj.} + \sum_{v=1}^{n_{\neg h}}e^{v}_{subj.}} \] \[(4)\]
\[P(t) = \frac{\sum_{v=1}^{n_h}t^{v}_{subj.}}{\sum_{v=1}^{n_h}t^{v}_{subj.} + \sum_{v=1}^{n_{\neg h}}t^{v}_{subj.}} \] \[(5)\]
where \(e^{v}_{subj.}\) and \(t^{v}_{subj.}\) correspond to the subjective expertise and trustworthiness values each agent sampled prior to the start of simulations, \(n_h\) corresponds to the subset of agents in a receiver’s network \(n\) supporting \(h\), and \(n_{\neg h}\) to the subset of agents in a receiver’s network supporting the alternative hypothesis \(\neg h\), respectively. If a source’s communication was \(rep\) = \(h_{reject}\), the sum over the subset of agents entertaining \(\neg h\) was placed in the numerator of Equations 4 and 5.
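
A minimal Python sketch of Equations 4 and 5, assuming that only neighbours who have already declared a belief enter the sums; the neutral fallback for an all-undeclared neighbourhood is our own assumption, as the paper does not specify this edge case:

```python
import numpy as np


def perceived_credibility(e_subj, t_subj, neighbour_declared_h, rep_supports_h):
    """Equations 4-5: perceived P(e) and P(t) of a communicating source.

    e_subj, t_subj:       subjective scores of tied neighbours that already declared
    neighbour_declared_h: boolean array, True where a neighbour declared h
    rep_supports_h:       True if the source communicated h_support
    """
    agrees = neighbour_declared_h == rep_supports_h  # neighbours matching the communication
    total_e, total_t = e_subj.sum(), t_subj.sum()
    if total_e == 0 or total_t == 0:
        return 0.5, 0.5  # assumption: neutral credibility if no neighbour has declared
    return e_subj[agrees].sum() / total_e, t_subj[agrees].sum() / total_t
```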

To isolate the influence of incorporating credibility estimates (i.e. of evaluating a source’s communication against the beliefs of one’s network peers), we ran simulations separately for two different agent populations. In the first population (‘social’ agents), agents computed credibility estimates by means of Equations 4 and 5. In the second population (‘asocial’ agents), which functioned as our control group, agents sampled the perceived \(P(e)\) and \(P(t)\) of a communicating source from a uniform distribution over \([0,1]\). As such, agents in the control group computed stochastic credibility estimates for a source irrespective of the beliefs of their network peers.

Having specified all components of the likelihoods, a communication target (receiver) revised their initial belief by means of Equation 1 and declared for either \(h\) or \(\neg h\) based on a deterministic decision rule:

\[belief = \begin{cases} h \text{ if } P(h|rep) > 0.5 \\ \neg h \text{ if } P(h|rep) < 0.5. \end{cases}\] \[(6)\]
If \(P(h|rep)\) = 0.5, an agent declared either belief with a probability of 50%. In a third step, the declared belief (i.e. \(h\) or \(\neg h\)) was then made public based on the P(Declaration) probability, which was manipulated between simulations (robustness check 3; see Table 1). For example, a P(Declaration) of 1 means that all agents made their beliefs public, while 0.10 means that each agent made their opinion public with a probability of 10%. Including different values for P(Declaration) was motivated by recent findings showing that most social media users do not discuss their political beliefs on social media, but mainly focus on exchanging shared hobbies and passions (Newman et al. 2017b). As such, we wanted to ensure that our simulation results are robust across social networks varying in terms of the percentage of users discussing their beliefs with peers. We thus explored three levels of P(Declaration) as a proxy for the percentage of people exchanging their beliefs about a particular topic.
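
The decision rule in Equation 6 and the subsequent P(Declaration) step can be sketched as follows (names are illustrative):

```python
import random


def declare(posterior_h, p_declaration, rng=random):
    """Equation 6 plus the P(Declaration) step.

    Returns (belief, is_public): belief is True for h and False for not-h;
    is_public indicates whether the agent announces the belief to its ties.
    """
    if posterior_h > 0.5:
        belief = True
    elif posterior_h < 0.5:
        belief = False
    else:
        belief = rng.random() < 0.5  # exact tie: either belief with 50% probability
    is_public = rng.random() < p_declaration
    return belief, is_public
```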

Simulations

Simulations were initiated through a random agent. Random in this case means that the first agent randomly communicated either \(h_{support}\) or \(h_{reject}\) to the other agents with whom they formed social ties. Due to this stochastic process, on average, half of the first generation of communication targets receiving input from the random agent should declare for the hypothesis \(h\) while the other half should declare for \(\neg h\) after BSCM integration. The number of agents the random agent communicated to was determined by the connectivity density. Connectivity values above 50% were omitted, as this would have enabled every other agent to be connected to the random agent in the first generation, precluding the occurrence of a cascade. To improve the readability of plotted example networks (Figures 2 and 6), the random agent was placed in the centre of each simulation (i.e. central green x-y coordinate).

After revising their prior beliefs, the first generation of initially neutral agents (those that received input from the random agent) made their beliefs public based on the manipulated P(Declaration) value. Their communication targets (i.e. the second generation) then used the communications \(rep\) \(\in\) \(\{h_{support},h_{reject}\}\) from the first-generation agents as input for their own belief revision, following the same procedure. Algorithm 2 shows the basic steps involved in a single instance of belief revision (i.e. from one generation to the next). Here, source refers to an agent from the previous generation that already publicly declared a belief (i.e. \(h\) or \(\neg h\)), and the connections \(n_{source}\) of the source refer to the potential communication targets of the next generation. As we investigated whether a single pass-through of information (i.e. a single interaction) was sufficient for the formation of echo chambers, we did not allow for repeated interaction, meaning that agents could not qualify as communication targets after declaring for a belief. The process of transmitting communications continued down successive generations until the network was either completely saturated (i.e. all agents had declared for a belief) or the number of believers (i.e. \(h\) or \(\neg h\)) did not change for two consecutive time periods.

Algorithm 2 Updating beliefs
1:  procedure SOURCE SELECTS COMMUNICATION TARGET
2:      for v = 1 to n\(_{source}\) do       ⪧ n\(_{source}\) = source's connections
3:          if v = neutral then       ⪧ check if target did not already declare for a belief
4:              belief[v] = BSCM(rep, n\(_{target}\))       ⪧ n\(_{target}\) = target's connections
5:              if random-uniform < P(Declaration) then
6:                  publicly declare for either \(h\) or \(\neg h\) based on \(P(h|rep)\)
7:              end if
8:          end if
9:      end for
10: end procedure
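
A compact Python sketch of this generational cascade (one reading of Algorithm 2, with helper names of our own). We assume that agents who revised but stayed silent still count as having declared, and therefore cannot be re-targeted, in line with the no-repeated-interaction rule above:

```python
def run_cascade(ties, revise_belief, p_declaration, seed_agent, rng):
    """Single-pass cascade: every agent revises at most once and only
    publicly declared agents communicate onwards.

    ties:          dict mapping agent -> iterable of tied neighbours
    revise_belief: callable (target, source_supports_h) -> bool belief
    """
    public = {seed_agent: rng.random() < 0.5}  # seed communicates h_support/h_reject at random
    private = {}                               # revised silently; cannot propagate
    frontier = [seed_agent]                    # sources of the current generation
    while frontier:
        next_frontier = []
        for source in frontier:
            for target in ties[source]:
                if target in public or target in private:
                    continue                   # no repeated interaction
                belief = revise_belief(target, public[source])
                if rng.random() < p_declaration:
                    public[target] = belief    # a source for the next generation
                    next_frontier.append(target)
                else:
                    private[target] = belief
        frontier = next_frontier
    return public, private
```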

To ensure that our simulations provided stable results, each possible combination of robustness checks (100 (connectivity density) \(\times\) 3 (expertise strength \(\tau\)) \(\times\) 3 (P(Declaration))) was run independently 50 times for each of the social agent population and the control group. To further test the robustness of our simulations, we varied the size \(N\) of networks in separate simulations (\(N\) = \([100, 500, 1000, 2000]\)). Additionally, following Madsen et al. (2018), we contrasted our random network setup with a scale-free network (Amaral et al. 2000), which is commonly used for the study of social networks (Duggins 2017; van der Maas et al. 2020). Overall, varying network setup and size showed that the results for the present setup were only directly dependent on the network size if P(Declaration) was so small that the network fractured (i.e. no cascade occurred; see Section 4), and, similar to Madsen et al. (2018), switching from a random to a scale-free network did not result in a substantial aggravation or reduction of echo chamber formation. Our model interface allows for intuitive changes to all mentioned robustness checks (and more), and if of interest to the reader, these can be explored by downloading our code (see Model documentation for details).

Results

To measure echo chambers across simulations, we first measured the global proportions of beliefs across the whole network (i.e. the relative number of agents with belief \(h\) compared to agents entertaining belief \(\neg h\)). Based on previous simulation-based work, we expected that global proportions would consistently approximate 50/50 across both agent populations (Pilditch 2017). This measure was necessary to ensure that echo chambers are not a by-product of a dominant network-wide belief. Following checks for possible network-wide belief confounds, our key dependent measure of echo chamber formation was the average percentage of like-minded neighbours an agent possessed (i.e. the local network similarity).

Formally, local network similarity corresponded to the average percentage of agents in the target’s direct network that shared the same belief as the target. For example, 50% means that, on average, agents had equal proportions for each belief type in their direct network \(n\), where direct network refers to the fraction of the whole network that is directly connected to an agent via social ties. As such, a higher percentage of agents sharing the same belief as the target is a proxy for a more severe closure of the target’s epistemic belief network. We expected that our population of social agents would show increased percentages of like-minded neighbours (i.e. echo chambers) for low connectivity density values. We did not expect clustering effects in the asocial population in which agents evaluated network peers’ credibility at random.
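
As a sketch, the measure can be computed as follows (our own illustration; agents whose direct network contains no declared believers are skipped, an assumption the text does not spell out):

```python
import numpy as np


def local_similarity(beliefs, ties):
    """Average percentage of like-minded direct neighbours (echo chamber measure).

    beliefs: dict agent -> bool (True = h) for all agents that declared a belief
    ties:    dict agent -> iterable of tied neighbours
    """
    percentages = []
    for agent, belief in beliefs.items():
        declared = [v for v in ties[agent] if v in beliefs]
        if not declared:
            continue  # no declared neighbours to compare against
        like_minded = sum(beliefs[v] == belief for v in declared)
        percentages.append(100.0 * like_minded / len(declared))
    return float(np.mean(percentages)) if percentages else float("nan")
```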

Figure 3 summarises the average findings from 50 independent runs for each of the 900 parameter combinations shown in Table 1 (100 (connectivity density) \(\times\) 3 (expertise strength \(\tau\)) \(\times\) 3 (P(Declaration))). As expected, the average global belief proportions of social agents approximated 50/50. A Wilcoxon rank sum comparison revealed that global belief proportions in the social agent population did not differ significantly from our asocial control group (\(W\) = 3636508, p = 0.882). Figure 4 further shows global proportions for each of the possible parameter combinations. Here, each measure for a specific parameter combination again corresponds to an average of 50 independent runs. Examining the different parameter combinations separately confirms the above results, suggesting that irrespective of varying expertise strength, connectivity density, or P(Declaration), proportions consistently approximated 50/50 (with some noise in the social population, especially for low connectivity values). Overall, this finding is important, as it ensures that subsequent clusters of like-minded others were not a function of a global bias towards either belief.

Next, we contrasted the average proportions of like-minded others between our social population (in which agents computed the perceived credibility of a source based on the support for vs. disapproval of the communication among their fellow network peers) and our asocial control group (in which credibility perceptions were randomly sampled from a uniform distribution). Again, results are based on 50 independent runs for each of the 900 parameter combinations. When combining clustering effects across all parameter combinations, the social population showed significantly larger proportions of clustering (mean = 51.51, SD = 5.46) than the asocial control population (mean = 49.76, SD = 1.96; \(W\) = 5836752, p < 0.001). While this finding suggests that considering peers’ beliefs during the computation of a source’s credibility affected echo chamber formation, it does not reveal how clustering relates to our robustness checks. To isolate the influence of each robustness check, we next looked at clustering effects independently for each parameter combination.

Figure 5a shows the average proportion of like-minded neighbours for our target social population. Here, we found that clustering increased as a function of increasing expertise strength \(\tau\), increasing P(Declaration), and decreasing connectivity density (x-axis). Importantly, to fully suppress the formation of echo chambers, the average network member must be connected to around 15-20% of the network, which is infeasible considering the size of real-world social networks, which can have several billion users. The reason for a reduced clustering effect given increased connectivity density is that increasing connectivity density increases agents’ access to information across the network (i.e. the beliefs of other agents). Thus, after reaching a connectivity density of around 15-20%, each agent had access to a significant proportion of the beliefs across the entire network, which reduced the formation of echo chambers.

The finding that an expertise strength of \(\tau\) = 0.00 (i.e. neutral) prevented the formation of echo chambers is a natural result of our model. Specifically, setting \(\tau\) to 0.00 reduced the communicative impact of a source to 0, irrespective of their perceived credibility (see Table 2). Consequently, a receiver was not influenced by the source’s communication \(rep\), which can conceptually be compared to disregarding the communication of a social media user that has no epistemic authority (e.g., no knowledge about the topic of discussion).

Figure 5b shows the results from our asocial agent population. If agents randomly computed credibility estimates of sources, thus ignoring the declared beliefs of their network peers when evaluating the impact of a communication, no clustering emerged, irrespective of the parameter combination. This finding highlights the implications of evaluating network peers based on credibility perceptions: while stochastic evaluation prevented echo chambers, evaluating a source’s credibility based on the support their communication found in one’s network was a key requirement for echo chamber formation in the present model. The results in the left panels of Figures 4a and 5a-b (P(Declaration) = 10%), showing stronger noise for global belief proportions and a reduced clustering effect for connectivity density values of 0.5%, can be fully attributed to fracturing of the network (i.e. the number of social ties was so small that no propagation of beliefs occurred). This outcome is a function of our model setup and has no implications for our findings. Specifically, for connectivity density values of 0.5% with \(N\) = 1000 agents, a single agent had 5 social ties. Combined with a P(Declaration) probability of 10% (meaning that on average only one out of 10 agents made their belief public and was thus able to communicate to others), the majority of simulations for this specific combination prevented information exchange between agents, and as such no belief propagation / cascade occurred. A larger population or larger propagation probabilities resolved this fracturing effect, as can be seen in the other two P(Declaration) panels.

To visualise the above findings, Figure 6 shows example outcomes of post-cascade belief proportions with 1% and 5% connectivity density. Panels a and c correspond to our social agent population, and panels b and d to the asocial control population. Red \((h)\) and blue \((\neg h)\) colours illustrate clusters of like-minded agents holding different beliefs. Specifically, as seen in Figures 6a and 6c, social agents formed two polarised clusters of like-minded agents, with limited belief exchange between clusters. Figures 6b and 6d show the results from the asocial agent population, which did not show any signs of echo chambers. Here, most agents connected to equal proportions of agents holding similar and dissimilar beliefs, which is illustrated by the absence of distinct colour patterns.

Discussion

Our work examined whether echo chambers emerge in a population of homogeneous, equally rational users that update their beliefs through a single interaction and under consideration of the credibility of communicating sources. Our results show that when agents evaluated the credibility of a communicating source by looking at how many of their fellow network peers supported vs. rejected the source’s communication (the ‘social’ agent population), echo chambers emerged inevitably as a result of single interactions with connectivity densities smaller than 15-20%. This result suggests that previously identified causes of echo chambers, including psychological biases and inter-individual differences in cognition, are not strictly necessary for the observation of echo chambers. Additionally, extending previous work (Hegselmann & Krause 2002; Lorenz 2007; Madsen et al. 2018; Sasahara et al. 2020), agents interacted irrespective of differences in their beliefs. As such, we showed that limiting interactions based on a bounded confidence threshold (Hegselmann & Krause 2002) or network pruning (Sasahara et al. 2020) was not necessary for the formation of echo chambers.

Moreover, our findings suggest that repeated interaction between agents may not be required to form echo chambers. More precisely, while results of previous models (e.g., Lorenz 2007; Madsen et al. 2017; Madsen et al. 2018) revealed that echo chambers emerged and strengthened as a consequence of repeated interaction between like-minded agents, we have shown here that a single interaction between generations of agents is sufficient to produce local echo chambers. Importantly, on average, each belief was equally represented in our simulations. Thus, our results further show that segregated groups were not a consequence of global dominance of either belief. We note that repeated interactions, as well as threshold adoptions and network pruning, are likely to further exacerbate the present echo chamber effects found after single cascades.

Overall, our findings illustrate that echo chambers, which might induce spreading and retention of misinformation (Vosoughi et al. 2018), conspiratorial thinking (Del Vicario et al. 2016a), and political polarisation (Bessi et al. 2016; Del Vicario et al. 2016b), are not necessarily caused by psychological biases, rewiring of social ties / network pruning, or repeated interactions between users. Rather, the lateral (i.e. peer-to-peer) transmission of information, in combination with limited access to information (i.e. low connectivity density) and a simple mechanism for evaluating the credibility of communicating sources based on the beliefs of fellow network peers, can be sufficient for echo chamber formation. The degree to which opinions were made public (P(Declaration)) mainly affected echo chamber formation if it was so low that it effectively fractured the functional message passing around the network. Additionally, the magnitude of expertise strength modulated the influence of credibility, resulting in increased echo chamber effects for higher levels of expertise. This suggests that being friends with users having strong knowledge of a topic might exacerbate the formation of echo chambers. Given that the present simulations included rational Bayesian agents, it is further expected that the incorporation of additional psychological variables, such as confirmation bias (Del Vicario et al. 2016a; Ngampruetikorn & Stephens 2016; Starnini et al. 2016), could intensify the strength and persistence of echo chambers.

More generally, our results show that agent-based models, which capture dynamic interactions between individuals, provide a valuable opportunity for studying the formation of emergent phenomena such as echo chambers. This is in line with a growing body of literature employing agent-based models to investigate several related phenomena, including opinion polarisation (Duggins 2017), identity search (Watts et al. 2002), (dis)belief in climate change (Lewandowsky et al. 2019; Williams et al. 2015), and micro-targeting (Madsen & Pilditch 2018). Given the potential of agent-based models for the study of emergent behaviours, further work could focus on developing interventions aiming to reduce the occurrence of opinion segregation. Such interventions might extend previous work using ‘educational broadcasts’ (Madsen et al. 2018) or behavioural changes allowing for the interaction between dissimilar peers (van der Maas et al. 2020).

An important avenue for further work might be a closer examination of the belief updating process of the present agent populations. Specifically, agents in the present populations updated beliefs sequentially based on the declarations of previous generations (see also Bikhchandani et al. 1992; Pilditch 2017). Recent empirical studies demonstrated that people are sensitive to such statistical dependencies in social learning. For example, Whalen et al. (2018) showed that when the beliefs of others were formed sequentially, people updated their prior beliefs less. Considering such findings, an important step for follow-up simulations involves testing the robustness of echo chambers across varying levels of belief dependencies in a network. An additional extension might focus on the declaration functions used prior to communicating beliefs. In the present work, agents used a deterministic decision rule. Potential alternatives that might be contrasted in future work include probability matching (Shanks et al. 2002) or the communication of full probability densities (Fränken et al. 2020).


Model Documentation

The model was implemented in NetLogo version 6.0.4 (Wilensky 1999). All simulations were performed in R version 3.6.3 (R Core Team 2020) using the package RNetLogo (Thiele 2014). The code for the multi-agent model and simulation configurations is available in the CoMSES Network Computational Model Library as: Cascades across networks are sufficient for the formation of echo chambers: An agent-based model (version 1.0.0), https://www.comses.net/codebases/0654205c-5645-4da7-888f-4aecca8fafd5/releases/1.0.0/. Model code and simulation results can also be found in our GitHub repository: https://github.com/janphilippfranken/information_cascades_jpf_tdp_2020.

References

AMARAL, L. A. N., Scala, A., Barthelemy, M., & Stanley, H. E. (2000). Classes of small-world networks. Proceedings of the National Academy of Sciences, 97(21), 11149–11152. [doi:10.1073/pnas.200327197]

BAIL, C. A. (2016). Combining natural language processing and network analysis to examine how advocacy organizations stimulate conversation on social media. Proceedings of the National Academy of Sciences, 113(42), 11823–11828. [doi:10.1073/pnas.1607151113]

BAKSHY, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. [doi:10.1126/science.aaa1160]

BARKUN, M. (2013). A Culture of Conspiracy: Apocalyptic Visions in Contemporary America. Berkeley, CA: University of California Press. [doi:10.1525/9780520956520]

BESSI, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs conspiracy: Collective narratives in the age of misinformation. PloS ONE, 10(2), e0118093. [doi:10.1371/journal.pone.0118093]

BESSI, A., Zollo, F., Del Vicario, M., Puliga, M., Scala, A., Caldarelli, G., Uzzi, B., & Quattrociocchi, W. (2016). Users polarization on Facebook and Youtube. PloS ONE, 11(8), e0159641. [doi:10.1371/journal.pone.0159641]

BIKHCHANDANI, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992–1026. [doi:10.1086/261849]

BOLLETTA, U., & Pin, P. (2020). Polarization when people choose their peers. Available at SSRN: https://ssrn.com/abstract=3245800. [doi:10.2139/ssrn.3245800]

BOVENS, L., & Hartmann, S. (2003). Bayesian Epistemology. Oxford, UK: Oxford University Press.

CHAIKEN, S. (1999). 'The heuristic-systematic model in its broader context.' In S. Chaiken & Y. Trope (Eds.), Dual-Process Theories in Social Psychology. New York, NY: Guilford Press.

CHAIKEN, S., & Maheswaran, D. (1994). Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology, 66(3), 460. [doi:10.1037/0022-3514.66.3.460]

CHEN, T. S., & Syu, S. W. (2020). Friend recommendation based on mobile crowdsensing in social networks. 2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS). [doi:10.23919/apnoms50412.2020.9236783]

CIALDINI, R. B. (1993). Influence: The Psychology of Persuasion. New York, NY: Morrow.

CUDDY, A. J. C., Glick, P., & Beninger, A. (2011). The dynamics of warmth and competence judgments, and their outcomes in organizations. Research in Organizational Behavior, 31, 73–98. [doi:10.1016/j.riob.2011.10.004]

DEL Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. [doi:10.1073/pnas.1517441113]

DEL Vicario, M., Vivaldo, G., Bessi, A., Zollo, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). Echo chambers: Emotional contagion and group polarization on Facebook. Scientific Reports, 6, 37825. [doi:10.1038/srep37825]

DUGGINS, P. (2017). A psychologically-motivated model of opinion change with applications to American politics. Journal of Artificial Societies and Social Simulation, 20(1), 13: https://www.jasss.org/20/1/13.html. [doi:10.18564/jasss.3316]

FISKE, S. T., Cuddy, A. J. C., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83. [doi:10.1016/j.tics.2006.11.005]

FRÄNKEN, J. P., Theodoropolous, N., Moore, A., & Bramley, N. (2020). Belief revision in a micro-social network: Modeling sensitivity to statistical dependencies in social learning. Proceedings of the 42nd Annual Meeting of The Cognitive Science Society.

HAHN, U., Harris, A. J. L., & Corner, A. (2009). Argument content and argument source: An exploration. Informal Logic, 29(4), 337–367. [doi:10.22329/il.v29i4.2903]

HARRIS, A. J. L., & Hahn, U. (2009). Bayesian rationality in evaluating multiple testimonies: Incorporating the role of coherence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1366. [doi:10.1037/a0016567]

HARRIS, A. J. L., Hahn, U., Madsen, J. K., & Hsu, A. S. (2016). The appeal to expert opinion: Quantitative support for a Bayesian network approach. Cognitive Science, 40(6), 1496–1533. [doi:10.1111/cogs.12276]

HAW, A. L. (2020). What drives political news engagement in digital spaces? Reimagining "echo chambers" in a polarised and hybridised media ecology. Communication Research and Practice, 6(1), 38–54. [doi:10.1080/22041451.2020.1732002]

HAWTHORNE-MADELL, D., & Goodman, N. D. (2019). Reasoning about social sources to learn from actions and outcomes. Decision, 6(1), 17. [doi:10.1037/dec0000088]

HEGSELMANN, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2: https://www.jasss.org/5/3/2.html.

HEGSELMANN, R., & Krause, U. (2005). Opinion dynamics driven by various ways of averaging. Computational Economics, 25(4), 381–405. [doi:10.1007/s10614-005-6296-3]

HENRY, A. D., Prałat, P., & Zhang, C. Q. (2011). Emergence of segregation in evolving social networks. Proceedings of the National Academy of Sciences, 108(21), 8605–8610. [doi:10.1073/pnas.1014486108]

HOUSHOLDER, E. E., & LaMarre, H. L. (2014). Facebook politics: Toward a process model for achieving political source credibility through social media. Journal of Information Technology & Politics, 11(4), 368–382. [doi:10.1080/19331681.2014.951753]

JANN, O., & Schottmüller, C. (2020). Why echo chambers are useful. Available at: https://olejann.net/wp-content/uploads/echo_chambers.pdf.

JASNY, L., & Fisher, D. R. (2019). Echo chambers in climate science. Environmental Research Communications, 1(10), 101003. [doi:10.1088/2515-7620/ab491c]

JASNY, L., Waggle, J., & Fisher, D. R. (2015). An empirical examination of echo chambers in US climate policy networks. Nature Climate Change, 5(8), 782–786. [doi:10.1038/nclimate2666]

KELMAN, H. C. (1958). Compliance, identification, and internalization three processes of attitude change. Journal of Conflict Resolution, 2(1), 51–60. [doi:10.1177/002200275800200106]

LAZER, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. [doi:10.1126/science.aao2998]

LEWANDOWSKY, S., Pilditch, T. D., Madsen, J. K., Oreskes, N., & Risbey, J. S. (2019). Influence and seepage: An evidence-resistant minority can affect public opinion and scientific belief formation. Cognition, 188, 124–139. [doi:10.1016/j.cognition.2019.01.011]

LORENZ, J. (2007). Continuous opinion dynamics under bounded confidence: A survey. International Journal of Modern Physics C, 18(12), 1819–1838. [doi:10.1142/s0129183107011789]

MADSEN, J. K. (2016). Trump supported it?! A Bayesian source credibility model applied to appeals to specific American presidential candidates’ opinions. In A. Papafragou, D. Grodner, D. Mirman, & J. C. Trueswell (Eds.), Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 165–170). Cognitive Science Society.

MADSEN, J. K., Bailey, R. M., & Pilditch, T. D. (2018). Large networks of rational agents form persistent echo chambers. Scientific Reports, 8(1), 12391. [doi:10.1038/s41598-018-25558-7]

MADSEN, J. K., Bailey, R., & Pilditch, T. D. (2017). Growing a Bayesian conspiracy theorist: An agent-based model. Proceedings of the Annual Meeting of the Cognitive Science Society.

MADSEN, J. K., & Pilditch, T. D. (2018). A method for evaluating cognitively informed micro-targeted campaign strategies: An agent-based model proof of principle. PloS ONE, 13(4), e0193909. [doi:10.1371/journal.pone.0193909]

NEWMAN, N., Fletcher, R., Kalogeropoulos, A., Levy, D., & Kleis Nielsen, R. (2017). Reuters digital news report 2017. Reuters Institute for the Study of Journalism, University of Oxford. [doi:10.2139/ssrn.2619576]

NEWMAN, N., Fletcher, R., Kalogeropoulos, A., & Nielsen, R. (2017). Reuters institute digital news report 2019. Reuters Institute for the Study of Journalism, University of Oxford.

NGAMPRUETIKORN, V., & Stephens, G. J. (2016). Bias, belief, and consensus: Collective opinion formation on fluctuating networks. Physical Review E, 94(5), 052312. [doi:10.1103/physreve.94.052312]

NICKERSON, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. [doi:10.1037/1089-2680.2.2.175]

PETTY, R. E., & Cacioppo, J. T. (1986). 'The elaboration likelihood model of persuasion.' In R. E. Petty & J. T. Cacioppo (Eds.), Communication and Persuasion (pp. 1–24). Berlin, Heidelberg: Springer. [doi:10.1007/978-1-4612-4964-1_1]

PILDITCH, T. D. (2017). Opinion cascades and echo-chambers in online networks: A proof of concept agent-based model. Proceedings of the Annual Meeting of the Cognitive Science Society.

R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing. Available at: https://www.R-project.org.

RIGHI, S., & Takács, K. (2014). Emotional strategies as catalysts for cooperation in signed networks. Advances in Complex Systems, 17(02), 1450011. [doi:10.1142/s0219525914500118]

ROBINSON, J. P., Shaver, P. R., & Wrightsman, L. S. (1999). Measures of Political Attitudes. Cambridge, MA: Academic Press.

SANTOS, F. C., Pacheco, J. M., & Lenaerts, T. (2006). Cooperation prevails when individuals adjust their social ties. PLoS Computational Biology, 2(10), e140. [doi:10.1371/journal.pcbi.0020140]

SASAHARA, K., Chen, W., Peng, H., Ciampaglia, G. L., Flammini, A., & Menczer, F. (2020). Social influence and unfollowing accelerate the emergence of echo chambers. Journal of Computational Social Science. [doi:10.1007/s42001-020-00084-7]

SHANKS, D. R., Tunney, R. J., & McCarthy, J. D. (2002). A re-examination of probability matching and rational choice. Journal of Behavioral Decision Making, 15(3), 233–250. [doi:10.1002/bdm.413]

SIKDER, O., Smith, R. E., Vivo, P., & Livan, G. (2020). A minimalistic model of bias, polarization and misinformation in social networks. Scientific Reports, 10(1), 1–11. [doi:10.1038/s41598-020-62085-w]

STARBIRD, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston marathon bombing. IConference 2014 Proceedings.

STARNINI, M., Frasca, M., & Baronchelli, A. (2016). Emergence of metapopulations and echo chambers in mobile agents. Scientific Reports, 6, 31834. [doi:10.1038/srep31834]

TAKIKAWA, H., & Nagayoshi, K. (2017). Political polarization in social media: Analysis of the "Twitter political field" in Japan. IEEE International Conference on Big Data. [doi:10.1109/bigdata.2017.8258291]

THIELE, J. C. (2014). R marries NetLogo: Introduction to the RNetLogo package. Journal of Statistical Software, 58(2), 1–41.

TÖRNBERG, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS ONE, 13(9), e0203958. [doi:10.1371/journal.pone.0203958]

VAN der Maas, H. L. J., Dalege, J., & Waldorp, L. (2020). The polarization within and across individuals: The hierarchical Ising opinion model. Journal of Complex Networks, 8(2), cnaa010. [doi:10.1093/comnet/cnaa010]

VOSOUGHI, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. [doi:10.1126/science.aap9559]

WALTON, D. (2010). Appeal to Expert Opinion: Arguments from Authority. University Park, PA: Penn State University Press.

WATTS, D. J., Dodds, P. S., & Newman, M. E. J. (2002). Identity and search in social networks. Science, 296(5571), 1302–1305. [doi:10.1126/science.1070120]

WHALEN, A., Griffiths, T. L., & Buchsbaum, D. (2018). Sensitivity to shared information in social learning. Cognitive Science, 42(1), 168–187. [doi:10.1111/cogs.12485]
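
WILENSKY, U. (1999). NetLogo. Evanston, IL: Center for Connected Learning and Computer-Based Modeling, Northwestern University. Available at: http://ccl.northwestern.edu/netlogo/.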

WILLIAMS, H. T. P., McMurray, J. R., Kurz, T., & Lambert, F. H. (2015). Network analysis reveals open forums and echo chambers in social media discussions of climate change. Global Environmental Change, 32, 126–138. [doi:10.1016/j.gloenvcha.2015.03.006]

YAMAGISHI, T., Hayashi, N., & Jin, N. (1984). 'Prisoner’s dilemma networks: Selection strategy versus action strategy.' In U. Schulz, W. Albers, & U. Mueller (Eds.), Social Dilemmas and Cooperation (pp. 233–250). Berlin Heidelberg: Springer. [doi:10.1007/978-3-642-78860-4_12]