Introduction

In the modern age, media (especially new media) play a significant role in shaping people’s thoughts and ways of thinking. Human minds are complex and often far from rational, and people’s opinions are constantly influenced by input from acquaintances and from media such as TV, newspapers, YouTube, and Facebook. In the past, broadcast media with their one-to-many flow of information acted as gatekeepers of information and as the primary vehicles through which the messages of political elites or authorities reached the public (Quattrociocchi et al. 2014). Nowadays, with the continuing spread of the internet, especially the mobile internet, every individual has the ability to publish information and voice opinions, which has rapidly changed how information and opinions spread. With this plurality of media, the decentralization of media has become apparent, and new communication technologies continue to grow more attractive to the former audiences of traditional media (Jenkins 2004; Noll & Price 1998). Today, the media compete for people’s attention, feeding (as much as possible) each individual a steady stream of content built for and seen only by them. As a result, people may have become more deeply divided and fragmented than ever before.

The information cocoon is a well-known concept: it describes the phenomenon in which individuals choose only the information sources and contents they like, until a cocoon eventually forms around them (Sunstein 2006). Similar concepts have been proposed by scholars over the years, such as the echo chamber (Jamieson & Cappella 2008), the filter bubble (Pariser 2011), and information enclaves (Weeks et al. 2016). These works reflect concerns about the opinion polarization brought about by diverse media. For instance, the High Level Expert Group on Media Diversity and Pluralism states that increasing filtering mechanisms may shake the foundations of democracy (Vīke-Freiberga et al. 2013). Hence, it is necessary for researchers to study how public opinion forms when media are more diverse and people have more choices.

To figure out the role of diverse media in shaping public opinion, theoretical modeling is a useful tool. Opinion dynamics, inspired by conformity experiments (French Jr 1956), is a model-based method for exploring the process of interaction among individuals and the fusion of their opinions (Liang et al. 2016). Nearly all existing models of opinion dynamics are agent-based, i.e., models built to simulate the behavior and interaction of autonomous agents, with one agent representing one individual. In a model of opinion dynamics, agents consider the opinions of other agents and then form new opinions according to the fusion rule defined for that model (Dong et al. 2018). Experimental results on public opinion dynamics based on agent-based models can go beyond traditional empirical research and reach a certain standard of scientific explanation (Flache et al. 2017).

Existing models of opinion dynamics can be divided into two categories: discrete opinion models (Clifford & Sudbury 1973; Holley & Liggett 1975; Stauffer 2002; Sznajd-Weron & Sznajd 2000) and continuous models (Axelrod 1997; Deffuant et al. 2000; DeGroot 1974; Fan & Pedrycz 2015, 2016; Hegselmann & Krause 2002). A discrete model is generally used to describe a public choice, such as whether to approve a motion or which candidate to vote for in an election; the most representative examples are the voter model (Clifford & Sudbury 1973; Holley & Liggett 1975) and the Sznajd model (Stauffer 2002; Sznajd-Weron & Sznajd 2000). However, their discrete formulation restricts these models’ usage scenarios. Hence, continuous models are more prevalent in recent research on opinion dynamics. The DeGroot model (DeGroot 1974) is the fundamental continuous model, and one of its concise variants, the bounded confidence model, is the most popular. The bounded confidence idea was introduced by Axelrod (1997), with a fusion rule in which an agent adjusts its opinion only when the agent it interacts with holds a differing opinion that lies within a specific confidence boundary. It should be noted that the opinions in Axelrod’s model, called cultural attributes, consist of a vector of discrete variables; hence, that model is actually a discrete opinion model. The two most representative bounded confidence models are the Deffuant-Weisbuch (DW) model (Deffuant et al. 2000) and the Hegselmann-Krause (HK) model (Hegselmann & Krause 2002). Their crucial conclusion is that the population splits into separate opinion clusters if the confidence boundary is small enough and converges to consensus otherwise, but polarization emerges when the initial state contains both extremist agents, with extreme initial opinions and minimal confidence boundaries, and mild agents with larger confidence boundaries.

Beyond these basic models of opinion dynamics, researchers have proposed several creative models. Based on Social Judgment Theory, Jager and Amblard (Jager & Amblard 2004) first introduced repulsion between agents into the traditional bounded confidence model. On this basis, Fan et al. proposed the social judgment-based opinion (SJBO) model (Fan & Pedrycz 2016, 2017). The continuous opinions and discrete actions (CODA) model (Martins 2008), meanwhile, is a hybrid model in which agents hold continuous opinions but their actions are discrete. Subsequently, by introducing the concept of trust into this model, the same author (Martins 2013) discussed the issue of trust in society. Although most people enjoy the convenience of interaction that online technology provides, some people can only interact or get information offline, and others choose to do so. Hence, Dong et al. (2017) designed an online-offline model to unveil the interaction mechanism among agents across online and offline social networks, revealing that online agents can smooth opinion changes and decrease the number of opinion clusters.

Additionally, as a useful tool for modeling and studying social opinions, opinion dynamics theories have been applied in various fields of scientific research. Topics include elections (Moya et al. 2017), the spread of extremist opinions (Fan & Pedrycz 2015), cyber violence (Liu et al. 2019), and even cognitive dissonance (Li et al. 2020).

For several years now, research has focused on understanding the role of media (Hu & Zhu 2017; Martins et al. 2010; Pineda & Buendía 2015; Quattrociocchi et al. 2011) and the web (Ugander et al. 2012; Zhang et al. 2018) in the formation, evolution, and diffusion of public opinion. Quattrociocchi et al. (2011) studied the formation of agents’ opinions under the influence of mass media and experts, based on an Italian political campaign in 2008. Martins et al. (2010), who had introduced repulsion behavior into the Deffuant–Weisbuch model, then studied the reaction of this model to external information, concluding that repulsion can help people reach consensus under the influence of external information. Hu & Zhu (2017) proposed an analyzable model, based on the multi-state voter model, to study the effects of mass media on public opinion, with simulations showing that media influence is amplified by the interactions among agents within social networks. However, the majority of these works focus on the impact of mass media, and less attention has been devoted to exploring how diverse media and audiences’ choices affect the course of public opinion.

Building on previous research by scholars from the social and computer sciences, this paper studies the process by which opinions spread in a closed social network with various information sources (i.e., media), where individuals can choose how to obtain information. The main objectives of this paper will be achieved by answering the following three questions.

  • When individuals can choose their favorite media, will this lead to opinion polarization?
  • How do the quantity and distribution patterns of the diverse media affect the spread of the opinions?
  • How can campaigns for consciousness or education be improved through specific media constructions?

Therefore, we propose a novel model, the social judgment and influence based opinion (SIBO) model, to describe how agents adjust their opinions after interacting with other agents or with media. Multiple media holding various opinions are treated as external information sources bringing opinions into the closed social network, and individuals are free to choose the media they like. The simulations are based on a real-life social network instead of artificial data sets.

The rest of this paper is organized as follows. In the following section, we present a new model of opinion dynamics and detailed simulation plans. Then, our simulation results and related discussions are provided, and the conclusion is summarized at the end.

Method

Model

We model the social network using agent-based modeling (ABM), which has proven suitable for describing complicated socio-technical systems such as Facebook and Twitter (Alvarez-Galvez 2016). In the model, each individual or medium is treated as an agent, and their connections are simplified as links or edges. The social network and the information network are modeled as two distinct networks in our research, as shown in Figure 1. As in the real world, the social network is not fully connected: agents are connected by their relationships, whether online or offline. As Figure 1(b) shows, in today’s media era the form and content of the media have become more diverse. Considering that individuals choose their information sources based on their own opinions (An et al. 2013), they can and will choose specific media contents and forms. Consequently, in our study, an individual’s opinion is viewed as changing under influence from two sources: acquaintances in the social network and media in the information network.

Inspired by the SJBO model (Fan & Pedrycz 2016, 2017), which can be seen as an extension of the widely used DW model, we present our model as the social judgment and influence based opinion (SIBO) model. A schematic diagram of the model is presented in Figure 2. In the model, each agent’s opinion is a continuous value in the range \(\left[ { - 1, + 1} \right]\). In this opinion space, an opinion value of \(0\) means that the agent is entirely ignorant of or neutral on a topic, a positive value indicates a positive or supportive opinion, and a negative value indicates the opposite. After agent \(i\) communicates with agent \(j\) from its social network at time \(t\), the opinion of agent \(i\) shifts according to:

\[ x_i^{t + 1} = \left\{ \begin{array}{r} x_i^t + {\mu _{ij}}\left( {x_j^t - x_i^t} \right),{\rm{ }}\left| {x_i^t - x_j^t} \right| \le {\varepsilon _i}\\ x_i^t,{\rm{ }}{\varepsilon _i} < \left| {x_i^t - x_j^t} \right| \le {\tau _i}\\ x_i^t - {\mu _{ij}}\left( {x_j^t - x_i^t} \right)\left( {1 - \left| {x_i^t} \right|} \right),{\rm{ }}\left| {x_i^t - x_j^t} \right| > {\tau _i} \end{array} \right.\] \[(1)\]
where \(x_i^t\) and \(x_j^t\) stand for the opinions of agents \(i\) and \(j\) at time \(t\), respectively, and \(x_i^{t+1}\) is the opinion of agent \(i\) at time \(t+1\); \(\mu_{ij}\), \(\varepsilon_i\), and \(\tau_i\) are the influence coefficient, the convergence threshold, and the repulsion threshold for agent \(i\), respectively. From Equation 1 we can see that the opinion of agent \(i\) is attracted toward the persuader when the difference between their opinions is smaller than the convergence threshold. However, unlike in the bounded confidence model, when the opinion difference is greater than the repulsion threshold, the opinion of agent \(i\) shifts away from the persuader’s opinion. In the remaining case, agent \(i\) keeps its own opinion.
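To make the three branches concrete, here is a minimal sketch of the update rule (the released model is written in Matlab; this Python version is only illustrative, with the influence coefficient \(\mu_{ij}\) passed in as a plain parameter):

```python
def update_pair(x_i, x_j, mu_ij, eps_i, tau_i):
    """Opinion of agent i after meeting agent j (Equation 1)."""
    diff = abs(x_i - x_j)
    if diff <= eps_i:            # attraction: i moves toward j
        return x_i + mu_ij * (x_j - x_i)
    if diff <= tau_i:            # neutral zone: i keeps its opinion
        return x_i
    # repulsion: i moves away from j, damped near the extremes of [-1, 1]
    return x_i - mu_ij * (x_j - x_i) * (1 - abs(x_i))

# A mild disagreement attracts, a large one repels (eps_i <= tau_i by construction).
print(update_pair(0.1, 0.4, mu_ij=0.3, eps_i=0.5, tau_i=1.2))   # 0.19
print(update_pair(0.5, -0.9, mu_ij=0.3, eps_i=0.5, tau_i=1.2))  # 0.71, pushed away from -0.9
```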

The influence coefficient \(\mu_{ij}\) is inspired by the concept of universal gravitation: it reflects the force that draws two agents toward each other or pushes them apart. The value of \(\mu_{ij}\) is proportional to the influence of the persuader and inversely proportional to the squared opinion difference. Hence, the definition is given by:

\[ {\mu _{ij}} = 0.5\tanh \left[ {\frac{{{m_j}}}{{20{{\left( {x_j^t - x_i^t} \right)}^2}}}} \right]\] \[(2)\]
where \(m_{j}\) stands for the influence level of agent \(j\), and its value equals the total number of edges of this agent (i.e., the total number of friends it has); the hyperbolic tangent is used as a saturation function to limit the value. In fact, the influence coefficient carries information from the social network itself, which has been ignored in previous models. By using this information, agents with more influence become more persuasive and have greater impact on opinion dissemination; such individuals are usually called opinion leaders.
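A corresponding sketch of Equation 2, where \(m_j\) is simply the degree of the persuader in the friendship graph; the guard for a zero opinion gap is our own assumption, since the paper does not specify this edge case (the shift in Equation 1 is zero anyway when the opinions coincide):

```python
import math

def influence(m_j, x_i, x_j):
    """Influence coefficient mu_ij (Equation 2): grows with the persuader's degree m_j
    and shrinks with the squared opinion difference; tanh caps it at 0.5."""
    if x_i == x_j:
        return 0.5    # assumption: saturated maximum when the opinions coincide
    return 0.5 * math.tanh(m_j / (20.0 * (x_j - x_i) ** 2))

print(influence(41, 0.1, 0.4))  # well-connected persuader, small gap -> close to 0.5
print(influence(5, 0.1, 0.9))   # few friends, large gap -> much weaker pull (~0.19)
```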

The initial definition of opinion leaders in social opinion dynamics was proposed in Katz and Lazarsfeld’s book (Katz & Lazarsfeld 1955) as "the individuals who were likely to influence other persons in their immediate environment". This concept is a good indication of the asymmetric influence of individuals in a social network. Many papers have analyzed the function of opinion leaders in the context of opinion dynamics (Chen et al. 2016; Roch 2005; Watts & Dodds 2007), based on different opinion fusion rules. Meanwhile, the total number of an agent’s edges, i.e., the degree of a node in graph theory, is the most intuitive measure of an agent’s influence within a network that does not consider the global structure of the graph (Bamakan et al. 2019). That is why \(\mu_{ij}\) and \(m_j\) are positively correlated.

Similarly, when agent \(i\) chooses a medium at time \(t\), its opinion is updated according to:

\[ x_i^{t + 1} = \left\{ \begin{array}{r} x_i^t + 0.5\left( {{G_k} - x_i^t} \right),{\rm{ }}\left| {x_i^t - {G_k}} \right| \le {\varepsilon _i}\\ x_i^t,{\rm{ }}{\varepsilon _i} < \left| {x_i^t - {G_k}} \right| \le {\tau _i}\\ x_i^t - 0.5\left( {{G_k} - x_i^t} \right)\left( {1 - \left| {x_i^t} \right|} \right),{\rm{ }}\left| {x_i^t - {G_k}} \right| > {\tau _i} \end{array} \right.\] \[(3)\]
where \(G_k\) is the opinion of the medium \(k\) chosen by agent \(i\). Although not all media have the same influence level, for simplicity we set the coefficient to its maximum value in Equation 3. In addition, we introduce another parameter, \({\eta _i} \in \left[ {0,{\rm{ }}2} \right]\), which stands for the horizon of an agent. In our model, agents only choose media within their horizon (i.e., their range of choices) when obtaining information, a phenomenon that is becoming much more common in the new media era. In the model, the convergence threshold \(\varepsilon_i\) is always less than or equal to the repulsion threshold \(\tau_i\), while the horizon \(\eta_i\) has no such limitation: it can be less than \(\varepsilon_i\) or greater than \(\tau_i\).
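A sketch of the media step under our reading of the text: agent \(i\) only considers media whose opinion lies within its horizon \(\eta_i\), picks one of them (how the choice among visible media is made is not specified in the paper, so the random pick below is an assumption), and then applies Equation 3:

```python
import random

def choose_medium(x_i, eta_i, media_opinions):
    """Return the opinion G_k of a medium within agent i's horizon, or None if none qualifies."""
    visible = [g for g in media_opinions if abs(g - x_i) <= eta_i]
    return random.choice(visible) if visible else None

def update_with_medium(x_i, G_k, eps_i, tau_i):
    """Opinion of agent i after consuming medium k (Equation 3, coefficient fixed at 0.5)."""
    diff = abs(x_i - G_k)
    if diff <= eps_i:
        return x_i + 0.5 * (G_k - x_i)
    if diff <= tau_i:
        return x_i
    return x_i - 0.5 * (G_k - x_i) * (1 - abs(x_i))
```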

The repulsion threshold, as well as the convergence threshold, is inspired by Social Judgment Theory (Sherif & Hovland 1961), which has been utilized in many articles. These two confirmation-bias-related parameters determine how individuals change their opinion after being confronted with another opinion. As Axelrod mentioned in his work (Axelrod 1997), beyond beliefs, attitudes, and behavior, interpersonal influence extends to many other things, such as language, art, and so on. Therefore, the thresholds are representational parameters of internal emotion, determined by factors such as education, growth environment, and so on. The horizon parameter, on the other hand, is designed to explore how individuals’ choice behavior affects the opinion dynamics; it therefore represents a kind of active external behavior. Nowadays, people know a good deal about their media environments, reflect upon how they use those environments, and can even give a rational account of their actions (Webster 2009). For instance, an individual may try TikTok or Clubhouse out of curiosity yet still find it hard to accept the information these new media provide because of its bias. Therefore, the repulsion threshold and the horizon are conceptually different.

Simulation implementation

Although opinions in the model are continuous, the agents’ behavior of obtaining information and communicating with each other is discrete. Hence, in the simulation, the agents adjust their opinions in discrete timesteps; the implementation process is depicted in Figure 3. In each timestep \(T\), every agent first chooses a medium according to its horizon and interacts with that medium. Afterward, considering the asynchrony of online and offline interactions (Ding et al. 2017), the agent communicates with three random acquaintances (instead of one) in the social network. After each interaction, the agents adjust their opinions according to Equations 1 and 3.

In this study, we use a real-life social network, rather than an artificially generated one, for simulation. The social network is the Facebook friendship network of Haverford College, Pennsylvania (Traud et al. 2012). As shown in Figure 4, this network has 1,446 agents and 59,589 edges, and the average degree is 41.2 (i.e., each individual has an average of 41.2 friends). To account for inter-individual diversity, the agents’ convergence thresholds \(\varepsilon_i\) are uniformly distributed over \(\left( {0,2} \right)\), and the repulsion thresholds \(\tau_i\) are uniformly distributed over \(\left( {\varepsilon_i,2} \right)\). Moreover, because agents change their opinions based only on differences in opinion, it does not matter what or who the interaction partner is; therefore, agents use the same convergence and repulsion thresholds toward all other agents, including media. When the public has a certain level of understanding of a topic, individuals hold all kinds of opinions; hence, without loss of generality, the initial opinions of the agents are distributed uniformly over \(\left( {-1,1} \right)\). In the simulation, the interaction cycle repeats 500 times, i.e., \(T = 500\). To account for the randomness of the whole process, the dynamic simulation is carried out 400 times with different initial configurations.
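Putting the pieces together, the following compact sketch shows one possible implementation of the schedule described above, reusing `update_pair`, `influence`, `choose_medium`, and `update_with_medium` from the earlier sketches. It substitutes a small random graph for the Haverford Facebook network and applies updates in place, so it only illustrates the ordering of interactions within a timestep, not the released Matlab code:

```python
import random
import networkx as nx

random.seed(0)
G_net = nx.erdos_renyi_graph(n=200, p=0.05)               # stand-in for the Facebook network
n = G_net.number_of_nodes()

x   = [random.uniform(-1, 1) for _ in range(n)]           # initial opinions ~ U(-1, 1)
eps = [random.uniform(0, 2) for _ in range(n)]            # convergence thresholds
tau = [random.uniform(eps[i], 2) for i in range(n)]       # repulsion thresholds, tau_i >= eps_i
eta = [random.uniform(eps[i], tau[i]) for i in range(n)]  # condition H: eps_i <= eta_i <= tau_i
media = [0.6]                                             # e.g., one authoritarian medium

for t in range(500):                                      # T = 500 timesteps
    for i in range(n):
        g = choose_medium(x[i], eta[i], media)            # media interaction first
        if g is not None:
            x[i] = update_with_medium(x[i], g, eps[i], tau[i])
        friends = list(G_net.neighbors(i))                # then three random acquaintances
        for j in random.sample(friends, min(3, len(friends))):
            mu = influence(G_net.degree(j), x[i], x[j])
            x[i] = update_pair(x[i], x[j], mu, eps[i], tau[i])
```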

Simulation Results and Discussion

Simulation scenarios

In order to answer the three questions given in the introduction, simulation scenarios based on different conditions are given in Table 1. These scenarios are designed following the one-factor-at-a-time methodology (ten Broeke et al. 2016), which reduces the influence of other parameters on the results by modifying as few parameters as possible.

Table 1: Simulation scenarios
Scenario | Changing parameters | Question to answer
1 | Horizon of the agents | How do audiences’ choices affect opinion dynamics?
2 | Opinion of one authoritarian medium | How does the medium’s opinion affect opinion dynamics?
3 | Quantity and distribution patterns of the media | How do the quantity and distribution patterns of the media affect opinion dynamics?
4 | Distribution patterns or changing opinions of the media | How can public opinion be guided?

It is clear enough in our times that a consensus of public opinion is rarely achieved. Thus, a metric is needed to reflect the degree of dispersion in public opinion. Although the number of opinion clusters is widely used, the algorithm that calculates this number often requires iteration and involves substantial computational cost. Therefore, we introduce a metric called the participation ratio (Nagel et al. 1984), borrowed from physics, with its definition given by:

\[ P = \frac{{{{\left( {\sum\limits_{i = 1}^n {x_i^2} } \right)}^2}}}{{n\sum\limits_{i = 1}^n {x_i^4} }}\] \[(4)\]
where \(x_i\) is the final opinion of agent \(i\) and \(n\) is the total number of agents, while \(P\) is a scalar in the range of \(\left( {0,1} \right]\). \(P=1\) means that all the agents reach a consensus, and its value diminishes with increased diversity of discrete public opinions. In short, \(P^{-1}\) is approximately equal to the number of opinion clusters but requires less computation.
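A quick numerical check of this metric (numpy sketch; the clustered example is ours):

```python
import numpy as np

def participation_ratio(x):
    """Equation 4: P = (sum x_i^2)^2 / (n * sum x_i^4)."""
    x = np.asarray(x, dtype=float)
    return (x ** 2).sum() ** 2 / (len(x) * (x ** 4).sum())

print(participation_ratio(np.full(1000, 0.6)))                 # consensus -> P = 1
two_clusters = np.concatenate([np.full(500, 0.8), np.zeros(500)])
print(1 / participation_ratio(two_clusters))                   # two equal clusters -> 1/P = 2
```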

Scenario 1

In this scenario, the influence of individual choice behavior is discussed. Hence, only one medium is present: an authoritarian medium broadcasting a fixed opinion of \(0.6\) (i.e., \(G = 0.6\)). The reason for choosing this opinion value is given in the presentation of the next scenario.

The horizon value \(\eta_i\) is the same for all agents and is varied from \(0\) to \(2\) in steps of \(0.2\). In addition, a further condition is considered in which each agent’s horizon value lies between its convergence and repulsion thresholds, which is more in line with practical situations. The simulation results are depicted in Figure 5, where H denotes this additional condition. To reveal more information from the random results, box plots are used instead of simple mean-value plots. As demonstrated in Figures 5(a) and 5(b), the overall trend is that the average opinion increases and the participation ratio decreases with higher horizon values. The reason is not hard to understand: in a close-minded social network with a low horizon value, agents only get information from friends and media whose opinions are very similar to their own. In such a society, agents with extreme opinions (\(-1\) or \(+1\)) are more committed because they refuse to obtain information from media that differ from them. As a result, individuals are more likely to be attracted by extreme opinions, which tends to produce polarization, as shown in Figure 5(c). To reduce randomness, the distribution graph is based on all the agents’ opinions across the 400 repeated simulations.

When \(\eta_i<0.6\), the final average opinion contains greater uncertainty. Once \(\eta_i\) reaches \(0.6\), public opinion is influenced by the propaganda most of the time, but because of the low horizon values there remains a considerable possibility that the propaganda pushes public opinion toward the negative side; this can be seen from the crossed outliers in Figure 5(a). In addition, we observe a counterintuitive phenomenon: public opinion decreases slightly once \(\eta_i>0.8\). Analyzing the opinion distribution, we find that this is because the horizon value exceeds most agents’ repulsion thresholds, which exposes some agents to information they find repulsive and pushes them into opposition. As for the additional condition H, with a random horizon value between the two thresholds, the average opinion of the agents is relatively high.

In summary, an open-minded social network with a high horizon improves the spreading of the media’s opinions, but if the society is too open, the effectiveness of authoritarian propaganda decreases. Moreover, in this scenario, regardless of whether the society is close-minded, more than 15% of agents remain extremists.

Scenario 2

Although many scholars have discussed the impact of mass media on public opinion, it is still necessary to study it here because audiences can now choose their media. In this scenario, the opinion of the single medium varies from \(-1\) to \(1\), and each agent’s horizon value lies between its convergence and repulsion thresholds. The results are shown in Figure 6. It is obvious that the medium has a strong impact on public opinion, as seen in Figure 6(a), but as its opinion becomes extreme, its influence declines slightly. When the opinion of the medium reaches \(1\) or \(-1\), the average opinion only reaches about \(0.6\) or \(-0.6\), and opposed public opinion even emerges in some cases. From Figure 6(b) it can be seen that the participation ratio decreases as the medium’s opinion becomes less radical, until the opinion is completely neutral. Even when the medium is completely neutral, 20% of agents hold extreme opinions (\(1\) or \(-1\)), as shown in Figure 6(d).

The reduced media influence in Figure 6(a) is related to the agents’ horizon values. To illustrate this, the simulation result of varying the horizon with one medium holding an extreme opinion (\(G = 1\)) is given in Figure 6(c). As observed there, the appearance of opposed public opinion diminishes as agents’ horizons widen. In addition, compared with Figure 5(a), extreme media opinions have a higher probability of generating opposed public opinion; that is the reason for our choice of \(G = 0.6\) in Scenario 1.

In conclusion, authoritarian media play an essential role in propaganda and can change public opinion effectively, although their influence is mitigated as minds become more open (i.e., as horizon values increase). Additionally, extreme media opinions lead to polarization, and the agents pushed into opposition lower the average opinion. Extreme media opinions also show greater potential to generate collectively opposed public opinion in a close-minded society.

Scenario 3

In this scenario, we explore how the quantity and distribution patterns of media affect opinion dynamics. For this objective, the following simulations present multiple media with different distributions. We set the numbers for the media to 15, 73, 145, 217, and 289, accounting for 1%, 5%, 10%, 15%, and 20% of the total number of agents, respectively. To better reflect reality, we consider three different distributions of these media.

In the first case, the opinions of the media are evenly distributed over the opinion space \(\left[ {- 1, + 1} \right]\), which is highly idealized. Figure 7 presents public opinion under different numbers of media with this uniform distribution. It is important to note that a new metric, the Euclidean norm, is introduced here. The Euclidean norm is related to the Euclidean distance, which describes the straight-line distance between two points. In our work, it indicates the distance between public opinion and the average opinion of the media, with its definition given by:

\[ E = \sqrt {\sum\limits_{i = 1}^n {\left(x_i-\bar G\right)^2} }\] \[(5)\]
where \(\bar G\) is the average opinion of the media, and in this case \(\bar G=0\).
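In code this is simply (numpy sketch):

```python
import numpy as np

def euclidean_norm(x, G_bar=0.0):
    """Equation 5: straight-line distance between public opinion and the media average."""
    return float(np.sqrt(np.sum((np.asarray(x, dtype=float) - G_bar) ** 2)))
```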

As observed in Figure 7(a), although there is a great deal of randomness, the average opinion stays within roughly \(\pm 0.5\) because the average opinion of the media is zero. In fact, the diversity of the media distracts the agents and leads to randomness and disorder, but the quantity of media makes little difference. Furthermore, compared to the situation with one medium, both the average opinion and the Euclidean norm are more random under multiple media. Comparing the distributions in Figures 7(c) and (d) with Figure 6(d), it is evident that opinion is spread more evenly over the opinion space because the agents have more media to choose from when obtaining information. In general, uniformly distributed media bring strong randomness into the social network, thereby breaking down the information monopoly of a single authoritarian medium and also reducing the percentage of extreme opinions.

In case 2, the opinions of the media are normally distributed, with mean \(0\) and standard deviation \(0.3\), i.e., \(G \sim {\cal N}\left( {0,{{0.3}^2}} \right)\). Although these parameters are carefully chosen, there is still a chance that an opinion will be greater than \(1\) or less than \(-1\), so a saturation function is used to keep the opinions within \(\left[ {- 1, + 1} \right]\). Figure 8 shows public opinion under different numbers of media with the normal distribution. It can be seen that, unlike in case 1, normally distributed media opinions bring some degree of order to public opinion. First of all, the average opinion converges to around \(\pm 0.2\), more compact than in case 1, and the randomness decreases as the quantity of media increases.

Moreover, as observed in Figure 8(b), with more media available, the Euclidean norm decreases significantly, but the trend slows down once the number of media reaches 145. In fact, due to the properties of the normal distribution, most media opinions are likely to be near neutral rather than extreme, leading to the similar public opinion distributions shown in Figures 8(c) and (d), apart from the extreme opinions. Compared to case 1, this case not only reduces extreme opinions but also concentrates neutral opinions at a higher density, even more so than in the case where only one medium is available (see Figure 6(d)).

In case 3, a special distribution is utilized in the simulation; we call it the reverse-normal distribution (this is not a mathematical term). As the reverse of the normal distribution, most of the media hold extreme opinions instead of neutral ones, and the opinions are obtained using \(G' = {\mathop{\rm sgn}} (G) - G,\;G \sim {\cal N}\left( {0,{{0.3}^2}} \right)\). The effect of this distribution on public opinion is depicted in Figure 9. Although the average opinion is still around \(0\), the Euclidean norm increases sharply with multiple media. Moreover, in Figures 9(c) and (d), the distribution of public opinion resembles the opinion distribution of the media input to the system. Although extreme opinions predominate in social opinion, compared with the situation of one authoritarian medium holding an extreme opinion (i.e., \(G = 1\) or \(-1\)), various opinions still exist and are scattered across the opinion space in this case.
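For reference, the three media-opinion distributions of cases 1–3 could be generated as follows (numpy sketch; whether "evenly distributed" means evenly spaced or drawn uniformly at random is not stated in the text, so the evenly spaced version below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n_media = 217

# Case 1: opinions evenly spread over [-1, 1]
uniform_media = np.linspace(-1, 1, n_media)

# Case 2: N(0, 0.3^2), saturated to the opinion space
normal_media = np.clip(rng.normal(0.0, 0.3, n_media), -1, 1)

# Case 3: "reverse-normal", G' = sgn(G) - G, so most mass sits near +-1
g = rng.normal(0.0, 0.3, n_media)
reverse_media = np.clip(np.sign(g) - g, -1, 1)
```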

Combining the three cases above, we draw the following conclusions. 1) Compared with the situation under a single medium, a variety of media gives agents choices, thereby reducing the attraction of extreme opinions and leading to a dispersed opinion distribution. This is a counterintuitive result: the diversity of the media not only does not promote opinion polarization but actually helps to disperse public opinion. 2) The distribution of the media’s opinions plays an important role in influencing public opinion: with more than a single medium, media opinions whose density gradually increases around a particular opinion can concentrate agents on that opinion. The reason is that media holding opinions slightly away from the center act as agencies and bridges that guide greater numbers of agents toward the center.

Furthermore, it should be noted that increasing the number of media is not always effective, because a high density of similar media attracts agents with nearby opinions around them while isolating or repelling the remaining agents. Similar phenomena have been observed in several works, such as Pineda & Buendía (2015), Pulick et al. (2016), and Gargiulo & Gandica (2017). In these papers, if media pressure is low, media are able to attract most of the agents, but they have a self-defeating effect if the pressure is too high. Meanwhile, compared with case 2, increasing the number of media is relatively ineffective in case 3, because the stronger opinions and larger number of media accentuate the extrusion effect, in agreement with results previously obtained by Gonzalez-Avella et al. (2007) and Gargiulo et al. (2008).

Scenario 4

In real life, policy makers design campaigns for consciousness or education to improve public, health, or environmental awareness in the general public. Take the public health crisis caused by the COVID-19 pandemic as an example. Governments worldwide try to promote the right behaviors for protecting people from COVID-19, from social distancing and hand washing to wearing medical masks. Combining the knowledge obtained from the above simulations and discussions, we present two methods to improve epidemic prevention knowledge in the general public. In this Scenario, an opinion of \(-1\) means that the agent distrusts the public health institution and panics about the epidemic, while an opinion of \(1\) indicates that the agent trusts the public health institution and has obtained the correct knowledge of epidemic prevention from the propaganda. Therefore, for the purpose of greater health awareness, the Euclidean norm is given by:

\[ E = \sqrt {\sum\limits_{i = 1}^n {{{({x_i} - 1)}^2}} }\] \[(6)\]
The number of the media in this Scenario is set to 217.

Method 1: The opinions of the media are generated by the following arc-shaped function:

\[ G\left( k \right) = {\rm{sat}}\left( {\sqrt {4 - {{\left( {\frac{{2k}}{{{217}}}} \right)}^2}} - p} \right)\] \[(7)\]
where \(k=1,2,3 \cdots 217\), \(p\) is an adjustable parameter that determines the average opinion of the media, and \(\rm sat()\) is the saturation function defined as follows:
\[ {\rm{sat}}\left( x \right) = \left\{ \begin{array}{l} 1,{\rm{ if }}x \ge 1{\rm{ }}\\ x,{\rm{ if }} - 1 < x < 1\\ - 1,{\rm{ if }}x \le - 1 \end{array} \right.\] \[(8)\]
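A sketch of Equations 7 and 8 (numpy), generating the 217 media opinions for a given \(p\):

```python
import numpy as np

def arc_media_opinions(n_media=217, p=0.6):
    """Method 1: arc-shaped media opinions (Equation 7), saturated to [-1, 1] (Equation 8)."""
    k = np.arange(1, n_media + 1)
    raw = np.sqrt(4.0 - (2.0 * k / n_media) ** 2) - p
    return np.clip(raw, -1.0, 1.0)

G = arc_media_opinions(p=0.6)
print((G == 1).mean())    # the majority of media saturate at +1
print(G.min(), G.mean())  # a tail of media trails off toward -p
```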

The simulation results are depicted in Figure 10. As shown in Figure 10(c), most media hold opinions around \(1\) in the opinion space we construct, and a few outliers hold opinions spread over the range \(\left[ { - 1, + 1} \right]\). Although the average opinion of the media decreases with larger \(p\), average public opinion first increases and then decreases, as illustrated in Figure 10(a). This reflects the fact, discussed in Scenario 3, that in guiding public opinion the distribution pattern matters more than the average opinion of the media. In fact, by constructing the space of the media’s opinions, it is possible to achieve much better results than with one authoritarian medium. This conclusion can be obtained by comparing Figure 10(a) and Figure 6(a): average public opinion can be as high as \(0.7053\) with Method 1, whereas it only reaches \(0.5791\) under one authoritarian medium in Scenario 2. However, Figure 10(b) shows that the Euclidean norm is smallest when \(p = 0.6\), which means that public opinion is then closest to the target. Comparing the opinion distributions of the two situations (\(p = 0.3\) and \(0.6\)), most agents have opinions equal to \(1\) when \(p = 0.3\), but there are more agents near the positive end of the opinion space and fewer opponents when \(p = 0.6\). In practice, policy makers can choose between these two situations, or compromise, according to their purpose.

Method 2: A subset of the media is utilized as a set of guide media to improve agents’ health awareness, and these media change their opinions over time according to:

\[ G_k(t)=2\times5^{0.002t-1}-1\] \[(9)\]
As shown in Equation 9, the opinions of the guide media increase from \(-0.6\) to \(1\) over the 500 timesteps. In this Method, the guide media are selected randomly, the opinions of the other media are uniformly distributed in \(\left[ {-1,+1} \right]\), and the percentage of guide media is increased from 10% up to 100%. The simulation results are given in Figure 11. From Figures 11(a) and (b), it is evident that the average opinion increases and the Euclidean norm decreases with a higher percentage of guide media, until the percentage exceeds 90%. However, when all the media become guide media, public opinion declines instead. The reason is similar to what we discussed in Scenario 1: overpowering propaganda increases antipathy and generates more opponents. In fact, compared with Method 1, this method not only achieves higher average opinion values (up to \(0.8059\)), but most of the agents also reach the maximum opinion by the end of the simulation, with fewer opponents, as shown in Figure 11(d).
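The guide-media schedule and its mixture with the uniformly distributed background media could be sketched as follows (the exponent is written so that the opinion rises from \(-0.6\) at \(t=0\) to \(1\) at \(t=500\), as stated above; the random selection of guide media follows the text, while the variable names and the fixed 50% share are our own illustration):

```python
import numpy as np

def guide_opinion(t):
    """Equation 9: guide-media opinion rising from -0.6 at t = 0 to 1 at t = 500."""
    return 2.0 * 5.0 ** (0.002 * t - 1.0) - 1.0

rng = np.random.default_rng(0)
n_media, guide_share = 217, 0.5
n_guide = int(round(guide_share * n_media))
is_guide = np.zeros(n_media, dtype=bool)
is_guide[rng.choice(n_media, size=n_guide, replace=False)] = True

background = rng.uniform(-1, 1, n_media)     # non-guide media: uniform on [-1, 1]

def media_opinions(t):
    """Media opinions at timestep t: guide media share one rising opinion."""
    G = background.copy()
    G[is_guide] = guide_opinion(t)
    return G

print(guide_opinion(0), guide_opinion(500))  # -0.6 ... 1.0
```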

Finally, by constructing the opinions of media in space (Method 1) or in time (Method 2), the effect of campaigns for consciousness or education can be effectively improved. Nonetheless, too many media speaking with one voice may be counterproductive and generate more opponents. Of course, to achieve the best results (i.e., the highest average public opinion), at least 75% of the media in Method 1 and 90% in Method 2 need to broadcast information with the same opinion, which is unattainable in real life. Moreover, research (Castaldo et al. 2021) shows that even though people’s circadian rhythms changed and they spent more time on social media, there is no evidence that the mass of information increased during the lockdown. Therefore, too many media broadcasting similar information during a lockdown may backfire. Even so, a small proportion of media choosing opinions following our approach can still achieve noticeable results. Hence, the methods we propose can offer insights that help policy makers improve public, health, or environmental awareness in the general public.

Conclusions

In this paper, we proposed a novel model of opinion dynamics that considers degrees of influence and free choice among social agents. Individuals in a social network interacted with their acquaintances and with the media they chose; they were attracted to each other’s opinions when those opinions were similar enough, but pushed each other away when the opinions were too different. Furthermore, we introduced diverse media and their opinions, in different quantities and distributions, into the social network as external information sources. Finally, we proposed strategies to help policy makers improve the effects of campaigns for consciousness or education.

Our numerical simulations based on a real-world social network revealed the following conclusions. First, when given individual choices, people in open-minded societies are more susceptible to the media, while those in close-minded societies ignore the media and develop more extreme opinions. Furthermore, if every individual is open-minded, an overwhelming propaganda program can have the opposite of its intended effect. Secondly, compared with the situation under one authoritarian medium, the greater choice offered by diverse media can result in scattered opinions and reduce the harm that extreme opinions could cause. Although decentralized media do not lead to opinion polarization, we cannot afford to ignore the threat that polarized media may bring. Finally, taking into account that the distribution of public opinion can be affected by the distribution of the media, we designed two media programs with specific distributions and changing opinions. Simulations verified that these methods can effectively enhance awareness among the public.

In this work, we applied the model to a real-life social network to study the influence of diverse media and audience choices on public opinion. Nevertheless, our study has several limitations. In the future, we will introduce more realism into each aspect of the model and simulation; for instance, agents may have prejudices on specific topics or feel loyal to certain media, and there may be special individuals such as informed agents, inflexible agents, contrarians, and zealots. The simulations will not be limited to one network: different real-world networks can be used, making it possible to study the effect of social network structure on public opinion. Moreover, contemporary media compete with each other and adjust their opinions to attract audiences, which will also bring significant changes to the model. Nevertheless, reality is full of complexity and randomness, and we cannot simulate all real-world elements in one enormous simulation; doing so would undoubtedly lead to undecipherable results.

Considering that the study of public opinion is a combination of social science, computer science, and math, its future success will depend not only on modeling and simulation but also on creative and multidisciplinary methods. Therefore, new knowledge and technology, such as deep learning, can help us understand the real thoughts of the public, providing new perspectives in this field. Finally, such approaches will help us obtain more novel and practical explanations and conclusions.


Model Documentation

The agent-based model is written and simulated in Matlab. All related files containing the simulation model’s code are accessible at https://www.comses.net/codebase-release/19cbafdc-15ca-4fe1-a818-05f3d422bec7. Please read the "readme.md" file for more details.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61873239) and Zhejiang Provincial Soft Science Research Program (No. 2020C25032).

Appendix

A: How Do Opinion Leaders Affect Opinion Dynamics?

In this section, the role of individual influence in the formation of public opinion is discussed. We rewrite Equation 2 as:

\[ {\mu _{ij}} = 0.5\tanh \left[ {\frac{{{m_j}}}{{s{{\left( {x_j^t - x_i^t} \right)}^2}}}} \right]\] \[(10)\]

where the additional parameter \(s\) determines the distribution characteristics of individuals’ influence. An increase in \(s\) represents a larger gap between the influence levels of different individuals, which means that the influence of opinion leaders on public opinion becomes more pronounced.
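This generalization is a one-line change to the influence sketch given earlier (setting s = 20 recovers Equation 2):

```python
import math

def influence_s(m_j, x_i, x_j, s=20.0):
    """Equation 10: larger s shrinks tanh's argument, so differences in the degree m_j
    translate into larger relative differences in influence."""
    if x_i == x_j:
        return 0.5    # same assumption for a zero opinion gap as before
    return 0.5 * math.tanh(m_j / (s * (x_j - x_i) ** 2))
```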

The value of \(s\) is increased from 1 to 10000, and each agent’s horizon value lies between its convergence and repulsion thresholds. The other settings are the same as in Scenario 1, and the results are shown in Figure 12. It can be seen from Figure 12(a) that the average opinion of the public decreases slightly as \(s\) increases, but the randomness increases greatly after \(s\) exceeds 1000. Meanwhile, the participation ratio rises together with \(s\); in particular, when \(s\) exceeds 1000, the participation ratio can approach 0.9 in some cases, meaning that most individuals have reached a certain degree of consensus under the leadership of opinion leaders. Figure 12(d) depicts a possible distribution of public opinion when \(s = 10000\) (one simulation). Unlike the more scattered distribution in Figure 5(d), public opinion is clearly divided into two clusters: about 75% of the public converge to the media opinion of 0.6, and most of the rest are opponents with extreme opinions. Through observation of Figure 6(c), we can tell that some key agents play vital roles in forming public opinion. For instance, the agent numbered 296 always insisted on its own opinion of 0.2138 and eventually attracted some followers.

In summary, as the value of \(s\) increases, the differences in influence levels between individuals become larger. On the one hand, opinion leaders with greater influence can help the media spread their opinions; on the other hand, leaders who hold obstinate opinions may also completely change the trend of public opinion and bring great randomness to the final outcome. The main objective of this paper is to explore how diverse media influence public opinion; therefore, the parameter \(s\) is set relatively small to reduce the randomness that opinion leaders may bring.


B: Robustness Check with Different Repulsion Threshold Setting

In this Appendix section, we explore whether the distribution of \(\tau_i\) affects the general trends of the simulation results. Unlike in the previous simulations, the repulsion thresholds \(\tau_i\) here follow the normal distribution \({\cal N}\left( {1,{{0.3}^2}} \right)\) while satisfying \({\tau _i} \ge {\varepsilon _i}\). This setting limits extreme values of \(\tau_i\) and makes its average value smaller. The simulations are based on settings similar to those of Scenarios 1 to 3, except for the value of \(\tau_i\), and the results are presented in Figures 13 to 16.

Figure 13 shows public opinion under different horizons with normally distributed \(\tau_i\); compared with Figure 5(a) and Figure 6(c), the general trends are identical. However, careful inspection reveals the following differences from the previous simulation results. First, the average opinion values are smaller, especially when \(G = 1\), where they are only slightly larger than under \(G = 0.6\) when \({\eta _i} \ge 1.4\). Secondly, the non-monotonic behavior still exists but is less obvious unless the opinion of the medium is extreme enough. Both changes are not difficult to explain: the normally distributed \(\tau_i\) have limited extreme values and a smaller mean, so agents are more likely to repulse the opinion of the authoritarian medium. Meanwhile, the model becomes less sensitive to the parameter \(\eta_i\) because of the more frequent repulsive interactions, which makes the non-monotonic behavior less conspicuous.

In Figure 14, the simulation settings are the same as in cases 2 and 3 of Scenario 3 except for the setting of \(\tau_i\). The change patterns of the Euclidean norm are similar to those in Figures 8(b) and 9(b). Nonetheless, the smaller \(\tau_i\) brings more randomness: the Euclidean norm is even larger with 15 media than with a single medium, as shown in Figure 14(a). Even so, the norm continues to decrease as the number of media increases, and the marginal impact of adding media still diminishes as the number grows, just as in Scenario 3.

The public opinions under the two guiding methods are depicted in Figures 15 and 16. The curves’ shapes are basically identical to those in Figures 10 and 11, respectively; the differences are that the average opinion is smaller and the Euclidean norm is larger due to the more intensive repulsive interactions. Moreover, in this situation, Method 2 is more likely to create opposed public opinion, so its effect is worse than that of Method 1, because the smaller repulsion thresholds make it easier for agents to revolt against media broadcasting the same information. This result shows that policy makers should consider both the appropriate method and the social environment when designing propaganda.

In this Appendix section, we have tested a different setting of the repulsion threshold and obtained results similar to those of the previous simulations, demonstrating the robustness of the model.

References

ALVAREZ-GALVEZ, J. (2016). Network models of minority opinion spreading: Using agent-based modeling to study possible scenarios of social contagion. Social Science Computer Review, 34(5), 567–581. [doi:10.1177/0894439315605607]

AN, J., Quercia, D., & Crowcroft, J. (2013). Fragmented social media: A look into selective exposure to political news. Proceedings of the 22nd International Conference on World Wide Web. [doi:10.1145/2487788.2487807]

AXELROD, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226. [doi:10.1177/0022002797041002001]

BAMAKAN, S. M. H., Nurgaliev, I., & Qu, Q. (2019). Opinion leader detection: A methodological review. Expert Systems with Applications, 115, 200–222. [doi:10.1016/j.eswa.2018.07.069]

CASTALDO, M., Venturini, T., Frasca, P., & Gargiulo, F. (2021). The rhythms of the night: Increase in online night activity and emotional resilience during the spring 2020 Covid-19 lockdown. EPJ Data Science, 10(7). [doi:10.1140/epjds/s13688-021-00262-1]

CHEN, S., Glass, D. H., & McCartney, M. (2016). Characteristics of successful opinion leaders in a bounded confidence model. Physica A: Statistical Mechanics and Its Applications, 449, 426–436. [doi:10.1016/j.physa.2015.12.107]

CLIFFORD, P., & Sudbury, A. (1973). A model for spatial conflict. Biometrika, 60(3), 581–588. [doi:10.1093/biomet/60.3.581]

DEFFUANT, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(01n04), 87–98. [doi:10.1142/s0219525900000078]

DEGROOT, M. H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69(345), 118–121. [doi:10.1080/01621459.1974.10480137]

DING, Z., Dong, Y., Liang, H., & Chiclana, F. (2017). Asynchronous opinion dynamics with online and offline interactions in bounded confidence model. Journal of Artificial Societies and Social Simulation, 20(4), 6: https://www.jasss.org/20/4/6.html. [doi:10.18564/jasss.3375]

DONG, Y., Ding, Z., Chiclana, F., & Herrera-Viedma, E. (2017). Dynamics of public opinions in an online and offline social network. IEEE Transactions on Big Data, 1–11. [doi:10.1109/tbdata.2017.2676810]

DONG, Y., Zhan, M., Kou, G., Ding, Z., & Liang, H. (2018). A survey on the fusion process in opinion dynamics. Information Fusion, 43, 57–65. [doi:10.1016/j.inffus.2017.11.009]

FAN, K., & Pedrycz, W. (2015). Emergence and spread of extremist opinions. Physica A: Statistical Mechanics and Its Applications, 436, 87–97. [doi:10.1016/j.physa.2015.05.056]

FAN, K., & Pedrycz, W. (2016). Opinion evolution influenced by informed agents. Physica A: Statistical Mechanics and Its Applications, 462, 431–441. [doi:10.1016/j.physa.2016.06.110]

FAN, K., & Pedrycz, W. (2017). Evolution of public opinions in closed societies influenced by broadcast media. Physica A: Statistical Mechanics and Its Applications, 472, 53–66. [doi:10.1016/j.physa.2017.01.027]

FLACHE, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 2: https://www.jasss.org/20/4/2.html. [doi:10.18564/jasss.3521]

FRENCH Jr, J. R. P. (1956). A formal theory of social power. Psychological Review, 63(3), 181–194.

GARGIULO, F., & Gandica, Y. (2017). The role of homophily in the emergence of opinion controversies. Journal of Artificial Societies and Social Simulation, 20(3), 8: https://www.jasss.org/20/3/8.html. [doi:10.18564/jasss.3448]

GARGIULO, F., Lottini, S., & Mazzoni, A. (2008). The saturation threshold of public opinion: Are aggressive media campaigns always effective? Proceedings of ESSA 2008 Conference.

GONZALEZ-AVELLA, J. C., Cosenza, M. G., Klemm, K., Eguiluz, V. M., & Miguel, M. S. (2007). Information feedback and mass media effects in cultural dynamics. Journal of Artificial Societies and Social Simulation, 10(3), 9: https://www.jasss.org/10/3/9.html.

HEGSELMANN, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2: https://www.jasss.org/5/3/2.html.

HOLLEY, R. A., & Liggett, T. M. (1975). Ergodic theorems for weakly interacting infinite systems and the voter model. The Annals of Probability, 3(4), 643–663. [doi:10.1214/aop/1176996306]

HU, H., & Zhu, J. J. (2017). Social networks, mass media and public opinions. Journal of Economic Interaction and Coordination, 12(2), 393–411. [doi:10.1007/s11403-015-0170-8]

JAGER, W., & Amblard, F. (2004). Uniformity, bipolarization and pluriformity captured as generic stylized behavior with an agent-based simulation model of attitude change. Computational & Mathematical Organization Theory, 10(4), 295–303. [doi:10.1007/s10588-005-6282-2]

JAMIESON, K. H., & Cappella, J. N. (2008). Echo Chamber: Rush Limbaugh and the conservative Media Establishment. Oxford, MA: Oxford University Press.

JENKINS, H. (2004). The cultural logic of media convergence. International Journal of Cultural Studies, 7(1), 33–43. [doi:10.1177/1367877904040603]

KATZ, E., & Lazarsfeld, P. F. (1955). Personal Influence: The Part Played by People in the Flow of Mass Communications. Washington, DC: Free Press. [doi:10.4324/9781315126234]

LI, K., Liang, H., Kou, G., & Dong, Y. (2020). Opinion dynamics model based on the cognitive dissonance: An agent-based simulation. Information Fusion, 56, 1–14. [doi:10.1016/j.inffus.2019.09.006]

LIANG, H., Li, C. C., Dong, Y., & Jiang, Y. (2016). The fusion process of interval opinions based on the dynamic bounded confidence. Information Fusion, 29, 112–119. [doi:10.1016/j.inffus.2015.08.010]

LIU, W., Li, T., Cheng, X., Xu, H., & Liu, X. (2019). Spreading dynamics of a cyber violence model on scale-free networks. Physica A: Statistical Mechanics and Its Applications, 531, 121752. [doi:10.1016/j.physa.2019.121752]

MARTINS, A. C. R. (2008). Continuous opinions and discrete actions in opinion dynamics problems. International Journal of Modern Physics C, 19(4), 617–624. [doi:10.1142/s0129183108012339]

MARTINS, A. C. R. (2013). Trust in the CODA model: Opinion dynamics and the reliability of other agents. Physics Letters A, 377(37), 2333–2339. [doi:10.1016/j.physleta.2013.07.007]

MARTINS, T. V., Pineda, M., & Toral, R. (2010). Mass media and repulsive interactions in continuous-opinion dynamics. EPL (Europhysics Letters), 91(4), 48003.

MOYA, I., Chica, M., Saez-Lozano, J. L., & Cordon, O. (2017). An agent-based model for understanding the influence of the 11-M terrorist attacks on the 2004 Spanish elections. Knowledge-Based Systems, 123, 200–216. [doi:10.1016/j.knosys.2017.02.015]

NAGEL, S. R., Grest, G. S., & Rahman, A. (1984). Phonon localization and anharmonicity in model glasses. Physical Review Letters, 53(4), 368. [doi:10.1103/physrevlett.53.368]

NOLL, R. G., & Price, M. E. (1998). A communications cornucopia: Markle foundation essays on information policy. Washington, DC: Brookings Institution Press.

PARISER, E. (2011). The Filter Bubble: What The Internet is Hiding from You. London: Penguin UK.

PINEDA, M., & Buendía, G. (2015). Mass media and heterogeneous bounds of confidence in continuous opinion dynamics. Physica A: Statistical Mechanics and Its Applications, 420, 73–84. [doi:10.1016/j.physa.2014.10.089]

PULICK, E., Korth, P., Grim, P., & Jung, J. (2016). Modeling interaction effects in polarization: Individual media influence and the impact of town meetings. Journal of Artificial Societies and Social Simulation, 19(2), 1: https://www.jasss.org/19/2/1.html. [doi:10.18564/jasss.3021]

QUATTROCIOCCHI, W., Caldarelli, G., & Scala, A. (2014). Opinion dynamics on interacting networks: Media competition and social influence. Scientific Reports, 4(4938), 1–7. [doi:10.1038/srep04938]

QUATTROCIOCCHI, W., Conte, R., & Lodi, E. (2011). Opinions manipulation: Media, power and gossip. Advances in Complex Systems, 14(04), 567–586. [doi:10.1142/s0219525911003165]

ROCH, C. H. (2005). The dual roots of opinion leadership. The Journal of Politics, 67(1), 110–131.

SHERIF, M., & Hovland, C. I. (1961). Social Judgment: Assimilation and Contrast Effects in Communication and Attitude Change. New Haven, CT: Yale University Press.

STAUFFER, D. (2002). Sociophysics: The Sznajd model and its applications. Computer Physics Communications, 146(1), 93–98. [doi:10.1016/s0010-4655(02)00439-3]

SUNSTEIN, C. R. (2006). Infotopia: How Many Minds Produce Knowledge. Oxford, MA: Oxford University Press.

SZNAJD-WERON, K., & Sznajd, J. (2000). Opinion evolution in closed community. International Journal of Modern Physics C, 11(6), 1157–1165. [doi:10.1142/s0129183100000936]

TEN Broeke, G., van Voorn, G., & Ligtenberg, A. (2016). Which sensitivity analysis method should I use for my agent-based model? Journal of Artificial Societies and Social Simulation, 19(1), 5: https://www.jasss.org/19/1/5.html. [doi:10.18564/jasss.2857]

TRAUD, A. L., Mucha, P. J., & Porter, M. A. (2012). Social structure of Facebook networks. Physica A: Statistical Mechanics and Its Applications, 391(16), 4165–4180. [doi:10.1016/j.physa.2011.12.021]

UGANDER, J., Backstrom, L., Marlow, C., & Kleinberg, J. (2012). Structural diversity in social contagion. Proceedings of the National Academy of Sciences, 109(16), 5962–5966. [doi:10.1073/pnas.1116502109]

VĪKE-FREIBERGA, V., Däubler-Gmelin, H., Hammersley, B., & Maduro, L. M. P. P. (2013). A free and pluralistic media to sustain European democracy. Report available at: https://ec.europa.eu/digital-single-marketsites/digital-agenda/files/HLG%20Final%20Report.pdf.

WATTS, D. J., & Dodds, P. S. (2007). Influentials, networks, and public opinion formation. Journal of Consumer Research, 34(4), 441–458. [doi:10.1086/518527]

WEBSTER, J. G. (2009). Media Choice: A Theoretical and Empirical Overview. London, UK: Routledge.

WEEKS, B. E., Ksiazek, T. B., & Holbert, R. L. (2016). Partisan enclaves or shared media experiences? A network approach to understanding citizens’ political news environments. Journal of Broadcasting & Electronic Media, 60(2), 248–268. [doi:10.1080/08838151.2016.1164170]

ZHANG, A., Zheng, M., & Pang, B. (2018). Structural diversity effect on hashtag adoption in Twitter. Physica A: Statistical Mechanics and Its Applications, 493, 267–275. [doi:10.1016/j.physa.2017.09.075]