
Extremism, segregation and oscillatory states emerge through collective opinion dynamics in a novel agent-based model

Pages 42-80 | Received 03 Aug 2021, Accepted 09 Sep 2022, Published online: 09 Oct 2022

ABSTRACT

Using mathematics to model the evolution of opinions among interacting agents is a rich and growing field. We present a novel agent-based model that enhances the explanatory power of existing theoretical frameworks, corroborates experimental findings in social psychology, and reflects observed phenomena in contemporary society. Bespoke features of the model include: a measure of pairwise affinity between agents; a memory capacity of the population; and a generalized confidence bound called the interaction threshold, which can be dynamical and heterogeneous. Moreover, the model is applicable to opinion spaces of any dimensionality. Through analytical and numerical investigations, we study the opinion dynamics produced by the model and examine the effects of various model parameters. We prove that as long as every agent interacts with every other, the population will reach an opinion consensus regardless of the initial opinions or parameter values. When interactions are limited to be among agents with similar opinions, segregated opinion clusters can be formed. An opinion drift is also observed in certain settings, leading to collective extremisation of the whole population, which we quantify using a rigorous mathematical measure. We find that collective extremisation is likely if agents cut off connections whenever they move away from the neutral position, effectively isolating themselves from the population. When a population fails to reach a steady state, oscillations of a neutral majority are observed due to the influence exerted by a small number of extreme agents. By carefully interpreting these results, we posit explanations for the mechanisms underlying socio-psychological phenomena such as emergent cooperation and group polarization.

1. Introduction

Creating mathematical models to explain the dynamics of opinions is a research endeavor dating back to French (1956), and one that remains a frontier today (Castellano et al., 2009; Flache et al., 2017; Noorazar et al., 2020). By quantifying the interconnections among a social group, opinion dynamics models provide unique insights into the stimuli behind individuals evolving their views, and reveal mechanisms by which the group forms a consensus or fails to do so. This paper puts forward a new mathematical model of the emergence of extremism and segregation through opinion dynamics in a closed community. Interpreting the term ‘opinion’ broadly, we design the theory to be applicable to a variety of contexts including cultural evolution, language dynamics, economic games, animal societies, and so on.

One of the foundational models of opinion dynamics is due to DeGroot (1974), where a group of agents iteratively update their positions to weighted averages of other agents’ positions. Extending DeGroot’s model, Friedkin and Johnsen (1990) incorporate exogenous variables and other effects to simulate conflict and conformity behaviors. The DeGroot-Friedkin paradigm, which represents opinion updates as linear maps, remains influential today. In an insightful nonlinear generalization, Dandekar et al. (2013) show that polarization can be a consequence of biased assimilation, a well-known psychological phenomenon where one is influenced most strongly by people with similar views (Lord et al., 1979). An important development of the theory introduces the effect of stubbornness: by allowing agents to have some attachment to their initial beliefs, it is found that the more stubborn agents hold more social power over time (Tian et al., 2021).

Bounded Confidence Models generalize the DeGroot-Friedkin paradigm by allowing each agent to interact only with agents whose opinions fall within some ‘confidence bound.’ Hegselmann and Krause (2002), for example, model the process of opinion fragmentation by updating to the average opinions of agents within the confidence bound. The model by Deffuant et al. (2000) has agents interacting in pairs and only adjusting their opinions if they fall within each other’s confidence bound, a process which leads to clustering. Other developments of Bounded Confidence Models have accounted for various factors that affect opinion dynamics, in order to align the models with social realities. Examples of such factors include group pressure and in-group favoritism (Alizadeh et al., 2015; Cheng & Yu, 2019); social feedback (Banisch & Olbrich, 2019); cultural complexity (Flache & Macy, 2011; Turner & Smaldino, 2018); repulsion (Huet et al., 2008; Stadtfeld et al., 2020); private opinions (Ye et al., 2019); and randomness in the confidence bounds (Kurahashi-Nakamura et al., 2016).

The Voter Model by Holley and Liggett (1975) is distinct in character from Bounded Confidence Models; it considers a set of ‘voters’ who change their opinions at random to that of one of their neighbors, without accounting for the opinion they currently hold. Similarly, in the Neutral Model by Bentley et al. (2011), agents copy an existing opinion at random from the population or, with a low probability, invent a new opinion. To augment the Voter Model, the concept of ‘inertia’ has been developed, allowing voters to have conviction in their previously held opinions (Stark et al., 2008a, 2008b). Inertia has subsequently been applied to the Noisy Voter Model which, when combined with supportive interactions, produces strong drifts of opinions (Artime et al., 2018; Kononovicius, 2021).

Elsewhere, models with adaptive networks have been shown to promote the formation of echo chambers (Benatti et al., 2020); Weighted Balance Theory encompassing multiple weighted attitudes has been validated against American National Election Survey data (Schweighofer et al., 2020); a statistical physics approach has successfully integrated data from the 2008 US presidential election (Galesic & Stein, 2019); various graph theoretical approaches have been developed to investigate opinion convergence (M. Cao et al., 2008; Hendrickx et al., 2014; Nedić & Liu, 2016; Ren & Beard, 2005); and models with memory-based connectivity have been shown to produce opinion clusters (Mariano et al., 2020).

In this paper, we develop a novel agent-based model that is nonlinear and deterministic; it incorporates and improves elements from the DeGroot-Friedkin, Bounded Confidence, Voter, and other modeling frameworks, creating a new theory with significantly enhanced explanatory power. The model unifies and enhances many of the aforementioned socio-psychological factors or phenomena, for instance: the ‘stubbornness is power’ effect, biased assimilation, a dynamic confidence bound, inertia-induced opinion drift, and memory-based connectivity. Specific features, which are either important inclusions or upgrades from existing models, are as follows.

Many Bounded Confidence Models allow each agent to hold only one opinion. We propose that each agent holds and communicates multiple opinions at a time. Equivalently, we say that each agent holds an opinion with multiple components, which will be represented as components of a multi-dimensional vector. Two agents will interact if and only if the Euclidean distance between their opinions falls within some prescribed bound. Instead of taking discrete values as in Voter Models, the opinion vectors will take continuous values, allowing a greater variety of simulation outcomes to emerge. In a move akin to the introduction of inertia to Voter Models, we will define the concept of ‘memory capacity,’ describing the number of past states of the population that each agent takes into account when deciding whether or not to interact with another. The resulting non-Markovian process of opinion updating bears a stronger resemblance to real-world decision-making than its Markovian counterparts, and is reducible to the Markovian process if the memory capacity is minimized.

Concepts similar to this memory capacity have been examined in a small number of studies from which we have taken inspiration (Anderson & Ye, 2019). Most notably, the network-based model by Mariano et al. (2020) includes a memory state variable that reflects an agent’s opinion history and a parameter that controls how quickly an agent ‘forgets’ the past. In a similar fashion, the connectivity of agents in our model is also dependent upon the history of opinions, but the model’s realism is now improved by allowing the graph of connectivity to be directed: agent i need not influence agent j even if j influences i, which is a sensible feature of social interactions in the real world.

Another field of research from which we have taken inspiration is the modeling of collective animal motion. The generalizability of collective motion models to the field of opinion dynamics has previously been addressed (M. Cao et al., 2008; Vicsek & Zafeiris, 2012), drawing parallels between convergence properties of self-synchronizing animal systems and quorum-finding mechanisms in social groups. The model of spontaneous order in bird flocks by Cucker and Smale (2007), in particular, has strongly influenced this paper (see Section 2).

This paper’s scope and structure are as follows. We outline the principal ideas behind our model and present its mathematical formulation in Section 2, followed by a key theorem on consensus formation. A novel concept of pairwise ‘affinity’ will be introduced, which describes how closely aligned two agents are in their recent history and is parameterized by the aforementioned memory capacity. In Section 3, we use numerical simulations to explore the phenomena of clustering, opinion drift, and extremisation, and discuss real-world implications of the simulation results in the context of cooperative networks. We also examine the natural emergence of extreme views from the system when each agent’s threshold for interaction evolves with their opinions. Section 3.4 proves that the model admits periodic solutions under certain conditions that we specify explicitly, which is a particularly intriguing feature. The oscillatory dynamics that arise when the system fails to converge are investigated in detail. Finally, we will draw conclusions and discuss future directions in Section 4.

2. The model and preliminary analysis

In the model, an ‘opinion’ has any number of components, which will be represented as coordinates in D-dimensional space. For example, an agent’s preference for sweet or savory popcorn can be one dimension, while their conservative or liberal politics may be another. It is assumed that, in general, an agent evolves their entire opinion – all dimensions included – as a whole, rather than evolving the components independently. Thus, the opinion space is ℝ^D, with the origin representing the opinion that is neutral in every dimension. The Euclidean distance from the origin to an opinion is a measure of that opinion’s ‘extremeness.’

We consider N ≥ 2 agents whose opinions are represented by D-dimensional real-valued vectors, v1(t), v2(t), …, vN(t), where t = 0, 1, 2, … are discrete times. To update their opinion at each time, every agent tries to align with a select group of other agents. More precisely, every pair of agents, i and j, share an affinity, aij(t), which we define in Section 2.1; every agent has a threshold, ρi(t), and agent i will try to align with agent j at time t if and only if aij(t) > ρi(t). The model is therefore of the bounded confidence type, except pairwise influence is determined not only by the opinion difference between the pair, but by the more sophisticated measure of pairwise affinity which involves a collective memory capacity of the population. We now proceed to detail the mathematical model in Section 2.1, before proving a result on consensus formation in Section 2.2.

2.1. Mathematical formulation

A vital element of the model is the pairwise affinity, aij(t), between agents i and j, which we require to possess several properties. Firstly, the affinity must be symmetric (aij=aji). Secondly, it should always take positive values no larger than 1, with higher values indicating that i and j are more ‘alike.’ Thirdly, the affinity should depend not only on the opinion difference between i and j at the current time, but also on a recent history of opinion differences. This memory property represents an important generalization from existing bounded confidence models. For an affinity measure that satisfies all these requirements, we take

(1) $a_{ij}(t) = \left[\, 1 + \sum_{\tau=0}^{t} w(\tau;t,\mu)\, \lVert v_j(\tau) - v_i(\tau) \rVert^2 \right]^{-1/2},$

where w(τ;t,μ) is a weight function given by

(2) $w(\tau;t,\mu) = \begin{cases} 1, & \text{if } \tau > t - \mu, \\ 0, & \text{if } \tau \le t - \mu, \end{cases}$

and we have introduced the integer parameter μ ≥ 1, which we call the memory capacity, representing the number of steps (including the present step) that every agent takes into account when calculating affinities. If the current time t < μ, then the sum is over all time-steps. If μ ≤ t, then w assigns unit weight to times from t − μ + 1 to the present while assigning zero weight to all prior times. We say that opinion differences prior to t − μ + 1 ‘drop out’ of memory. The sum in the denominator of aij is a weighted sum of the squared opinion differences over the most recent μ time-steps, where ∥·∥ denotes the Euclidean norm, so ∥x − y∥ denotes the Euclidean distance between x and y. Although we have chosen the Euclidean (l2) norm as the distance measure, it is worth noting that other choices of norm would also be suitable. In particular, the convergence results in Section 2.2 remain true for the l1 and l∞ norms (see details in Lemma 2.1 and Proposition 2.2), meaning that the same consensus behavior would be observed given an alternative norm. We choose the Euclidean norm as it provides the most moderate measure of distance of the three candidates since, in general, ∥·∥∞ ≤ ∥·∥2 ≤ ∥·∥1.
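The affinity of Equations (1)–(2) is simple to compute. The following sketch is our own illustration in Python (the paper's released code is MATLAB), with unit weight applied to the most recent μ opinion snapshots and zero weight to the rest; the names `affinity`, `history_i`, `history_j` and `mu` are ours.

```python
# Sketch of the pairwise affinity, Equations (1)-(2). Our own illustration
# (the released code is MATLAB); names are ours.
import numpy as np

def affinity(history_i, history_j, mu):
    """a_ij(t) from two opinion histories of shape (t+1, D).

    Rows are opinions at times 0..t; only the most recent mu rows
    receive unit weight, as per the weight function of Equation (2).
    """
    hi = np.asarray(history_i, dtype=float)
    hj = np.asarray(history_j, dtype=float)
    window = slice(max(0, len(hi) - mu), len(hi))   # times tau > t - mu
    sq_dists = np.sum((hj[window] - hi[window]) ** 2, axis=1)
    return 1.0 / np.sqrt(1.0 + sq_dists.sum())
```

With identical histories the affinity is 1; a persistent unit disagreement gives 1/√2 under μ = 1 and 1/√3 under μ = 2.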

The choice of weight function is one of the simplest, yet it fully captures the affinity’s memory property. The addition of 1 in the denominator of aij ensures that the affinity never exceeds 1, and aij = 1 (maximum affinity) if and only if either i = j (one has maximum affinity with oneself), or i and j have held exactly the same opinion over the most recent μ time-steps, including the current time. The fact that aij > 0 means no two agents ever share exactly zero affinity, no matter how much their opinions differ. Moreover, if two agents i and j hold their opinions fixed, with vi(τ) ≠ vj(τ), then their affinity aij decreases over time, representing the tendency for people to become less connected if they keep disagreeing with each other.

With the affinity measure in place, and with every agent having some threshold, ρi (to be defined), we let the opinions vi(t) in the population evolve as follows.

(3) $v_i(t+1) = v_i(t) + \frac{1}{Q_i(t;\rho_i)} \sum_{j=1}^{N} c_{ij}(t;\rho_i)\, a_{ij}(t)\, \big( v_j(t) - v_i(t) \big) \quad \text{for all } i,$

where

(4) $c_{ij}(t;\rho_i) = \begin{cases} 1, & \text{if } a_{ij}(t) > \rho_i, \\ 0, & \text{if } a_{ij}(t) \le \rho_i, \end{cases}$
(5) $Q_i(t;\rho_i) = \sum_{j=1}^{N} c_{ij}(t;\rho_i).$

We say that i ‘listens to’ j (or j influences i) at time t if cij(t;ρi) = 1, and Equation (4) expresses the fact that i listens to j if and only if their affinity exceeds i’s threshold. Thus, Qi(t;ρi) is simply the number of agents that i listens to (including i itself, since every agent is self-influencing with aii = 1). According to Equation (3), the amount by which agent i adjusts their opinion at each time is a weighted average of relative opinions from i to all agents that i listens to, with weights determined by affinities. By construction, every agent’s self-confidence, 1 − (1/Qi)∑j≠i cij aij, and all the other weights, (1/Qi) cij aij for j ≠ i, add up to 1, meaning that the system’s transition matrix is right-stochastic. Note that cij may not be symmetric: aij > ρi does not imply aji > ρj, since ρi and ρj may differ (even though aji = aij). In other words, the fact that i listens to j does not necessarily mean j listens to i. Note also that if ρi = 0 for all i, then in the infinite-memory limit (μ → ∞), the model becomes analogous to that of Cucker and Smale (2007), which investigates the synchronization of bird flocks.
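One synchronous update of Equations (3)–(5) can be sketched as follows. This is a minimal Python illustration of ours, not the published MATLAB implementation; it assumes the affinity matrix for the current time has already been computed, and the name `update_step` and its argument names are ours.

```python
# One synchronous step of Equation (3) with listening rule (4) and count (5).
# A minimal sketch of ours, not the published MATLAB implementation.
import numpy as np

def update_step(V, A, rho):
    """V: (N, D) opinions; A: (N, N) affinities a_ij(t); rho: (N,) thresholds.

    Returns the (N, D) opinions at time t + 1.
    """
    C = (A > rho[:, None]).astype(float)   # c_ij = 1 iff a_ij > rho_i
    np.fill_diagonal(C, 1.0)               # a_ii = 1, so i always listens to i
    Q = C.sum(axis=1)                      # Q_i: number of agents i listens to
    rel = V[None, :, :] - V[:, None, :]    # rel[i, j] = v_j - v_i
    drift = ((C * A)[:, :, None] * rel).sum(axis=1)
    return V + drift / Q[:, None]
```

For two agents at opinions 0 and 1 with affinity 1/√2 and zero thresholds, one step moves each agent a distance (1/√2)/2 toward the other, so the mean opinion is preserved.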

In this paper we consider two ways to assign thresholds to the population:

(1) Every agent is equally susceptible to change at all times: ρi(t) = ρ for all i and all t, where 0 ≤ ρ < 1 is some prescribed constant which we call the universal threshold. This is the simplest way to assign thresholds.

(2) Every agent’s threshold evolves over time, in such a way that the more extreme their opinion, the higher their threshold and hence the less susceptible they are to change:

(6) $\rho_i(t) = \rho + (1 - \rho)\left( 1 - e^{-\alpha \lVert v_i(t) \rVert} \right).$

This assumption is grounded in empirical observations (Kozitsin, 2020; Lord et al., 1979; Tian et al., 2021). In Equation (6), ρi is a strictly increasing function of the extremeness ∥vi(t)∥ of agent i’s opinion, and α > 0 is a constant reinforcement rate determining how sharply one’s threshold increases as one’s opinion becomes more extreme (see Figure 1(a)): the larger α is, the more sharply the threshold rises. Note that ρi → 1 as ∥vi(t)∥ → ∞, and ρi = ρ if ∥vi(t)∥ = 0. We therefore interpret ρ as a baseline threshold: the threshold that one has when one’s opinion is entirely neutral. Note also that in the limit α → 0, we recover the uniformly constant ρi(t) = ρ.
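Equation (6) is straightforward to implement. The sketch below (our own naming) makes the limiting behavior explicit: the threshold equals the baseline ρ for a fully neutral opinion and tends to 1 as the opinion becomes arbitrarily extreme.

```python
# Sketch of the evolving threshold of Equation (6); function name is ours.
import numpy as np

def threshold(v_i, rho, alpha):
    """rho_i = rho + (1 - rho) * (1 - exp(-alpha * ||v_i||))."""
    extremeness = np.linalg.norm(v_i)   # distance from the neutral origin
    return rho + (1.0 - rho) * (1.0 - np.exp(-alpha * extremeness))
```

A neutral agent keeps the baseline (e.g., `threshold([0, 0], 0.5, 0.2)` returns 0.5), and the threshold increases toward 1 with the opinion's extremeness, more sharply for larger α.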

Figure 1. An example of the interplay between agents’ thresholds, pairwise affinities, and pairwise influences. Panel (a) shows the evolving threshold ρi of some agent i as a function of ∥vi∥, as per Equation (6), with reinforcement rate α = 0.1, 0.2, 0.4, 0.8. Panels (b-e) are snapshots of the connections among four agents with two-dimensional opinions (0,0), (1,0), (1,1) and (0,1), which we assume, for this illustration, have been held fixed for at least μ time-steps. The arrows denote the three distinct types of pairwise influence. The numbers that annotate the arrows are the pairwise affinities calculated by Equation (1) to 3 d.p. Panels (b,c) have universal threshold ρi = 0.5 for all i; panels (d,e) have evolving thresholds with baseline ρ = 0.5 and reinforcement rate α = 0.2, calculated by Equation (6) to 3 d.p. The memory capacity is μ = 1 in (b,d) and μ = 2 in (c,e).


Recall that the pairwise affinity aij(t) decreases over time unless i and j adjust their opinions to align with each other. The implication of this fact at the population level is that, unless ρi(t) = 0 for all i and all t, the network of interpersonal influences becomes less connected over time as agents fail to ‘come together’ in their opinions. Equivalently, given two systems with identical opinion histories but different memory capacities, the system with the larger memory capacity has a less connected network of influences. An illustration of this phenomenon is presented in Figure 1. Assume that agents 1–4 have held their two-dimensional opinions fixed (e.g., due to external influences) for at least μ steps, at (0,0), (1,0), (1,1) and (0,1), respectively. Then, the Euclidean distance between any pair’s opinions is fixed at either 1 or √2. Calculating the pairwise affinities by Equation (1), we find aij = 1/√2 or 1/√3 if μ = 1, and aij = 1/√3 or 1/√5 if μ = 2. Thus, in case ρi = 0.5 for all i: if μ = 1 then everyone listens to everyone else, since 1/√2 > 1/√3 > 0.5 (Figure 1(b)); if μ = 2 then the most distant pairs of agents, {1,3} and {2,4}, do not communicate, since 1/√3 > 0.5 > 1/√5 (Figure 1(c)). On the other hand, in case individual thresholds evolve from baseline ρ = 0.5 according to Equation (6) with reinforcement α = 0.2: if μ = 1 then {2,4} do not communicate while {1,3} communicate uni-directionally, the symmetry being broken by the heterogeneous thresholds (Figure 1(d)); if μ = 2 then the only communications are agent 1 listening to agents 2 and 4, since no other affinity exceeds the relevant threshold (Figure 1(e)). This simple example shows that the connectivity of the system depends sensitively on multiple factors: the opinions, the thresholds and the memory capacity.
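The affinities annotating Figure 1 can be checked directly from Equation (1). In the small illustration of ours below, the four agents' opinions are held fixed, so the memory window contributes μ identical squared distances; adjacent pairs (distance 1) and diagonal pairs (distance √2) give the values 0.707, 0.577 and 0.447 (to 3 d.p.) that annotate the arrows of Figure 1.

```python
# Checking the affinities annotating Figure 1 directly from Equation (1);
# a small illustration of ours.
import numpy as np

opinions = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])

def fixed_affinity(vi, vj, mu):
    # Opinions held fixed for >= mu steps, so the memory window contributes
    # mu copies of the same squared distance.
    return 1.0 / np.sqrt(1.0 + mu * np.sum((vj - vi) ** 2))

for mu in (1, 2):
    adjacent = fixed_affinity(opinions[0], opinions[1], mu)   # distance 1
    diagonal = fixed_affinity(opinions[0], opinions[2], mu)   # distance sqrt(2)
    print(mu, round(adjacent, 3), round(diagonal, 3))
# mu = 1: 0.707 and 0.577; mu = 2: 0.577 and 0.447
```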

In the language of complexity theory, the system is ‘simple’ if the universal threshold ρ is close to 0 or 1, being always highly connected in the former case and always barely connected in the latter; the complexity is maximized when ρ takes intermediate values, since the connectivity can then fluctuate greatly over time, as the example above demonstrates. In the simplest case, ρi(t) = 0 (meaning everyone listens to everyone else all the time), we establish analytically in Section 2.2 that any population is guaranteed to form a consensus over time, meaning the vi(t) converge to some common value for all i. In any other case (ρi(t) = ρ > 0 or Equation (6)), the system is not analytically tractable, so we investigate the opinion dynamics using numerical methods in Section 3.

2.2. Sufficient conditions for convergence and for consensus

We say that the system converges to a steady state if and only if, for all i, there exists some constant vi* such that vi(t) → vi* as t → ∞. We say that the system converges to consensus if and only if there exists some common constant v* such that, for all i, vi(t) → v* as t → ∞. Whenever the system converges to a steady state but not to consensus, we say that the system converges to segregation. In this section, we consider the model (1)–(5) with some universal threshold ρi(t) = ρ, in which case we show that the system always converges to a steady state, and establish the following sufficient condition for consensus: ρ < ρ*, where ρ* is a critical value we will determine explicitly.

In Lorenz (2005), it was shown that any system V(t+1) = M(t)V(t), where V ∈ ℝ^{N×D}, converges to a steady state if three conditions are met: all agents have positive self-confidence (mii > 0); confidence is mutual (mij > 0 if and only if mji > 0); and there exists some δ > 0 such that the time-sequence Mt, defined by Mt = min_{i,j}{mij(t) : mij(t) > 0}, satisfies Mt > δ. By expressing (3) in matrix form (putting the opinion of agent i in row i of V), we find the diagonal elements

(7) $m_{ii} = 1 - \frac{1}{Q_i} \sum_{j \neq i} c_{ij}\, a_{ij} \;\ge\; \frac{1}{Q_i} \;\ge\; \frac{1}{N} \;>\; 0,$

implying that the current model meets the “positive self-confidence” condition. For the “mutual confidence” condition, we look at the off-diagonal elements

(8) $m_{ij} = \frac{1}{Q_i}\, c_{ij}\, a_{ij} \;\ge\; 0, \qquad i \neq j,$

and note that equality holds if and only if cij=0. Since ρi=ρ for all i by assumption and aij=aji by definition, we deduce from (4) that cij=cji, and therefore mij=0 if and only if mji=0. Lorenz’s second condition is thus met.

To show that Lorenz’s third and final condition is met, it suffices to find a positive lower bound for all positive off-diagonal mij for all time. To that end, note that

(9) $m_{ij} \ge \frac{1}{N}\, a_{ij} \quad \text{for all } i \neq j \text{ such that } m_{ij} > 0.$

We therefore seek some constant ρ* > 0 such that aij ≥ ρ* for all i, j and all time, and we do so through the following lemma.

Lemma 2.1. Consider the system V(t+1) = M(t)V(t), where t ∈ {0, 1, 2, …}, V ∈ ℝ^{N×D} and M ∈ ℝ^{N×N}. Let Rt = max_i ∥ri(V(t))∥, where ri(V) denotes the i-th row of V and ∥·∥ the Euclidean norm. If ∥M(t)∥∞ ≤ 1 for all t, then the sequence (Rt)_{t≥0} is non-increasing.

Proof. It is an established fact of linear algebra that Rt equals the (2,∞)-norm of V(t) (Lewis, 2010):

(10) $\max_i \lVert r_i(V(t)) \rVert = \sup\left\{ \frac{\lVert V(t)x \rVert_\infty}{\lVert x \rVert_2} : x \neq 0 \right\} = \lVert V(t) \rVert_{2,\infty}.$

Combining the submultiplicativity of induced norms:

(11) $\lVert MV \rVert_{2,\infty} \le \lVert M \rVert_{\infty}\, \lVert V \rVert_{2,\infty} \quad \text{for any } M \text{ and } V \text{ where } MV \text{ is defined},$

with the assumption that ∥M(t)∥∞ ≤ 1, yields

(12) $\lVert V(t+1) \rVert_{2,\infty} = \lVert M(t)V(t) \rVert_{2,\infty} \le \lVert V(t) \rVert_{2,\infty},$

as required. □
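Lemma 2.1 is easy to spot-check numerically. The sketch below (ours) applies random row-stochastic matrices, for which the maximum absolute row sum ∥M∥∞ equals 1, and verifies that the maximum Euclidean row norm Rt never increases.

```python
# Numerical spot-check of Lemma 2.1 (our own sketch): row-stochastic
# transition matrices have ||M||_inf = 1, so the maximum Euclidean row
# norm R_t of V(t) should never increase.
import numpy as np

rng = np.random.default_rng(0)
N, D, steps = 8, 3, 50
V = rng.normal(size=(N, D))
R = [np.linalg.norm(V, axis=1).max()]
for _ in range(steps):
    M = rng.random((N, N))
    M /= M.sum(axis=1, keepdims=True)   # nonnegative rows summing to 1
    V = M @ V
    R.append(np.linalg.norm(V, axis=1).max())
assert all(R[k + 1] <= R[k] + 1e-12 for k in range(steps))
```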

Note that Lemma 2.1 is a general result applicable to any system whose transition matrix has absolute row sums no larger than 1. Note also that Lemma 2.1 still holds if the l1 or l∞ norm were used to define the pairwise affinity and hence Rt. This is due to the following facts, analogous to Equation (10) (Lewis, 2010): max_i{∥ri(V(t))∥1} = ∥V(t)∥∞,∞ and max_i{∥ri(V(t))∥∞} = ∥V(t)∥1,∞. To apply Lemma 2.1 to the current model (1)–(5), we simply let ri(V(t)) = vi(t) and let M be the matrix with elements given by (7)–(8). Then, Rt = max_i ∥vi(t)∥ is the maximum Euclidean magnitude of all opinions at time t. Note that Lemma 2.1 implies the set of opinions is always ‘shrinking’ in the sense that Rt is non-increasing, regardless of ρi(t). A useful interpretation of this result is that the opinions ‘shrink’ because the agents interact under attractive forces only, with no repulsive forces involved. Now, let

(13) $\rho^\ast = \left[\, 1 + 4\mu R_0^2 \,\right]^{-1/2};$

then for all i, j, t, we have ∥vj(t) − vi(t)∥ ≤ 2R0 and hence

(14) $a_{ij}(t) \ge \rho^\ast,$

which implies that Lorenz’s final condition for convergence is met. We are now ready to state the main result of the section.

Proposition 2.2. Consider a population of agents i = 1, 2, …, N, evolving their opinions vi(t) ∈ ℝ^D according to the model (1)–(5), with some universal threshold ρi(t) = ρ for all i, t.

(1) Given any initial condition, the opinions converge to some steady state: lim_{t→∞} vi(t) = vi* for all i.

(2) Given any initial condition and any ρ < ρ*, where ρ* is given by (13) with R0 = max_i ∥vi(0)∥, the opinions converge to a consensus: lim_{t→∞} vi(t) = v* for some common v*. Moreover,

(15) $v^\ast = \frac{1}{N} \sum_{i=1}^{N} v_i(0)$

is the initial mean opinion of the population.

Proof. Part (1) is already proven, by showing that the system meets all of Lorenz’s convergence criteria. For part (2), it follows immediately from (14) and ρ < ρ* that aij(t) > ρ for all i, j, t, which implies cij(t) = 1 for all i, j, t. That is, every agent listens to every other for all time. The system therefore simplifies to

(16) $V(t+1) = V(t) + \frac{1}{N} \big( A(t) - B(t) \big) V(t),$

where ri(V)=vi, A is the N×N matrix with elements aij, and B is the N×N diagonal matrix with elements

(17) $b_{ii} = \sum_{j=1}^{N} a_{ij}, \qquad b_{ij} = 0 \text{ if } i \neq j.$

Define the initial mean matrix, V0, with rows

(18) $r_1(V_0) = r_2(V_0) = \cdots = r_N(V_0) = \frac{1}{N} \sum_{i=1}^{N} r_i(V(0)).$

A straightforward calculation shows that AV0 = BV0, which means that V(t) = V0 is a steady state of the system. Since convergence is already established, and since limits in ℝ^{N×D} are unique, it follows that V0 is precisely the state to which the system converges. □
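Proposition 2.2 can be verified numerically in the simplest setting. The sketch below (our own minimal Python re-implementation, not the released MATLAB code) runs the model with universal threshold ρ = 0 and memory capacity μ = 1, so every agent listens to every other, and checks that the opinions approach the initial mean of Equation (15).

```python
# Numerical check of Proposition 2.2 in the simplest setting (rho = 0,
# mu = 1): every agent listens to every other, and opinions converge to
# the initial mean, Equation (15). Our own minimal re-implementation.
import numpy as np

rng = np.random.default_rng(1)
N, D = 20, 2
V = rng.normal(size=(N, D))
target = V.mean(axis=0)                   # initial mean opinion
for _ in range(500):
    diff = V[None, :, :] - V[:, None, :]  # diff[i, j] = v_j - v_i
    A = 1.0 / np.sqrt(1.0 + np.sum(diff ** 2, axis=2))  # Equation (1), mu = 1
    V = V + (A[:, :, None] * diff).sum(axis=1) / N      # Equation (3), c_ij = 1
assert np.abs(V - target).max() < 1e-8
```

Because the affinity matrix is symmetric and every agent listens to all N agents, the mean opinion is preserved at each step, which is why the consensus lands exactly on the initial mean.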

Recall that Lemma 2.1 holds if the system were defined using the l1 or l∞ norm instead of the Euclidean norm; thus, the same can be said of Proposition 2.2. To apply Proposition 2.2, consider initial opinions in ℝ^{N×D} where each column consists of N values sampled from an independent standard normal distribution (as is done in Section 3). In this case, it is reasonable to assume that all initial opinions fall within a sphere of radius R0 = 3√D. Proposition 2.2 then implies that consensus is guaranteed whenever

(19) $\rho < \left[\, 1 + 36\mu D \,\right]^{-1/2} \approx \frac{1}{6\sqrt{\mu D}}.$

To conclude the section, we note that convergence criteria for opinion dynamics systems other than Lorenz (2005) exist in the literature, for example, in Blondel et al. (2005). Hendrickx et al. (2014) proved general results concerning the existence of models that guarantee average consensus, using a graph-theoretic approach. Here, Proposition 2.2 can be stated in graph-theoretical terms because a graph that represents the agents as nodes and pairwise influences as edges is indeed connected and undirected if ρ < ρ*. Overall, the model with ρ < ρ* provides a mechanism for how an interacting social group can find common ground from initial disagreements, through a process of collective assimilation.

3. Numerical simulations: results and discussions

In this section, we investigate how the opinion dynamics are affected by the model parameters, focusing mainly on the threshold, ρi. Two cases are considered: ρi = ρ constant for all time (Section 3.2), and ρi evolving with individual opinions according to Equation (6) (Section 3.3). We also examine the effects of the dimension (D) of the opinion vector and the memory capacity (μ).

3.1. Methodology

Numerical simulations were run in MATLAB (code available at https://github.com/bmstokes/belief_dynamics/releases/tag/v.1.0.0 under the Mozilla Public License 2.0). Every simulation is for N = 100 agents. The D components of every initial opinion are drawn randomly from D independent standard normal distributions. We adopt this simple initialization method on the basis of its universality: in the absence of any specific context, it is reasonable to consider a normally distributed initial population of opinions, which can then be standardized to enable comparison across the dimensions. We do, however, acknowledge that some real-world scenarios may not be well represented by this initial sampling; we will present some examples in Section 4 and discuss how they can be investigated using the model in future work. The simulation results presented here serve to demonstrate the power of the model: rich and varied phenomena emerge from simple initial conditions and hold strong explanatory power, as we will demonstrate in the following sections. Similarly rich phenomena are bound to emerge from more complex initial states, which any user of the model is always free to specify. For the present study, under each value of D, we generated 1000 distinct initial states; and for each set of other parameter values (some combination of μ, ρ, α; see Table 1), we ran 1000 simulations using that common set of initial states, allowing us to control for the parameters μ, ρ and α.

Table 1. Parameters used in numerical simulations.
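The initialization just described can be sketched as follows (a Python analogue of ours; the released code is MATLAB, and the function name and defaults are our own). Reusing one pool of initial states across parameter combinations is what allows the effects of μ, ρ and α to be isolated.

```python
# Python analogue (ours) of the initialization: each of the D components
# of every initial opinion is an independent standard normal draw, and one
# common pool of initial states is reused across parameter combinations.
import numpy as np

def initial_states(n_states, N=100, D=2, seed=0):
    rng = np.random.default_rng(seed)
    return rng.standard_normal(size=(n_states, N, D))

pool = initial_states(1000)   # 1000 initial states for N = 100 agents, D = 2
```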

The system is in a steady state from time t0 onwards if ∥vi(t+1) − vi(t)∥ = 0 for all i and all t ≥ t0. In the special case of zero universal threshold (ρi = ρ = 0), Proposition 2.2 has established that the only possible steady state is consensus, and that the system converges to it from any initial state, in the sense that ∥vi(t+1) − vi(t)∥ → 0 for all i as t → ∞. For other choices of ρi, systems may converge to other (non-consensus) types of steady state, and some systems may not converge to any steady state. For the practical purpose of numerical simulations, where it is impossible to let t → ∞, we use the following procedure to determine whether a system has reached a (pseudo-)steady state, allowing us to terminate the simulation at some finite time.

(1) Two agents are in the same cluster if the Euclidean distance between their opinions is less than 10⁻⁶. The clustering of the population refers to the partition of the N agents into their clusters. For example, if agents labeled by even numbers are in one cluster (all pairwise distances less than 10⁻⁶) while all odd-numbered agents are in a different cluster, at some time t, then we say that the clustering at this time is {(1,3,5,…),(2,4,6,…)}.

(2) If there exists some time tc ≥ 0 such that, for t = tc+1, tc+2, …, tc+100:

(a) The clustering of the population remains the same as the clustering at time tc; and

(b) no agent ‘accelerates’ by more than 10⁻⁶ in any dimension at any time, i.e., max_{i,j} |vij(t+1) − vij(t)| ≤ 10⁻⁶, where vij denotes the j-th component of vi; and

(c) no agent’s opinion at t = tc+100 is further than 10⁻⁶ away in any dimension from their opinion at t = tc, i.e., max_{i,j} |vij(tc+100) − vij(tc)| ≤ 10⁻⁶;

then we say that the system has reached a (pseudo-)steady state at time t_c, and stop the simulation. We call t_c the convergence time of this system.

Roughly speaking, we stop the simulation at time t_c if all agents have barely moved for 100 time-steps, where our definition of ‘barely’ is very strict. The ‘pseudo-steady’ states therefore serve as very good proxies for the true (analytical) steady states of the system, so we will refer to them simply as steady states. If, by the criteria above, the system fails to converge to any steady state within 5000 time-steps, then we declare that the system in that particular configuration (of initial state and parameters) fails to converge.
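The stopping procedure above can be sketched in code. The published implementation is in MATLAB; the following Python sketch is our own illustration (the function names and the simple single-pass grouping are ours, not taken from the authors' repository):

```python
import numpy as np

TOL = 1e-6  # distance tolerance used throughout Section 3.1


def clustering(V):
    """Partition agents into clusters: i and j share a cluster if their
    opinions are within Euclidean distance TOL (V has shape N x D)."""
    N = V.shape[0]
    labels = [-1] * N
    k = 0
    for i in range(N):
        if labels[i] == -1:
            labels[i] = k
            for j in range(i + 1, N):
                if labels[j] == -1 and np.linalg.norm(V[i] - V[j]) < TOL:
                    labels[j] = k
            k += 1
    return labels


def reached_steady_state(history, tc):
    """Check criteria (a)-(c) over t = tc+1, ..., tc+100.
    `history` is a list of N x D opinion arrays indexed by time."""
    if tc + 100 >= len(history):
        return False
    ref = clustering(history[tc])
    for t in range(tc + 1, tc + 101):
        if clustering(history[t]) != ref:                      # criterion (a)
            return False
        if np.max(np.abs(history[t] - history[t - 1])) > TOL:  # criterion (b)
            return False
    if np.max(np.abs(history[tc + 100] - history[tc])) > TOL:  # criterion (c)
        return False
    return True
```

In a full simulation loop, one would scan candidate values of t_c and terminate at the first that passes, declaring failure to converge after 5000 time-steps.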

Through our simulations, we find that a system with a universal threshold (ρ_i = ρ taking values as per ) always converges to some steady state, regardless of the other parameters and initial state (see, Section 3.2). On the other hand, a system with individually evolving heterogeneous thresholds, where the reinforcement rate takes values as per , sometimes fails to converge in interesting ways (see, Section 3.3).

3.2. Universal threshold: consensus versus segregation

In this part of the investigation, we assume that all agents have the same threshold, which remains constant for all time. That is, ρ_i(t) = ρ = constant for all i and t.

The simulations produced two distinct types of phenomena. The system reaches either a steady state of consensus, in which there is exactly one cluster (see, ), or a steady state of segregation, in which multiple clusters co-exist (). In particular, whenever consensus is formed, the consensus opinion equals the initial mean opinion of the population, as Proposition 2.2 predicts. When segregation is reached under a high value of ρ, it is typical that some agents never alter their opinion for any t>0 (). This stubbornness is exhibited only by agents whose initial opinions are ‘extreme,’ i.e. far from 0. Since the universal threshold is high, everyone listens only to a small number of others, and it is likely that those who hold initial opinions far from everyone else will never be influenced by anyone. In the example of , the set of connections among all agents, or the connectome of the population, evolves in the manner displayed by . The figure shows that even at the initial time t=0, one agent is ‘their own island’: not connected to anyone. This agent never alters their opinion while the agents who have connections evolve their positions. It is a common feature of the model that as the opinions evolve, the connectome becomes more disconnected in the graph-theoretical sense: more isolated ‘islands’ appear. In the example illustrated by , the nine clusters that constitute the population’s final steady state have almost stabilized by t=7, by which time only a few connections remain while the majority of initial connections have been severed. The cutting of a connection occurs if the pairwise affinity drops below the relevant threshold, and affinity decays over time if two agents keep failing to agree with each other. The model dictates that agents always try to align with neighbors; the difference between their succeeding in coming together and failing to do so (before their connection is cut) gives rise to the difference between consensus and segregation.

Figure 2. Examples of one- and two-dimensional opinion trajectories of N=100 agents evolving according to (3), with memory capacity μ=2. Panels (a,b): One-dimensional dynamics from a common initial state (sampled from a standard normal distribution), with different universal thresholds: ρ=0 (a) and ρ=0.8 (b). Panels (c,d): Two-dimensional dynamics from a common initial state (the two dimensions sampled from two independent standard normal distributions), with different universal thresholds: ρ=0 (c) and ρ=0.8 (d); v(1) and v(2) denote the first and second dimensions of the opinions, respectively. Consensus is reached in (a,c), while (b) exhibits segregation with 6 distinct clusters, and (d) shows segregation with 9 distinct clusters.

Figure 3. Evolution of the connectome of the population as the opinions evolve in the manner of Figure 2d, with parameters D=2, μ=2, ρ=0.8, α=0. Opinions are represented by (blue) dots, while connections between agents are (black) lines. The opacity of the dots increases as agents overlap, so that the larger the cluster, the darker the dots. Since the threshold ρ is universal, every connection is bidirectional: each pair of connected agents influence each other.

The rate at which connections are lost is strongly dependent on the population’s memory capacity, μ. Comparing to , we see that given identical initial opinions and other parameters, the connectome evolves more quickly when the memory capacity is small. That is, if agents quickly forget past discrepancies, then the connectome gets rewired dramatically at each step, and the system takes only a few steps to stabilize. This result is reminiscent of a recent success story in mathematical sociology. After multiple theoretical models predicted that the rapid rewiring of a social network promotes cooperative behavior (Fu et al., Citation2008; Hanaki et al., Citation2007; Santos et al., Citation2006), the phenomenon was observed in a human experiment by Rand et al. (Citation2011). In the current model, faster rewiring of the connectome accompanies not only faster stabilization of the population, but also the formation of fewer, larger clusters (see, ). This effect is most pronounced when the dimensionality of opinion space is D=2 or 3 and when the universal threshold is high (ρ ≥ 0.8). For example, in two-dimensional simulations with ρ=0.8, the mean number of stable clusters formed is 19 if μ=10 and 12 if μ=2, the latter scenario necessarily having larger cluster sizes on average (). Even more dramatically, in three-dimensional simulations with ρ=0.8, the mean number of stable clusters is 48 if μ=10 and 35 if μ=2 (). By interpreting the large clusters (which are always close to the neutral 0 position of opinion space) as cooperative groups, and the small clusters (which are always on the periphery of opinion space) as ‘defectors’ in the language of Rand et al. (Citation2011), we are able to understand the dynamics presented here as a process of seeking cooperation. Note that in order to make such identifications, we need to assume that cooperation is the neutral, or default, position: that a randomly sampled population will position themselves in a normal distribution around it. The smaller the memory capacity (or, the more ‘forgetful’ the agents), the more quickly the network gets rewired and cooperative clusters are formed, and the larger those clusters. This finding is consistent with Rand et al. (Citation2011) and the preceding theoretical predictions.

Figure 4. Evolution of the connectome of the population as the 2-dimensional opinions evolve from the same initial state as in Figure 2d, with parameters μ=10, ρ=0.8, α=0. Opinions are represented by (blue) dots, while connections between agents are (gray) lines. The opacity of the dots increases as agents overlap, so that the larger the cluster, the darker the dots. Since the threshold ρ is universal, every connection is bidirectional: each pair of connected agents influence each other.

Figure 5. Number of clusters formed for N=100 agents with universal threshold ρ taking values as per Table 1; 1000 simulations per value of ρ. A common set of 1000 initial states is used. Dimensionality D = 1 (a,b), 2 (c,d), 3 (e,f), and 5 (g,h). Memory capacity μ = 2 (a,c,e,g) and 10 (b,d,f,h).

The remainder of this section focuses on the effects of the parameters D,ρ and μ on the simulation results, particularly on clustering and segregation. We reiterate that these results are contingent on the assumption of normally distributed initial opinions.

For any given initial state, the system reaches consensus if ρ is sufficiently small, and segregation if ρ is sufficiently large (all other parameters being fixed). That is to say, if everyone is sufficiently amenable, then consensus will be formed; otherwise, there will be segregation. A deeper investigation of this phenomenon reveals a key feature of the model. For any fixed D and μ, the number of clusters in the steady state tends to increase with ρ; in fact, the mean number of clusters formed over 1000 simulations is a monotonic function of ρ (see, ). If ρ ≤ ρ_c for some ρ_c which depends on D and μ, the only outcome over 1000 simulations is consensus. For example, if (D,μ)=(1,2) then ρ_c=0.29 (); if (D,μ)=(1,10) then ρ_c=0.2 (); and if (D,μ)=(2,2) then ρ_c=0.27 (). We find that ρ_c is a decreasing function of both D and μ: the more high-dimensional the opinions, or the larger the collective memory capacity, the more amenable everyone must be in order to form a consensus. All these simulation results are consistent with the sufficient condition (19) for consensus. If D ≥ 3, we find that some initial states lead to steady states with as many clusters as there are agents: every agent holds their own unique opinion and will never change it. We call such a steady state maximum segregation. These states are achievable (over the 1000 simulations that we ran) only if ρ ≥ ρ_s for some ρ_s which depends on D (but its dependence on μ is negligible). For example, if D=3 then ρ_s=0.97 (Figure 5e,f); and if D=5 then ρ_s=0.83 (Figure 5g,h). We find that ρ_s is a decreasing function of D: the more high-dimensional the opinions, the easier it is for the system to reach maximum segregation. In particular, for D=5, the mean number of clusters formed resembles a sigmoid function of ρ where, for ρ ≥ 0.93, even the mean number is greater than 99.5, indicating that maximum segregation is extremely likely.

We also find that if ρ ≥ ρ_nc for some ρ_nc which depends on D and μ, then the population never forms a consensus. For example, if (D,μ)=(1,2) then ρ_nc=0.85 (see, ); if (D,μ)=(1,10) then ρ_nc=0.78 (); and if (D,μ)=(2,2) then ρ_nc=0.71 (). If (D,μ)=(5,10), then ρ_nc becomes as small as 0.34. The more high-dimensional the opinions, or the larger the collective memory capacity, the easier it is for consensus to be impossible.

The convergence time, t_c (defined in Section 3.1), is strongly dependent on the memory capacity, μ (see, ). When μ=2, no simulations take more than 50 steps to converge, and 95% of simulations take fewer than 25 steps to converge (). Raising the memory capacity to μ=10 approximately doubles the convergence time (). The mean convergence time is maximized by a ρ-value that is negatively correlated with both D and μ. When D=3 or 5, simulations with large ρ can yield zero convergence time (). Indeed, if the affinity threshold is so high that there are no interactions between agents in the initial state, then no agent will ever deviate from their initial opinion, leading to maximum segregation with 100 distinct clusters (see, ).

Figure 6. Convergence time for N=100 agents with universal threshold ρ taking values as per Table 1; 1000 simulations per value of ρ. A common set of 1000 initial states is used. Dimensionality D = 1 (a,b), 2 (c,d), 3 (e,f), and 5 (g,h). Memory capacity μ = 2 (a,c,e,g) and 10 (b,d,f,h).

We define the opinion drift of a system as the Euclidean distance from the initial mean opinion of the population to the steady-state mean opinion. The simulations reveal that the mean opinion drift over all simulations is maximized at some ρ = ρ_d which depends on D and μ (see, ). While ρ_d is a decreasing function of both D and μ, the maximum value of the mean opinion drift increases with D and μ, reaching approximately 0.11 when (D,μ)=(5,10). The opinion drift is zero for sufficiently small ρ, a result consistent with the fact that (19) is a sufficient condition for convergence to the mean initial opinion. The phenomenon of opinion drift demonstrates that the population’s average opinion tends to change over time as the agents evolve into clusters, and it tends to change more for more complex systems (recall that the system is most complex at intermediate values of ρ). The simplest systems, with extreme values of ρ, tend to exhibit very small amounts of opinion drift, as the agents either form a consensus (small ρ) or barely adjust their opinions (large ρ). A similar fact holds for the convergence time: the more complex systems tend to take longer to reach a steady state ().

Figure 7. Opinion drift for N=100 agents with universal threshold ρ taking values as per Table 1; 1000 simulations per value of ρ. A common set of 1000 initial states is used. Dimensionality D = 1 (a,b), 2 (c,d), 3 (e,f), and 5 (g,h). Memory capacity μ = 2 (a,c,e,g) and 10 (b,d,f,h). The grouping precision of opinion drift is 0.001; that is, the histogram bins are the intervals [0,0.001), [0.001,0.002), and so on.

3.3. Evolving heterogeneous thresholds: extremisation and oscillations

In this second line of investigation, we allow agents to evolve their thresholds from some baseline value, ρ, according to Equation (6) (see, ). Recall that the reinforcement rate, α>0, determines how sharply one’s threshold increases as one’s opinion becomes more extreme. Agents with more extreme views will have higher thresholds and therefore be less inclined to listen to other agents, thus making those extreme agents appear ‘stubborn.’ This correlation between extremeness of views and stubbornness has been studied in formal models and observed in real data (Kozitsin, Citation2020; Tian et al., Citation2021). For simplicity, we fix the dimensionality of opinion space at D=2 throughout this section.

Unlike the scenario with a universal threshold (which can be recovered in the limit α → 0), where every initial state leads to a steady state, we find that when α is sufficiently large, not all initial states induce a steady state (see, ). The number of failures to reach a steady state in 1000 simulations, F_{μ,α}(ρ), is negatively correlated with the baseline threshold, ρ, and positively correlated with the memory capacity, μ. Given any combination of (μ,α,ρ) within the ranges as per , the number of simulations that reach a steady state is always at least 950, providing a suitably large pool of results to analyze. We consider the cases that fail to converge, and the collective dynamics that arise, in more detail in Section 3.4.

Figure 8. Under evolving heterogeneous thresholds, with reinforcement rate α>0, a small number of simulations out of the total 1000 fail to produce a steady state. That number, F_{μ,α}(ρ), depends on α, the memory capacity μ, and the baseline threshold ρ. Panel (a): (D,μ)=(2,2). Panel (b): (D,μ)=(2,10).

For every setting of (μ,α,ρ), the 1000 − F_{μ,α}(ρ) simulations that do reach a steady state provide us with results on cluster formation and on convergence time, enabling comparisons with the corresponding results in the case of universal thresholds. Firstly, the mean number of clusters formed is an increasing function of ρ, α and μ (see, ), and consistently higher than the counterpart under a universal threshold (). Thus, a system where agents become more stubborn as their opinions become extreme tends to become more segregated than a system with a universal threshold. Meanwhile, for sufficiently small ρ, the mean convergence time is much larger under evolving heterogeneous thresholds than under universal thresholds (compare with ). A larger reinforcement rate α is therefore responsible not only for more splintering of the population, but also for longer times taken by any sub-population to reach an agreement.

Figure 9. Under evolving heterogeneous thresholds, the mean number of clusters formed (a,b) and the mean convergence time (c,d) are taken, for each parameter setting (μ,α,ρ), from all simulations that result in steady states. Panels (a,c): μ=2. Panels (b,d): μ=10. Other parameters used for each panel: D=2, α=0.1,0.2,0.4,0.8, and ρ=0,0.01,0.02,…,0.99.

The most striking result that we observe from simulations relates to the extremisation of opinions. We define the extremisation measure of the system as the difference between two Euclidean norms:

(20) Extremisation measure = ‖(1/N)∑_{i=1}^{N} v_i(t_c)‖ − ‖(1/N)∑_{i=1}^{N} v_i(0)‖,

where N is the population size (always 100 in this study), v_i(t) are the opinions, and t_c is the convergence time. Recall that the origin in D-dimensional opinion space represents the neutral opinion, and that the Euclidean norm of any position in the opinion space is a measure of how extreme it is. Thus, the extremisation measure represents the extent to which the population’s average view becomes more extreme over the course of the opinion dynamics; a positive (negative) value indicates that the average view becomes more extreme (more moderate). Note that extremisation is unlikely to be negative when we generate the initial opinions from normal distributions, which necessarily results in an initial mean close to 0. Nevertheless, the fashion in which positive extremisation occurs is illuminating, as we now proceed to demonstrate.
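For concreteness, the extremisation measure (20) and the opinion drift of Section 3.2 can both be computed from the initial and final opinion arrays. The Python sketch below is illustrative only (the study's own code is MATLAB), and the function names are ours:

```python
import numpy as np


def mean_opinion(V):
    """Mean opinion of an N x D array of opinions."""
    return V.mean(axis=0)


def extremisation_measure(V0, Vfinal):
    """Equation (20): change in the distance of the mean opinion from 0."""
    return np.linalg.norm(mean_opinion(Vfinal)) - np.linalg.norm(mean_opinion(V0))


def opinion_drift(V0, Vfinal):
    """Section 3.2: Euclidean distance travelled by the mean opinion."""
    return np.linalg.norm(mean_opinion(Vfinal) - mean_opinion(V0))
```

The two quantities are distinct: a mean opinion that moves along a circle centred on the origin produces a positive drift but zero extremisation.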

In many instances, we observe that the mechanism by which the average view becomes more (or less) extreme over time is a collective drift (see, ), in which a large group of agents form an unstable drifting cluster with more members than any stable cluster. These drifting agents first coalesce around some neutral opinion, before collectively moving away from it, being drawn toward a small number of fringe agents. The drifting cluster eventually stabilizes, merging with the fringe attractors, so that the population reaches a steady state. The drift toward the extremities of the opinion space equates to a positive extremisation measure for the population. This phenomenon, whereby fringe agents exert great influence over the moderate, amenable majority, pulling their opinions to the extremes, has been widely studied in the context of radicalization. For example, it has been observed that when university students without strong existing social identities are exposed to a large variety of strong views, they become at high risk of radicalization (Hollewell & Longpré, Citation2022). More generally, it has been proposed that fair-minded individuals become radicalized through deepening engagement with extremists on a gradually narrowing ‘Staircase to Terrorism’ (Moghaddam, Citation2005).

Figure 10. Two examples of collective drift of opinions under evolving heterogeneous thresholds. The first and second dimensions of the opinions are denoted by v(1) and v(2), respectively. Drifting begins at some t which is chosen to best illustrate the trajectory (rather than rigorously defined). Final opinions are taken at the convergence time, t_c. Parameters: μ=2, ρ=0, α=0.8.

A detailed view of the dynamics depicted in is presented in . We see that three fringe clusters have formed by the time t=6, after which they exert influence over the relatively neutral majority without moving their own positions. At a much later time, one of the fringe groups begins moving under the influence of the majority due to its close proximity, and eventually merges with the majority, stabilizing the entire population.

Figure 11. Key steps in the evolution of the connectome of the population as the opinions evolve in the manner of Figure 10a, with parameters D=2, μ=2, ρ=0, α=0.4. Opinions are represented by (blue) dots, bidirectional connections are dark (gray) lines and unidirectional connections are light (blue) lines. The opacity of the dots increases as agents overlap, so that the larger the cluster, the darker the dots. Since the threshold ρ is heterogeneous, unidirectional connections may exist, where agent j influences agent i without reciprocation.

The simulations show that a short memory capacity (μ=2) tends to induce larger extremisation measures than a long one (μ=10), suggesting that a population that takes a long history of itself into account is less likely to become extremised (comparing panels (a,c,e,g) with (b,d,f,h)). This finding supports the theory that, the more strongly one’s recent memory influences one’s online behavior, the more rapidly one tends to become sympathetic to extremist views (Z. Z. Cao et al., Citation2018). If the baseline threshold ρ is close to 1, then almost all simulations produce extremisation measures close to zero, simply because these systems tend not to induce any changes in opinions at all. If the reinforcement rate α is small, then the majority of simulations produce zero extremisation (even though outliers with enormous extremisation skew the mean value away from the median; see, ). If α is suitably large and ρ sufficiently small (a population where the neutral agents are highly amenable but the fringe agents are highly stubborn), then the mean and median values of the extremisation measure closely align, and we infer that the population’s most likely behavior is high extremisation (). In such cases, for every fixed (μ,α) pair, the mean/mode extremisation measure is maximized by ρ=0. In particular, for (μ,α,ρ)=(2,0.4,0), the mean/mode extremisation measure is just over 1 (), which is a substantial distance in the normalized opinion space. That is to say, the agents tend to move a long way from their initial positions to become their extremised final selves.

Figure 12. Extremisation measure for N=100 agents with evolving heterogeneous thresholds ρ_i(t); the baseline threshold ρ takes values as per Table 1. A thousand simulations are performed per value of ρ, and a common set of 1000 initial states is used for each ρ. Dimensionality D=2. Memory capacity μ = 2 (a,c,e,g) and 10 (b,d,f,h). Reinforcement rate α = 0.1 (a,b), 0.2 (c,d), 0.4 (e,f), and 0.8 (g,h). The grouping precision of the extremisation measure is 0.01; that is, the histogram bins are intervals of size 0.01, giving the same number of bins as in Figure 7.

All the extremisation results mirror the well-known socio-psychological effect of group polarization, where a group moves toward a view more extreme than most individual views that were held before their exposure to social influence (Moscovici & Zavalloni, Citation1969; Myers & Lamm, Citation1976). A similar effect has been observed in the increasing polarization of the US senate over time (Liu & Srivastava, Citation2015). The present model provides a detailed view of the mechanics underlying the group polarization effect; for example, we have described the collective drift mechanism, where the majority abandon their moderate initial agreement and become extremised by fringe agents. A sociologically significant lesson arising from these results is that, if the fringe agents, who hold extreme views to begin with, were more amenable to change (i.e. if α were smaller in the model), then such collective extremisation would not occur.

3.4. Failure to converge: collective oscillations

As seen in , when agents possess evolving heterogeneous thresholds, a small number of simulations fail to converge to a steady state. Before presenting the dynamics produced by the numerical results, we will first explicitly construct a system with evolving heterogeneous thresholds as per Equation (6), which fails to converge to any steady state and instead exhibits oscillatory dynamics.

Consider N ≥ 3 agents in D=1 dimension, with opinions denoted by v_i for i = 1, 2, …, N. Let the memory capacity μ=1 and the baseline threshold ρ=0. At t=0, let v_1 = L < 0 and v_2 = R > 0. We require R + L ≠ 0 and assume without loss of generality that R + L > 0; we then define

(21) C = (R + L)/2 > 0.

Let the initial v_3 = v_4 = ⋯ = v_N = v for some v ∈ (0, C). The following facts about the affinities a_{N1} and a_{N2} are easily established through elementary calculus.

(1) a_{N1} is a strictly decreasing, smooth, positive function of v ∈ (0, C);

(2) a_{N2} is a strictly increasing, smooth, positive function of v ∈ (0, C);

(3) a_{N2} < a_{N1} for all v ∈ (0, C), with a_{N1}(v → C) = a_{N2}(v → C) = 1/√(1 + X²), where we have defined the half-distance between R and L, X = (R − L)/2 > −L.
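Facts (1)-(3) are easily checked numerically. The sketch below assumes the one-dimensional affinity takes the form a_ij = 1/√(1 + (v_i − v_j)²); this functional form is our inference from the limiting value 1/√(1 + X²) quoted in fact (3), not a formula stated in this section:

```python
import numpy as np


def affinity(x, y):
    # Assumed 1-D affinity, consistent with the limit 1/sqrt(1 + X^2) in fact (3)
    return 1.0 / np.sqrt(1.0 + (x - y) ** 2)


L, R = -1.0, 1.5                 # an example with L < 0 < R and R + L > 0
C = (R + L) / 2.0                # midpoint, as in (21)
v = np.linspace(1e-4, C - 1e-4, 1000)  # grid over (0, C)

a_N1 = affinity(v, L)            # affinity between an agent at v and agent 1
a_N2 = affinity(v, R)            # affinity between an agent at v and agent 2
X = (R - L) / 2.0                # half-distance between R and L
```

On this grid, a_N1 is strictly decreasing, a_N2 is strictly increasing, a_N2 < a_N1 throughout, and both tend to 1/√(1 + X²) as v → C.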

Whatever R and v are, we choose a reinforcement rate α>0 such that the threshold ρN coincides with aN2; that is,

(22) 1 − e^(−αv) = 1/√(1 + (R − v)²),

which we rearrange to give

(23) e^(−α) = (1 − 1/√(1 + (R − v)²))^(1/v).

We therefore have ρ_3 = ρ_4 = ⋯ = ρ_N = a_{N2} < a_{N1}, meaning that when agents 3, 4, …, N are at position v, they listen to agent 1 and do not listen to agent 2. As a corollary, since 1 − e^(−αR) > 1 − e^(−αv) and 1/√(1 + (R − v_i)²) < 1/√(1 + (R − v)²) for all v_i < v, agent 2 (while at position R) listens to no opinions less than or equal to v. We take R and v to be such that α satisfies the constraint

(24) 1 − e^(−α|L|) ≥ 1/√(1 + L²), i.e., e^(−α) ≤ (1 − 1/√(1 + L²))^(1/|L|),

which ensures that agent 1 (while at position L) listens to no opinions greater than or equal to 0.

We proceed to find further conditions under which, for agents 3, 4, …, N initialized at v, the subsequent dynamics are periodic: v_3(t) = ⋯ = v_N(t) follows the sequence 0, v, 0, v, … for t > 0. To begin, we seek to make their common opinion zero at t=1; that is,

(25) 0 = v + (1/n) a_{N1} (L − v) = v + (L − v)/(n√(1 + (L − v)²)), where n = N − 1,

which implies the quadratic equation for L,

(26) (n²v² − 1)L² + (2v − 2n²v³)L + ((n² − 1)v² + n²v⁴) = 0.

Equation (26) has real solutions if and only if

v ≠ ±1/n and
(27) 0 ≤ (2v − 2n²v³)² − 4(n²v² − 1)((n² − 1)v² + n²v⁴) = 4n²v² − 4n⁴v⁴,

if and only if −1/n < v < 1/n. Since v > 0 by construction, we use the constraint 0 < v < 1/n. Thus, (26) has exactly one negative solution, which also solves (25):

(28) L = v(1 − n/√(1 − n²v²)).

According to (28), L is a strictly decreasing function of v; for all v ∈ (0, 1/n), we have L < (1 − n)v < −v. Next, we make v_3(t=2) = ⋯ = v_N(t=2) = v. Since their common threshold when v_3 = ⋯ = v_N = 0 is 0, all those agents listen to both agent 1 and agent 2, so we require

(29) (1/N)(R/√(1 + R²) + L/√(1 + L²)) = v.

It is clear that for all v > 0, any R > 0 and L < 0 satisfying (29) must be related by R > −L. To ensure that (29) has a real solution R, we impose the constraint

(30) Nv − L < 1,

which implies Nv − L/√(1 + L²) < 1 (since L < 0), and therefore R/√(1 + R²) = Nv − L/√(1 + L²) can be solved for R. Now, using (28) to write L in terms of v in (30) yields

(31) nv(1 + 1/√(1 − n²v²)) < 1,

which translates to v<m/n, where

(32) m = (1 − √2 + √(2√2 − 1))/2 ≈ 0.469.

Note that v<m/n is a stricter condition than v<1/n.
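The constant m can be verified numerically as the root of the saturated constraint (31): writing s = nv, m is the unique s ∈ (0,1) with s(1 + 1/√(1 − s²)) = 1. A quick check (the variable names below are ours):

```python
import math

# Closed form (32) for m, the unique root in (0, 1) of the saturated
# constraint (31) with s = n*v:  s * (1 + 1/sqrt(1 - s^2)) = 1.
m = 0.5 * (1.0 - math.sqrt(2.0) + math.sqrt(2.0 * math.sqrt(2.0) - 1.0))

# Evaluate the left-hand side at s = m; it should equal 1.
saturated = m * (1.0 + 1.0 / math.sqrt(1.0 - m * m))
```

Squaring s/√(1 − s²) = 1 − s leads to the quartic s⁴ − 2s³ + s² + 2s − 1 = 0, whose relevant factor s² + (√2 − 1)s + (1 − √2) = 0 yields the closed form above.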

So far, we have established that any v ∈ (0, m/n), and the corresponding value of L < −v determined by (28), guarantee the existence of some R > −L satisfying (29). The question remains as to whether, for some such v, the reinforcement rate α according to (23) is able to satisfy the constraint (24). To that end, we need

(33) (1 − 1/√(1 + (R − v)²))^(|L|/v) ≤ 1 − 1/√(1 + L²) = 1 − cos(arctan L).

We will prove that (33) holds if n is sufficiently large and v appropriately defined in terms of n. Let

(34) nv = sin ϕ,

for some ϕ which satisfies

(35) sin ϕ + tan ϕ = cos ϕ.

The left-hand side of (35) is a strictly increasing function of ϕ ∈ [0, arcsin m], with sin(0) + tan(0) = 0 and sin(arcsin m) + tan(arcsin m) = m(1 + 1/√(1 − m²)) = 1 by definition of m; while the right-hand side is strictly decreasing from cos(0) = 1 to cos(arcsin m) < 1. Therefore, (35) has exactly one solution ϕ ∈ (0, arcsin m), so that v ∈ (0, m/n). Putting (34) into (28), we find

(36) L = (sin ϕ)/n − tan ϕ,

and hence |L|/v = n sec ϕ − 1. Using the identity R/√(1 + R²) ≡ sin(arctan R), re-arranging (29) yields

(37) R = tan(arcsin(Nv − L/√(1 + L²))),

and using the identities 1/√(1 + R²) ≡ cos(arctan R) and cos(arcsin Z) ≡ √(1 − Z²), we further deduce

(38) 1/√(1 + R²) = √(1 − (Nv − L/√(1 + L²))²) > √(1 − (Nv − L)²) = √(1 − (sin ϕ + tan ϕ)²),

where the final equality follows from (34) and (36). By (35), we then find

(39) $\dfrac{1}{\sqrt{1+R^{2}}}>\sin\phi.$

Since $v\in(0,R)$, it then follows that

(40) $\left(1-\dfrac{1}{\sqrt{1+(R-v)^{2}}}\right)^{|L|/v}<\left(1-\dfrac{1}{\sqrt{1+R^{2}}}\right)^{|L|/v}<\left(1-\sin\phi\right)^{n\sec\phi-1}=:F(n).$

We find that $F(n)$ is a strictly decreasing function of $n\ge 0$, with $F(\cos\phi)=1$ and $\lim_{n\to\infty}F(n)=0$. Moreover, we have

(41) $1-\cos(\arctan L)=1-\cos\left(\arctan\left(\dfrac{\sin\phi}{n}-\tan\phi\right)\right)=:G(n),$

which is a strictly increasing function of $n\ge\cos\phi$, with $G(\cos\phi)=0$ and $\lim_{n\to\infty}G(n)=1-\cos\phi>0$. Therefore there exists some $n_{\min}\ge\cos\phi$ such that, for all $n\ge n_{\min}$, we have $F(n)\le G(n)$. Thus, (33) holds for all $n\ge n_{\min}$.
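Both $\phi$ and $n_{\min}$ are defined implicitly, but they are straightforward to approximate numerically. The sketch below is our own illustration (not code from the paper): it solves (35) by bisection on $(0,\arcsin m)$ and then scans for the smallest integer $n$ with $F(n)\le G(n)$; since $F$ is strictly decreasing and $G$ strictly increasing, the first crossing is precisely $n_{\min}$.

```python
import math

# Solve (35) for phi by bisection, then locate n_min where F first
# drops below G. m is taken from (32).
m = 0.5 - 1 / math.sqrt(2) + math.sqrt(math.sqrt(8) - 1) / 2

def h(phi):  # (35) rewritten as h(phi) = 0
    return math.sin(phi) + math.tan(phi) - math.cos(phi)

lo, hi = 0.0, math.asin(m)  # h(0) = -1 < 0 < h(arcsin m)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
phi = 0.5 * (lo + hi)

def F(n):  # upper bound from (40)
    return (1 - math.sin(phi)) ** (n / math.cos(phi) - 1)

def G(n):  # right-hand side of (33), written as in (41)
    return 1 - math.cos(math.atan(math.sin(phi) / n - math.tan(phi)))

n_min = next(n for n in range(1, 1000) if F(n) <= G(n))
print(round(phi, 3), n_min)  # phi ≈ 0.443, n_min = 6
```

With this construction the threshold is small: $n_{\min}=6$, so the period-2 solution requires only $n-1=5$ agents in the oscillatory cluster, in addition to the two stubborn agents.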

We have now shown that the common opinion of agents $3,4,\dots,N$ moves from $v>0$ at $t=0$, to $0$ at $t=1$, and back to $v$ at $t=2$. In the meantime, agents 1 and 2 do not move, since they are too ‘stubborn’ to listen to any opinions in $[0,v]$. Thus, the system has returned at $t=2$ to its original state, and will continue to oscillate with period 2. We have therefore constructed an $N$-body system, with explicitly specified parameters and initial condition, which follows periodic dynamics. It is interesting that this particular construction is possible only if the number of agents sharing the oscillatory opinion, $n-1$, is sufficiently large, i.e. $n-1\ge n_{\min}-1$.

This condition is borne out by our numerical simulations of the model (even in higher dimensions and with larger memory capacities), where we see oscillations of the ‘neutral majority’ being pulled back and forth by a small number of extreme agents. In our simulations, whenever a system fails to reach a steady state, a number of stable clusters are formed, while the remaining agents form an unstable cluster that oscillates collectively by small amounts, $O(10^{-3})$ along each dimension (see Figure 13). These collective oscillations have a long timescale compared to the memory capacity of the population. Moreover, the oscillatory cluster is always the majority, having more members than any of the stable clusters. The oscillations are facilitated by the majority agents’ evolving thresholds. As exemplified by Figure 13, while the majority cluster near position (0.07,0.62) moves toward the neutral (0,0) position due to an attraction to the fringe cluster near position (2.35,0.18), the majority agents’ thresholds decrease according to Equation (6). When these thresholds become sufficiently low, the fringe agents further away ‘on the other side’ become able to exert influence on the majority, pulling them back toward the other extreme. While the majority move away from the neutral position, their thresholds increase again, until they become so high that only the fringe cluster closest to them, near position (2.35,0.18), can exert influence. This oscillatory process continues indefinitely. While the moderate majority swing from one position to another, failing to settle, the peripheral agents hold their positions firm, having such high thresholds that they fail to listen to any other cluster.

Figure 13. Example of a system that fails to reach a steady state under evolving heterogeneous thresholds. The time periods 0–10, 0–100, and 4500–5000 are shown. In particular, the top-right inset in each panel shows only the large-time dynamics of the oscillatory cluster. Panels (a,b) show the first and second dimensions of the opinions, respectively. Each cluster is shown in a different color. Parameters: D=2,μ=2,ρ=0,α=0.4.


4. Conclusions and future directions

We have presented a novel agent-based model of opinion dynamics capable of mimicking many socio-psychological phenomena. The model extends several existing frameworks through bespoke elements such as an agent’s interaction threshold (generalizing the confidence bound), a measure of pairwise affinity between agents, and a system-wide memory capacity. The resulting dynamics is a non-Markovian, nonlinear process of opinion updating. We have analyzed the mathematical properties of the model, and explored the rich variety of simulated behavior that emerges from the dynamics, focusing on consensus, segregation, and extremisation.

The agents’ interaction thresholds are assigned in one of two ways: either prescribing a universal and constant threshold for all agents, or allowing each agent to evolve their own threshold such that the more extreme agents are less susceptible to change. When all agents are given a universal threshold, the system achieves a steady state of either consensus or segregation. We have proved that if all agents are assigned a sub-critical universal threshold $\rho<\rho_{c}$, where the critical value $\rho_{c}$ depends on the parameters $\mu$ and $D$ as per (13), then consensus is formed regardless of the initial configuration of opinions, and the consensus view equals the average (mean) opinion of the initial state. The system transitions from consensus to segregation as the interaction threshold increases. Through numerical simulations, we have investigated the effects of the model parameters on the opinion clustering, convergence time, and opinion drift. It is found that a high universal threshold promotes segregation in generic D-dimensional opinion space, extending similar findings by Hegselmann and Krause (2002) in one-dimensional opinion space. The simulations also reveal that the connectome of the population becomes more disconnected as the opinions evolve, and the rate at which the connectome rewires itself is strongly dependent on the system’s memory capacity. The opinion dynamics can be seen to represent a process of seeking cooperation, reflecting recent theoretical and experimental results (Rand et al., 2011).

In the case where the agents individually evolve their thresholds with some reinforcement rate (a model parameter controlling the rate at which agents become more stubborn), we have examined the system’s clustering behavior. Steady states are not always achieved in this case. By explicitly constructing an N-body system that forms an oscillatory cluster near the neutral position, we have proven that the model admits periodic solutions. Extreme agents ‘on either side’ of the cluster exert their influence in turn, resulting in the oscillations. The construction shows that periodic solutions are possible only if the oscillatory cluster is sufficiently large. Numerical simulations reveal oscillatory behavior of large clusters under various parameter settings. Both the analytic and numerical results in Section 3.4 demonstrate the power of stubborn fringe agents over the neutral majority. By introducing an extremisation measure, we have quantified the extent to which the collective opinion becomes more extreme over time. Extremisation is maximized when the baseline threshold (of entirely neutral agents) is small but the reinforcement rate is large. A population that takes a longer history of itself into account (larger memory capacity) is less likely to become extremised than a population that quickly forgets the past. These results echo the socio-psychological phenomena of group polarization (Moscovici & Zavalloni, 1969; Myers & Lamm, 1976) and online extremism (Z. Z. Cao et al., 2018), providing a mechanistic explanation for the behaviors. When extremisation is large, it tends to involve a process of collective drift, where a large cluster of moderate agents moves toward a small cluster of extremists. The fact that extremisation occurs when fringe agents have a low tolerance to others corroborates the theory of Deffuant et al. (2000).

For simplicity of methodology and ease of interpretation, we have assumed that the initial opinions in each dimension of opinion space follow a normal distribution. It is worth reiterating that the system’s subsequent behaviors are rich in variety despite the simplistic initial states. We expect an even richer range of phenomena to emerge from more sophisticated initial opinion distributions that may be better fits for real-world scenarios. For example, when a new political issue arises and a population forms initial opinions on the matter, those opinions may already be polarized rather than normally distributed, especially if media-driven tribalisation encourages immediate segregation (Llewellyn & Cram, 2016; Meredith & Richardson, 2019). The current model is capable of simulating the opinion dynamics in this context; one simply needs to input the appropriate data describing the initial opinions of the population. Moreover, when modeling multi-dimensional issues, it may be appropriate to sample initial opinions from correlated distributions, rather than independent distributions as we have done in this paper (Bartels, 2018).

It is also worth noting that other frameworks exist for performing a stability analysis on state-dependent networks (Etesami, 2019; Proskurnikov & Tempo, 2018). In particular, the paper by Etesami (2019) contains some mathematical tools that could be used in the future for analysis of the model we have proposed. Other potential extensions to the model may include: a repulsive force, where low-affinity pairs do not merely ignore each other but actively move away from each other’s views; stochastic fluctuations in the agents’ interaction thresholds, representing externally-driven variations in one’s openness to other people; and a hierarchical population where some agents are assigned a much higher interaction threshold than the majority, describing powerful individuals exerting influence with little reciprocation. Overall, the modeling framework developed in this study generates various sociologically relevant phenomena under simple assumptions, while being sufficiently versatile to suit more elaborate contexts; integration with experimental data in future work will help to further enhance the theory.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This project is supported by the British Academy, grant number SRG1920_101649. The computer simulations are performed on the BlueBEAR HPC system at the University of Birmingham, UK. BMS is supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/S022945/1. JL thanks Samuel Johnson (University of Birmingham) for useful discussions and thanks the University of Birmingham for fellowship funding.

References

  • Alizadeh, M., Cioffi-Revilla, C., & Crooks, A. (2015). The effect of in-group favoritism on the collective behavior of individuals’ opinions. Advances in Complex Systems, 18(1–2), 1550002. https://doi.org/10.1142/S0219525915500022
  • Anderson, B. D., & Ye, M. (2019). Recent advances in the modelling and analysis of opinion dynamics on influence networks. International Journal of Automation and Computing, 16(2), 129–149. https://doi.org/10.1007/s11633-019-1169-8
  • Artime, O., Peralta, A. F., Toral, R., Ramasco, J. J., & San Miguel, M. (2018). Aging-induced continuous phase transition. Physical Review E, 98(3), 032104. https://doi.org/10.1103/PhysRevE.98.032104
  • Banisch, S., & Olbrich, E. (2019). Opinion polarization by learning from social feedback. The Journal of Mathematical Sociology, 43(2), 76–103. https://doi.org/10.1080/0022250X.2018.1517761.
  • Bartels, L. M. (2018). Partisanship in the Trump era. The Journal of Politics, 80(4), 1483–1494. https://doi.org/10.1086/699337
  • Benatti, A., de Arruda, H. F., Silva, F. N., Comin, C. H., & da Fontoura Costa, L. (2020). Opinion diversity and social bubbles in adaptive Sznajd networks. Journal of Statistical Mechanics: Theory and Experiment, 2020(2), 023407. https://doi.org/10.1088/1742-5468/ab6de3
  • Bentley, R. A., Ormerod, P., & Batty, M. (2011). Evolving social influence in large populations. Behavioral Ecology and Sociobiology, 65(3), 537–546. https://doi.org/10.1007/s00265-010-1102-1
  • Blondel, V. D., Hendrickx, J. M., Olshevsky, A., & Tsitsiklis, J. N. (2005). Convergence in multiagent coordination, consensus, and flocking. In Proceedings of the 44th IEEE conference on decision and control (pp. 2996–3000).
  • Cao, M., Morse, A. S., & Anderson, B. D. (2008). Reaching a consensus in a dynamically changing environment: A graphical approach. SIAM Journal on Control and Optimization, 47(2), 575–600. https://doi.org/10.1137/060657005
  • Cao, Z., Zheng, M., Vorobyeva, Y., Song, C., & Johnson, N. F. (2018). Complexity in individual trajectories toward online extremism. Complexity, 2018, 3929583. https://doi.org/10.1155/2018/3929583
  • Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of Modern Physics, 81(2), 591. https://doi.org/10.1103/RevModPhys.81.591
  • Cheng, C., & Yu, C. (2019). Opinion dynamics with bounded confidence and group pressure. Physica A: Statistical Mechanics and Its Applications, 532, 121900. https://doi.org/10.1016/j.physa.2019.121900
  • Cucker, F., & Smale, S. (2007). Emergent behavior in flocks. IEEE Transactions on Automatic Control, 52(5), 852–862. https://doi.org/10.1109/TAC.2007.895842
  • Dandekar, P., Goel, A., & Lee, D. T. (2013). Biased assimilation, homophily, and the dynamics of polarization. Proceedings of the National Academy of Sciences, 110(15), 5791–5796. https://doi.org/10.1073/pnas.1217220110
  • Deffuant, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(1–4), 87–98. https://doi.org/10.1142/S0219525900000078
  • DeGroot, M. H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69(345), 118–121. https://doi.org/10.1080/01621459.1974.10480137
  • Etesami, S. R. (2019). A simple framework for stability analysis of state-dependent networks of heterogeneous agents. SIAM Journal on Control and Optimization, 57(3), 1757–1782. https://doi.org/10.1137/18M1217681.
  • Flache, A., & Macy, M. W. (2011). Small worlds and cultural polarization. The Journal of Mathematical Sociology, 35(1–3), 146–176. https://doi.org/10.1080/0022250X.2010.532261
  • Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 4. https://doi.org/10.18564/jasss.3521
  • French, J. R., Jr. (1956). A formal theory of social power. Psychological Review, 63(3), 181–194. https://doi.org/10.1037/h0046123
  • Friedkin, N. E., & Johnsen, E. C. (1990). Social influence and opinions. The Journal of Mathematical Sociology, 15(3–4), 193–206. https://doi.org/10.1080/0022250X.1990.9990069
  • Fu, F., Hauert, C., Nowak, M. A., & Wang, L. (2008). Reputation-based partner choice promotes cooperation in social networks. Physical Review E, 78(2), 026117. https://doi.org/10.1103/PhysRevE.78.026117
  • Galesic, M., & Stein, D. L. (2019). Statistical physics models of belief dynamics: Theory and empirical tests. Physica A: Statistical Mechanics and Its Applications, 519, 275–294. https://doi.org/10.1016/j.physa.2018.12.011
  • Hanaki, N., Peterhansl, A., Dodds, P. S., & Watts, D. J. (2007). Cooperation in evolving social networks. Management Science, 53(7), 1036–1050. https://doi.org/10.1287/mnsc.1060.0625
  • Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2. http://jasss.soc.surrey.ac.uk/5/3/2.html
  • Hendrickx, J. M., Shi, G., & Johansson, K. H. (2014). Finite-time consensus using stochastic matrices with positive diagonals. IEEE Transactions on Automatic Control, 60(4), 1070–1073. https://doi.org/10.1109/TAC.2014.2352691
  • Hollewell, G. F., & Longpré, N. (2022). Radicalization in the social media era: Understanding the relationship between self-radicalization and the Internet. International Journal of Offender Therapy and Comparative Criminology, 66(8), 896–913. https://doi.org/10.1177/0306624X211028771
  • Holley, R. A., & Liggett, T. M. (1975). Ergodic theorems for weakly interacting infinite systems and the voter model. The Annals of Probability, 3(4), 643–663. https://doi.org/10.1214/aop/1176996306
  • Huet, S., Deffuant, G., & Jager, W. (2008). A rejection mechanism in 2d bounded confidence provides more conformity. Advances in Complex Systems, 11(4), 529–549. https://doi.org/10.1142/S0219525908001799
  • Kononovicius, A. (2021). Supportive interactions in the noisy voter model. Chaos, Solitons & Fractals, 143, 110627. https://doi.org/10.1016/j.chaos.2020.110627
  • Kozitsin, I. V. (2020). Formal models of opinion formation and their application to real data: Evidence from online social networks. The Journal of Mathematical Sociology, 46(2), 120–147. https://doi.org/10.1080/0022250X.2020.1835894
  • Kurahashi-Nakamura, T., Mäs, M., & Lorenz, J. (2016). Robust clustering in generalized bounded confidence models. Journal of Artificial Societies and Social Simulation, 19(4), 7. https://doi.org/10.18564/jasss.3220
  • Lewis, A. D. (2010). A top nine list: Most popular induced matrix norms. Queen’s University, Kingston, Ontario, Tech. Rep., 1–13. https://mast.queensu.ca/~andrew/notes/pdf/2010a.pdf
  • Liu, C. C., & Srivastava, S. B. (2015). Pulling closer and moving apart: Interaction, identity, and influence in the U.S. Senate, 1973 to 2009. American Sociological Review, 80(1), 192–217. https://doi.org/10.1177/0003122414564182
  • Llewellyn, C., & Cram, L. (2016). Brexit? Analyzing opinion on the UK-EU referendum within Twitter. Proceedings of the International AAAI Conference on Web and Social Media, 10, 760–761. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13119
  • Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
  • Lorenz, J. (2005). A stabilization theorem for dynamics of continuous opinions. Physica A: Statistical Mechanics and Its Applications, 355(1), 217–223. https://doi.org/10.1016/j.physa.2005.02.086.
  • Mariano, S., Morărescu, I., Postoyan, R., & Zaccarian, L. (2020). A hybrid model of opinion dynamics with memory-based connectivity. IEEE Control Systems Letters, 4(3), 644–649. https://doi.org/10.1109/LCSYS.2020.2989077
  • Meredith, J., & Richardson, E. (2019). The use of the political categories of Brexiter and Remainer in online comments about the EU referendum. Journal of Community & Applied Social Psychology, 29(1), 43–55. https://doi.org/10.1002/casp.2384
  • Moghaddam, F. M. (2005). The staircase to terrorism: A psychological exploration. American Psychologist, 60(2), 161. https://doi.org/10.1037/0003-066X.60.2.161
  • Moscovici, S., & Zavalloni, M. (1969). The group as a polarizer of attitudes. Journal of Personality and Social Psychology, 12(2), 125–135. https://psycnet.apa.org/doi/10.1037/h0027568
  • Myers, D. G., & Lamm, H. (1976). The group polarization phenomenon. Psychological Bulletin, 83(4), 602–627. https://doi.org/10.1037/0033-2909.83.4.602
  • Nedić, A., & Liu, J. (2016). On convergence rate of weighted-averaging dynamics for consensus problems. IEEE Transactions on Automatic Control, 62(2), 766–781. https://doi.org/10.1109/TAC.2016.2572004
  • Noorazar, H., Vixie, K. R., Talebanpour, A., & Hu, Y. (2020). From classical to modern opinion dynamics. International Journal of Modern Physics C, 31(7), 2050101. https://doi.org/10.1142/S0129183120501016
  • Proskurnikov, A. V., & Tempo, R. (2018). A tutorial on modeling and analysis of dynamic social networks. Part II. Annual Reviews in Control, 45, 166–190. https://doi.org/10.1016/j.arcontrol.2018.03.005
  • Rand, D. G., Arbesman, S., & Christakis, N. A. (2011). Dynamic social networks promote cooperation in experiments with humans. Proceedings of the National Academy of Sciences, 108(48), 19193–19198. https://doi.org/10.1073/pnas.1108243108
  • Ren, W., & Beard, R. W. (2005). Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Transactions on Automatic Control, 50(5), 655–661. https://doi.org/10.1109/TAC.2005.846556
  • Santos, F. C., Pacheco, J. M., & Lenaerts, T. (2006). Cooperation prevails when individuals adjust their social ties. PLoS Computational Biology, 2(10), e140. https://doi.org/10.1371/journal.pcbi.0020140
  • Schweighofer, S., Schweitzer, F., & Garcia, D. (2020). A weighted balance model of opinion hyperpolarization. Journal of Artificial Societies and Social Simulation, 23(3), 5. https://doi.org/10.18564/jasss.4306
  • Stadtfeld, C., Takács, K., & Vörös, A. (2020). The emergence and stability of groups in social networks. Social Networks, 60, 129–145. https://doi.org/10.1016/j.socnet.2019.10.008
  • Stark, H.-U., Tessone, C. J., & Schweitzer, F. (2008a). Decelerating microdynamics can accelerate macrodynamics in the voter model. Physical Review Letters, 101(1), 018701. https://doi.org/10.1103/PhysRevLett.101.018701
  • Stark, H.-U., Tessone, C. J., & Schweitzer, F. (2008b). Slower is faster: Fostering consensus formation by heterogeneous inertia. Advances in Complex Systems, 11(4), 551–563. https://doi.org/10.1142/S0219525908001805
  • Tian, Y., Jia, P., Mirtabatabaei, A., Wang, L., Friedkin, N. E., & Bullo, F. (2021). Social power evolution in influence networks with stubborn individuals. IEEE Transactions on Automatic Control, 67(2). https://doi.org/10.1109/TAC.2021.3052485
  • Turner, M. A., & Smaldino, P. E. (2018). Paths to polarization: How extreme views, miscommunication, and random chance drive opinion dynamics. Complexity, 2018, 2740959. https://doi.org/10.1155/2018/2740959
  • Vicsek, T., & Zafeiris, A. (2012). Collective motion. Physics Reports, 517(3–4), 71–140. https://doi.org/10.1016/j.physrep.2012.03.004.
  • Ye, M., Qin, Y., Govaert, A., Anderson, B. D., & Cao, M. (2019). An influence network model to study discrepancies in expressed and private opinions. Automatica, 107, 371–381. https://doi.org/10.1016/j.automatica.2019.05.059