Abstract
In this work we are interested in the modelling and control of opinion dynamics spreading on a time-evolving network with scale-free asymptotic degree distribution. The mathematical model is formulated as a coupling of an opinion alignment system with a probabilistic description of the network. The optimal control problem aims at forcing consensus over the network; to this goal, a control strategy based on the degree of connection of each agent is designed. A numerical method based on a model predictive strategy is then developed and different numerical tests are reported. The results show that in this way it is possible to drive the overall opinion toward a desired state even if we control only a suitable fraction of the nodes.
1 Introduction
Graph theory has emerged in recent years as one of the most active fields of research [1, 7,8,9,10, 24]. In fact, the study of technological and communication networks has earned special attention thanks to the huge amount of data coming from empirical observations and, more recently, from online platforms like Facebook, Twitter, Instagram and many others. This offered a real laboratory for testing on a large scale the collective behavior of large populations of agents [16, 17, 20], and new challenges for scientific research have emerged. In particular, the necessity to handle millions, and often billions, of vertices implied a substantial shift to large-scale statistical properties of graphs, giving rise to the study of the so-called scale-free networks [8, 18, 24].
In this work, we will focus our attention on the modelling and control of opinion dynamics on a time-evolving network. We consider a system of agents, each one belonging to a node of the network, interacting only if they are connected through the network. Each agent modifies his/her opinion through a compromise function which depends both on the opinions and on the network [3,4,5, 13, 14, 19, 21, 23]. At the same time, new connections are created and removed from the network following a preferential attachment process. For simplicity, here we restrict to non-growing networks, that is, graphs where the total number of nodes and the total number of edges are conserved in time. An optimal control problem is then introduced in order to drive the agents toward a desired opinion. The rest of the paper is organized as follows. In Sect. 2 we describe the alignment model for opinions spreading on a non-growing network. In order to control the trajectories of the model, we introduce in Sect. 3 a general setting for a control technique weighted by a function of the number of connections. A numerical method based on model predictive control is then developed. Finally, in Sect. 4 we perform numerical experiments showing the effectiveness of the present approach. Some conclusions are reported in the last section.
2 Modelling Opinion Dynamics on Networks
In the model each agent \(i=1,\dots ,N\) is characterized by two quantities \((w_i,c_i)\), representing the opinion and the number of connections of the ith agent, respectively. This latter term is strictly related to the architecture of the social graph where each agent shares its opinion, and it influences the interaction between individuals. Each agent is seen here as a node of a time-evolving graph \(\mathcal {G}^N=\mathcal {G}^N(t), t\in [t_0,t_f]\), whose nodes are connected through a given set of edges. In the following we will indicate with the constant \(\gamma \ge 0\) the density of connectivity.
2.1 Network Evolution Without Nodes’ Growth
In the sequel we will consider a graph with both a fixed number of nodes N and a fixed number of edges E. In order to describe the network's evolution we take into account a preferential attachment probabilistic process. This mechanism, also known as the Yule process or Matthew effect, has been used in the modeling of several phenomena in biology, economics and sociology, and it is strictly connected to the generation of power-law distributions [8, 24]. The initial state of the network, \(\mathcal {G}^N(0)\), is chosen randomly and, at each time step, an edge is randomly selected and removed from the network. At the same time, a node is selected with probability
among all possible nodes of \(\mathcal {G}^N\), with \(\alpha >0\) an attraction coefficient. Based on the probability (1), another node is chosen at time t and connected with the formerly selected one. The described process is repeated at each time step. In this way both the number of nodes and the total number of edges remain constant in the reference time interval. Let p(c, t) indicate the probability that a node has degree c at time t. We have
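The rewiring step just described can be sketched in code. As an illustrative assumption (the paper's probability (1) is not reproduced here), we take the attachment probability proportional to \(c+\alpha\), which is consistent with the preferential (\(\alpha \ll 1\)) and non-preferential (\(\alpha \gg 1\)) limits discussed below:

```python
import random

def rewire_step(adj, alpha, rng=random):
    """One step of the fixed-size rewiring process: a uniformly random edge
    is removed, then a new edge is created between two nodes drawn with
    probability proportional to (degree + alpha).  The attachment law
    Pi(c) ~ (c + alpha) is an assumption, not the paper's exact formula."""
    nodes = list(adj)
    # remove a uniformly chosen edge (adj: node -> set of neighbours, undirected)
    edges = [(i, j) for i in adj for j in adj[i] if i < j]
    i, j = rng.choice(edges)
    adj[i].discard(j)
    adj[j].discard(i)
    # pick the first endpoint preferentially; re-draw if it has no free slot
    while True:
        a = rng.choices(nodes, weights=[len(adj[k]) + alpha for k in nodes])[0]
        eligible = [k for k in nodes if k != a and k not in adj[a]]
        if eligible:
            break
    # pick the second endpoint preferentially among non-neighbours of a,
    # so that self-loops and multi-edges are excluded
    b = rng.choices(eligible, weights=[len(adj[k]) + alpha for k in eligible])[0]
    adj[a].add(b)
    adj[b].add(a)
    return adj
```

Note that both the number of nodes and the number of edges are conserved by construction, matching the non-growing setting.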
This process can be described by the following master equation [6]
where \(D>0\) characterizes the relaxation velocity of the network toward an asymptotic degree distribution \(p_{\infty }(c)\). The right-hand side consists of four terms: the first and third terms account for the rate of gaining a node of degree c, while the second and fourth terms account for the rate of losing a node of degree c. Equation (3) holds in the interval \(c\le E\), whereas for each \(c>E\) we set \(p(c,t)=0\). While most random graph models with a fixed number of nodes and edges produce unrealistic degree distributions, like the Watts and Strogatz generation model, called the small-world model [22], the main advantage of the graph generated through the described rewiring process lies in the possibility to recover the scale-free properties. Indeed we can easily show that if \(\gamma =2E/N \ge 1\) with attraction coefficient \(\alpha \ll 1\), then the stationary degree distribution \(p_{\infty }(c)\) obeys a power law of the following form
When \(\alpha \gg 1\) we lose the features of the preferential attachment mechanism; in fact, high-degree nodes are selected with approximately the same probability as nodes with a low degree of connection. The selection then occurs in a non-preferential way and the asymptotic degree distribution obeys the Poisson distribution
A simple graph is sketched in Fig. 1, where we can observe how the initial degree of the nodes influences the evolution of the connections. In order to correctly observe the creation of the new links, which preferentially connect nodes with the highest connection degree, we marked each node with a number \(i=1,\dots ,20\), and the nodes' diameters are proportional to their number of connections.
Left: initial configuration of the sample network \(\mathcal {G}^{20}\) with density of connectivity \(\gamma =5\). Right: a simulation of the network \(\mathcal {G}^{20}\) after 10 time steps of the preferential attachment process. The diameter of each node is proportional to its degree of connection.
2.2 The Opinion Alignment Dynamics
The opinion of the ith agent ranges in the closed set \(I=[-1,1]\), that is \(w_i=w_i(t)\,{\in }\,I\) for each \(t\,{\in }\,[t_0,t_f]\), and changes over time according to the following differential system
where \(S_i\) indicates the set of vertices connected with the ith agent and reflects the architecture of the chosen network, whereas \(c_i=|S_i| < N\) stands for the cardinality of the set \(S_i\), also known as the degree of vertex i. Note that the number of connections \(c_i\) evolves in time according to the process described in Sect. 2.1. Furthermore, we introduced the interaction function \(P_{ij}\in [0,1]\), depending on the opinions of the agents and on the graph \(\mathcal {G}^N\), which can be written as follows
A possible choice for the interaction function is the following
where \(H(\cdot ,\cdot )\) represents the positive compromise propensity, and K is a general function taking into account statistical properties of the graph \(\mathcal {G}\). In what follows we will consider \(K=K(c_i,c_j)\), a function depending on the vertices' connections.
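One explicit-Euler step of the alignment dynamics can be sketched as follows. The specific choices made here are assumptions for illustration only: H is taken as a bounded-confidence indicator \(|w_i-w_j|\le \varDelta \), and \(K(c_i,c_j)=c_j/(c_i+c_j)\) is a hypothetical degree weighting in which high-degree agents are harder to influence but more influential, in the spirit of the discussion in Sect. 4:

```python
def opinion_step(w, adj, dt, delta=0.4):
    """Explicit-Euler step of dw_i/dt = (1/c_i) * sum_{j in S_i} P_ij (w_j - w_i),
    with P_ij = H(w_i, w_j) * K(c_i, c_j).  H and K below are hypothetical
    choices, not the paper's exact interaction functions."""
    w_new = list(w)
    for i, neigh in adj.items():
        c_i = len(neigh)
        if c_i == 0:
            continue  # isolated node: opinion unchanged
        s = 0.0
        for j in neigh:
            if abs(w[i] - w[j]) <= delta:              # H: bounded confidence
                k = len(adj[j]) / (c_i + len(adj[j]))  # K: degree weighting (assumed)
                s += k * (w[j] - w[i])
        w_new[i] = w[i] + dt * s / c_i
    return w_new
```

Since each term moves \(w_i\) toward the opinions of its neighbours, the opinions remain in \(I=[-1,1]\) for sufficiently small time steps.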
3 Optimal Control Problem of the Alignment Model
In this section we introduce a control strategy which characterizes the action of an external agent with the aim of driving opinions toward a given target \(w_d\). To this goal, we consider the evolution of the network \(\mathcal {G}^N(t)\) and the opinion dynamics in the interval \([t_0,t_f]\). Therefore we introduce the following optimal control problem
subject to
where we indicated with \(\mathcal {U}\) the set of admissible controls, with \(\nu >0\) a regularization parameter expressing the strength of the control in the overall dynamics, and with \(w_d\in [-1,1]\) the target opinion. Note that the action of the control u is weighted by an indicator function \(\chi (\cdot )\), which is active only for the nodes with degree \(c_i\ge c^*\). In general, this selective control approach models an a-priori strategy of a policy maker, possibly acting under limited resources or unable to influence the whole ensemble of agents. For example, we can consider a varying-horizon control acting on a fixed portion of connected agents.
The solution of this kind of control problem is in general a difficult task, given that its direct solution is prohibitively expensive for a large number of agents. Different strategies have been developed for alignment modeling in order to obtain feedback controls or more general numerical control techniques [2,3,4,5, 11, 12, 15]. To tackle the described problem numerically, a standard strategy makes use of a model predictive control (MPC) approach, also referred to as a receding horizon strategy.
In general, MPC strategies solve a finite-horizon open-loop optimal control problem, predicting the dynamic behavior over a prediction horizon \(t_p\le t_f\), with initial state sampled at time t (initially \(t = t_0\)), and computing the control on a control horizon \(t_c\le t_p\). The optimization is performed by introducing a new integral functional \(J_p(\cdot ,\cdot )\), which is an approximation of (9) on the time interval \([t,t+t_p]\), namely
where the control, \(\bar{u}: [t,t+t_p]\rightarrow \mathcal {U}\), is supposed to be an admissible control in the set of admissible controls \(\mathcal {U}\), a subset of \(\mathbb {R}\), and \(\nu _p\) is a possibly different penalization parameter with respect to the full optimal control problem. The computed optimal open-loop control \(\bar{u}(\cdot )\) is then applied to the system dynamics until the next sampling time \(t+t_s\), with \(t_s\le t_c\); thereafter the procedure is repeated, taking as initial state the state of the dynamics at time \(t+t_s\) and shifting forward the prediction and control horizons, until the final time \(t_f\) is reached. This process generates a suboptimal solution with respect to the solution of the full optimal control problem (9)–(10).
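The receding-horizon loop can be summarized on a toy problem. The scalar dynamics \(w'=u\), the grid of candidate constant controls, and the brute-force minimization are illustration choices only, not the paper's setup:

```python
def mpc_loop(w0, w_d, dt, steps, p, nu, candidates):
    """Receding-horizon sketch: at each sampling time solve an open-loop
    problem over p prediction steps (here by brute force over a grid of
    candidate constant controls), apply only the first control, shift the
    horizon, and repeat.  Toy scalar dynamics w' = u for illustration."""
    w, history = w0, []
    for _ in range(steps):
        def cost(u):
            wp, J = w, 0.0
            for _ in range(p):          # predict over the horizon
                wp = wp + dt * u
                J += dt * ((wp - w_d) ** 2 + nu * u ** 2)
            return J
        u_star = min(candidates, key=cost)  # open-loop minimizer on the grid
        w = w + dt * u_star                 # apply only the first control
        history.append((w, u_star))
    return w, history
```

Replacing the grid search with a proper optimizer and the toy dynamics with the networked system (10) yields the scheme used in the paper.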
Let us now consider the fully discretized problem, defining the time sequence \([t_0,t_1,\ldots ,t_M]\), where \(t_{n}-t_{n-1}=t_s=\varDelta t>0\) and \(t_M:=M\varDelta t = t_f\), for all \(n=1,\ldots , M \), assuming furthermore that \(t_c = t_p = p\varDelta t\), with \(p>0\). Hence the linear MPC method looks for a piecewise constant control on the time frame \([t_0,t_M]\), defined as follows
In order to discretize the evolution dynamics we consider a Runge-Kutta scheme; the fully discretized optimal control problem on the time frame \([t_n,t_n+p\varDelta t]\) reads
subject to
for all \(n=1,\ldots , p-1\); \(l=1,\ldots ,s\); \(i=1,\ldots , N\), and having defined the following functions
The coefficients \((a_{l,k})_{l,k}\), \((b_l)_l\) and \((\theta _{l})_l\), with \(l,k=1,\ldots ,s\), define the Runge-Kutta method, and \((\bar{U}^{(n)})_l, (W^{(n)}_{i,l})_l\) are the internal stages associated with \(\bar{u}(t), w_i(t)\) on the time frame \([t_n,t_{n+1}]\).
3.1 Instantaneous Control
Let us restrict to the case of a single prediction horizon, \(p = 1\), where we discretize the dynamics with an explicit Euler scheme (\(a_{1,1}=\theta _1=0\) and \( b_1 = 1\)). Notice that, since the control \(\bar{u}\) is a constant value, and assuming that the network \(\mathcal {G}^N\) remains fixed over the time interval \([t_n,t_n+\varDelta t]\), the discrete optimal control problem (13) reduces to
with
In order to find the minimum of (13) it is sufficient to find the value \(\bar{u}\) satisfying \( \partial _{\bar{u}} J_p(\mathbf w ,\bar{u})=0\), which can be computed by a straightforward calculation
where we scaled the penalization parameter with \(\nu _p= \varDelta t \nu \).
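The one-step stationarity condition admits a closed form. The quadratic functional assumed below, \(J_p = \frac{1}{2N}\sum _i (w_i^{n+1}-w_d)^2 + \frac{\nu _p}{2}\bar{u}^2\) with \(w_i^{n+1} = w_i^n + \varDelta t\, F_i + \varDelta t\, \bar{u}\, \chi _i\) and \(\nu _p = \varDelta t\, \nu \), is a reconstruction consistent with the scaling stated above, not a verbatim copy of the paper's functional:

```python
def instantaneous_control(w, F, chi, w_d, dt, nu):
    """Closed-form minimizer of the assumed one-step quadratic functional:
    setting dJ_p/du = 0 and using chi_i^2 = chi_i gives
        u = sum_i chi_i (w_d - w_i - dt*F_i) / (N*nu + dt * sum_i chi_i).
    Here F_i is the interaction term of agent i and chi_i in {0, 1} marks
    the controlled nodes (degree c_i >= c*)."""
    N = len(w)
    n_c = sum(chi)
    num = sum(ci * (w_d - wi - dt * fi) for wi, fi, ci in zip(w, F, chi))
    return num / (N * nu + dt * n_c)
```

The control acts only through the nodes with \(\chi _i=1\), and vanishes when no node exceeds the degree threshold.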
4 Numerical Results
In this section we present some numerical results in order to show the main features of the control introduced in the previous paragraphs. We considered a population of \(N=100\) agents, each of them representing a node of an undirected graph with density of connectivity \(\gamma =30\). The network \(\mathcal {G}^{100}\) evolves in the time interval [0, 50] with attraction coefficient \(\alpha =0.01\) and represents a single sample of the evolution of the master equation (3) with \(D=20\). The control problem is solved by the instantaneous control method described in Sect. 3.1 with \(\varDelta t=5\times 10^{-2}\). In Fig. 3 we present the evolution over the reference time interval of the constrained opinion dynamics. The interaction terms have been chosen as follows
where the function \(H(\cdot ,\cdot )\) is a bounded confidence function with \(\varDelta =0.4\), while \(K(\cdot ,\cdot )\) defines the interactions between agents i and j, taking into account that agents with a large number of connections are more difficult to influence and, at the same time, have more influence over other agents. The action of the control is characterized by a parameter \(\kappa =0.1\) and target opinion \(w_d=0.8\). We present the resulting opinion dynamics for a choice of constants \(\lambda =1/100,\beta =1\) in Fig. 2. We report the evolution of the network and of the opinions in Fig. 3; here the diameter of each node is proportional to its degree of connection, whereas the color indicates its opinion. As a measure of consensus over the agents we introduce the quantity
Evolution of opinion and connection degree of each node of the previously evolved graph \(\mathcal {G}^{100}\). From left to right: graph at times \(t=0,25,50\). From the top: opinion dynamics for threshold values \(c^*=10,20,30\). The target opinion is set \(w_d=0.8\) and the control parameter \(\kappa =0.1\). (Color figure online)
where \(w_i(t_f)\) is the opinion of the ith agent at the final time \(t_f\). In Fig. 4 we compare different values of \(V_{w_d}\) as a function of \(c^*\). Here we computed both the size of the set of controlled agents and the value of \(V_{w_d}\), starting from a given uniform initial opinion and the same graph with initial uniform degree distribution. It can be observed how the control is capable of driving the overall dynamics toward the desired state acting only on a portion of the nodes.
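The consensus measure can be computed directly; its form as the mean-square displacement \(V_{w_d} = \frac{1}{N}\sum _i (w_i(t_f)-w_d)^2\) is inferred from the "mean square displacement" label in the figure caption, since the displayed formula is not reproduced here:

```python
def consensus_measure(w_final, w_d):
    """Mean-square displacement of the final opinions from the target:
    V_{w_d} = (1/N) * sum_i (w_i(t_f) - w_d)^2.  The formula is an
    inference from the caption, not a verbatim copy of the paper's."""
    return sum((wi - w_d) ** 2 for wi in w_final) / len(w_final)
```

Smaller values of \(V_{w_d}\) indicate that the population is closer to consensus at the target opinion.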
Left: the red squared plot indicates the size of the set of controlled agents at the final time \(t_f\) as a function of \(c^*\), whereas the blue line indicates the mean square displacement \(V_{w_d}\). Right: values of the control u at each time step for \(c^*=10,20,30\). In the numerical test we assumed \(\varDelta =0.4, \varDelta t=5\times 10^{-2}, \kappa =0.1\). (Color figure online)
Conclusions and Perspectives
In this short note we focused our attention on a control problem for the dynamics of opinions over a time-evolving network. We showed that the introduction of a suitable selective control, depending on the connection degree of the agent's node, is capable of driving the overall opinion toward consensus. In a related work we have considered this problem in a mean-field setting where the number of agents, and therefore of nodes, is very large [6]. In future works we plan to concentrate on the model predictive control setting, where the evolution of the control is based on the evolution of the network, and on the case with varying prediction horizon acting on a given portion of the agents.
References
Albert, R., Barabási, A.-L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74(1), 47–97 (2002)
Albi, G., Bongini, M., Cristiani, E., Kalise, D.: Invisible control of self-organizing agents leaving unknown environments. SIAM J. Appl. Math. 76, 1683–1710 (2016)
Albi, G., Herty, M., Pareschi, L.: Kinetic description of optimal control problems and applications to opinion consensus. Commun. Math. Sci. 13(6), 1407–1429 (2015)
Albi, G., Pareschi, L., Zanella, M.: Boltzmann-type control of opinion consensus through leaders. Philos. Trans. R. Soc. A 372(2028), 20140138 (2014)
Albi, G., Pareschi, L., Zanella, M.: Uncertainty quantification in control problems for flocking models. Math. Prob. Eng. 2015, 850124 (2015). 14 p
Albi, G., Pareschi, L., Zanella, M.: Opinion dynamics over complex networks: kinetic modeling and numerical methods. Kinet. Relat. Models 10(1), 1–32 (2017)
Amaral, L.A.N., Scala, A., Barthélemy, M., Stanley, H.E.: Classes of small-world networks. Proc. Natl. Acad. Sci. U.S.A. 97(21), 11149–11152 (2000)
Barabási, A.-L., Albert, R., Jeong, H.: Mean-field theory for scale-free random networks. Phys. A: Stat. Mech. Appl. 272(1), 173–187 (1999)
Barrat, A., Barthélemy, M., Vespignani, A.: Dynamical Processes on Complex Networks. Cambridge University Press, Cambridge (2008)
Benczik, I.J., Benczick, S.Z., Schmittmann, B., Zia, R.K.: Opinion dynamics on an adaptive random network. Phys. Rev. E 79(4), 046104 (2009)
Bongini, M., Fornasier, M., Fröhlich, F., Haghverdi, L.: Sparse stabilization of dynamical systems driven by attraction and avoidance forces. Netw. Heterogen. Media 9(1), 1–31 (2014)
Wongkaew, S., Caponigro, M., Borzì, A.: On the control through leadership of the Hegselmann-Krause opinion formation model. Math. Models Methods Appl. Sci. 25(2), 255–282 (2015)
Chi, L.: Binary opinion dynamics with noise on random networks. Chin. Sci. Bull. 56(34), 3630–3632 (2011)
Das, A., Gollapudi, S., Munagala, K.: Modeling opinion dynamics in social networks. In: Proceedings of the 7th ACM International Conference on Web Search and Data Mining, ACM (2014)
Herty, M., Zanella, M.: Performance bounds for the mean-field limit of constrained dynamics. Preprint (2015)
Jin, E.M., Girvan, M., Newman, M.E.J.: Structure of growing social networks. Phys. Rev. E 64(4), 046132 (2001)
Kramer, A.D.I., Guillory, J.E., Hancock, J.T.: Experimental evidence of massive scale emotional contagion through social networks. Proc. Nat. Acad. Sci. 111(24), 8788–8789 (2014)
Newman, M.E.J.: The structure and function of complex networks. SIAM Rev. 45(2), 167–256 (2003)
Pareschi, L., Toscani, G.: Interacting Multiagent Systems. Kinetic Equations and Monte Carlo Methods. Oxford University Press, Oxford (2013)
Strogatz, S.H.: Exploring complex networks. Nature 410(6825), 268–276 (2001)
Sznajd-Weron, K., Sznajd, J.: Opinion evolution in closed community. Int. J. Mod. Phys. C 11(6), 1157–1165 (2000)
Watts, D.J., Strogatz, S.H.: Collective dynamics of small-world networks. Nature 393, 440–442 (1998)
Weisbuch, G.: Bounded confidence and social networks. Eur. Phys. J. B-Condens. Matter Complex Syst. 38(2), 339–343 (2004)
Xie, Y.-B., Zhou, T., Wang, B.-H.: Scale-free networks without growth. Phys. A 387, 1683–1688 (2008)
Acknowledgments
GA acknowledges the support of the ERC-Starting Grant project High-Dimensional Sparse Optimal Control (HDSPCONTR).
© 2016 IFIP International Federation for Information Processing
Albi, G., Pareschi, L., Zanella, M. (2016). On the Optimal Control of Opinion Dynamics on Evolving Networks. In: Bociu, L., Désidéri, JA., Habbal, A. (eds) System Modeling and Optimization. CSMO 2015. IFIP Advances in Information and Communication Technology, vol 494. Springer, Cham. https://doi.org/10.1007/978-3-319-55795-3_4