1 Introduction

Graph theory has emerged in recent years as one of the most active fields of research [1, 7,8,9,10, 24]. In fact, the study of technological and communication networks has received special attention thanks to the huge amount of data coming from empirical observations and, more recently, from online platforms like Facebook, Twitter, Instagram and many others. This offered a real laboratory for testing on a large scale the collective behavior of large populations of agents [16, 17, 20], and new challenges for scientific research have emerged. In particular, the necessity to handle millions, and often billions, of vertices implied a substantial shift to large-scale statistical properties of graphs, giving rise to the study of the so-called scale-free networks [8, 18, 24].

In this work we focus our attention on the modelling and control of opinion dynamics on a time evolving network. We consider a system of agents, each one associated with a node of the network, interacting only if they are connected through the network. Each agent modifies his/her opinion through a compromise function which depends both on the opinions and on the network [3,4,5, 13, 14, 19, 21, 23]. At the same time, new connections are created and removed from the network following a preferential attachment process. For simplicity we restrict here to a non-growing network, that is, a graph where the total number of nodes and the total number of edges are conserved in time. An optimal control problem is then introduced in order to drive the agents toward a desired opinion. The rest of the paper is organized as follows. In Sect. 2 we describe the alignment model for opinions spreading on a non-growing network. In order to control the trajectories of the model, we introduce in Sect. 3 a general setting for a control technique weighted by a function of the number of connections. A numerical method based on model predictive control is then developed. Finally, in Sect. 4 we perform numerical experiments showing the effectiveness of the present approach. Some conclusions are reported in the last section.

2 Modelling Opinion Dynamics on Networks

In the model each agent \(i=1,\dots ,N\) is characterized by two quantities \((w_i,c_i)\), representing the opinion and the number of connections of the ith agent, respectively. This latter quantity is strictly related to the architecture of the social graph on which each agent shares its opinion, and it influences the interaction between individuals. Each agent is seen here as a node of a time evolving graph \(\mathcal {G}^N=\mathcal {G}^N(t), t\in [t_0,t_f]\), whose nodes are connected through a given set of edges. In the following we will indicate by the constant \(\gamma \ge 0\) the density of connectivity of the network.

2.1 Network Evolution Without Nodes’ Growth

In the sequel we will consider a graph with both a fixed number of nodes N and a fixed number of edges E. In order to describe the network’s evolution we take into account a preferential attachment probabilistic process. This mechanism, known also as Yule process or Matthew effect, has been used in the modeling of several phenomena in biology, economics and sociology, and it is strictly connected to the generation of power law distributions [8, 24]. The initial state of the network, \(\mathcal {G}^N(0)\), is chosen randomly and, at each time step, an edge is randomly selected and removed from the network. At the same time, a node is selected with probability

$$\begin{aligned} \varPi _{\alpha }(c_i) = \dfrac{c_i+\alpha }{\sum _{j=1}^N (c_j+\alpha )}=\dfrac{c_i+\alpha }{2E+N\alpha },\qquad i=1,\dots ,N, \end{aligned}$$
(1)

among all possible nodes of \(\mathcal {G}^N\), with \(\alpha >0\) an attraction coefficient. Based on the probability (1), another node is chosen at time t and connected with the formerly selected one. The described process is repeated at each time step. In this way both the total number of nodes and the total number of edges remain constant in the reference time interval. Let \(p(c,t)\) indicate the probability that a node has degree \(c\) at time \(t\). We have

$$\begin{aligned} \sum _c p(c,t)=1, \qquad \sum _c c~p(c,t)=\gamma . \end{aligned}$$
(2)
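For concreteness, one step of this rewiring process can be sketched as follows (a minimal Python illustration, not the implementation used for the experiments of Sect. 4; the adjacency-set representation, the function name and the rejection of self-loops and repeated edges are our own choices):

```python
import random

def rewiring_step(adj, alpha):
    """One step of the rewiring process: a uniformly random edge is removed,
    then a new edge is created between two nodes drawn with probability (1)."""
    N = len(adj)                       # adj[i] is the set S_i of neighbors of node i
    # remove a uniformly random edge (i, j)
    edges = [(i, j) for i in range(N) for j in adj[i] if i < j]
    i, j = random.choice(edges)
    adj[i].remove(j)
    adj[j].remove(i)
    # preferential attachment: node k is drawn with weight c_k + alpha,
    # i.e. with probability (c_k + alpha) / (2E + N*alpha)
    weights = [len(adj[k]) + alpha for k in range(N)]
    while True:
        a, b = random.choices(range(N), weights=weights, k=2)
        if a != b and b not in adj[a]:  # avoid self-loops and multi-edges
            adj[a].add(b)
            adj[b].add(a)
            return
```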

This process can be described by the following master equation [6]

$$\begin{aligned} \begin{aligned} \dfrac{d}{dt} p(c,t)\ =\,&\dfrac{D}{E}\left[ (c+1)p(c+1,t)-cp(c,t)\right] \\&+\dfrac{2D}{2E+N\alpha }\left[ (c-1+\alpha )p(c-1,t)-(c+\alpha )p(c,t)\right] , \end{aligned} \end{aligned}$$
(3)

where \(D>0\) characterizes the relaxation velocity of the network toward the asymptotic degree distribution \(p_{\infty }(c)\). The right-hand side consists of four terms: the first and third terms account for the rate of gaining a node of degree c, whereas the second and fourth terms account for the rate of losing a node of degree c. Equation (3) holds for \(c\le E\), whereas for each \(c>E\) we set \(p(c,t)=0\). While most random graph models with a fixed number of nodes and edges produce unrealistic degree distributions, like the Watts and Strogatz generation model, called the small-world model [22], the main advantage of the graph generated through the described rewiring process lies in the possibility of recovering the scale-free properties. Indeed, we can easily show that if \(\gamma =2E/N \ge 1\) and the attraction coefficient satisfies \(\alpha \ll 1\), then the stationary degree distribution \(p_{\infty }(c)\) obeys a power law of the following form

$$\begin{aligned} p_{\infty }(c)=\left( \dfrac{\alpha }{\gamma }\right) ^{\alpha }\dfrac{\alpha }{c}. \end{aligned}$$
(4)

When \(\alpha \gg 1\) we lose the features of the preferential attachment mechanism: high-degree nodes are selected with approximately the same probability as low-degree nodes. The selection then occurs in a non-preferential way and the asymptotic degree distribution obeys the Poisson distribution

$$\begin{aligned} p_{\infty }(c)=\dfrac{e^{-\gamma }}{c!}\gamma ^c. \end{aligned}$$
(5)
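Both regimes can be checked numerically by integrating (3) forward in time until the degree distribution stabilizes. The sketch below uses an explicit Euler discretization of (3); the default parameters echo the test of Sect. 4 (\(N=100\), \(E=1500\), \(D=20\)), while the large value of alpha, the time step and the stopping time are illustrative choices of ours:

```python
import numpy as np

def stationary_degree_distribution(N=100, E=1500, D=20.0, alpha=50.0,
                                   dt=1e-3, T=50.0):
    """Explicit Euler integration of the master equation (3) until p(c,t)
    is approximately stationary. p is supported on c = 0, ..., E."""
    c = np.arange(E + 1, dtype=float)
    p = np.zeros(E + 1)
    p[int(2 * E / N)] = 1.0            # start with all mass on the mean degree
    for _ in range(int(T / dt)):
        # edge-removal terms: (c+1) p(c+1,t) - c p(c,t)
        loss = -c * p
        loss[:-1] += c[1:] * p[1:]
        # preferential-attachment terms: (c-1+alpha) p(c-1,t) - (c+alpha) p(c,t)
        gain = -(c + alpha) * p
        gain[1:] += (c[:-1] + alpha) * p[:-1]
        p += dt * (D / E * loss + 2 * D / (2 * E + N * alpha) * gain)
    return c, p

# for alpha >> 1 the result should approach the Poisson law (5) with gamma = 2E/N
```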

A simple graph is sketched in Fig. 1, where we can observe how the initial degree of the nodes influences the evolution of the connections. In order to clearly observe the creation of the new links, which preferentially connect nodes with the highest connection degree, we marked each node with a number \(i=1,\dots ,20\) and made the nodes’ diameters proportional to their number of connections.

Fig. 1.

Left: initial configuration of the sample network \(\mathcal {G}^{20}\) with density of connectivity \(\gamma =5\). Right: a simulation of the network \(\mathcal {G}^{20}\) after 10 time steps of the preferential attachment process. The diameter of each node is proportional to its degree of connection.

2.2 The Opinion Alignment Dynamics

The opinion of the ith agent ranges in the closed set \(I=[-1,1]\), that is \(w_i=w_i(t)\in I\) for each \(t\in [t_0,t_f]\), and evolves over time according to the following differential system

$$\begin{aligned} \dot{w}_i = \frac{1}{|S_i|} \sum _{j\in S_i} P_{ij} (w_j-w_i), \qquad i=1,\dots ,N \end{aligned}$$
(6)

where \(S_i\) indicates the set of vertices connected with the ith agent and reflects the architecture of the chosen network, whereas \(c_i=|S_i| < N\) stands for the cardinality of the set \(S_i\), also known as the degree of vertex i. Note that the number of connections \(c_i\) evolves in time according to the process described in Sect. 2.1. Furthermore, we introduced the interaction function \(P_{ij}\in [0,1]\), depending on the opinions of the agents and on the graph \(\mathcal {G}^N\), which can be written as follows

$$\begin{aligned} P_{ij}=P(w_i,w_j; \mathcal {G}^N). \end{aligned}$$
(7)

A possible choice for the interaction function is the following

$$\begin{aligned} P(w_i,w_j; \mathcal {G}^N)=H(w_i,w_j)K(\mathcal {G}^N), \end{aligned}$$
(8)

where \(H(\cdot ,\cdot )\) represents the positive compromise propensity and K is a general function taking into account statistical properties of the graph \(\mathcal {G}^N\). In what follows we will consider \(K=K(c_i,c_j)\), a function depending on the vertices’ connections.
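As an illustration, the right-hand side of (6) under the factorization (8) translates directly into code. In the sketch below the kernels H and K are passed as generic callables, and the adjacency-set representation is our own choice:

```python
import numpy as np

def opinion_rhs(w, adj, H, K):
    """Right-hand side of (6) with the factorized interaction (8):
    P_ij = H(w_i, w_j) * K(c_i, c_j). w is the array of opinions,
    adj[i] is the set S_i of neighbors of node i."""
    dw = np.zeros_like(w)
    deg = [len(S) for S in adj]          # c_i = |S_i|
    for i, S_i in enumerate(adj):
        for j in S_i:
            dw[i] += H(w[i], w[j]) * K(deg[i], deg[j]) * (w[j] - w[i])
        if deg[i] > 0:
            dw[i] /= deg[i]              # average over the neighborhood S_i
    return dw
```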

3 Optimal Control Problem of the Alignment Model

In this section we introduce a control strategy which characterizes the action of an external agent with the aim of driving the opinions toward a given target \(w_d\). To this end, we consider the evolution of the network \(\mathcal {G}^N(t)\) and of the opinion dynamics in the interval \([t_0,t_f]\), and we introduce the following optimal control problem

$$\begin{aligned} \min _{u\in \mathcal {U}}J(\mathbf w ,u):= \frac{1}{2}\int _{t_0}^{t_f} \Big \{ \frac{1}{N}\sum _{j=1}^N (w_j(s)-w_d)^2 +\nu u(s)^2 \Big \} ds, \end{aligned}$$
(9)

subject to

$$\begin{aligned} \dot{w}_i = \dfrac{1}{|S_i|}\sum _{j\in S_i}P_{ij}(w_j-w_i)+u\chi (c_i\ge c^*),\quad w_i(0)=w_i^0, \end{aligned}$$
(10)

where we indicated with \(\mathcal {U}\) the set of admissible controls, with \(\nu >0\) a regularization parameter which expresses the strength of the control in the overall dynamics, and with \(w_d\in [-1,1]\) the target opinion. Note that the action of the control u is weighted by an indicator function \(\chi (\cdot )\), which is active only for the nodes with degree \(c_i\ge c^*\). In general, this selective control approach models an a priori strategy of a policy maker, possibly acting under limited resources or unable to influence the whole ensemble of agents. For example, we can consider a varying horizon control acting on a fixed portion of connected agents.

The solution of this kind of control problem is in general a difficult task, given that its direct solution is prohibitively expensive for a large number of agents. Different strategies have been developed for alignment modeling in order to obtain feedback controls or more general numerical control techniques [2,3,4,5, 11, 12, 15]. To tackle the described problem numerically, a standard strategy makes use of a model predictive control (MPC) approach, also referred to as receding horizon strategy.

In general, MPC strategies solve a finite horizon open-loop optimal control problem, predicting the dynamic behavior over a prediction horizon \(t_p\le t_f\), with initial state sampled at time t (initially \(t = t_0\)), and computing the control on a control horizon \(t_c\le t_p\). The optimization is performed by introducing a new integral functional \(J_p(\cdot ,\cdot )\), which is an approximation of (9) on the time interval \([t,t+t_p]\), namely

$$\begin{aligned} J_p(\mathbf w ,\bar{u}):= \frac{1}{2}\int _t^{t+t_p} \Big \{ \frac{1}{N}\sum _{j=1}^N (w_j(s)-w_d)^2 +\nu _p \bar{u}(s)^2 \Big \} ds \end{aligned}$$
(11)

where \(\bar{u}: [t,t+t_p]\rightarrow \mathcal {U}\) is an admissible control, with \(\mathcal {U}\) the set of admissible controls, a subset of \(\mathbb {R}\), and \(\nu _p\) a penalization parameter possibly different from that of the full optimal control problem. The computed optimal open-loop control \(\bar{u}(\cdot )\) is then applied in feedback to the system dynamics until the next sampling time \(t+t_s\), with \(t_s\le t_c\); thereafter the procedure is repeated, taking as initial condition the state of the dynamics at time \(t+t_s\) and shifting forward the prediction and control horizons, until the final time \(t_f\) is reached. This process generates a sub-optimal solution with respect to the solution of the full optimal control problem (9)–(10).
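Schematically, the receding horizon strategy can be summarized as follows (a structural sketch only, with the inner open-loop optimization and the forward dynamics left abstract; all names are our own):

```python
def receding_horizon(w0, t0, tf, ts, tp, solve_open_loop, evolve):
    """Skeleton of the MPC strategy: at each sampling time, solve the
    open-loop problem (11) on [t, t+tp], apply the resulting control in
    feedback until the next sampling time t+ts, then shift the horizons."""
    t, w = t0, w0
    while t < tf:
        u_bar = solve_open_loop(w, t, t + tp)  # minimize J_p on [t, t+tp]
        w = evolve(w, u_bar, t, t + ts)        # apply u_bar on [t, t+ts]
        t += ts
    return w
```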

Let us now consider the fully discretized problem, defined over the time sequence \([t_0,t_1,\ldots ,t_M]\), where \(t_{n}-t_{n-1}=t_s=\varDelta t>0\) for all \(n=1,\ldots , M \) and \(t_M:=M\varDelta t = t_f\), assuming furthermore that \(t_c = t_p = p\varDelta t\), with \(p>0\). Hence the linear MPC method looks for a piecewise constant control on the time frame \([t_0,t_M]\), defined as follows

$$\begin{aligned} \bar{u}(t) = \sum _{n=0}^{M-1} \bar{u}^n\chi _{[t_n,t_{n+1}]}(t). \end{aligned}$$
(12)

In order to discretize the evolution dynamics we consider a Runge-Kutta scheme; the fully discretized optimal control problem on the time frame \([t_n,t_n+p\varDelta t]\) reads

$$\begin{aligned} \min _{\bar{u}\in \mathcal {U}}J_p(\mathbf w ,\bar{u}):= \frac{1}{2}\int _{t_n}^{t_n+p\varDelta t} \Big \{ \frac{1}{N}\sum _{j=1}^N (w_j(s)-w_d)^2 +\nu _p \bar{u}^2 \Big \} ds \end{aligned}$$
(13)

subject to

$$\begin{aligned} W^{(n)}_{i,l} = w^n_i+\varDelta t\sum _{k=1}^s a_{l,k}\left( F(t_n+\theta _k\varDelta t, W^{(n)}_{i,k})+\bar{U}^{(n)}_k Q_i(t_n+\theta _k\varDelta t)\right) , \end{aligned}$$
(14)
$$\begin{aligned} w^{n+1}_i = w^n_i+\varDelta t\sum _{l=1}^s b_{l}\left( F(t_n+\theta _l\varDelta t, W^{(n)}_{i,l})+\bar{U}^{(n)}_l Q_i(t_n+\theta _l\varDelta t)\right) , \end{aligned}$$
(15)

for all \(n=1,\ldots , p-1\); \(l=1,\ldots ,s\); \(i=1,\ldots , N\), and having defined the following functions

$$\begin{aligned} F(t,w_i)=\dfrac{1}{|S_i(t)|}\sum _{j\in S_i(t)}P_{ij}(w_j-w_i),\quad Q_i(t) = \chi (c_i(t)\ge c^*). \end{aligned}$$

The coefficients \((a_{l,k})_{l,k}\), \((b_l)_l\) and \((\theta _{l})_l\), with \(l,k=1,\ldots ,s\), define the Runge-Kutta method, and \((\bar{U}^{(n)}_l)_l, (W^{(n)}_{i,l})_l\) are the internal stages associated with \(\bar{u}(t), w_i(t)\) on the time frame \([t_n,t_{n+1}]\).

3.1 Instantaneous Control

Let us restrict to the case of a single prediction horizon, \(p = 1\), and discretize the dynamics with an explicit Euler scheme (\(a_{1,1}=\theta _1=0\) and \( b_1 = 1\)). Notice that, since the control \(\bar{u}\) is a constant value and assuming that the network \(\mathcal {G}^N\) remains fixed over the time interval \([t_n,t_n+\varDelta t]\), the discrete optimal control problem (13) reduces to

$$\begin{aligned} \min _{\bar{u}^n\in \mathcal {U}}J_p(\mathbf w ,\bar{u}^n):= \varDelta t \Big \{ \frac{1}{N}\sum _{j=1}^N (w_j^{n+1}(\bar{u}^n)-w_d)^2 +\nu _p (\bar{u}^n)^2 \Big \} \end{aligned}$$
(16)

with

$$\begin{aligned} w_i^{n+1}&= w_{i}^n + \varDelta t\left( F(t_n,w^n_i)+\bar{u}^nQ_i^n\right) ,\quad w_i^n= w_i(t_n). \end{aligned}$$
(17)

In order to find the minimum of (16) it is sufficient to find the value \(\bar{u}^n\) satisfying \( \partial _{\bar{u}^n} J_p(\mathbf w ,\bar{u}^n)=0\), which can be computed by a straightforward calculation

$$\begin{aligned} \bar{u}^n = -\frac{1}{N\nu + \varDelta t\sum _{j=1}^N(Q^n_j)^2}\left( \sum _{j=1}^NQ^n_j\left( w^n_j-w_d\right) +\varDelta t\sum _{j=1}^NQ^n_jF(t_n,w^n_j)\right) , \end{aligned}$$
(18)

where we scaled the penalization parameter as \(\nu _p= \varDelta t \nu \).
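In practice, each step of the instantaneous control thus amounts to one closed-form update combining (17) and (18). A minimal vectorized sketch (the data layout, with the indicator \(Q^n\) stored as a 0/1 vector, is an assumption of ours):

```python
import numpy as np

def instantaneous_control_step(w, F_n, Q_n, w_d, nu, dt):
    """One explicit Euler step (17) with the closed-form control (18).
    w, F_n, Q_n are arrays of length N: F_n[i] = F(t_n, w_i) and
    Q_n[i] = 1 if c_i >= c*, 0 otherwise; nu_p is scaled as dt * nu."""
    N = len(w)
    u_bar = -(Q_n @ (w - w_d) + dt * (Q_n @ F_n)) \
            / (N * nu + dt * np.sum(Q_n ** 2))
    return w + dt * (F_n + u_bar * Q_n), u_bar
```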

4 Numerical Results

In this section we present some numerical results in order to show the main features of the control introduced in the previous paragraphs. We consider a population of \(N=100\) agents, each of them representing a node of an undirected graph with density of connectivity \(\gamma =30\). The network \(\mathcal {G}^{100}\) evolves in the time interval [0, 50] with attraction coefficient \(\alpha =0.01\) and represents a single sample of the evolution of the master equation (3) with \(D=20\). The control problem is solved by the instantaneous control method described in Sect. 3.1 with \(\varDelta t=5\times 10^{-2}\). In Fig. 3 we present the evolution over the reference time interval of the constrained opinion dynamics. The interaction terms have been chosen as follows

$$\begin{aligned} K(c_i,c_j)=e^{-\lambda c_i}\left( 1-e^{-\beta c_j}\right) , \qquad H(w_i,w_j)=\chi (|w_i-w_j|\le \varDelta ), \end{aligned}$$
(19)

where the function \(H(\cdot ,\cdot )\) is a bounded confidence function with \(\varDelta =0.4\), while \(K(\cdot ,\cdot )\) defines the interactions between agents i and j, taking into account that agents with a large number of connections are more difficult to influence and, at the same time, have more influence over other agents. The action of the control is characterized by the parameter \(\kappa =0.1\) and the target opinion \(w_d=0.8\). We present the resulting opinion dynamics for the choice of constants \(\lambda =1/100,\beta =1\) in Fig. 2. We report the evolution of the network and of the opinions in Fig. 3, where the diameter of each node is proportional to its degree of connection and the color indicates its opinion. As a measure of consensus among the agents we introduce the quantity

Fig. 2.

Evolution of the constrained opinion dynamics with uniform initial distribution of opinions over the time interval [0, 50] for different values of \(c^*=10,15,30\) with target opinion \(w_d=0.8\), control parameter \(\kappa =0.1\), \(\varDelta t=10^{-3}\) and confidence bound \(\varDelta =0.4\).

Fig. 3.

Evolution of opinion and connection degree of each node of the previously evolved graph \(\mathcal {G}^{100}\). From left to right: graph at times \(t=0,25,50\). From the top: opinion dynamics for threshold values \(c^*=10,20,30\). The target opinion is set to \(w_d=0.8\) and the control parameter is \(\kappa =0.1\). (Color figure online)

$$\begin{aligned} V_{w_d}=\dfrac{1}{N-1}\sum _{i=1}^N (w_i(t_f)-w_d)^2, \end{aligned}$$
(20)

where \(w_i(t_f)\) is the opinion of the ith agent at the final time \(t_f\). In Fig. 4 we compare different values of \(V_{w_d}\) as a function of \(c^*\). Here we calculated both the size of the set of controlled agents and the values of \(V_{w_d}\), starting from the same uniform initial distribution of opinions and the same graph with uniform initial degree distribution. It can be observed how the control is capable of driving the overall dynamics toward the desired state by acting only on a portion of the nodes.
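For reference, the interaction kernels (19) and the consensus measure (20) used in these tests can be written compactly as follows (a sketch with the parameter values quoted above; the function names are our own):

```python
import numpy as np

lam, beta, Delta = 1 / 100, 1.0, 0.4     # lambda, beta and the confidence bound

def K(ci, cj):
    """Kernel in (19): highly connected agents are harder to influence
    (factor in c_i) but exert more influence (factor in c_j)."""
    return np.exp(-lam * ci) * (1.0 - np.exp(-beta * cj))

def H(wi, wj):
    """Bounded confidence function in (19)."""
    return 1.0 if abs(wi - wj) <= Delta else 0.0

def V(w_final, w_d):
    """Mean square displacement (20) at the final time t_f."""
    w_final = np.asarray(w_final)
    return np.sum((w_final - w_d) ** 2) / (len(w_final) - 1)
```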

Fig. 4.

Left: the red squared plot indicates the size of the set of controlled agents at the final time \(t_f\) as a function of \(c^*\), whereas the blue line indicates the mean square displacement \(V_{w_d}\). Right: values of the control u at each time step for \(c^*=10,20,30\). In the numerical test we assumed \(\varDelta =0.4, \varDelta t=5\times 10^{-2}, \kappa =0.1\). (Color figure online)

5 Conclusions and Perspectives

In this short note we focus our attention on a control problem for the dynamics of opinions over a time evolving network. We show that the introduction of a suitable selective control, depending on the connection degree of the agent’s node, is capable of driving the overall opinion toward consensus. In a related work we have considered this problem in a mean-field setting, where the number of agents, and therefore of nodes, is very large [6]. In future works we plan to concentrate on the model predictive control setting, where the evolution of the control is based on the evolution of the network, and on the case of a varying prediction horizon acting on a given portion of the agents.