Abstract
This paper considers a class of nonsmooth distributed optimization problems on multi-agent systems, in which each agent holds private local information and all agents collectively minimize a sum of local cost functions. Taking the restrictions on communication among agents into consideration, a nonautonomous-differential-inclusion neurodynamic approach is proposed over a weighted topology graph. The convergence of the neural network is analyzed, and its state exponentially converges to an optimal solution of the distributed optimization problem under certain conditions. Since no additional conditions are required to guarantee convergence, the neural network is superior to distributed algorithms based on the penalty method, which need to estimate penalty parameters. Compared with some existing approaches, the neural network has the advantage of requiring fewer state variables. Finally, illustrative examples and an application to distributed quantile regression are presented to verify the effectiveness of the proposed neural network.
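As a rough numerical illustration of this setting (a sketch only, not the neurodynamic approach proposed in the paper), the following Python script simulates an Euler-discretized consensus-penalty subgradient flow for a scalar distributed quantile-regression problem; the ring topology, edge weights, data, penalty gain, and step size are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, tau, dt = 4, 0.25, 2e-4

# Undirected connected ring with unit edge weights (a_min = 1).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Each agent holds 20 private samples; its nonsmooth local cost is the
# pinball loss f_i(x) = sum_k rho_tau(b_ik - x), so the consensus optimum
# of sum_i f_i is the tau-quantile of the pooled samples.
data = [rng.normal(loc=float(i), size=20) for i in range(n)]

# Penalty gain chosen comfortably above an exact-penalty threshold of the
# kind characterized by sigma_0 in the appendix (the exact constant
# depends on how the penalty is normalized).
sigma = 80.0

def pinball_subgrad(x, samples):
    # One element of the subdifferential of sum_k rho_tau(b_k - x).
    r = samples - x
    return np.sum(np.where(r > 0, -tau, 1.0 - tau))

x = rng.normal(size=n)  # arbitrary initial agent states
for _ in range(100000):
    g = np.array([pinball_subgrad(x[i], data[i]) for i in range(n)])
    # Subgradient of the weighted consensus penalty sum_j a_ij * |x_i - x_j|.
    g += sigma * np.sum(A * np.sign(x[:, None] - x[None, :]), axis=1)
    x = x - dt * g

print(x)                                       # near-consensus agent states
print(np.quantile(np.concatenate(data), tau))  # pooled tau-quantile

Because each local pinball loss is nonsmooth, the continuous-time limit of this iteration is a differential inclusion; with the penalty gain above the threshold, the agents should settle near the \(\tau\)-quantile of the pooled samples.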
Acknowledgements
This research was supported by the National Natural Science Foundation of China (61773136, 11871178).
Ethics declarations
Conflicts of interest
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and that there is no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "A Nonautonomous-Differential-Inclusion Neurodynamic Approach for Nonsmooth Distributed Optimization on Multi-Agent Systems."
Appendix
Proposition 4
There exists a constant \(\sigma _0>0\) such that, for any \(\sigma >\sigma _0\), the minimum points of \(D_{\sigma }(\mathbf{x} )\) are also optimal solutions of the distributed optimization problem (2), and vice versa.
Proof
Let \(\varOmega _\sigma\) be the set of minimum points of \(D_{\sigma }(\mathbf{x} )\) and let \({{\bar{x}}}=\frac{1}{n}\sum \nolimits _{i=1}^n x_i\). Then \(\sum \nolimits _{k=1}^n\parallel x_k-{\bar{x}}\parallel \le \frac{1}{n}\sum \nolimits _{k=1}^n\sum \nolimits _{l=1}^n\parallel x_k-x_l\parallel .\) According to Assumption 2, for any \(k,l\in \{1,2,...,n\}\), there exists a path \(P_{kl}\subseteq {\mathcal {E}}\) connecting the two vertices. Noticing that the weighted graph \({\mathcal {G}}\) is undirected and connected, we have
where \(a_{\rm min}\) is the least edge weight in the weighted graph \({\mathcal {G}}\). By Assumption 1, it follows immediately that
where \({\tilde{M}}=\max \nolimits _{i\in {\mathcal {V}}}\{M_i\}\). Then, in light of (35), taking \(\sigma _0=n{\tilde{M}}/(2a_{\rm min})\) yields
\(\square\)
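To make the role of \(\sigma _0\) concrete, the following is a hedged sketch of bounds of the type (35) and (36); it assumes the penalty term of \(D_{\sigma }(\mathbf{x} )\) is \(\sigma \sum \nolimits _{i=1}^n\sum \nolimits _{j=1}^n a_{ij}\parallel x_i-x_j\parallel\), which is an assumption here, since \(D_{\sigma }\) is defined in the main text and may be normalized differently. Bounding each pairwise distance along the connecting path \(P_{kl}\) and using \(a_{uv}\ge a_{\rm min}\) on every edge,

\[ \sum _{k=1}^n\parallel x_k-{\bar{x}}\parallel \le \frac{1}{n}\sum _{k=1}^n\sum _{l=1}^n\sum _{(u,v)\in P_{kl}}\parallel x_u-x_v\parallel \le \frac{n}{a_{\rm min}}\sum _{(u,v)\in {\mathcal {E}}}a_{uv}\parallel x_u-x_v\parallel =\frac{n}{2a_{\rm min}}\sum _{i=1}^n\sum _{j=1}^na_{ij}\parallel x_i-x_j\parallel . \]

With \(\bar{\mathbf{x}}=\mathrm {col}\{{\bar{x}},{\bar{x}},...,{\bar{x}}\}\) and each \(f_i\) Lipschitz with constant \(M_i\le {\tilde{M}}\) (Assumption 1),

\[ f(\bar{\mathbf{x}})-f(\mathbf{x} )\le {\tilde{M}}\sum _{k=1}^n\parallel x_k-{\bar{x}}\parallel \le \frac{n{\tilde{M}}}{2a_{\rm min}}\sum _{i=1}^n\sum _{j=1}^na_{ij}\parallel x_i-x_j\parallel =\sigma _0\sum _{i=1}^n\sum _{j=1}^na_{ij}\parallel x_i-x_j\parallel , \]

so that for any \(\sigma >\sigma _0\),

\[ D_{\sigma }(\mathbf{x} )\ge f(\bar{\mathbf{x}})+(\sigma -\sigma _0)\sum _{i=1}^n\sum _{j=1}^na_{ij}\parallel x_i-x_j\parallel \ge D_{\sigma }(\bar{\mathbf{x}}), \]

with equality only at consensus.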
In the following part, we prove the equivalence between the minimum points of \(f(\mathbf{x} )\) and those of \(D_{\sigma }(\mathbf{x} )\) with the help of (36).
Suppose \(\mathbf{x}^*=\mathrm {col}\{x^*,x^*,...,x^*\}\) is an optimal solution to the distributed optimization problem (2). Combining (36) with the definition of \(D_{\sigma }(\mathbf{x} )\), it follows immediately that \(\mathbf{x}^*\) also minimizes \(D_{\sigma }(\mathbf{x} )\) on \({\mathbb {R}}^{mn}\). That is to say,
Let \(f_{\rm min}\) be the minimum value of \(D_{\sigma }(\mathbf{x} )\) on \({\mathbb {R}}^{mn}\). By the above analysis, \(f_{\rm min}\) is also the minimum value of \(D_{\sigma }(\mathbf{x} )\) on the consensus set \(\varOmega =\{\mathbf{x }\in {\mathbb {R}}^{mn}\mid x_i=x_j,\ \forall i,j\in \{1,2,...,n\}\}\).
Suppose that \(\mathbf{y} ^*=\mathrm {col}\{y_1^*,y_2^*,...,y_n^*\}\) minimizes \(D_{\sigma }(\mathbf{x} )\). Then, for any \(\mathbf{y} =\mathrm {col}\{y,y,...,y\}\in \varOmega\), it is apparent that
Since \(\mathbf{y}^*\) minimizes \(D_{\sigma }(\mathbf{x} )\), it follows from (36) that
Since the vector \(\mathrm {col}\{\frac{1}{n}\sum \limits _{i=1}^ny_i^*,\frac{1}{n}\sum \limits _{i=1}^ny_i^*,...,\frac{1}{n}\sum \limits _{i=1}^ny_i^*\}\) belongs to \(\varOmega\), we have
Combining (38) with (39) gives
If \(\mathbf{y}^*\notin \varOmega\), there must exist \(k,l\in \{1,2,...,n\}\) such that \(\Vert y_k^*-y_l^*\Vert > 0\). Hence, the condition \(\sigma >\sigma _0\) together with (40) implies that
which contradicts the fact that \(D_{\sigma }(\mathbf{y} ^*)=\min \nolimits _{\mathbf{x} \in {\mathbb {R}}^{mn}}{D_{\sigma }(\mathbf{x} )}\le f_{\rm min}\). Thus \(\mathbf{y}^*\in \varOmega\), that is, \(y_i^*=y_j^*\) for all \(i,j\in \{1,2,...,n\}.\)
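Under the same assumed penalty form as in the sketch after the first part of the proof, this contradiction step reads, schematically: since \(\Vert y_k^*-y_l^*\Vert >0\) and the graph is connected, some edge \((i,j)\in {\mathcal {E}}\) satisfies \(a_{ij}\Vert y_i^*-y_j^*\Vert >0\), and hence

\[ D_{\sigma }(\mathbf{y} ^*)\ge f\big (\mathrm {col}\{{\bar{y}}^*,{\bar{y}}^*,...,{\bar{y}}^*\}\big )+(\sigma -\sigma _0)\sum _{i=1}^n\sum _{j=1}^na_{ij}\Vert y_i^*-y_j^*\Vert >f_{\rm min}, \]

where \({\bar{y}}^*=\frac{1}{n}\sum \nolimits _{i=1}^ny_i^*\), since the consensus vector lies in \(\varOmega\) and the second term is strictly positive.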
Write \(\mathbf{y} ^*=\mathrm {col}\{{\tilde{y}}^*,{\tilde{y}}^*,...,{\tilde{y}}^*\}\). By (37) and (38), it follows that
That is to say,
for any \(\mathbf{y} \in \varOmega\). Therefore, \(\mathbf{y} ^*\) is also an optimal solution to the distributed optimization problem (2), and hence \(\varOmega _\sigma \subseteq \varOmega ^*\), which completes the proof.
Cite this article
Wen, X., Wang, Y. & Qin, S. A nonautonomous-differential-inclusion neurodynamic approach for nonsmooth distributed optimization on multi-agent systems. Neural Comput & Applic 33, 13909–13920 (2021). https://doi.org/10.1007/s00521-021-06026-2