Abstract
As artificial intelligence and big data develop, distributed optimization shows great potential in machine learning research, particularly deep learning. As an important class of distributed optimization problems, the nonsmooth distributed optimization problem over an undirected multi-agent system with inequality and equality constraints arises frequently in deep learning. To solve this optimization problem cooperatively, a novel neural network with a lower-dimensional solution space is presented. It is demonstrated that the state solution of the proposed approach enters the feasible region. It is also proved that the state solution achieves consensus and finally converges to the optimal solution set. Moreover, the proposed approach does not depend on the boundedness of the feasible region, which is a necessary assumption in some simplified neural networks. Finally, simulation results and a practical application are given to demonstrate its efficacy and practicability.
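For readers who want a concrete picture of this class of methods, the following is a minimal, discretized sketch of a consensus-plus-subgradient scheme for minimizing a sum of nonsmooth local objectives over an undirected graph. It is an illustration only: the graph, the local objectives \(f_i\), the box-constraint projection, the step size, and the penalty weight are hypothetical placeholders, not the neural network proposed in this paper.

```python
# Illustrative sketch only (not the paper's exact dynamics): a discretized
# consensus-plus-subgradient iteration for min_x sum_i f_i(x) over an
# undirected graph, with a simple box constraint handled by projection.
# The graph, objectives, projection, step size and penalty are assumptions.
import numpy as np

# Undirected 4-agent ring: adjacency A and graph Laplacian L.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A

# Nonsmooth local objectives f_i(x) = |x - a_i|; np.sign gives a subgradient.
a = np.array([1.0, 2.0, 3.0, 4.0])

def project(x, lo=-5.0, hi=5.0):
    """Projection onto a box, standing in for the constraint set."""
    return np.clip(x, lo, hi)

x = np.zeros(4)              # agent i keeps its own local estimate x[i]
step, penalty = 1e-3, 2.0
for _ in range(200000):
    g = np.sign(x - a)       # local subgradients of |x - a_i|
    # -L @ x drives consensus; -g drives each agent toward its own minimizer
    x = project(x - step * (penalty * (L @ x) + g))

print(x)  # entries agree and lie near a minimizer of sum_i |x - a_i|
```

The update combines a consensus term \(-{\mathbf {L}}{\mathbf {x}}\), which drives the agents' estimates to agree, with local subgradient steps, which drive the common estimate toward a minimizer of the sum of the local objectives; this mirrors, in discrete time, the roles that the consensus and subgradient terms play in continuous-time neurodynamic approaches.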
Acknowledgements
This research is supported by the National Natural Science Foundation of China (61773136, 11871178).
Ethics declarations
Conflict of interest
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled “A Subgradient-based Neural Network to Constrained Distributed Convex Optimization.”
Appendix
Proof
Combining \(\lim _{k\rightarrow +\infty }{\mathbf {x}}(t_{k})=\bar{{\mathbf {x}}}\) with Theorem 3, we have
which means that \(\lim _{k\rightarrow +\infty }{\mathbf {L}}{\mathbf {x}}(t_{k})={\mathbf {L}}\bar{{\mathbf {x}}}=0\). Therefore, combined with Theorem 2, \(\bar{{\mathbf {x}}}\in \varOmega\) is a feasible solution of problem (2).
By \(\lim _{k\rightarrow +\infty }H\left( t_{k}, {\mathbf {x}}(t_{k})\right) =0,\) there exist \(\eta (t_{k})\in \partial G({\mathbf {x}}(t_{k}))\), \(\gamma (t_{k})\in \partial {\mathbf {f}}({\mathbf {x}}(t_{k}))\) and \(\xi (t_{k}) \in \partial D({\mathbf {x}}(t_{k}))\) satisfying
Moreover, combined with the upper semicontinuity (u.s.c.) of \(\partial {\mathbf {f}}\), one has
Meanwhile, based on the convexity of \(G(\cdot )\) and \(D(\cdot )\) on \(\varOmega\) and the positive semidefiniteness of \({\mathbf {L}}\), for any \({\mathbf {y}}\in \varOmega\), we have
Hence, from (39)-(41), for any \({\mathbf {y}}\in \varOmega =S_{1}\cap S_{2}\cap S_{3}\), one has
By the convexity of \({\mathbf {f}}\) on \(\varOmega\), one has
which means that \(\bar{{\mathbf {x}}}\) is an optimal solution of problem (2). \(\square\)
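Remark (added for reference). The final step rests on the standard subgradient inequality for the convex function \({\mathbf {f}}\); writing \(\bar{\gamma }\in \partial {\mathbf {f}}(\bar{{\mathbf {x}}})\) for a limiting subgradient (our notation, not a symbol from the paper), it reads
\[
{\mathbf {f}}({\mathbf {y}})\;\ge \;{\mathbf {f}}(\bar{{\mathbf {x}}})+\bar{\gamma }^{\top }\left( {\mathbf {y}}-\bar{{\mathbf {x}}}\right) ,\qquad \forall \,{\mathbf {y}}\in \varOmega ,\ \bar{\gamma }\in \partial {\mathbf {f}}(\bar{{\mathbf {x}}}),
\]
so once \(\bar{\gamma }^{\top }({\mathbf {y}}-\bar{{\mathbf {x}}})\ge 0\) is established for all \({\mathbf {y}}\in \varOmega\), it follows that \({\mathbf {f}}({\mathbf {y}})\ge {\mathbf {f}}(\bar{{\mathbf {x}}})\) on \(\varOmega\), i.e., \(\bar{{\mathbf {x}}}\) is optimal.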
Cite this article
Wei, Z., Jia, W., Bian, W. et al. A subgradient-based neural network to constrained distributed convex optimization. Neural Comput & Applic 35, 9961–9971 (2023). https://doi.org/10.1007/s00521-022-07003-z