A subgradient-based neural network to constrained distributed convex optimization

  • Special Issue (S.I.): Interpretation of Deep Learning
  • Published in: Neural Computing and Applications

Abstract

With the development of artificial intelligence and big data, distributed optimization has shown great potential in machine learning research, particularly in deep learning. As an important class of distributed optimization problems, the nonsmooth distributed optimization problem over an undirected multi-agent system with inequality and equality constraints frequently arises in deep learning. To solve this problem cooperatively, a novel neural network with a lower-dimensional solution space is presented. It is demonstrated that the state solution of the proposed approach enters the feasible region, achieves consensus, and finally converges to the optimal solution set. Moreover, the proposed approach does not depend on the boundedness of the feasible region, which is a necessary assumption for some simplified neural networks. Finally, simulation results and a practical application are given to demonstrate its efficacy and practicability.
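The appendix works with the quantity \(\gamma (t)+(t+1)^{2}\eta (t)+(t+1){\mathbf {L}}{\mathbf {x}}(t)+(t+1)^{3}\xi (t)\), which suggests network dynamics of the form \(\dot{{\mathbf {x}}}(t)\in -\big (\partial {\mathbf {f}}({\mathbf {x}})+(t+1)^{2}\partial G({\mathbf {x}})+(t+1){\mathbf {L}}{\mathbf {x}}+(t+1)^{3}\partial D({\mathbf {x}})\big )\), where \({\mathbf {L}}\) is the graph Laplacian and \(G\), \(D\) penalize the inequality and equality constraints. The following is a minimal forward-Euler sketch of such time-varying subgradient dynamics on a toy consensus problem, not the authors' implementation: the objective \(f_{i}(x)=|x-a_{i}|\), the graph, the step size, and the omission of the penalty terms \(G\) and \(D\) are all illustrative assumptions.

```python
import numpy as np

# Toy instance: three agents cooperatively minimize sum_i |x_i - a_i|
# subject to consensus x_1 = x_2 = x_3; the minimizer is the median of a.
# Dynamics (assumed form, see lead-in): dx/dt = -(gamma + (t+1) * L @ x),
# where gamma is a subgradient of the local objectives.  The penalty terms
# for G and D are dropped because this toy problem has no inequality or
# equality constraints.

a = np.array([1.0, 4.0, 6.0])            # local data; median = 4.0
L = np.array([[ 1.0, -1.0,  0.0],        # Laplacian of an undirected path graph
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

x = np.array([0.0, 8.0, 2.0])            # arbitrary initial state
dt, T = 1e-3, 30.0                       # Euler step size and time horizon

t = 0.0
while t < T:
    gamma = np.sign(x - a)               # a subgradient of f_i(x_i) = |x_i - a_i|
    x = x - dt * (gamma + (t + 1.0) * (L @ x))
    t += dt

print(x)   # all three entries should be close to the median 4.0
```

Because \(\mathbf {1}^{\mathrm {T}}{\mathbf {L}}=0\), the growing Laplacian weight forces consensus while the sum of the states is driven to a point where the local subgradients cancel, here the median of the \(a_{i}\).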



Acknowledgements

This research is supported by the National Natural Science Foundation of China (61773136, 11871178).

Author information


Corresponding author

Correspondence to Wei Bian.

Ethics declarations

Conflict of interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "A Subgradient-based Neural Network to Constrained Distributed Convex Optimization."

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Appendix

Proof

Combining \(\lim _{k\rightarrow +\infty }{\mathbf {x}}(t_{k})=\bar{{\mathbf {x}}}\) with Theorem 3, we have

$$\begin{aligned} \lim _{k\rightarrow +\infty }{\mathbf {x}}^{\mathrm {T}}(t_{k}){\mathbf {L}}{\mathbf {x}}(t_{k})=0, \end{aligned}$$

which means that \(\lim _{k\rightarrow +\infty }{\mathbf {L}}{\mathbf {x}}(t_{k})={\mathbf {L}}\bar{{\mathbf {x}}}=0\), i.e., the components of \(\bar{{\mathbf {x}}}\) reach consensus. Therefore, combined with Theorem 2, \(\bar{{\mathbf {x}}}\in \varOmega\) is a feasible solution of problem (2).
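For completeness, the step from \(\bar{{\mathbf {x}}}^{\mathrm {T}}{\mathbf {L}}\bar{{\mathbf {x}}}=0\) to \({\mathbf {L}}\bar{{\mathbf {x}}}=0\) uses only the positive semidefiniteness of \({\mathbf {L}}\): writing \({\mathbf {L}}={\mathbf {L}}^{1/2}{\mathbf {L}}^{1/2}\),

$$\begin{aligned} 0=\bar{{\mathbf {x}}}^{\mathrm {T}}{\mathbf {L}}\bar{{\mathbf {x}}}=\Vert {\mathbf {L}}^{1/2}\bar{{\mathbf {x}}}\Vert ^{2} \ \Longrightarrow \ {\mathbf {L}}^{1/2}\bar{{\mathbf {x}}}=0 \ \Longrightarrow \ {\mathbf {L}}\bar{{\mathbf {x}}}={\mathbf {L}}^{1/2}\big ({\mathbf {L}}^{1/2}\bar{{\mathbf {x}}}\big )=0. \end{aligned}$$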

By \(\lim _{k\rightarrow +\infty }H\left( t_{k}, {\mathbf {x}}(t_{k})\right) =0,\) there exist \(\eta (t_{k})\in \partial G({\mathbf {x}}(t_{k}))\), \(\gamma (t_{k})\in \partial {\mathbf {f}}({\mathbf {x}}(t_{k}))\) and \(\xi (t_{k}) \in \partial D({\mathbf {x}}(t_{k}))\) satisfying

$$\begin{aligned}&\lim _{k\rightarrow +\infty }\big (\gamma (t_{k})+(t_{k}+1)^{2}\eta (t_{k}) +(t_{k}+1){\mathbf {L}}{\mathbf {x}}(t_{k})+(t_{k}+1)^{3}\xi (t_{k})\big )=0. \end{aligned}$$
(39)

Moreover, since \(\partial {\mathbf {f}}\) is upper semicontinuous (u.s.c.) with compact values and \({\mathbf {x}}(t_{k})\rightarrow \bar{{\mathbf {x}}}\), passing to a subsequence if necessary, one has

$$\begin{aligned}&\lim _{k\rightarrow +\infty } \gamma (t_{k})={\bar{\gamma }} \in \partial {\mathbf {f}}(\bar{{\mathbf {x}}}). \end{aligned}$$
(40)

Meanwhile, based on the convexity of \(G(\cdot )\) and \(D(\cdot )\) on \(\varOmega\), and the positive semidefiniteness of \({\mathbf {L}}\), for any \({\mathbf {y}}\in \varOmega\), we have

$$\begin{aligned} \left\{ \begin{array}{ll} ({\mathbf {y}}-{\mathbf {x}}(t_{k}))^{\mathrm {T}}\eta (t_{k})\le G({\mathbf {y}})-G({\mathbf {x}}(t_{k}))= 0,\\ ({\mathbf {y}}-{\mathbf {x}}(t_{k}))^{\mathrm {T}}\xi (t_{k})\le D({\mathbf {y}})-D({\mathbf {x}}(t_{k}))= 0,\\ ({\mathbf {y}}-{\mathbf {x}}(t_{k}))^{\mathrm {T}}{\mathbf {L}}{\mathbf {x}}(t_{k})=-{\mathbf {x}}^{\mathrm {T}}(t_{k}){\mathbf {L}}{\mathbf {x}}(t_{k})\le 0. \end{array} \right. \end{aligned}$$
(41)

Hence, from (39)-(41), for any \({\mathbf {y}}\in \varOmega =S_{1}\cap S_{2}\cap S_{3}\), one has

$$\begin{aligned} 0=&\lim _{k\rightarrow +\infty }({\mathbf {y}}-{\mathbf {x}}(t_{k}))^{\mathrm {T}}\Big \{\gamma (t_{k})+(t_{k}+1)^{2}\eta (t_{k})+(t_{k}+1){\mathbf {L}}{\mathbf {x}}(t_{k})\\&+(t_{k}+1)^{3}\xi (t_{k})\Big \}\\ \le&\limsup _{k\rightarrow +\infty }({\mathbf {y}}-{\mathbf {x}}(t_{k}))^{\mathrm {T}}\Big \{(t_{k}+1){\mathbf {L}}{\mathbf {x}}(t_{k})+\gamma (t_{k})\Big \}\\ \le&\limsup _{k\rightarrow +\infty }({\mathbf {y}}-{\mathbf {x}}(t_{k}))^{\mathrm {T}}\gamma (t_{k})\\ =&({\mathbf {y}}-\bar{{\mathbf {x}}})^{\mathrm {T}}{\bar{\gamma }}. \end{aligned}$$

By the convexity of \({\mathbf {f}}\) on \(\varOmega\) and \({\bar{\gamma }}\in \partial {\mathbf {f}}(\bar{{\mathbf {x}}})\), the subgradient inequality together with \(({\mathbf {y}}-\bar{{\mathbf {x}}})^{\mathrm {T}}{\bar{\gamma }}\ge 0\) gives

$$\begin{aligned} {\mathbf {f}}({\mathbf {y}}) \ge {\mathbf {f}}(\bar{{\mathbf {x}}})+({\mathbf {y}}-\bar{{\mathbf {x}}})^{\mathrm {T}}{\bar{\gamma }} \ge {\mathbf {f}}(\bar{{\mathbf {x}}}), \quad \forall {\mathbf {y}}\in \varOmega , \end{aligned}$$
(42)

which means that \(\bar{{\mathbf {x}}}\) is an optimal solution of problem (2). \(\square\)


About this article


Cite this article

Wei, Z., Jia, W., Bian, W. et al. A subgradient-based neural network to constrained distributed convex optimization. Neural Comput & Applic 35, 9961–9971 (2023). https://doi.org/10.1007/s00521-022-07003-z

