Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Neural-dynamics methods for solving quadratic programming (QP) problems have been studied for decades. Their defining feature is that they generate a continuous trajectory from the initial point that converges to the solution. In particular, the Z-type neural dynamics (ZND) developed in recent years converges exactly to the time-dependent optimal solution of a time-dependent QP in the ideal, noise-free case. However, noise substantially degrades the accuracy of neural-dynamics models during the solving process, and existing neural-dynamics methods offer only limited noise tolerance, which can seriously restrict their use in practical applications. By exploiting the Z-type design formula and two nonlinear activation functions, this work proposes a prescribed-time convergent and noise-tolerant ZND (PTCNTZND) model for solving time-dependent QPs in noisy environments. Theoretical analysis shows that the PTCNTZND model converges to the time-dependent optimal solution within a prescribed time and possesses inherent noise tolerance; an upper bound on the convergence time is also derived. Finally, experiments verify the performance of the PTCNTZND model, and the results substantiate its superior robustness and convergence compared with existing ZND models for solving time-dependent QPs.
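
To make the idea concrete, the sketch below illustrates the conventional Z-type design formula applied to a time-varying equality-constrained QP. It is not the authors' PTCNTZND model (whose two nonlinear activation functions and noise-tolerant term are detailed in the full text); the problem data, the design parameter gamma, and the tanh activation are illustrative assumptions only.

```python
import numpy as np

# Conventional Z-type neural dynamics (ZND) for a time-varying QP
#   minimize  0.5 * x(t)' P(t) x(t) + q(t)' x(t)   subject to  A(t) x(t) = b(t).
# The KKT conditions give W(t) y(t) = g(t), with
#   W = [[P, A'], [A, 0]],  y = [x; lam],  g = [-q; b].
# Z-type design formula on the error e(t) = W y - g:  de/dt = -gamma * phi(e),
# which yields  W * dy/dt = -dW/dt * y + dg/dt - gamma * phi(W y - g).

gamma = 50.0          # design parameter controlling convergence speed
phi = np.tanh         # element-wise activation (illustrative choice)

# Illustrative, assumed problem data (not taken from the paper)
def P(t): return np.array([[4.0 + np.sin(t), 1.0], [1.0, 4.0 + np.cos(t)]])
def q(t): return np.array([np.sin(t), np.cos(t)])
def A(t): return np.array([[1.0, np.cos(t)]])
def b(t): return np.array([np.sin(2.0 * t)])

def W(t):
    Pt, At = P(t), A(t)
    m = At.shape[0]
    return np.block([[Pt, At.T], [At, np.zeros((m, m))]])

def g(t): return np.concatenate([-q(t), b(t)])

def ddt(f, t, h=1e-6):
    # numerical time derivative of a time-varying matrix/vector
    return (f(t + h) - f(t - h)) / (2.0 * h)

# Forward-Euler integration of the ZND dynamics
dt, T = 1e-4, 5.0
y = np.zeros(3)       # state [x1, x2, lambda], arbitrary initial point
for k in range(int(T / dt)):
    t = k * dt
    rhs = -ddt(W, t) @ y + ddt(g, t) - gamma * phi(W(t) @ y - g(t))
    y += dt * np.linalg.solve(W(t), rhs)

print("x(5) ~", y[:2], " residual norm:", np.linalg.norm(W(T) @ y - g(T)))
```

In this ideal, noise-free setting, a larger gamma and a steeper activation drive the residual W y - g toward zero faster; according to the abstract, the paper's contribution is to combine the Z-type design formula with two nonlinear activation functions so that convergence occurs within a prescribed time and remains accurate under additive noise.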





Acknowledgements

This work was supported by the Hunan Natural Science Foundation of China (Grant Nos. 2020JJ4511 and 2020JJ4510).

Author information

Corresponding author

Correspondence to Bolin Liao.

Ethics declarations

Conflicts of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

For the PTCNTZND model (4) with AF (6), defining the Lyapunov function candidate \(w(t)=|u_{j}(t)|\), we have

$$\begin{aligned} {\dot{w}}(t)\le -\alpha \bigg (r_{1}w^{i}(t)+ r_{2}w^{1/i}(t)\bigg ). \end{aligned}$$
(16)

For inequality (16), because \(0<i<1\) and \(1/i>1\), we obtain

$$\begin{aligned} {\dot{w}}(t)\le -\alpha r_{2}w^{1/i}(t) \end{aligned}$$
(17)

if \(w(t)>1\), and

$$\begin{aligned} {\dot{w}}(t)\le -\alpha r_{1}w^{i}(t) \end{aligned}$$
(18)

for \(w(t)\le 1\). If \(w(0)>1\), inequality (17) guarantees \(w(t)\le 1\) for

$$\begin{aligned} t\ge \frac{1}{\alpha r_{2}(1/i-1)}. \end{aligned}$$

If \(w(t_0)\le 1\), it can be derived from (18) that \(w(t)=0\) for

$$\begin{aligned} t\ge t_0+\frac{1}{\alpha r_{1}(1-i)}. \end{aligned}$$

Therefore, for any initial state \(w(0)=|u_{j}(0)|\), when

$$\begin{aligned} t \ge \frac{1}{\alpha r_{1}(1-i)}+\frac{1}{\alpha r_{2}(1/i-1)}, \end{aligned}$$

we have \(w(t)=|u_{j}(t)|=0\). This means that the convergence time \(t_{j}\) of the jth subsystem of the PTCNTZND model satisfies

$$\begin{aligned} t_{j}\le \frac{1}{\alpha r_{1}(1-i)}+\frac{1}{\alpha r_{2}(1/i-1)}. \end{aligned}$$

The proof is thus completed.
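
As a quick sanity check on the derived bound, the following sketch integrates the scalar comparison system corresponding to the equality case of inequality (16) and compares the observed settling time with the theoretical upper bound. The parameter values and the initial state are assumptions chosen only for illustration.

```python
import numpy as np

# Sanity check of the Appendix bound for the scalar comparison system
# (equality case of inequality (16)):
#     dw/dt = -alpha * (r1 * w**i + r2 * w**(1/i)),   0 < i < 1,  w(0) = |u_j(0)|.
# The proof gives w(t) = 0 for
#     t >= 1/(alpha*r1*(1 - i)) + 1/(alpha*r2*(1/i - 1)).
# alpha, r1, r2, i and w(0) below are illustrative assumptions.

alpha, r1, r2, i = 2.0, 1.0, 1.0, 0.5
bound = 1.0 / (alpha * r1 * (1.0 - i)) + 1.0 / (alpha * r2 * (1.0 / i - 1.0))

dt, t, w = 1e-5, 0.0, 10.0            # w(0) > 1 exercises both phases of the proof
while w > 0.0 and t < 2.0 * bound:
    w -= dt * alpha * (r1 * w**i + r2 * w**(1.0 / i))
    w = max(w, 0.0)                   # finite-time dynamics can step past zero
    t += dt

print(f"theoretical bound : {bound:.4f} s")
print(f"simulated time    : {t:.4f} s")
```

With these values the theoretical bound evaluates to 1.5 s, and the simulated settling time should come out below it, consistent with the prescribed-time analysis above.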


About this article


Cite this article

Liao, B., Wang, Y., Li, W. et al. Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming. Neural Comput & Applic 33, 5327–5337 (2021). https://doi.org/10.1007/s00521-020-05356-x

