
Boundedness and Convergence of Split-Complex Back-Propagation Algorithm with Momentum and Penalty

Neural Processing Letters

Abstract

This paper investigates the split-complex back-propagation algorithm with momentum and penalty for training complex-valued neural networks, where the momentum term is used to accelerate the convergence of the algorithm and the penalty term is used to control the magnitude of the network weights. Sufficient conditions on the learning rate, the momentum factor, the penalty coefficient, and the activation functions are proposed to establish the theoretical results of the algorithm. We theoretically prove the boundedness of the network weights during the training process, which is usually taken as a precondition for convergence analysis in the literature. The monotonicity of the error function and the convergence of the algorithm are also guaranteed.
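To make the update being analyzed concrete, the following is a minimal sketch of one batch step of split-complex back-propagation with momentum and penalty for a single complex-valued layer. It is not the paper's implementation: it assumes a tanh activation applied separately to the real and imaginary parts, a squared-magnitude penalty (lam/2)*||w||^2, and the standard momentum recursion on the previous weight increment; all names (g, eta, tau, lam, split_complex_step) are illustrative rather than the paper's notation.

```python
import numpy as np

def g(x):
    """Real-valued activation, applied separately to Re and Im parts."""
    return np.tanh(x)

def g_prime(x):
    """Derivative of the activation."""
    return 1.0 - np.tanh(x) ** 2

def split_complex_step(w, w_prev, z, d, eta=0.1, tau=0.5, lam=1e-4):
    """One batch update of split-complex BP with momentum and penalty.

    w      -- current complex weights, shape (M,)
    w_prev -- weights from the previous step (for the momentum term)
    z      -- complex input patterns, shape (N, M)
    d      -- complex targets, shape (N,)
    eta    -- learning rate; tau -- momentum factor; lam -- penalty coefficient
    """
    u = z @ w                                     # complex pre-activations
    y = g(u.real) + 1j * g(u.imag)                # split-complex output
    e = y - d                                     # complex output error
    # The real and imaginary channels are differentiated separately,
    # then recombined into one complex "delta" signal.
    delta = e.real * g_prime(u.real) + 1j * (e.imag * g_prime(u.imag))
    grad = z.conj().T @ delta / len(d)            # gradient of the (averaged) error term
    grad += lam * w                               # gradient of the (lam/2)*||w||^2 penalty
    w_next = w - eta * grad + tau * (w - w_prev)  # gradient step plus momentum
    return w_next, w                              # new weights, and w for the next momentum term
```

Iterating this step with w_prev trailing one step behind w reproduces the momentum recursion Δw^{n+1} = −η∇E(w^n) + τΔw^n; conditions of the kind the paper proposes (roughly, η, τ, and λ small enough relative to bounds on the activation) are what keep ||w|| bounded and the error monotonically decreasing.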



Acknowledgments

This research was supported by the National Natural Science Foundation of China (Nos. 61101228, 10871220), the China Postdoctoral Science Foundation (No. 2012M520623), the Research Fund for the Doctoral Program of Higher Education of China (No. 20122304120028), and the Fundamental Research Funds for the Central Universities.

Author information

Correspondence to Huisheng Zhang.


About this article

Cite this article

Zhang, H., Xu, D. & Zhang, Y. Boundedness and Convergence of Split-Complex Back-Propagation Algorithm with Momentum and Penalty. Neural Process Lett 39, 297–307 (2014). https://doi.org/10.1007/s11063-013-9305-x
