Abstract
This paper investigates the split-complex back-propagation algorithm with momentum and penalty for training complex-valued neural networks. The momentum term is used to accelerate the convergence of the algorithm, and the penalty term is used to control the magnitude of the network weights. Sufficient conditions on the learning rate, the momentum factor, the penalty coefficient, and the activation functions are proposed to establish the theoretical results for the algorithm. We prove that the network weights remain bounded during training, a property that is usually assumed as a precondition for convergence analysis in the literature. The monotonicity of the error function and the convergence of the algorithm are also guaranteed.
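For concreteness, the update rule studied here can be sketched as follows; the notation below is a generic reconstruction under the split-complex convention, not necessarily the paper's own. Write each complex weight as w = u + iv and treat the real pair (u, v) as the optimization variable, so all gradients are taken with respect to the real and imaginary parts separately. With error function E, penalty coefficient \lambda, learning rate \eta, and momentum factor \tau, the penalized error and the weight increment are
\[
E_\lambda(w) = E(w) + \lambda \|w\|^2, \qquad
\Delta w^{n+1} = -\eta \, \nabla E_\lambda(w^n) + \tau \, \Delta w^n, \qquad
w^{n+1} = w^n + \Delta w^{n+1}.
\]
In this sketch, the penalty term \lambda \|w\|^2 is what controls the magnitude of the weights (and hence yields boundedness), while the momentum term \tau \, \Delta w^n is what accelerates convergence.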
Acknowledgments
This research is supported by the National Natural Science Foundation of China (Nos. 61101228 and 10871220), the China Postdoctoral Science Foundation (No. 2012M520623), the Research Fund for the Doctoral Program of Higher Education of China (No. 20122304120028), and the Fundamental Research Funds for the Central Universities.
Cite this article
Zhang, H., Xu, D. & Zhang, Y. Boundedness and Convergence of Split-Complex Back-Propagation Algorithm with Momentum and Penalty. Neural Process Lett 39, 297–307 (2014). https://doi.org/10.1007/s11063-013-9305-x