Abstract
This paper proposes a hybrid algorithm, called BSA-NNRWs-N, that combines the backtracking search algorithm (BSA) with a neural network with random weights (NNRWs). BSA is used to optimize the hidden-layer parameters of a single-layer feed-forward network (SLFN), while NNRWs derives the output-layer weights. In addition, to avoid over-fitting on the validation set, a new cost function is proposed to replace the root mean square error (RMSE): it adds a constraint that takes the RMSE on both the training and validation sets into account. Experiments on classification and regression data sets show promising performance of the proposed BSA-NNRWs-N.
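The two building blocks described above can be sketched briefly. In an NNRW, the hidden-layer parameters are fixed (here they would be supplied by BSA), and the output weights follow in closed form by least squares. The sketch below, in NumPy, also shows one plausible combined cost; the exact constraint form used in the paper is not reproduced here, so the trade-off weight `lam` and the weighted-sum formulation are assumptions for illustration only.

```python
import numpy as np

def nnrw_output_weights(X, y, W, b):
    """Given fixed hidden-layer parameters (W, b), solve the output-layer
    weights in closed form via the Moore-Penrose pseudo-inverse."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
    return np.linalg.pinv(H) @ y

def rmse(X, y, W, b, beta):
    """Root mean square error of the SLFN on (X, y)."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.sqrt(np.mean((H @ beta - y) ** 2))

def combined_cost(Xtr, ytr, Xval, yval, W, b, lam=0.5):
    """Hypothetical fitness for one BSA candidate (W, b): a weighted sum of
    training and validation RMSE, so candidates that fit the training set
    but generalize poorly are penalized. `lam` is an assumed trade-off
    weight, not a value from the paper."""
    beta = nnrw_output_weights(Xtr, ytr, W, b)
    return (1 - lam) * rmse(Xtr, ytr, W, b, beta) + lam * rmse(Xval, yval, W, b, beta)

# Toy regression data (illustrative only)
rng = np.random.default_rng(0)
Xtr = rng.standard_normal((80, 3)); ytr = np.sin(Xtr.sum(axis=1))
Xval = rng.standard_normal((20, 3)); yval = np.sin(Xval.sum(axis=1))

# One random candidate's hidden parameters, as BSA might propose
W = rng.standard_normal((3, 10)); b = rng.standard_normal(10)
cost = combined_cost(Xtr, ytr, Xval, yval, W, b)
print(cost)
```

In the full algorithm, BSA would evaluate this cost for a population of candidate (W, b) pairs and evolve them toward lower values; the closed-form output weights mean each evaluation requires no gradient-based training.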
Acknowledgments
The authors thank the anonymous reviewers for their very helpful and constructive comments and suggestions. This work was supported by the NSFC Joint Fund with Guangdong of China under Key Project U120158, the Shandong Natural Science Funds for Distinguished Young Scholars under Grant No. JQ201316, and the Fundamental Research Funds of Shandong University No. 2014JC028.
Cite this article
Wang, B., Wang, L., Yin, Y. et al. An Improved Neural Network with Random Weights Using Backtracking Search Algorithm. Neural Process Lett 44, 37–52 (2016). https://doi.org/10.1007/s11063-015-9480-z