
An Improved Neural Network with Random Weights Using Backtracking Search Algorithm


Abstract

This paper proposes a hybrid algorithm, called BSA-NNRWs-N, that combines the backtracking search algorithm (BSA) with a neural network with random weights (NNRWs). BSA is utilized to optimize the hidden-layer parameters of a single-layer feed-forward network (SLFN), and NNRWs is used to derive the output-layer weights. In addition, to avoid over-fitting on the validation set, a new cost function is proposed to replace the root mean square error (RMSE): it adds a constraint by considering the RMSE on both the training and validation sets. Experiments on classification and regression data sets show the promising performance of the proposed BSA-NNRWs-N.
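To make the pipeline in the abstract concrete, the sketch below shows, in Python/NumPy, how such a hybrid could be wired together. It is a minimal illustration, not the paper's implementation: it assumes sigmoid hidden units, a pseudo-inverse least-squares solve for the output weights, a simplified BSA crossover (a random subset of genes rather than the paper's binary map), and a weighted sum of training and validation RMSE with an assumed trade-off weight `lam` standing in for the paper's constraint. All names (`bsa_nnrws`, `lam`, population and iteration settings) are illustrative.

```python
# Minimal sketch of a BSA + NNRWs hybrid, under the assumptions stated above.
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, params, n_hidden):
    """Decode a flat parameter vector into hidden weights/biases and
    return the hidden-layer activation matrix H (sigmoid units assumed)."""
    d = X.shape[1]
    W = params[: d * n_hidden].reshape(d, n_hidden)
    b = params[d * n_hidden :]
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def output_weights(H, Y):
    """NNRWs step: output-layer weights via least squares (pseudo-inverse)."""
    return np.linalg.pinv(H) @ Y

def rmse(H, beta, Y):
    return np.sqrt(np.mean((H @ beta - Y) ** 2))

def cost(params, Xtr, Ytr, Xva, Yva, n_hidden, lam=0.5):
    """Modified cost: training RMSE plus a term constraining validation
    RMSE. The weighted-sum form and lam are assumptions, not the paper's
    exact constraint."""
    Htr = hidden_output(Xtr, params, n_hidden)
    beta = output_weights(Htr, Ytr)
    Hva = hidden_output(Xva, params, n_hidden)
    return (1 - lam) * rmse(Htr, beta, Ytr) + lam * rmse(Hva, beta, Yva)

def bsa_nnrws(Xtr, Ytr, Xva, Yva, n_hidden=20, pop=30, iters=100,
              mixrate=1.0, low=-1.0, high=1.0):
    """Backtracking search over the hidden-layer parameters of an SLFN."""
    dim = Xtr.shape[1] * n_hidden + n_hidden
    P = rng.uniform(low, high, (pop, dim))      # current population
    oldP = rng.uniform(low, high, (pop, dim))   # historical population
    fit = np.array([cost(p, Xtr, Ytr, Xva, Yva, n_hidden) for p in P])
    for _ in range(iters):
        # Selection-I: occasionally redirect the historical population,
        # then shuffle it (BSA's "a < b" rule)
        if rng.random() < rng.random():
            oldP = P.copy()
        oldP = oldP[rng.permutation(pop)]
        # Mutation: scaled difference toward the historical population
        F = 3.0 * rng.standard_normal()
        mutant = P + F * (oldP - P)
        # Crossover (simplified): each trial inherits a random gene subset
        trial = P.copy()
        for i in range(pop):
            n_swap = max(1, int(mixrate * rng.random() * dim))
            idx = rng.choice(dim, n_swap, replace=False)
            trial[i, idx] = mutant[i, idx]
        np.clip(trial, low, high, out=trial)    # boundary control
        # Selection-II: greedy replacement of worse individuals
        tfit = np.array([cost(t, Xtr, Ytr, Xva, Yva, n_hidden)
                         for t in trial])
        better = tfit < fit
        P[better], fit[better] = trial[better], tfit[better]
    best = P[np.argmin(fit)]
    Htr = hidden_output(Xtr, best, n_hidden)
    return best, output_weights(Htr, Ytr)
```

With hypothetical arrays `Xtr, Ytr, Xva, Yva`, calling `bsa_nnrws(Xtr, Ytr, Xva, Yva)` returns the best hidden-layer parameter vector and the corresponding output weights, which can be fed through `hidden_output` to score a test set.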



Acknowledgments

The authors thank the anonymous reviewers for their very helpful and constructive comments and suggestions. This work was supported by the NSFC Joint Fund with Guangdong of China under Key Project U120158, the Shandong Natural Science Funds for Distinguished Young Scholar under Grant No. JQ201316, and the Fundamental Research Funds of Shandong University No. 2014JC028.

Author information

Corresponding author

Correspondence to Yilong Yin.


About this article


Cite this article

Wang, B., Wang, L., Yin, Y. et al. An Improved Neural Network with Random Weights Using Backtracking Search Algorithm. Neural Process Lett 44, 37–52 (2016). https://doi.org/10.1007/s11063-015-9480-z

