An improved parameter learning methodology for RVFL based on pseudoinverse learners

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

As a compact and effective learning model, the random vector functional link neural network (RVFL) has been shown to possess universal approximation capability and has gained considerable attention in various fields. However, the randomly generated parameters in RVFL often lead to the loss of valid information and to data redundancy, which can severely degrade model performance in practice. This paper first proposes an efficient network-parameter learning approach for the original RVFL based on pseudoinverse learners (RVFL-PL). Instead of using the random feature mapping directly, RVFL-PL obtains informative enhancement nodes in a non-iterative manner by embedding valuable information from the input data, which improves the quality of the enhancement nodes and eases the problems caused by the randomly assigned parameters of the standard RVFL. Since the network parameters are optimized analytically, this improved variant retains the efficiency of the standard RVFL. Further, RVFL-PL is extended to a multilayer structure (mRVFL-PL) to obtain high-level representations of the input data. Comprehensive experiments on several benchmarks show that the proposed method improves performance over comparable methods.
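For orientation, the sketch below illustrates the standard RVFL that the paper improves upon: enhancement nodes with randomly assigned, fixed parameters, a direct input-output link, and output weights obtained in closed form via a ridge-regularized pseudoinverse solution. It is a minimal illustration under assumed design choices (NumPy, ReLU activation, uniform initialization, one-hot targets `Y`), not the authors' RVFL-PL algorithm; the `data_driven_enhancement` helper is likewise a hypothetical stand-in showing how enhancement weights could be derived from the data in a single closed-form step rather than sampled, which is the general spirit of the paper's non-iterative approach.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fit_rvfl(X, Y, n_enhancement=128, ridge=1e-3, seed=0, W=None, b=None):
    """Standard RVFL: fixed enhancement nodes plus a direct input-output
    link; only the output weights are trained, in closed form."""
    rng = np.random.default_rng(seed)
    if W is None:  # randomly assigned, then fixed (the standard RVFL)
        W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_enhancement))
    if b is None:
        b = rng.uniform(-1.0, 1.0, size=W.shape[1])
    H = relu(X @ W + b)        # enhancement nodes
    D = np.hstack([X, H])      # direct link: raw inputs also feed the output layer
    # Ridge-regularized pseudoinverse solution: beta = (D'D + lam*I)^-1 D'Y
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return np.hstack([X, relu(X @ W + b)]) @ beta

def data_driven_enhancement(X, n_enhancement):
    """Hypothetical illustration only (not the paper's exact procedure):
    derive enhancement weights from the data in closed form, here the
    leading right singular vectors of X, instead of sampling them.
    Requires n_enhancement <= min(n_samples, n_features)."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_enhancement].T   # shape: (n_features, n_enhancement)
```

With one-hot labels `Y` of shape `(N, n_classes)`, `W, b, beta = fit_rvfl(X, Y)` trains in a single linear solve, and a data-derived mapping can be swapped in via `fit_rvfl(X, Y, W=data_driven_enhancement(X, 64), b=np.zeros(64))`. Training is non-iterative in both cases, one solve replacing gradient descent, which is why the paper's data-informed variant can retain the efficiency of the standard RVFL.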


Data availability

Data sharing is not applicable to this article as no new datasets were generated or analyzed during the current study.


Acknowledgements

This work was supported in part by the National Key Research and Development Program of China under Grant No. 2018AAA0100203, and in part by the Joint Research Fund in Astronomy (U2031136) under the cooperative agreement between the NSFC and the CAS.

Author information

Corresponding author

Correspondence to Qian Yin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sun, X., Deng, X., Yin, Q. et al. An improved parameter learning methodology for RVFL based on pseudoinverse learners. Neural Comput & Applic 35, 1803–1818 (2023). https://doi.org/10.1007/s00521-022-07824-y
