Deep Randomized Networks for Fast Learning

  • Conference paper
  • Learning and Intelligent Optimization (LION 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14286)

Abstract

Deep neural networks show significant improvement over shallow ones on complex problems. Their main disadvantages are their memory requirements, the vanishing gradient problem, and the time-consuming search for the best achievable weights and other parameters. Since many applications (such as continuous learning) need fast training, one possible solution is the use of sub-networks that can be trained very fast. Randomized single-layer networks have become very popular due to their fast optimization, while their extensions to more complex structures can increase prediction accuracy. In this paper we present a new approach to building deep neural models for classification tasks with an iterative, pseudo-inverse optimization technique. We compare its performance with a state-of-the-art backpropagation method and the best-known randomized approach, the hierarchical extreme learning machine. Computation time and prediction accuracy are evaluated on 12 benchmark datasets, showing that our approach is competitive in many cases.
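The fast-training idea the abstract refers to is the randomized single-layer network: hidden weights are drawn at random and left untrained, so only the output weights must be fitted, which can be done in closed form with the Moore-Penrose pseudo-inverse. A minimal NumPy sketch of that building block follows; the function name, layer size, and toy data are illustrative assumptions, and the paper's deep, iterative variant is not reproduced here.

```python
# Minimal sketch of an ELM-style randomized layer (an assumption-based
# illustration of the building block, not the paper's full deep method).
import numpy as np

rng = np.random.default_rng(0)

def train_random_layer(X, T, n_hidden=128):
    """Random input weights, closed-form pseudo-inverse readout.

    X: (n_samples, n_features) inputs; T: (n_samples, n_classes) one-hot targets.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer activations
    beta = np.linalg.pinv(H) @ T            # output weights in one linear solve
    return W, b, beta

def predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage: two Gaussian blobs with one-hot targets.
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.repeat([0, 1], 100)
W, b, beta = train_random_layer(X, np.eye(2)[y])
print("train accuracy:", (predict(X, W, b, beta) == y).mean())
```

Because the only fitted parameters come from a single linear solve, training avoids gradient descent entirely; stacking such blocks and re-solving the readouts iteratively is, in spirit, the direction the paper takes for deep models.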



Acknowledgements

We acknowledge the financial support of the Hungarian Scientific Research Fund grant OTKA K-135729. We are grateful to NVIDIA Corporation for supporting our research with GPUs obtained through the NVIDIA Hardware Grant Program.

Author information


Correspondence to Richárd Rádli.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Rádli, R., Czúni, L. (2023). Deep Randomized Networks for Fast Learning. In: Sellmann, M., Tierney, K. (eds) Learning and Intelligent Optimization. LION 2023. Lecture Notes in Computer Science, vol 14286. Springer, Cham. https://doi.org/10.1007/978-3-031-44505-7_9


  • DOI: https://doi.org/10.1007/978-3-031-44505-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44504-0

  • Online ISBN: 978-3-031-44505-7

  • eBook Packages: Computer Science, Computer Science (R0)
