
Application of the Hierarchic Memetic Strategy HMS in Neuroevolution

  • Conference paper
  • In: Computational Science – ICCS 2022 (ICCS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13351)


Abstract

Several recent papers have identified classes of deep neural network (DNN) training tasks in which a rather simple single-population evolutionary algorithm (EA) found better solutions than gradient-based optimization methods. However, simple single-population EAs are known to be prone to getting stuck in local optima. The Hierarchic Memetic Strategy (HMS), a multi-population adaptive evolutionary strategy, was designed specifically to mitigate this problem, and it has already been shown to outperform single-population EAs in general multi-modal optimization and in inverse problem solving. In this paper we describe an application of HMS to the DNN training tasks on which the above-mentioned single-population EA beat gradient methods. The results show that, given the same time budget, HMS finds better solutions than the EA, demonstrating the advantage of HMS not only over the EA but, in consequence, also over the gradient methods.
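For context, the single-population neuroevolution baseline discussed in the abstract evolves a flat vector of network weights by Gaussian mutation and truncation selection. Below is a minimal, purely illustrative sketch of one such loop; the function names and parameters are hypothetical, and a toy quadratic fitness stands in for the actual DNN/Atari policy evaluation used in the cited work. HMS itself organizes multiple such populations into an adaptive hierarchy, which this sketch does not attempt to reproduce.

```python
import random

def evaluate(weights):
    # Toy stand-in fitness: the referenced experiments score an Atari
    # policy network instead; a negated sphere keeps the sketch runnable.
    return -sum(w * w for w in weights)

def simple_ea(dim=10, pop_size=20, sigma=0.1, generations=50, seed=0):
    """Elitist single-population EA on a flat weight vector."""
    rng = random.Random(seed)
    parent = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    for _ in range(generations):
        # Mutate: each offspring is the parent plus Gaussian noise.
        offspring = [
            [w + rng.gauss(0.0, sigma) for w in parent]
            for _ in range(pop_size)
        ]
        best = max(offspring, key=evaluate)
        # Truncation selection with elitism: keep the best seen so far.
        if evaluate(best) > evaluate(parent):
            parent = best
    return parent

best = simple_ea()
```

On this toy landscape the loop steadily improves fitness, but on multi-modal landscapes a single such population can stall in a local optimum; HMS addresses this by running a tree of populations with different search accuracies and spawning child populations around promising individuals.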

This research was supported in part by the PLGrid Infrastructure and by funds of the Polish Ministry of Education and Science assigned to AGH University of Science and Technology.


Notes

  1. https://gym.openai.com/envs/#atari.

  2. https://github.com/mtsokol/hms-neuroevolution.

  3. https://bit.ly/hms-neuroevolution-playlist.


Author information

Correspondence to Maciej Smołka.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sokół, M., Smołka, M. (2022). Application of the Hierarchic Memetic Strategy HMS in Neuroevolution. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds) Computational Science – ICCS 2022. ICCS 2022. Lecture Notes in Computer Science, vol 13351. Springer, Cham. https://doi.org/10.1007/978-3-031-08754-7_49

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08753-0

  • Online ISBN: 978-3-031-08754-7

  • eBook Packages: Computer Science, Computer Science (R0)
