Abstract
Several recent papers have demonstrated classes of deep neural network (DNN) training tasks in which rather simple single-population evolutionary algorithms (EAs) found better solutions than gradient-based optimization methods. However, it is well known that simple single-population EAs tend to get stuck in local optima. The Hierarchic Memetic Strategy (HMS), a multi-population adaptive evolutionary strategy, was designed specifically to mitigate this problem. HMS has already been shown to outperform single-population EAs both in general multi-modal optimization and in inverse-problem solving. In this paper we apply HMS to the DNN training tasks in which the above-mentioned single-population EA beat gradient methods. The results show that, given the same time budget, HMS finds better solutions than that EA, and hence also than the gradient methods the EA had previously outperformed.
This research was supported in part by the PLGrid Infrastructure and by funds of the Polish Ministry of Education and Science assigned to the AGH University of Science and Technology.
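The core idea behind HMS is a tree of populations (demes): a root deme explores the search space with large mutation steps and "sprouts" child demes around promising individuals, which then refine their regions with much smaller steps. The sketch below is a minimal illustrative toy version of that multi-level scheme, not the authors' implementation: the function names (`rastrigin`, `evolve`, `hms_sketch`), the two-level structure, and all parameter values are our own assumptions chosen for a small benchmark.

```python
import math
import random

def rastrigin(x):
    # Classic multi-modal benchmark: global minimum 0 at the origin,
    # surrounded by a regular grid of local optima.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def evolve(population, fitness, sigma, generations, bounds):
    # A single deme: (mu + lambda)-style loop with Gaussian mutation and
    # truncation selection; keeping the parents makes the best fitness
    # monotonically non-increasing across generations.
    mu = len(population)
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: max(2, mu // 4)]
        offspring = [
            [min(bounds, max(-bounds, xi + random.gauss(0, sigma)))
             for xi in random.choice(parents)]
            for _ in range(mu)
        ]
        population = sorted(parents + offspring, key=fitness)[:mu]
    return population

def hms_sketch(dim=2, bounds=5.12, seed=0):
    random.seed(seed)
    # Root deme: coarse, exploratory search with a large mutation step.
    root = [[random.uniform(-bounds, bounds) for _ in range(dim)]
            for _ in range(30)]
    root = evolve(root, rastrigin, sigma=1.0, generations=30, bounds=bounds)
    # "Sprouting": spawn child demes around the most promising root
    # individuals; each child refines its region with a much smaller step.
    best = root[0]
    for leader in root[:3]:
        child = [[xi + random.gauss(0, 0.1) for xi in leader]
                 for _ in range(20)]
        child = evolve(child, rastrigin, sigma=0.05, generations=40,
                       bounds=bounds)
        if rastrigin(child[0]) < rastrigin(best):
            best = child[0]
    return best
```

The split into a coarse root and fine-grained leaves is what distinguishes this family of methods from a single-population EA: the root keeps exploring new basins of attraction while the children exploit the basins already found. The full HMS additionally adapts deme parameters and manages sprouting and stopping conditions, which this sketch omits.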
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sokół, M., Smołka, M. (2022). Application of the Hierarchic Memetic Strategy HMS in Neuroevolution. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds.) Computational Science – ICCS 2022. Lecture Notes in Computer Science, vol. 13351. Springer, Cham. https://doi.org/10.1007/978-3-031-08754-7_49
Print ISBN: 978-3-031-08753-0
Online ISBN: 978-3-031-08754-7