Abstract
Discrete and combinatorial optimization can be notoriously difficult due to the complex and rugged characteristics of the objective function. We address this challenge by mapping the search process to a continuous space using recurrent neural networks. Alongside an evolutionary run, we learn three mappings: from the original search space to a continuous Cartesian latent space, from that latent space back to the search space, and from the latent space to the search objective. We elicit gradients from the last of these networks and use them to perform moves in the latent space, and apply the resulting Neuromemetic Evolutionary Optimization (NEO) to evolutionary synthesis of programs. Evaluation on a range of benchmarks suggests that NEO significantly outperforms conventional genetic programming.
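The following is a minimal PyTorch sketch of the three mappings and the gradient-guided move described above, not the authors' implementation: all module architectures, sizes, and the update rule are illustrative assumptions.

```python
# Illustrative sketch of NEO's latent-space move (assumed architecture,
# not the paper's code): encode programs into a latent space, ascend the
# gradient of a learned fitness surrogate, and decode back to programs.
import torch
import torch.nn as nn

VOCAB, EMB, LATENT = 32, 16, 64  # assumed vocabulary and layer sizes

class Encoder(nn.Module):
    """Search space -> continuous latent space (recurrent encoder)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, LATENT, batch_first=True)

    def forward(self, tokens):            # tokens: (batch, seq_len) int64
        _, (h, _) = self.rnn(self.emb(tokens))
        return h[-1]                      # (batch, LATENT): latent code z

class Decoder(nn.Module):
    """Latent space -> search space (recurrent decoder over tokens)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(LATENT, LATENT, batch_first=True)
        self.out = nn.Linear(LATENT, VOCAB)

    def forward(self, z, seq_len):
        # Feed z at every step; greedy argmax decoding for illustration.
        inp = z.unsqueeze(1).repeat(1, seq_len, 1)
        h, _ = self.rnn(inp)
        return self.out(h).argmax(dim=-1)  # (batch, seq_len) token ids

class FitnessNet(nn.Module):
    """Latent space -> predicted search objective (surrogate)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(LATENT, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, z):
        return self.mlp(z).squeeze(-1)    # (batch,) predicted fitness

def latent_move(encoder, decoder, fitness, tokens, step=0.1):
    """One gradient-guided move: encode, ascend the surrogate, decode."""
    z = encoder(tokens).detach().requires_grad_(True)
    pred = fitness(z)
    grad, = torch.autograd.grad(pred.sum(), z)
    z_new = z + step * grad                # ascend predicted fitness
    return decoder(z_new, tokens.size(1))  # candidate offspring programs

# Toy usage on a batch of token-encoded programs:
tokens = torch.randint(0, VOCAB, (4, 10))
offspring = latent_move(Encoder(), Decoder(), FitnessNet(), tokens)
```

Per the abstract, in NEO these three networks are trained alongside the evolutionary run; the sketch omits that training (e.g., a reconstruction loss for the encoder-decoder pair and a regression loss for the surrogate) and shows only the latent gradient step.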
Acknowledgments
This research has been partially supported by the statutory funds of Poznan University of Technology.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Liskowski, P., Krawiec, K., Toklu, N.E. (2020). Neuromemetic Evolutionary Optimization. In: Bäck, T., et al. (eds.) Parallel Problem Solving from Nature – PPSN XVI. PPSN 2020. Lecture Notes in Computer Science, vol. 12269. Springer, Cham. https://doi.org/10.1007/978-3-030-58112-1_43
DOI: https://doi.org/10.1007/978-3-030-58112-1_43
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58111-4
Online ISBN: 978-3-030-58112-1