DOI: 10.1145/3377930.3389834

An evolutionary optimization algorithm for gradually saturating objective functions

Published: 26 June 2020

Abstract

Evolutionary algorithms have been actively studied for dynamic optimization problems over the last two decades; however, research has mainly focused on problems with large, periodic, or abrupt changes during optimization. In contrast, this paper concentrates on gradually changing environments with the additional imposition of a saturating objective function. This work is motivated by an evolutionary neural architecture search methodology in which a population of Convolutional Neural Networks (CNNs) is evaluated and iteratively modified using genetic operators during the training process. The objective of the search, namely the prediction accuracy of a CNN, is a continuous and slow-moving target, increasing with each training epoch and eventually saturating when training is nearly complete. Population diversity is an important consideration in dynamic environments, where large diversity restricts the algorithm from converging to a small area of the search space while the environment is still transforming. Our proposed algorithm adaptively influences population diversity, depending on the rate of change of the objective function, using disruptive crossovers and non-elitist population replacements. We compare our algorithm with a traditional evolutionary algorithm and demonstrate that the proposed modifications improve performance in gradually saturating dynamic environments.
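The mechanism described above can be illustrated with a minimal Python sketch on a toy one-dimensional search space. This is not the authors' implementation: the objective function, thresholds, and operator choices here are hypothetical stand-ins. The sketch only demonstrates the core idea of the abstract — a saturating, slowly rising objective whose rate of change switches the algorithm between a disruptive, non-elitist mode (high diversity while the target moves) and an elitist, convergent mode (once the objective saturates).

```python
import random

def saturating_objective(x, epoch, max_epoch=100):
    """Toy stand-in for CNN prediction accuracy: rises with the training
    epoch and saturates near the end (hypothetical quadratic schedule)."""
    quality = 1.0 - abs(x - 0.5) * 2.0            # candidate quality, peak at x = 0.5
    schedule = 1.0 - (1.0 - epoch / max_epoch) ** 2  # slow-moving, saturating target
    return max(0.0, quality) * schedule

def evolve(pop_size=20, max_epoch=100, seed=0):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    prev_best = 0.0
    best = 0.0
    for epoch in range(1, max_epoch + 1):
        fitness = [saturating_objective(x, epoch, max_epoch) for x in pop]
        best = max(fitness)
        # Rate of change of the moving target drives the diversity pressure.
        rate = best - prev_best
        prev_best = best
        disruptive = rate > 0.005                 # hypothetical threshold
        offspring = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            # Disruptive crossover extrapolates outside the parents' interval;
            # otherwise blend conservatively between them.
            alpha = rng.uniform(-0.5, 1.5) if disruptive else rng.uniform(0.25, 0.75)
            child = min(1.0, max(0.0, alpha * a + (1.0 - alpha) * b))
            offspring.append(child)
        if disruptive:
            pop = offspring                       # non-elitist: replace everyone
        else:
            merged = sorted(pop + offspring,
                            key=lambda x: saturating_objective(x, epoch, max_epoch),
                            reverse=True)
            pop = merged[:pop_size]               # elitist truncation once saturated
    return pop, best

pop, best = evolve()
```

While the objective still improves quickly, every generation is fully replaced and crossover may extrapolate beyond both parents, keeping the population spread out; once improvement per epoch falls below the threshold, elitist truncation lets the population converge on the now-stationary optimum.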




Published In

GECCO '20: Proceedings of the 2020 Genetic and Evolutionary Computation Conference
June 2020
1349 pages
ISBN:9781450371285
DOI:10.1145/3377930


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. dynamic optimization
  2. genetic algorithms

Qualifiers

  • Research-article


Conference

GECCO '20

Acceptance Rates

Overall Acceptance Rate 1,669 of 4,410 submissions, 38%


Cited By

  • (2024) Neural architecture search via similarity adaptive guidance. Applied Soft Computing 162, 111821. DOI: 10.1016/j.asoc.2024.111821. Sep 2024.
  • (2023) A Survey on Evolutionary Neural Architecture Search. IEEE Transactions on Neural Networks and Learning Systems 34(2), 550-570. DOI: 10.1109/TNNLS.2021.3100554. Feb 2023.
  • (2023) Evolutionary Classification. Handbook of Evolutionary Machine Learning, 171-204. DOI: 10.1007/978-981-99-3814-8_7. 2 Nov 2023.
  • (2022) A Review on Convolutional Neural Network Encodings for Neuroevolution. IEEE Transactions on Evolutionary Computation 26(1), 12-27. DOI: 10.1109/TEVC.2021.3088631. Feb 2022.
  • (2022) Evolutionary neural networks for deep learning: a review. International Journal of Machine Learning and Cybernetics 13(10), 3001-3018. DOI: 10.1007/s13042-022-01578-8. 10 Jun 2022.
  • (2022) Methodologies for Design Space Exploration. Handbook of Computer Architecture, 1-31. DOI: 10.1007/978-981-15-6401-7_23-1. 27 Jan 2022.
  • (2021) Intermittent-Aware Neural Architecture Search. ACM Transactions on Embedded Computing Systems 20(5s), 1-27. DOI: 10.1145/3476995. 23 Sep 2021.
  • (2021) Designing convolutional neural networks with constrained evolutionary piecemeal training. Applied Intelligence 52(15), 17103-17117. DOI: 10.1007/s10489-021-02679-7. 30 Jul 2021.
  • (2020) Deep Learning Model Reuse and Composition in Knowledge Centric Networking. 2020 29th International Conference on Computer Communications and Networks (ICCCN), 1-11. DOI: 10.1109/ICCCN49398.2020.9209668. Aug 2020.
  • (2020) Lights and shadows in Evolutionary Deep Learning: Taxonomy, critical methodological analysis, cases of study, learned lessons, recommendations and challenges. Information Fusion. DOI: 10.1016/j.inffus.2020.10.014. Oct 2020.
