
Learnable Evolutionary Search Across Heterogeneous Problems via Kernelized Autoencoding

Published: 01 June 2021

Abstract

The design of evolutionary algorithms capable of learning from past search experiences has attracted growing research interest in recent years. It has been demonstrated that the knowledge embedded in past search experiences, if properly harnessed, can greatly speed up the evolutionary process. Autoencoding evolutionary search (AEES) is a recently proposed search paradigm that employs a single-layer denoising autoencoder to build a mapping between two problems by configuring the solutions of each problem as the input and output of the autoencoder, respectively. The learned mapping makes it possible to transfer knowledge across heterogeneous problem domains with diverse properties, and AEES has shown promising performance in learning and transferring knowledge from past search experiences to facilitate evolutionary search on a variety of optimization problems. Despite this success, however, the linear autoencoding model cannot capture nonlinear relationships between the solution sets used to construct the mapping. Taking this cue, in this article we devise a kernelized autoencoder that constructs the mapping in a reproducing kernel Hilbert space (RKHS), where nonlinearity among problem solutions can be captured easily. Importantly, the proposed kernelized autoencoding method retains a closed-form solution and thus adds little computational burden to the evolutionary search. Furthermore, a kernelized autoencoding evolutionary search (KAES) paradigm is proposed that adaptively selects between the linear and kernelized autoencoders along the search process in pursuit of effective knowledge transfer across problem domains. To validate the efficacy of the proposed KAES, comprehensive empirical studies are presented on both benchmark multiobjective optimization problems and a real-world vehicle crashworthiness design problem.
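To make the linear-versus-kernelized distinction concrete, the mapping between two paired solution sets can be sketched as a ridge-style least-squares problem, and its kernelized counterpart as kernel ridge regression in an RKHS, both with closed-form solutions. The sketch below is a minimal illustration under assumed choices, not the authors' implementation: the RBF kernel, the regularization value, and the toy paired solution sets are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between rows of A (n x d) and B (m x d).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fit_linear_mapping(S, T, lam=1e-3):
    # Closed-form ridge solution of min_M ||S M - T||^2 + lam ||M||^2,
    # analogous to a single-layer linear (denoising) autoencoder mapping.
    d = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(d), S.T @ T)

def fit_kernel_mapping(S, T, gamma=1.0, lam=1e-3):
    # Kernel ridge regression: the mapping lives in an RKHS, so
    # nonlinear relations between the solution sets can be captured.
    K = rbf_kernel(S, S, gamma)
    A = np.linalg.solve(K + lam * np.eye(len(S)), T)
    return lambda X: rbf_kernel(X, S, gamma) @ A

# Toy paired solution sets from two "problems" (rows are solutions),
# related through a deliberately nonlinear transformation.
rng = np.random.default_rng(0)
S = rng.random((20, 5))                        # source-problem solutions
T = np.sin(3 * S) + 0.1 * rng.random((20, 5))  # nonlinearly related target solutions

M = fit_linear_mapping(S, T)
phi = fit_kernel_mapping(S, T, gamma=2.0)

lin_err = np.mean((S @ M - T) ** 2)   # reconstruction error of linear map
ker_err = np.mean((phi(S) - T) ** 2)  # reconstruction error of kernel map
print(lin_err, ker_err)
```

On this toy data the kernelized map fits the nonlinear relation far more closely than the linear one, which mirrors the motivation for replacing the linear autoencoder with a kernelized one; the adaptive selection between the two maps used in KAES is not shown here.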




        Published In

        IEEE Transactions on Evolutionary Computation  Volume 25, Issue 3
        June 2021
        203 pages

        Publisher

        IEEE Press


        Qualifiers

        • Research-article

Cited By

• (2024) "A Mahalanobis Distance-Based Approach for Dynamic Multiobjective Optimization With Stochastic Changes," IEEE Transactions on Evolutionary Computation, vol. 28, no. 1, pp. 238–251, doi: 10.1109/TEVC.2023.3253850. Online publication date: 1-Feb-2024.
• (2023) "First Complexity Results for Evolutionary Knowledge Transfer," Proceedings of the 17th ACM/SIGEVO Conference on Foundations of Genetic Algorithms, pp. 140–151, doi: 10.1145/3594805.3607137. Online publication date: 30-Aug-2023.
• (2023) "Distributed Knowledge Transfer for Evolutionary Multitask Multimodal Optimization," IEEE Transactions on Evolutionary Computation, vol. 28, no. 4, pp. 1141–1155, doi: 10.1109/TEVC.2023.3291874. Online publication date: 3-Jul-2023.
• (2023) "A Surrogate-Assisted Differential Evolution With Knowledge Transfer for Expensive Incremental Optimization Problems," IEEE Transactions on Evolutionary Computation, vol. 28, no. 4, pp. 1039–1053, doi: 10.1109/TEVC.2023.3291697. Online publication date: 4-Jul-2023.
• (2023) "Ensemble of Domain Adaptation-Based Knowledge Transfer for Evolutionary Multitasking," IEEE Transactions on Evolutionary Computation, vol. 28, no. 2, pp. 388–402, doi: 10.1109/TEVC.2023.3259067. Online publication date: 20-Mar-2023.
• (2023) "A Survey on Learnable Evolutionary Algorithms for Scalable Multiobjective Optimization," IEEE Transactions on Evolutionary Computation, vol. 27, no. 6, pp. 1941–1961, doi: 10.1109/TEVC.2023.3250350. Online publication date: 1-Dec-2023.
• (2023) "Learning-Aided Evolution for Optimization," IEEE Transactions on Evolutionary Computation, vol. 27, no. 6, pp. 1794–1808, doi: 10.1109/TEVC.2022.3232776. Online publication date: 1-Dec-2023.
• (2023) "A Bi-Objective Knowledge Transfer Framework for Evolutionary Many-Task Optimization," IEEE Transactions on Evolutionary Computation, vol. 27, no. 5, pp. 1514–1528, doi: 10.1109/TEVC.2022.3210783. Online publication date: 1-Oct-2023.
• (2023) "Evolutionary Multitasking for Large-Scale Multiobjective Optimization," IEEE Transactions on Evolutionary Computation, vol. 27, no. 4, pp. 863–877, doi: 10.1109/TEVC.2022.3166482. Online publication date: 1-Aug-2023.
• (2023) "Enhancing evolutionary multitasking optimization by leveraging inter-task knowledge transfers and improved evolutionary operators," Knowledge-Based Systems, vol. 259, doi: 10.1016/j.knosys.2022.110027. Online publication date: 10-Jan-2023.
        • Show More Cited By
