Abstract
The design and choice of benchmark suites are ongoing topics of discussion in the multi-objective optimization community. Some suites, such as the well-known DTLZ and ZDT problems, provide a good understanding of their Pareto sets and fronts. However, they lack diversity in their landscape properties and offer no mechanism for creating multiple distinct problem instances. Other suites, like the bi-objective BBOB, possess diverse and challenging landscape properties, but their optima are not well understood and can only be approximated empirically, without any guarantees.
This work proposes a methodology for creating complex continuous problem landscapes by concatenating single-objective functions from version 2 of the multiple peaks model (MPM2) generator. For the resulting problems, we can determine the distribution of optimal points with arbitrary precision with respect to a measure such as the dominated hypervolume. We show how the properties of the MPM2 generator influence the multi-objective problem landscapes and present an experimental proof-of-concept study demonstrating how our approach can drive well-founded benchmarking of multi-objective algorithms.
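To make the construction concrete, the following minimal Python sketch illustrates the core idea under simplifying assumptions: each single-objective component below is a toy "multiple peaks" landscape, built as the pointwise minimum over randomly placed quadratic basins, and is not the actual MPM2 generator; two such functions are then concatenated into a bi-objective problem. The names make_peaks_function and make_biobjective are illustrative only.

```python
import numpy as np

def make_peaks_function(n_peaks, dim, seed):
    """Toy multimodal landscape: minimum over randomly placed quadratic basins."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, 1.0, size=(n_peaks, dim))  # basin centres in [0, 1]^dim
    widths = rng.uniform(0.5, 2.0, size=n_peaks)          # basin steepness
    heights = rng.uniform(0.0, 0.5, size=n_peaks)         # local-optimum values
    heights[0] = 0.0                                       # force one global optimum at value 0

    def f(x):
        x = np.asarray(x, dtype=float)
        # each basin contributes a quadratic bowl; the landscape is their pointwise minimum
        basin_values = heights + widths * np.sum((x - centers) ** 2, axis=1)
        return float(np.min(basin_values))

    return f

def make_biobjective(f1, f2):
    """Concatenate two single-objective functions into one bi-objective function."""
    return lambda x: np.array([f1(x), f2(x)])

# Example: a 2-D bi-objective problem from two independently generated landscapes.
f1 = make_peaks_function(n_peaks=3, dim=2, seed=1)
f2 = make_peaks_function(n_peaks=3, dim=2, seed=2)
problem = make_biobjective(f1, f2)
print(problem(np.array([0.5, 0.5])))  # objective vector of length 2
```

In the actual methodology, the single-objective components come from the MPM2 generator, whose parameters (e.g., number of peaks and peak topology) shape the resulting multi-objective landscape as described in the abstract above.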
Acknowledgment
The authors acknowledge support by the European Research Center for Information Systems (ERCIS). Further, L. Schäpermeier and P. Kerschke acknowledge support by the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Schäpermeier, L., Kerschke, P., Grimme, C., Trautmann, H. (2023). Peak-A-Boo! Generating Multi-objective Multiple Peaks Benchmark Problems with Precise Pareto Sets. In: Emmerich, M., et al. Evolutionary Multi-Criterion Optimization. EMO 2023. Lecture Notes in Computer Science, vol 13970. Springer, Cham. https://doi.org/10.1007/978-3-031-27250-9_21
DOI: https://doi.org/10.1007/978-3-031-27250-9_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-27249-3
Online ISBN: 978-3-031-27250-9