Abstract
This paper introduces a new metric vector for assessing the performance of multi-objective algorithms relative to the range of performance expected from a random search. The metric requires an ensemble of repeated trials to be performed, reducing the chance of overly favourable results. The random-search baseline for the function under test may be derived either analytically or from a Monte Carlo process, making the metric both repeatable and accurate.
The metric allows both the median and worst-case performance of different algorithms to be compared directly, and scales well to high-dimensional many-objective problems. It quantifies, and is sensitive to, the distance of the solutions from the Pareto set, the distribution of points across the set, and the repeatability of the trials. Both the Monte Carlo and closed-form analysis methods provide accurate analytic confidence intervals on the observed results.
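The evaluation scheme the abstract describes, rating an optimiser's result against the spread of results a random search achieves with the same evaluation budget, can be sketched roughly as follows. This is a minimal illustration only: the bi-objective test function, the 2-D hypervolume indicator, and all parameter values are assumptions for the example, not the paper's actual metric formulation.

```python
import random

def evaluate(x):
    # Illustrative bi-objective test problem (minimise both); not from the paper.
    f1 = x * x
    f2 = (x - 2.0) ** 2
    return (f1, f2)

def hypervolume_2d(points, ref):
    """Dominated hypervolume for 2-D minimisation w.r.t. reference point `ref`."""
    # Extract the non-dominated front: sort by f1 ascending, keep strictly
    # improving f2 values.
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(points):
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    # Sum the rectangular slices between consecutive front points and `ref`.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def random_search(budget, rng):
    # One random-search trial: `budget` uniform samples of the decision variable.
    return [evaluate(rng.uniform(-4.0, 6.0)) for _ in range(budget)]

def baseline_distribution(budget, trials, rng, ref=(40.0, 40.0)):
    # Ensemble of repeated random-search trials -> sorted indicator values.
    return sorted(hypervolume_2d(random_search(budget, rng), ref)
                  for _ in range(trials))

def percentile_vs_baseline(hv, baseline):
    # Fraction of random-search trials whose indicator the result equals or beats.
    return sum(b <= hv for b in baseline) / len(baseline)

rng = random.Random(0)
baseline = baseline_distribution(budget=100, trials=200, rng=rng)
# An "optimiser" result to rate (here just one more random-search trial,
# so its percentile should fall well inside the baseline range).
trial_hv = hypervolume_2d(random_search(100, rng), (40.0, 40.0))
print(percentile_vs_baseline(trial_hv, baseline))
```

Because the baseline is an empirical distribution over an ensemble of trials, an algorithm can be rated by where its median and worst trials fall within it, which is the kind of comparison the abstract describes.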
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Hughes, E.J. (2006). Multi-Objective Equivalent Random Search. In: Runarsson, T.P., Beyer, HG., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds) Parallel Problem Solving from Nature - PPSN IX. PPSN 2006. Lecture Notes in Computer Science, vol 4193. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11844297_47
Print ISBN: 978-3-540-38990-3
Online ISBN: 978-3-540-38991-0