General Lower Bounds for Evolutionary Algorithms

  • Conference paper
Parallel Problem Solving from Nature - PPSN IX (PPSN 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4193)

Abstract

Evolutionary optimization, which includes genetic optimization, is a general framework for optimization. It is known to be (i) easy to use, (ii) robust, (iii) derivative-free, and (iv) unfortunately slow. Recent work [8] in particular shows that the convergence rate of some widely used evolution strategies (evolutionary optimization for continuous domains) cannot be faster than linear (i.e. the logarithm of the distance to the optimum cannot decrease faster than linearly), and that the constant in the linear convergence (i.e. the constant C such that the distance to the optimum after n steps is upper bounded by C^n) unfortunately converges quickly to 1 as the dimension increases to ∞. We show here a very wide generalization of this result: all comparison-based algorithms have this limitation. Note that our result also concerns methods like the Hooke & Jeeves algorithm, the simplex method, or any direct search method that only compares fitness values to previously seen fitness values. It does not, however, cover methods that use the fitness values themselves (see [5] for cases in which fitness values are used), even if these methods do not use gradients. These results concern convergence with respect to the number of comparisons performed, and they also cover a very wide family of algorithms with respect to the number of function evaluations. However, there is still room for faster convergence rates for more original algorithms that use the full ranking information of the population, and not only selections within the population. We prove that, at least in some particular cases, using the full ranking information can improve these lower bounds, and ultimately provide superlinear convergence results.
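To make the comparison-based setting concrete, below is a minimal sketch (our illustration, not a construction from the paper) of a (1+1) evolution strategy with a one-fifth-success-rule style step-size adaptation on the sphere function; all names and parameter values are assumptions chosen for the example. The loop only ever compares fitness values, never exploiting their magnitudes, so the lower bound discussed above applies to it.

```python
import math
import random

def sphere(x):
    """Fitness: squared distance to the optimum at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim, steps, seed=0):
    """A (1+1)-ES that uses the fitness only through comparisons."""
    rng = random.Random(seed)
    x = [1.0] * dim                   # start at distance sqrt(dim) from the optimum
    sigma = 1.0                       # mutation step size
    fx = sphere(x)
    for _ in range(steps):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sphere(y)
        if fy < fx:                   # a comparison: the only use made of the fitness
            x, fx = y, fy
            sigma *= 1.5              # success: enlarge the step (1/5th-rule flavour)
        else:
            sigma *= 1.5 ** (-0.25)   # failure: shrink the step slowly
    return math.sqrt(fx)              # final distance to the optimum

# log(distance) decreases roughly linearly with the number of steps, and the
# per-step contraction degrades as the dimension grows, consistent with the
# linear lower bound discussed in the abstract.
for dim in (2, 10, 50):
    print(f"dim={dim:3d}  distance after 2000 steps: {one_plus_one_es(dim, 2000):.3e}")
```

The closing claim of the abstract has a simple information-theoretic flavour: selecting the μ best of λ offspring reveals at most log2(C(λ, μ)) bits per generation, whereas the full ranking of the λ offspring reveals up to log2(λ!) bits. The following sketch (again ours, not the paper's formal argument) computes the gap:

```python
import math

def bits_selection(lam, mu):
    """Bits revealed by an unordered selection of the mu best among lam."""
    return math.log2(math.comb(lam, mu))

def bits_full_ranking(lam):
    """Bits revealed by the complete ranking of lam offspring."""
    return math.log2(math.factorial(lam))

for lam in (4, 10, 20):
    mu = lam // 2
    print(f"λ={lam:2d}: selection={bits_selection(lam, mu):6.2f} bits, "
          f"full ranking={bits_full_ranking(lam):6.2f} bits")
```

The gap grows with λ, which suggests why algorithms exploiting the full ranking can, at least in some cases, beat the lower bounds that hold for selection-only algorithms.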

References

  1. Auger, A.: Convergence results for (1,λ)-SA-ES using the theory of ϕ-irreducible Markov chains. Theoretical Computer Science (in press, 2005)

  2. Auger, A., Jebalia, M., Teytaud, O.: XSE: quasi-random mutations for evolution strategies. In: Proceedings of Evolutionary Algorithms, 12 pages (2005)

  3. Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer, Heidelberg (1997)

  4. Droste, S.: Not all linear functions are equally difficult for the compact genetic algorithm. In: Proc. of the Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 679–686 (2005)

  5. Droste, S., Jansen, T., Wegener, I.: Upper and lower bounds for randomized search heuristics in black-box optimization (2003)

  6. Feller, W.: An Introduction to Probability Theory and Its Applications. Wiley, Chichester (1968)

  7. Hooke, R., Jeeves, T.A.: Direct search solution of numerical and statistical problems. Journal of the ACM 8, 212–229 (1961)

  8. Jägersküpper, J., Witt, C.: Runtime analysis of a (μ+1) ES for the sphere function. Technical report (2005)

  9. Nelder, J., Mead, R.: A simplex method for function minimization. Computer Journal 7, 308–311 (1965)

  10. Rudolph, G.: Convergence rates of evolutionary algorithms for a class of convex objective functions. Control and Cybernetics 26(3), 375–390 (1997)

  11. Teytaud, O., Gelly, S., Mary, J.: On the ultimate convergence rates for isotropic algorithms and the best choices among various forms of isotropy. In: Parallel Problem Solving from Nature - PPSN IX (2006)

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Teytaud, O., Gelly, S. (2006). General Lower Bounds for Evolutionary Algorithms. In: Runarsson, T.P., Beyer, H.-G., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds) Parallel Problem Solving from Nature - PPSN IX. PPSN 2006. Lecture Notes in Computer Science, vol 4193. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11844297_3

  • DOI: https://doi.org/10.1007/11844297_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-38990-3

  • Online ISBN: 978-3-540-38991-0

  • eBook Packages: Computer Science, Computer Science (R0)
