
Environment identification-based memory scheme for estimation of distribution algorithms in dynamic environments

  • Original Paper
  • Published in: Soft Computing

Abstract

In estimation of distribution algorithms (EDAs), the joint probability distribution of high-performance solutions is represented by a probability model. This means that the priority search areas of the solution space are characterized by the probability model. From this point of view, an environment identification-based memory management scheme (EI-MMS) is proposed to adapt binary-coded EDAs to solve dynamic optimization problems (DOPs). Within this scheme, the probability models that characterize the search space of the changing environment are stored and retrieved to adapt EDAs according to environmental changes. A diversity loss correction scheme and a boundary correction scheme are combined to counteract the diversity loss during the static evolutionary process of each environment. Experimental results show the validity of the EI-MMS and indicate that it can be applied to any binary-coded EDA. In comparison with three state-of-the-art algorithms, the univariate marginal distribution algorithm (UMDA) using the EI-MMS performs better when solving three decomposable DOPs. To understand the EI-MMS more deeply, a sensitivity analysis of its parameters is also carried out in this paper.
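The abstract builds on the univariate marginal distribution algorithm (UMDA), a binary-coded EDA that learns one marginal bit probability per position. As context, here is a minimal sketch of a binary UMDA with a simple boundary correction on the marginals; the function name, parameters, and the specific correction are illustrative assumptions for this sketch, not the paper's EI-MMS implementation.

```python
import random

def umda_binary(fitness, n_bits, pop_size=100, sel_size=50, generations=60, seed=0):
    """Minimal binary UMDA: evolve a vector of marginal bit probabilities."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # univariate marginal model: one probability per bit
    best = None
    for _ in range(generations):
        # Sample a population from the current probability model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # Re-estimate the marginals from the selected high-performance solutions.
        sel = pop[:sel_size]
        p = [sum(ind[i] for ind in sel) / sel_size for i in range(n_bits)]
        # Illustrative boundary correction: keep each marginal away from 0 and 1
        # so the model never loses the ability to sample either bit value
        # (the paper combines such a correction with a diversity-loss correction).
        p = [min(max(pi, 1.0 / n_bits), 1.0 - 1.0 / n_bits) for pi in p]
    return best, p

# Usage: maximize the OneMax function (count of ones) on 20 bits.
best, model = umda_binary(sum, 20)
```

A memory scheme in the EI-MMS sense would additionally store snapshots of `p` per identified environment and restore the matching model when a change is detected, rather than restarting from `[0.5] * n_bits`.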




References

  • Branke J (1999) Memory enhanced evolutionary algorithms for changing optimization problems. In: Proceedings of the 1999 IEEE congress on evolutionary computation (CEC99), vol 3, pp 1875–1882

  • Branke J (2001) Evolutionary optimization in dynamic environments. Kluwer, Dordrecht

  • Branke J, Kaubler T, Schmidt C, Schmeck H (2000) A multipopulation approach to dynamic optimization problems. In: Proceedings of the 4th international conference on adaptive computing in design and manufacturing, pp 299–308

  • Branke J, Lode C, Shapiro JL (2007) Addressing sampling errors and diversity loss in UMDA. In: Proceedings of the 9th annual conference on genetic and evolutionary computation (GECCO 2007), pp 508–515

  • Cobb HG (1990) An investigation into the use of hypermutation as an adaptive operator in genetic algorithms having continuous, time-dependent nonstationary environments. Technical Report AIC-90-001, Naval Research Lab, Washington, USA

  • Cedeno W, Vemuri VR (1997) On the use of niching for dynamic landscapes. In: Proceedings of the 1997 IEEE international conference on evolutionary computation, pp 361–366

  • Goldberg DE (2002) The design of innovation: lessons from and for competent genetic algorithms. Kluwer, Norwell


  • Grefenstette JJ, Fitzpatrick J (1992) Genetic algorithms for changing environments. In: Proceedings of the 2nd international conference on parallel problem solving from nature

  • Jin Y, Branke J (2005) Evolutionary optimization in uncertain environments—a survey. IEEE Trans Evol Comput 9(3):303–317


  • Mori N, Kita H, Nishikawa Y (1996) Adaptation to a changing environment by means of the thermodynamical genetic algorithm. In: Ebeling W et al. (eds) Proceedings of the 4th international conference on parallel problem solving from nature (PPSN IV), pp 513–522

  • Morrison R (2004) Designing evolutionary algorithms for dynamic environments. Springer, Berlin

  • Morrison R, De Jong K (1999) A test problem generator for non-stationary environments. In: Proceedings of the 1999 congress on evolutionary computation, pp 2047–2053

  • Mühlenbein H, Paaß G (1996) From recombination of genes to the estimation of distributions I. Binary parameters. In: Ebeling W (ed) Proceedings of the 4th international conference on parallel problem solving from nature (PPSN IV), pp 178–187

  • Pelikan M (2002) Bayesian optimization algorithm: from single level to hierarchy. PhD thesis. University of Illinois, Urbana-Champaign, USA

  • Shapiro JL (2003) Scaling of probability-based optimization algorithms. In: Obermayer K (ed) Advances in neural information processing systems, pp 399–406

  • Shapiro JL (2005) Drift and scaling in estimation of distribution algorithms. Evol Comput 13(1):99–123


  • Shapiro JL (2006) Diversity loss in general estimation of distribution algorithms. In: Runarsson TP (ed) Proceedings of the 9th international conference on parallel problem solving from nature, LNCS 4193, pp 92–101

  • Ursem RK (2000) Multinational GA optimization techniques in dynamic environments. In: Proceedings of the 2000 genetic and evolutionary computation conference (GECCO 2000), pp 19–26

  • Whitley LD (1991) Fundamental principles of deception in genetic search. In: Rawlins GJE (ed) Foundations of genetic algorithms. Morgan Kaufmann, San Francisco, pp 221–241

  • Wineberg M, Oppacher F (2000) Enhancing the GA’s ability to cope with dynamic environments. In: Proceedings of the 2000 genetic and evolutionary computation conference (GECCO 2000), pp 3–10

  • Yang S (2003) Non-stationary problem optimization using the primal-dual genetic algorithm. In: Proceedings of the 2003 IEEE congress on evolutionary computation, vol 3, pp 2246–2253

  • Yang S (2005a) Memory-enhanced univariate marginal distribution algorithms for dynamic optimization problems. In: Proceedings of the 2005 IEEE congress on evolutionary computation, vol 3, pp 2560–2567

  • Yang S (2005b) Population-based incremental learning with memory scheme for changing environments. In: Proceedings of the 7th annual conference on genetic and evolutionary computation (GECCO 2005), pp 711–718

  • Yang S (2006) Associative memory scheme for genetic algorithms in dynamic environments. In: EvoWorkshops 2006: applications of evolutionary computing, pp 788–799

  • Yang S, Yao X (2005) Experimental study on population-based incremental learning algorithms for dynamic optimization problems. Soft Comput 9(11):815–834


  • Yang S, Yao X (2008) Population-based incremental learning with associative memory for dynamic environments. IEEE Trans Evol Comput 12(5):542–561



Acknowledgments

The authors would like to thank the anonymous associate editor and reviewers for their thoughtful suggestions and constructive comments. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 60774064 and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01.

Author information

Correspondence to Xingguang Peng.


About this article

Cite this article

Peng, X., Gao, X. & Yang, S. Environment identification-based memory scheme for estimation of distribution algorithms in dynamic environments. Soft Comput 15, 311–326 (2011). https://doi.org/10.1007/s00500-010-0547-5
