
FDA - A Scalable Evolutionary Algorithm for the Optimization of Additively Decomposed Functions

Published: 01 December 1999

Abstract

The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters; it is therefore too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factor the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. Standard genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations, as is shown for a circle and a grid structure. By using results from Bayes networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA.
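The core idea — estimate a chain-factorized distribution p(x) = p(x_0) · Π p(x_{i+1} | x_i) from selected points and resample from it — can be illustrated with a minimal sketch. The subfunction `g`, the truncation-selection scheme, and all parameter values below are assumptions for illustration, not the paper's exact setup:

```python
import random

# Minimal FDA-style sketch on a chain-structured ADF (illustrative only).
# f(x) = sum_i g(x_i, x_{i+1}) decomposes along a chain, so the search
# distribution factorizes as p(x) = p(x_0) * prod_i p(x_{i+1} | x_i).

N = 16          # number of binary variables
POP = 200       # population size
TRUNC = 0.3     # truncation selection: keep the best 30%
GENS = 30       # number of generations

def g(a, b):
    # Hypothetical edge subfunction: rewards (1,1) runs over (0,0) runs.
    return 2 if (a, b) == (1, 1) else (1 if (a, b) == (0, 0) else 0)

def f(x):
    return sum(g(x[i], x[i + 1]) for i in range(len(x) - 1))

def estimate_chain_model(selected):
    # Marginal p(x_0 = 1) and conditionals p(x_{i+1} = 1 | x_i),
    # with Laplace smoothing to avoid zero probabilities.
    m = len(selected)
    p0 = (sum(s[0] for s in selected) + 1) / (m + 2)
    cond = []
    for i in range(N - 1):
        counts = {0: [1, 2], 1: [1, 2]}  # per parent value: [ones + 1, total + 2]
        for s in selected:
            counts[s[i]][0] += s[i + 1]
            counts[s[i]][1] += 1
        cond.append({v: counts[v][0] / counts[v][1] for v in (0, 1)})
    return p0, cond

def sample(p0, cond):
    # Draw x_0 from the marginal, then each x_{i+1} given its parent x_i.
    x = [1 if random.random() < p0 else 0]
    for i in range(N - 1):
        x.append(1 if random.random() < cond[i][x[-1]] else 0)
    return x

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=f, reverse=True)
    selected = pop[: int(TRUNC * POP)]
    p0, cond = estimate_chain_model(selected)
    pop = [sample(p0, cond) for _ in range(POP)]

best = max(pop, key=f)
```

Because the model respects the chain structure, the conditionals capture the dependency between neighboring variables that a univariate (independent-bit) model would miss; on functions of this kind the learned distribution quickly concentrates on the globally better (1,1)-runs rather than the competing (0,0) local optimum.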



Published In

Evolutionary Computation, Volume 7, Issue 4 (Winter 1999)
ISSN: 1063-6560; EISSN: 1530-9304
Publisher: MIT Press, Cambridge, MA, United States

Author Tags

  1. Bayes networks
  2. Boltzmann distribution
  3. genetic algorithms
  4. convergence
  5. factorization of distributions
  6. learning of Bayes networks
  7. simulated annealing
