
Adaptive evolutionary Monte Carlo algorithm for optimization with applications to sensor placement problems

Published: 01 December 2008

Abstract

In this paper, we present an adaptive evolutionary Monte Carlo (AEMC) algorithm, which combines a tree-based predictive model with an evolutionary Monte Carlo sampling procedure for global optimization. Our development is motivated by sensor placement applications in engineering, which require optimizing complicated "black-box" objective functions. The proposed method improves optimization efficiency and effectiveness compared with several alternative strategies. AEMC falls into the category of adaptive Markov chain Monte Carlo (MCMC) algorithms and is the first adaptive MCMC algorithm to simulate multiple Markov chains in parallel. We state and prove a theorem on the ergodicity of the AEMC algorithm. We demonstrate the advantages of the proposed method by applying it to a sensor placement problem in a manufacturing process, as well as to the standard Griewank test function.
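To make the general idea concrete, the following is a minimal Python sketch of an evolutionary-Monte-Carlo-style optimizer applied to the Griewank test function. It is not the authors' AEMC implementation: the paper's adaptive step fits a tree-based (CART) model to past samples, whereas this sketch substitutes a crude data-driven proposal that samples inside the bounding box of the best points seen so far. The function names (`griewank`, `aemc_sketch`), the temperature ladder, and all parameter values are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of an evolutionary-Monte-Carlo-style optimizer on the Griewank
# function. This is NOT the authors' AEMC: the paper couples EMC with a CART
# model; here a simple "promising box" proposal built from the best samples
# seen so far stands in for that tree-based adaptive step.
import math
import random

def griewank(x):
    """Standard Griewank test function; global minimum 0 at x = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return 1.0 + s - p

def aemc_sketch(dim=2, n_chains=5, iters=2000, bounds=(-600.0, 600.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    # Temperature ladder: one Markov chain per temperature, run in parallel.
    temps = [1.0 + 2.0 * k for k in range(n_chains)]
    chains = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_chains)]
    energies = [griewank(c) for c in chains]
    history = []                       # (energy, point) pairs for the adaptive step
    best = min(zip(energies, chains))  # best (energy, point) found so far

    for t in range(iters):
        # Mutation step: a Metropolis move within each chain at its temperature.
        for k in range(n_chains):
            step = 1.0 * temps[k]
            cand = [min(hi, max(lo, xi + rng.gauss(0.0, step))) for xi in chains[k]]
            e_cand = griewank(cand)
            if rng.random() < math.exp(min(0.0, -(e_cand - energies[k]) / temps[k])):
                chains[k], energies[k] = cand, e_cand
            history.append((energies[k], list(chains[k])))
            best = min(best, (energies[k], list(chains[k])))

        # Exchange step: propose swapping states of two adjacent temperatures.
        k = rng.randrange(n_chains - 1)
        de = (energies[k] - energies[k + 1]) * (1.0 / temps[k] - 1.0 / temps[k + 1])
        if rng.random() < math.exp(min(0.0, de)):
            chains[k], chains[k + 1] = chains[k + 1], chains[k]
            energies[k], energies[k + 1] = energies[k + 1], energies[k]

        # Data-driven step (stand-in for the paper's CART-based proposal):
        # every 100 iterations, sample a point inside the bounding box of the
        # best 10% of past samples and greedily offer it to the coldest chain.
        if t % 100 == 99 and len(history) > 50:
            elite = sorted(history)[: max(10, len(history) // 10)]
            box = list(zip(*[pt for _, pt in elite]))
            cand = [rng.uniform(min(c), max(c)) for c in box]
            e_cand = griewank(cand)
            if e_cand < energies[0]:
                chains[0], energies[0] = cand, e_cand
                best = min(best, (e_cand, list(cand)))

    return best

if __name__ == "__main__":
    e, x = aemc_sketch()
    print("best Griewank value found:", e, "at", x)
```

The mutation and exchange moves above are the standard ingredients of evolutionary (population-based, parallel-tempering-style) Monte Carlo; replacing the bounding-box proposal with a fitted regression tree that identifies promising rectangular regions would bring the sketch closer to the adaptive scheme described in the abstract.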


Cited By

  • (2015) Adaptive Metropolis algorithm using variational Bayesian adaptive Kalman filter. Computational Statistics & Data Analysis 83:C, 101-115. DOI: 10.1016/j.csda.2014.10.006. Online publication date: 1-Mar-2015.

Index Terms

  1. Adaptive evolutionary Monte Carlo algorithm for optimization with applications to sensor placement problems

                    Recommendations

                    Comments

                    Please enable JavaScript to view thecomments powered by Disqus.

                    Information & Contributors

                    Information

                    Published In

                    cover image Statistics and Computing
                    Statistics and Computing  Volume 18, Issue 4
                    December 2008
                    137 pages

Publisher

Kluwer Academic Publishers
United States

Author Tags

1. Adaptive MCMC
2. Data mining
3. Evolutionary Monte Carlo
4. Global optimization
