Abstract
This paper addresses a class of algorithms for solving bound-constrained black-box global optimization problems. These algorithms partition the objective function's domain over multiple scales in search of the global optimum. For such algorithms, we provide a generic procedure, which we refer to as multi-scale optimization (MSO). Furthermore, we propose a theoretical methodology for studying the convergence of MSO algorithms based on three basic assumptions: (a) local Hölder continuity of the objective function f, (b) boundedness of the partitions, and (c) sphericity of the partitions. Moreover, we present the worst-case finite-time performance and convergence rates of several leading MSO algorithms, namely Lipschitzian optimization methods, multilevel coordinate search, dividing rectangles, and optimistic optimization methods.
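As a rough illustration of the generic procedure the abstract describes (our sketch, not the paper's pseudocode), the loop below hierarchically partitions the domain and expands one cell per round. The names `Cell`, `select`, and `partition` are hypothetical placeholders; the per-cell evaluation budget J and branching factor K follow the notes below.

```python
# Minimal sketch of a generic MSO loop for maximization (our illustration).
# `root`, `select`, and `partition` are assumed, algorithm-specific objects:
# concrete MSO methods (LO, MCS, DIRECT, optimistic methods) differ in them.

def mso_maximize(f, root, select, partition, n_expansions, J=1, K=3):
    """Hierarchically partition the domain of f in search of its maximum.

    root      -- cell covering the whole bound-constrained search space
    select    -- rule picking which leaf cell to expand at each round
    partition -- rule splitting a cell into K child cells
    J, K      -- evaluations per cell and children per expansion
    """
    leaves = [root]
    # Each cell is scored by the best of its J sampled evaluations.
    score = {root: max(f(x) for x in root.sample(J))}
    best = score[root]
    for _ in range(n_expansions):      # n expansions ~ Kn evaluations
        cell = select(leaves, score)   # multi-scale selection rule
        leaves.remove(cell)
        for child in partition(cell, K):
            score[child] = max(f(x) for x in child.sample(J))
            best = max(best, score[child])
            leaves.append(child)
    return best
```

Concrete instances differ chiefly in `select`: DIRECT, for example, expands all potentially optimal cells per iteration, while optimistic methods expand the leaf with the highest upper bound on f.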
Notes
As the problem has been formulated as one of maximization, the words optimize and maximize, along with their conjugated forms, are used interchangeably.
At \(t=1\), the root node is evaluated J times and partitioned into K nodes; \(P=Q=1\) for all MSO algorithms, irrespective of their values at \(t>1\).
This is the same x(n) of Eq. (2).
The bound on the approximation error may be valid only with some probability \(\gamma \ge 0\); in such a case, any related analysis holds with probability \(\gamma \).
In this paper, we refer to the work of Piyavskii [43] and Shubert [56] as LO. Nevertheless, it should be noted that Strongin [60] independently employed the Lipschitz condition for optimization problems. While Piyavskii and Shubert used an a priori given constant L, Strongin proposed to estimate the Lipschitz constant adaptively during the search.
One function evaluation may belong to one or more nodes.
We are treating MSO as an n-round sequential decision-making process. For MSO algorithms, n may correspond to the number of iterations, evaluations, or expansions; all three quantities are interrelated. Unless stated otherwise, n denotes the number of evaluations.
We shall consider this presumption throughout our analysis; in other words, for n node expansions, there are Kn node evaluations.
The purpose of presenting \(\acute{\ell }\) is to quantify how big \({\mathcal {X}}_\epsilon \) is and, consequently, to introduce the near-optimality dimension (presented shortly). Such an \(\acute{\ell }\) does not always exist: consider a constant function (e.g., \(f(x)=5\)), for which \(\acute{\ell }(x^*,x)\) would have to be 0, violating the definition of \(\acute{\ell }\) with \(\acute{L}\) being a positive real constant. Nevertheless, it implies that \(\ell (x^*,x)=0\le \epsilon \) for all \(x \in {\mathcal {X}}\) and hence \({\mathcal {X}}_\epsilon ={\mathcal {X}}\). In other words, all the nodes at all depths have to be expanded to find the optimal node, yet each of them is an optimal node. In summary, if \(\acute{\ell }\) does not exist, the whole search space is considered \(\epsilon \)-optimal. Note that Assumption 1 implies \(\beta \ge \alpha \).
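For concreteness, one way to write these objects is the following display; this is our reconstruction under the assumption that Assumption 1 is a two-sided Hölder-type bound with the constants \(L, \acute{L}\) and exponents \(\alpha , \beta \) above, not a quotation of the paper's equations:
\[
\acute{\ell }(x^*,x) = \acute{L}\,\Vert x^*-x\Vert ^{\beta } \;\le \; f(x^*)-f(x) \;\le \; \ell (x^*,x) = L\,\Vert x^*-x\Vert ^{\alpha }, \qquad {\mathcal {X}}_\epsilon = \{x \in {\mathcal {X}} : \ell (x^*,x) \le \epsilon \}.
\]
Under this reading, a constant \(f\) makes the middle term vanish identically, so no positive \(\acute{L}\) can realize the lower bound, and every point is \(\epsilon \)-optimal; requiring both bounds to hold as \(x \rightarrow x^*\) is also what forces \(\beta \ge \alpha \).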
To the best of our knowledge, there is no finite-time analysis of DIRECT; only the consistency property \(\lim _{n\rightarrow \infty } r(n)= 0\) has been established, first by Jones et al. [29] and proven again by Finkel and Kelley [16]. Furthermore, the latter showed, based on nonsmooth analysis, that certain samples of DIRECT may converge to points that satisfy the necessary conditions for optimality defined by Clarke [8].
LO and DOO are hypothetical propositions for which only practical/adapted implementations exist.
References
Archetti, F., Betrò, B.: A priori analysis of deterministic strategies for global optimization problems. Towards Global Optim. 2, 31 (1978)
Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2–3), 235–256 (2002)
Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Athena Scientific, Belmont (1996)
Browne, C.B., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P.I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., Colton, S.: A survey of Monte Carlo tree search methods. IEEE Trans. Comput. Intell. AI Games 4(1), 1–43 (2012)
Cesa-Bianchi, N., Lugosi, G.: Prediction, Learning, and Games. Cambridge University Press, Cambridge (2006)
Chaput, J.C., Szostak, J.W.: Evolutionary optimization of a nonbiological ATP binding protein for improved folding stability. Chem. Biol. 11(6), 865–874 (2004)
Chaslot, G., Saito, J.T., Bouzy, B., Uiterwijk, J., Van Den Herik, H.J.: Monte-Carlo strategies for computer Go. In: Proceedings of the 18th BeNeLux Conference on Artificial Intelligence, Namur, Belgium, pp. 83–91. Citeseer (2006)
Clarke, F.H.: Nonsmooth analysis and optimization. In: Proceedings of the International Congress of Mathematicians (Helsinki, 1978), pp. 847–853 (1983)
Cousty, J., Najman, L., Perret, B.: Constructive links between some morphological hierarchies on edge-weighted graphs. In: Mathematical Morphology and Its Applications to Signal and Image Processing, pp. 86–97. Springer (2013)
Csendes, T., Ratz, D.: Subdivision direction selection in interval methods for global optimization. SIAM J. Numer. Anal. 34(3), 922–938 (1997)
Derbel, B., Preux, P.: Simultaneous optimistic optimization on the noiseless bbob testbed. In: IEEE Congress on Evolutionary Computation (CEC), pp. 2010–2017 (2015). doi:10.1109/CEC.2015.7257132
Evtushenko, Y., Posypkin, M.: A deterministic approach to global box-constrained optimization. Optim. Lett. 7(4), 819–829 (2013)
Evtushenko, Y.G.: Numerical methods for finding global extrema (case of a non-uniform mesh). USSR Comput. Math. Math. Phys. 11(6), 38–54 (1971)
Evtushenko, Y.G., Malkova, V., Stanevichyus, A.: Parallel global optimization of functions of several variables. Comput. Math. Math. Phys. 49(2), 246–260 (2009)
Finkel, D., Kelley, C.: Additive scaling and the DIRECT algorithm. J. Global Optim. 36(4), 597–608 (2006)
Finkel, D.E., Kelley, C.T.: Convergence analysis of the DIRECT algorithm. NCSU Mathematics Department, Raleigh, NC (2004)
Floudas, C.A., Pardalos, P.M., Adjiman, C., Esposito, W.R., Gümüs, Z.H., Harding, S.T., Klepeis, J.L., Meyer, C.A., Schweiger, C.A.: Handbook of Test Problems in Local and Global Optimization, vol. 33. Springer Science & Business Media, Berlin (2013)
Fowkes, J.M., Gould, N.I., Farmer, C.L.: A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions. J. Global Optim. 56(4), 1791–1815 (2013)
Gablonsky, J.: An implementation of the DIRECT algorithm. Center for Research in Scientific Computation, North Carolina State University, Tech. Rep. CRSC-TR98-29 (1998)
Gablonsky, J.: Modifications of the DIRECT algorithm. Ph.D. thesis, North Carolina State University, Raleigh, North Carolina (2001)
Gablonsky, J.M., Kelley, C.T.: A locally-biased form of the DIRECT algorithm. J. Global Optim. 21(1), 27–37 (2001)
Hansen, N., Auger, A., Finck, S., Ros, R.: Real-parameter black-box optimization benchmarking 2012: Experimental setup. Tech. rep., INRIA (2012). http://coco.gforge.inria.fr/bbob2012-downloads
Hansen, N., Finck, S., Ros, R., Auger, A.: Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. Tech. Rep. RR-6829, INRIA (2009). http://hal.inria.fr/inria-00362633/en/
Hansen, P., Jaumard, B., Lu, S.H.: On the number of iterations of Piyavskii's global optimization algorithm. Math. Oper. Res. 16(2), 334–350 (1991). doi:10.1287/moor.16.2.334
Horst, R., Tuy, H.: On the convergence of global methods in multiextremal optimization. J. Optim. Theory Appl. 54(2), 253–271 (1987)
Hu, J., Wang, Y., Zhou, E., Fu, M.C., Marcus, S.I.: A survey of some model-based methods for global optimization. In: Optimization, Control, and Applications of Stochastic Systems, pp. 157–179. Springer (2012)
Huyer, W., Neumaier, A.: Global optimization by multilevel coordinate search. J. Global Optim. 14(4), 331–355 (1999)
Ivanov, V.: On optimal algorithms of minimization in the class of functions with the Lipschitz condition. Inf. Process. 71, 1324–1327 (1972)
Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 79(1), 157–181 (1993)
Kvasov, D.E., Pizzuti, C., Sergeyev, Y.D.: Local tuning and partition strategies for diagonal GO methods. Numer. Math. 94(1), 93–106 (2003)
Kvasov, D.E., Sergeyev, Y.D.: Lipschitz gradients for global optimization in a one-point-based partitioning scheme. J. Comput. Appl. Math. 236(16), 4042–4054 (2012)
Kvasov, D.E., Sergeyev, Y.D.: Deterministic approaches for solving practical black-box global optimization problems. Adv. Eng. Softw. 80, 58–66 (2015)
Lai, T.L., Robbins, H.: Asymptotically efficient adaptive allocation rules. Adv. Appl. Math. 6(1), 4–22 (1985)
Nemhauser, G.L., Wolsey, L.A.: Integer and Combinatorial Optimization. Wiley, New York (1988)
Liu, Q., Cheng, W.: A modified DIRECT algorithm with bilevel partition. J. Global Optim. 60(3), 483–499 (2014). doi:10.1007/s10898-013-0119-1
Mayne, D., Polak, E.: Outer approximation algorithm for nondifferentiable optimization problems. J. Optim. Theory Appl. 42(1), 19–30 (1984). doi:10.1007/BF00934131
Mladineo, F.H.: An algorithm for finding the global maximum of a multimodal, multivariate function. Math. Prog. 34(2), 188–200 (1986). doi:10.1007/BF01580583
Munos, R.: Optimistic optimization of a deterministic function without the knowledge of its smoothness. In: Advances in Neural Information Processing Systems, vol. 24, pp. 783–791. Curran Associates, Inc. (2011). http://papers.nips.cc/paper/4304-optimistic-optimization-of-a-deterministic-function-without-the-knowledge-of-its-smoothness.pdf
Paulavičius, R., Sergeyev, Y.D., Kvasov, D.E., Žilinskas, J.: Globally-biased DISIMPL algorithm for expensive global optimization. J. Global Optim. 59(2–3), 545–567 (2014)
Paulavičius, R., Žilinskas, J.: Simplicial Global Optimization. Springer, New York (2014)
Pintér, J.: Globally convergent methods for \(n\)-dimensional multiextremal optimization. Optimization 17(2), 187–202 (1986)
Pintér, J.: Global Optimization in Action: Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications, vol. 6. Springer Science & Business Media, Berlin (1995)
Piyavskii, S.: An algorithm for finding the absolute extremum of a function. USSR Comput. Math. Math. Phys. 12(4), 57–67 (1972)
Pošík, P.: BBOB-benchmarking the DIRECT global optimization algorithm. In: GECCO '09: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 2315–2320. ACM, New York, NY, USA (2009). doi:10.1145/1570256.1570323
Pošík, P., Huyer, W., Pál, L.: A comparison of global search algorithms for continuous black box optimization. Evol. Comput. 20(4), 509–541 (2012)
Preux, P., Munos, R., Valko, M.: Bandits attack function optimization. In: IEEE Congress on Evolutionary Computation (CEC), pp. 2245–2252 (2014)
Ratz, D., Csendes, T.: On the selection of subdivision directions in interval branch-and-bound methods for global optimization. J. Global Optim. 7(2), 183–207 (1995). doi:10.1007/BF01097060
The Morgridge Institute for Research: Bound constrained optimization. NEOS Guide. http://www.neos-guide.org/content/bound-constrained-optimization
Robbins, H., et al.: Some aspects of the sequential design of experiments. Bull. Am. Math. Soc. 58(5), 527–535 (1952)
Roslund, J., Shir, O.M., Bäck, T., Rabitz, H.: Accelerated optimization and automated discovery with covariance matrix adaptation for experimental quantum control. Phys. Rev. A 80(4), 043415 (2009)
Sergeyev, Y.D.: A one-dimensional deterministic global minimization algorithm. Comput. Math. Math. Phys. 35(5), 705–717 (1995)
Sergeyev, Y.D.: On convergence of divide the best global optimization algorithms. Optimization 44(3), 303–325 (1998)
Sergeyev, Y.D., Kvasov, D.E.: Global search based on efficient diagonal partitions and a set of lipschitz constants. SIAM J. Optim. 16(3), 910–937 (2006)
Sergeyev, Y.D., Kvasov, D.E.: A deterministic global optimization using smooth diagonal auxiliary functions. Commun. Nonlinear Scie. Numer. Simul. 21(1), 99–111 (2015)
Sergeyev, Y.D., Strongin, R.G.: A global minimization algorithm with parallel iterations. USSR Comput. Math. Math. Phys. 29(2), 7–15 (1990)
Shubert, B.O.: A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 9(3), 379–388 (1972)
Srinivas, N., Krause, A., Kakade, S.M., Seeger, M.: Gaussian process optimization in the bandit setting: no regret and experimental design. In: 27th International Conference on Machine Learning (2010)
Stover, C., Weisstein, E.W.: Hölder condition. MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/HoelderCondition.html
Strongin, R.G.: Numerical methods in multi-extremal problems (information-statistical algorithms) (1978)
Strongin, R.G.: On the convergence of an algorithm for finding a global extremum. Eng. Cybernet. 11, 549–555 (1973)
Strongin, R.G., Sergeyev, Y.: Global Optimization and Non-Convex Constraints: Sequential and Parallel Algorithms. Kluwer Academic Publishers, Dordrecht (2000)
Strongin, R.G., Sergeyev, Y.D.: Global multidimensional optimization on parallel computer. Parallel Comput. 18(11), 1259–1273 (1992)
Sukharev, A.G.: Optimal strategies of the search for an extremum. USSR Comput. Math. Math. Phys. 11(4), 119–137 (1971)
Thompson, W.R.: On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25(3–4), 285–294 (1933)
Törn, A., Žilinskas, A.: Global Optimization. Springer, New York (1989)
Valko, M., Carpentier, A., Munos, R.: Stochastic simultaneous optimistic optimization. In: Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 19–27 (2013)
Wang, Z., Shakibi, B., Jin, L., de Freitas, N.: Bayesian multi-scale optimistic optimization. In: Proceedings of International Conference on Artificial Intelligence and Statistics (AISTATS 2014), pp. 1005–1014 (2014)
Acknowledgments
The authors would like to thank the reviewers for their valuable comments and suggestions, which have substantially improved the paper.
Additional information
This work was supported by the Ministry of Education (MoE), Singapore, through Tier I funding (No. M4011269).
Cite this article
Al-Dujaili, A., Suresh, S. & Sundararajan, N. MSO: a framework for bound-constrained black-box global optimization algorithms. J Glob Optim 66, 811–845 (2016). https://doi.org/10.1007/s10898-016-0441-5