Abstract
In this work, we show that, given a nonlinear programming problem, it is possible to construct a family of dynamical systems, defined on the feasible set of the given problem, such that: (a) the equilibrium points are exactly the (unknown) critical points of the problem and are asymptotically stable, (b) each dynamical system admits the objective function of the problem as a Lyapunov function, and (c) explicit formulas for the systems are available and do not involve the unknown critical points of the problem. The construction of the family of dynamical systems is based on the Control Lyapunov Function methodology, which is used in mathematical control theory for the construction of stabilizing feedback laws. Knowledge of a dynamical system with these properties allows the construction of algorithms that guarantee global convergence to the set of critical points.
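The paper's construction is stated for constrained problems and relies on CLF-based feedback; as a rough unconstrained illustration of the underlying idea — the objective itself serving as a Lyapunov function for a descent ODE — the following sketch integrates the gradient flow \(\dot{x}=-\nabla\theta(x)\) with a Lyapunov-based step-size rule. The objective, the step-size parameters, and the stopping tolerance are toy assumptions, not the paper's construction.

```python
import numpy as np

def theta(x):
    # Toy objective: theta(x) = x1^2 + 2*x2^2, minimized at the origin.
    return x[0] ** 2 + 2.0 * x[1] ** 2

def grad_theta(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

def gradient_flow(x0, h=0.1, tol=1e-8, max_steps=10000):
    # Explicit-Euler discretization of the descent ODE xdot = -grad(theta)(x).
    # theta itself is a Lyapunov function for this ODE, since along solutions
    # d/dt theta(x(t)) = -|grad(theta)(x(t))|^2 <= 0.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        g = grad_theta(x)
        if np.linalg.norm(g) < tol:
            break
        step = h
        # Halve the step until theta actually decreases, i.e. enforce the
        # Lyapunov decrease of the continuous flow on the discrete iterates.
        while theta(x - step * g) >= theta(x):
            step *= 0.5
        x = x - step * g
    return x

x_star = gradient_flow([3.0, -2.0])
print(x_star)  # numerically close to the unique critical point (0, 0)
```

The step-halving loop is a crude stand-in for the Lyapunov-function-based step-size control discussed in the text; it guarantees monotone decrease of the objective along the discrete trajectory.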
Acknowledgements
The author would like to thank Prof. Lars Grüne for his valuable comments and suggestions; his contribution to this work was major. The author also thanks George Athanasiou for his substantial help with the typesetting of the paper.
Additional information
Communicated by Nobuo Yamashita.
Appendix
Proof of Lemma 2.1
First notice that by virtue of Fact 2, the following equality holds for all \(\xi=(\xi_{1} ,\dots,\xi_{k} )'\in\mathbb{R} ^{k} \):
Equation (53) implies that \(Q(x)\in\mathbb{R}^{k\times k}\) is positive semidefinite. Suppose that Q(x) is not positive definite. Then there exists a nonzero vector \(\xi=(\xi_{1},\ldots,\xi_{k})'\in\mathbb{R}^{k}\) with \(\xi'Q(x)\xi=0\). Consequently, Eq. (53) shows that we must have \(H(x)B'(x)\xi=0\) and \(\xi_{j}=0\) for all \(j=1,\ldots,k\) with \(g_{j}(x)<0\). Fact 3 implies that there exists \(\lambda\in\mathbb{R}^{m}\) such that \(B'(x)\xi=A'(x)\lambda\). The previous equality implies that
Since \(\xi_{j}=0\) for all \(j=1,\ldots,k\) with \(g_{j}(x)<0\) and since \(\xi=(\xi_{1},\ldots,\xi_{k})'\in\mathbb{R}^{k}\) is nonzero, we conclude from (54) that assumption (H2) is violated. The proof is complete. □
Proof of the claim made in the proof of Theorem 3.1
Let η>0 be arbitrary. We distinguish two cases.
Case 1: The set \(\{x\in\tilde{S} : d(x)\ge\eta \}\) is empty, where \(\tilde{S}\subseteq S\) is defined by (32), and d(x) is defined by (33). In this case, implication (34) holds trivially with arbitrary \(\delta_{\eta}>0\).
Case 2: The set \(\{x\in\tilde{S} : d(x)\ge\eta \}\) is nonempty.
The continuity of the distance function d(x) and the compactness of \(\tilde{S}\subseteq S\) imply that the set \(\{x\in\tilde{S} : d(x)\ge\eta \}\) is compact. Statements (b) and (c) of Theorem 2.1 guarantee that the quantity
is well defined and positive. Let \(x\in\tilde{S}\) with d(x)≥η be an arbitrary point. We denote by z(t) the unique solution of the initial-value problem \(\dot{z}=F(z)\) with z(0)=x. We also notice that the vector field F as defined by (8), (9) is locally Lipschitz on a neighborhood of S. By the compactness of \(\tilde{S}\subseteq S\), there exists a constant L≥0 such that
Inequality (56), the fact that z(s) belongs to the compact set of all z∈S with θ(z)≤θ(x), and standard arguments show that the following inequality holds for all s≥0:
Next, we notice that the problem
with I(x)≠∅ admits at least one solution (since the mapping \(y\to \vert y-x-sF(x)\vert ^{2}\) is radially unbounded). Any solution \(y\in\mathbb{R}^{n}\) of problem (58) with I(x)≠∅ satisfies for all s≥0:
The above inequalities hold trivially for the case I(x)=∅ and y=x+sF(x). Define:
We will show next that implication (34) holds with \(\delta_{\eta}>0\) defined by
First, we show the implication
It suffices to show that \(g_{j}(y)\le 0\) for all j∉I(x). Notice that by the definition of the set I(x)⊆{1,…,k}, it follows that \(g_{j}(x+sF(x))\le-\varepsilon\) for all s∈[0,ε] and j∉I(x). Using (59), we obtain for all s∈[0,r]:
Since \(g_{j}(x+sF(x))\le-\varepsilon\) for all s∈[0,ε] and j∉I(x), we obtain from (57), (59), (64), and (60) for all s∈[0,ε] and j∉I(x):
Inequality (65) in conjunction with definition (62) shows that \(g_{j}(y)\le 0\) for all j∉I(x), provided that \(s\le\delta_{\eta}\).
By implication (63), we are left with the task of proving the inequality θ(y)≤θ(x)+λs∇θ(x)F(x) for all \(s\le\delta_{\eta}\) and \(x\in\tilde{S}\) with d(x)≥η. Using (59), we obtain for all s∈[0,r]:
where \(K=\frac{1}{2} \max \{ \vert \nabla^{2} \theta(z)\vert : \vert z\vert \le\beta \}\). The derivation of (66) follows from majorizing the second derivative of the mapping w→p(w)=θ(x+w(y−x)) and using the inequality |y−x|≤2s|F(x)| (which is a direct consequence of (59)). It follows from (59), (57), (66), and (61) that the following inequality holds for all s∈[0,r]:
Definitions (55), (62) and inequality (67) allow us to conclude that the inequality θ(y)≤θ(x)+λs∇θ(x)F(x) holds for all \(s\le\delta_{\eta}\). The proof is complete. □
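The argument above restricts the step size to \(s\le\delta_{\eta}\) so that one projected Euler step y (a solution of the projection problem (58)) stays feasible and satisfies the descent inequality θ(y)≤θ(x)+λs∇θ(x)F(x). The sketch below illustrates that mechanism numerically under toy assumptions: the feasible set is the unit ball (so the projection is explicit), the objective is a simple quadratic, and \(F=-\nabla\theta\) stands in for the paper's feedback vector field. None of these choices come from the paper.

```python
import numpy as np

def theta(x):
    # Toy objective; its unconstrained minimizer (2, 0) lies outside S,
    # so the constrained minimizer sits on the boundary of S at (1, 0).
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def grad_theta(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * x[1]])

def project(z):
    # Exact Euclidean projection onto S = {x : |x| <= 1}; it stands in
    # for the quadratic projection problem solved in the proof.
    r = np.linalg.norm(z)
    return z if r <= 1.0 else z / r

def projected_step(x, s, lam=0.1, s_min=1e-12):
    # One projected Euler step y = Proj(x + s*F(x)) with F = -grad(theta),
    # halving s until the descent inequality
    #     theta(y) <= theta(x) + lam * s * <grad(theta)(x), F(x)>
    # holds; this mimics the step-size restriction s <= delta_eta.
    F = -grad_theta(x)
    while s > s_min:
        y = project(x + s * F)
        if theta(y) <= theta(x) + lam * s * (grad_theta(x) @ F):
            return y, s
        s *= 0.5
    # Near the critical set the inequality may fail for every s (the
    # claim only covers points with d(x) >= eta), so stop moving.
    return x, 0.0

x = np.array([0.0, 0.8])
for _ in range(200):
    x, _ = projected_step(x, 0.2)
print(np.round(x, 3))  # a feasible point near the constrained minimizer (1, 0)
```

Consistent with the claim, the iterates remain feasible and decrease θ monotonically until they enter a neighborhood of the critical set, at which point the step-size test ceases to accept any step.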
Cite this article
Karafyllis, I. Feedback Stabilization Methods for the Solution of Nonlinear Programming Problems. J Optim Theory Appl 161, 783–806 (2014). https://doi.org/10.1007/s10957-013-0459-5