Abstract
In this note, we present an elementary proof for a well-known second-order sufficient optimality condition in nonlinear semidefinite optimization which does not rely on the enhanced theory of second-order tangents. Our approach builds on an explicit elementary computation of the so-called second subderivative of the indicator function associated with the semidefinite cone which recovers the best curvature term known in the literature.
1 Introduction
Second-order sufficient optimality conditions play a significant role in the theory of nonlinear optimization. Among others, their validity guarantees stability of the underlying strict local minimizer with respect to perturbations of the data, and this opens a way to establishing fast local convergence of various numerical solution algorithms, including augmented Lagrangian, sequential quadratic programming, and Newton-type methods.
Geometric constraints of type
$$ F(x)\in C \qquad\qquad (1.1) $$
where \(F:\mathbb X\rightarrow \mathbb Y\) is a twice continuously differentiable mapping between Euclidean spaces \(\mathbb X\) and \(\mathbb Y\), and \(C\subset \mathbb Y\) is a closed, convex set, provide a rather general paradigm for the modeling of diverse popular constraint systems in nonlinear optimization. It has been well-recognized in the past that second-order optimality conditions in constrained optimization depend on the second derivative of the objective function as well as the curvature of the feasible set. In the presence of constraints of type (1.1), the latter can be described in terms of the second derivative of F and the curvature of C. Thus, associated second-order optimality conditions do not only comprise the second derivative of a suitable Lagrangian function, but a so-called curvature term associated with C pops up as well. In the case where C is a polyhedral set, this curvature term vanishes, and one obtains the simple second-order conditions known from standard nonlinear programming, see [1, 24]. In more general cases, however, a suitable tool to keep track of the curvature of C has to be used to formulate an appropriate curvature term. Classically, the support function of a (local) second-order tangent approximation of C has been exploited for that purpose, see [6, 7], and this approach led, for example, to second-order optimality conditions in nonlinear second-order cone and semidefinite optimization, see [5, 30]. However, we would like to mention here that the proofs in these papers are far from elementary since the calculus of second-order tangents is a rather challenging task. With the aid of a generalized notion of support functions, the approach via second-order tangents can be further generalized to situations where C is no longer convex, see [16].
Another less popular approach to curvature terms has been promoted recently in [3, 26, 32] where the so-called second subderivative, see [27], of the indicator function of C has been used for that purpose. This tool yields promising results even in infinite-dimensional spaces, see [10, 34]. The approach via second subderivatives is particularly suitable for the derivation of second-order sufficient optimality conditions due to the underlying calculus properties of second subderivatives, see [2] for a recent study. Second-order sufficient conditions obtained from this approach have been shown to serve as suitable tools for the local convergence analysis of solution algorithms associated with challenging optimization problems based on variational analysis, see [17, 19, 29]. In this note, we aim to popularize the approach using second subderivatives even more by presenting an application in nonlinear semidefinite optimization.
Thus, let us focus on the special situation where \(\mathbb Y:=\mathbb S^m\) equals the space of all real symmetric \(m\times m\)-matrices and \(C:=\mathbb S^m_+\) is the cone of all positive semidefinite matrices. The tightest second-order sufficient condition in nonlinear semidefinite optimization we are aware of has been established by Shapiro and can be found in [30, Theorem 9]. Its proof heavily relies on technical arguments which exploit second-order directional differentiability of the smallest eigenvalue of a positive semidefinite matrix and calculus rules for second-order tangent sets. Later, several authors tried to recover or enhance this result using reformulations of the original problem. In [14], the author obtained a related second-order sufficient condition based on a localized Lagrangian and some technical arguments via Schur’s complement. The authors of [23] applied the squared slack variable technique to semidefinite optimization problems and obtained second-order sufficient conditions in the presence of so-called strict complementarity. In [21], strict complementarity and a second-order constraint qualification are needed to recover Shapiro’s original second-order sufficient condition based on a simplified technique. Further results about second-order optimality conditions in nonlinear semidefinite optimization such as a strong second-order sufficient condition and a weak second-order necessary condition can be found in [15, 31]. The validation of second-order sufficient conditions in the papers [14, 21, 23] is much simpler than the strategy used in [30]. However, these approaches either do not recover the original result from [30] in full generality, i.e., additional conditions are postulated to proceed, or the analysis still makes some technical preliminary considerations necessary. 
Here, we simply compute the second subderivative of the indicator function associated with the positive semidefinite cone in order to recover the result from [30] in an elementary way. Let us note that this calculation has already been carried out in [25, Example 3.7], but the arguments presented there are not self-contained and exploit involved variational properties of eigenvalue functions, see [33]. In contrast, our calculations are completely elementary.
The remainder of this note is structured as follows. In Sect. 2, we summarize the notation used in this paper and recall the definitions of some variational tools which we are going to exploit. We present an abstract second-order sufficient optimality condition for nonlinear semidefinite optimization problems in Sect. 3 which comprises the second subderivative of the indicator function of the semidefinite cone as the curvature term and can be distilled from a much more general result recently proven in [2, 3]. Then, by explicit computation of the appearing second subderivative, we specify this result in terms of initial problem data and recover the results from [6, 30]. Some concluding remarks close the paper in Sect. 4.
2 Preliminaries
The notation used in this note is fairly standard and follows [6, 28].
2.1 Basic notation
By \(\mathbb {R}^n_+\), we denote the nonnegative orthant of \(\mathbb {R}^n\). Let \(\mathbb {R}^{m\times n}\) be the set of all rectangular matrices with m rows and n columns, and O the all-zero matrix of appropriate dimensions. A Euclidean space \(\mathbb X\), i.e., a finite-dimensional Hilbert space, will be equipped with the inner product \(\left\langle \cdot , \cdot \right\rangle \colon \mathbb X\times \mathbb X\rightarrow \mathbb {R}\) and the associated induced norm \(\left\| \cdot \right\| \colon \mathbb X\rightarrow [0,\infty )\). For arbitrary \(\bar{x}\in \mathbb X\) and \(\varepsilon >0\), \(\mathbb B_\varepsilon (\bar{x}):=\{x\in \mathbb X\,|\,\left\| x-\bar{x}\right\| \le \varepsilon \}\) represents the closed \(\varepsilon\)-ball around \(\bar{x}\). The space of all real symmetric \(n\times n\)-matrices \(\mathbb S^n\) is equipped with the Frobenius inner product given by
$$ \left\langle A, B\right\rangle := {\text {trace}}(AB) \qquad \text{for all } A,B\in \mathbb S^n $$
and the associated induced Frobenius norm.
For an arbitrary Euclidean space \(\mathbb X\) and some nonempty, convex set \(A\subset \mathbb X\), we use
$$ A^\circ := \{x^*\in \mathbb X\,|\,\forall x\in A\colon \left\langle x^*, x\right\rangle \le 0\} \qquad \text{and}\qquad A^\perp := \{x^*\in \mathbb X\,|\,\forall x\in A\colon \left\langle x^*, x\right\rangle =0\} $$
in order to denote the polar cone of A, which is always a closed, convex cone, and the annihilator of A, which is a subspace of \(\mathbb X\). The distance function \({\text {dist}}_A\colon \mathbb X\rightarrow \mathbb {R}\) of A is given by
$$ {\text {dist}}_A(x) := \inf _{y\in A}\left\| y-x\right\| \qquad \text{for all } x\in \mathbb X. $$
For \(\bar{x}\in A\), we make use of
$$ \mathcal T_A(\bar{x}) := \{u\in \mathbb X\,|\,\exists \{t_k\}_{k\in \mathbb {N}}\subset (0,\infty ),\,\exists \{u_k\}_{k\in \mathbb {N}}\subset \mathbb X\colon t_k\downarrow 0,\ u_k\rightarrow u,\ \bar{x}+t_ku_k\in A\ \forall k\in \mathbb {N}\} $$
in order to represent the tangent (or Bouligand) cone to A at \(\bar{x}\). The associated polar cone, i.e.,
$$ \mathcal N_A(\bar{x}) := \mathcal T_A(\bar{x})^\circ = \{x^*\in \mathbb X\,|\,\forall x\in A\colon \left\langle x^*, x-\bar{x}\right\rangle \le 0\}, $$
is the normal cone to A at \(\bar{x}\). Note that \(\mathcal T_A(\bar{x})\) and \(\mathcal N_A(\bar{x})\) are closed, convex cones.
For a twice continuously differentiable mapping \(F\colon\mathbb X\rightarrow \mathbb Y\) between Euclidean spaces \(\mathbb X\) and \(\mathbb Y\) as well as some point \(\bar{x}\in \mathbb X\), \(F'(\bar{x})\colon\mathbb X\rightarrow \mathbb Y\) is the linear operator which represents the first derivative of F at \(\bar{x}\). Similarly, \(F''(\bar{x})\colon\mathbb X\times \mathbb X\rightarrow \mathbb Y\) is the bilinear mapping which represents the second derivative of F at \(\bar{x}\). Partial derivatives are denoted in analogous way.
Finally, for a lower semicontinuous function \(\varphi \colon\mathbb X\rightarrow \mathbb {R}\cup \{\infty \}\), some \(\bar{x}\in \mathbb X\) such that \(\varphi (\bar{x})<\infty\), and some \(x^*\in \mathbb X\), the function \(\mathrm d^2\varphi (\bar{x},x^*)\colon\mathbb X\rightarrow \mathbb {R}\cup \{-\infty ,\infty \}\) given by
$$ \mathrm d^2\varphi (\bar{x},x^*)(u) := \liminf _{\begin{array}{c} t\downarrow 0,\,u'\rightarrow u \end{array}} \frac{\varphi (\bar{x}+tu')-\varphi (\bar{x})-t\left\langle x^*, u'\right\rangle }{\frac{1}{2}t^2} \qquad \text{for all } u\in \mathbb X $$
is referred to as the second subderivative of \(\varphi\) at \(\bar{x}\) with \(x^*\). The recent study [2] reports on the calculus of this variational tool and its usefulness for the derivation of second-order optimality conditions in nonlinear optimization, and these findings can be partially extended even to infinite-dimensional situations, see [10, 34]. Here, we are particularly interested in the second subderivative of indicator functions \(\delta _A\colon\mathbb X\rightarrow \mathbb {R}\cup \{\infty \}\), associated with closed, convex sets \(A\subset \mathbb X\), given by
$$ \delta _A(x) := {\left\{ \begin{array}{ll} 0 &{} x\in A,\\ \infty &{} x\notin A. \end{array}\right. } $$
For this particular function, the definition of the second subderivative yields
$$ \mathrm d^2\delta _A(\bar{x},x^*)(u) = \liminf _{\begin{array}{c} t\downarrow 0,\,u'\rightarrow u\\ \bar{x}+tu'\in A \end{array}} \frac{-2\left\langle x^*, u'\right\rangle }{t} \qquad \text{for all } u\in \mathbb X, $$
and one can easily check that \(\mathrm d^2\delta _A(\bar{x},x^*)(u)=\infty\) if \(u\notin \mathcal T_A(\bar{x})\) or \(\left\langle x^*, u\right\rangle <0\). In the case where \(u\in \mathcal T_A(\bar{x})\) and \(\langle x^*, u\rangle >0\), \(\mathrm d^2\delta _A(\bar{x},x^*)(u)=-\infty\) holds. Thus, only the case \(u\in \mathcal T_A(\bar{x})\cap \{x^*\}^\perp\) is interesting. In turn, for given \(\bar{x}\in A\) and \(u\in \mathcal T_A(\bar{x})\), the consideration of the second subderivative is only reasonable if \(x^*\in \mathcal N_A(\bar{x})\cap \{u\}^\perp\).
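For instance, in the polyhedral model case \(A:=[0,\infty )\subset \mathbb {R}\) with \(\bar{x}:=0\) and \(x^*:=0\in \mathcal N_A(\bar{x})\) (a small illustration of ours, in line with the vanishing curvature term in the polyhedral case mentioned in the introduction), the definition immediately yields

```latex
\mathrm d^2\delta_{[0,\infty)}(0,0)(u)
  = \liminf_{\substack{t\downarrow 0,\ u'\to u\\ tu'\in[0,\infty)}}
      \frac{\delta_{[0,\infty)}(tu') - t\,\langle 0, u'\rangle}{\tfrac12 t^2}
  = 0
  \qquad\text{for all } u\in\mathcal T_{[0,\infty)}(0)=[0,\infty),
```

since the choice \(u':=u\ge 0\) is feasible for every \(t>0\) and the quotient is nonnegative throughout; hence, no curvature contribution appears.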
2.2 Matrix analysis
In order to carry out our analysis related to the cone of all positive semidefinite matrices, we need to introduce some further notation first. Fix some \(m\in \mathbb {N}\) such that \(m\ge 2\). By \(\mathbb S^m_+,\mathbb S^m_-\subset \mathbb S^m\), we denote the cones of all positive semidefinite and negative semidefinite matrices, respectively. For each matrix \(Y\in \mathbb S^m_+\), there exists an orthogonal matrix \(P\in \mathbb {R}^{m\times m}\) such that \(Y=P^\top MP\) where \(M\in \mathbb {R}^{m\times m}\) is the diagonal matrix whose diagonal is made of the eigenvalues of Y, ordered non-increasingly. We refer to this representation as an ordered eigenvalue decomposition of Y. Throughout the paper, we will denote the index sets of (row) indices of M associated with the positive and zero eigenvalues of Y by \(\pi\) and \(\omega\), respectively. For later use, let us also mention that \(Y^\dagger =P^\top M^\dagger P\) holds for the Moore–Penrose pseudoinverse of Y, and that \(M^\dagger\) results from M by inverting its positive diagonal elements. For arbitrary matrices \(A\in \mathbb S^m\) and index sets \(I,J\subset \{1,\ldots ,m\}\), we use \(A_{IJ}\) to denote the matrix which results from A by deleting those rows and columns whose indices do not belong to I and J, respectively. Furthermore, we set \(A^P:=PAP^\top\) and \(A^P_{IJ}:=(A^P)_{IJ}\).
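To make this notation concrete, the following Python sketch (our own illustration, not part of the paper) computes an ordered eigenvalue decomposition, the index sets \(\pi\) and \(\omega\), and checks the stated pseudoinverse identity \(Y^\dagger =P^\top M^\dagger P\); the helper `ordered_eig` and the tolerance are hypothetical choices of ours.

```python
import numpy as np

def ordered_eig(Y, tol=1e-10):
    """Ordered eigenvalue decomposition Y = P.T @ M @ P (eigenvalues sorted
    non-increasingly) together with the index sets pi and omega."""
    vals, vecs = np.linalg.eigh(Y)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]          # reorder non-increasingly
    vals = vals[order]
    P = vecs[:, order].T                    # rows of P are eigenvectors
    M = np.diag(vals)
    pi = [i for i, v in enumerate(vals) if v > tol]
    omega = [i for i, v in enumerate(vals) if abs(v) <= tol]
    return P, M, pi, omega

# a 3x3 positive semidefinite matrix of rank 2
B = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])
Y = B @ B.T
P, M, pi, omega = ordered_eig(Y)

assert np.allclose(Y, P.T @ M @ P)
# Moore-Penrose pseudoinverse: invert the positive diagonal entries of M
Mdag = np.diag([1.0 / M[i, i] if i in pi else 0.0 for i in range(3)])
assert np.allclose(np.linalg.pinv(Y), P.T @ Mdag @ P)
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the decomposition has to be reordered to match the non-increasing convention used here.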
In [6, Section 5.3.1], the formula
$$ \mathcal T_{\mathbb S^m_+}(Y) = \{V\in \mathbb S^m\,|\,V^P_{\omega \omega }\in \mathbb S^{|\omega |}_+\} \qquad\qquad (2.1) $$
has been established. Furthermore, [20, Section 4.2.4] gives
$$ \mathcal N_{\mathbb S^m_+}(Y) = \{Y^*\in \mathbb S^m\,|\,(Y^*)^P_{\pi \pi }=O,\ (Y^*)^P_{\pi \omega }=O,\ (Y^*)^P_{\omega \omega }\in \mathbb S^{|\omega |}_-\}. \qquad\qquad (2.2) $$
In the course of this note, we will need a criterion for semidefiniteness of block matrices. The following lemma is taken from [8, Appendix A.5.5].
Lemma 1
Let \(m_1,m_2\in \mathbb {N}\) be positive integers. Furthermore, let \(A\in \mathbb S^{m_1}_+\) be positive definite, and let \(B\in \mathbb {R}^{m_1\times m_2}\) as well as \(C\in \mathbb S^{m_2}\) be arbitrarily chosen. For \(m:=m_1+m_2\), we consider the block matrix
$$ M := \begin{pmatrix} A &{} B \\ B^\top &{} C \end{pmatrix}. $$
Then \(M\in \mathbb S^m_+\) is equivalent to \(C-B^\top A^{-1}B\in \mathbb S^{m_2}_+\).
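A quick randomized Python check of this criterion (our own sketch; the helper `is_psd` and its tolerance are ad-hoc) compares positive semidefiniteness of the block matrix with that of the Schur complement:

```python
import numpy as np

def is_psd(X, tol=1e-9):
    """Positive semidefiniteness test via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh((X + X.T) / 2)) >= -tol

rng = np.random.default_rng(1)
for _ in range(100):
    A0 = rng.standard_normal((3, 3))
    A = A0 @ A0.T + np.eye(3)                # positive definite block
    B = rng.standard_normal((3, 2))
    C = rng.standard_normal((2, 2)); C = (C + C.T) / 2
    M = np.block([[A, B], [B.T, C]])
    # Lemma 1: M is PSD iff the Schur complement C - B.T A^{-1} B is PSD
    assert is_psd(M) == is_psd(C - B.T @ np.linalg.solve(A, B))
```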
3 Second-order sufficient optimality conditions in nonlinear semidefinite optimization
Let \(m\in \mathbb {N}\) with \(m\ge 2\) be fixed. Throughout the section, we consider the nonlinear semidefinite optimization problem
$$ \min _x\ \{f(x)\,|\,F(x)\in \mathbb S^m_+\} \qquad\qquad \text{(NSDP)} $$
where \(f\colon\mathbb X\rightarrow \mathbb {R}\) and \(F\colon\mathbb X\rightarrow \mathbb S^m\) are twice continuously differentiable mappings and \(\mathbb X\) is some Euclidean space. Let \(\mathcal F\subset \mathbb X\) be the feasible set of (NSDP). For \(\alpha \ge 0\), we introduce the generalized Lagrangian function \(\mathcal L^\alpha \colon\mathbb X\times \mathbb S^m\rightarrow \mathbb {R}\) associated with (NSDP) by means of
$$ \mathcal L^\alpha (x,Y^*) := \alpha f(x) + \left\langle Y^*, F(x)\right\rangle \qquad \text{for all } x\in \mathbb X,\ Y^*\in \mathbb S^m. $$
Furthermore, for \(x\in \mathcal F\), we exploit the critical cone associated with (NSDP) given by
$$ \mathcal C(x) := \{u\in \mathbb X\,|\,F'(x)u\in \mathcal T_{\mathbb S^m_+}(F(x)),\ f'(x)u\le 0\}. $$
Note that, due to (2.1), this cone can be computed explicitly as soon as an ordered eigenvalue decomposition of F(x) is at hand. For \(u\in \mathcal C(x)\) and \(\alpha \ge 0\), the associated directional Lagrange multiplier set is given by
$$ \Lambda ^\alpha (x,u) := \{Y^*\in \mathcal N_{\mathbb S^m_+}(F(x))\cap \{F'(x)u\}^\perp \,|\,\alpha f'(x)+F'(x)^*Y^*=0\}, $$
and this set can be computed via (2.2).
The following second-order sufficient optimality condition for (NSDP) can be distilled from the more general result [3, Theorem 3.3], which has been proven via a straightforward contradiction argument; a direct proof, based solely on calculus rules for the second subderivative, is stated in [2, Theorem 5.2]. A slightly less general result, which clearly motivated the authors of [3], can be found in [26, Theorem 7.1].
Theorem 1
Let \(\bar{x}\in \mathcal F\) be chosen such that for each \(u\in \mathcal C(\bar{x}){\setminus }\{0\}\), there are \(\alpha \ge 0\) and \(Y^*\in \Lambda ^\alpha (\bar{x},u)\) such that
$$ \mathcal L^\alpha _{xx}(\bar{x},Y^*)(u,u) + \mathrm d^2\delta _{\mathbb S^m_+}\bigl(F(\bar{x}),Y^*\bigr)\bigl(F'(\bar{x})u\bigr) > 0. \qquad\qquad (3.1) $$
Then \(\bar{x}\) is an essential local minimizer of second order for (NSDP), i.e., there are \(\varepsilon >0\) and \(\beta >0\) such that
$$ \max \bigl(f(x)-f(\bar{x}),\,{\text {dist}}_{\mathbb S^m_+}(F(x))\bigr) \ge \beta \left\| x-\bar{x}\right\| ^2 \qquad \text{for all } x\in \mathbb B_\varepsilon (\bar{x}). \qquad (3.2) $$
Particularly, \(\bar{x}\) is a strict local minimizer of (NSDP).
It is clear by definition of the second subderivative that (3.1) can only hold for some \(u\in \mathcal C(\bar{x}){\setminus }\{0\}\), \(\alpha \ge 0\), and \(Y^*\in \Lambda ^\alpha (\bar{x},u)\) if \((\alpha ,Y^*)\ne (0,O)\), i.e., the non-triviality of the appearing generalized Lagrange multipliers is inherent.
We also note that the growth condition (3.2) is slightly more restrictive than
$$ f(x) \ge f(\bar{x}) + \beta \left\| x-\bar{x}\right\| ^2 \qquad \text{for all } x\in \mathcal F\cap \mathbb B_\varepsilon (\bar{x}), $$
which is referred to as the second-order growth condition associated with (NSDP) at \(\bar{x}\) in the literature.
In order to turn (3.1) into a valuable second-order optimality condition, the appearing second subderivative of \(\delta _{\mathbb S^m_+}\) has to be evaluated or, at least, estimated from below. For example, this strategy has been used in [2] to infer second-order sufficient conditions in nonlinear second-order cone programming, and it turned out to be much simpler than the more technical verification strategies from [5, 18]. Here, we present a similar analysis for nonlinear semidefinite programs. As already remarked in [2], obtaining second-order necessary optimality conditions based on second subderivatives is often not reasonable since this would come along with comparatively strong regularity conditions which are necessary in order to get the calculus rules for second subderivatives working.
In the subsequent lemma, an explicit formula for the second subderivative of \(\delta _{\mathbb S^m_+}\) is presented.
Lemma 2
For each \(Y\in \mathbb S^m_+\), \(V\in \mathcal T_{\mathbb S^m_+}(Y)\), and \(Y^*\in \mathcal N_{\mathbb S^m_+}(Y)\cap \{V\}^\perp\), we have
$$ \mathrm d^2\delta _{\mathbb S^m_+}(Y,Y^*)(V) = -2\left\langle Y^*, VY^\dagger V\right\rangle . \qquad\qquad (3.3) $$
Proof
Let \(Y=P^\top MP\) be an ordered eigenvalue decomposition of Y with orthogonal matrix \(P\in \mathbb {R}^{m\times m}\) and diagonal matrix \(M\in \mathbb S^m\) as well as the index sets \(\pi\) and \(\omega\) as defined in Sect. 2.2. From \(Y^*\in \mathcal N_{\mathbb S^m_+}(Y)\), we find \((Y^*)_{\pi \pi }^P=O\), \((Y^*)_{\pi \omega }^P=O\), and \((Y^*)_{\omega \omega }^P\in \mathbb S^{|\omega |}_-\). Furthermore, \(V\in \mathcal T_{\mathbb S^m_+}(Y)\) gives \(V^P_{\omega \omega }\in \mathbb S^{|\omega |}_+\). From \(\left\langle Y^*, V\right\rangle =0\) and orthogonality of P, we have
$$ 0 = \left\langle Y^*, V\right\rangle = \left\langle (Y^*)^P, V^P\right\rangle = \left\langle (Y^*)^P_{\omega \omega }, V^P_{\omega \omega }\right\rangle , $$
which gives \(\left\langle (Y^*)^P_{\omega \omega }, V^P_{\omega \omega }\right\rangle =0\).
For given \(V'\in \mathbb S^m\) and sufficiently small \(t>0\), \(M_{\pi \pi }+t(V')^P_{\pi \pi }\) is positive definite, and since \(Y+tV'\in \mathbb S^m_+\) and \(M+t(V')^P\in \mathbb S^m_+\) are equivalent by orthogonality of P, Lemma 1 can be used to infer that, for small enough \(t>0\), \(Y+tV'\in \mathbb S^m_+\) equals
$$ t(V')^P_{\omega \omega } - t^2 (V')^P_{\omega \pi }\bigl(M_{\pi \pi }+t(V')^P_{\pi \pi }\bigr)^{-1}(V')^P_{\pi \omega }\in \mathbb S^{|\omega |}_+. $$
Thus, from \((Y^*)^P_{\omega \omega }\in \mathbb S^{|\omega |}_-\), we find
$$ \begin{aligned} \mathrm d^2\delta _{\mathbb S^m_+}(Y,Y^*)(V) &= \liminf _{\begin{array}{c} t\downarrow 0,\,V'\rightarrow V\\ Y+tV'\in \mathbb S^m_+ \end{array}} \frac{-2\left\langle (Y^*)^P_{\omega \omega }, (V')^P_{\omega \omega }\right\rangle }{t} \\ &\ge \liminf _{\begin{array}{c} t\downarrow 0,\,V'\rightarrow V \end{array}} -2\left\langle (Y^*)^P_{\omega \omega }, (V')^P_{\omega \pi }\bigl(M_{\pi \pi }+t(V')^P_{\pi \pi }\bigr)^{-1}(V')^P_{\pi \omega }\right\rangle \\ &= -2\left\langle (Y^*)^P_{\omega \omega }, V^P_{\omega \pi }M_{\pi \pi }^{-1}V^P_{\pi \omega }\right\rangle = -2\left\langle Y^*, VY^\dagger V\right\rangle . \end{aligned} $$
Finally, we construct particular sequences \(\{t_k\}_{k\in \mathbb {N}}\subset (0,\infty )\) and \(\{V_k\}_{k\in \mathbb {N}}\subset \mathbb S^m\) which show that this lower estimate is sharp. Therefore, let \(\{t_k\}_{k\in \mathbb {N}}\subset (0,\infty )\) be a null sequence such that \(M_{\pi \pi }+t_kV^P_{\pi \pi }\) is invertible for each \(k\in \mathbb {N}\). Define
$$ \Delta _k := t_k\,V^P_{\omega \pi }\bigl(M_{\pi \pi }+t_kV^P_{\pi \pi }\bigr)^{-1}V^P_{\pi \omega } $$
and
$$ V_k := P^\top \left( V^P + \begin{pmatrix} O &{} O \\ O &{} \Delta _k \end{pmatrix}\right) P $$
for each \(k\in \mathbb {N}\). Clearly, we have \(\Delta _k\rightarrow O\) which gives \(V_k\rightarrow V\). By construction, we also have
$$ (V_k)^P_{\pi \pi } = V^P_{\pi \pi },\qquad (V_k)^P_{\pi \omega } = V^P_{\pi \omega },\qquad (V_k)^P_{\omega \omega } = V^P_{\omega \omega }+\Delta _k, $$
and rearrangements lead to
$$ t_k(V_k)^P_{\omega \omega } - t_k^2 (V_k)^P_{\omega \pi }\bigl(M_{\pi \pi }+t_k(V_k)^P_{\pi \pi }\bigr)^{-1}(V_k)^P_{\pi \omega } = t_k\,V^P_{\omega \omega }\in \mathbb S^{|\omega |}_+. $$
Thus, Lemma 1 gives \(Y+t_kV_k\in \mathbb S^m_+\) for each \(k\in \mathbb {N}\). Reprising the above steps for the estimation of the lower limit and recalling \(\left\langle (Y^*)^P_{\omega \omega }, V^P_{\omega \omega }\right\rangle =0\), we find
$$ \mathrm d^2\delta _{\mathbb S^m_+}(Y,Y^*)(V) \le \lim _{k\rightarrow \infty } \frac{-2\left\langle Y^*, V_k\right\rangle }{t_k} = \lim _{k\rightarrow \infty } -2\left\langle (Y^*)^P_{\omega \omega }, V^P_{\omega \pi }\bigl(M_{\pi \pi }+t_kV^P_{\pi \pi }\bigr)^{-1}V^P_{\pi \omega }\right\rangle = -2\left\langle Y^*, VY^\dagger V\right\rangle . $$
This already completes the proof.\(\square\)
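The computation in the proof can be illustrated numerically. The Python sketch below (a toy example of ours; all data are ad-hoc) takes \(Y={\text {diag}}(2,1,0)\), a tangent direction V with vanishing \(\omega \omega\)-block, and \(Y^*={\text {diag}}(0,0,-1)\), builds feasible perturbations of V by correcting the \(\omega \omega\)-block with the Schur-complement term as in the proof, and checks that the difference quotients approach the value \(-2\langle Y^*, VY^\dagger V\rangle\):

```python
import numpy as np

# Concrete data: Y = diag(2, 1, 0), so P = I, pi = {0, 1}, omega = {2}
Y = np.diag([2.0, 1.0, 0.0])
Ydag = np.diag([0.5, 1.0, 0.0])              # Moore-Penrose pseudoinverse
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])              # omega-omega block is 0: tangent
Ystar = np.diag([0.0, 0.0, -1.0])            # normal cone, <Y*, V> = 0

# the curvature value -2<Y*, V Y^dagger V>
curv = -2.0 * np.trace(Ystar @ V @ Ydag @ V)

# difference quotients -2<Y*, V_t>/t along feasible perturbations V_t
Mpp, Vpp, Vpo = Y[:2, :2], V[:2, :2], V[:2, 2]
for t in [1e-2, 1e-4, 1e-6]:
    delta = t * Vpo @ np.linalg.solve(Mpp + t * Vpp, Vpo)
    Vt = V.copy()
    Vt[2, 2] += delta                        # Schur-complement correction
    # Y + t*Vt stays positive semidefinite ...
    assert np.min(np.linalg.eigvalsh(Y + t * Vt)) >= -1e-9
    # ... and the quotient approaches the curvature value
    assert abs(-2.0 * np.trace(Ystar @ Vt) / t - curv) <= 10 * t

print(curv)  # 3.0
```

Here the quotient equals \(2(1/(2+t)+1/(1+t))\), which tends to \(3=-2\langle Y^*, VY^\dagger V\rangle\) as \(t\downarrow 0\).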
Let us note that the assertion of Lemma 2 has been proven in [25, Example 3.7] with the aid of some deeper results from [33] addressing variational properties of eigenvalue functions. In contrast, our proof is rather elementary.
Combining this result with Theorem 1, we obtain fully explicit second-order sufficient optimality conditions for (NSDP).
Corollary 1
Let \(\bar{x}\in \mathcal F\) be chosen such that for each \(u\in \mathcal C(\bar{x}){\setminus }\{0\}\), there are \(\alpha \ge 0\) and \(Y^*\in \Lambda ^\alpha (\bar{x},u)\) such that
$$ \mathcal L^\alpha _{xx}(\bar{x},Y^*)(u,u) - 2\left\langle Y^*, F'(\bar{x})u\,F(\bar{x})^\dagger \,F'(\bar{x})u\right\rangle > 0. $$
Then \(\bar{x}\) is an essential local minimizer of second order for (NSDP).
Let us draw the reader’s attention to the simplicity of the above arguments which have been used to obtain this second-order optimality condition. Theorem 1 is proven via a standard contradiction argument. Further, the computation of the appearing second subderivative of \(\delta _{\mathbb S^m_+}\) is completely elementary and relies on the standard approach of working with an ordered eigenvalue decomposition. In [30, Theorem 9] and [6, Section 5.3.5], related second-order sufficient conditions, based on the same expression for the curvature term, i.e., the right-hand side in (3.3), but with a weaker growth condition were obtained using the theory of second-order tangent sets. This approach is much more technical and relies on deeper mathematics such as second-order directional differentiability of the smallest eigenvalue of a positive semidefinite matrix.
4 Concluding remarks
In this note, we computed the second subderivative of the indicator function associated with the cone of all positive semidefinite matrices, and this finding was used to obtain second-order sufficient optimality conditions in nonlinear semidefinite optimization. This procedure recovered the findings from [30] in an elementary way. In the future, it needs to be studied whether this second-order sufficient condition can be employed beneficially in numerical optimization, as in [19], where the local analysis of a multiplier-penalty method for second-order cone programs is investigated. Furthermore, it seems reasonable to check whether our approach to second-order sufficient conditions yields comprehensive results when applied to optimization problems with semidefinite cone complementarity constraints, see e.g. [11, 22, 35]. Finally, we note that
$$ \mathbb S^m_+ = \{A\in \mathbb S^m\,|\,\forall x\in \mathbb {R}^m\colon x^\top Ax\ge 0\} $$
holds, so \(\mathbb S^m_+\) is a special instance of the closed, convex cone
$$ \mathbb S^m_+(K) := \{A\in \mathbb S^m\,|\,\forall x\in K\colon x^\top Ax\ge 0\} $$
where \(K\subset \mathbb {R}^m\) is an arbitrary closed, convex cone. In the literature, \(\mathbb S^m_+(K)\) is referred to as the set-semidefinite or set-copositive cone associated with K, and for \(K:=\mathbb {R}^m_+\), the popular copositive cone is obtained, see [4, 9, 12, 13] for further information about this cone and applications of copositive optimization. Following the approach of this note, it might be possible to obtain second-order sufficient conditions for nonlinear optimization problems involving \(\mathbb S^m_+(K)\). However, it is well known that the variational geometry of \(\mathbb S^m_+(K)\) is much more challenging for general K than for \(K:=\mathbb {R}^m\), so the necessary computations might be much more involved than the ones from Lemma 2.
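As a small illustration of the gap between \(\mathbb S^m_+\) and the copositive cone \(\mathbb S^m_+(\mathbb {R}^m_+)\) (our own example, not from the paper), the following Python sketch verifies that the matrix with zero diagonal and unit off-diagonal entries satisfies \(x^\top Ax=2x_1x_2\ge 0\) for all \(x\ge 0\), yet fails to be positive semidefinite:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# copositivity check on random nonnegative sample points x >= 0
rng = np.random.default_rng(3)
X = rng.random((1000, 2))                        # entries in [0, 1)
assert np.all(np.einsum('ki,ij,kj->k', X, A, X) >= 0.0)

# A is not positive semidefinite: its eigenvalues are -1 and +1
assert np.min(np.linalg.eigvalsh(A)) < 0.0
```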
References
Ben-Tal, A.: Second-order and related extremality conditions in nonlinear programming. J. Optimizat. Theory Appl. 31(2), 143–165 (1980). https://doi.org/10.1007/BF00934107
Benko, M., Mehlitz, P.: Why second-order sufficient conditions are, in a way, easy – or – revisiting calculus for second subderivatives. J. Convex Anal., in press (2023). https://arxiv.org/abs/2206.03918
Benko, M., Gfrerer, H., Ye, J.J., Zhang, J., Zhou, J.: Second-order optimality conditions for general nonconvex optimization problems and variational analysis of disjunctive systems. arXiv preprint (2022). https://arxiv.org/abs/2203.10015
Bomze, I.M.: Copositive optimization–recent developments and applications. Eur. J. Operat. Res. 216(3), 509–520 (2012). https://doi.org/10.1016/j.ejor.2011.04.026
Bonnans, J.F., Ramírez, C.H.: Perturbation analysis of second-order cone programming problems. Math. Program. 104, 205–227 (2005). https://doi.org/10.1007/s10107-005-0613-4
Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)
Bonnans, J.F., Cominetti, R., Shapiro, A.: Second order optimality conditions based on parabolic second order tangent sets. SIAM J. Optimizat. 9(2), 466–492 (1999). https://doi.org/10.1137/S1052623496306760
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Burer, S.: A gentle, geometric introduction to copositive optimization. Math. Program. 151, 89–116 (2015). https://doi.org/10.1007/s10107-015-0888-z
Christof, C., Wachsmuth, G.: No-gap second-order conditions via a directional curvature functional. SIAM J. Optimizat. 28(3), 2097–2130 (2018). https://doi.org/10.1137/17M1140418
Ding, C., Sun, D., Ye, J.J.: First order optimality conditions for mathematical programs with semidefinite cone complementarity constraints. Math. Program. 147(1–2), 539–579 (2014). https://doi.org/10.1007/s10107-013-0735-z
Dür, M.: Copositive programming – a survey. In: Diehl M, Glineur F, Jarlebring E, Michiels W (eds) Recent Advances in Optimization and its Applications in Engineering, Springer, Berlin, pp 3–20, https://doi.org/10.1007/978-3-642-12598-0_1 (2010)
Dür, M., Rendl, F.: Conic optimization: a survey with special focus on copositive optimization and binary quadratic problems. EURO J. Computat. Optimizat. 9, 100021 (2021). https://doi.org/10.1016/j.ejco.2021.100021
Forsgren, A.: Optimality conditions for nonconvex semidefinite programming. Math. Program. 88, 105–128 (2000). https://doi.org/10.1007/PL00011370
Fukuda, E.H., Haeser, G., Mito, L.M.: On the weak second-order optimality condition for nonlinear semidefinite and second-order cone programming. Set-Valued Variat. Anal. (2023). https://doi.org/10.1007/s11228-023-00676-1
Gfrerer, H., Ye, J.J., Zhou, J.: Second-order optimality conditions for nonconvex set-constrained optimization problems. Math. Operat. Res. 47(3), 2344–2365 (2022). https://doi.org/10.1287/moor.2021.1211
Hang, N.T.V., Sarabi, M.E.: Local convergence analysis of augmented Lagrangian methods for piecewise linear-quadratic composite optimization problems. SIAM J. Optimizat. 31(4), 2665–2694 (2021). https://doi.org/10.1137/20M1375188
Hang, N.T.V., Mordukhovich, B.S., Sarabi, M.E.: Second-order variational analysis in second-order cone programming. Math. Program. 180(1), 75–116 (2020). https://doi.org/10.1007/s10107-018-1345-6
Hang, N.T.V., Mordukhovich, B.S., Sarabi, M.E.: Augmented Lagrangian method for second-order cone programs under second-order sufficiency. J. Glob. Optimizat. 82, 51–81 (2022). https://doi.org/10.1007/s10898-021-01068-1
Hiriart-Urruty, J.B., Malick, J.: A fresh variational-analysis look at the positive semidefinite matrices world. J. Optimizat. Theory Appl. 153(3), 551–577 (2012). https://doi.org/10.1007/s10957-011-9980-6
Jarre, F.: Elementary optimality conditions for nonlinear SDPs. In: Anjos MF, Lasserre JB (eds) Handbook on Semidefinite, Conic and Polynomial Optimization, Springer, Boston, MA, pp 455–470, https://doi.org/10.1007/978-1-4614-0769-0_16 (2012)
Liu, Y., Pan, S.: Second-order optimality conditions for mathematical programs with semidefinite cone complementarity constraints and applications. Set-Valued Variat. Anal. 30, 373–395 (2022). https://doi.org/10.1007/s11228-021-00587-z
Lourenço, B.F., Fukuda, E.H., Fukushima, M.: Optimality conditions for nonlinear semidefinite programming via squared slack variables. Math. Program. 168, 177–200 (2018). https://doi.org/10.1007/s10107-016-1040-4
McCormick, G.: Second order conditions for constrained minima. SIAM J. Appl. Math. 15(3), 641–652 (1967). https://doi.org/10.1137/0115056
Mohammadi, A., Sarabi, M.E.: Twice epi-differentiability of extended-real-valued functions with applications in composite optimization. SIAM J. Optimizat. 30(3), 2379–2409 (2020). https://doi.org/10.1137/19M1300066
Mohammadi, A., Mordukhovich, B.S., Sarabi, M.E.: Parabolic regularity in geometric variational analysis. Trans. Am. Math. Soc. 374, 1711–1763 (2021). https://doi.org/10.1090/tran/8253
Rockafellar, R.T.: Second-order optimality conditions in nonlinear programming obtained by way of epi-derivatives. Math. Operat. Res. 14(3), 462–484 (1989). https://doi.org/10.1287/moor.14.3.462
Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998)
Sarabi, M.E.: Primal superlinear convergence of SQP methods in piecewise linear-quadratic composite optimization. Set-Valued Var. Anal. 30, 1–37 (2022). https://doi.org/10.1007/s11228-021-00580-6
Shapiro, A.: First and second order analysis of nonlinear semidefinite programs. Math. Program. 77, 301–320 (1997). https://doi.org/10.1007/BF02614439
Sun, D.: The strong second-order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications. Math. Operat. Res. 31(4), 761–776 (2006). https://doi.org/10.1287/moor.1060.0195
Thinh, V.D., Chuong, T.D., Anh, N.L.H.: Second order variational analysis of disjunctive constraint sets and its applications to optimization problems. Optimizat. Lett. 15, 2201–2224 (2021). https://doi.org/10.1007/s11590-020-01681-1
Torki, M.: First- and second-order epi-differentiability in eigenvalue optimization. J. Math. Anal. Appl. 234(2), 391–416 (1999). https://doi.org/10.1006/jmaa.1999.6320
Wachsmuth, D., Wachsmuth, G.: Second-order conditions for non-uniformly convex integrands: quadratic growth in \(L^{1}\). J. Nonsmooth Anal. Optimizat. 3:8733, https://doi.org/10.46298/jnsao-2022-8733 (2022)
Wu, J., Zhang, L., Zhang, Y.: Mathematical programs with semidefinite cone complementarity constraints: constraint qualifications and optimality conditions. Set-Valued Variat. Anal. 22, 155–187 (2014). https://doi.org/10.1007/s11228-013-0242-7
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
Not available
Mehlitz, P. A simple proof of second-order sufficient optimality conditions in nonlinear semidefinite optimization. Optim Lett 18, 965–976 (2024). https://doi.org/10.1007/s11590-023-02031-7