Weakly Convex Optimization over Stiefel Manifold Using Riemannian Subgradient-Type Methods
We consider a class of nonsmooth optimization problems over the Stiefel manifold, in which the objective function is weakly convex in the ambient Euclidean space. Such problems are ubiquitous in engineering applications but still largely unexplored. We ...
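The basic iteration underlying Riemannian subgradient-type methods on the Stiefel manifold can be conveyed with a minimal sketch (this is an illustrative step, not the paper's algorithm; the function name and the choice of a QR retraction are assumptions): project a Euclidean subgradient onto the tangent space at the current point, take a step, and retract back onto the manifold.

```python
import numpy as np

def stiefel_subgrad_step(X, G, step):
    """One illustrative Riemannian subgradient step on St(n, p).

    X    : n x p matrix with orthonormal columns (X^T X = I_p)
    G    : a Euclidean (sub)gradient of the objective at X
    step : step size
    """
    sym = 0.5 * (X.T @ G + G.T @ X)
    xi = G - X @ sym               # projection of G onto the tangent space at X
    Y = X - step * xi              # Euclidean step in the tangent direction
    Q, R = np.linalg.qr(Y)         # QR retraction back onto the manifold
    Q = Q * np.sign(np.diag(R))    # sign fix so the retraction is well defined
    return Q

rng = np.random.default_rng(0)
X0, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # a random Stiefel point
G = rng.standard_normal((5, 2))                     # stand-in subgradient
X1 = stiefel_subgrad_step(X0, G, 0.1)
print(np.allclose(X1.T @ X1, np.eye(2)))            # iterate stays on St(5, 2)
```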
Moreau Envelope of Supremum Functions with Applications to Infinite and Stochastic Programming
In this paper, we investigate the Moreau envelope of the supremum of a family of convex, proper, and lower semicontinuous functions. Under mild assumptions, we prove that the Moreau envelope of a supremum is the supremum of Moreau envelopes, which allows ...
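The Moreau envelope itself, $e_{\lambda}f(x)=\min_{y} f(y)+\|x-y\|^{2}/(2\lambda)$, is easy to illustrate numerically. The brute-force grid minimization below (purely for illustration, unrelated to the paper's proofs) recovers the classical fact that for $f(x)=|x|$ and $\lambda=1$ the envelope is the Huber function: $x^2/2$ for $|x|\le 1$ and $|x|-1/2$ otherwise.

```python
import numpy as np

def moreau_envelope(f, x, lam=1.0, grid=None):
    """Numerically evaluate e_lam f(x) = min_y f(y) + |x - y|^2 / (2 lam)
    by brute-force minimization over a fine grid (illustration only)."""
    if grid is None:
        grid = np.linspace(-10.0, 10.0, 200001)
    return np.min(f(grid) + (x - grid) ** 2 / (2.0 * lam))

# Compare against the Huber closed form at two test points.
for x, expected in [(0.5, 0.5 ** 2 / 2.0), (3.0, 3.0 - 0.5)]:
    print(abs(moreau_envelope(np.abs, x) - expected) < 1e-6)
```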
Nonlinear Conjugate Gradient Methods for PDE Constrained Shape Optimization Based on Steklov--Poincaré-Type Metrics
Shape optimization based on shape calculus has received a lot of attention in recent years, particularly regarding the development, analysis, and modification of efficient optimization algorithms. In this paper we propose and investigate nonlinear ...
Efficient Algorithms for Distributionally Robust Stochastic Optimization with Discrete Scenario Support
Recently, there has been growing interest in distributionally robust optimization (DRO) as a principled approach to data-driven decision making. In this paper, we consider a distributionally robust two-stage stochastic optimization problem with discrete ...
Approximate 1-Norm Minimization and Minimum-Rank Structured Sparsity for Various Generalized Inverses via Local Search
Fundamental in matrix algebra and its applications, a generalized inverse of a real matrix $A$ is a matrix $H$ that satisfies the Moore--Penrose (M--P) property $AHA=A$. If $H$ also satisfies the additional useful M--P property, $HAH=H$, it is called a ...
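The four Moore--Penrose properties are easy to check numerically. A minimal sketch using NumPy's pseudoinverse, which satisfies all four, whereas a generalized inverse in the sense above need only satisfy the first, $AHA=A$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))       # a generic full-row-rank matrix
H = np.linalg.pinv(A)                 # the Moore-Penrose pseudoinverse of A

print(np.allclose(A @ H @ A, A))      # M-P property 1: AHA = A
print(np.allclose(H @ A @ H, H))      # M-P property 2 (reflexivity): HAH = H
print(np.allclose((A @ H).T, A @ H))  # M-P property 3: AH is symmetric
print(np.allclose((H @ A).T, H @ A))  # M-P property 4: HA is symmetric
```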
An Inexact Augmented Lagrangian Method for Second-Order Cone Programming with Applications
In this paper, we adopt the augmented Lagrangian method (ALM) to solve convex quadratic second-order cone programming problems (SOCPs). Fruitful results on the efficiency of the ALM have been established in the literature. Recently, it has been shown in [...
Characterizing Convexity of Images for Quadratic-Linear Mappings with Applications in Nonconvex Quadratic Optimization
Various characterizations of convexity for images of a vector mapping where some of its components are quadratic and the remaining ones are linear are established. In a certain sense, one might conclude that convexity of the full image is reduced to the ...
Unified Acceleration of High-Order Algorithms under General Hölder Continuity
In this paper, through an intuitive vanilla proximal method perspective, we derive a concise unified acceleration framework (UAF) for minimizing a convex function that has Hölder continuous derivatives with respect to general (non-Euclidean) norms. The ...
Fundamental Domains for Symmetric Optimization: Construction and Search
Symmetries of a set are linear transformations that map the set to itself. A fundamental domain of a symmetric set is a subset that contains at least one representative from each of the symmetric equivalence classes (orbits) in the set. This paper ...
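A classical example of a fundamental domain, not specific to this paper's construction: for the symmetric group acting on $\mathbb{R}^n$ by permuting coordinates, the cone of nonincreasingly sorted vectors meets every orbit in exactly one point, and the representative of a point is simply its sorted rearrangement. A minimal sketch:

```python
import numpy as np
from itertools import permutations

def representative(x):
    """Map a point to its orbit representative in the fundamental
    domain {x : x_1 >= x_2 >= ... >= x_n} by sorting."""
    return tuple(sorted(x, reverse=True))

x = (2.0, -1.0, 3.0)
# The full orbit of x under coordinate permutations ...
orbit = {tuple(np.array(x)[list(p)]) for p in permutations(range(3))}
# ... collapses to a single representative in the sorted cone.
reps = {representative(y) for y in orbit}
print(reps)
```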
MG/OPT and Multilevel Monte Carlo for Robust Optimization of PDEs
An algorithm is proposed to solve robust control problems constrained by partial differential equations with uncertain coefficients, based on the so-called MG/OPT framework. The levels in the MG/OPT hierarchy correspond to discretization levels of the PDE, ...
Dual Randomized Coordinate Descent Method for Solving a Class of Nonconvex Problems
We consider a nonconvex optimization problem consisting of maximizing the difference of two convex functions. We present a randomized method that requires low computational effort at each iteration. The described method is a randomized coordinate descent ...
Outlier Detection in Time Series via Mixed-Integer Conic Quadratic Optimization
We consider the problem of estimating the true values of a Wiener process given noisy observations corrupted by outliers. In this paper we show how to improve existing mixed-integer quadratic optimization formulations for this problem. Specifically, we ...
Attouch--Théra Duality, Generalized Cycles, and Gap Vectors
Using the Attouch--Théra duality, we study the cycles, gap vectors, and fixed point sets of compositions of proximal mappings. Sufficient conditions are given for the existence of cycles and gap vectors. A primal-dual framework provides an exact ...
Method of Alternating Contractions and Its Applications to Some Convex Optimization Problems
This article proposes an algorithm for convex programming that can be viewed as a further development of the method of alternating projections. Some combinatorial problems, such as maximum matchings and maximum simple $b$-matchings, can be solved in ...
Adaptive Douglas--Rachford Splitting Algorithm from a Yosida Approximation Standpoint
The adaptive Douglas--Rachford splitting algorithm iteratively applies the operator $T=\kappa_{n}Q_{A}Q_{B}+(1-\kappa_{n})\mathrm{Id}$ to solve the inclusion problem $\text{zer}(A+B)$. By taking a Yosida approximation standpoint, we express in canonical form $Q_{A}...
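With a fixed $\kappa_{n}=1/2$ and $A$, $B$ the normal-cone operators of two lines in the plane, the iteration reduces to the classical Douglas--Rachford map: the resolvents $J_{A}$, $J_{B}$ are projections and $Q_{A}=2J_{A}-\mathrm{Id}$, $Q_{B}=2J_{B}-\mathrm{Id}$ are reflections. A toy sketch of this special case (illustration only, not the adaptive scheme itself):

```python
import numpy as np

def reflect_x_axis(z):      # Q_A for the line {y = 0}
    return np.array([z[0], -z[1]])

def reflect_diagonal(z):    # Q_B for the line {x = y}
    return np.array([z[1], z[0]])

kappa = 0.5                 # fixed relaxation: classical Douglas-Rachford
z = np.array([1.0, 2.0])
for _ in range(80):
    z = kappa * reflect_x_axis(reflect_diagonal(z)) + (1 - kappa) * z

# The shadow sequence J_B z (projection onto {x = y}) approaches the
# unique point of zer(A + B), the intersection of the two lines: (0, 0).
shadow = np.array([(z[0] + z[1]) / 2, (z[0] + z[1]) / 2])
print(np.linalg.norm(shadow))
```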
Operator Splitting for a Homogeneous Embedding of the Linear Complementarity Problem
We present a first-order quadratic cone programming algorithm that can scale to very large problem sizes and produce modest accuracy solutions quickly. Our algorithm returns primal-dual optimal solutions when available or certificates of infeasibility ...
Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
In this paper, we study the sequential convex programming method with monotone line search (SCP$_{ls}$) in [Z. Lu, Sequential Convex Programming Methods for a Class of Structured Nonlinear Programming, https://arxiv.org/abs/1210.3039, 2012] for a class ...
Genericity Analysis of Multi-Leader-Disjoint-Followers Game
Multi-leader-follower games are models which combine bilevel programming with generalized Nash games. Due to their many applications, they have attracted increasing attention in recent years. We introduce here a particular case of the so-called ...
The Structure of Conservative Gradient Fields
The classical Clarke subdifferential alone is inadequate for understanding automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely on enlarged generalized gradients called “conservative fields,” defined through the natural ...
Inexact Cuts in Stochastic Dual Dynamic Programming Applied to Multistage Stochastic Nondifferentiable Problems
In [V. Guigues, SIAM J. Optim., 30 (2020), pp. 407--438], an inexact variant of stochastic dual dynamic programming (SDDP) called ISDDP was introduced which uses approximate (rather than exact, as in SDDP) primal-dual solutions of the problems solved in the ...
Stochastic Dynamic Linear Programming: A Sequential Sampling Algorithm for Multistage Stochastic Linear Programming
Multistage stochastic programming deals with operational and planning problems that involve a sequence of decisions over time while responding to an uncertain future. Algorithms designed to address multistage stochastic linear programming (MSLP) problems ...
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
In this work, we consider a constrained convex problem with linear inequalities and provide an inexact penalty reformulation of the problem. The novelty is in the choice of the penalty functions, which are smooth and can induce a nonzero penalty over some ...
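The flavor of a smooth penalty reformulation for linear inequalities can be conveyed with a generic squared-hinge penalty (this particular penalty is an illustrative assumption, not necessarily the one proposed in the paper): the penalized problem is smooth and unconstrained, and its minimizer approaches the constrained solution as the penalty weight grows.

```python
# Toy problem: minimize x^2 subject to x >= 1, i.e. 1 - x <= 0.
# Replace the constraint by the smooth penalty beta * max(0, 1 - x)^2
# and run plain gradient descent on the penalized objective.

beta = 100.0

def grad(x):
    """Gradient of x^2 + beta * max(0, 1 - x)^2."""
    return 2.0 * x - 2.0 * beta * max(0.0, 1.0 - x)

x = 0.0
for _ in range(5000):
    x -= grad(x) / (2.0 + 2.0 * beta)   # step = 1/L for this objective

# The penalized minimizer is beta / (1 + beta), slightly infeasible;
# it tends to the true constrained solution x* = 1 as beta -> infinity.
print(abs(x - beta / (1.0 + beta)) < 1e-8)
```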
A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints
We consider a class of optimization problems with Cartesian variational inequality (CVI) constraints, where the objective function is convex and the CVI is associated with a monotone mapping and a convex Cartesian product set. This mathematical ...
Nonlinear Forward-Backward Splitting with Projection Correction
We propose and analyze a versatile and general algorithm called nonlinear forward-backward splitting (NOFOB). The algorithm consists of two steps: an evaluation of a nonlinear forward-backward map, followed by a relaxed projection onto the separating ...
Optimizing Hypergraph-Based Polynomials Modeling Job-Occupancy in Queuing with Redundancy Scheduling
We investigate two classes of multivariate polynomials with variables indexed by the edges of a uniform hypergraph and coefficients depending on certain patterns of unions of edges. These polynomials arise naturally to model job-occupancy in some ...
An SQP Method for Equality Constrained Optimization on Hilbert Manifolds
We extend a sequential quadratic programming method for equality constrained optimization to the setting of Hilbert manifolds. The use of retractions and linearizing maps allows us to pull back the functional and the constraint mapping to linear spaces ...
Minimizing Rational Functions: A Hierarchy of Approximations via Pushforward Measures
This paper is concerned with minimizing a sum of rational functions over a compact set of high dimension. Our approach relies on the second Lasserre hierarchy (also known as the upper bounds hierarchy) formulated on the pushforward measure in order to ...
Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
Conditional gradient methods have recently attracted much attention in both the machine learning and optimization communities. These simple methods can guarantee the generation of sparse solutions. In addition, without the computation of full gradients, they ...
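The basic conditional gradient (Frank--Wolfe) iteration these methods build on can be sketched on a simplex-constrained least-squares toy problem (an illustrative sketch only, not the affine- or nonlinearly-constrained variants proposed in the paper): the linear minimization oracle over the simplex returns a vertex, so iterates remain sparse convex combinations.

```python
import numpy as np

# Minimize f(x) = ||Ax - b||^2 over the probability simplex in R^5.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)

x = np.ones(5) / 5.0                   # feasible starting point
for k in range(200):
    g = 2.0 * A.T @ (A @ x - b)        # gradient of f at x
    s = np.zeros(5)
    s[np.argmin(g)] = 1.0              # LMO over the simplex: a vertex
    gamma = 2.0 / (k + 2.0)            # standard open-loop step size
    x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible

print(abs(x.sum() - 1.0) < 1e-9, bool((x >= 0).all()))
```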
A Unified Approach to Mixed-Integer Optimization Problems With Logical Constraints
We propose a unified framework to address a family of classical mixed-integer optimization problems with logically constrained decision variables, including network design, facility location, unit commitment, sparse portfolio selection, binary quadratic ...
A Lagrange Multiplier Expression Method for Bilevel Polynomial Optimization
This paper studies bilevel polynomial optimization. We propose a method to solve it globally by using polynomial optimization relaxations. Each relaxation is obtained from the Karush--Kuhn--Tucker (KKT) conditions for the lower level optimization and the ...