A stochastic primal-dual method for a class of nonconvex constrained optimization

Published: 01 September 2022

Abstract

In this paper we study a class of nonconvex optimization problems that involve uncertainty in the objective and a large number of nonconvex functional constraints. Challenges often arise when solving this type of problem due to the nonconvexity of the feasible set and the high cost of evaluating the function values and gradients of all constraints simultaneously. To handle these issues, we propose a stochastic primal-dual method. At each iteration, a proximal subproblem based on a stochastic approximation of an augmented Lagrangian function is solved to update the primal variable, which is then used to update the dual variables. We explore theoretical properties of the proposed algorithm and establish its iteration and sample complexities for finding an ϵ-stationary point of the original problem. Numerical tests on a weighted maximin dispersion problem and a nonconvex quadratically constrained optimization problem demonstrate the promising performance of the proposed algorithm.
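To make the iteration described above concrete, the following is a minimal, hedged sketch of a generic stochastic augmented-Lagrangian primal-dual loop for constraints g_j(x) ≤ 0, not the paper's exact method: all parameter choices (`rho`, `eta`, `sigma`, the minibatch size, and the plain gradient step standing in for the proximal subproblem) are illustrative assumptions.

```python
import numpy as np

def stochastic_primal_dual(grad_f, g_list, grad_g_list, x0, rho=1.0,
                           eta=0.01, sigma=0.5, batch=4, iters=3000, seed=0):
    """Sketch of a stochastic primal-dual augmented-Lagrangian loop.

    Each iteration samples a minibatch of constraints to build a stochastic
    gradient of the augmented Lagrangian
        f(x) + sum_j max(0, y_j + rho * g_j(x))^2 / (2 * rho),
    takes a gradient step in the primal variable (a stand-in for the
    proximal subproblem), then performs a damped dual ascent step on the
    sampled multipliers. The dual stepsize `sigma` is an assumption here.
    """
    rng = np.random.default_rng(seed)
    m = len(g_list)
    x = np.asarray(x0, dtype=float)
    y = np.zeros(m)                          # multipliers for g_j(x) <= 0
    for _ in range(iters):
        S = rng.choice(m, size=min(batch, m), replace=False)
        gvec = np.array(grad_f(x), dtype=float)
        for j in S:
            # Gradient of the AL penalty term for constraint j.
            coef = max(0.0, y[j] + rho * g_list[j](x))
            gvec += (m / len(S)) * coef * grad_g_list[j](x)
        x = x - eta * gvec                   # primal (gradient) step
        for j in S:                          # dual update on sampled terms
            y[j] = max(0.0, y[j] + sigma * g_list[j](x))
    return x, y
```

As a usage example, minimizing (x₁−2)² + (x₂−2)² subject to ‖x‖² ≤ 1 drives the iterate toward the projection of (2, 2) onto the unit ball.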


Cited By

  • (2023) "Oracle complexity of single-loop switching subgradient methods for non-smooth weakly convex functional constrained optimization." Proceedings of the 37th International Conference on Neural Information Processing Systems, pp. 61327–61340. doi:10.5555/3666122.3668801
  • (2023) "Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization." Computational Optimization and Applications 87(1), 117–147. doi:10.1007/s10589-023-00521-z



Published In

Computational Optimization and Applications  Volume 83, Issue 1
Sep 2022
375 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 September 2022
Accepted: 26 May 2022
Received: 06 July 2021

Author Tags

  1. Nonconvex optimization
  2. Augmented Lagrangian function
  3. Stochastic gradient
  4. ϵ-stationary point
  5. Complexity

Mathematics Subject Classification

  1. 90C30
  2. 90C06
  3. 65K05
  4. 92C15

Qualifiers

  • Research-article
