Abstract
We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its ability to be implemented matrix-free. This is important, as the matrix-free capability of augmented Lagrangian methods is responsible for the renewed interest in employing such methods to solve large-scale problems. We provide convergence results from remote starting points and illustrate with a set of numerical experiments that our method outperforms traditional augmented Lagrangian methods with respect to critical performance measures.
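For context, the classical (Hestenes–Powell) augmented Lagrangian for an equality-constrained problem min f(x) subject to c(x) = 0 can be written in the following common form; the notation here is illustrative and is not necessarily the scaling used in the paper itself:
\[
  \mathcal{L}(x, y, \mu) \;=\; f(x) \;-\; y^{\top} c(x) \;+\; \frac{1}{2\mu}\,\lVert c(x) \rVert_2^2 ,
\]
where $y$ is a Lagrange multiplier estimate and $\mu > 0$ is the penalty parameter (in this scaling, a smaller $\mu$ penalizes constraint violation more heavily). A traditional method approximately minimizes $\mathcal{L}$ in $x$ for fixed $(y, \mu)$, then updates $y$ and decreases $\mu$ only between subproblem solves; the adaptive scheme described in the abstract instead updates $\mu$ based on measured progress toward feasibility and optimality, in the spirit of steering rules for exact penalty methods.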
Acknowledgments
The authors are grateful to the two referees and the Associate Editor, whose comments and suggestions helped to significantly improve the paper. They are also indebted to Nick Gould and Andrew Conn, whose insights were extremely valuable in the preparation of this article.
Additional information
F. E. Curtis—Research supported by National Science Foundation Grant DMS-1016291 and Department of Energy Grant DE-SC0010615.
H. Jiang, D.P. Robinson—Research supported by National Science Foundation Grant DMS-1217153.
Cite this article
Curtis, F.E., Jiang, H. & Robinson, D.P. An adaptive augmented Lagrangian method for large-scale constrained optimization. Math. Program. 152, 201–245 (2015). https://doi.org/10.1007/s10107-014-0784-y
Keywords
- Nonlinear optimization
- Nonconvex optimization
- Large-scale optimization
- Augmented Lagrangians
- Matrix-free methods
- Steering methods