
A VU-algorithm for convex minimization

  • Published in: Mathematical Programming

Abstract

For convex minimization we introduce an algorithm based on VU-space decomposition. The method uses a bundle subroutine to generate a sequence of approximate proximal points. When a primal-dual track leading to a solution and zero subgradient pair exists, these points approximate the primal track points and give the algorithm's V, or corrector, steps. The subroutine also approximates dual track points that are U-gradients needed for the method's U-Newton predictor steps. With the inclusion of a simple line search the resulting algorithm is proved to be globally convergent. The convergence is superlinear if the primal-dual track points and the objective's U-Hessian are approximated well enough.
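The predictor-corrector structure described above can be illustrated on a toy problem. The sketch below is a hypothetical simplification, not the paper's algorithm: the bundle subroutine is replaced by an exact proximal step, and the function f(x) = |x1| + x2^2, its subspaces (V = the nonsmooth x1-axis, U = the smooth x2-axis), and the helper names `prox` and `vu_step` are all assumptions made for illustration.

```python
import math

# Toy VU-style predictor-corrector iteration on f(x) = |x1| + x2^2.
# Here the V (nonsmooth) subspace is the x1-axis and the U (smooth)
# subspace is the x2-axis; the exact proximal point stands in for the
# bundle subroutine's approximate proximal points.

def prox(x, t):
    """Exact proximal point of f at x with prox-parameter t (f is separable)."""
    # V-component: soft-thresholding, the proximal map of |.|
    v = math.copysign(max(abs(x[0]) - t, 0.0), x[0])
    # U-component: proximal map of the quadratic u^2, in closed form
    u = x[1] / (1.0 + 2.0 * t)
    return (v, u)

def vu_step(x, t=1.0):
    """One V (corrector) prox step followed by a U-Newton predictor step."""
    p = prox(x, t)                         # V, or corrector, step
    u_grad, u_hess = 2.0 * p[1], 2.0       # U-gradient and U-Essentially exact U-Hessian at p
    return (p[0], p[1] - u_grad / u_hess)  # U-Newton predictor step

x = (5.0, 3.0)
for _ in range(10):
    x = vu_step(x)
print(x)  # converges to the minimizer (0.0, 0.0)
```

Because the U-Hessian of this toy objective is known exactly, the U-Newton step lands on the minimizer of the smooth component in one iteration; in the paper's setting both the track points and the U-Hessian are only approximated, which is why convergence is superlinear rather than finite.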



Author information


Corresponding author

Correspondence to Claudia Sagastizábal.

Additional information

Dedicated to Terry Rockafellar who has had a great influence on our work via strong support for proximal points and for structural definitions that involve tangential convergence.

On leave from INRIA Rocquencourt

Research of the first author supported by the National Science Foundation under Grant No. DMS-0071459 and by CNPq (Brazil) under Grant No. 452966/2003-5. Research of the second author supported by FAPERJ (Brazil) under Grant No.E26/150.581/00 and by CNPq (Brazil) under Grant No. 383066/2004-2.


Cite this article

Mifflin, R., Sagastizábal, C. A VU-algorithm for convex minimization. Math. Program. 104, 583–608 (2005). https://doi.org/10.1007/s10107-005-0630-3
