
A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations

Published in: Computational Optimization and Applications

Abstract

The limited memory BFGS (L-BFGS) method is one of the most popular methods for solving large-scale unconstrained optimization problems. Since the standard L-BFGS method uses a line search to guarantee its global convergence, it sometimes requires a large number of function evaluations. To overcome this difficulty, we propose a new L-BFGS method with a certain regularization technique and show its global convergence under the usual assumptions. To make the method more robust and efficient, we further extend it with several techniques, such as a nonmonotone technique and simultaneous use of the Wolfe line search. Finally, we present numerical results on test problems from CUTEst, which show that the proposed method is robust in the sense that it solves more of the test problems.
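The core idea of a regularized quasi-Newton step without a line search can be sketched as follows. This is an illustrative simplification, not the authors' algorithm: a dense BFGS matrix stands in for the limited-memory representation, and the ratio thresholds and update factors (1e-4, 4.0) are arbitrary choices, not values from the paper.

```python
import numpy as np

def regularized_bfgs(f, grad, x0, mu=1.0, tol=1e-6, max_iter=500):
    """Sketch: the step solves (B + mu*I) d = -g, and mu is adapted by a
    trust-region-style ratio test, so no line search is needed."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    B = np.eye(n)                      # Hessian approximation B_k
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(B + mu * np.eye(n), -g)
        pred = -0.5 * (g @ d)          # model decrease; positive since B + mu*I is PD
        rho = (f(x) - f(x + d)) / pred
        if rho < 1e-4:                 # poor model agreement: enlarge mu, reject step
            mu *= 4.0
            continue
        s = d                          # accept the step
        x = x + d
        g_new = grad(x)
        y = g_new - g
        if s @ y > 1e-10:              # standard BFGS update when curvature holds
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
        g = g_new
        mu = max(mu / 4.0, 1e-8)       # good step: shrink the regularization
    return x
```

On a strongly convex quadratic, for example, each rejected step only enlarges mu (which eventually forces acceptance), so the iteration converges without ever evaluating the function along a line-search path.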


[Figures 1–6]


Data Availability Statement

All 297 test problems analyzed in this study are available in the supplementary information. The data of all test problems were obtained from https://github.com/ralna/CUTEst.

Notes

  1. Quite recently, Steck and Kanzow [19] proposed an efficient calculation technique for \((B_k + \mu I)^{-1}\).

  2. http://wp.kntu.ac.ir/peyghami/pdf/MRLBFGS.rar.
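A standard way to make the solve with \((B_k + \mu I)\) cheap is to apply the Sherman-Morrison-Woodbury formula to the compact representation of the L-BFGS matrix due to Byrd, Nocedal and Schnabel [5]. The sketch below illustrates that algebra under the assumption \(B = \gamma I - W K^{-1} W^\top\); it is not the specific technique of [19].

```python
import numpy as np

def solve_reg_lbfgs(S, Y, gamma, mu, v):
    """Solve (B + mu*I) x = v, where B is the L-BFGS matrix in the compact
    representation of Byrd-Nocedal-Schnabel:
        B = gamma*I - W K^{-1} W^T,   W = [gamma*S, Y],
        K = [[gamma*S^T S, L], [L^T, -D]],
    with L the strictly lower part of S^T Y and D = diag(s_i^T y_i).
    Sherman-Morrison-Woodbury reduces the cost to O(mn) plus one small
    2m x 2m solve, instead of O(n^3) for a dense factorization."""
    StY = S.T @ Y
    L = np.tril(StY, -1)                   # s_i^T y_j for i > j
    D = np.diag(np.diag(StY))              # diag(s_i^T y_i)
    W = np.hstack([gamma * S, Y])          # n x 2m
    K = np.block([[gamma * (S.T @ S), L], [L.T, -D]])
    delta = gamma + mu
    # SMW: (delta*I - W K^{-1} W^T)^{-1} v
    #    = v/delta + W (K - W^T W / delta)^{-1} (W^T v) / delta^2
    inner = np.linalg.solve(K - (W.T @ W) / delta, W.T @ v)
    return v / delta + (W @ inner) / delta**2
```

Because the compact representation coincides with applying the stored BFGS updates to \(B_0 = \gamma I\), the result can be checked against a dense reconstruction of \(B\).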

References

  1. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)

  2. Burdakov, O., Gong, L., Zikrin, S., Yuan, Y.X.: On efficiently combining limited-memory and trust-region techniques. Math. Program. Comput. 9, 101–134 (2017)

  3. Burke, J.V., Wiegmann, A.: Notes on limited memory BFGS updating in a trust-region framework. Technical report, Department of Mathematics, University of Washington (1996)

  4. Burke, J.V., Wiegmann, A., Xu, L.: Limited memory BFGS updating in a trust-region framework. Technical report, Department of Mathematics, University of Washington (2008)

  5. Byrd, R.H., Nocedal, J., Schnabel, R.B.: Representations of quasi-Newton matrices and their use in limited memory methods. Math. Program. 63, 129–156 (1994)

  6. Dennis, J.E., Jr., Moré, J.J.: Quasi-Newton methods, motivation and theory. SIAM Rev. 19, 46–89 (1977)

  7. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  8. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373–394 (2003)

  9. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23, 707–716 (1986)

  10. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)

  11. Moré, J.J., Sorensen, D.C.: Computing a trust region step. SIAM J. Sci. Stat. Comput. 4, 553–572 (1983)

  12. Nocedal, J.: Updating quasi-Newton matrices with limited storage. Math. Comput. 35, 773–782 (1980)

  13. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)

  14. Nocedal, J.: Software for large-scale unconstrained optimization: L-BFGS distribution. Available at: http://users.iems.northwestern.edu/~nocedal/lbfgs.html

  15. Pearson, J.D.: Variable metric methods of minimisation. Comput. J. 12, 171–178 (1969)

  16. Powell, M.J.D.: Some global convergence properties of a variable metric algorithm for minimization without exact line searches. In: Cottle, R.W., Lemke, C.E. (eds.) Nonlinear Programming, SIAM-AMS Proceedings IX. SIAM Publications (1976)

  17. Raydan, M.: The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim. 7, 26–33 (1997)

  18. Shanno, D.F., Phua, K.H.: Matrix conditioning and nonlinear optimization. Math. Program. 14, 149–160 (1978)

  19. Steck, D., Kanzow, C.: Regularization of limited memory quasi-Newton methods for large-scale nonconvex minimization. arXiv preprint arXiv:1911.04584 (2019)

  20. Sugimoto, S., Yamashita, N.: A regularized limited-memory BFGS method for unconstrained minimization problems. Technical report, Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University (2014). Available at: http://www.optimization-online.org/DB_HTML/2014/08/4479.html

  21. Sun, W.: Nonmonotone trust region method for solving optimization problems. Appl. Math. Comput. 156, 159–174 (2004)

  22. Tarzanagh, D.A., Peyghami, M.R.: A new regularized limited memory BFGS-type method based on modified secant conditions for unconstrained optimization problems. J. Global Optim. 63, 709–728 (2015)

  23. Ueda, K., Yamashita, N.: Convergence properties of the regularized Newton method for the unconstrained nonconvex optimization. Appl. Math. Optim. 62, 27–46 (2010)

  24. Ueda, K., Yamashita, N.: A regularized Newton method without line search for unconstrained optimization. Technical report, Department of Applied Mathematics and Physics, Kyoto University (2009)

  25. Ueda, K.: Studies on regularized Newton-type methods for unconstrained minimization problems and their global complexity bounds. Doctoral thesis, Department of Applied Mathematics and Physics, Kyoto University (2012)


Author information

Correspondence to Hardik Tankaria.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Tankaria, H., Sugimoto, S. & Yamashita, N. A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations. Comput Optim Appl 82, 61–88 (2022). https://doi.org/10.1007/s10589-022-00351-5



