A double parameter self-scaling memoryless BFGS method for unconstrained optimization

Computational and Applied Mathematics

Abstract

A double parameter self-scaling memoryless BFGS method for unconstrained optimization is presented. In this method, the first two terms of the self-scaling memoryless BFGS matrix are scaled with one positive parameter, while the third term is scaled with another. The first parameter, which scales the first two terms, is determined so as to cluster the eigenvalues of the memoryless BFGS matrix. The second parameter, which scales the third term, is computed by combining a preconditioning of the Hessian of the minimizing function with the minimization of the conjugacy condition, so as to shift the large eigenvalues of the self-scaling memoryless BFGS matrix to the left. The stepsize is determined by the Wolfe line search conditions. The global convergence of the method is proved under the assumption that the minimizing function is uniformly convex. Preliminary computational experiments on a set of 80 unconstrained optimization test functions show that this algorithm is more efficient and more robust than the self-scaling BFGS updates of Oren and Luenberger and of Oren and Spedicato. With respect to the CPU time metric, CG-DESCENT is the top performer. Comparisons with L-BFGS show that our algorithm is more efficient.
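The abstract only outlines the update. As a rough illustration, the sketch below computes, in matrix-free form, the search direction d = -H g for a two-parameter self-scaling memoryless BFGS matrix of the generic form

    H = delta * (I - (s y^T + y s^T) / (y^T s)) + gamma * (s s^T) / (y^T s),

where delta scales the first two terms and gamma the third, as described above. This assumed form and the placeholder parameter choices in the code are illustrative only; the paper's actual eigenvalue-clustering formula for the first parameter and preconditioning/conjugacy formula for the second are not reproduced here.

import numpy as np

def ssml_bfgs_direction(g, s, y, delta=None, gamma=1.0):
    # Matrix-free evaluation of d = -H g for the assumed two-parameter
    # memoryless BFGS matrix
    #   H = delta*(I - (s y^T + y s^T)/(y^T s)) + gamma*(s s^T)/(y^T s).
    # The parameter defaults are illustrative placeholders, not the
    # paper's eigenvalue-clustering / conjugacy-based formulas.
    ys = y @ s                  # curvature y_k^T s_k; > 0 under a Wolfe line search
    if delta is None:
        delta = ys / (y @ y)    # Oren-Luenberger-type spectral scaling (placeholder)
    gs, gy = g @ s, g @ y
    # H g = delta*g - delta*((g^T y)*s + (g^T s)*y)/ys + gamma*(g^T s)*s/ys
    Hg = delta * g - (delta / ys) * (gy * s + gs * y) + (gamma / ys) * gs * s
    return -Hg

In a complete method, s = x_{k+1} - x_k and y = g_{k+1} - g_k are generated by a step satisfying the Wolfe conditions, which guarantees y^T s > 0 and hence keeps the update well defined.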



References

  • Al-Baali M (1998a) Global and superlinear convergence of a class of self-scaling methods with inexact line search. Comput Optim Appl 9:191–203

  • Al-Baali M (1998b) Numerical experience with a class of self-scaling quasi-Newton algorithms. J Optim Theory Appl 96:533–553

  • Andrei N (2009) Accelerated conjugate gradient algorithm with finite difference Hessian/vector product approximation for unconstrained optimization. J Comput Appl Math 230:570–582

  • Andrei N (2017) Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimization. Optim Methods Softw 32:534–551

  • Andrei N (2018a) An adaptive scaled BFGS method for unconstrained optimization. Numer Algorithms 77(2):413–432

  • Andrei N (2018b) A set of unconstrained optimization test problems. Technical Report No. 4/May 2, 2018. Research Institute for Informatics, Bucharest, pp 1–10

  • Bongartz I, Conn AR, Gould NIM, Toint PL (1995) CUTE: constrained and unconstrained testing environments. ACM TOMS 21:123–160

  • Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91:201–213

  • Hager WW, Zhang H (2005) A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J Optim 16:170–192

  • Huang HY (1970) Unified approach to quadratically convergent algorithms for function minimization. J Optim Theory Appl 6:405–423

  • Liu DC, Nocedal J (1989) On the limited-memory BFGS method for large scale optimization. Math Program 45:503–528

  • Nocedal J (1980) Updating quasi-Newton matrices with limited storage. Math Comput 35:773–782

  • Nocedal J (1992) Theory of algorithms for unconstrained optimization. Acta Numer 1:199–242

  • Nocedal J, Yuan YX (1993) Analysis of self-scaling quasi-Newton method. Math Program 61:19–37

  • Oren SS, Luenberger DG (1974) Self-scaling variable metric (SSVM) algorithms, part I: criteria and sufficient conditions for scaling a class of algorithms. Manag Sci 20:845–862

  • Oren SS, Spedicato E (1976) Optimal conditioning of self-scaling variable metric algorithms. Math Program 10:70–90

  • Perry JM (1977) A class of conjugate gradient algorithms with a two-step variable-metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Sciences, Northwestern University, Evanston

  • Shanno DF (1978) Conjugate gradient methods with inexact searches. Math Oper Res 3:244–256

  • Shanno DF, Phua KH (1978) Matrix conditioning and nonlinear optimization. Math Program 14:149–160

  • Sun W, Yuan YX (2006) Optimization theory and methods. Nonlinear programming. Springer Science + Business Media, New York

  • Wolfe P (1969) Convergence conditions for ascent methods. SIAM Rev 11:226–235

  • Wolfe P (1971) Convergence conditions for ascent methods. II: Some corrections. SIAM Rev 13:185–188

Author information

Correspondence to Neculai Andrei.

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the author.

Additional information

Communicated by Orizon Pereira Ferreira.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Andrei, N. A double parameter self-scaling memoryless BFGS method for unconstrained optimization. Comp. Appl. Math. 39, 159 (2020). https://doi.org/10.1007/s40314-020-01157-z
