Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization

Published in: Journal of Optimization Theory and Applications

Abstract

In this paper, a new hybrid conjugate gradient algorithm is proposed and analyzed. The parameter β_k is computed as a convex combination of the Polak–Ribière–Polyak and Dai–Yuan conjugate gradient parameters, i.e. β_k^N = (1 − θ_k) β_k^PRP + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the conjugacy condition is satisfied, independently of the line search. The line search uses the standard Wolfe conditions. The algorithm generates descent directions and, when the iterates jam, the directions satisfy the sufficient descent condition. Numerical comparisons on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this hybrid computational scheme outperforms the known hybrid conjugate gradient algorithms.
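The convex combination described in the abstract can be sketched in a few lines. This is not the published code: the function name `hybrid_beta`, the fallback when β^DY = β^PRP, and the clipping of θ_k to [0, 1] are illustrative assumptions; the paper's actual safeguards may differ. With search direction d_{k+1} = −g_{k+1} + β d_k, imposing the conjugacy condition d_{k+1}ᵀy_k = 0 forces β = (g_{k+1}ᵀy_k)/(d_kᵀy_k), and solving for θ_k in the convex combination gives the value used below.

```python
import numpy as np

def hybrid_beta(g_new, g_old, d):
    """Sketch of a convex-combination CG parameter (illustrative, not the
    published algorithm): beta^N = (1 - theta) * beta^PRP + theta * beta^DY,
    with theta chosen so the conjugacy condition d_new^T y = 0 holds.
    Assumes d^T y != 0, which the Wolfe line search guarantees."""
    y = g_new - g_old
    dy = float(d @ y)
    beta_prp = float(g_new @ y) / float(g_old @ g_old)   # Polak-Ribiere-Polyak
    beta_dy = float(g_new @ g_new) / dy                  # Dai-Yuan
    # Conjugacy condition: with d_new = -g_new + beta * d,
    # d_new^T y = 0  =>  beta = (g_new^T y) / (d^T y)  (the Hestenes-Stiefel value).
    target = float(g_new @ y) / dy
    if beta_dy != beta_prp:
        theta = (target - beta_prp) / (beta_dy - beta_prp)
    else:
        theta = 0.0
    # Assumed safeguard: clip so beta^N stays a true convex combination,
    # falling back to pure PRP (theta=0) or pure DY (theta=1) at the ends.
    theta = min(max(theta, 0.0), 1.0)
    return (1.0 - theta) * beta_prp + theta * beta_dy
```

By construction the returned value always lies between β^PRP and β^DY, so the direction inherits properties of both ingredient methods.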


References

  1. Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2, 35–58 (2006)

  2. Fletcher, R., Reeves, C.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)

  3. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)

  4. Fletcher, R.: Unconstrained Optimization. Practical Methods of Optimization, vol. 1. Wiley, New York (1987)

  5. Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inf. Rech. Opér. 3e Année 16, 35–43 (1969)

  6. Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)

  7. Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)

  8. Liu, Y., Storey, C.: Efficient generalized conjugate gradient algorithms, Part 1: Theory. J. Optim. Theory Appl. 69, 129–137 (1991)

  9. Touati-Ahmed, D., Storey, C.: Efficient hybrid conjugate gradient techniques. J. Optim. Theory Appl. 64, 379–397 (1990)

  10. Hu, Y.F., Storey, C.: Global convergence result for conjugate gradient methods. J. Optim. Theory Appl. 71, 399–405 (1991)

  11. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)

  12. Dai, Y.H., Yuan, Y.: An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 103, 33–47 (2001)

  13. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  14. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)

  15. Andrei, N.: An unconstrained optimization test functions collection. Adv. Model. Optim. 10, 147–161 (2008)

  16. Powell, M.J.D.: Restart procedures for the conjugate gradient method. Math. Program. 12, 241–254 (1977)

  17. Yuan, Y.: Analysis on the conjugate gradient method. Optim. Methods Softw. 2, 19–29 (1993)

  18. Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Numerical Analysis, Dundee, 1983. Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)

  19. Dai, Y.H.: Analysis of conjugate gradient methods. Ph.D. Thesis, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences (1997)

  20. Dai, Y.H.: New properties of a nonlinear conjugate gradient method. Numer. Math. 89, 83–98 (2001)

  21. Dai, Y.H., Liao, L.Z., Li, D.: On restart procedures for the conjugate gradient method. Numer. Algorithms 35, 249–260 (2004)

  22. Shanno, D.F., Phua, K.H.: Algorithm 500: Minimization of unconstrained multivariate functions. ACM Trans. Math. Softw. 2, 87–94 (1976)

  23. Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43, 117–128 (2001)

  24. Andrei, N.: Scaled conjugate gradient algorithms for unconstrained optimization. Comput. Optim. Appl. 38, 401–416 (2007)

  25. Andrei, N.: Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Optim. Methods Softw. 22, 561–571 (2007)

  26. Andrei, N.: A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Appl. Math. Lett. 20, 645–650 (2007)

  27. Kiwiel, K.C., Murty, K.: Convergence of the steepest descent method for minimizing quasiconvex functions. J. Optim. Theory Appl. 89(1), 221–226 (1996)

  28. Dai, Y.H., Han, J.Y., Liu, G.H., Sun, D.F., Yin, X., Yuan, Y.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10, 348–358 (1999)

  29. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)

  30. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)

Author information

Correspondence to N. Andrei.

Additional information

Communicated by F.A. Potra.

N. Andrei is a member of the Academy of Romanian Scientists, Splaiul Independenţei nr. 54, Sector 5, Bucharest, Romania.


Cite this article

Andrei, N. Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization. J Optim Theory Appl 141, 249–264 (2009). https://doi.org/10.1007/s10957-008-9505-0
