Abstract
In this paper a new hybrid conjugate gradient algorithm is proposed and analyzed. The parameter $\beta_k$ is computed as a convex combination of the Polak-Ribière-Polyak and Dai-Yuan conjugate gradient parameters, i.e. $\beta_k^N = (1-\theta_k)\beta_k^{PRP} + \theta_k\beta_k^{DY}$. The parameter $\theta_k$ in the convex combination is computed so that the conjugacy condition is satisfied, independently of the line search. The line search uses the standard Wolfe conditions. The algorithm generates descent directions, and when the iterates jam the directions satisfy the sufficient descent condition. Numerical comparisons with other conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this hybrid computational scheme outperforms the known hybrid conjugate gradient algorithms.
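To make the hybrid parameter concrete, here is a minimal sketch in Python. It is an illustration, not the paper's implementation: it assumes the standard direction update $d_{k+1} = -g_{k+1} + \beta_k^N d_k$ and the textbook PRP and DY formulas, solves the conjugacy condition $y_k^T d_{k+1} = 0$ for $\theta_k$ under that assumption, and clamps $\theta_k$ to $[0,1]$ as a safeguard to keep the combination convex. The helper name hybrid_beta and the fallback rules are hypothetical; the paper's own safeguards and Wolfe line-search machinery are not reproduced.

    import numpy as np

    def hybrid_beta(g_new, g_old, d, eps=1e-12):
        """Convex-combination beta for the direction update
        d_new = -g_new + beta * d  (a sketch, not the paper's code)."""
        y = g_new - g_old                          # gradient difference y_k
        beta_prp = g_new.dot(y) / max(g_old.dot(g_old), eps)   # PRP parameter
        yd = y.dot(d)
        if abs(yd) < eps:                          # degenerate curvature: fall back to PRP
            return beta_prp
        beta_dy = g_new.dot(g_new) / yd            # DY parameter
        # Solve y^T(-g_new + ((1-theta)*beta_prp + theta*beta_dy)*d) = 0 for theta.
        denom = (beta_dy - beta_prp) * yd
        if abs(denom) < eps:                       # beta_PRP == beta_DY: any theta works
            return beta_prp
        theta = (g_new.dot(y) - beta_prp * yd) / denom
        theta = min(max(theta, 0.0), 1.0)          # clamp: keep the combination convex
        return (1.0 - theta) * beta_prp + theta * beta_dy

    # Example use with dummy gradients and a previous direction:
    g_old = np.array([1.0, -2.0])
    g_new = np.array([0.5, 0.3])
    d = np.array([-1.0, 2.0])
    beta = hybrid_beta(g_new, g_old, d)
    d_new = -g_new + beta * d                      # next search direction

When $\theta_k$ lands strictly inside $(0,1)$, the conjugacy condition holds exactly for the assumed update; the clamped endpoints reduce the step to pure PRP or pure DY.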
References
Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2, 35–58 (2006)
Fletcher, R., Reeves, C.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)
Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)
Fletcher, R.: Unconstrained Optimization. Practical Methods of Optimization, vol. 1. Wiley, New York (1987)
Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inf. Rech. Oper. 3e Année 16, 35–43 (1969)
Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)
Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)
Liu, Y., Storey, C.: Efficient generalized conjugate gradient algorithms, Part 1: Theory. J. Optim. Theory Appl. 69, 129–137 (1991)
Touati-Ahmed, D., Storey, C.: Efficient hybrid conjugate gradient techniques. J. Optim. Theory Appl. 64, 379–397 (1990)
Hu, Y.F., Storey, C.: Global convergence result for conjugate gradient methods. J. Optim. Theory Appl. 71, 399–405 (1991)
Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)
Dai, Y.H., Yuan, Y.: An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 103, 33–47 (2001)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
Andrei, N.: An unconstrained optimization test functions collection. Adv. Model. Optim. 10, 147–161 (2008)
Powell, M.J.D.: Restart procedures for the conjugate gradient method. Math. Program. 12, 241–254 (1977)
Yuan, Y.: Analysis on the conjugate gradient method. Optim. Methods Softw. 2, 19–29 (1993)
Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Numerical Analysis, Dundee, 1983. Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)
Dai, Y.H.: Analysis of conjugate gradient methods. Ph.D. Thesis, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences (1997)
Dai, Y.H.: New properties of a nonlinear conjugate gradient method. Numer. Math. 89, 83–98 (2001)
Dai, Y.H., Liao, L.Z., Li, D.: On restart procedures for the conjugate gradient method. Numer. Algorithms 35, 249–260 (2004)
Shanno, D.F., Phua, K.H.: Algorithm 500: Minimization of unconstrained multivariate functions. ACM Trans. Math. Softw. 2, 87–94 (1976)
Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43, 117–128 (2001)
Andrei, N.: Scaled conjugate gradient algorithms for unconstrained optimization. Comput. Optim. Appl. 38, 401–416 (2007)
Andrei, N.: Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Optim. Methods Softw. 22, 561–571 (2007)
Andrei, N.: A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Appl. Math. Lett. 20, 645–650 (2007)
Kiwiel, K.C., Murty, K.: Convergence of the steepest descent method for minimizing quasiconvex functions. J. Optim. Theory Appl. 89(1), 221–226 (1996)
Dai, Y.H., Han, J.Y., Liu, G.H., Sun, D.F., Yin, X., Yuan, Y.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10, 348–358 (1999)
Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)
Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
Additional information
Communicated by F.A. Potra.
N. Andrei is a member of the Academy of Romanian Scientists, Splaiul Independenţei nr. 54, Sector 5, Bucharest, Romania.
Cite this article
Andrei, N. Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization. J Optim Theory Appl 141, 249–264 (2009). https://doi.org/10.1007/s10957-008-9505-0