Abstract
We present a gradient descent algorithm with a line search procedure for solving unconstrained optimization problems. It is obtained by applying the Picard-Mann hybrid iterative process to the accelerated gradient descent SM method described in Stanimirović and Miladinović (Numer. Algor. 54, 503–520, 2010). By merging the features of both models, we show that the new accelerated gradient descent method converges linearly and faster than the original SM method, which is confirmed by the reported numerical results. Three main performance measures are tested: number of iterations, CPU time, and number of function evaluations. The efficiency of the proposed iteration is examined for several values of the correction parameter introduced in Khan (2013).
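To give a concrete picture of the construction described above, the following is a minimal sketch of Khan's (2013) Picard-Mann hybrid process, y_k = (1 − α_k) x_k + α_k T(x_k), x_{k+1} = T(y_k), with a backtracking gradient step standing in for the operator T. This is an illustration of the general idea, not the paper's exact hybrid SM update: the SM acceleration parameter γ_k is omitted, and the function names (backtracking, picard_mann_gd), the fixed correction parameter alpha, and the quadratic test function are assumptions made for the example.

```python
import numpy as np

def backtracking(f, x, g, beta=0.8, sigma=1e-4):
    """Armijo backtracking: shrink t until the sufficient-decrease condition holds."""
    t = 1.0
    fx = f(x)
    while f(x - t * g) > fx - sigma * t * g.dot(g):
        t *= beta
    return t

def T(f, grad, x):
    """One gradient step with Armijo line search, used here as the operator T."""
    g = grad(x)
    t = backtracking(f, x, g)
    return x - t * g

def picard_mann_gd(f, grad, x0, alpha=0.5, tol=1e-6, max_iter=1000):
    """Picard-Mann hybrid iteration (Khan 2013) built on gradient steps:
       y_k = (1 - alpha) * x_k + alpha * T(x_k),  x_{k+1} = T(y_k)."""
    x = np.asarray(x0, dtype=float)
    k = 0
    for k in range(max_iter):
        if np.linalg.norm(grad(x)) < tol:
            break
        y = (1.0 - alpha) * x + alpha * T(f, grad, x)  # Mann averaging step
        x = T(f, grad, y)                              # Picard step
    return x, k

# Illustrative run on a simple strictly convex quadratic, minimizer at (-1, 0)
f = lambda x: 0.5 * x @ x + x[0]
grad = lambda x: x + np.array([1.0, 0.0])
x_star, iters = picard_mann_gd(f, grad, np.array([3.0, -2.0]))
```

Here alpha plays the role of the correction parameter whose influence the numerical experiments examine; alpha = 0 would reduce the scheme to the plain Picard (pure gradient) iteration.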
References
Andrei, N.: An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algor. 42, 63–73 (2006)
Andrei, N.: An unconstrained optimization test functions collection. http://www.ici.ro/camo/journal/vol10/v10a10.pdf
Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pac. J. Math. 16, 1–3 (1966)
Bauschke, H., Combettes, P. L.: Convex analysis and monotone operator theory in Hilbert spaces, p 144. Springer, Berlin (2011)
Clarkson, J.A.: Uniformly convex spaces. Trans. Am. Math. Soc. 40, 396–414 (1936)
Dai, Y.H., Liao, L.Z.: R-linear convergence of the Barzilai and Borwein gradient method. IMA J. Numer. Anal. 22, 1–10 (2002)
Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)
Goldstein, A.A.: On steepest descent. SIAM J. Control 3, 147–151 (1965)
Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)
Khan, S.H.: A Picard-Mann hybrid iterative process. Fixed Point Theory Appl. 2013, Article 69 (2013)
Lemaréchal, C.: A view of line searches. In: Auslender, A., Oettli, W., Stoer, J. (eds.) Optimization and Optimal Control, pp 59–78. Springer, Berlin (1981)
Luenberger, D.G., Ye, Y.: Linear and nonlinear programming. Springer Science+Business Media LLC, New York (2008)
Mann, W.R.: Mean value methods in iterations. Proc. Am. Math. Soc. 4, 506–510 (1953)
Molina, B., Raydan, M.: Preconditioned Barzilai–Borwein method for the numerical solution of partial differential equations. Numer. Algor. 13, 45–60 (1996)
Moré, J.J., Thuente, D.J.: On line search algorithms with guaranteed sufficient decrease, Mathematics and Computer Science Division Preprint MCS-P153-0590. Argonne National Laboratory, Argonne (1990)
Nesterov, Ye.E.: A method for solving the convex programming problem with convergence rate \(O(1/k^2)\). Dokl. Akad. Nauk SSSR 269(3), 543–547 (1983)
Ortega, J.M., Rheinboldt, W.C.: Iterative solution of nonlinear equations in several variables. Academic Press, New York (1970)
Petrović M.J.: An accelerated double step size method in unconstrained optimization. Appl. Math. Comput. 250, 309–319 (2015)
Petrović, M.J., Stanimirović, P.S.: Accelerated double direction method for solving unconstrained optimization problems. Math. Probl. Eng. 2014, Article ID 965104, 8 pp (2014)
Stanimirović, P.S., Milovanović, G.V., Petrović, M.J.: A transformation of accelerated double step size method for unconstrained optimization. Math. Probl. Eng. 2015, Article ID 283679, 8 pp (2015)
Potra, F.A., Shi, Y.: Efficient line search algorithm for unconstrained optimization. J. Optim. Theory Appl. 85, 677–704 (1995)
Powell, M.J.D.: Some global convergence properties of a variable metric algorithm for minimization without exact line searches. SIAM-AMS Proc. Phila. 9, 53–72 (1976)
Picard, E.: Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 6, 145–210 (1890)
Rockafellar, R.T.: Convex analysis. Princeton University Press, Princeton (1970)
Shi, Z.-J.: Convergence of line search methods for unconstrained optimization. Appl. Math. Comput. 157, 393–405 (2004)
Stanimirović, P.S., Miladinović, M.B.: Accelerated gradient descent methods with line search. Numer. Algor. 54, 503–520 (2010)
Sun, W., Yuan, Y.-X.: Optimization theory and methods: nonlinear programming. Springer, Berlin (2006)
Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)
Zălinescu, C.: Convex analysis in general vector spaces. World Scientific, Singapore (2002)
Cite this article
Petrović, M., Rakočević, V., Kontrec, N. et al. Hybridization of accelerated gradient descent method. Numer Algor 79, 769–786 (2018). https://doi.org/10.1007/s11075-017-0460-4