Abstract
We consider gradient algorithms for minimizing a quadratic function in \({\mathbb{R}^n}\) with large n. We suggest a particular sequence of step-sizes and demonstrate that the resulting gradient algorithm has a convergence rate comparable with that of the Conjugate Gradient method and other methods based on Krylov subspaces. When the matrix is large and sparse, the proposed algorithm can be more efficient than the Conjugate Gradient algorithm in terms of computational cost, as k iterations of the proposed algorithm require the computation of only O(log k) inner products.
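The iteration underlying any such method is the plain gradient step x_{k+1} = x_k − γ_k g_k with g_k = A x_k − b; the paper's contribution is the choice of the step-size sequence γ_k, which the abstract does not specify. The sketch below illustrates only the generic iteration, using a constant step 1/L (with L an upper bound on the largest eigenvalue of A) as a stand-in for the paper's sequence; the matrix, right-hand side, and step-size rule are all illustrative assumptions.

```python
import numpy as np

def gradient_quadratic(A, b, x0, steps):
    """Minimize f(x) = 0.5 x^T A x - b^T x by gradient iterations
    x_{k+1} = x_k - gamma_k * g_k, where g_k = A x_k - b.
    `steps` is the step-size sequence (the paper proposes a specific
    one; here any iterable of positive numbers can be passed)."""
    x = np.asarray(x0, dtype=float)
    for gamma in steps:
        g = A @ x - b          # gradient of the quadratic at x
        x = x - gamma * g      # gradient step
    return x

# Illustrative setup: a random symmetric positive-definite matrix and a
# constant step 1/L (NOT the paper's step-size sequence).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)            # symmetric positive definite
b = rng.standard_normal(5)
L = np.linalg.eigvalsh(A)[-1]          # largest eigenvalue of A
x = gradient_quadratic(A, b, np.zeros(5), [1.0 / L] * 500)
print(np.linalg.norm(A @ x - b))       # residual norm, near zero
```

With a constant step the convergence rate is governed by the condition number of A; the point of the paper is that a suitably chosen non-constant sequence γ_k attains a rate comparable to Conjugate Gradients while needing far fewer inner products.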
Additional information
The work of E. Bukina was partially supported by the EU through a Marie-Curie Fellowship (EST-SIGNAL program: http://est-signal.i3s.unice.fr) under contract no. MEST-CT-2005-021175. Part of this work was accomplished while the first two authors were visiting the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK; the support of the INI and of CNRS is gratefully acknowledged.
Cite this article
Zhigljavsky, A., Pronzato, L. & Bukina, E. An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost. Optim Lett 7, 1047–1059 (2013). https://doi.org/10.1007/s11590-012-0491-7