An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost

  • Original Paper
  • Published in Optimization Letters

Abstract

We consider gradient algorithms for minimizing a quadratic function in \({\mathbb{R}^n}\) with large n. We suggest a particular sequence of step-sizes and demonstrate that the resulting gradient algorithm has a convergence rate comparable with that of the Conjugate Gradient method and other methods based on Krylov spaces. When the matrix is large and sparse, the proposed algorithm can be more efficient than Conjugate Gradients in terms of computational cost, since k iterations require the computation of only O(log k) inner products.
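
For intuition, the sketch below (Python; not from the paper) shows the generic shape of such a method: a gradient iteration whose step sizes are fixed in advance, so each iteration costs one matrix-vector product and no inner products (classical steepest descent, by contrast, needs two inner products per iteration to compute its step size). The Chebyshev step sizes used here are only a well-known stand-in chosen for this illustration; the asymptotically optimal step-size sequence proposed by the authors is defined in the article itself, and all names in the sketch are ours.

    import numpy as np

    def quad_gradient_method(A, b, x0, step_sizes):
        # Gradient iteration x_{k+1} = x_k - gamma_k * (A x_k - b) for
        # f(x) = 0.5 x^T A x - b^T x. Since the gamma_k are prescribed in
        # advance, the loop needs no inner products, only A @ x per step.
        x = np.asarray(x0, dtype=float)
        for gamma in step_sizes:
            g = A @ x - b            # gradient of f at the current iterate
            x = x - gamma * g
        return x

    def chebyshev_step_sizes(m, M, k):
        # Reciprocals of the k Chebyshev nodes on [m, M] (bounds on the
        # spectrum of A): the classical Richardson/Chebyshev choice, used
        # here only as a stand-in for the paper's step-size sequence.
        # Note: for large k the ordering of these steps matters numerically;
        # here the nodes descend, so the small steps come first.
        i = np.arange(1, k + 1)
        nodes = 0.5 * (M + m) + 0.5 * (M - m) * np.cos((2 * i - 1) * np.pi / (2 * k))
        return 1.0 / nodes

    # Illustrative test problem: a diagonal SPD matrix with spectrum in [1, 100].
    rng = np.random.default_rng(0)
    n = 500
    A = np.diag(rng.uniform(1.0, 100.0, size=n))
    b = rng.standard_normal(n)
    x = quad_gradient_method(A, b, np.zeros(n), chebyshev_step_sizes(1.0, 100.0, 30))
    print(np.linalg.norm(A @ x - b))  # residual after 30 fixed-step iterations

Because the steps do not depend on the iterates, inner products arise only if one chooses to monitor progress; checking the residual norm at geometrically spaced iterations 1, 2, 4, 8, ... would, for instance, cost O(log k) inner products over k iterations.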



Author information


Corresponding author

Correspondence to Anatoly Zhigljavsky.

Additional information

The work of E. Bukina was partially supported by the EU through a Marie Curie Fellowship (EST-SIGNAL programme: http://est-signal.i3s.unice.fr) under contract no. MEST-CT-2005-021175. Part of this work was carried out while the first two authors were visiting the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK; the support of the INI and of CNRS is gratefully acknowledged.

About this article

Cite this article

Zhigljavsky, A., Pronzato, L. & Bukina, E. An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost. Optim Lett 7, 1047–1059 (2013). https://doi.org/10.1007/s11590-012-0491-7
