

Properties of the delayed weighted gradient method

  • Published in: Computational Optimization and Applications

Abstract

The delayed weighted gradient method, recently introduced in Oviedo-Leon (Comput Optim Appl 74:729–746, 2019), is a low-cost gradient-type method that exhibits surprisingly fast convergence behavior, competing favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that shed light on the practical behavior of the method, including its finite termination. We show that if the \(n\times n\) real Hessian matrix of the quadratic function has only \(p<n\) distinct eigenvalues, then the method terminates in \(p\) iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the future use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
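To make the scheme concrete, the following is a minimal NumPy sketch of the delayed weighted gradient iteration for a strictly convex quadratic, reconstructed from the description in Oviedo-Leon (2019). The function name `dwgm` and the bookkeeping of the steplength `alpha` and the delayed weight `beta` reflect our reading of that scheme, not the authors' code, and details may differ from the published algorithm.

```python
import numpy as np

def dwgm(A, b, x0, tol=1e-10, max_iter=1000):
    """Sketch of the delayed weighted gradient method (DWGM) for
    minimizing f(x) = 0.5*x'Ax - b'x, with A symmetric positive
    definite. Reconstructed from Oviedo-Leon (2019); the exact
    bookkeeping may differ from the published algorithm."""
    x_prev = x = np.asarray(x0, dtype=float)
    g_prev = g = A @ x - b                 # gradient of the quadratic
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            return x, k
        w = A @ g
        alpha = (g @ w) / (w @ w)          # minimal-gradient steplength
        y = x - alpha * g                  # ordinary gradient trial point
        r = g - alpha * w                  # gradient at y (by linearity)
        d = g_prev - r
        dd = d @ d
        beta = (g_prev @ d) / dd if dd > 0 else 1.0
        # "Delayed" weighted average of the trial point with the iterate
        # from two steps back; beta minimizes the new gradient norm.
        x, x_prev = beta * y + (1.0 - beta) * x_prev, x
        g, g_prev = beta * r + (1.0 - beta) * g_prev, g
    return x, max_iter

# Demo: an SPD matrix with only p = 3 distinct eigenvalues, the setting
# in which the paper proves termination in p iterations.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([1.0, 1.0, 2.0, 2.0, 3.0, 3.0]) @ Q.T
b = rng.standard_normal(6)
x, iters = dwgm(A, b, np.zeros(6), tol=1e-8)
```

Since `beta` minimizes the norm of the averaged gradient, the gradient norms are monotonically non-increasing, which is the behavior the orthogonality properties in the paper explain.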


[Fig. 1 and Fig. 2 appear in the full article.]


References

  1. Brezinski, C.: Variations on Richardson’s method and acceleration. Bull. Belg. Math. Soc. 3(5), 33–44 (1996)

  2. Birgin, E.G., Martínez, J.M., Raydan, M.: Spectral projected gradient methods: review and perspectives. J. Stat. Softw. 60(3), 1–21 (2014)

  3. Dai, Y.-H., Huang, Y., Liu, X.-W.: A family of spectral gradient methods for optimization. Comput. Optim. Appl. 74, 43–65 (2019)

  4. De Asmundis, R., Di Serafino, D., Riccio, F., Toraldo, G.: On spectral properties of steepest descent methods. IMA J. Numer. Anal. 33(4), 1416–1435 (2013)

  5. De Asmundis, R., Di Serafino, D., Hager, W.W., Toraldo, G., Zhang, H.: An efficient gradient method using the Yuan steplength. Comput. Optim. Appl. 59(3), 541–563 (2014)

  6. Di Serafino, D., Ruggiero, V., Toraldo, G., Zanni, L.: On the steplength selection in gradient methods for unconstrained optimization. Appl. Math. Comput. 318, 176–195 (2018)

  7. Friedlander, A., Martínez, J.M., Molina, B., Raydan, M.: Gradient method with retards and generalizations. SIAM J. Numer. Anal. 36(1), 275–289 (1999)

  8. Horn, R.A., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)

  9. Huang, Y., Dai, Y.-H., Liu, X.-W., Zhang, H.: Gradient methods exploiting spectral properties. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1727476

  10. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia (1995)

  11. Lamotte, J.-L., Molina, B., Raydan, M.: Smooth and adaptive gradient method with retards. Math. Comput. Model. 36(9–10), 1161–1168 (2002)

  12. Luengo, F., Raydan, M.: Gradient method with dynamical retards for large-scale optimization problems. Electron. Trans. Numer. Anal. (ETNA) 16, 186–193 (2003)

  13. Oviedo-Leon, H.F.: A delayed weighted gradient method for strictly convex quadratic minimization. Comput. Optim. Appl. 74, 729–746 (2019)

  14. Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. SIAM, Philadelphia (2010)

  15. Saad, Y.: Numerical Methods for Large Eigenvalue Problems, 2nd edn. SIAM, Philadelphia (2011)

  16. Stewart, G.W.: Matrix Algorithms, Volume II: Eigensystems. SIAM, Philadelphia (2001)


Acknowledgements

We would like to thank two anonymous referees for their comments and suggestions that helped us to improve the final version of this paper. Roberto Andreani was financially supported by FAPESP (Projects 2013/05475-7 and 2017/18308-2) and CNPq (Project 301888/2017-5). Marcos Raydan was financially supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through the Project UIDB/MAT/00297/2020 (Centro de Matemática e Aplicações). Roberto Andreani would like to thank the Operations Research Group at CMA (Centro de Matemática e Aplicações), FCT, NOVA University of Lisbon, Portugal, for the hospitality during a two-week visit in December 2019.

Author information

Corresponding author

Correspondence to Marcos Raydan.



About this article


Cite this article

Andreani, R., Raydan, M. Properties of the delayed weighted gradient method. Comput Optim Appl 78, 167–180 (2021). https://doi.org/10.1007/s10589-020-00232-9


