

Delayed Weighted Gradient Method with simultaneous step-sizes for strongly convex optimization


Abstract

The Delayed Weighted Gradient Method (DWGM) is a two-step gradient algorithm that is efficient for the minimization of large-scale strictly convex quadratic functions. It has orthogonality properties that make it competitive with the Conjugate Gradient (CG) method. Both methods compute two step-sizes in sequence (CG by minimizing the objective function, DWGM by minimizing the gradient norm) along two search directions built from first-order information at the current and previous iterations. The objective of this work is to accelerate the recently developed extension of DWGM to nonquadratic strongly convex minimization problems. Our idea is to define both step-sizes of DWGM through a single two-dimensional convex quadratic optimization problem, computing them simultaneously. Convergence of the resulting algorithm is analyzed, and comparative numerical experiments illustrate the effectiveness of our approach.
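To make the simultaneous step-size idea concrete, consider the strictly convex quadratic f(x) = (1/2) x'Ax - b'x with A symmetric positive definite, where the gradient g(x) = Ax - b is linear in x. After a DWGM-style update x_next = (1 - beta) x_prev + beta (x - alpha g), the new gradient equals g_prev + s (g - g_prev) - t A g with (s, t) = (beta, alpha*beta), so minimizing its norm over both step-sizes at once is a two-variable linear least-squares problem, i.e. a two-dimensional convex quadratic subproblem. The NumPy sketch below assembles this from the standard DWGM update formulas; the reparametrization, function name, and variable names are our own illustrative assumptions rather than the authors' implementation, and the paper's algorithm additionally treats general nonquadratic strongly convex objectives.

import numpy as np

# Minimal sketch (assumed formulation, not the authors' code): a DWGM-style
# iteration for min 0.5*x'Ax - b'x, A symmetric positive definite, computing
# the two step-sizes simultaneously via a 2-D least-squares subproblem.
def dwgm_simultaneous(A, b, x0, tol=1e-8, max_iter=1000):
    x_prev = np.asarray(x0, dtype=float)
    g_prev = A @ x_prev - b
    if np.linalg.norm(g_prev) <= tol:
        return x_prev
    # Warm-up step: with no earlier gradient available yet, take the alpha
    # that minimizes ||g - alpha*A*g|| (the usual first DWGM step).
    w = A @ g_prev
    alpha = (g_prev @ w) / (w @ w)
    x = x_prev - alpha * g_prev
    g = g_prev - alpha * w
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        w = A @ g  # one matrix-vector product per iteration, as in CG
        # The next gradient g_next = (1-beta)*g_prev + beta*(g - alpha*w) is
        # linear in (s, t) = (beta, alpha*beta):
        #     g_next = g_prev + s*(g - g_prev) - t*w,
        # so minimizing ||g_next||^2 over both step-sizes at once is a
        # two-variable linear least-squares problem.
        M = np.column_stack((g - g_prev, -w))
        s, t = np.linalg.lstsq(M, -g_prev, rcond=None)[0]
        x_next = x_prev + s * (x - x_prev) - t * g  # = (1-beta)*x_prev + beta*(x - alpha*g)
        g_next = g_prev + s * (g - g_prev) - t * w
        x_prev, g_prev = x, g
        x, g = x_next, g_next
    return x

# Quick self-check on a random well-conditioned SPD system.
rng = np.random.default_rng(0)
Q = rng.standard_normal((100, 100))
A = Q @ Q.T + 100.0 * np.eye(100)
b = rng.standard_normal(100)
x = dwgm_simultaneous(A, b, np.zeros(100))
print(np.linalg.norm(A @ x - b))  # expected to be at or below tol

Note that the least-squares solution can only improve on the beta = 1 choice (a pure gradient-norm-minimizing step), so the gradient norm decreases monotonically on quadratics; the paper's contribution is the convergence analysis of this simultaneous computation in the general strongly convex setting.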






      Published In

Computational Optimization and Applications, Volume 89, Issue 1
      Sep 2024
      274 pages

      Publisher

      Kluwer Academic Publishers

      United States

      Publication History

      Published: 31 May 2024
      Accepted: 20 May 2024
      Received: 14 March 2023

      Author Tags

      1. Gradient methods
      2. Conjugate gradient methods
      3. Strongly convex functions
      4. Large-scale optimization

      Qualifiers

      • Research-article
