A new generalized shrinkage conjugate gradient method for sparse recovery

Published in Calcolo 56, 1 (2019).

Abstract

In this paper, a new procedure, called the generalized shrinkage conjugate gradient (GSCG) method, is presented to solve the \(\ell _1\)-regularized convex minimization problem. GSCG introduces a new descent condition. When this condition holds, an efficient descent direction is obtained by combining a generalized form of the conjugate gradient direction with the ISTA descent direction; otherwise, ISTA is improved by a new step size for the shrinkage operator. The global convergence of GSCG is established under standard assumptions, along with a sublinear (R-linear) convergence rate in the convex (strongly convex) case. In the numerical experiments, GSCG is evaluated in Matlab on compressed sensing and image deblurring problems, using randomly generated test problems with dimensions \(n\in \{2^{10},\ldots ,2^{17}\}\) and several test images, respectively. These results show that GSCG is efficient and robust for such problems, in terms of both the speed and the quality of the sparse reconstruction, in comparison with several state-of-the-art algorithms.
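
A standard instance of the \(\ell _1\)-regularized convex model in compressed sensing is \(\min _x \frac{1}{2}\Vert Ax-b\Vert _2^2+\mu \Vert x\Vert _1\). The following is a minimal Matlab sketch of the two generic building blocks the abstract refers to, the shrinkage (soft-thresholding) operator and a single ISTA step; it is not the authors' GSCG implementation, and the names shrink, ista_step, mu, and t are illustrative assumptions.

    % Soft-thresholding (shrinkage) operator: the proximal map of r*||.||_1,
    % applied componentwise: shrink(z, r) = sign(z) .* max(|z| - r, 0).
    function x = shrink(z, r)
      x = sign(z) .* max(abs(z) - r, 0);
    end

    % One ISTA step for min_x 0.5*||A*x - b||_2^2 + mu*||x||_1,
    % with step size 0 < t <= 1/norm(A)^2 to ensure convergence.
    function x_next = ista_step(A, b, x, mu, t)
      grad = A' * (A * x - b);               % gradient of the smooth part
      x_next = shrink(x - t * grad, t * mu); % forward step, then shrinkage
    end

As described in the abstract, GSCG either combines a generalized conjugate gradient direction with this ISTA direction (when its descent condition holds) or falls back to a shrinkage step with a modified step size.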



Acknowledgements

We would like to thank the High Performance Computing (HPC) center, a branch of the Institute for Research in Fundamental Physics and Mathematics (IPM), for giving us access to the HPC cluster on which the numerical results were computed. The third author acknowledges the financial support of the Doctoral Program "Vienna Graduate School on Computational Optimization", funded by the Austrian Science Foundation under Project No. W1260-N35.

Author information


Corresponding author

Correspondence to Hamid Esmaeili.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Esmaeili, H., Shabani, S. & Kimiaei, M. A new generalized shrinkage conjugate gradient method for sparse recovery. Calcolo 56, 1 (2019). https://doi.org/10.1007/s10092-018-0296-x

