Solving \(\ell _0\)-penalized problems with simple constraints via the Frank–Wolfe reduced dimension method

  • Original Paper
Optimization Letters

Abstract

\(\ell _0\)-penalized problems arise in a number of applications in engineering, machine learning and statistics, and, in recent decades, the design of algorithms for these problems has attracted the interest of many researchers. In this paper, we are concerned with the definition of a first-order method for the solution of \(\ell _0\)-penalized problems with simple constraints. We use a reduced dimension Frank–Wolfe algorithm (Rinaldi, Optim Methods Softw 26, 2011) and show that the subproblem related to the computation of the Frank–Wolfe direction can be solved analytically, at least for some sets of simple constraints. This gives us a very easy-to-implement and quite general tool for dealing with \(\ell _0\)-penalized problems. The proposed method is then applied to the numerical solution of two practical optimization problems, namely, Sparse Principal Component Analysis and the Sparse Reconstruction of Noisy Signals. In both cases, the reported numerical results and comparisons with state-of-the-art solvers show the efficiency of the proposed method.
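Since the abstract only outlines the approach, the snippet below gives a minimal illustrative sketch of this family of methods, not the authors' reduced dimension algorithm: the \(\ell _0\) penalty is replaced by a smooth concave exponential surrogate of the kind used in the concave-programming literature cited below (Mangasarian [22]; Rinaldi [25]; Rinaldi et al. [26]), and a plain Frank–Wolfe (conditional gradient) iteration is run over a box, where the linearized subproblem is solved analytically component by component. The function name frank_wolfe_l0_box, the surrogate parameter alpha, and the classical diminishing step size are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def frank_wolfe_l0_box(grad_f, x0, lower, upper, lam=1.0, alpha=5.0,
                       max_iter=500, tol=1e-6):
    """Frank-Wolfe on F(x) = f(x) + lam * sum_i (1 - exp(-alpha*|x_i|)),
    a smooth concave surrogate of f(x) + lam*||x||_0, over the box lower <= x <= upper."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for k in range(max_iter):
        # Gradient of the smooth term plus the surrogate penalty
        # (differentiable away from 0; we take 0 as its value at x_i = 0).
        g = grad_f(x) + lam * alpha * np.exp(-alpha * np.abs(x)) * np.sign(x)
        # Linearized subproblem min_{s in [lower, upper]} g^T s:
        # over a box it is solved analytically, one component at a time.
        s = np.where(g > 0, lower, upper)
        d = s - x
        gap = -g @ d  # Frank-Wolfe gap; nonnegative, zero at a stationary point
        if gap <= tol:
            break
        x = x + (2.0 / (k + 2.0)) * d  # classical diminishing step size
    return x

# Toy usage: sparse recovery of a noisy signal via box-constrained least squares.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = frank_wolfe_l0_box(lambda x: A.T @ (A @ x - b),
                           np.zeros(100), -np.ones(100), np.ones(100), lam=0.1)
```

The same pattern applies to other simple feasible sets (for example, the \(\ell _1\)-ball or the unit simplex), where the linear subproblem also has a closed-form solution at a vertex; this analytic solvability is what makes Frank–Wolfe-type schemes so easy to implement for \(\ell _0\)-penalized problems.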

References

  1. Afonso, M., Bioucas-Dias, J., Figueiredo, M.: Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(9), 2345–2356 (2010)

  2. Afonso, M., Bioucas-Dias, J., Figueiredo, M.: An augmented Lagrangian based method for the constrained formulation of imaging inverse problems. IEEE Trans. Image Process. 20(3), 681–695 (2011)

  3. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(2), 183–202 (2009)

  4. Bienstock, D.: Computational study of a family of mixed-integer quadratic programming problems. Math. Program. 74, 121–140 (1996)

  5. Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)

  6. Cadima, J., Jolliffe, I.: Loadings and correlations in the interpretation of principal components. J. Appl. Stat. 22, 203–214 (1995)

  7. Candès, E., Romberg, J.: Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math. 6(2), 227–254 (2006)

  8. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  9. Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)

  10. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43, 129–159 (2001)

  11. D'Aspremont, A., Bach, F.R., El Ghaoui, L.: Optimal solutions for sparse principal component analysis. J. Mach. Learn. Res. 9, 1269–1294 (2008)

  12. D'Aspremont, A., El Ghaoui, L., Jordan, M.I., Lanckriet, G.R.G.: A direct formulation for sparse PCA using semidefinite programming. SIAM Rev. 49(3), 434–448 (2007)

  13. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479–2493 (2009)

  14. Frank, M., Wolfe, P.: An algorithm for quadratic programming. Nav. Res. Logist. Q. 3, 95–110 (1956)

  15. Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for l1-minimization: methodology and convergence. SIAM J. Optim. 19(3), 1107–1130 (2008)

  16. Hale, E.T., Yin, W., Zhang, Y.: http://www.caam.rice.edu/~optimization/l1/fpc/ (2012)

  17. Hein, M., Bühler, T.: An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In: Advances in Neural Information Processing Systems, pp. 847–855 (2010)

  18. Jolliffe, I.T., Trendafilov, N.T., Uddin, M.: A modified principal component technique based on the lasso. J. Comput. Graph. Stat. 12, 531–547 (2003)

  19. Journée, M., Nesterov, Y., Richtárik, P., Sepulchre, R.: Generalized power method for sparse principal component analysis. J. Mach. Learn. Res. 11, 517–553 (2010)

  20. Kim, S.-J., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D.: An interior-point method for large-scale l1-regularized least squares. IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007)

  21. Luss, R., Teboulle, M.: Conditional gradient algorithms for rank-one matrix approximations with a sparsity constraint. SIAM Rev. 55(1), 65–98 (2013)

  22. Mangasarian, O.L.: Machine learning via polyhedral concave minimization. In: Fischer, H., Riedmueller, B., Schaeffler, S. (eds.) Applied Mathematics and Parallel Computing: Festschrift for Klaus Ritter, pp. 175–188. Physica-Verlag, Germany (1996)

  23. Moghaddam, B., Weiss, Y., Avidan, S.: Generalized spectral bounds for sparse LDA. In: Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 641–648 (2006)

  24. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

  25. Rinaldi, F.: Concave programming for finding sparse solutions to problems with convex constraints. Optim. Methods Softw. 26(6), 971–992 (2011)

  26. Rinaldi, F., Schoen, F., Sciandrone, M.: Concave programming for minimizing the zero-norm over polyhedral sets. Comput. Optim. Appl. 46(3), 467–486 (2010)

  27. Riva, A., Carpentier, A.-S., Torrésani, B., Hénaut, A.: Comments on selected fundamental aspects of microarray analysis. Comput. Biol. Chem. 29(5), 319–336 (2005)

  28. Shen, H., Huang, J.Z.: Sparse principal component analysis via regularized low rank matrix approximation. J. Multivar. Anal. 99(6), 1015–1034 (2008)

  29. Sriperumbudur, B.K., Torres, D.A., Lanckriet, G.R.G.: Sparse eigen methods by DC programming. In: Proceedings of the 24th International Conference on Machine Learning, pp. 831–838 (2007)

  30. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996)

  31. Weston, J., Elisseeff, A., Schölkopf, B.: Use of the zero norm with linear models and kernel methods. J. Mach. Learn. Res. 3, 1439–1461 (2003)

  32. Zhang, Z., Zha, H., Simon, H.: Low rank approximations with sparse factors I: basic algorithms and error analysis. SIAM J. Matrix Anal. Appl. 23(3), 706–727 (2002)

  33. Zou, H., Hastie, T., Tibshirani, R.: Sparse principal component analysis. J. Comput. Graph. Stat. 15(2), 265–286 (2006)

Author information

Corresponding author

Correspondence to Francesco Rinaldi.

About this article

Cite this article

Liuzzi, G., Rinaldi, F. Solving \(\ell _0\)-penalized problems with simple constraints via the Frank–Wolfe reduced dimension method. Optim Lett 9, 57–74 (2015). https://doi.org/10.1007/s11590-014-0757-3
