Abstract
The sparse group lasso, which promotes both group-wise and within-group sparsity, is generally considered difficult to solve because of its mixed-norm structure. In this paper, we propose efficient algorithms based on split Bregman iteration to solve sparse group lasso problems, including a synthesis prior form and an analysis prior form. These algorithms have low computational complexity and are suitable for large-scale problems. The convergence of the proposed algorithms is also discussed. Moreover, the proposed algorithms are applied to magnetic resonance imaging reconstruction. Numerical results demonstrate the effectiveness of the proposed algorithms.
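For concreteness, the analysis prior form discussed in the appendix below minimizes the objective
\[
\varPhi (\mathbf {x}) = \frac{1}{2} \Vert \mathbf {Ax-y}\Vert _2^2 + \lambda _1 \Vert \mathbf {Dx} \Vert _{2,1} + \lambda _2 \Vert \mathbf {Dx} \Vert _{1},
\]
where the mixed norm \(\Vert \cdot \Vert _{2,1}\) promotes group-wise sparsity and \(\Vert \cdot \Vert _{1}\) promotes within-group sparsity of \(\mathbf {Dx}\).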
References
Bilen, C., Wang, Y., & Selesnick, I. W. (2012). High-speed compressed sensing reconstruction in dynamic parallel MRI using augmented Lagrangian and parallel processing. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2(3), 370–379.
Bruckstein, A. M., Donoho, D. L., & Elad, M. (2009). From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1), 34–81.
Cai, J. F., Osher, S., & Shen, Z. (2009). Split Bregman methods and frame based image restoration. Multiscale Modeling and Simulation, 8(2), 337–369.
Candès, E. J., & Wakin, M. B. (2008). An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2), 21–30.
Chatterjee, S., Banerjee, A., & Ganguly, A. (2011). Sparse group lasso for regression on land climate variables. In 2011 IEEE 11th international conference on data mining workshops (ICDMW) (pp. 1–8). IEEE.
Deng, W., Yin, W., & Zhang, Y. (2011). Group sparse optimization by alternating direction method. TR11-06, Department of Computational and Applied Mathematics, Rice University.
Friedman, J., Hastie, T., & Tibshirani, R. (2010). A note on the group lasso and a sparse group lasso. arXiv preprint arXiv:1001.0736.
Goldstein, T., & Osher, S. (2009). The split Bregman method for \(\ell _1\)-regularized problems. SIAM Journal on Imaging Sciences, 2(2), 323–343.
Goldstein, T., Bresson, X., & Osher, S. (2010). Geometric applications of the split Bregman method: Segmentation and surface reconstruction. Journal of Scientific Computing, 45(1–3), 272–293.
He, Z., Xie, S., Ding, S., & Cichocki, A. (2007). Convolutive blind source separation in the frequency domain based on sparse representation. IEEE Transactions on Audio, Speech, and Language Processing, 15(5), 1551–1563.
He, Z., Cichocki, A., Zdunek, R., & Cao, J. (2008). CG-M-FOCUSS and its application to distributed compressed sensing. In Advances in neural networks-ISNN 2008 (pp. 237–245). Springer.
He, Z., Cichocki, A., Li, Y., Xie, S., & Sanei, S. (2009a). K-Hyperline clustering learning for sparse component analysis. Signal Processing, 89(6), 1011–1022.
He, Z., Cichocki, A., Zdunek, R., & Xie, S. (2009b). Improved FOCUSS method with conjugate gradient iterations. IEEE Transactions on Signal Processing, 57(1), 399–404.
Huang, J., & Zhang, T. (2010). The benefit of group sparsity. The Annals of Statistics, 38(4), 1978–2004.
Jiang, L., & Yin, H. (2012). Bregman iteration algorithm for sparse nonnegative matrix factorizations via alternating \(\ell _1\)-norm minimization. Multidimensional Systems and Signal Processing, 23(3), 315–328.
Lee, D. H., Hong, C. P., & Lee, M. W. (2013a). Sparse magnetic resonance imaging reconstruction using the Bregman iteration. Journal of the Korean Physical Society, 62(2), 328–332.
Lee, D. H., Hong, C. P., Lee, M. W., & Han, B. S. (2013b). Rapid 2D phase-contrast magnetic resonance angiography reconstruction algorithm via compressed sensing. Journal of the Korean Physical Society, 63(5), 1072–1076.
Liu, B., King, K., Steckner, M., Xie, J., Sheng, J., & Ying, L. (2009a). Regularized sensitivity encoding (SENSE) reconstruction using Bregman iterations. Magnetic Resonance in Medicine, 61(1), 145–152.
Liu, J., Ji, S., & Ye, J. (2009b). SLEP: Sparse learning with efficient projections. Arizona State University.
Lustig, M., Donoho, D., & Pauly, J. M. (2007). Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6), 1182–1195.
Lustig, M., Donoho, D. L., Santos, J. M., & Pauly, J. M. (2008). Compressed sensing MRI. IEEE Signal Processing Magazine, 25(2), 72–82.
Ma, S., Goldfarb, D., & Chen, L. (2011). Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 128(1–2), 321–353.
Meier, L., Van De Geer, S., & Bühlmann, P. (2008). The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1), 53–71.
Simon, N., Friedman, J., Hastie, T., & Tibshirani, R. (2013). A sparse-group lasso. Journal of Computational and Graphical Statistics, 22(2), 231–245.
Smith, D. S., Gore, J. C., Yankeelov, T. E., & Welch, E. B. (2012). Real-time compressive sensing MRI reconstruction using GPU computing and split Bregman methods. International Journal of Biomedical Imaging, 2012, 1–6.
Stone, S. S., Haldar, J. P., Tsao, S. C., Hwu, W. M., Sutton, B. P., Liang, Z. P., et al. (2008). Accelerating advanced MRI reconstructions on GPUs. Journal of Parallel and Distributed Computing, 68(10), 1307–1318.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267–288.
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
Xu, J., Feng, X., & Hao, Y. (2014). A coupled variational model for image denoising using a duality strategy and split Bregman. Multidimensional Systems and Signal Processing, 25(1), 83–94.
Ye, G. B., & Xie, X. (2011). Split Bregman method for large scale fused lasso. Computational Statistics and Data Analysis, 55(4), 1552–1569.
Yin, W., Osher, S., Goldfarb, D., & Darbon, J. (2008). Bregman iterative algorithms for \(\ell _1\)-minimization with applications to compressed sensing. SIAM Journal on Imaging Sciences, 1(1), 143–168.
Yuan, M., & Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1), 49–67.
Zhao, P., & Yu, B. (2006). On model selection consistency of lasso. The Journal of Machine Learning Research, 7, 2541–2563.
Zhu, X., Huang, Z., Cui, J., & Shen, H. (2013). Video-to-shot tag propagation by graph sparse group lasso. IEEE Transactions on Multimedia, 15(3), 633–646.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429.
Zou, J., Fu, Y., & Xie, S. (2012). A block fixed point continuation algorithm for block-sparse reconstruction. IEEE Signal Processing Letters, 19(6), 364–367.
Acknowledgments
The work is supported partially by the Guangdong and National Ministry of Education Project (Grant No. 2012B091100331), the International Cooperation Project of the Guangdong Natural Science Foundation (Grant No. 2009B050700020), the NSFC-Guangdong Union Project (Grant No. U0835003), and the NSFC (Grant Nos. 61104053, 61103122).
Appendix: Proof of Theorem 1
In this appendix, we present the proof of Theorem 1.
Proof
Since subproblems (9)–(11) are convex, the first-order optimality conditions of Algorithm 1 are given as follows:
where \(\mathbf {p}^{k+1} \in \partial \Vert \mathbf {u}^{k+1} \Vert _{2,1}\) and \(\mathbf {q}^{k+1} \in \partial \Vert \mathbf {v}^{k+1} \Vert _{1}\). The definitions of the subgradients \(\partial \Vert \cdot \Vert _{2,1}\) and \(\partial \Vert \cdot \Vert _{1}\) can be found in Simon et al. (2013).
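For the reader's convenience, we recall these standard characterizations (the group notation below is ours): with \(\mathbf {u}\) partitioned into groups \(\mathbf {u}_g\),
\[
\mathbf {p} \in \partial \Vert \mathbf {u} \Vert _{2,1} \iff \mathbf {p}_g = \frac{\mathbf {u}_g}{\Vert \mathbf {u}_g\Vert _2} \ \text {if}\ \mathbf {u}_g \ne \mathbf {0}, \qquad \Vert \mathbf {p}_g\Vert _2 \le 1 \ \text {if}\ \mathbf {u}_g = \mathbf {0},
\]
and, componentwise,
\[
\mathbf {q} \in \partial \Vert \mathbf {v} \Vert _{1} \iff q_i = \mathrm {sign}(v_i) \ \text {if}\ v_i \ne 0, \qquad q_i \in [-1,1] \ \text {if}\ v_i = 0.
\]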
Since, by assumption, (5) has at least one solution \(\mathbf {x}^*\), the first-order optimality condition implies that \(\mathbf {x}^*\) must satisfy
where \(\mathbf {p}^{*} \in \partial \Vert \mathbf {u}^{*} \Vert _{2,1}, \mathbf {q}^{*} \in \partial \Vert \mathbf {v}^{*} \Vert _{1}\) and \(\mathbf {u}^{*} = \mathbf {v}^{*} = \mathbf {Dx}^{*}\).
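Written out for the objective \(\varPhi \) defined later in this appendix (and assuming the data-fidelity term \(\frac{1}{2} \Vert \mathbf {Ax-y}\Vert _2^2\)), this optimality condition takes the standard form
\[
\mathbf {A}^T(\mathbf {Ax}^{*}-\mathbf {y}) + \lambda _1 \mathbf {D}^T \mathbf {p}^{*} + \lambda _2 \mathbf {D}^T \mathbf {q}^{*} = \mathbf {0},
\]
which may differ in arrangement from the paper's Eq. (25); we state it here only for readability.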
Let \(\mathbf {b}^* = \lambda _1\mathbf {p}^*/ \mu _1\) and \(\mathbf {c}^* = \lambda _2\mathbf {q}^*/ \mu _2\). Then (25) can be formulated as
Comparing (26) with (24), we see that \((\mathbf {x}^{*}, \mathbf {u}^{*}, \mathbf {v}^{*}, \mathbf {b}^{*}, \mathbf {c}^{*})\) is a fixed point of Algorithm 1. Cai et al. (2009) pointed out that if the unconstrained split Bregman iteration converges, it converges to a solution of (4).
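To see that the Bregman variables are indeed stationary, note that \(\mathbf {u}^{*} = \mathbf {v}^{*} = \mathbf {Dx}^{*}\); hence, for Bregman updates of the usual form \(\mathbf {b}^{k+1} = \mathbf {b}^{k} + \mathbf {Dx}^{k+1} - \mathbf {u}^{k+1}\) and \(\mathbf {c}^{k+1} = \mathbf {c}^{k} + \mathbf {Dx}^{k+1} - \mathbf {v}^{k+1}\) (the standard form in Goldstein and Osher 2009, assumed here),
\[
\mathbf {b}^{*} + (\mathbf {Dx}^{*} - \mathbf {u}^{*}) = \mathbf {b}^{*}, \qquad \mathbf {c}^{*} + (\mathbf {Dx}^{*} - \mathbf {v}^{*}) = \mathbf {c}^{*}.
\]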
Denote the errors by \(\mathbf {x}_e^{k} = \mathbf {x}^{k}-\mathbf {x}^{*}, \mathbf {u}_e^{k} = \mathbf {u}^{k}-\mathbf {u}^{*}, \mathbf {v}_e^{k} = \mathbf {v}^{k}-\mathbf {v}^{*}, \mathbf {b}_e^{k} = \mathbf {b}^{k}-\mathbf {b}^{*}\) and \(\mathbf {c}_e^{k} = \mathbf {c}^{k}-\mathbf {c}^{*}\).
Subtracting the first equation of (26) from the first equation of (24), we obtain
Taking the inner product of the left- and right-hand sides with \(\mathbf {x}_e^{k+1}\), we have
Subtracting the second equation of (26) from the second equation of (24) and then taking the inner product of both sides with \(\mathbf {u}_e^{k+1}\), we have
where \(\mathbf {p}^{k+1} \in \partial \Vert \mathbf {u}^{k+1} \Vert _{2,1}, \mathbf {p}^{*} \in \partial \Vert \mathbf {u}^{*} \Vert _{2,1}\).
Similarly, from the third equation of (24) and (26) we have
where \(\mathbf {q}^{k+1} \in \partial \Vert \mathbf {v}^{k+1} \Vert _{1}, \mathbf {q}^{*} \in \partial \Vert \mathbf {v}^{*} \Vert _{1}\).
Adding both sides of (28)–(30), we have
Furthermore, subtracting the fourth equation of (26) from the corresponding equation of (24), we obtain
Taking the squared norm of both sides of (32) gives
and further
Similarly, from the fifth equation of (24) and (26) we have
Substituting (34) and (35) into (31), we have
Summing (36) from \(k=0\) to \(k=K\) yields
Since \(\mathbf {p}^{k+1} \in \partial \Vert \mathbf {u}^{k+1} \Vert _{2,1}\), \(\mathbf {p}^{*} \in \partial \Vert \mathbf {u}^{*} \Vert _{2,1}\) and \(\Vert \cdot \Vert _{2,1}\) is convex, for all \(k\),
where the last inequality follows from the definition of the subgradient. Similarly, \(\Big \langle \mathbf {q}^{k+1}-\mathbf {q}^{*}, \mathbf {v}^{k+1}-\mathbf {v}^{*}\Big \rangle \ge 0.\)
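In detail, the definition of the subgradient gives the two inequalities
\[
\Vert \mathbf {u}^{*}\Vert _{2,1} \ge \Vert \mathbf {u}^{k+1}\Vert _{2,1} + \Big \langle \mathbf {p}^{k+1}, \mathbf {u}^{*} - \mathbf {u}^{k+1}\Big \rangle , \qquad \Vert \mathbf {u}^{k+1}\Vert _{2,1} \ge \Vert \mathbf {u}^{*}\Vert _{2,1} + \Big \langle \mathbf {p}^{*}, \mathbf {u}^{k+1} - \mathbf {u}^{*}\Big \rangle ,
\]
and adding them yields \(\Big \langle \mathbf {p}^{k+1}-\mathbf {p}^{*}, \mathbf {u}^{k+1}-\mathbf {u}^{*}\Big \rangle \ge 0\); the same argument applies to \(\Vert \cdot \Vert _{1}\). (This is the standard monotonicity of the subdifferential, spelled out here for completeness.)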
Therefore, all terms involved in (37) are nonnegative. We have
From (39) and \(\mu _1 >0, \mu _2 >0, \lambda _1 >0, \lambda _2 >0\), we can get
Equation (40) implies that \(\lim _{k\rightarrow + \infty } \Vert \mathbf {A}\mathbf {x}_e^{k} \Vert _2^2 =0\), and
Recalling the definition of the Bregman distance (Goldstein and Osher 2009; Cai et al. 2009), for any convex function \(J\) we have
where \(\mathbf {p} \in \partial J(\mathbf {z})\) and \(\mathbf {q} \in \partial J(\mathbf {x})\).
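In the one-sided form used below, the Bregman distance reads (a standard definition restated here; the paper's Eq. (42) may be stated symmetrically in \(\mathbf {p}\) and \(\mathbf {q}\))
\[
D_J^{\mathbf {p}}(\mathbf {x},\mathbf {z}) = J(\mathbf {x}) - J(\mathbf {z}) - \Big \langle \mathbf {p}, \mathbf {x}-\mathbf {z}\Big \rangle \ge 0, \qquad \mathbf {p} \in \partial J(\mathbf {z}),
\]
where the nonnegativity follows directly from the subgradient inequality.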
Now let \(J(\mathbf {x}) = \frac{1}{2}\Vert \mathbf {Ax-y}\Vert _2^2\), so that \(\nabla J(\mathbf {x}) = \mathbf {A}^T(\mathbf {Ax-y})\); then from (42) we have
Equation (43), together with the nonnegativity of the Bregman distance, implies that \(\lim _{k\rightarrow + \infty } D_J^{\nabla J(\mathbf {x}^{*})}(\mathbf {x}^k,\mathbf {x}^*) =0\), i.e.,
Equation (39) also leads to
and hence
Letting \(J(\mathbf {u}) = \Vert \mathbf {u}\Vert _{2,1}\) and \(\mathbf {p} \in \partial \Vert \mathbf {u} \Vert _{2,1}\), we obtain \(\lim _{k\rightarrow + \infty } D_{\Vert \cdot \Vert _{2,1}}^{\mathbf {p}^*}(\mathbf {u}^k,\mathbf {u}^*) =0\), i.e.,
Similarly, letting \(J(\mathbf {v}) = \Vert \mathbf {v}\Vert _{1}\) and \(\mathbf {q} \in \partial \Vert \mathbf {v} \Vert _{1}\), we obtain
Moreover, (39) also leads to \(\sum _{k=0}^{+ \infty } \Vert \mathbf {Dx}_e^{k+1} - \mathbf {u}_e^{k} \Vert _2^2 < + \infty \), which implies that \(\lim _{k\rightarrow + \infty } \Vert \mathbf {Dx}_e^{k+1} - \mathbf {u}_e^{k} \Vert _2^2 = 0\). By \(\mathbf {Dx}^{*} = \mathbf {u}^{*}\), we conclude that
Since \(\Vert \cdot \Vert _{2,1}\) is continuous, by (47) and (49) we have
Similarly to (49), we have \(\lim _{k\rightarrow + \infty } \Vert \mathbf {Dx}^{k+1} - \mathbf {v}^{k} \Vert _2^2 = 0\), and since \(\Vert \cdot \Vert _{1}\) is continuous, by (48) we also have
Summing (44), (50) and (51) yields
where the last equality comes from (25). So (18) is proved.
Next, we prove that (19) holds whenever (5) has a unique solution; the argument is by contradiction.
Define \(\varPhi (\mathbf {x}) = \frac{1}{2} \Vert \mathbf {Ax-y}\Vert _2^2 + \lambda _1 \Vert \mathbf {Dx} \Vert _{2,1} + \lambda _2 \Vert \mathbf {Dx} \Vert _{1}\). Then \(\varPhi (\mathbf {x})\) is convex and lower semi-continuous. Since \(\mathbf {x}^*\) is the unique minimizer, we have \(\varPhi (\mathbf {x}) > \varPhi (\mathbf {x}^*)\) for \(\mathbf {x} \ne \mathbf {x}^*\).
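The contradiction argument below uses the following standard observation (the notation \(m(\epsilon )\) is ours): since \(\varPhi \) is lower semi-continuous and the sphere \(\{\mathbf {x} : \Vert \mathbf {x}-\mathbf {x}^{*}\Vert _2 = \epsilon \}\) is compact, the minimum
\[
m(\epsilon ) := \min \{ \varPhi (\mathbf {x}) : \Vert \mathbf {x}-\mathbf {x}^{*}\Vert _2 = \epsilon \}
\]
is attained, and by the uniqueness of the minimizer \(\mathbf {x}^{*}\) it satisfies \(m(\epsilon ) > \varPhi (\mathbf {x}^{*})\).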
Now, suppose that (19) does not hold. Then there exist \(\epsilon > 0\) and a subsequence \(\mathbf {x}^{k_i}\) such that \(\Vert \mathbf {x}^{k_i} - \mathbf {x}^{*}\Vert _2 > \epsilon \) for all \(i\), and hence \(\varPhi (\mathbf {x}^{k_i}) > \min \{ \varPhi (\mathbf {x}) : \Vert \mathbf {x} - \mathbf {x}^{*}\Vert _2 = \epsilon \}\). Let \(\mathbf {z}\) be the intersection of the sphere \(\{ \mathbf {z} : \Vert \mathbf {z} - \mathbf {x}^{*}\Vert _2 = \epsilon \}\) with the line segment from \(\mathbf {x}^{*}\) to \(\mathbf {x}^{k_i}\); that is, there exists \(t \in (0,1)\) such that \(\mathbf {z} = t \mathbf {x}^{*} + (1-t) \mathbf {x}^{k_i}\). By the convexity of \(\varPhi \) and the definition of \(\mathbf {x}^{*}\), we have
Denote \({\tilde{\mathbf{x}}} = \arg \min \{ \varPhi (\mathbf {x}) : \Vert \mathbf {x} - \mathbf {x}^{*}\Vert _2 = \epsilon \}\). By (20), we have
which is a contradiction. So (19) holds whenever (5) has a unique solution. \(\square \)