Abstract
To overcome two limitations of existing low-rank representation (LRR) methods, namely that the error distribution must be known a priori and that the leading rank components may be over-penalized, this paper proposes a new LRR-based model, double weighted LRR (DWLRR), which exploits two distinct properties of the representation matrix. The first encodes the varying distributions of the residuals in an adaptively learned weighting matrix, giving greater flexibility in noise resistance. The second employs a parameterized rational penalty together with a weighting vector s to capture the importance of different rank components, yielding a better approximation to the intrinsic subspace structure. Moreover, we derive a computationally efficient algorithm based on a parallel updating scheme and an automatic thresholding operation. Comprehensive experiments on image clustering demonstrate the robustness and efficiency of DWLRR compared with other state-of-the-art models.
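The abstract's second weighting mechanism, penalizing rank components unequally so that leading singular values are shrunk less than trailing ones, builds on the generic weighted singular value thresholding operation. The paper's exact rational penalty is not given in this excerpt, so the sketch below shows only that generic building block; the function name `weighted_svt` and the particular weight vector `w` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_svt(X, weights):
    """Weighted singular value thresholding (sketch).

    Each singular value is soft-thresholded by its own weight, so
    leading rank components (small weights) are penalized less than
    trailing ones (large weights).
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)  # per-component soft threshold
    return U @ np.diag(s_shrunk) @ Vt

# Toy usage: increasing weights preserve the dominant subspace
# while suppressing (or zeroing) the weaker components.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
w = np.linspace(0.1, 1.0, 4)  # hypothetical weight schedule
Z = weighted_svt(X, w)
```

With a suitably chosen (or adaptively learned) weight schedule, this operator serves as the proximal step inside iterative low-rank solvers; nonuniform weights are precisely what distinguishes such penalties from the plain nuclear norm, which shrinks every singular value equally.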
Acknowledgements
This work is supported by National Natural Science Foundation of China (61602413) and Natural Science Foundation of Zhejiang Province (LY19F030016).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Zheng, J., Lou, K., Yang, P., Chen, W., Wang, W. (2019). Double Weighted Low-Rank Representation and Its Efficient Implementation. In: Yang, Q., Zhou, ZH., Gong, Z., Zhang, ML., Huang, SJ. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2019. Lecture Notes in Computer Science(), vol 11440. Springer, Cham. https://doi.org/10.1007/978-3-030-16145-3_44
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-16144-6
Online ISBN: 978-3-030-16145-3