Tensor completion via convolutional sparse coding with small samples-based training

Published: 01 September 2023

Highlights

To tackle the missing value problem, we propose two CSC-regularized LRTC models, one SNN-based and one TNN-based.
In both models, an overcomplete dictionary is pre-trained with only a minimal amount of data.
We develop effective algorithms to solve the LRTC-CSC models, based on the inexact ADMM method and the plug-and-play framework.
Our models outperform state-of-the-art methods on three datasets of different types.
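To illustrate the CSC regularizer named in the highlights, the sketch below solves a convolutional sparse coding problem with ISTA, a simple proximal-gradient alternative to the ADMM solvers used in the paper. The filters `D`, the step size, and the penalty `lam` are illustrative assumptions, not the paper's actual training setup:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def csc_ista(s, D, lam=0.05, step=0.05, n_iter=100):
    """ISTA for convolutional sparse coding with circular convolutions:
    min_X 0.5*||sum_k d_k (*) x_k - s||^2 + lam * sum_k ||x_k||_1.
    Convolutions are evaluated in the Fourier domain."""
    Df = np.fft.fft2(D, s=s.shape)            # zero-padded filters in Fourier
    X = np.zeros((D.shape[0],) + s.shape)     # one coefficient map per filter
    for _ in range(n_iter):
        recon = np.real(np.fft.ifft2(np.sum(Df * np.fft.fft2(X), axis=0)))
        r = recon - s                          # data-fit residual
        grad = np.real(np.fft.ifft2(np.conj(Df) * np.fft.fft2(r)))
        X = soft(X - step * grad, step * lam)  # gradient step + shrinkage
    return X
```

The coefficient maps `X` are sparse, and `sum_k d_k (*) x_k` approximates the high-frequency component `s`; in the paper this term enters the LRTC objective as a supplementary regularizer.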

Abstract

Tensor data often suffer from missing values due to their complex high-dimensional structure and the acquisition process. To recover the missing information, many Low-Rank Tensor Completion (LRTC) methods have been proposed, most of which exploit the low-rank property of tensor data. In this way, the low-rank component of the original data can be recovered roughly; however, the detailed information cannot be fully restored, whether by Sum of the Nuclear Norm (SNN)- or Tensor Nuclear Norm (TNN)-based methods. In contrast, in the field of signal processing, Convolutional Sparse Coding (CSC) provides a good representation of the high-frequency component of an image, which is generally associated with the detail component of the data. To this end, we propose two novel methods, LRTC-CSC-I and LRTC-CSC-II, which adopt CSC as a supplementary regularization for LRTC to capture the high-frequency components. The LRTC-CSC methods can therefore not only solve the missing value problem but also recover the details. Moreover, the CSC regularizer can be trained with small samples owing to its sparsity characteristic. Extensive experiments show the effectiveness of the LRTC-CSC methods, and quantitative evaluation indicates that the performance of our models is superior to that of state-of-the-art methods.
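To make the SNN-based low-rank completion baseline concrete, here is a minimal SiLRTC-style sketch (not the paper's LRTC-CSC algorithm): each mode unfolding is singular-value-thresholded, the results are averaged, and the observed entries are re-imposed. The threshold `tau` and the iteration count are illustrative assumptions:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: matrix back to a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def snn_complete(T_obs, mask, tau=0.5, n_iter=50):
    """SNN-style completion: average the SVT of every mode unfolding,
    then re-impose the observed entries (mask is True where observed)."""
    X = T_obs.copy()
    for _ in range(n_iter):
        X = np.mean([fold(svt(unfold(X, m), tau), m, X.shape)
                     for m in range(X.ndim)], axis=0)
        X[mask] = T_obs[mask]       # keep known entries fixed
    return X
```

As the abstract notes, such a scheme recovers the low-rank component but smooths away details; the paper's contribution is to add the CSC term so the high-frequency residual is modeled as well.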



Published In

Pattern Recognition, Volume 141, Issue C, September 2023 (638 pages)

Publisher

Elsevier Science Inc.

United States


Author Tags

  1. Tensor completion
  2. Convolutional sparse coding
  3. High-pass filter
  4. Inexact ADMM

Qualifiers

  • Research-article
