Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization
Figure 1. A three-dimensional tensor: (a) column fiber; (b) row fiber; (c) tube fiber.
Figure 2. The t-product of 𝒵 and 𝒴.
Figure 3. The transpose tensor.
Figure 4. The identity tensor ℰ ∈ ℂ^(I1 × I1 × I3).
Figure 5. The f-diagonal tensor.
Figure 6. The t-SVD result of 𝒵.
Figure 7. Graphs of the rank function, nuclear norm, LogDet function, Laplace function and SRF for scalars.
Figure 8. Tensor completion results on simulated dataset DS1. One area of interest (red frame) is enlarged for detailed comparison. (a) Original images of selected bands of DS1 (because there are many bands, bands 56 and 57 are shown for the 10% sampling rate (first row), bands 66 and 67 for 20% (fourth row), and bands 73 and 74 for 30% (seventh row)). (b) The sampled images, showing which pixels of the selected bands are retained after random sampling. (c–g) Reconstruction results of HaLRTC, TNN, LogDet-TC, Laplace-TC and the proposed method, in order. The first, fourth and seventh rows show the reconstructions at sampling rates of 10%, 20% and 30%, respectively; the second, fifth and eighth rows show the difference images between the original images and the reconstructions; and the third, sixth and ninth rows show the gray-level histograms of the corresponding difference images.
Figure 9. Tensor completion results on simulated dataset DS2. One area of interest (red frame) is enlarged for detailed comparison. (a) Original images of selected bands of DS2 (bands 56 and 57 for the 10% sampling rate (first row), bands 92 and 94 for 20% (fourth row), and bands 93 and 95 for 30% (seventh row)). (b) The sampled images at sampling rates of 10%, 20% and 30%. (c–g) Reconstruction results of HaLRTC, TNN, LogDet-TC, Laplace-TC and the proposed method. The first, fourth and seventh rows show the reconstructions at sampling rates of 10%, 20% and 30%; the second, fifth and eighth rows show the corresponding difference images; and the third, sixth and ninth rows show the gray-level histograms of those difference images.
Figure 10. Tensor completion results on DS3. One area of interest (red frame) is enlarged for detailed comparison. (a) Original images of selected bands of DS3 (bands 32 and 99 for the 10% sampling rate (first row), bands 5 and 18 for 20% (fourth row), and bands 42 and 51 for 30% (seventh row)). (b) The sampled images at sampling rates of 10%, 20% and 30%. (c–g) Reconstruction results of HaLRTC, TNN, LogDet-TC, Laplace-TC and the proposed method, in order. The first, fourth and seventh rows show the reconstructions at sampling rates of 10%, 20% and 30%; the second, fifth and eighth rows show the corresponding difference images; and the third, sixth and ninth rows show the gray-level histograms of those difference images.
Figure 11. Distribution map of the ground object types in Example 4.
Figure 12. Tensor completion results on the distribution map of ground object types from the Remote Sensing Imaging Processing Center of the National University of Singapore. (a) Original images of bands 22 and 23 of simulated dataset 4 at sampling rates of 10%, 20% and 30% (first, fourth and seventh rows, respectively). (b) The sampled images at sampling rates of 10%, 20% and 30%. (c–g) Reconstruction results of HaLRTC, TNN, LogDet-TC, Laplace-TC and the proposed method. The first, fourth and seventh rows show the reconstructions at sampling rates of 10%, 20% and 30%; the second, fifth and eighth rows show the corresponding difference images; and the third, sixth and ninth rows show the gray-level histograms of those difference images.
Figure 13. Results on the AVIRIS Cuprite dataset. (a) Original images of selected bands (bands 72 and 74 for the 10% sampling rate (first row), bands 67 and 68 for 20% (fourth row), and bands 65 and 68 for 30% (seventh row)). (b) The sampled images at sampling rates of 10%, 20% and 30%. (c–g) Reconstruction results of HaLRTC, TNN, LogDet-TC, Laplace-TC and the proposed method. The first, fourth and seventh rows show the reconstructions at sampling rates of 10%, 20% and 30%; the second, fifth and eighth rows show the corresponding difference images; and the third, sixth and ninth rows show the gray-level histograms of those difference images.
Abstract
1. Introduction
- (1) This paper uses the SRF as a nonconvex surrogate for the tensor multi-rank. Through adaptive weight allocation, the SRF treats the different singular values of a tensor differently and approximates the rank function more closely than existing surrogate functions. The paper analyzes the convergence of the SRF and proposes a tensor completion model, providing new theoretical insight into tensor rank surrogates and tensor completion.
- (2) A solution algorithm based on the ADMM framework is proposed, and a warm-start strategy is added to ensure the convergence of the algorithm, providing technical support for the practical application of the proposed model.
- (3) Several experiments demonstrate that the proposed method recovers missing values accurately even from heavily subsampled data. The proposed model can therefore be applied effectively in fields that process high-dimensional image data, such as geological surveys.
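Contribution (1) can be illustrated at the scalar level: each rank surrogate assigns a score to a single singular value, and the closer that score tracks the 0/1 rank indicator, the better the surrogate. The paper's exact SRF parameterization is not reproduced in this excerpt; the sketch below uses the Gaussian family 1 − exp(−σ²/2δ²) that is standard in smoothed-rank-function work, and the parameter values (delta, eps, gamma) are illustrative, not the paper's tuned settings.

```python
import numpy as np

# Scalar surrogates for the rank indicator 1{sigma > 0}.
# Parameter values are illustrative, not the paper's settings.
rank_fn = lambda s: np.where(np.asarray(s) > 0, 1.0, 0.0)      # true rank contribution
nuclear = lambda s: np.asarray(s, dtype=float)                  # nuclear norm: linear, over-penalizes large values
logdet  = lambda s, eps=0.1: np.log(np.asarray(s) + eps) - np.log(eps)   # LogDet-style
laplace = lambda s, gamma=0.1: 1.0 - np.exp(-np.asarray(s) / gamma)      # Laplace surrogate
srf     = lambda s, delta=0.1: 1.0 - np.exp(-np.asarray(s) ** 2 / (2 * delta ** 2))  # SRF (Gaussian family)

# All bounded surrogates approach 1 for large singular values, but the
# SRF is also flat near 0, so small (noise-level) singular values are
# penalized far less than under the nuclear norm or Laplace surrogate.
sigma = np.linspace(0.0, 1.0, 101)
curves = {f.__name__ if hasattr(f, "__name__") else name: f(sigma)
          for name, f in [("rank", rank_fn), ("laplace", laplace), ("srf", srf)]}
```

For example, at σ = 0.05 the SRF score is about 0.12 while the Laplace score is about 0.39, so the SRF stays much closer to the rank function's value of 1 only once σ is clearly away from zero.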
2. Symbols and Preliminary Theory
2.1. Symbol Definitions
2.2. Preliminary Theory
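The preliminaries illustrated in the figure captions revolve around the t-product, tensor transpose, and t-SVD of third-order tensors, all of which reduce to slice-wise matrix operations in the Fourier domain along the third mode. A minimal sketch of these standard constructions from the t-SVD literature (not code from the paper) is:

```python
import numpy as np

def t_product(Z, Y):
    """t-product of Z (n1 x n2 x n3) and Y (n2 x n4 x n3): matrix products
    of corresponding frontal slices in the Fourier domain along mode 3."""
    Zf, Yf = np.fft.fft(Z, axis=2), np.fft.fft(Y, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Zf, Yf)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    """Tensor transpose: transpose each frontal slice and reverse the
    order of slices 2..n3."""
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, 1:][:, :, ::-1]], axis=2)

def t_svd(Z):
    """t-SVD Z = U * S * V^T: slice-wise SVDs in the Fourier domain.
    Returns real tensors U, f-diagonal S, and V."""
    n1, n2, n3 = Z.shape
    r = min(n1, n2)
    Zf = np.fft.fft(Z, axis=2)
    Uf = np.zeros((n1, r, n3), dtype=complex)
    Sf = np.zeros((r, r, n3), dtype=complex)
    Vf = np.zeros((n2, r, n3), dtype=complex)
    for k in range(n3 // 2 + 1):       # remaining slices follow from the
        u, s, vh = np.linalg.svd(Zf[:, :, k], full_matrices=False)
        Uf[:, :, k], Sf[:, :, k], Vf[:, :, k] = u, np.diag(s), vh.conj().T
    for k in range(n3 // 2 + 1, n3):   # conjugate symmetry of a real input
        Uf[:, :, k] = Uf[:, :, n3 - k].conj()
        Sf[:, :, k] = Sf[:, :, n3 - k].conj()
        Vf[:, :, k] = Vf[:, :, n3 - k].conj()
    ifft = lambda A: np.real(np.fft.ifft(A, axis=2))
    return ifft(Uf), ifft(Sf), ifft(Vf)
```

Composing the factors with the t-product, `t_product(t_product(U, S), t_transpose(V))`, reconstructs the original tensor, which mirrors how the t-SVD factorization is depicted.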
3. Proposed Method
3.1. Tensor Completion Model Based on Smooth Rank Function
3.2. Convergence Analysis of the Smooth Rank Function
3.3. Solution Algorithm
Algorithm 1. The ADMM-based algorithm for solving model (10).
Input:
Observed data , index set , parameters , and ;
1: Set the maximum number of iterations , , and let , , ;
2: for to :
3: Let ;
4: Obtain by performing the t-SVD on ;
5: Update according to Equation (22);
6: Update according to Equation (23);
7: Update according to Equation (21c);
8: If , terminate the loop;
9: end for
10: return ;
Output:
The recovered tensor .
4. Experiment
4.1. Data and Experimental Environment
4.2. Experimental Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Dataset | Data Source | Data Size (Height × Width × Band)
---|---|---
Example 1 | Generated from nine end members randomly selected from the subset generated from the NASA Johnson Space Center (NASA-JSC) spectral characteristics database | 100 × 100 × 100 |
Example 2 | Generated from four end members randomly selected from the United States Geological Survey (USGS) digital spectrum database | 100 × 100 × 224 |
Example 3 | Generated by randomly selecting five end members from | 75 × 75 × 100 |
Example 4 | Distribution map of ground object types from the Remote Sensing Imaging Processing Center of the National University of Singapore | 278 × 329 × 100 |
Example 5 | The AVIRIS Cuprite dataset | 350 × 350 × 188 |
Ground Object Type Number (Color) | Ground Object Type | Composition End Members (%) |
---|---|---|
1 (Dark brown) | Pure water | End member 1 (60), end member 10 (40) |
2 (Fuchsia) | Forest | End member 2 (90), end member 7 (10) |
3 (Yellow-green) | Shrub | End member 3 (50), end member 8 (50) |
4 (Light blue) | Grass | End member 4 (100) |
5 (Dark gray) | Soil, man-made buildings | End member 5 (70), end member 9 (30) |
6 (Navy blue) | Turbid water, soil, man-made buildings | End member 6 (40), end member 9 (30), end member 5 (30) |
7 (Light blue-green) | Soil, man-made buildings | End member 5 (50), end member 9 (50) |
8 (Dark blue-green) | Soil, man-made buildings | End member 5 (40), end member 9 (60) |
Dataset | Sampling Rate | HaLRTC PSNR | HaLRTC SSIM | TNN PSNR | TNN SSIM | LogDet-TC PSNR | LogDet-TC SSIM | Laplace-TC PSNR | Laplace-TC SSIM | Proposed PSNR | Proposed SSIM
---|---|---|---|---|---|---|---|---|---|---|---
DS1 | 10% | 19.1810 | 0.5427 | 26.1529 | 0.7779 | 26.5367 | 0.7730 | 27.1079 | 0.7811 | 27.6988 | 0.8027
DS1 | 20% | 24.3083 | 0.7850 | 30.4836 | 0.8771 | 31.3059 | 0.8830 | 31.9637 | 0.8908 | 32.3647 | 0.8979
DS1 | 30% | 28.7637 | 0.9037 | 33.8098 | 0.9271 | 34.9760 | 0.9347 | 35.3474 | 0.9380 | 35.6564 | 0.9409
DS2 | 10% | 44.3918 | 0.9915 | 50.2451 | 0.9965 | 52.0375 | 0.9975 | 52.1378 | 0.9975 | 52.5568 | 0.9979
DS2 | 20% | 47.8194 | 0.9961 | 59.3216 | 0.9995 | 61.3278 | 0.9997 | 59.3125 | 0.9995 | 61.9718 | 0.9997
DS2 | 30% | 54.1652 | 0.9990 | 66.4177 | 0.9999 | 67.7105 | 0.9999 | 68.0190 | 0.9999 | 68.5685 | 0.9999
DS3 | 10% | 46.8033 | 0.9928 | 47.5930 | 0.9897 | 50.5077 | 0.9940 | 50.9990 | 0.9943 | 67.4867 | 1.0000
DS3 | 20% | 50.9409 | 0.9974 | 54.7000 | 0.9978 | 57.3828 | 0.9986 | 57.0897 | 0.9986 | 95.5115 | 1.0000
DS3 | 30% | 54.4479 | 0.9986 | 61.0524 | 0.9994 | 63.5759 | 0.9996 | 63.4314 | 0.9996 | 98.4993 | 1.0000
Distribution map of ground object types | 10% | 21.5421 | 0.4777 | 27.8932 | 0.7478 | 28.0606 | 0.7472 | 28.2599 | 0.7715 | 28.5770 | 0.7949
Distribution map of ground object types | 20% | 24.9645 | 0.6679 | 32.2982 | 0.8828 | 33.1070 | 0.8808 | 33.1114 | 0.8976 | 33.3183 | 0.9038
Distribution map of ground object types | 30% | 26.8058 | 0.7725 | 35.8089 | 0.9394 | 36.4996 | 0.9364 | 36.8591 | 0.9475 | 37.0572 | 0.9502
AVIRIS Cuprite | 10% | 49.3208 | 0.4318 | 53.7352 | 0.7869 | 55.8124 | 0.8232 | 59.7162 | 0.8979 | 62.2725 | 0.9173
AVIRIS Cuprite | 20% | 53.4043 | 0.7048 | 56.6647 | 0.8804 | 57.3858 | 0.8224 | 64.9149 | 0.9502 | 67.0668 | 0.9565
AVIRIS Cuprite | 30% | 56.6848 | 0.8435 | 59.6989 | 0.9183 | 60.7367 | 0.8908 | 68.3113 | 0.9679 | 69.9921 | 0.9707
Percentage of pixels whose difference value falls within the given range:

Dataset | Sampling Rate | Difference Range of Pixel Value | HaLRTC | TNN | LogDet-TC | Laplace-TC | Proposed
---|---|---|---|---|---|---|---
DS1 | 10% | [−0.15, 0.15] | 86.330% | 99.805% | 99.870% | 99.920% | 99.940%
DS1 | 20% | [−0.10, 0.10] | 92.820% | 99.935% | 99.965% | 99.985% | 99.990%
DS1 | 30% | [−0.05, 0.05] | 85.450% | 99.300% | 99.775% | 99.800% | 99.860%
DS2 | 10% | [−0.01, 0.01] | 89.145% | 90.415% | 93.535% | 96.225% | 97.935%
DS2 | 20% | [−0.07, 0.07] | 69.225% | 98.090% | 99.085% | 98.875% | 99.925%
DS2 | 30% | [−0.05, 0.05] | 87.980% | 99.795% | 99.905% | 99.975% | 99.990%
DS3 | 10% | [−0.02, 0.02] | 97.013% | 98.027% | 98.951% | 98.996% | 100%
DS3 | 20% | [−0.01, 0.01] | 97.867% | 98.818% | 99.093% | 99.493% | 100%
DS3 | 30% | [−0.003, 0.003] | 94.347% | 96.631% | 97.173% | 97.493% | 100%
Distribution map of ground object types | 10% | [−0.10, 0.10] | 80.146% | 98.202% | 98.367% | 98.746% | 98.912%
Distribution map of ground object types | 20% | [−0.10, 0.10] | 90.904% | 99.873% | 99.957% | 99.944% | 99.955%
Distribution map of ground object types | 30% | [−0.10, 0.10] | 94.792% | 99.993% | 99.997% | 99.998% | 99.999%
AVIRIS Cuprite | 10% | [−400, 400] | 88.529% | 97.606% | 99.294% | 99.971% | 99.997%
AVIRIS Cuprite | 20% | [−100, 100] | 59.607% | 80.892% | 78.311% | 98.905% | 99.904%
AVIRIS Cuprite | 30% | [−50, 50] | 55.267% | 71.592% | 69.236% | 96.605% | 99.255%
Dataset | Metric | HaLRTC | TNN | LogDet-TC | Laplace-TC | Proposed
---|---|---|---|---|---|---
DS1 | Number of iterations | 240 | 348 | 207 | 202 | 196
DS1 | Time | 8.80 | 61.54 | 53.24 | 51.44 | 46.75
DS2 | Number of iterations | 261 | 380 | 178 | 170 | 163
DS2 | Time | 9.42 | 64.09 | 43.61 | 40.25 | 36.12
DS3 | Number of iterations | 313 | 374 | 200 | 186 | 181
DS3 | Time | 6.56 | 36.75 | 26.83 | 23.71 | 22.82
Distribution map of ground object types | Number of iterations | 362 | 365 | 202 | 194 | 187
Distribution map of ground object types | Time | 121.06 | 634.11 | 548.77 | 526.93 | 510.04
AVIRIS Cuprite | Number of iterations | 245 | 282 | 181 | 176 | 167
AVIRIS Cuprite | Time | 121.13 | 719.70 | 574.42 | 529.66 | 492.13
Share and Cite
Yu, S.; Miao, J.; Li, G.; Jin, W.; Li, G.; Liu, X. Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization. Remote Sens. 2023, 15, 3862. https://doi.org/10.3390/rs15153862