Remote Sensing Image Denoising via Low-Rank Tensor Approximation and Robust Noise Modeling
"> Figure 1
<p>Graphical model of NMoG-Tucker. Hollow nodes, shadowed nodes, and small solid nodes denote unobserved variables, observed data, and hyperparameters, respectively; a solid arrow from node <span class="html-italic">a</span> to node <span class="html-italic">b</span> indicates the explicit conditional distribution <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>(</mo> <mi>b</mi> <mo>|</mo> <mi>a</mi> <mo>)</mo> </mrow> </semantics></math>; a dashed arrow from node <span class="html-italic">a</span> to node <span class="html-italic">b</span> implies that <span class="html-italic">b</span> is implicitly conditioned on <span class="html-italic">a</span>; the box is a compact representation indicating that there are three sets of variables corresponding to the three tensor modes.</p> "> Figure 2
<p>PSNR and SSIM values of each band in MSI denoising, averaged over six testing MSIs. Differences between our results and the competing ones are plotted at larger scales.</p> "> Figure 3
<p>MSI denoising examples. Top two rows: band 31 in <span class="html-italic">Cloth</span> under Gaussian noise. Bottom two rows: band 31 in <span class="html-italic">Beads</span> under mixture noise. For better visualization, we show enlargements of two demarcated patches and the corresponding error maps (difference between the currently displayed patch and the original one). Error maps with less color information indicate better denoising performance.</p> "> Figure 4
<p>PSNR and SSIM values of each band for simulated HSI denoising. Differences between our results and the competing ones are plotted at larger scales.</p> "> Figure 5
<p>Simulated HSI denoising examples. Top two rows: band 58 in <span class="html-italic">DCmall</span> under mixture noise. Bottom two rows: band 43 in <span class="html-italic">Cuprite</span> under speckle noise. For better visualization, we show enlargements of two demarcated patches and the corresponding error maps, similarly to <a href="#remotesensing-12-01278-f003" class="html-fig">Figure 3</a>.</p> "> Figure 6
<p>Real HSI denoising example on band 220 in <span class="html-italic">Indian Pines</span>. For better visualization, we show enlargements of two demarcated patches.</p> "> Figure 7
<p>Real HSI classification example on <span class="html-italic">Indian Pines</span>. The classification results are obtained by performing SVM on the noisy and the denoised HSIs, and the corresponding OA values are reported in parentheses.</p> "> Figure 8
<p>Real HSI denoising example on band 99 in <span class="html-italic">Urban</span> under slight noise. Top two rows: the noisy image and color maps of the noise components estimated by different methods (difference between the noisy image and its denoised version). Results highlighting more noise and fewer image structures indicate better denoising performance. Bottom two rows: the corresponding vertical mean profiles, where we mark the locations of stripes by circles in the noisy data.</p> "> Figure 9
<p>Real HSI denoising example on band 206 in <span class="html-italic">Urban</span> under severe noise. Top two rows: the noisy image and the denoising results by different methods, where show enlargements of two demarcated patches for better visualization. Bottom two rows: the corresponding horizontal mean profiles.</p> ">
Abstract
1. Introduction
- We formulate the image denoising problem as a full Bayesian generative model, in which a low-Tucker-rank image prior is exploited to characterize the intrinsic low-rank tensor structure of the underlying image, and a non-i.i.d. MoG noise prior is adopted to encode the complex and distinct statistical structures of the embedded noise.
- We design a variational Bayesian algorithm for an efficient solution to the proposed model, where each variable can be updated in closed-form. Moreover, we develop adaptive strategies for the selection of involved hyperparameters, to make our algorithm free from burdensome hyperparameter-tuning.
- We conduct extensive denoising experiments on both simulated and real MSIs/HSIs, and the results show the superiority of the proposed method over the compared state-of-the-art ones.
2. Notation
3. Tucker Rank Minimization with Non-i.i.d. MoG Noise Modeling
3.1. Bayesian Model Formulation
- First, noise in each band exhibits complex statistical properties, which cannot be well captured by simple distributions such as Gaussian or Laplacian. We model the noise in each band by an i.i.d. MoG distribution, which is a universal approximator to any continuous distribution.
- Second, noise across different bands is non-identical in terms of structure and extent, due to sensor malfunctions and atmospheric conditions. This band-noise-distinctness nature is encoded by the band-dependent mixing proportion in MoG, leading to a non-i.i.d. noise distribution.
- Third, there is a strong correlation among the noise distributions in all bands, since real-life noise corruption is generally attributed to only a few main factors. In the proposed prior, the noise correlation is reflected by the fact that the MoG distributions of different bands share the same set of Gaussian components.
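The three properties above can be illustrated with a small simulation. This is a minimal sketch assuming K = 3 shared Gaussian components with illustrative standard deviations; the component count, scales, and Dirichlet sampling of the mixing proportions are our own choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared Gaussian components (illustrative std values only).
stds = np.array([0.01, 0.05, 0.2])   # K = 3 components shared by all bands

def sample_nmog_noise(n_pixels, n_bands, stds, rng):
    """Draw non-i.i.d. MoG noise: every band mixes the SAME Gaussian
    components, but with its OWN mixing proportions (band-dependent pi)."""
    K = len(stds)
    noise = np.empty((n_pixels, n_bands))
    for b in range(n_bands):
        pi = rng.dirichlet(np.ones(K))            # band-specific proportions
        z = rng.choice(K, size=n_pixels, p=pi)    # per-pixel component label
        noise[:, b] = rng.normal(0.0, stds[z])    # sample from chosen component
    return noise

E = sample_nmog_noise(10000, 8, stds, rng)
print(E.std(axis=0))   # per-band noise levels differ across bands
```

Because the components are shared while the proportions differ, the bands exhibit distinct noise extents yet remain statistically coupled, which is exactly the structure the prior encodes.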
- The first norm term is derived from the weighted multiplication-of-Gaussians prior on the solution (9). It is formed by penalizing the Euclidean distances between the unfoldings and the corresponding low-rank components.
- The second weighted-norm term is derived from the non-i.i.d. MoG prior on the noise (3). It serves as a spatially varying loss function that suppresses the noise according to the local noise-level estimates embedded in the weight matrix.
- The third and fourth weighted-norm terms are derived from the Gaussian priors on the factor matrices (6) and (7). They promote joint group sparsity in units of column pairs, which implies sparsity under rank-one bases, i.e., low-rankness.
- The remaining terms are derived from the priors on the other variables and provide them with suitable regularization.
3.2. Approximate Variational Inference
Algorithm 1. Variational Bayesian algorithm for NMoG-Tucker.
Input: Observed image.
Initialization:
1. Set the iteration index.
2. Initialize the low-rank component.
3. Initialize the noise component.
Iteration: while not converged do
4. Given the current noise component, update the low-rank component and related variables by (16)–(19).
5. Given the current low-rank component, update the noise component by (20)–(22).
6. Increment the iteration index.
End while and output the estimated low-rank image.
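The alternation in Algorithm 1 can be mimicked numerically. The sketch below is not the paper's closed-form variational updates: a truncated HOSVD stands in for the low-Tucker-rank update, and band-wise residual clipping stands in for the noise-model update. The function names and the parameters `n_iter` and `kappa` are our own:

```python
import numpy as np

def hosvd(Y, ranks):
    """Truncated HOSVD: project each mode-unfolding onto its leading
    left singular subspace (a simple low-Tucker-rank approximation)."""
    X = Y
    for mode, r in enumerate(ranks):
        M = np.moveaxis(X, mode, 0)
        unf = M.reshape(M.shape[0], -1)
        U = np.linalg.svd(unf, full_matrices=False)[0][:, :r]
        X = np.moveaxis((U @ (U.T @ unf)).reshape(M.shape), 0, mode)
    return X

def robust_tucker_denoise(Y, ranks, n_iter=15, kappa=2.5):
    """Alternate a low-rank update with a crude noise-model update:
    residuals larger than kappa band-wise standard deviations are
    clipped before the next low-rank fit (bands along the last mode)."""
    Z = Y.copy()
    for _ in range(n_iter):
        X = hosvd(Z, ranks)                       # low-rank component update
        R = Y - X                                 # current noise estimate
        sigma = R.reshape(-1, Y.shape[-1]).std(axis=0) + 1e-12
        lim = kappa * sigma                       # band-dependent noise scale
        Z = X + np.clip(R, -lim, lim)             # suppress unexplained outliers
    return X
```

On a synthetic low-Tucker-rank tensor corrupted by Gaussian noise plus sparse outliers, this loop recovers the clean tensor far more accurately than the noisy observation, illustrating why the low-rank/noise alternation works; the actual algorithm replaces both heuristic steps with closed-form posterior updates.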
3.3. Selection of Hyperparameters
4. Numerical Experiments
4.1. Synthetic Data Denoising
- Gaussian noise: all entries mixed with Gaussian noise .
- Gaussian + sparse noise: 80% entries mixed with Gaussian noise and 20% with additive uniform noise between .
- Mixture noise: 40% entries mixed with Gaussian noise , 20% with Gaussian noise , 20% with additive uniform noise between , and 20% missing (the locations of missing entries are not given as prior knowledge).
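The mixture setting above can be generated as follows. The noise standard deviations, the uniform range, and the tensor size in this sketch are placeholders; the experiment's exact values are those stated in the list above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 30, 20))       # clean synthetic tensor, entries in [0, 1]

# Illustrative parameters only (the two Gaussian stds and the uniform
# range are placeholders, not the experimental settings).
Y = X.copy().reshape(-1)
n = Y.size
idx = rng.permutation(n)
g1   = idx[: int(0.4 * n)]                 # 40%: Gaussian noise, std 1
g2   = idx[int(0.4 * n): int(0.6 * n)]     # 20%: Gaussian noise, std 2
u    = idx[int(0.6 * n): int(0.8 * n)]     # 20%: additive uniform noise
miss = idx[int(0.8 * n):]                  # 20%: missing entries
Y[g1] += rng.normal(0.0, 0.05, g1.size)
Y[g2] += rng.normal(0.0, 0.10, g2.size)
Y[u]  += rng.uniform(-0.5, 0.5, u.size)
Y[miss] = 0.0        # missing locations are NOT given to the denoiser
Y = Y.reshape(X.shape)
```

Each entry receives exactly one corruption type, matching the disjoint 40/20/20/20 partition of the entries.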
4.2. MSI Denoising
- Gaussian noise: all entries mixed with Gaussian noise . The signal-to-noise-ratio (SNR) value averaged over all 31 bands and all six MSIs is dB.
- Mixture noise: 60% entries mixed with Gaussian noise , 20% with Gaussian noise , and 20% with additive uniform noise between . The SNR value averaged over all 31 bands and all six MSIs is dB.
4.3. HSI Denoising
- Gaussian noise: all entries mixed with Gaussian noise . For DCmall, the SNR value of each band varies from 6 to 20 dB, and the mean SNR value of all 160 bands is 13.79 dB. For Cuprite, the SNR value of each band varies from 16 to 20 dB, and the mean SNR value of all 89 bands is 18.69 dB.
- Speckle noise: all bands are corrupted by non-i.i.d. speckle noise with signal-dependent intensity, which is simulated by multiplicative uniform noise with mean 1 and variance randomly sampled from for each band. For both DCmall and Cuprite, the SNR value of each band varies from 3 to 30 dB. The mean SNR value of all 160 bands in DCmall is 19.52 dB, and that of all 89 bands in Cuprite is 20.03 dB.
- Mixture noise: all bands are corrupted by non-i.i.d. Gaussian noise with zero-mean and band-dependent variances, and the SNR value of each band is uniformly sampled from 10 to 20 dB. Then, we randomly choose 90/50 bands in DCmall/Cuprite to add complex noises: the first 40/20 bands are corrupted by stripe noise with stripe number between and stripe intensity between ; the middle 40/20 bands are corrupted by deadline with line number between ; to entries in the last 40/20 bands are corrupted by speckle noise with mean 1 and variance . Thus, each band is randomly corrupted by one to three types of noises. For both DCmall and Cuprite, the SNR value of each band varies from 4 to 20 dB. The mean SNR value of all 160 bands in DCmall is 11.62 dB, and that of all 89 bands in Cuprite is 12.04 dB.
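Stripe and deadline corruption on a single band can be sketched as below; the stripe count, intensity range, and deadline count are illustrative stand-ins, not the sampled ranges used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
band = rng.random((64, 64))               # one clean band

corrupted = band.copy()

# Stripes: constant offsets added to randomly chosen columns.
stripe_cols = rng.choice(64, size=6, replace=False)
corrupted[:, stripe_cols] += rng.uniform(0.2, 0.5, size=stripe_cols.size)

# Deadlines: whole columns zeroed, mimicking dead sensor lines.
dead_cols = rng.choice(64, size=3, replace=False)
corrupted[:, dead_cols] = 0.0
```

Both corruptions are column-structured rather than pixel-wise, which is why band-dependent noise modeling matters in this setting.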
4.4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Mitra, K.; Sheorey, S.; Chellappa, R. Large-scale matrix factorization with missing data under additional constraints. In Advances in Neural Information Processing Systems; 2010; pp. 1651–1659. Available online: http://papers.nips.cc/paper/4111-large-scale-matrix-factorization-with-missing-data-under-additional-constraints (accessed on 17 April 2020).
- Okatani, T.; Yoshida, T.; Deguchi, K. Efficient algorithm for low-rank matrix factorization with missing components and performance comparison of latest algorithms. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 842–849.
- Meng, D.; De la Torre, F. Robust matrix factorization with unknown noise. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1337–1344.
- Zhao, Q.; Meng, D.; Xu, Z.; Zuo, W.; Zhang, L. Robust principal component analysis with complex noise. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 22–24 June 2014; pp. 55–63.
- Zhao, Q.; Meng, D.; Xu, Z.; Zuo, W.; Yan, Y. L1-norm low-rank matrix factorization by variational Bayesian method. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 825–839.
- Cao, X.; Zhao, Q.; Meng, D.; Chen, Y.; Xu, Z. Robust low-rank matrix factorization under general mixture noise distributions. IEEE Trans. Image Process. 2016, 25, 4677–4690.
- Chen, Y.; Cao, X.; Zhao, Q.; Meng, D.; Xu, Z. Denoising hyperspectral image with non-i.i.d. noise structure. IEEE Trans. Cybern. 2018, 48, 1054–1066.
- Yong, H.; Meng, D.; Zuo, W.; Zhang, L. Robust online matrix factorization for dynamic background subtraction. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1726–1740.
- Yue, Z.; Meng, D.; Sun, Y.; Zhao, Q. Hyperspectral image restoration under complex multi-band noises. Remote Sens. 2018, 10, 1631.
- Fazel, M.; Hindi, H.; Boyd, S.P. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the 2001 American Control Conference (Cat. No.01CH37148), Arlington, VA, USA, 25–27 June 2001; pp. 4734–4739.
- Recht, B.; Fazel, M.; Parrilo, P.A. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 2010, 52, 471–501.
- He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial–spectral total variation. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 713–729.
- Fazel, M.; Hindi, H.; Boyd, S.P. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proceedings of the 2003 American Control Conference, Denver, CO, USA, 4–6 June 2003; pp. 2156–2162.
- Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral image restoration via iteratively regularized weighted Schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659.
- Yang, J.H.; Zhao, X.L.; Ma, T.H.; Chen, Y.; Huang, T.Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144.
- Chen, Y.; Guo, Y.; Wang, Y.; Wang, D.; Peng, C.; He, G. Denoising of hyperspectral images using nonconvex low rank matrix approximation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5366–5380.
- Oh, T.H.; Tai, Y.W.; Bazin, J.C.; Kim, H.; Kweon, I.S. Partial sum minimization of singular values in robust PCA: Algorithm and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 744–758.
- Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208.
- Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
- Carroll, J.D.; Chang, J.J. Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition. Psychometrika 1970, 35, 283–319.
- Liu, X.; Bourennane, S.; Fossati, C. Denoising of hyperspectral images using the PARAFAC model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3717–3724.
- Zhao, Q.; Zhang, L.; Cichocki, A. Bayesian CP factorization of incomplete tensors with automatic rank determination. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1751–1763.
- Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31, 279–311.
- Renard, N.; Bourennane, S.; Blanc-Talon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2008, 5, 138–142.
- Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 208–220.
- Peng, Y.; Meng, D.; Xu, Z.; Gao, C.; Yang, Y.; Zhang, B. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2949–2956.
- Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 1227–1243.
- Kilmer, M.E.; Braman, K.; Hao, N.; Hoover, R.C. Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 2013, 34, 148–172.
- Fan, H.; Chen, Y.; Guo, Y.; Zhang, H.; Kuang, G. Hyperspectral image restoration using low-rank tensor recovery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 10, 4589–4604.
- Bengua, J.A.; Phien, H.N.; Tuan, H.D.; Do, M.N. Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Trans. Image Process. 2017, 26, 2466–2479.
- Oseledets, I.V. Tensor-train decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317.
- Yang, J.H.; Zhao, X.L.; Ji, T.Y.; Ma, T.H.; Huang, T.Z. Low-rank tensor train for tensor robust principal component analysis. Appl. Math. Comput. 2020, 367, 124783.
- Liu, Y.; Long, Z.; Huang, H.; Zhu, C. Low CP rank and Tucker rank tensor completion for estimating missing components in image data. IEEE Trans. Circuits Syst. Video Technol. 2019, to be published.
- Zhao, Q.; Meng, D.; Kong, X.; Xie, Q.; Cao, W.; Wang, Y.; Xu, Z. A novel sparsity measure for tensor recovery. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 271–279.
- Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z. Kronecker-basis-representation based tensor sparsity and its applications to tensor recovery. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1888–1902.
- Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 11.
- Meng, D.; Xu, Z.; Zhang, L.; Zhao, J. A cyclic weighted median method for L1 low-rank matrix factorization with missing entries. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, Bellevue, WA, USA, 14–18 July 2013; pp. 704–710.
- Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
- Maz'ya, V.; Schmidt, G. On approximate approximations using Gaussian kernels. IMA J. Numer. Anal. 1996, 16, 13–29.
- Yue, Z.; Yong, H.; Meng, D.; Zhao, Q.; Leung, Y.; Zhang, L. Robust multiview subspace learning with nonindependently and nonidentically distributed complex noise. IEEE Trans. Neural Netw. Learn. Syst. 2019, to be published.
- Chen, X.; Han, Z.; Wang, Y.; Zhao, Q.; Meng, D.; Tang, Y. Robust tensor factorization with unknown noise. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5213–5221.
- Luo, Q.; Han, Z.; Chen, X.; Wang, Y.; Meng, D.; Liang, D.; Tang, Y. Tensor RPCA by Bayesian CP factorization with complex noise. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5029–5038.
- Chen, X.; Han, Z.; Wang, Y.; Zhao, Q.; Meng, D.; Lin, L.; Tang, Y. A generalized model for robust tensor factorization with noise modeling by mixture of Gaussians. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5380–5393.
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin, Germany, 2006.
- Babacan, S.D.; Luessi, M.; Molina, R.; Katsaggelos, A.K. Sparse Bayesian methods for low-rank matrix estimation. IEEE Trans. Signal Process. 2012, 60, 3964–3977.
- Hurley, N.; Rickard, S. Comparing measures of sparsity. IEEE Trans. Inf. Theory 2009, 55, 4723–4741.
- Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des l'Ecole MINES: Paris, France, 2002.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253.
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
- Zhao, X.L.; Xu, W.H.; Jiang, T.X.; Wang, Y.; Ng, M.K. Deep plug-and-play prior for low-rank tensor completion. Neurocomputing, to be published.
| Competing Method | Data Prior | Noise Prior |
|---|---|---|
| LRMR [38] | Matrix rank constraint | Gaussian + sparse |
| MoG-RPCA [4] | Low-rank matrix factorization | MoG |
| NMoG-LRMF [7] | Low-rank matrix factorization | Non-i.i.d. MoG |
| LRTA [24] | Tucker decomposition | Gaussian |
| PARAFAC [21] | CP decomposition | Gaussian |
| KBR-RPCA [35] | Kronecker-basis-representation | Gaussian + sparse |
| Rank (10,10,10) | Gaussian noise ReErr | Time | Gaussian + sparse noise ReErr | Time | Mixture noise ReErr | Time |
|---|---|---|---|---|---|---|
| Noisy data | 7.41e-02 | - | 6.92e-01 | - | 8.08e-01 | - |
| LRMR | 4.09e-02 | 0.11 | 1.22e-01 | 2.27 | 4.40e-01 | 2.30 |
| MoG-RPCA | 3.35e-02 | 1.22 | 4.54e-02 | 6.22 | 3.21e-01 | 19.16 |
| NMoG-LRMF | 3.35e-02 | 4.78 | 4.15e-02 | 4.54 | 3.30e-01 | 15.29 |
| LRTA | 1.06e-02 | 0.12 | 1.43e-01 | 0.18 | 3.43e-01 | 0.42 |
| PARAFAC | 1.97e-02 | 5.19 | 2.67e-01 | 4.26 | 4.53e-01 | 4.04 |
| KBR-RPCA | 9.91e-03 | 2.93 | 1.44e-02 | 2.86 | 5.00e-02 | 2.91 |
| NMoG-Tucker | 1.00e-02 | 14.84 | 1.17e-02 | 31.14 | 3.25e-03 | 45.84 |

| Rank (20,15,10) | Gaussian noise ReErr | Time | Gaussian + sparse noise ReErr | Time | Mixture noise ReErr | Time |
|---|---|---|---|---|---|---|
| Noisy data | 7.56e-02 | - | 7.00e-01 | - | 8.12e-01 | - |
| LRMR | 4.27e-02 | 0.18 | 1.27e-01 | 2.09 | 4.58e-01 | 2.09 |
| MoG-RPCA | 3.42e-02 | 1.34 | 4.58e-02 | 5.01 | 2.84e-01 | 17.58 |
| NMoG-LRMF | 3.42e-02 | 3.72 | 4.22e-02 | 4.30 | 3.12e-01 | 15.17 |
| LRTA | 1.55e-02 | 0.14 | 2.04e-01 | 0.18 | 3.85e-01 | 0.36 |
| PARAFAC | 1.87e-01 | 5.10 | 3.17e-01 | 4.61 | 4.98e-01 | 4.18 |
| KBR-RPCA | 1.45e-02 | 2.45 | 2.16e-02 | 2.23 | 9.06e-02 | 3.12 |
| NMoG-Tucker | 1.47e-02 | 16.63 | 1.72e-02 | 34.62 | 1.97e-02 | 62.04 |
| Method | Gaussian noise MPSNR | MSSIM | ERGAS | Mixture noise MPSNR | MSSIM | ERGAS |
|---|---|---|---|---|---|---|
| Noisy image | 26.02 | 0.8088 | 204.24 | -2.24 | 0.0233 | 5287.19 |
| LRMR | 35.30 | 0.9631 | 72.70 | 20.38 | 0.6893 | 418.92 |
| MoG-RPCA | 31.31 | 0.8131 | 123.98 | 31.34 | 0.9475 | 125.40 |
| NMoG-LRMF | 32.88 | 0.9453 | 106.49 | 32.39 | 0.9531 | 146.44 |
| LRTA | 35.40 | 0.9575 | 71.53 | 20.61 | 0.4120 | 386.96 |
| PARAFAC | 26.82 | 0.7349 | 211.89 | 17.10 | 0.2496 | 569.36 |
| KBR-RPCA | 35.19 | 0.9637 | 73.95 | 33.43 | 0.9548 | 96.54 |
| NMoG-Tucker | 35.37 | 0.9703 | 71.49 | 36.02 | 0.9787 | 85.33 |
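The MPSNR and ERGAS figures reported in these tables can be computed as below (MSSIM additionally needs a windowed SSIM implementation and is omitted). The peak value and the resolution ratio are assumptions, since conventions vary across papers:

```python
import numpy as np

def band_psnr(ref, est, peak=1.0):
    """Per-band PSNR in dB for (H, W, B) images scaled to [0, peak];
    MPSNR is the mean over bands."""
    mse = ((ref - est) ** 2).mean(axis=(0, 1))
    return 10.0 * np.log10(peak ** 2 / np.maximum(mse, 1e-12))

def ergas(ref, est, ratio=1.0):
    """ERGAS relative global error; ratio is the resolution ratio,
    taken as 1 here for pure denoising (conventions differ)."""
    mse = ((ref - est) ** 2).mean(axis=(0, 1))
    mu = ref.mean(axis=(0, 1))
    return 100.0 * ratio * np.sqrt(np.mean(mse / np.maximum(mu, 1e-12) ** 2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 8))
est = ref + rng.normal(0.0, 0.01, ref.shape)
mpsnr = band_psnr(ref, est).mean()   # roughly 40 dB for noise std 0.01
```

Higher MPSNR/MSSIM and lower ERGAS indicate better restoration, which is how the bold-free comparisons above should be read.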
| DCmall | Gaussian noise MPSNR | MSSIM | ERGAS | Speckle noise MPSNR | MSSIM | ERGAS | Mixture noise MPSNR | MSSIM | ERGAS |
|---|---|---|---|---|---|---|---|---|---|
| Noisy data | 26.02 | 0.7627 | 187.93 | 31.65 | 0.8697 | 226.53 | 23.94 | 0.6988 | 316.59 |
| LRMR | 38.54 | 0.9848 | 43.35 | 38.44 | 0.9789 | 58.46 | 37.10 | 0.9785 | 58.66 |
| MoG-RPCA | 38.97 | 0.9865 | 41.59 | 33.85 | 0.9520 | 144.52 | 34.81 | 0.9597 | 110.30 |
| NMoG-LRMF | 39.47 | 0.9876 | 38.91 | 39.66 | 0.9847 | 59.58 | 38.68 | 0.9838 | 56.39 |
| LRTA | 36.86 | 0.9731 | 52.07 | 31.66 | 0.8698 | 226.17 | 24.05 | 0.7021 | 314.09 |
| PARAFAC | 32.02 | 0.9360 | 90.77 | 32.75 | 0.9410 | 89.04 | 28.81 | 0.8722 | 164.61 |
| KBR-RPCA | 37.31 | 0.9819 | 49.94 | 38.20 | 0.9844 | 48.77 | 36.82 | 0.9797 | 54.89 |
| NMoG-Tucker | 39.52 | 0.9877 | 38.68 | 40.23 | 0.9876 | 42.66 | 39.23 | 0.9865 | 44.51 |

| Cuprite | Gaussian noise MPSNR | MSSIM | ERGAS | Speckle noise MPSNR | MSSIM | ERGAS | Mixture noise MPSNR | MSSIM | ERGAS |
|---|---|---|---|---|---|---|---|---|---|
| Noisy data | 26.02 | 0.6953 | 124.07 | 27.23 | 0.7052 | 225.22 | 19.42 | 0.4071 | 327.75 |
| LRMR | 36.69 | 0.9668 | 38.03 | 35.59 | 0.9511 | 59.71 | 32.93 | 0.9300 | 60.28 |
| MoG-RPCA | 35.51 | 0.9697 | 43.46 | 31.62 | 0.9415 | 133.78 | 28.67 | 0.9101 | 119.08 |
| NMoG-LRMF | 36.67 | 0.9696 | 39.08 | 36.74 | 0.9737 | 59.43 | 33.77 | 0.9446 | 58.81 |
| LRTA | 34.69 | 0.9324 | 48.11 | 27.29 | 0.7070 | 222.79 | 21.01 | 0.4687 | 272.44 |
| PARAFAC | 29.41 | 0.8223 | 86.12 | 29.82 | 0.8395 | 85.87 | 25.81 | 0.7150 | 158.41 |
| KBR-RPCA | 35.49 | 0.9564 | 44.01 | 34.54 | 0.9611 | 51.75 | 32.70 | 0.9208 | 59.98 |
| NMoG-Tucker | 37.41 | 0.9706 | 35.59 | 38.72 | 0.9805 | 32.43 | 34.04 | 0.9462 | 51.58 |
| | Gaussian Noise ReErr | Gaussian + Sparse Noise ReErr | Mixture Noise ReErr |
|---|---|---|---|
| 1/2 | 1.45e-02 | 1.69e-02 | 2.35e-02 |
| 2/3 | 1.45e-02 | 1.69e-02 | 2.35e-02 |
| 3/4 | 1.45e-02 | 1.69e-02 | 2.35e-02 |
| 1/2 | 1.44e-02 | 1.68e-02 | 2.33e-02 |
| 2/3 | 1.44e-02 | 1.68e-02 | 2.33e-02 |
| 3/4 | 1.44e-02 | 1.68e-02 | 2.33e-02 |
| 1/2 | 1.44e-02 | 1.68e-02 | 8.06e-02 |
| 2/3 | 1.44e-02 | 1.68e-02 | 8.06e-02 |
| 3/4 | 1.44e-02 | 1.68e-02 | 8.06e-02 |
| | Gaussian Noise ReErr | Gaussian + Sparse Noise ReErr | Mixture Noise ReErr |
|---|---|---|---|
| | 9.86e-03 | 3.54e-01 | 7.11e-01 |
| | 9.86e-03 | 1.18e-02 | 9.03e-03 |
| | 9.83e-03 | 1.19e-02 | 4.15e-03 |
| | 9.83e-03 | 1.19e-02 | 4.94e-03 |
| | 9.82e-03 | 1.19e-02 | 3.90e-03 |
| | 9.83e-03 | 1.19e-02 | 3.30e-03 |
| | 9.83e-03 | 1.19e-02 | 3.37e-03 |
| | 9.82e-03 | 1.18e-02 | 3.50e-03 |
| | 9.81e-03 | 1.18e-02 | 3.24e-03 |
| | 9.80e-03 | 1.18e-02 | 2.97e-03 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ma, T.-H.; Xu, Z.; Meng, D. Remote Sensing Image Denoising via Low-Rank Tensor Approximation and Robust Noise Modeling. Remote Sens. 2020, 12, 1278. https://doi.org/10.3390/rs12081278