
Article

Global and Local Tensor Sparse Approximation Models for Hyperspectral Image Destriping

1 School of Automation, Northwestern Polytechnical University, Shaanxi 710072, China
2 Ministry of Basic Education, Sichuan Engineering Technical College, Deyang 618000, China
3 Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518057, China
4 Department of Electronics and Informatics, Vrije Universiteit Brussel, 1050 Brussels, Belgium
5 Department of Computer Engineering, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(4), 704; https://doi.org/10.3390/rs12040704
Submission received: 14 January 2020 / Revised: 11 February 2020 / Accepted: 17 February 2020 / Published: 20 February 2020
(This article belongs to the Special Issue Advanced Techniques for Spaceborne Hyperspectral Remote Sensing)

Abstract

This paper presents a global and local tensor sparse approximation (GLTSA) model for removing stripes in hyperspectral images (HSIs). HSIs can easily be degraded by unwanted stripes. Two intrinsic characteristics of the stripes are (1) global sparse distribution and (2) local smoothness along the stripe direction. Stripe-free hyperspectral images are smooth in the spatial domain and exhibit strong spectral correlation. Existing destriping approaches often do not fully investigate such intrinsic characteristics of the stripes in the spatial and spectral domains simultaneously, and they may generate new artifacts in extreme areas, causing spectral distortion. The proposed GLTSA model applies two ℓ0-norm regularizers, to the stripe component and to its along-stripe gradient, to improve the destriping performance, and two ℓ1-norm regularizers to the gradients of the clean image in the spatial and spectral domains. The double non-convex functions in GLTSA are converted to a single non-convex function by a mathematical program with equilibrium constraints (MPEC). Experiment results demonstrate that GLTSA is effective and outperforms existing competitive matrix-based and tensor-based destriping methods in both visual and quantitative evaluation measures.

1. Introduction

Hyperspectral images (HSIs) offer abundant spatial and spectral information useful in various application fields, including weather prediction [1] and environmental monitoring [2]. Hyperspectral images collected by whisk-broom sensors and push-broom scanners can be degraded by stripes, often caused by calibration errors or inconsistent responses between detectors [3]. Such stripes degrade visual quality and negatively affect subsequent processing, such as unmixing [4,5], super-resolution [6], classification [7,8], compressive sensing reconstruction [9,10], and recovery [11,12]. In general, three types of stripes exist: horizontal (row-by-row) [13,14], vertical (column-by-column) [15,16], and oblique stripes [17,18]. Since horizontal and oblique stripes can be transformed into vertical stripes, our directional model focuses, without loss of generality, only on vertical stripes.
In tensor algebra, an HSI dataset is a three-mode tensor, where the first two modes represent spatial information and the third mode carries the spectral information of surface materials. HSI destriping methods can be divided into two categories: matrix-based (band-by-band) methods [13,17] and tensor-based (3D data cube) methods [19,20]. In spite of good results for a single band, matrix-based destriping methods ignore spectral correlation, causing spectral distortion in the restored HSI.
Matrix-based destriping methods can be classified into three categories: filtering-based, statistics-based, and optimization-based. Filtering-based methods include wavelet analysis [17], Fourier-domain filtering [21], and combined Fourier–wavelet filters [22]. For periodic stripes, the stripe frequencies can be identified by spectral analysis and suppressed with a finite-impulse-response (FIR) filter [21]. Unfortunately, useful information at the same frequencies may be treated as stripes and filtered out, and for non-periodic stripes such filters may introduce blur or ringing artifacts. Münch et al. [22] proposed a wavelet filter in the Fourier domain to improve destriping performance.
Statistics-based methods [23,24,25,26,27,28,29,30], such as moment matching (MM) [23] and histogram matching (HM) [25], analyze and match the distribution of each sensor signal. MM assumes that the distributions (the mean and the standard deviation) captured by different imaging systems are identical. HM attempts to match a reference empirical cumulative distribution function (ECDF) using the histogram of the target signal. However, these methods rely on the strong assumption of similarity and regularity among the stripes, and the results are unstable, especially for nonlinear and/or irregular stripes. To overcome this issue, various approaches have been proposed [26,27,28,29,30]. Cao et al. [26] improved HM by using sliding windows; this approach assumes that the distribution of a striped column is close to the gray-level distribution of the local area centered on that column. Sun et al. [27] proposed an improved MM method to remove nonlinear and irregular stripes of different frequencies. Shen et al. [28] proposed a piecewise destriping method, although determining its threshold value is difficult. For stripes in Moderate-Resolution Imaging Spectroradiometer (MODIS) data, Rakwatin et al. [29] proposed a method combining HM filtering with an iterated weighted least-squares scheme. Chang et al. [30] combined MM with multi-resolution analysis in the wavelet domain. These methods perform well when the scene has a homogeneous surface, but the results depend strongly on the reference moment or histogram set in advance, and finding a suitable reference is challenging.
Recently, more researchers have been paying attention to optimization-based destriping methods [14,19,31,32]. Mathematically, these methods view destriping as an ill-conditioned inverse problem, which is traditionally handled by exploiting priors through corresponding regularizations. Inspired by classical total variation (TV) models, an improved TV model was proposed to destripe MODIS images [14]. To remove stripes in multispectral images, Chang et al. [19] introduced a novel TV regularizer in the spectral and spatial domains. Based on the unidirectional total variation (UTV) model, several improved or combined methods were proposed [31,32]. In [31], the authors combined UTV with framelet regularization to remove stripes, while Chang et al. [32] incorporated a UTV model with sparse priors on the stripes to remove random noise and stripes concurrently. Most optimization-based methods estimate the ideal image from the stripe-degraded image without considering the directional and structural characteristics of the stripes in the spectral and spatial domains. Recently, methods combining the priors of both the image component and the stripe component have been proposed [13,15,16,33,34]. By analyzing the properties of stripes, Liu et al. [13] and Dou et al. [16] separated the stripes from the image. Chen et al. [15] and Chang et al. [34] extracted the stripe component from the perspective of image decomposition. To overcome the drawback of the ℓ1-norm sparsity constraint for stripes, the authors of [33] introduced a group-sparsity scheme, i.e., the ℓ2,1 norm, to represent the sparsity of the stripes.
Matrix-based methods mainly focus on spatial information and process a single-band image at a time; they do not fully utilize the spatial–spectral information of HSIs. The Anisotropic Spectral–Spatial Total Variation (ASSTV) model [19] was proposed for HSI destriping and considers the prior of the image component through a combined spectral–spatial total variation, but not the prior of the stripe component. Although most existing matrix-based methods can restore stripe-free images, they usually destroy the original structure, causing spectral distortion. For severe stripe corruption, the outputs of existing methods are either over-smoothed or degraded with artifacts (see the figures presented in the experimental part).
Tensor-based destriping methods can fully exploit structural spectral–spatial correlation [35,36]. Low-Rank Tensor Decomposition (LRTD) [20] separates the stripes from the perspective of low-rank tensor decomposition. In [35], a weighted low-rank tensor recovery (WLRTR) model is extended to WLRTR robust principal component analysis (WLRTR-RPCA) to extract the stripes. In [36], destriping is implemented by combining low-rank approximation with nonlocal total variation. Although tensor-based destriping methods outperform matrix-based methods, the low-rank constraint on the stripes is difficult to satisfy in real HSIs, especially when the stripes are discontinuous [34], and determining an optimal tensor rank is challenging.
Considering the characteristics of stripes, sparse representation and sparsity-promoting priors have been utilized to remove stripes in HSIs [15,16,32,33]. However, this sparsity characteristic has not been considered under a tensor framework. In this paper, the tensor-based non-convex sparse model estimates the stripes from the observed images by using two ℓ0 sparse priors and two ℓ1 sparse priors. One ℓ0 sparse prior describes the global sparse distribution of the stripes, and the other describes their directional property (the gradient along the stripes). One ℓ1 sparse prior characterizes the continuity of the image in the across-stripe direction, and the other the correlation along the spectral dimension. Section 3.2 further explains the reasons for choosing these priors. Figure 1 shows the framework and priors of the proposed method. The operators ∇1 and ∇2 denote the unidirectional difference (TV) operators in the across-stripe and along-stripe directions, respectively, and ∇3 denotes the difference operator along the spectral direction. To solve the proposed GLTSA, we extend the proximal alternating direction method of multipliers (PADMM)-based algorithm [16] to the tensor framework.
The main contributions of the proposed HSI destriping methods are as follows:
(1)
We transform the HSI destriping model into a tensor framework to better exploit the high correlation between adjacent bands. In this framework, a non-convex optimization model is constructed to estimate the stripes and the underlying stripe-free image simultaneously for higher spectral fidelity.
(2)
The proposed method fully considers the discriminative priors of the stripes and the stripe-free image in the tensor framework. We represent and analyze the intrinsic characteristics of the stripes and the stripe-free image using ℓ0- and ℓ1-norm regularizers, respectively. Experiment results show that the proposed method outperforms existing state-of-the-art methods, especially when the stripes are non-periodic.
(3)
We use PADMM to solve the non-convex optimization model effectively. Extensive experiments were conducted on simulated and real-world data, and the proposed scheme achieves superior performance in both quantitative evaluation and visual comparison compared with existing approaches.
The remainder of this paper is organized as follows. Section 2 gives some preliminary knowledge on the tensor notations and intrinsic statistical characteristics of stripe and stripe-free images. Section 3 presents the formulation of our model, along with a PADMM solver. In Section 4, we discuss the experiments and analyze the selection of parameters. Section 5 concludes this paper.

2. Intrinsic Statistical Property Analysis of Stripe and Clean Images

2.1. Notations and Preliminaries

In this paper, we use boldface Euler script letters (e.g., $\mathcal{X}$), boldface capital letters (e.g., $\mathbf{X}$), boldface lowercase letters (e.g., $\mathbf{x}$), and lowercase letters (e.g., $x$) to denote tensors, matrices, vectors, and scalars, respectively. For an N-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its element at location $(i_1, i_2, \ldots, i_N)$ is denoted as $x_{i_1, i_2, \ldots, i_N}$. The $\ell_1$ norm of the tensor $\mathcal{X}$ is defined as $\|\mathcal{X}\|_1 = \sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\cdots\sum_{i_N=1}^{I_N} |x_{i_1, i_2, \ldots, i_N}|$, and the $\ell_0$ norm is defined as the number of non-zero elements in $\mathcal{X}$. The inner product and Hadamard product of two N-order tensors $\mathcal{X}, \mathcal{Y}$ are defined as $\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1, i_2, \ldots, i_N} x_{i_1, i_2, \ldots, i_N} \cdot y_{i_1, i_2, \ldots, i_N}$ and $\mathcal{X} \odot \mathcal{Y} = \mathcal{Z}$, where $\mathcal{Z} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and $z_{i_1, i_2, \ldots, i_N} = x_{i_1, i_2, \ldots, i_N} \cdot y_{i_1, i_2, \ldots, i_N}$. For more information about tensor algebra, please refer to [37].
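As a quick, self-contained illustration (our own sketch, not part of the original paper), the following NumPy snippet evaluates these tensor quantities for a small third-order tensor; the function names are chosen for this example only.

```python
import numpy as np

def l1_norm(x):
    """Entry-wise l1 norm of a tensor: sum of absolute values."""
    return np.sum(np.abs(x))

def l0_norm(x, tol=0.0):
    """l0 'norm': number of entries whose magnitude exceeds tol."""
    return int(np.sum(np.abs(x) > tol))

def inner_product(x, y):
    """Tensor inner product: sum of element-wise products."""
    return np.sum(x * y)

def hadamard(x, y):
    """Hadamard (element-wise) product of two same-sized tensors."""
    return x * y

# Small 3-order example
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 3))
Y = rng.standard_normal((4, 5, 3))
print(l1_norm(X), l0_norm(X), inner_product(X, Y), hadamard(X, Y).shape)
```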

2.2. Priors and Regularizers

This subsection analyzes the priors of the stripes and the stripe-free image and designs appropriate regularizers to describe these priors in a destriping model.

2.2.1. Global Sparsity of Stripes

When the stripes in an HSI are few, they can be regarded as a sparse tensor [16]. Naturally, we choose the ℓ0-norm, i.e., the number of non-zero elements, to measure the sparsity of the stripe component $\mathcal{S}$:
$$ R_1(\mathcal{S}) = \|\mathcal{S}\|_0 \tag{1} $$
Based on this sparsity constraint, light stripes can be simply separated while keeping the useful information. However, when the stripes are extremely dense, it is difficult to restore the HSI. Heavy stripes, meaning that the percentage of striped pixels in an image is high (usually more than 50%), significantly damage the reliable spatial and spectral information. A single ℓ0-sparsity measurement for the stripes can no longer reasonably depict their intrinsic characteristics. Thus, it is necessary to explore further regularizers for the stripes and the HSI to deal with heavy stripes.

2.2.2. Local Smoothness along Stripe Direction

Besides the global sparsity of the stripes mentioned above, an even stronger sparsity is embodied in the gradient domain along the stripes. Figure 2 shows the statistical distribution of non-zero elements in $\nabla_2 \mathcal{S}$, where $\nabla_2$ is the gradient operator along the stripes, and the stripe component $\mathcal{S}$ in Figure 2 is extracted with the method proposed in [13]. Most gradients along the stripes are so small that they are close to zero. This sparsity in the gradient domain essentially reflects local smoothness along the stripe direction. The local sparsity measurement for the stripes is as follows:
$$ R_2(\mathcal{S}) = \|\nabla_2 \mathcal{S}\|_0 \tag{2} $$

2.2.3. Continuity of HSIs along Spatial Domain

As suggested in [16], for an HSI $\mathcal{U}$, the spatial information along mode-1 (the spatial horizontal direction) is continuous. From Figure 3b, the stripes severely destroy the local spatial continuity in mode-1, indicating that the horizontal gradient of $\mathcal{U}$ should be as small as possible to preserve this continuity. With this observation, the following ℓ1-norm regularization on the gradient of $\mathcal{U}$ is used to describe the local continuity of the image in the spatial domain:
$$ R_3(\mathcal{U}) = \|\nabla_1 \mathcal{U}\|_1 \tag{3} $$
where $\nabla_1 \mathcal{U}$ represents the horizontal gradient of $\mathcal{U}$.

2.2.4. Continuity of HSIs along Spectral Domain

Figure 4 shows histograms of the spectral gradient of the degraded HSI, the underlying stripe-free image, and the stripes of WDCM in Figure 3. The spectral gradient histogram of the stripe-free HSI (Figure 4b) contains more zeros and smaller values than that of the stripes (Figure 4c), which illustrates that the underlying HSI possesses strong continuity along the spectral direction. The low-rank prior of the stripes is violated for real HSIs, especially when the stripes are fragmented [34]. It is therefore natural to minimize the ℓ1 norm of the spectral gradient of $\mathcal{U}$:
$$ R_4(\mathcal{U}) = \|\nabla_3 \mathcal{U}\|_1 \tag{4} $$
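To make the four regularizers concrete, the following sketch (our own illustration, assuming circular first-order differences and a particular mapping of array axes to the paper's modes) evaluates R1–R4 for an HSI cube stored as a (mode-1, mode-2, band) array, with vertical stripes running along mode-2.

```python
import numpy as np

def circ_diff(x, axis):
    """First-order difference along `axis` with circular boundary (one common choice)."""
    return np.roll(x, -1, axis=axis) - x

def regularizers(U, S, axes=(0, 1, 2)):
    """Evaluate the four GLTSA priors for a clean cube U and stripe cube S.

    axes = (across-stripe, along-stripe, spectral); the mapping of array axes
    to the paper's modes is an assumption of this illustration.
    """
    ax1, ax2, ax3 = axes
    R1 = np.count_nonzero(S)                        # global sparsity of the stripes
    R2 = np.count_nonzero(circ_diff(S, ax2))        # smoothness along the stripes
    R3 = np.sum(np.abs(circ_diff(U, ax1)))          # spatial continuity across the stripes
    R4 = np.sum(np.abs(circ_diff(U, ax3)))          # spectral continuity
    return R1, R2, R3, R4

# Toy example: a cube that is smooth across the stripes, corrupted by three stripes
h, v, b = 32, 32, 5
U = np.tile(np.linspace(0, 1, h)[:, None, None], (1, v, b))
S = np.zeros_like(U)
S[[3, 10, 20], :, :] = 0.3                          # constant along the stripe direction
print(regularizers(U, S))
```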

3. The GLTSA Destriping Method

3.1. Stripe-Degraded Model

From the mechanism of noise generation, the stripes in an HSI can be classified into two categories: additive stripes [14] and multiplicative stripes [38]. Multiplicative stripes can be transformed into additive stripes by a logarithmic transformation [39]. Thus, the observation model of a stripe-degraded HSI is formulated as F = U + S [13,14], and the corresponding tensor-based reformulation is as follows:
$$ \mathcal{F} = \mathcal{U} + \mathcal{S} \tag{5} $$
where $\mathcal{F}, \mathcal{U}, \mathcal{S} \in \mathbb{R}^{h \times v \times b}$ indicate the degraded image, the unknown clean image, and the stripes, respectively. Moreover, h and v indicate the numbers of horizontal and vertical pixels in the spatial domain, and b represents the number of spectral bands.

3.2. The GLTSA Destriping Model

Based on the discussion of the priors for both the image and stripe components in Section 2.2, the proposed model can be formulated as follows:
$$ \min_{\mathcal{S}, \mathcal{U}} \; \alpha \|\mathcal{S}\|_0 + \|\nabla_2 \mathcal{S}\|_0 + \lambda \|\nabla_1 \mathcal{U}\|_1 + \gamma \|\nabla_3 \mathcal{U}\|_1 \tag{6} $$
where $\alpha$, $\lambda$, and $\gamma$ are weighting parameters. To tackle this double-objective non-convex optimization problem, we can transform it into the following single-objective non-convex optimization problem by using the constraint $\mathcal{U} = \mathcal{F} - \mathcal{S}$:
$$ \min_{\mathcal{S}} \; \alpha \|\mathcal{S}\|_0 + \|\nabla_2 \mathcal{S}\|_0 + \lambda \|\nabla_1 (\mathcal{F} - \mathcal{S})\|_1 + \gamma \|\nabla_3 (\mathcal{F} - \mathcal{S})\|_1 \tag{7} $$
Note that the optimization problem in Equation (7) differs substantially from the models proposed in [13,16]. The main difference is that those two models are matrix-based, while our method is tensor-based. The models in [13,16] do not consider the spectral correlation, which results in spectral distortion; the proposed method introduces a spectral prior to avoid this issue.

3.3. Optimization Procedure and Algorithm

There are two non-convex ℓ0 regularization terms in Equation (7). To handle them, we first present Lemma 1, which converts each ℓ0 term into an equivalent constrained form. The PADMM method is then utilized to solve the resulting problem, with a theoretical convergence guarantee.
Lemma 1.
(Mathematical Program with Equilibrium Constraints (MPEC) equation [39]). For any given vector $\mathbf{w} \in \mathbb{R}^n$, the following holds:
$$ \|\mathbf{w}\|_0 = \min_{\mathbf{0} \le \mathbf{v} \le \mathbf{1}} \langle \mathbf{1}, \mathbf{1} - \mathbf{v} \rangle, \quad \text{s.t.} \; \mathbf{v} \odot |\mathbf{w}| = \mathbf{0} \tag{8} $$
where $\langle \cdot, \cdot \rangle$ represents the inner product of two vectors and $\odot$ denotes the element-wise product. $|\mathbf{w}|$ stands for the element-wise absolute value of $\mathbf{w}$. Then, $\mathbf{v} = \mathbf{1} - \mathrm{sign}(|\mathbf{w}|)$ is the unique optimal solution of the minimization problem in Equation (8), where $\mathrm{sign}(x) = 1$ for $x > 0$, $0$ for $x = 0$, and $-1$ for $x < 0$.
Lemma 1 can be intuitively extended to the tensor version, i.e., Lemma 2:
Lemma 2.
For any given tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the following holds:
$$ \|\mathcal{W}\|_0 = \min_{\mathbf{0} \le \mathcal{V} \le \mathbf{1}} \langle \mathbf{1}, \mathbf{1} - \mathcal{V} \rangle, \quad \text{s.t.} \; \mathcal{V} \odot |\mathcal{W}| = \mathbf{0} \tag{9} $$
where $\mathcal{V} = \mathbf{1} - \mathrm{sign}(|\mathcal{W}|)$ is the unique optimal solution of the minimization problem in Equation (9).
Proof. 
Similar to Lemma 1, see details in [39]. ☐
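As a small numerical sanity check (our own, not from the paper), the snippet below verifies that $\mathcal{V} = \mathbf{1} - \mathrm{sign}(|\mathcal{W}|)$ satisfies the equilibrium constraint and attains an objective value equal to $\|\mathcal{W}\|_0$.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 7, 3))
W[np.abs(W) < 0.8] = 0.0                    # make the tensor sparse

V = 1.0 - np.sign(np.abs(W))                # candidate optimal solution from Lemma 2
objective = np.sum(1.0 - V)                 # <1, 1 - V>
l0 = np.count_nonzero(W)

assert np.allclose(V * np.abs(W), 0.0)      # equilibrium constraint holds
assert objective == l0                      # objective equals the l0 norm
print(l0, objective)
```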
According to Lemma 2, the objective function (7) can be reformulated as follows:
$$ \min_{\mathbf{0} \le \mathcal{V} \le \mathbf{1},\, \mathcal{S}} \; \alpha \|\mathcal{S}\|_0 + \langle \mathbf{1}, \mathbf{1} - \mathcal{V} \rangle + \lambda \|\nabla_1 (\mathcal{F} - \mathcal{S})\|_1 + \gamma \|\nabla_3 (\mathcal{F} - \mathcal{S})\|_1, \quad \text{s.t.} \; \mathcal{V} \odot |\nabla_2 \mathcal{S}| = \mathbf{0} \tag{10} $$
Following the ADMM optimization framework, we introduce four auxiliary variables $\mathcal{O}$, $\mathcal{P}$, $\mathcal{Q}$, and $\mathcal{R}$, and equivalently transform Equation (10) as follows:
$$ \min_{\mathbf{0} \le \mathcal{V} \le \mathbf{1},\, \mathcal{S}, \mathcal{O}, \mathcal{P}, \mathcal{Q}, \mathcal{R}} \; \alpha \|\mathcal{P}\|_0 + \langle \mathbf{1}, \mathbf{1} - \mathcal{V} \rangle + \lambda \|\mathcal{Q}\|_1 + \gamma \|\mathcal{R}\|_1, \quad \text{s.t.} \; \mathcal{S} = \mathcal{P},\; \mathcal{V} \odot |\mathcal{O}| = \mathbf{0},\; \nabla_2 \mathcal{S} = \mathcal{O},\; \nabla_1 (\mathcal{F} - \mathcal{S}) = \mathcal{Q},\; \nabla_3 (\mathcal{F} - \mathcal{S}) = \mathcal{R} \tag{11} $$
The augmented Lagrangian form of (11) is as follows:
$$
\begin{aligned}
\mathcal{L}(\mathcal{V}, \mathcal{S}, \mathcal{O}, \mathcal{P}, \mathcal{Q}, \mathcal{R}, \Lambda_i, \beta_i) = {} & \alpha \|\mathcal{P}\|_0 + \langle \Lambda_1, \mathcal{S} - \mathcal{P} \rangle + \frac{\beta_1}{2} \|\mathcal{S} - \mathcal{P}\|_F^2 \\
& + \langle \mathbf{1}, \mathbf{1} - \mathcal{V} \rangle + \langle \Lambda_2, \mathcal{V} \odot |\mathcal{O}| \rangle + \frac{\beta_2}{2} \|\mathcal{V} \odot |\mathcal{O}|\|_F^2 \\
& + \lambda \|\mathcal{Q}\|_1 + \langle \Lambda_3, \nabla_1(\mathcal{F} - \mathcal{S}) - \mathcal{Q} \rangle + \frac{\beta_3}{2} \|\nabla_1(\mathcal{F} - \mathcal{S}) - \mathcal{Q}\|_F^2 \\
& + \gamma \|\mathcal{R}\|_1 + \langle \Lambda_4, \nabla_3(\mathcal{F} - \mathcal{S}) - \mathcal{R} \rangle + \frac{\beta_4}{2} \|\nabla_3(\mathcal{F} - \mathcal{S}) - \mathcal{R}\|_F^2 \\
& + \langle \Lambda_5, \nabla_2 \mathcal{S} - \mathcal{O} \rangle + \frac{\beta_5}{2} \|\nabla_2 \mathcal{S} - \mathcal{O}\|_F^2
\end{aligned} \tag{12}
$$
where the $\beta_i$ are positive scalar parameters and the $\Lambda_i$ ($i = 1, 2, 3, 4, 5$) are the scaled Lagrange multipliers.
1. 𝓞 subproblem: This subproblem can be written as follows.
$$ \min_{\mathcal{O}} \; \langle \Lambda_2^k, \mathcal{V}^k \odot |\mathcal{O}| \rangle + \frac{\beta_2}{2} \|\mathcal{V}^k \odot |\mathcal{O}|\|_F^2 + \langle \Lambda_5^k, \nabla_2 \mathcal{S}^k - \mathcal{O} \rangle + \frac{\beta_5}{2} \|\nabla_2 \mathcal{S}^k - \mathcal{O}\|_F^2 \tag{13} $$
It has a closed-form solution [16]:
$$ \mathcal{O}^{k+1} = \mathrm{sign}(\mathcal{T}^k) \odot \frac{|\mathcal{T}^k| - \Lambda_2^k \odot \mathcal{V}^k}{\beta_5 \mathcal{I}^k + \beta_2\, \mathcal{V}^k \odot \mathcal{V}^k} \tag{14} $$
where $\mathcal{T}^k = \beta_5 \nabla_2 \mathcal{S}^k + \Lambda_5^k$, $\mathcal{I}^k$ is the all-ones tensor of the same size as $\mathcal{S}^k$, and the division is element-wise.
2. 𝓟 subproblem: This subproblem is a non-convex optimization problem.
$$ \min_{\mathcal{P}} \; \alpha \|\mathcal{P}\|_0 + \langle \Lambda_1^k, \mathcal{S}^k - \mathcal{P} \rangle + \frac{\beta_1}{2} \|\mathcal{S}^k - \mathcal{P}\|_F^2 \tag{15} $$
According to [40,41], $\mathcal{P}^{k+1}$ can be updated as follows:
$$ \mathcal{P}^{k+1} = \mathrm{hdthre}\!\left( \mathcal{S}^k + \frac{\Lambda_1^k}{\beta_1},\ \sqrt{\frac{2\alpha}{\beta_1}} \right) \tag{16} $$
where $\mathrm{hdthre}(x, T_0) = x$ if $|x| \ge T_0$ and $0$ if $|x| < T_0$ is the hard-thresholding function, and $T_0$ is a given positive threshold.
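A minimal NumPy sketch of this hard-thresholding update (our own illustration; the threshold follows the closed-form proximal step of the ℓ0 term):

```python
import numpy as np

def hdthre(x, t0):
    """Hard-thresholding: keep entries with magnitude >= t0, zero the rest."""
    return np.where(np.abs(x) >= t0, x, 0.0)

def update_P(S, Lambda1, alpha, beta1):
    """P-subproblem update: proximal step of alpha*||P||_0 at S + Lambda1/beta1."""
    return hdthre(S + Lambda1 / beta1, np.sqrt(2.0 * alpha / beta1))
```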
3. 𝓢 subproblem: This subproblem is a quadratic minimization problem.
$$
\begin{aligned}
\min_{\mathcal{S}} \; & \langle \Lambda_1^k, \mathcal{S} - \mathcal{P}^k \rangle + \frac{\beta_1}{2} \|\mathcal{S} - \mathcal{P}^k\|_F^2 + \langle \Lambda_3^k, \nabla_1(\mathcal{F} - \mathcal{S}) - \mathcal{Q}^k \rangle + \frac{\beta_3}{2} \|\nabla_1(\mathcal{F} - \mathcal{S}) - \mathcal{Q}^k\|_F^2 \\
& + \langle \Lambda_4^k, \nabla_3(\mathcal{F} - \mathcal{S}) - \mathcal{R}^k \rangle + \frac{\beta_4}{2} \|\nabla_3(\mathcal{F} - \mathcal{S}) - \mathcal{R}^k\|_F^2 + \langle \Lambda_5^k, \nabla_2 \mathcal{S} - \mathcal{O}^k \rangle + \frac{\beta_5}{2} \|\nabla_2 \mathcal{S} - \mathcal{O}^k\|_F^2
\end{aligned} \tag{17}
$$
Based on the 3-D fast Fourier transform (FFT), the explicit closed-form solution is as follows:
$$ \mathcal{S}^{k+1} = \mathrm{ifft}\!\left( \frac{\mathrm{fft}(\Pi_1)}{\Pi_2} \right) \tag{18} $$
where $\Pi_1 = \beta_1 \left( \mathcal{P}^k - \frac{\Lambda_1^k}{\beta_1} \right) + \beta_3 \nabla_1^T \left( \nabla_1 \mathcal{F} - \mathcal{Q}^k + \frac{\Lambda_3^k}{\beta_3} \right) + \beta_4 \nabla_3^T \left( \nabla_3 \mathcal{F} - \mathcal{R}^k + \frac{\Lambda_4^k}{\beta_4} \right) + \beta_5 \nabla_2^T \left( \mathcal{O}^k - \frac{\Lambda_5^k}{\beta_5} \right)$, $\Pi_2 = \beta_1 + \beta_3\, \mathrm{fft}(\nabla_1^T \nabla_1) + \beta_4\, \mathrm{fft}(\nabla_3^T \nabla_3) + \beta_5\, \mathrm{fft}(\nabla_2^T \nabla_2)$, fft denotes the 3-D FFT, ifft is the inverse 3-D FFT, and the division is element-wise.
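For readers implementing this step, the sketch below (our own, assuming circular boundary conditions so that the difference operators are diagonalized by the 3-D FFT) solves the linear system of the $\mathcal{S}$-subproblem; the argument Pi1 is assumed to be assembled as Π1 above.

```python
import numpy as np

def dtd_eigs(shape, axis):
    """Eigenvalues of D^T D for a circular first-difference operator along `axis`,
    shaped for broadcasting over the full tensor."""
    n = shape[axis]
    lam = np.abs(1.0 - np.exp(-2j * np.pi * np.arange(n) / n)) ** 2
    out_shape = [1] * len(shape)
    out_shape[axis] = n
    return lam.reshape(out_shape)

def solve_S(Pi1, betas, axes=(0, 1, 2)):
    """Solve (beta1*I + beta3*D1^T D1 + beta4*D3^T D3 + beta5*D2^T D2) S = Pi1 via 3-D FFT.

    betas = (beta1, beta3, beta4, beta5); axes = (across-stripe, along-stripe, spectral).
    """
    beta1, beta3, beta4, beta5 = betas
    ax1, ax2, ax3 = axes
    Pi2 = (beta1
           + beta3 * dtd_eigs(Pi1.shape, ax1)
           + beta4 * dtd_eigs(Pi1.shape, ax3)
           + beta5 * dtd_eigs(Pi1.shape, ax2))
    return np.real(np.fft.ifftn(np.fft.fftn(Pi1) / Pi2))
```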
4. 𝓥 subproblem: By Lemma 2, the subproblem of 𝓥 is
$$ \min_{\mathcal{V}} \; \langle \mathbf{1}, \mathbf{1} - \mathcal{V} \rangle + \langle \Lambda_2^k, \mathcal{V} \odot |\mathcal{O}^k| \rangle + \frac{\beta_2}{2} \|\mathcal{V} \odot |\mathcal{O}^k|\|_F^2 \tag{19} $$
which is equivalent to the following:
$$ \min_{\mathbf{0} \le \mathcal{V} \le \mathbf{1}} \; -\langle \mathcal{D}^k, \mathcal{V} \rangle + \frac{\beta_2}{2} \|\mathcal{V} \odot |\mathcal{O}^k|\|_F^2 \tag{20} $$
Based on the constraint $\mathbf{0} \le \mathcal{V} \le \mathbf{1}$, it still has a closed-form solution:
$$ \mathcal{V}^{k+1} = \min\!\left( \mathbf{1},\ \max\!\left( \mathbf{0},\ \frac{\mathcal{D}^k}{\beta_2\, |\mathcal{O}^k| \odot |\mathcal{O}^k|} \right) \right) \tag{21} $$
where $\mathcal{D}^k = \mathbf{1} - \Lambda_2^k \odot |\mathcal{O}^k|$ and all operations are element-wise.
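A direct NumPy transcription of this clipped update (our sketch; the small eps guarding against division by zero is our own addition):

```python
import numpy as np

def update_V(O, Lambda2, beta2, eps=1e-12):
    """V-subproblem: element-wise minimizer of (20), clipped to [0, 1]."""
    D = 1.0 - Lambda2 * np.abs(O)
    return np.clip(D / (beta2 * np.abs(O) ** 2 + eps), 0.0, 1.0)
```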
5. $\mathcal{R}$ subproblem: with the other variables fixed, $\mathcal{R}$ can be updated by
$$ \min_{\mathcal{R}} \; \gamma \|\mathcal{R}\|_1 + \langle \Lambda_4^k, \nabla_3(\mathcal{F} - \mathcal{S}^k) - \mathcal{R} \rangle + \frac{\beta_4}{2} \|\nabla_3(\mathcal{F} - \mathcal{S}^k) - \mathcal{R}\|_F^2 \tag{22} $$
By using the soft-threshold shrinkage operator [42], the solution is given by the following equation:
$$ \mathcal{R}^{k+1} = \mathrm{soft}\!\left( \nabla_3(\mathcal{F} - \mathcal{S}^k) + \frac{\Lambda_4^k}{\beta_4},\ \frac{\gamma}{\beta_4} \right) \tag{23} $$
where $\mathrm{soft}(a, x) = \mathrm{sign}(a) \cdot \max(0, |a| - x)$.
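A one-line NumPy version of the soft-thresholding operator (our own sketch), reusable for both the $\mathcal{R}$- and $\mathcal{Q}$-subproblems:

```python
import numpy as np

def soft(a, x):
    """Soft-thresholding (shrinkage): sign(a) * max(0, |a| - x), applied element-wise."""
    return np.sign(a) * np.maximum(0.0, np.abs(a) - x)
```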
6. $\mathcal{Q}$ subproblem: Similar to the optimization for variable $\mathcal{R}$, $\mathcal{Q}$ can be updated by solving the following problem:
$$ \min_{\mathcal{Q}} \; \lambda \|\mathcal{Q}\|_1 + \langle \Lambda_3^k, \nabla_1(\mathcal{F} - \mathcal{S}^k) - \mathcal{Q} \rangle + \frac{\beta_3}{2} \|\nabla_1(\mathcal{F} - \mathcal{S}^k) - \mathcal{Q}\|_F^2 \tag{24} $$
The closed-form solution is given by the following equation:
$$ \mathcal{Q}^{k+1} = \mathrm{soft}\!\left( \nabla_1(\mathcal{F} - \mathcal{S}^k) + \frac{\Lambda_3^k}{\beta_3},\ \frac{\lambda}{\beta_3} \right) \tag{25} $$
7. Multiplier subproblems: The Lagrange multipliers $\Lambda_i$ ($i = 1, 2, 3, 4, 5$) are updated as follows:
$$ \Lambda_1^{k+1} = \Lambda_1^k + \beta_1 \left( \mathcal{S}^k - \mathcal{P}^k \right) \tag{26} $$
$$ \Lambda_2^{k+1} = \Lambda_2^k + \beta_2 \left( \mathcal{V}^k \odot |\mathcal{O}^k| \right) \tag{27} $$
$$ \Lambda_3^{k+1} = \Lambda_3^k + \beta_3 \left( \nabla_1(\mathcal{F} - \mathcal{S}^k) - \mathcal{Q}^k \right) \tag{28} $$
$$ \Lambda_4^{k+1} = \Lambda_4^k + \beta_4 \left( \nabla_3(\mathcal{F} - \mathcal{S}^k) - \mathcal{R}^k \right) \tag{29} $$
$$ \Lambda_5^{k+1} = \Lambda_5^k + \beta_5 \left( \nabla_2 \mathcal{S}^k - \mathcal{O}^k \right) \tag{30} $$
All the subproblems have closed-form solutions, which guarantees the accuracy of the model. The GLTSA destriping algorithm is summarized in Algorithm 1 below. With regard to computational complexity, for an HSI of size h × v × b, the time complexity of GLTSA is proportional to O(hvb·log(hvb)).
Algorithm 1. The GLTSA destriping algorithm
Input: The observed stripe-degraded image 𝓕; the parameters α, γ, λ, β_i (i = 1, 2, 3, 4, 5); the maximum number of iterations N_max = 10^3; and the stopping tolerance tol = 10^−4.
Output: The stripe component 𝓢 (the destriped image is obtained as 𝓤 = 𝓕 − 𝓢).
Initialize:
(1) k = 0, 𝓥^0 = 1, 𝓢^0 = 𝓕;
While ‖(𝓕 − 𝓢^k) − (𝓕 − 𝓢^{k−1})‖_F / ‖𝓕 − 𝓢^k‖_F > tol and k < N_max do
(2) Update 𝓞 k + 1 by Equation (14);
(3) Update 𝓟 k + 1 by Equation (16);
(4) Update 𝓢 k + 1 using 3-D FFT by Equation (18);
(5) Update 𝓥 k + 1 by Equation (21);
(6) Update 𝓡^{k+1} by Equation (23);
(7) Update 𝓠 k + 1 by Equation (25);
(8) Update the multipliers Λ i k + 1 , i = 1, 2, 3, 4, 5 by Equations (26)–(30).
End While

4. Experiment Results

To verify the effectiveness of GLTSA on simulated and real data, we compared it with seven state-of-the-art methods. The first four are band-by-band matrix-based methods: the unidirectional total variation model (UTV) [14], the Directional L0 Sparse (DL0S) model [16], the wavelet Fourier adaptive filter (WFAF) [22], and statistical gain estimation (SGE) [23]. The other three are tensor-based or tensor-like methods: the anisotropic spectral–spatial TV (ASSTV) model [19], the Low-Rank Tensor Decomposition (LRTD) model [20], and the Low-Rank Multispectral Image Decomposition (LRMID) model [34]. The codes of WFAF, UTV, ASSTV, and LRMID can be downloaded from the website (http://www.escience.cn/people/changyi/codes.html). The code of DL0S can be downloaded from the website (http://www.escience.cn/people/dengliangjian/codes.html). The code of LRTD is available on the website (https://www.researchgate.net/profile/Yong_Chen96). In the simulated experiments, we added periodic and non-periodic stripes, as suggested in [34]. Before this, it is necessary to determine two indicators of the stripes: (1) the intensity, representing the stripes' absolute mean value (denoted by I), and (2) the ratio (or percentage) of the stripe area to the whole image (denoted by r). We selected several typical combinations of I and r in the simulated experiments, that is, I ∈ {20, 50, 100} and r ∈ {0.2, 0.4, 0.6, 0.8, 0–1}, where r = 0–1 indicates that the percentage of stripes varies from band to band. The test data are normalized to the range 0 to 1. In the following experiments, we manually tuned the parameters. The experiments were run in MATLAB (R2016a) on a desktop with 8 GB of RAM and an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz.

4.1. Simulated Data Experiments

In the simulated experiments, we generated periodic and non-periodic stripes under different intensities and ratios. We used the Hyperspectral Digital Imagery Collection Experiment (HYDICE) image of the Washington DC Mall (WDCM for short) as the ground-truth image, which is available online (https://engineering.purdue.edu/biehl/MultiSpec/hyperspectral.html). It has 191 spectral bands and a spatial size of 1208 × 307 pixels. Because processing the full image is computationally expensive, a sub-image of size 256 × 256 × 10 is extracted for the simulated experiment.
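For reference, the following sketch (our own illustration of the simulation protocol, not the authors' exact generator; the random placement, the random signs, and the 1/255 scaling of the intensity are assumptions) adds additive vertical stripes of mean absolute intensity I to a fraction r of the columns of each band.

```python
import numpy as np

def add_stripes(clean, intensity, ratio, periodic=False, seed=0):
    """Add simulated vertical stripes to a (rows, cols, bands) cube in [0, 1].

    intensity: mean absolute value of the stripes (on the 0-255 scale used for I).
    ratio:     fraction of columns corrupted in each band.
    periodic:  if True, corrupt roughly evenly spaced columns; otherwise pick them at random.
    """
    rng = np.random.default_rng(seed)
    degraded = clean.copy()
    rows, cols, bands = clean.shape
    n_striped = int(round(ratio * cols))
    for b in range(bands):
        if periodic:
            step = max(1, cols // max(1, n_striped))
            idx = np.arange(0, cols, step)[:n_striped]
        else:
            idx = rng.choice(cols, size=n_striped, replace=False)
        signs = rng.choice([-1.0, 1.0], size=n_striped)
        degraded[:, idx, b] += signs * (intensity / 255.0)
    return degraded

# Example: periodic stripes with I = 60 and r = 0.8 on a toy cube
clean = np.random.default_rng(1).random((256, 256, 10))
noisy = add_stripes(clean, intensity=60, ratio=0.8, periodic=True)
```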
(a) Visual effect comparison: To evaluate the destriping effectiveness of GLTSA, we simulated many degraded cases and selected one representative band of WDCM to show the destriping results. The original images are presented in Figure 5a and Figure 6a, and the images degraded by periodic stripes (I = 60, r = 0.8) and non-periodic stripes (I = 60, r = 0.4) are shown in Figure 5b and Figure 6b, respectively. The destriping results obtained by the different methods are illustrated in Figure 5c–j and Figure 6c–j, where significant differences in visual quality can be observed.
When the intensity and ratio are low, most of the compared methods obtain visual quality comparable to the proposed method. However, when the intensity and ratio are high, most of the compared methods produce poor results for both periodic and non-periodic stripes. For example, the resulting images are distorted, blurred, or over-smoothed in Figure 5c–e. LRTD obtains satisfactory results because it directly processes the data cube instead of working band by band. There are almost no obvious residual stripes in the DL0S, LRMID, and LRTD results. Even though LRMID, DL0S, and LRTD achieve visual quality similar to the proposed method, residual stripes remain visible in the enlarged part (yellow box), especially in the non-periodic case (see Figure 6g–i). The results show that GLTSA can effectively remove stripes while retaining more details and structure.
(b) Qualitative assessment: To further evaluate GLTSA, we adopt the mean cross-track profile (MCTP) to compare performance. The curves in Figure 7 and Figure 8 correspond to Figure 5 and Figure 6, respectively. The horizontal axis indicates the column number, while the vertical axis denotes the mean digital number value of each column. The existence of stripes results in rapid fluctuations (blue curve) in Figure 7a; the stripes are suppressed to varying degrees by the eight methods, as shown in Figure 7b–i. From Figure 7b–d, we can see that UTV, WFAF, and SGE fail to restore the original MCTP (green curve). There are still fluctuations in Figure 7e,f, indicating that the images estimated by these methods are distorted or blurred. Although the profiles produced by DL0S and LRTD are better than those of the other methods, the MCTP of GLTSA is closest to the original, consistent with the results shown in Figure 5. A similar behavior is shown in Figure 8.
(c) Quantitative comparison: To verify the robustness of GLTSA under different noise levels, we adopted five widely used quantitative indices, namely the mean peak signal-to-noise ratio (MPSNR) [43], mean structural similarity (MSSIM) [43], mean feature similarity (MFSIM) [44], mean spectral angle mapper (SAM) [45], and erreur relative globale adimensionnelle de synthèse (ERGAS; relative dimensionless global error in synthesis) [46]. These indices are defined as follows:
$$ \mathrm{MPSNR} = \frac{1}{b} \sum_{i=1}^{b} 10 \log_{10} \frac{255^2 \times n_i}{\|\hat{u}_i - u_i\|^2} $$
where b represents the number of spectral bands, $\hat{u}_i$ and $u_i$ are the restored image and the original clean image of the i-th band (of the same size), and $n_i$ is the total number of pixels in $u_i$.
$$ \mathrm{SSIM} = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}, \qquad \mathrm{MSSIM} = \frac{1}{b} \sum_{i=1}^{b} \mathrm{SSIM}_i $$
where $\mu_x$ and $\mu_y$ are the average values of images x and y; $\sigma_x^2$ and $\sigma_y^2$ are their variances; $\sigma_{xy}$ is their covariance; and $C_1$ and $C_2$ are constants.
$$ \mathrm{FSIM} = \frac{\sum_{x \in \Omega} S_L(x) \cdot PC_m(x)}{\sum_{x \in \Omega} PC_m(x)}, \qquad \mathrm{MFSIM} = \frac{1}{b} \sum_{i=1}^{b} \mathrm{FSIM}_i $$
where Ω denotes the whole spatial domain of the image, $S_L(x)$ is the similarity of images f1 and f2 at location x, and $PC_m(x)$ is the weight of $S_L(x)$ in the overall similarity between f1 and f2.
$$ \mathrm{SAM} = \arccos\!\left[ \frac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^2}\, \sqrt{\sum_{i=1}^{n} y_i^2}} \right] $$
where $S_1 = (x_1, x_2, \ldots, x_n)$ and $S_2 = (y_1, y_2, \ldots, y_n)$ represent the spectral vectors at the same location in the HSI.
$$ \mathrm{ERGAS} = 100 \frac{h}{v} \sqrt{\frac{1}{b} \sum_{i=1}^{b} \left( \frac{\mathrm{RMSE}(x_i)}{\mu(i)} \right)^2} $$
where h, v, and b are defined above, RMSE(x_i) denotes the root-mean-square error (RMSE) of band x_i, and μ(i) denotes the mean of the reference band y_i.
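A minimal NumPy sketch (ours) of three of these indices, directly following the definitions above; SSIM and FSIM are omitted because they are typically computed with standard image-quality toolboxes.

```python
import numpy as np

def mpsnr(ref, est, peak=255.0):
    """Mean PSNR over bands for (rows, cols, bands) cubes."""
    psnrs = []
    for i in range(ref.shape[2]):
        mse = np.mean((ref[:, :, i] - est[:, :, i]) ** 2)
        psnrs.append(10 * np.log10(peak ** 2 / mse))
    return np.mean(psnrs)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra."""
    dot = np.sum(ref * est, axis=2)
    denom = np.linalg.norm(ref, axis=2) * np.linalg.norm(est, axis=2) + eps
    return np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0)))

def ergas(ref, est, scale_ratio=1.0, eps=1e-12):
    """ERGAS; scale_ratio plays the role of h/v in the text's formula."""
    b = ref.shape[2]
    terms = [(np.sqrt(np.mean((ref[:, :, i] - est[:, :, i]) ** 2))
              / (np.mean(ref[:, :, i]) + eps)) ** 2 for i in range(b)]
    return 100.0 * scale_ratio * np.sqrt(np.mean(terms))
```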
MPSNR and MSSIM measure the fidelity of the spatial information, while the global spectral distortion is evaluated by SAM and ERGAS. Larger MPSNR and MSSIM values and smaller SAM and ERGAS values indicate a better destriping effect. Table 1 and Table 2 list the quantitative indices of all destriping methods for the simulated periodic and non-periodic stripes, respectively; the boldface digits represent the best results. GLTSA outperforms LRTD and DL0S in these indices except in a few cases. As the intensity and ratio of the stripes increase, the MPSNR and MSSIM values decrease, while GLTSA shows stable results, indicating robustness. The SAM and ERGAS values of GLTSA are the smallest among all compared methods, which indicates that the spectral distortion of GLTSA is the lowest. The quantitative evaluation results are in line with the visual effects.

4.2. Experiments on Real Data

In this experiment, we chose two real image datasets. The first is the Urban dataset from the Hyperspectral Digital Imagery Collection Experiment (HYDICE), some bands of which are degraded by non-periodic stripes. Its original size is 307 × 307 × 210, and a sub-image of dimensions 307 × 307 × 8 (from band 1 to band 8) is extracted for the experiment. The original data are available online (http://www.erdc.usace.army.mil/). The second dataset was collected by the EO-1 Hyperion L1R-level sensor over Australia on December 4, 2010. The size of the original image is 3858 × 256 × 242, and a sub-image of 400 × 225 × 10 (from band 54 to band 63) is used in our experiment; it is available online (http://remote-sensing.nci.org.au/). The data are normalized to [0, 1] before processing.
The results of all methods on the two real datasets are displayed in Figure 9 and Figure 10. From Figure 9b–d and Figure 10b–d, UTV, WFAF, and SGE distort the details and structure. Erroneous horizontal stripes are introduced by ASSTV, LRMID, and LRTD, as shown in Figure 9e,f,h and Figure 10e,f,h. DL0S can eliminate most of the stripes but still leaves some obvious stripes, as shown in Figure 9g and Figure 10g. Regardless of the image quality level, GLTSA removes the stripes effectively while retaining the details.
The MCTPs of Figure 9 and Figure 10 are shown in Figure 11 and Figure 12, respectively. The MCTP of the Urban data is shown in Figure 11a, while Figure 11b–i shows the MCTPs of the compared methods. The profiles of LRMID and LRTD have obvious fluctuations, which indicate residual stripes, and the profiles of WFAF and UTV show over-smoothing, which suggests that details are lost. Figure 11i indicates that GLTSA achieves the desired result while keeping more details.
Since ground-truth images are not available as reference data, we adopt two non-reference destriping indices: the inverse coefficient of variation (ICV) [38] and the mean relative deviation (MRD) [38], defined as follows [38]:
$$ \mathrm{ICV} = \frac{R_m}{R_s} $$
$$ \mathrm{MRD} = \frac{1}{MN} \sum_{i} \frac{|x_i - y_i|}{y_i} \times 100\% $$
where $R_m$ and $R_s$ represent the mean and standard deviation of the pixel values in the same selected region of the destriped image; M and N are the spatial dimensions of the image; and $x_i$ and $y_i$ are the pixel values of the destriped and original images, respectively.
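A small NumPy sketch (ours) of these two non-reference indices; the choice of the homogeneous window used for ICV is left to the caller.

```python
import numpy as np

def icv(destriped, window):
    """Inverse coefficient of variation over a homogeneous window (tuple of slices)."""
    region = destriped[window]
    return np.mean(region) / np.std(region)

def mrd(destriped, original, eps=1e-12):
    """Mean relative deviation (in percent) between destriped and original images."""
    return np.mean(np.abs(destriped - original) / (original + eps)) * 100.0
```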
Table 3 lists the results of the two non-reference indices, with the best results given in bold. GLTSA outperforms the compared methods in most cases, and where it does not, the difference from the best competing method is small. Overall, GLTSA outperforms the state-of-the-art methods used for comparison.

4.3. Analysis for Parameter Settings

At present, selecting the regularization parameters automatically or adaptively for different sensors remains difficult for many algorithms and will be investigated in our future work. In this paper, the distributional properties of sparsity, smoothness, and spatial–spectral continuity are used as constraint terms to estimate the stripe component. Our model contains eight regularization parameters in Equation (12): α, γ, λ, and β_i (i = 1, 2, 3, 4, 5). Since the noise intensity and proportion of stripes have many different combinations, we tuned the parameters empirically to obtain a more accurate estimation, choosing the mean PSNR (MPSNR) as the selection criterion, and conducted experiments on different simulated data. To show the influence of parameter tuning, we use the WDCM dataset as an example. The optimal values of the eight parameters lie in an eight-dimensional parameter space, and an exhaustive search would be time-consuming; to mitigate this problem, we adopt a greedy strategy and determine the optimal value of each parameter in turn. Experiment results show that, although the parameters obtained in this way are not globally optimal, the destriping results are still excellent.
The curve of MPSNR versus the regularization parameter λ is plotted in Figure 13a; the results show that, when λ increases from 0.6 to 1.2, the MPSNR improves markedly, and then decreases slightly when λ is further increased. That is, the highest MPSNR is reached near λ = 1.2. The curve of MPSNR versus γ is shown in Figure 13b; as γ increases, the highest MPSNR is reached near 1, showing a trend similar to that of λ. Since there are many degradation scenarios in our implementation, these parameters are not fixed; they vary in the range [1.1, 1.3] for λ and [0.8, 1] for γ. The relationship between MPSNR and α is depicted in Figure 13c. It is observed that MPSNR does not change much as α changes, which means that GLTSA is robust to α. In fact, images with severe stripes prefer larger γ and λ but a smaller α. The value of α is fixed within the range [10^−5, 10^−4].
The relationships between MPSNR and the parameters β1, β2, β3, β4, and β5 are shown in Figure 13d–h, respectively. From Figure 13d, MPSNR changes very little with β1 in the range [20, 100]. Similar to the analysis of α, it can be seen from Figure 13e,f that our method is robust to β2 and β3. Figure 13g,h shows that GLTSA obtains the best result when β4 = 16 and β5 = 0.05. Based on these evaluation results, the parameter values in this paper are empirically set to β1 = 10, β2 = 1000, β3 = 10, β4 = 16, and β5 = 0.05.

4.4. Convergence Analysis and Comparison of Running Time

Figure 14 shows the MPSNR and MSSIM values versus the number of iterations of the GLTSA solver. Figure 14a,b corresponds to the periodic case (r = 0.5, I = 100), and Figure 14c,d to the non-periodic case (r = 0.9, I = 60). As the iterations proceed, GLTSA converges, with the relative changes of MPSNR and MSSIM approaching zero.
For time complexity, the running times of all methods are given in Table 4. For the 10 bands of WDCM data, UTV, WFAF, and SGE are obviously faster than the other methods; however, according to the visual results (Figure 5 and Figure 6) and the index analysis (Table 1 and Table 2), their results are worse. In fact, it is hard to find the best trade-off between result quality and time complexity. Compared with the other methods, our method is not the fastest, but its results are the best.

5. Conclusions

This paper presents a global and local tensor sparse approximation model for HSI destriping. The characteristics of the image and the stripes are modeled simultaneously under a tensor-based sparse approximation framework. More specifically, the ℓ0-norms of the stripes and of their along-stripe gradient are used to characterize the global sparsity of the stripes and the local smoothness along the stripe direction, while the ℓ1-norms of the gradients of the underlying image in the spatial and spectral domains are used to describe its continuity. A tensor-based PADMM algorithm is adopted to handle the non-convex tensor optimization model. Experiment results demonstrate that GLTSA achieves better destriping performance and lower spectral distortion than existing methods.

Author Contributions

X.K. conducted the experiments, analyzed the results, and wrote the paper; Y.Z. conceived the experiments and is responsible for the research analysis; J.C.-W.C. and S.G.K. checked the language and grammar; J.X. collected and processed the original data. All co-authors helped revise the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61771391, 61371152), the Shenzhen Municipal Science and Technology Innovation Committee (JCYJ20170815162956949, JCYJ20180306171146740), the National Research Foundation of Korea (2016R1D1A1B01008522), the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (CX201917), and the natural science basic research plan in Shaanxi Province of China (No. 2018JM6056). We are grateful to the authors of the compared methods for providing the source codes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonlocal Low-Rank Regularized Tensor Decomposition for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5174–5189. [Google Scholar] [CrossRef]
  2. Sánchez, N.; González-Zamora, Á.; Martínez-Fernández, J. Integrated hyperspectral approach to global agricultural drought monitoring. Agric. For. Meteorol. 2018, 259, 141–153. [Google Scholar] [CrossRef]
  3. Chen, J.; Shao, Y.; Guo, H. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  4. Yi, C.; Zhao, Y.Q.; Yang, J. Joint Hyperspectral Superresolution and Unmixing With Interactive Feedback. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3823–3834. [Google Scholar] [CrossRef]
  5. Yang, J.; Zhao, Y.Q.; Chan, C.W. Coupled Sparse Denoising and Unmixing With Low-Rank Constraint for Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2015, 1–16. [Google Scholar] [CrossRef]
  6. Yi, C.; Zhao, Y.Q.; Chan, J.C.W. Spectral super-resolution for multispectral image based on spectral improvement strategy and spatial preservation strategy. IEEE Trans. Geosci. Remote Sens. 2019, 1–15. [Google Scholar] [CrossRef]
  7. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 2010, 43, 2367–2379. [Google Scholar] [CrossRef] [Green Version]
  8. Yang, J.; Zhao, Y.; Chan, J.C. Learning and Transferring Deep Joint Spectral–Spatial Features for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  9. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Hyper-Laplacian Regularized Nonlocal Low-rank Matrix Recovery for Hyperspectral Image Compressive Sensing Reconstruction. Inf. Sci. 2019, 501, 406–420. [Google Scholar] [CrossRef]
  10. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.-W. Nonlocal Tensor Sparse Representation and Low-Rank Regularization for Hyperspectral Image Compressive Sensing Reconstruction. Remote Sens. 2019, 11, 193. [Google Scholar] [CrossRef] [Green Version]
  11. Xue, J. Nonconvex tensor rank minimization and its applications to tensor recovery. Inf. Sci. 2019, 503, 109–128. [Google Scholar] [CrossRef]
  12. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.; Kong, S.G. Enhanced Sparsity Prior Model for Low-Rank Tensor Completion. IEEE Trans. Neural Netw. Learn. Syst. 2019, 12, 1–15. [Google Scholar] [CrossRef]
  13. Liu, X.; Lu, X.; Shen, H. Stripe Noise Separation and Removal in Remote Sensing Images by Consideration of the Global Sparsity and Local Variational Properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  14. Bouali, M.; Ladjal, S. Toward optimal destriping of MODIS data using a unidirectional variational model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935. [Google Scholar] [CrossRef]
  15. Chen, Y.; Huang, T.; Zhao, X.; Deng, L.; Huang, J. Stripes Removal of Remote Sensing Images by Total Variation Regularization and Group Sparsity Constraint. Remote Sens. 2017, 9, 559. [Google Scholar] [CrossRef] [Green Version]
  16. Hong-Xia, D.; Ting-Zhu, H.; Liang-Jian, D. Directional ℓ0 Sparse Modeling for Image Stripes Removal. Remote Sens. 2018, 10, 361. [Google Scholar] [CrossRef] [Green Version]
  17. Chen, J.; Lin, H.; Shao, Y.; Yang, L. Oblique striping removal in hyperspectral imagery based on wavelet transform. Int. J. Remote Sens. 2006, 27, 1717–1723. [Google Scholar] [CrossRef]
  18. Liu, X.; Lu, X.; Shen, H. Oblique Stripe Removal in Remote Sensing Images via Oriented Variation. 2018. Available online: https://arxiv.org/ftp/arxiv/papers/1809/1809.02043.pdf (accessed on 18 February 2020).
  19. Chang, Y.; Yan, L.; Fang, H.; Luo, C. Anisotropic Spectral-Spatial Total Variation Model for Multispectral Remote Sensing Image Destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866. [Google Scholar] [CrossRef]
  20. Chen, Y.; Huang, T.; Zhao, X. Destriping of Multispectral Remote Sensing Image Using Low-Rank Tensor Decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4950–4967. [Google Scholar] [CrossRef]
  21. Hao, W. A New Separable Two-dimensional Finite Impulse Response Filter Design with Sparse Coefficients. IEEE Trans. Circuits Syst. Regul. Papers 2017, 62, 2864–2873. [Google Scholar] [CrossRef]
  22. Münch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet-Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Carfantan, H.; Idier, J. Statistical linear destriping of satellite-based pushbroom-type images. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1860–1871. [Google Scholar] [CrossRef]
  24. Gadallah, F.L.; Csillag, G.; Smith, E.J.M. Destriping multisensory imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  25. Horn, B.K.P.; Woodham, R.J. Destriping LANDSAT MSS images by histogram modification. Comput. Graph. Image Process 1979, 10, 69–83. [Google Scholar] [CrossRef] [Green Version]
  26. Cao, B.; Du, Y.; Xu, D. An improved histogram matching algorithm for the removal of striping noise in optical hyperspectral imagery. Optik Int. J. Light Electron Opt. 2015, 126, 4723–4730. [Google Scholar] [CrossRef]
  27. Sun, L.X.; Neville, R.; Staenz, K.; White, H.P. Automatic destriping of Hyperion imagery based on spectral moment matching. Can. J. Remote Sens. 2008, 34, S68–S81. [Google Scholar] [CrossRef]
  28. Shen, H.F.; Jiang, W.; Zhang, H.Y.; Zhang, L.P. A piece-wise approach to removing the nonlinear and irregular stripes in MODIS data. Int. J. Remote Sens. 2014, 35, 44–53. [Google Scholar] [CrossRef]
  29. Rakwatin, P.; Takeuchi, W.; Yasuoka, Y. Stripes reduction in MODIS data by combining histogram matching with facet filter. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1844–1856. [Google Scholar] [CrossRef]
  30. Chang, W.W.; Guo, L.; Fu, Z.Y. A new destriping method of imaging spectrometer images. In Proceedings of the IEEE International Conference on Wavelet Analysis & Pattern Recognition, Beijing, China, 2–4 November 2007. [Google Scholar] [CrossRef]
  31. Chang, Y.; Fang, H.; Yan, L.; Liu, H. Robust destriping method with unidirectional total variation and framelet regularization. Opt. Express 2013, 21, 23307–23323. [Google Scholar] [CrossRef]
  32. Chang, Y.; Yan, L.X.; Fang, H.Z.; Liu, H. Simultaneous destriping and denoising for hyperspectral images with unidirectional total variation and sparse representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1051–1055. [Google Scholar] [CrossRef]
  33. Chen, Y.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Wang, M. Group sparsity based regularization model for hyperspectral image stripes removal. Neurocomputing 2017, 267, 95–106. [Google Scholar] [CrossRef]
  34. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Hyperspectral image stripes removal: From image decomposition perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  35. Chang, Y.; Yan, L.; Fang, H. Weighted Low-rank Tensor Recovery for Hyperspectral Image Restoration. arXiv 2017, arXiv:1709.00192. [Google Scholar]
  36. Cao, W.; Chang, Y.; Han, G. Destriping Remote Sensing Image via Low-Rank Approximation and Nonlocal Total Variation. IEEE Geosci. Remote Sens. Lett. 2018, 1–5. [Google Scholar] [CrossRef]
  37. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  38. Shen, H.F.; Zhang, L.P. A MAP-based algorithm for destriping and inpainting of remotely sensed images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502. [Google Scholar] [CrossRef]
  39. Yuan, G.Z.; Ghanem, B. L0TV: A new method for image restoration in the presence of impulse noise. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–10 June 2015. [Google Scholar]
  40. Blumensath, T.; Davies, M.E. Iterative thresholding for sparse approximation. J. Fourier Anal. Appl. 2008, 14, 629–654. [Google Scholar] [CrossRef] [Green Version]
  41. Jiao, Y.; Jin, B.; Lu, X. A primal dual active set with continuation algorithm for the L0-regularized optimization problem. Appl. Comput. Harmon. Anal. 2015, 39, 400–426. [Google Scholar] [CrossRef]
  42. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef] [Green Version]
  43. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image. Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  44. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [Green Version]
  45. Yuhas, R.H.; Goetz, F.H.A.; Boardman, J.W. Discrimination among semiarid landscape endmembers using the spectral angle mapper (SAM) algorithm. In Proceedings of the Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992. [Google Scholar]
  46. Kong, X.; Zhao, Y.; Xue, J.; Chan, J. Hyperspectral Image Denoising Using Global Weighted Tensor Norm Minimum and Nonlocal Low-Rank Approximation. Remote Sens. 2019, 11, 2281. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The framework of the proposed method.
Figure 1. The framework of the proposed method.
Remotesensing 12 00704 g001
Figure 2. Statistics of non-zero elements in 2 𝓢 . Vertical axis indicates the number of non-zero elements, while horizontal axis represents pixel value.
Figure 2. Statistics of non-zero elements in 2 𝓢 . Vertical axis indicates the number of non-zero elements, while horizontal axis represents pixel value.
Remotesensing 12 00704 g002
Figure 3. Directional characteristic of stripes in a hyperspectral image (HSI). (a) Original Washington DC Mall image used in the simulated data experiment. (b) Spatial horizontal gradient (mode-1). (c) Spatial vertical gradient (mode-2). (d) Spectral gradient (mode-3).
Figure 3. Directional characteristic of stripes in a hyperspectral image (HSI). (a) Original Washington DC Mall image used in the simulated data experiment. (b) Spatial horizontal gradient (mode-1). (c) Spatial vertical gradient (mode-2). (d) Spectral gradient (mode-3).
Remotesensing 12 00704 g003
Figure 4. Histograms of spectral gradient of (a) the observed image, (b) underlying clean image, and (c) the stripe of WDCM in Figure 3.
Figure 5. Destriping results of band 5 for the periodic stripes case (r = 0.8 and I = 60): (a) original, (b) degraded with periodic stripes, (c) UTV, (d) WFAF, (e) SGE, (f) ASSTV, (g) LRMID, (h) DL0S, (i) LRTD, and (j) GLTSA.
Figure 6. Destriping results of band 5 for the non-periodic stripes case (r = 0.4 and I = 60): (a) original, (b) degraded with non-periodic stripes (case I = 60, r = 0.4), (c) UTV, (d) WFAF, (e) SGE, (f) ASSTV, (g) LRMID, (h) DL0S, (i) LRTD, and (j) GLTSA.
Figure 7. (a) Degraded with periodic stripes (case I = 60, r = 0.8), (b) UTV, (c) WFAF, (d) SGE, (e) ASSTV, (f) LRMID, (g) DL0S, (h) LRTD, and (i) GLTSA.
Figure 8. (a) Degraded with non-periodic stripes (case I = 60, r = 0.4), (b) UTV, (c) WFAF, (d) SGE, (e) ASSTV, (f) LRMID, (g) DL0S, (h) LRTD, and (i) GLTSA.
Figure 9. Destriping results of the first real dataset. (a) Original; and destriping results by (b) UTV, (c) WFAF, (d) SGE, (e) ASSTV, (f) LRMID, (g) DL0S, (h) LRTD, and (i) GLTSA.
Figure 10. Destriping results of the second real dataset. (a) Original; and destriping results by (b) UTV, (c) WFAF, (d) SGE, (e) ASSTV, (f) LRMID, (g) DL0S, (h) LRTD, and (i) GLTSA.
Figure 11. Spatial mean cross-track profiles for the first real dataset: (a) original real image; and destriping results by (b) UTV, (c) WFAF, (d) SGE, (e) ASSTV, (f) LRMID, (g) DL0S, (h) LRTD, and (i) GLTSA.
Figure 12. Spatial mean cross-track profiles for the second real dataset: (a) real image; and destriping results by (b) UTV, (c) WFAF, (d) SGE, (e) ASSTV, (f) LRMID, (g) DL0S, (h) LRTD, and (i) GLTSA.
Figure 13. The MPSNR curves as functions of the regularization parameters. The relationship between MPSNR and the parameters (a) λ; (b) γ; (c) α; (d) β1; (e) β2; (f) β3; (g) β4; (h) β5.
Figure 14. MPSNR and MSSIM values versus the iteration number of GLTSA. (a) Change in the MPSNR value (periodic). (b) Change in the MSSIM value (periodic). (c) Change in the MPSNR value (non-periodic). (d) Change in the MSSIM value (non-periodic).
Table 1. The mean values of indices of WDCM with periodic stripe. For each stripe intensity I (20, 60, 100), the four values correspond to stripe ratio r = 0.2 / 0.4 / 0.8 / 0–1.

Index | Method | I = 20 (r = 0.2 / 0.4 / 0.8 / 0–1) | I = 60 (r = 0.2 / 0.4 / 0.8 / 0–1) | I = 100 (r = 0.2 / 0.4 / 0.8 / 0–1)
MPSNR | Noisy | 29.10 / 26.09 / 23.08 / 26.51 | 19.56 / 16.55 / 13.54 / 15.28 | 15.12 / 12.11 / 9.10 / 11.94
MPSNR | SGE [23] | 39.82 / 38.85 / 39.87 / 39.24 | 37.95 / 33.93 / 38.42 / 37.81 | 34.07 / 15.15 / 10.31 / 18.27
MPSNR | WFAF [22] | 31.58 / 31.36 / 31.51 / 31.36 | 31.08 / 29.59 / 30.43 / 30.52 | 30.29 / 27.39 / 28.87 / 29.01
MPSNR | UTV [14] | 25.85 / 25.85 / 25.85 / 25.84 | 25.85 / 25.84 / 25.85 / 25.85 | 25.85 / 25.84 / 25.85 / 25.85
MPSNR | ASSTV [19] | 36.70 / 36.70 / 36.34 / 36.57 | 36.70 / 36.71 / 34.64 / 36.17 | 36.72 / 36.73 / 32.07 / 36.18
MPSNR | LRMID [34] | 42.46 / 42.39 / 42.41 / 42.43 | 42.33 / 42.38 / 37.61 / 41.92 | 42.16 / 41.96 / 25.18 / 41.05
MPSNR | LRTD [20] | 49.86 / 48.85 / 42.10 / 47.86 | 49.81 / 48.95 / 41.76 / 48.98 | 49.81 / 49.01 / 41.66 / 48.67
MPSNR | DL0S [16] | 43.74 / 41.85 / 39.14 / 42.18 | 43.76 / 41.86 / 39.11 / 42.16 | 43.77 / 41.91 / 39.10 / 42.81
MPSNR | Ours | 50.90 / 47.95 / 42.55 / 48.28 | 54.22 / 47.97 / 42.33 / 50.19 | 53.79 / 50.12 / 42.32 / 50.94
MSSIM | Noisy | 0.8452 / 0.7429 / 0.5846 / 0.7752 | 0.5003 / 0.2883 / 0.1753 / 0.4235 | 0.3201 / 0.1185 / 0.0765 / 0.2894
MSSIM | SGE [23] | 0.9907 / 0.9880 / 0.9909 / 0.9906 | 0.9845 / 0.9602 / 0.9868 / 0.9764 | 0.9712 / 0.2951 / 0.1096 / 0.7698
MSSIM | WFAF [22] | 0.9483 / 0.9474 / 0.9484 / 0.9486 | 0.9469 / 0.9384 / 0.9456 / 0.9482 | 0.9441 / 0.9218 / 0.9398 / 0.9367
MSSIM | UTV [14] | 0.9022 / 0.9022 / 0.9022 / 0.9022 | 0.9022 / 0.9022 / 0.9022 / 0.9022 | 0.9021 / 0.9022 / 0.9021 / 0.9022
MSSIM | ASSTV [19] | 0.9833 / 0.9833 / 0.9812 / 0.9842 | 0.9833 / 0.9833 / 0.9680 / 0.9846 | 0.9834 / 0.9834 / 0.9329 / 0.9792
MSSIM | LRMID [34] | 0.9954 / 0.9954 / 0.9955 / 0.9955 | 0.9953 / 0.9954 / 0.9802 / 0.9911 | 0.9949 / 0.9945 / 0.6796 / 0.9843
MSSIM | LRTD [20] | 0.9990 / 0.9986 / 0.9941 / 0.9992 | 0.9990 / 0.9987 / 0.9937 / 0.9979 | 0.9990 / 0.9987 / 0.9936 / 0.9989
MSSIM | DL0S [16] | 0.9968 / 0.9951 / 0.9918 / 0.9978 | 0.9968 / 0.9951 / 0.9918 / 0.9952 | 0.9968 / 0.9951 / 0.9918 / 0.9966
MSSIM | Ours | 0.9988 / 0.9967 / 0.9942 / 0.9941 | 0.9995 / 0.9985 / 0.9948 / 0.9989 | 0.9995 / 0.9989 / 0.9948 / 0.9996
MFSIM | Noisy | 0.9271 / 0.8560 / 0.8662 / 0.9158 | 0.8157 / 0.6427 / 0.6940 / 0.8319 | 0.7480 / 0.5372 / 0.6032 / 0.7384
MFSIM | SGE [23] | 0.9927 / 0.9907 / 0.9930 / 0.9928 | 0.9876 / 0.9714 / 0.9905 / 0.9901 | 0.9756 / 0.6516 / 0.6304 / 0.8542
MFSIM | WFAF [22] | 0.9626 / 0.9617 / 0.9623 / 0.9681 | 0.9605 / 0.9540 / 0.9583 / 0.9598 | 0.9571 / 0.9418 / 0.9522 / 0.9568
MFSIM | UTV [14] | 0.9422 / 0.9422 / 0.9422 / 0.9422 | 0.9422 / 0.9422 / 0.9422 / 0.9422 | 0.9422 / 0.9422 / 0.9422 / 0.9422
MFSIM | ASSTV [19] | 0.9822 / 0.9822 / 0.9819 / 0.9823 | 0.9822 / 0.9822 / 0.9799 / 0.9831 | 0.9822 / 0.9823 / 0.9778 / 0.9818
MFSIM | LRMID [34] | 0.9965 / 0.9965 / 0.9966 / 0.9966 | 0.9963 / 0.9964 / 0.9932 / 0.9961 | 0.9958 / 0.9952 / 0.9820 / 0.9967
MFSIM | LRTD [20] | 0.9994 / 0.9991 / 0.9969 / 0.9989 | 0.9993 / 0.9991 / 0.9966 / 0.9967 | 0.9993 / 0.9991 / 0.9965 / 0.9989
MFSIM | DL0S [16] | 0.9969 / 0.9956 / 0.9933 / 0.9958 | 0.9969 / 0.9956 / 0.9932 / 0.9958 | 0.9969 / 0.9956 / 0.9932 / 0.9959
MFSIM | Ours | 0.9989 / 0.9967 / 0.9955 / 0.9988 | 0.9996 / 0.9980 / 0.9954 / 0.9989 | 0.9995 / 0.9988 / 0.9954 / 0.9992
SAM | Noisy | 0.177 / 0.356 / 0.452 / 0.378 | 0.387 / 0.776 / 0.931 / 0.814 | 0.494 / 0.991 / 1.139 / 1.024
SAM | SGE [23] | 0.020 / 0.043 / 0.015 / 0.031 | 0.057 / 0.128 / 0.040 / 0.098 | 0.100 / 0.840 / 1.090 / 0.911
SAM | WFAF [22] | 0.025 / 0.048 / 0.030 / 0.038 | 0.053 / 0.122 / 0.065 / 0.079 | 0.080 / 0.186 / 0.097 / 0.137
SAM | UTV [14] | 0.049 / 0.049 / 0.049 / 0.049 | 0.050 / 0.050 / 0.051 / 0.051 | 0.051 / 0.052 / 0.052 / 0.052
SAM | ASSTV [19] | 0.035 / 0.035 / 0.035 / 0.035 | 0.035 / 0.035 / 0.035 / 0.035 | 0.035 / 0.035 / 0.036 / 0.035
SAM | LRMID [34] | 0.038 / 0.039 / 0.039 / 0.039 | 0.040 / 0.040 / 0.050 / 0.046 | 0.042 / 0.045 / 0.169 / 0.064
SAM | LRTD [20] | 0.023 / 0.026 / 0.033 / 0.029 | 0.023 / 0.026 / 0.033 / 0.028 | 0.023 / 0.026 / 0.033 / 0.029
SAM | DL0S [16] | 0.011 / 0.021 / 0.028 / 0.023 | 0.011 / 0.021 / 0.028 / 0.019 | 0.011 / 0.021 / 0.028 / 0.018
SAM | Ours | 0.013 / 0.027 / 0.030 / 0.025 | 0.006 / 0.017 / 0.031 / 0.013 | 0.007 / 0.012 / 0.031 / 0.014
ERGAS | Noisy | 131.10 / 185.40 / 262.20 / 241.61 | 393.29 / 556.19 / 786.59 / 581.47 | 655.48 / 926.98 / 1311.0 / 986.87
ERGAS | SGE [23] | 38.16 / 42.68 / 37.96 / 39.58 | 47.46 / 75.30 / 44.83 / 56.24 | 77.76 / 653.70 / 1141.4 / 878.94
ERGAS | WFAF [22] | 98.71 / 101.21 / 99.50 / 100.98 | 104.69 / 124.43 / 112.74 / 118.36 | 115.71 / 161.12 / 135.79 / 152.68
ERGAS | UTV [14] | 191.27 / 191.27 / 191.27 / 191.27 | 191.28 / 191.33 / 191.29 / 191.30 | 191.30 / 191.44 / 191.33 / 191.38
ERGAS | ASSTV [19] | 56.22 / 56.20 / 58.38 / 56.28 | 56.19 / 56.14 / 70.14 / 59.34 | 56.09 / 56.00 / 93.45 / 65.97
ERGAS | LRMID [34] | 30.80 / 31.00 / 31.00 / 30.91 | 31.37 / 31.08 / 49.89 / 35.94 | 31.91 / 32.16 / 205.92 / 49.78
ERGAS | LRTD [20] | 17.11 / 19.05 / 31.03 / 24.61 | 17.15 / 19.04 / 32.06 / 26.25 | 17.15 / 19.03 / 32.38 / 29.37
ERGAS | DL0S [16] | 24.68 / 30.42 / 41.73 / 32.84 | 24.62 / 30.34 / 41.98 / 36.27 | 24.56 / 30.18 / 42.00 / 38.57
ERGAS | Ours | 10.83 / 18.95 / 29.04 / 21.06 | 7.46 / 20.51 / 30.66 / 23.58 | 7.89 / 12.42 / 30.69 / 26.75
The boldface digits in Table 1 represent the best results.
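For context, the spatial indices reported above (MPSNR and MSSIM) are band-wise averages of the standard PSNR and SSIM measures. The snippet below is a minimal sketch of how such band-wise averages can be computed for an HSI cube stored as a NumPy array of shape (rows, cols, bands); it relies on scikit-image's structural_similarity, and the peak value and parameter settings are assumptions rather than the authors' exact implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mpsnr(ref, est, peak=1.0):
    """Mean PSNR over all bands of an HSI cube of shape (rows, cols, bands)."""
    psnr_per_band = []
    for b in range(ref.shape[2]):
        mse = np.mean((ref[:, :, b] - est[:, :, b]) ** 2)
        psnr_per_band.append(10.0 * np.log10(peak ** 2 / max(mse, 1e-12)))
    return float(np.mean(psnr_per_band))

def mssim(ref, est, peak=1.0):
    """Mean SSIM over all bands, using the standard SSIM implementation."""
    ssim_per_band = [
        structural_similarity(ref[:, :, b], est[:, :, b], data_range=peak)
        for b in range(ref.shape[2])
    ]
    return float(np.mean(ssim_per_band))
```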
Table 2. The mean values of indices of WDCM with non-periodic stripe. For each stripe intensity I (20, 60, 100), the four values correspond to stripe ratio r = 0.2 / 0.4 / 0.8 / 0–1.

Index | Method | I = 20 (r = 0.2 / 0.4 / 0.8 / 0–1) | I = 60 (r = 0.2 / 0.4 / 0.8 / 0–1) | I = 100 (r = 0.2 / 0.4 / 0.8 / 0–1)
MPSNR | Noisy | 29.12 / 26.09 / 23.08 / 27.13 | 19.58 / 16.55 / 13.54 / 17.34 | 15.14 / 12.12 / 9.10 / 12.85
MPSNR | SGE [23] | 37.05 / 36.28 / 34.01 / 36.89 | 30.18 / 28.47 / 23.41 / 29.19 | 22.59 / 16.29 / 9.97 / 17.92
MPSNR | WFAF [22] | 31.12 / 30.78 / 30.19 / 30.85 | 28.41 / 26.97 / 25.07 / 27.38 | 25.63 / 23.63 / 21.30 / 24.11
MPSNR | UTV [14] | 25.85 / 25.84 / 25.82 / 25.84 | 25.84 / 25.74 / 25.62 / 25.78 | 25.83 / 25.55 / 25.29 / 25.68
MPSNR | ASSTV [19] | 35.83 / 35.97 / 34.22 / 36.12 | 32.15 / 31.37 / 26.86 / 31.62 | 28.48 / 26.83 / 21.84 / 27.38
MPSNR | LRMID [34] | 39.72 / 39.66 / 35.95 / 39.67 | 32.47 / 30.33 / 25.26 / 31.95 | 27.22 / 24.28 / 19.51 / 25.91
MPSNR | LRTD [20] | 47.88 / 43.96 / 30.50 / 44.98 | 47.78 / 42.34 / 23.36 / 44.68 | 47.85 / 40.23 / 19.56 / 41.38
MPSNR | DL0S [16] | 42.88 / 41.30 / 31.45 / 42.88 | 42.87 / 40.94 / 26.98 / 41.67 | 42.11 / 37.68 / 23.13 / 40.62
MPSNR | Ours | 48.59 / 43.52 / 36.44 / 46.25 | 53.01 / 42.81 / 31.09 / 47.39 | 52.52 / 46.90 / 26.24 / 47.19
MSSIM | Noisy | 0.8599 / 0.7598 / 0.6133 / 0.8367 | 0.5333 / 0.3360 / 0.1664 / 0.4168 | 0.3684 / 0.1716 / 0.0552 / 0.2168
MSSIM | SGE [23] | 0.9871 / 0.9858 / 0.9790 / 0.9862 | 0.9524 / 0.9334 / 0.8209 / 0.9258 | 0.8191 / 0.5056 / 0.0767 / 0.6729
MSSIM | WFAF [22] | 0.9457 / 0.9443 / 0.9425 / 0.9455 | 0.9216 / 0.9088 / 0.8864 / 0.9094 | 0.8786 / 0.8429 / 0.7849 / 0.8581
MSSIM | UTV [14] | 0.9022 / 0.9022 / 0.9020 / 0.9021 | 0.9022 / 0.9010 / 0.8994 / 0.9011 | 0.9020 / 0.8986 / 0.8936 / 0.8988
MSSIM | ASSTV [19] | 0.9801 / 0.9805 / 0.9719 / 0.9779 | 0.9555 / 0.9432 / 0.8569 / 0.9412 | 0.8965 / 0.8448 / 0.6567 / 0.8567
MSSIM | LRMID [34] | 0.9923 / 0.9918 / 0.9808 / 0.9919 | 0.9552 / 0.9174 / 0.7831 / 0.9239 | 0.8510 / 0.7309 / 0.4985 / 0.7862
MSSIM | LRTD [20] | 0.9985 / 0.9962 / 0.9373 / 0.9968 | 0.9985 / 0.9948 / 0.8043 / 0.9913 | 0.9985 / 0.9922 / 0.6691 / 0.9936
MSSIM | DL0S [16] | 0.9963 / 0.9946 / 0.9736 / 0.9949 | 0.9962 / 0.9945 / 0.9402 / 0.9938 | 0.9952 / 0.9909 / 0.8781 / 0.9917
MSSIM | Ours | 0.9991 / 0.9961 / 0.9884 / 0.9989 | 0.9994 / 0.9958 / 0.9733 / 0.9987 | 0.9993 / 0.9979 / 0.9299 / 0.9981
MFSIM | Noisy | 0.9317 / 0.8819 / 0.8275 / 0.8897 | 0.8104 / 0.7021 / 0.6324 / 0.7168 | 0.7421 / 0.6159 / 0.5437 / 0.6348
MFSIM | SGE [23] | 0.9912 / 0.9906 / 0.9867 / 0.9921 | 0.9771 / 0.9667 / 0.9065 / 0.9698 | 0.9066 / 0.7551 / 0.5874 / 0.7965
MFSIM | WFAF [22] | 0.9616 / 0.9611 / 0.9597 / 0.9612 | 0.9542 / 0.9488 / 0.9369 / 0.9514 | 0.9421 / 0.9285 / 0.9043 / 0.9341
MFSIM | UTV [14] | 0.9422 / 0.9422 / 0.9422 / 0.9422 | 0.9422 / 0.9422 / 0.9422 / 0.9422 | 0.9422 / 0.9422 / 0.9422 / 0.9422
MFSIM | ASSTV [19] | 0.9800 / 0.9803 / 0.9759 / 0.9796 | 0.9664 / 0.9585 / 0.9111 / 0.9559 | 0.9359 / 0.9082 / 0.8165 / 0.9167
MFSIM | LRMID [34] | 0.9946 / 0.9945 / 0.9873 / 0.9937 | 0.9705 / 0.6510 / 0.8784 / 0.9367 | 0.9144 / 0.8628 / 0.7514 / 0.8934
MFSIM | LRTD [20] | 0.9988 / 0.9969 / 0.9619 / 0.9971 | 0.9988 / 0.9955 / 0.8802 / 0.9959 | 0.9988 / 0.9935 / 0.8022 / 0.9945
MFSIM | DL0S [16] | 0.9966 / 0.9955 / 0.9848 / 0.9962 | 0.9966 / 0.9954 / 0.9751 / 0.9956 | 0.9963 / 0.9942 / 0.9533 / 0.9939
MFSIM | Ours | 0.9993 / 0.9965 / 0.9918 / 0.9989 | 0.9995 / 0.9962 / 0.9884 / 0.9981 | 0.9993 / 0.9981 / 0.9772 / 0.9979
SAM | Noisy | 0.187 / 0.309 / 0.440 / 0.338 | 0.447 / 0.706 / 0.921 / 0.769 | 0.593 / 0.912 / 1.129 / 0.927
SAM | SGE [23] | 0.063 / 0.081 / 0.107 / 0.089 | 0.187 / 0.237 / 0.400 / 0.243 | 0.387 / 0.732 / 1.100 / 0.933
SAM | WFAF [22] | 0.066 / 0.086 / 0.100 / 0.088 | 0.183 / 0.236 / 0.269 / 0.247 | 0.290 / 0.365 / 0.419 / 0.387
SAM | UTV [14] | 0.050 / 0.054 / 0.058 / 0.056 | 0.056 / 0.087 / 0.111 / 0.091 | 0.066 / 0.126 / 0.166 / 0.134
SAM | ASSTV [19] | 0.035 / 0.035 / 0.035 / 0.035 | 0.037 / 0.035 / 0.081 / 0.051 | 0.068 / 0.068 / 0.231 / 0.098
SAM | LRMID [34] | 0.040 / 0.039 / 0.041 / 0.040 | 0.053 / 0.060 / 0.153 / 0.073 | 0.122 / 0.184 / 0.387 / 0.214
SAM | LRTD [20] | 0.023 / 0.027 / 0.055 / 0.037 | 0.023 / 0.027 / 0.188 / 0.068 | 0.023 / 0.026 / 0.300 / 0.187
SAM | DL0S [16] | 0.016 / 0.031 / 0.139 / 0.069 | 0.016 / 0.034 / 0.236 / 0.088 | 0.024 / 0.061 / 0.362 / 0.195
SAM | Ours | 0.015 / 0.027 / 0.057 / 0.028 | 0.007 / 0.015 / 0.148 / 0.041 | 0.008 / 0.017 / 0.275 / 0.086
ERGAS | Noisy | 131.20 / 185.49 / 262.28 / 188.39 | 393.61 / 556.40 / 786.85 / 586.24 | 656.01 / 927.46 / 1311.4 / 951.44
ERGAS | SGE [23] | 52.87 / 57.43 / 74.51 / 60.12 | 117.20 / 141.00 / 261.77 / 156.85 | 282.73 / 581.81 / 1187.8 / 786.75
ERGAS | WFAF [22] | 103.96 / 108.04 / 115.90 / 109.87 | 143.46 / 167.88 / 210.53 / 189.67 | 200.30 / 247.29 / 325.82 / 258.97
ERGAS | UTV [14] | 191.23 / 191.52 / 191.81 / 191.64 | 191.35 / 193.56 / 196.04 / 194.35 | 191.69 / 197.60 / 204.23 / 197.64
ERGAS | ASSTV [19] | 61.60 / 60.54 / 73.40 / 62.88 | 92.64 / 101.09 / 169.59 / 121.07 | 140.85 / 170.36 / 302.18 / 188.69
ERGAS | LRMID [34] | 39.81 / 40.06 / 59.90 / 42.81 | 89.02 / 113.91 / 203.93 / 128.94 | 162.79 / 228.39 / 395.39 / 235.67
ERGAS | LRTD [20] | 18.85 / 25.83 / 111.55 / 61.86 | 18.94 / 29.99 / 253.94 / 94.38 | 18.89 / 37.27 / 393.12 / 125.74
ERGAS | DL0S [16] | 27.11 / 32.40 / 101.17 / 58.94 | 27.20 / 33.81 / 169.91 / 87.14 | 31.19 / 51.03 / 264.12 / 119.68
ERGAS | Ours | 15.07 / 25.11 / 56.66 / 30.45 | 9.36 / 28.16 / 105.40 / 56.97 | 9.81 / 17.96 / 188.68 / 86.47
The boldface digits in Table 2 represent the best results.
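The spectral indices in Tables 1 and 2 (SAM and ERGAS) compare the restored and reference cubes along the spectral dimension. Below is a minimal sketch under the standard definitions (mean spectral angle over all pixels; ERGAS as 100 times the square root of the band-averaged squared relative RMSE); the exact scaling conventions used by the authors may differ.

```python
import numpy as np

def mean_sam(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between reference and estimated pixel spectra."""
    r = ref.reshape(-1, ref.shape[2]).astype(np.float64)
    e = est.reshape(-1, est.shape[2]).astype(np.float64)
    cos = np.sum(r * e, axis=1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(ref, est, ratio=1.0, eps=1e-12):
    """ERGAS = 100 * ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2)."""
    terms = []
    for b in range(ref.shape[2]):
        rmse = np.sqrt(np.mean((ref[:, :, b] - est[:, :, b]) ** 2))
        terms.append((rmse / (np.mean(ref[:, :, b]) + eps)) ** 2)
    return float(100.0 * ratio * np.sqrt(np.mean(terms)))
```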
Table 3. The MICV and MMRD index results of the real data experiments.

Dataset | Index | UTV | WFAF | SGE | ASSTV | LRMID | DL0S | LRTD | GLTSA
(1) | MICV | 28.176 | 60.871 | 58.947 | 90.526 | 98.618 | 118.856 | 110.054 | 121.248
(1) | MMRD (%) | 0.2013 | 0.1504 | 0.1706 | 0.1206 | 0.0951 | 0.0538 | 0.0864 | 0.0547
(2) | MICV | 16.315 | 31.056 | 30.842 | 36.118 | 46.211 | 53.137 | 53.204 | 52.176
(2) | MMRD (%) | 0.0913 | 0.0844 | 0.0901 | 0.0628 | 0.0251 | 0.0198 | 0.0365 | 0.0139
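MICV and MMRD in Table 3 are no-reference indices computed on the real data, where no clean ground truth is available. Assuming the common definitions (ICV as the ratio of mean to standard deviation within homogeneous windows, higher is better; MRD as the mean relative deviation between the destriped and original images over stripe-free pixels, lower is better), a per-window/per-region evaluation might look like the sketch below; the window placement and averaging strategy used by the authors are not specified here.

```python
import numpy as np

def icv(band, top_left, size=10):
    """Inverse coefficient of variation (mean / std) within one homogeneous window."""
    r, c = top_left
    window = band[r:r + size, c:c + size].astype(np.float64)
    return float(np.mean(window) / (np.std(window) + 1e-12))

def mrd_percent(original, destriped, stripe_free_mask):
    """Mean relative deviation (%) over pixels marked stripe-free by a boolean mask."""
    o = original[stripe_free_mask].astype(np.float64)
    d = destriped[stripe_free_mask].astype(np.float64)
    return float(100.0 * np.mean(np.abs(d - o) / (np.abs(o) + 1e-12)))

# MICV / MMRD would then be averages of icv / mrd_percent over the chosen windows, regions, and bands.
```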
Table 4. Comparison of running time on the WDCM dataset with periodic and non-periodic stripes (in seconds).

Stripe Type | UTV | WFAF | SGE | ASSTV | LRMID | DL0S | LRTD | GLTSA
Periodic | 10.94 | 8.89 | 3.99 | 92.14 | 209.12 | 179.85 | 785.98 | 182.33
Non-periodic | 11.23 | 9.18 | 4.37 | 95.33 | 214.51 | 183.64 | 789.46 | 186.75
