
Dictionary Learning-Cooperated Matrix Decomposition for Hyperspectral Target Detection

1 Electronic Information School, Wuhan University, Wuhan 430072, China
2 Department of Precision Instrument, Tsinghua University, Beijing 100084, China
3 Hubei Branch of The National Internet Emergency Center of China, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(17), 4369; https://doi.org/10.3390/rs14174369
Submission received: 4 August 2022 / Revised: 29 August 2022 / Accepted: 31 August 2022 / Published: 2 September 2022
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing: Methods and Applications)
Figure 1. Process of the LRaSMD algorithm. $\mathbf{L}$ is the low-rank part corresponding to the background, $\mathbf{S}$ is the sparse part corresponding to the target, and $\mathbf{N}$ stands for the noise matrix.
Figure 2. Flowchart of the proposed algorithm.
Figure 3. Pseudocolor images and the corresponding ground truth of the five HSI datasets. (a) SanDiego-I. (b) SanDiego-II. (c) LosAngeles-I. (d) LosAngeles-II. (e) TexasCoast.
Figure 4. Low-rank part and sparse part with different settings of the tradeoff parameter $\lambda$. (a) Results when $\lambda$ is set properly. (b) Results when $\lambda$ is set improperly.
Figure 5. AUC values with respect to the positive tradeoff parameter $\lambda$. (a) San Diego-I; (b) San Diego-II; (c) Los Angeles-I; (d) Los Angeles-II; (e) Texas Coast.
Figure 6. Hyperspectral target detection maps on the five datasets. (a) SanDiego-I. (b) SanDiego-II. (c) LosAngeles-I. (d) LosAngeles-II. (e) TexasCoast.
Figure 7. ROC curves of ($P_d$, $P_f$) on the five datasets. (a) SanDiego-I. (b) SanDiego-II. (c) LosAngeles-I. (d) LosAngeles-II. (e) TexasCoast.

Abstract

Hyperspectral target detection is one of the most challenging tasks in remote sensing due to limited spectral information. Many algorithms based on matrix decomposition (MD) have been proposed to promote the separation of the background and targets, but they suffer from two problems: (1) targets are detected with the criterion of reconstruction residuals, and the imbalanced number of background and target atoms in the union dictionary may lead to misclassification of targets; (2) the detection results are susceptible to the quality of the a priori target spectra, thus yielding inferior performance because of the inevitable spectral variability. In this paper, we propose a matrix decomposition-based detector named dictionary learning-cooperated matrix decomposition (DLcMD) for hyperspectral target detection. The procedure of DLcMD is two-fold. First, low rank and sparse matrix decomposition (LRaSMD) is exploited to separate targets from the background due to its insensitivity to the imbalanced number of background and target atoms, which can reduce the misclassification of targets. Inspired by dictionary learning, the target atoms are updated during LRaSMD to alleviate the impact of spectral variability. After that, a binary hypothesis model specifically designed for LRaSMD is proposed, and a generalized likelihood ratio test (GLRT) is performed to obtain the final detection result. Experimental results on five datasets have shown the reliability of the proposed method. Especially on the Los Angeles-II dataset, the area under the curve (AUC) value is nearly 16% higher than the average value of the other seven detectors, which reveals the superiority of DLcMD in hyperspectral target detection.

1. Introduction

Hyperspectral imagery (HSI) is a three-dimensional cube stacked from hundreds of narrow-band images, with an electromagnetic spectrum covering the visible to far-infrared range. Hyperspectral images possess higher spectral resolution than infrared and multispectral images, thus providing more precise information about the objects [1,2]. Given these advantages, HSI has naturally become one of the most essential tools for detecting and recognizing ground surface materials in the remote sensing domain. After decades of development, HSIs are now composed of hundreds or even thousands of bands and show their superiority in classification and target detection [3,4]. HSI attracts more and more attention and is now a hotspot in remote sensing, having been widely applied in various fields, including environmental monitoring, mineral exploration, target detection, and intelligent agriculture [5,6,7,8,9,10,11].
Hyperspectral target detection aims at locating targets in the scene with very little or even no spectral information about the targets. As an important branch of target detection, anomaly detection tries to distinguish objects that differ from the majority of the image in an unsupervised manner [12,13,14,15]. On the contrary, if the spectrum of the target is available, the goal is to separate a specific kind of object from the background. Target detection in HSIs can be viewed as a binary classification problem, where pixels are labeled as background or target by comparing their spectral characteristics with the given target spectra. In practice, there is no prior knowledge about the background, and only a few, or even a single, target spectrum is available; this makes target detection a popular but challenging task in hyperspectral image processing [16,17,18]. A variety of algorithms based on statistical theory or binary hypothesis models have been proposed to exploit the implicit information in HSIs. Constrained energy minimization (CEM) [19,20] and the target-constrained interference-minimized filter (TCIMF) [21] design a specific finite impulse response (FIR) filter that minimizes the filter's output energy while preserving the response of target pixels. The adaptive coherence/cosine estimator (ACE) [22] and the spectral matched filter (SMF) [23] assume that targets share the same covariance matrix with the background but have different mean values.
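As a concrete illustration of this filter-design idea, the following minimal sketch builds the standard closed-form CEM filter $\mathbf{w} = \mathbf{R}^{-1}\mathbf{d} / (\mathbf{d}^T\mathbf{R}^{-1}\mathbf{d})$ with numpy; the ridge term and the function name are assumptions added for this illustration and are not taken from any of the cited implementations.

```python
import numpy as np

def cem_detector(X, d, eps=1e-6):
    """Constrained energy minimization (CEM) filter in its standard closed form.

    X : (b, n) reshaped HSI, one spectrum per column.
    d : (b,) a priori target spectrum.
    Returns an (n,) detection score per pixel.
    """
    b, n = X.shape
    R = X @ X.T / n                  # sample correlation matrix of the scene
    R += eps * np.eye(b)             # small ridge for numerical stability (assumption)
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)        # minimizes output energy subject to w^T d = 1
    return w @ X
```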
Recent years have witnessed the development of matrix decomposition (MD). MD-based algorithms do not impose any distribution assumptions on the data or noise; they decompose the original input data into several parts by gradually minimizing an objective function until convergence. The original data are usually used for decomposition directly, without preprocessing, so these methods can be applied to a variety of datasets. Well-established optimization theory is exploited to ensure that the physical meaning of each part is maintained during the procedure. Owing to these advantages, MD-based algorithms have been widely employed in many fields, including but not limited to computer vision, recommendation systems, and natural language processing. In terms of hyperspectral target detection, many MD-based algorithms have been proposed to better distinguish targets from the background as well. Take sparse representation (SR) for instance: each pixel can be linearly represented by a few atoms in an over-complete dictionary. The sparsity-based target detector (STD) [24] exploits training samples from both the background and targets to represent each pixel in the scene, and the class label of each pixel is determined by the recovery residual. Wu et al. [25] model background pixels with low-rank and sparse representation and targets with sparse representation; the detection model is further incorporated into a multitask learning framework to reduce spectral redundancy. Considering that $\ell_1$-norm minimization requires strong incoherence among atoms in the dictionary, Ref. [26] utilizes the $\ell_p$ norm in SR to obtain a more accurate approximation, and a thresholding method is proposed to solve the resulting non-convex coding problem. Zhu et al. [27] propose a binary-class SR model to separate targets from the background, in which a new target dictionary construction method based on a given target spectrum is derived to obtain more sufficient target samples. Nevertheless, MD-based detectors still face the following challenging problems.
(1)
The imbalanced amount of training samples between targets and the background often leads to the misclassification of targets. The primary mechanism of MD-based methods such as SR is to classify each sample by its reconstruction residual. In conventional classification settings, the classes contribute comparable numbers of atoms, and each sample is represented in a competing manner. In hyperspectral target detection, however, the ratio between the numbers of background and target atoms is severely skewed because of the low probability of occurrence of targets. Under such a circumstance, target pixels tend to be misclassified as background, thus deteriorating the final result.
(2)
These MD-based detectors rely on the quality of the a priori target spectra, which are usually contaminated by spectral variability. An ideal target spectrum is supposed to be pure and representative of the corresponding material. In most cases, the target spectra are derived from known target pixels in the image. Unfortunately, this strategy may lead to degradation due to the spectral variability in HSIs. Uncompensated atmospheric effects and contamination by adjacent pixels make it difficult to obtain highly qualified a priori target spectra. Given a set of contaminated target spectra, MD-based detectors fail to separate target pixels from the background and subsequently deliver inferior detection performance.
Overall, most MD-based algorithms suffer from the above obstacles and thus fail to yield satisfactory detection performance.
Considering the spectral variability in HSIs, it is necessary to generate a set of optimal target spectra from the limited reference target spectra. Meanwhile, there is also a clear need for a new detection model that overcomes the imbalanced amount of training samples between targets and the background. To address these problems, we propose a new MD-based detector for hyperspectral target detection named the dictionary learning-cooperated matrix decomposition (DLcMD) detector. First, low rank and sparse matrix decomposition (LRaSMD) [28] rather than SR is utilized to separate the targets from the background. We employ LRaSMD because it separates targets from the background by their differing structure and thus naturally avoids the negative impact of the imbalance between target and background samples. Inspired by dictionary learning, we further update the target dictionary in each iteration of LRaSMD to alleviate the spectral variations and reach a more compact representation. After that, we construct a binary hypothesis model based on LRaSMD, and a generalized likelihood ratio test (GLRT) is introduced in DLcMD to obtain a more meaningful result.
Following are two contributions of the proposed DLcMD detector for hyperspectral target detection.
(1)
An LRaSMD-based hypothesis model is proposed for hyperspectral target detection. Here, LRaSMD rather than SR is used to separate targets from the background because of the insensitivity of LRaSMD to the imbalanced amounts of target and background pixels. Meanwhile, a GLRT is also introduced to further mitigate this problem.
(2)
Dictionary learning is incorporated into LRaSMD to avoid the degradation caused by spectral variability. With the aim of forming a more compact representation for detection, the target dictionary is updated in each iteration of LRaSMD, and the final detection results verify the rationality of this strategy.
The rest of the paper is arranged as follows. Section 2 briefly introduces two widely used models for hyperspectral target detection. The proposed DLcMD detector will be described in Section 3 in detail. Extensive experiments are conducted in Section 4 to demonstrate the effectiveness of the proposed DLcMD algorithm. Conclusions are drawn in Section 5.

2. Related Works

2.1. The Linear Mixing Model

Because of the limitation of spatial resolution, many pixels in an HSI consist of more than one material in practice; these are the so-called mixed pixels. The well-known linear mixing model (LMM) [29] assumes that any pixel $\mathbf{x} \in \mathbb{R}^b$ in an HSI can be expressed as:
$$
\mathbf{x} = \mathbf{E}\boldsymbol{\alpha} + \mathbf{n} \quad \text{s.t.} \quad \boldsymbol{\alpha} \geq \mathbf{0},\; \mathbf{1}_p^T\boldsymbol{\alpha} = 1 \tag{1}
$$
where $\mathbf{E} \in \mathbb{R}^{b \times p}$ consists of $p$ endmembers, $\boldsymbol{\alpha}$ denotes the abundance fractions corresponding to each endmember, and $\mathbf{n}$ is the noise vector. The abundance vector is subject to the abundance nonnegativity constraint and the abundance sum-to-one constraint simultaneously. Many tasks, such as hyperspectral unmixing and target detection, are based on this model, and a variety of algorithms have been proposed to solve it [30,31].
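To make the notation concrete, here is a minimal numpy sketch that synthesizes a single mixed pixel under the LMM of Equation (1); the number of bands, the number of endmembers, and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
b, p = 189, 4                        # bands and endmembers (illustrative sizes)
E = rng.random((b, p))               # endmember matrix
alpha = rng.random(p)
alpha /= alpha.sum()                 # abundance nonnegativity and sum-to-one constraints
n = 0.01 * rng.standard_normal(b)    # additive noise vector
x = E @ alpha + n                    # linear mixing model: x = E * alpha + n
```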
Derived from the LMM, the replacement model [32] divides $\mathbf{E}\boldsymbol{\alpha}$ in (1) into two parts, standing for the target spectral signature $\mathbf{e}_t$ and the background spectral signature $\mathbf{e}_b$. The replacement model is usually exploited in target detection, and a binary hypothesis model based on it can be written as:
$$
\begin{cases} H_0: \mathbf{x} = \mathbf{e}_b + \mathbf{n}, & \text{target absent} \\ H_1: \mathbf{x} = k\mathbf{e}_t + (1-k)\mathbf{e}_b + \mathbf{n}, & \text{target present} \end{cases} \tag{2}
$$
where k denotes the proportion of targets in a single pixel x .
When we have access to the target dictionary $\mathbf{D}_t$ and the background dictionary $\mathbf{D}_b$, (2) can be specified as:
$$
\begin{cases} H_0: \mathbf{x} = \mathbf{D}_b\boldsymbol{\gamma} + \mathbf{n}_0, & \text{target absent} \\ H_1: \mathbf{x} = \mathbf{D}_u\boldsymbol{\beta} + \mathbf{n}_1 = \mathbf{D}_b\boldsymbol{\beta}_b + \mathbf{D}_t\boldsymbol{\beta}_t + \mathbf{n}_1, & \text{target present} \end{cases} \tag{3}
$$
where $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$ are the abundance vectors. In practice, however, the background dictionary $\mathbf{D}_b$ is unavailable and is usually generated individually for each pixel under test (PUT) through a dual window.
Before determining which class each PUT belongs to, the abundance vectors $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$ need to be solved for. SR assumes that a pixel can be sparsely represented by several atoms in the dictionary. To be specific, most elements in $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$ are zero, while the remaining ones satisfy the nonnegativity constraint and the sum-to-one constraint. SR shows its advantage in many HSI-related tasks, including target detection; hence, many SR-based algorithms [3,24,31,33] have been proposed to find the optimal abundance vector.
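The residual criterion used by union-dictionary detectors of this kind can be sketched as follows; a non-negative least-squares fit stands in for a true sparse solver, so this is only an illustration of the competing-residual idea under that simplifying assumption, not the STD algorithm itself.

```python
import numpy as np
from scipy.optimize import nnls

def residual_score(x, D_b, D_t):
    """Toy residual-based score over a union dictionary of background and target atoms."""
    D_u = np.hstack([D_b, D_t])                  # union dictionary
    coef, _ = nnls(D_u, x)                       # non-negative abundances (sparse solver omitted)
    coef_b, coef_t = coef[:D_b.shape[1]], coef[D_b.shape[1]:]
    r_b = np.linalg.norm(x - D_b @ coef_b)       # residual using background atoms only
    r_t = np.linalg.norm(x - D_t @ coef_t)       # residual using target atoms only
    return r_b - r_t                             # large value: pixel better explained by targets
```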
Despite the fact that these SR-based detectors are effective in HSI-related tasks, there are two major drawbacks that need addressing.
(1)
In most cases, the numbers of atoms in the target dictionary and the background dictionary are extremely imbalanced. When encountering a target pixel, SR tends to select more background atoms, and thus the reconstruction residual increases dramatically. Consequently, an inferior detection result is obtained.
(2)
As there is no prior knowledge about the background in hyperspectral target detection, most SR-based detectors construct the background dictionary by applying a dual concentric window to each PUT. However, the sizes of the window are set manually, which is laborious and varies for different HSIs. What is worse, some target pixels may corrupt the background dictionary, leading to degradation in detection performance. In addition, these detectors are time consuming because they have to traverse the entire image to construct the dictionaries for all pixels in the scene.

2.2. Low Rank and Sparse Matrix Decomposition

Obstacles in SR suggest finding a more practical way to separate targets from the background. Some studies [13,34,35,36] have pointed out that the background of an image lies in a low-dimensional subspace, while targets are randomly distributed and have a sparse property. Under this assumption, LRaSMD holds that a reshaped HSI $\mathbf{X} \in \mathbb{R}^{b \times n}$, which consists of $n$ pixels with $b$ bands, is the sum of a low-rank matrix $\mathbf{L}$ corresponding to the background, a sparse matrix $\mathbf{S}$ standing for the targets, and a noise matrix $\mathbf{N}$, as shown in Figure 1. This method makes fewer assumptions about the background and targets and has been widely used in many detection-related tasks.
LRaSMD is usually formulated as the following optimization problem.
$$
\min_{\mathbf{L},\mathbf{S}}\ \mathrm{rank}(\mathbf{L}) + \lambda \cdot \mathrm{card}(\mathbf{S}) \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{S} + \mathbf{N} \tag{4}
$$
where $\mathrm{rank}(\cdot)$ and $\mathrm{card}(\cdot)$ denote the rank and cardinality of a matrix, respectively, while $\lambda$ is a positive tradeoff parameter that balances these two parts. However, due to the discrete nature of the rank function, problem (4) is non-convex and cannot be solved in polynomial time. Fortunately, it has been proven in [37] that the nuclear norm (i.e., the sum of singular values) is a good surrogate for the rank function. Meanwhile, since the occurrence probability of targets is relatively low, only a few columns of matrix $\mathbf{S}$ are supposed to be nonzero. The $\ell_{2,1}$ norm of a matrix is defined as the sum of the $\ell_2$ norms of its columns, which encourages the columns of $\mathbf{S}$ to be zero. It is utilized for "sample-specific" corruptions [38], which is suitable for modeling the targets in hyperspectral images. Based on the above analysis, the following problem is a usual surrogate for problem (4).
$$
\min_{\mathbf{L},\mathbf{S}}\ \|\mathbf{L}\|_* + \lambda\|\mathbf{S}\|_{2,1} \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{S} + \mathbf{N} \tag{5}
$$
where $\|\cdot\|_*$ and $\|\cdot\|_{2,1}$ denote the nuclear norm and the $\ell_{2,1}$ norm of a matrix, respectively. Many algorithms have been proposed to solve the above optimization problem, such as the alternating direction method of multipliers (ADMM) [39], GoDecomposition (GoDec) [34], and the linearized alternating direction method with adaptive penalty (LADMAP) [40].
When it comes to target detection in HSI, each column of the sparse matrix $\mathbf{S}$ can be linearly represented by a dictionary $\mathbf{D} \in \mathbb{R}^{b \times N_t}$, which consists of $N_t$ target spectra given in advance; problem (5) can thus be rewritten as follows.
$$
\min_{\mathbf{L},\mathbf{A}}\ \|\mathbf{L}\|_* + \lambda\|\mathbf{A}\|_{2,1} \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{D}\mathbf{A} + \mathbf{N} \tag{6}
$$
where $\mathbf{D}\mathbf{A} = \mathbf{S}$. Obviously, problem (5) is a special case of problem (6) obtained by setting $\mathbf{D} = \mathbf{I}$.
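As a small illustration of the two regularizers in problem (6), the sketch below evaluates the nuclear norm and the $\ell_{2,1}$ norm with numpy; it only computes the objective for given factors and is not a solver.

```python
import numpy as np

def lrasmd_objective(L, A, lam):
    """Objective of problem (6): ||L||_* + lambda * ||A||_{2,1}."""
    nuclear = np.linalg.norm(L, ord='nuc')     # sum of singular values of L
    l21 = np.linalg.norm(A, axis=0).sum()      # sum of the l2 norms of the columns of A
    return nuclear + lam * l21
```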

3. Proposed Methodology

3.1. LRaSMD-Based Hypothesis Model

LRaSMD is liberated from the elaborate background dictionary design, as shown in (6). Since it separates targets from the background automatically by exploiting the different characteristics of these two parts, LRaSMD can also avoid the misclassification of target pixels caused by SR. Moreover, there is only one parameter, $\lambda$, that needs adjusting, which makes LRaSMD less sensitive to different scenes in practice.
The above analysis motivates a new hypothesis model based on LRaSMD for hyperspectral target detection. Following the formulation in (2), we can obtain the LRaSMD-based hypothesis model for each PUT x as follows.
$$
\begin{cases} H_0: \mathbf{x} = \mathbf{l} + \mathbf{n}_0, & \text{target absent} \\ H_1: \mathbf{x} = \mathbf{l} + \mathbf{D}\boldsymbol{\alpha} + \mathbf{n}_1, & \text{target present} \end{cases} \tag{7}
$$
where $\mathbf{l}$ and $\mathbf{D}\boldsymbol{\alpha}$ are the columns of matrices $\mathbf{L}$ and $\mathbf{D}\mathbf{A}$ corresponding to the PUT.
Under the null hypothesis, the absence of a target in $\mathbf{x}$ means that the elements of the coefficient vector $\boldsymbol{\alpha}$ are close to zero and can thus be discarded when representing $\mathbf{x}$. Under the alternative hypothesis, however, the elements of $\boldsymbol{\alpha}$ are not zero because a target is present in $\mathbf{x}$, and $\mathbf{D}\boldsymbol{\alpha}$ cannot be ignored when constructing $\mathbf{x}$. This new target detection hypothesis model takes advantage of LRaSMD's ability to distinguish targets from the background and is thus more sensible in practice.
In this LRaSMD-based hypothesis model, we assume that the reconstruction residuals under the two hypotheses obey Gaussian distributions with the same covariance structure but different variances:
$$
\mathbf{n}_0 \sim \mathcal{N}(\mathbf{0}, \sigma_0^2\boldsymbol{\Gamma}), \qquad \mathbf{n}_1 \sim \mathcal{N}(\mathbf{0}, \sigma_1^2\boldsymbol{\Gamma}) \tag{8}
$$
where $\sigma_0^2$ and $\sigma_1^2$ are the corresponding variances for the two hypotheses. $\boldsymbol{\Gamma}$ is the covariance matrix, which is estimated from a given independent dataset $\mathbf{Y} = \{\mathbf{y}_i \,|\, \mathbf{y}_i \sim \mathcal{N}(\mathbf{0},\boldsymbol{\Gamma}),\ i = 1,\dots,N\}$ [41]. When $N$ is very large, the covariance matrix can be obtained by:
$$
\hat{\boldsymbol{\Gamma}} = \sum_{i=1}^{N}\mathbf{y}_i\mathbf{y}_i^T \tag{9}
$$
In practice, it is not uncommon to derive the covariance matrix from the residual. In our work, we simply utilize the noise part $\mathbf{N}$ to construct $\hat{\boldsymbol{\Gamma}}$. The joint likelihood functions under each hypothesis can subsequently be obtained according to (7)–(9).
$$
\begin{aligned}
L(\mathbf{x},\mathbf{Y}\,|\,H_0) &= \frac{1}{\big((2\pi)^b|\hat{\boldsymbol{\Gamma}}|\big)^{\frac{N+1}{2}}(\sigma_0^2)^{\frac{b}{2}}}\exp\!\left(-\frac{1}{2}\sum_{i=1}^{N}\mathbf{y}_i^T\hat{\boldsymbol{\Gamma}}^{-1}\mathbf{y}_i - \frac{1}{2\sigma_0^2}(\mathbf{x}-\mathbf{l})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l})\right) \\
L(\mathbf{x},\mathbf{Y}\,|\,H_1) &= \frac{1}{\big((2\pi)^b|\hat{\boldsymbol{\Gamma}}|\big)^{\frac{N+1}{2}}(\sigma_1^2)^{\frac{b}{2}}}\exp\!\left(-\frac{1}{2}\sum_{i=1}^{N}\mathbf{y}_i^T\hat{\boldsymbol{\Gamma}}^{-1}\mathbf{y}_i - \frac{1}{2\sigma_1^2}(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})\right)
\end{aligned} \tag{10}
$$
In order to obtain a GLRT-based detector, we estimate the variances for both hypotheses via maximum likelihood estimation (MLE). After some algebra, we have:
$$
\begin{aligned}
\hat{\sigma}_0^2 &= \arg\max_{\sigma_0^2} L(\mathbf{x},\mathbf{Y}\,|\,H_0) = \frac{1}{b}(\mathbf{x}-\mathbf{l})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l}) \\
\hat{\sigma}_1^2 &= \arg\max_{\sigma_1^2} L(\mathbf{x},\mathbf{Y}\,|\,H_1) = \frac{1}{b}(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})
\end{aligned} \tag{11}
$$
After that, the GLRT-based [42] detector under this hypothesis model can be obtained as follows.
$$
GLR(\mathbf{x}) \triangleq \left(\frac{L(\mathbf{x},\mathbf{Y};\hat{\sigma}_1^2\,|\,H_1)}{L(\mathbf{x},\mathbf{Y};\hat{\sigma}_0^2\,|\,H_0)}\right)^{2/b} = \frac{(\mathbf{x}-\mathbf{l})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l})}{(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})} \tag{12}
$$
When the target is absent, $(\mathbf{x}-\mathbf{l}) \approx (\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})$, and $GLR(\mathbf{x})$ is close to 1. In contrast, when a target is present in the PUT, the Mahalanobis distance between $\mathbf{x}$ and $\mathbf{l}$ is greater than that between $\mathbf{x}$ and $\mathbf{l} + \mathbf{D}\boldsymbol{\alpha}$, hence $GLR(\mathbf{x}) > 1$. In order to obtain a more meaningful result, we subtract one from (12) for each PUT and obtain the DLcMD detector:
$$
D_{\mathrm{DLcMD}}(\mathbf{x}) = \frac{(\mathbf{x}-\mathbf{l})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l})}{(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})^T\hat{\boldsymbol{\Gamma}}^{-1}(\mathbf{x}-\mathbf{l}-\mathbf{D}\boldsymbol{\alpha})} - 1 \tag{13}
$$
After applying LRaSMD to the test HSI, the detection result for each PUT is obtained with (13). Pixels with values near zero tend to be background, while those with large values have a higher probability of being targets.
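A minimal numpy sketch of this detection step is given below, assuming the LRaSMD factors L, D, A and the noise part N are already available; the small ridge added to the covariance estimate is an assumption for numerical stability, and any scaling of $\hat{\boldsymbol{\Gamma}}$ cancels in the ratio of Equation (13).

```python
import numpy as np

def dlcmd_scores(X, L, D, A, N):
    """Per-pixel DLcMD score, following Equations (9) and (13).

    X, L, N : (b, n) data, low-rank part, and noise residual; D : (b, N_t); A : (N_t, n).
    """
    b, n = X.shape
    Gamma = N @ N.T                                   # covariance from the noise part, Eq. (9)
    Gamma += 1e-6 * np.trace(Gamma) / b * np.eye(b)   # small ridge (assumption)
    Gi = np.linalg.inv(Gamma)
    R0 = X - L                                        # residual under H0
    R1 = X - L - D @ A                                # residual under H1
    num = np.einsum('ij,ik,kj->j', R0, Gi, R0)        # (x - l)^T Gamma^-1 (x - l) per pixel
    den = np.einsum('ij,ik,kj->j', R1, Gi, R1)
    return num / (den + 1e-12) - 1.0                  # Eq. (13)
```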

3.2. Dictionary Learning-Cooperated Matrix Decomposition

In most hyperspectral target detectors, the a priori target spectra are directly used to determine whether a PUT is a target or not. However, the spectral variability caused by various mechanisms, such as radiometric and atmospheric effects, may compromise the accuracy of the target spectra. In this case, detectors that are susceptible to the quality of the given target spectral information yield inferior performance.
Some efforts have been made to get out of this dilemma. In [43], Yang et al. utilize an inequality constraint to make CEM more robust to the variation. A reweighted ACE (rACE) [44] is proposed to reconstruct an optimal target spectrum from the given spectrum, but its performance relies on the result of the first iteration. In [45], an adaptive weighted learning method is developed to obtain a specific target spectrum for hyperspectral target detection.
In this section, we embed dictionary learning into LRaSMD to alternately generate more meaningful target spectra and separate targets from the background. Derived from (6), LRaSMD with dictionary learning for hyperspectral target detection can be transformed into an optimization problem as follows.
$$
\min_{\mathbf{L},\mathbf{D},\mathbf{A},\mathbf{J}}\ \|\mathbf{L}\|_* + \lambda\|\mathbf{J}\|_{2,1} \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{D}\mathbf{A} + \mathbf{N},\ \ \mathbf{A} = \mathbf{J} \tag{14}
$$
where J is an auxiliary variable to make the objective function separable. The augmented Lagrangian function of problem (14) is:
$$
\begin{aligned}
f(\mathbf{L},\mathbf{D},\mathbf{A},\mathbf{J},\mathbf{Y}_1,\mathbf{Y}_2) = {} & \|\mathbf{L}\|_* + \lambda\|\mathbf{J}\|_{2,1} + \langle\mathbf{Y}_1,\ \mathbf{X}-\mathbf{L}-\mathbf{D}\mathbf{A}\rangle + \langle\mathbf{Y}_2,\ \mathbf{A}-\mathbf{J}\rangle \\
& + \frac{\mu}{2}\left(\|\mathbf{X}-\mathbf{L}-\mathbf{D}\mathbf{A}\|_F^2 + \|\mathbf{A}-\mathbf{J}\|_F^2\right)
\end{aligned} \tag{15}
$$
wherein $\mathbf{Y}_1 \in \mathbb{R}^{b \times n}$ and $\mathbf{Y}_2 \in \mathbb{R}^{N_t \times n}$ are the Lagrange multipliers, $\langle\cdot,\cdot\rangle$ stands for the matrix inner product, and $\mu > 0$ is the penalty parameter. The augmented Lagrangian function is convex in each block of variables, so every subproblem can be solved exactly. The widely used ADMM is adopted here, and the problem is divided into the following subproblems.
(1) Fix other variables and update L . The objective function with respect to L can be rewritten as follows.
$$
\begin{aligned}
\mathbf{L}_{k+1} &= \arg\min_{\mathbf{L}}\ \|\mathbf{L}\|_* + \langle\mathbf{Y}_{1,k},\ \mathbf{X}-\mathbf{L}-\mathbf{D}_k\mathbf{A}_k\rangle + \frac{\mu_k}{2}\|\mathbf{X}-\mathbf{L}-\mathbf{D}_k\mathbf{A}_k\|_F^2 \\
&= \arg\min_{\mathbf{L}}\ \|\mathbf{L}\|_* + \frac{\mu_k}{2}\left\|\mathbf{L} - \left(\mathbf{X}-\mathbf{D}_k\mathbf{A}_k + \frac{\mathbf{Y}_{1,k}}{\mu_k}\right)\right\|_F^2
\end{aligned} \tag{16}
$$
Subproblem (16) can be solved via the singular value thresholding (SVT) operator [46]:
$$
\mathbf{L}_{k+1} = \mathbf{U}_L\,\mathcal{S}_{1/\mu_k}[\boldsymbol{\Sigma}_L]\,\mathbf{V}_L^T \tag{17}
$$
with
$$
\mathbf{X}-\mathbf{D}_k\mathbf{A}_k + \frac{\mathbf{Y}_{1,k}}{\mu_k} = \mathbf{U}_L\boldsymbol{\Sigma}_L\mathbf{V}_L^T \tag{18}
$$
and
$$
\mathcal{S}_{1/\mu_k}[\boldsymbol{\Sigma}_L]_{ij} = \begin{cases} (\boldsymbol{\Sigma}_L)_{ij} - 1/\mu_k, & (\boldsymbol{\Sigma}_L)_{ij} > 1/\mu_k \\ (\boldsymbol{\Sigma}_L)_{ij} + 1/\mu_k, & (\boldsymbol{\Sigma}_L)_{ij} < -1/\mu_k \\ 0, & \text{otherwise} \end{cases} \tag{19}
$$
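A brief numpy sketch of the SVT step in Equations (17)–(19); since the singular values returned by the SVD are non-negative, only the upper shrinkage branch is active here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding operator for the L-update."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return U @ np.diag(s_shrunk) @ Vt

# L_{k+1} = svt(X - D_k @ A_k + Y1_k / mu_k, 1.0 / mu_k)   # Eqs. (17)-(18)
```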
(2) Fix other variables and update J . The objective function with respect to J can be rewritten as follows.
$$
\begin{aligned}
\mathbf{J}_{k+1} &= \arg\min_{\mathbf{J}}\ \lambda\|\mathbf{J}\|_{2,1} + \langle\mathbf{Y}_{2,k},\ \mathbf{A}_k-\mathbf{J}\rangle + \frac{\mu_k}{2}\|\mathbf{A}_k-\mathbf{J}\|_F^2 \\
&= \arg\min_{\mathbf{J}}\ \|\mathbf{J}\|_{2,1} + \frac{\mu_k}{2\lambda}\left\|\mathbf{J} - \left(\mathbf{A}_k + \frac{\mathbf{Y}_{2,k}}{\mu_k}\right)\right\|_F^2
\end{aligned} \tag{20}
$$
The solution for the i-th column of $\mathbf{J}$ can be obtained via the $\ell_{2,1}$-norm minimization operator [47]:
$$
\mathbf{J}_{\cdot i,\,k+1} = \begin{cases} \left(1 - \dfrac{\lambda/\mu_k}{\|\mathbf{Q}_{\cdot i}\|_2}\right)\mathbf{Q}_{\cdot i}, & \|\mathbf{Q}_{\cdot i}\|_2 > \lambda/\mu_k \\ \mathbf{0}, & \text{otherwise} \end{cases} \tag{21}
$$
where $\mathbf{Q} = \mathbf{A}_k + \mathbf{Y}_{2,k}/\mu_k$.
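The column-wise shrinkage of Equation (21) can be sketched as follows; the small constant guarding against division by zero is an assumption.

```python
import numpy as np

def l21_shrink(Q, tau):
    """Column-wise l_{2,1} shrinkage used in the J-update, Eq. (21)."""
    norms = np.linalg.norm(Q, axis=0)
    scale = np.where(norms > tau, 1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return Q * scale                             # columns with small l2 norm are zeroed

# J_{k+1} = l21_shrink(A_k + Y2_k / mu_k, lam / mu_k)
```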
(3) Fix other variables and update A . The objective function with respect to A can be rewritten as follows.
$$
\begin{aligned}
\mathbf{A}_{k+1} &= \arg\min_{\mathbf{A}}\ \langle\mathbf{Y}_{1,k},\ \mathbf{X}-\mathbf{L}_{k+1}-\mathbf{D}_k\mathbf{A}\rangle + \langle\mathbf{Y}_{2,k},\ \mathbf{A}-\mathbf{J}_{k+1}\rangle + \frac{\mu_k}{2}\left(\|\mathbf{X}-\mathbf{L}_{k+1}-\mathbf{D}_k\mathbf{A}\|_F^2 + \|\mathbf{A}-\mathbf{J}_{k+1}\|_F^2\right) \\
&= \arg\min_{\mathbf{A}}\ \frac{\mu_k}{2}\left(\left\|\left(\mathbf{X}-\mathbf{L}_{k+1}+\frac{\mathbf{Y}_{1,k}}{\mu_k}\right)-\mathbf{D}_k\mathbf{A}\right\|_F^2 + \left\|\mathbf{A}-\left(\mathbf{J}_{k+1}-\frac{\mathbf{Y}_{2,k}}{\mu_k}\right)\right\|_F^2\right)
\end{aligned} \tag{22}
$$
Subproblem (22) has the solution:
$$
\mathbf{A}_{k+1} = \left(\mathbf{D}_k^T\mathbf{D}_k + \mathbf{I}\right)^{-1}\left(\frac{\mathbf{D}_k^T\mathbf{Y}_{1,k} - \mathbf{Y}_{2,k}}{\mu_k} + \mathbf{D}_k^T\mathbf{X} - \mathbf{D}_k^T\mathbf{L}_{k+1} + \mathbf{J}_{k+1}\right) \tag{23}
$$
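A direct transcription of the closed-form A-update in Equation (23), written as a linear solve rather than an explicit inverse:

```python
import numpy as np

def update_A(D, X, L_new, J_new, Y1, Y2, mu):
    """Closed-form A-update of Eq. (23)."""
    lhs = D.T @ D + np.eye(D.shape[1])
    rhs = (D.T @ Y1 - Y2) / mu + D.T @ (X - L_new) + J_new
    return np.linalg.solve(lhs, rhs)             # solves (D^T D + I) A = rhs
```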
(4) Fix other variables and update D . The objective function with respect to D can be rewritten as follows.
$$
\begin{aligned}
\mathbf{D}_{k+1} &= \arg\min_{\mathbf{D}}\ \langle\mathbf{Y}_{1,k},\ \mathbf{X}-\mathbf{L}_{k+1}-\mathbf{D}\mathbf{A}_{k+1}\rangle + \frac{\mu_k}{2}\|\mathbf{X}-\mathbf{L}_{k+1}-\mathbf{D}\mathbf{A}_{k+1}\|_F^2 \\
&= \arg\min_{\mathbf{D}}\ \frac{\mu_k}{2}\left\|\left(\mathbf{X}-\mathbf{L}_{k+1}+\frac{\mathbf{Y}_{1,k}}{\mu_k}\right)-\mathbf{D}\mathbf{A}_{k+1}\right\|_F^2
\end{aligned} \tag{24}
$$
Subproblem (24) has the solution:
$$
\mathbf{D}_{k+1} = \left(\mathbf{X}-\mathbf{L}_{k+1}+\frac{\mathbf{Y}_{1,k}}{\mu_k}\right)\mathbf{A}_{k+1}^{\dagger} \tag{25}
$$
where $\mathbf{A}^{\dagger}$ denotes the Moore–Penrose pseudoinverse of the matrix $\mathbf{A}$.
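The D-update of Equation (25) is then a one-liner built on the pseudoinverse:

```python
import numpy as np

def update_D(X, L_new, A_new, Y1, mu):
    """Closed-form D-update of Eq. (25)."""
    return (X - L_new + Y1 / mu) @ np.linalg.pinv(A_new)
```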
(5) Fix other variables and update Y 1 and Y 2 . The Lagrangian multipliers can be updated with the gradient ascent method.
$$
\begin{aligned}
\mathbf{Y}_{1,k+1} &= \mathbf{Y}_{1,k} + \mu_k\left(\mathbf{X}-\mathbf{L}_{k+1}-\mathbf{D}_{k+1}\mathbf{A}_{k+1}\right) \\
\mathbf{Y}_{2,k+1} &= \mathbf{Y}_{2,k} + \mu_k\left(\mathbf{A}_{k+1}-\mathbf{J}_{k+1}\right)
\end{aligned} \tag{26}
$$
(6) Fix other variables and update μ . The penalty parameter μ is adaptively updated with the criterion:
$$
\mu_{k+1} = \min(\mu_{\max},\ \rho\mu_k) \tag{27}
$$
where $\mu_{\max}$ is an upper bound of $\{\mu_k\}$, and $\rho > 0$ is defined as
$$
\rho = \begin{cases} \rho_0 > 1, & \left(\|\mathbf{N}_{k+1}\|_F^2 - \|\mathbf{N}_k\|_F^2\right)/\|\mathbf{N}_k\|_F^2 > \epsilon \\ \rho_1 < 1, & \text{otherwise} \end{cases} \tag{28}
$$
wherein $\mathbf{N}_k = \mathbf{X} - \mathbf{L}_k - \mathbf{D}_k\mathbf{A}_k$, and the tolerance $\epsilon > 0$ is a predefined value. The penalty is adapted in this way to accelerate convergence [40].
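The multiplier and penalty updates of Equations (26)–(28) can be sketched together; the guard against dividing by a zero residual norm is an assumption.

```python
import numpy as np

def update_multipliers_and_penalty(X, L, D, A, J, Y1, Y2, mu, N_prev_sq,
                                   mu_max=1e6, rho0=1.1, rho1=0.99, eps=1e-3):
    """Gradient-ascent multiplier updates (Eq. (26)) and adaptive penalty (Eqs. (27)-(28))."""
    N_new = X - L - D @ A
    Y1 = Y1 + mu * N_new                         # Eq. (26)
    Y2 = Y2 + mu * (A - J)
    N_new_sq = np.linalg.norm(N_new, 'fro') ** 2
    rho = rho0 if (N_new_sq - N_prev_sq) / max(N_prev_sq, 1e-12) > eps else rho1   # Eq. (28)
    mu = min(mu_max, rho * mu)                   # Eq. (27)
    return Y1, Y2, mu, N_new_sq
```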
The optimal solution can be obtained by alternately updating the above variables until convergence.

3.3. Final Scheme of DLcMD

The scheme of the proposed DLcMD detector for hyperspectral target detection is summarized in Algorithm 1, and the flowchart of the whole algorithm is shown in Figure 2. Given an HSI to be tested, DLcMD first exploits LRaSMD to decompose it into three parts, denoting the background, targets, and noise. Here, the widely used ADMM alternately updates the variables to approach the optimal solution. Different from traditional MD-based methods, the target dictionary is updated during each iteration of LRaSMD; by this means, the spectral variability caused by atmospheric effects is alleviated. The decomposition proceeds until convergence. After that, a hypothesis model dedicated to LRaSMD is constructed, and a GLRT-based detector is derived from it. The final detection result is then obtained by this GLRT-based detector.
Algorithm 1 The proposed DLcMD detector
Input:
     (1) the reshaped HSI data set $\mathbf{X} \in \mathbb{R}^{b \times n}$;
     (2) the target dictionary $\mathbf{D}_0 \in \mathbb{R}^{b \times N_t}$ given in advance;
     (3) the tradeoff parameter $\lambda > 0$.
Output:
     hyperspectral target detection map.
Initialization:
     $\mathbf{L}_0 = \mathbf{X}$, $\mathbf{A}_0 = \mathbf{J}_0 = \mathbf{0}_{N_t \times n}$, $\mu_0 = 1$, $\mu_{\max} = 10^6$, $\rho_0 = 1.1$, $\rho_1 = 0.99$, $\epsilon = 10^{-3}$, the Lagrangian multipliers $\mathbf{Y}_{1,0}$ and $\mathbf{Y}_{2,0}$ are initialized randomly, $Iter_{\max}$, $k = 0$.
Procedure:
1: Repeat:
2:   Update $\mathbf{L}_{k+1}$ via Equation (17).
3:   Update $\mathbf{J}_{k+1}$ via Equation (21).
4:   Update $\mathbf{A}_{k+1}$ via Equation (23).
5:   Update $\mathbf{D}_{k+1}$ via Equation (25).
6:   Update the Lagrangian multipliers $\mathbf{Y}_{1,k+1}$ and $\mathbf{Y}_{2,k+1}$ via Equation (26).
7:   Update $\mu_{k+1}$ via Equation (27).
8:   $k := k + 1$.
9: Until $k \geq Iter_{\max}$.
10: For each PUT $\mathbf{x}_i$ in $\mathbf{X}$, obtain the detection result $D_{\mathrm{DLcMD}}(\mathbf{x}_i)$ via Equation (13).
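For reference, the following sketch strings the update helpers sketched earlier into the loop of Algorithm 1; it is an illustrative re-implementation under stated assumptions (random multiplier initialization, fixed iteration count), not the authors' MATLAB code.

```python
import numpy as np

def dlcmd(X, D0, lam, iter_max=100):
    """End-to-end DLcMD sketch following Algorithm 1 (uses svt, l21_shrink, update_A,
    update_D, update_multipliers_and_penalty, and dlcmd_scores defined above)."""
    b, n = X.shape
    Nt = D0.shape[1]
    rng = np.random.default_rng(0)
    L, D = X.copy(), D0.copy()
    A = np.zeros((Nt, n))
    J = np.zeros((Nt, n))
    Y1 = rng.standard_normal((b, n))             # random multiplier initialization
    Y2 = rng.standard_normal((Nt, n))
    mu = 1.0
    N_sq = np.linalg.norm(X - L - D @ A, 'fro') ** 2
    for _ in range(iter_max):
        L = svt(X - D @ A + Y1 / mu, 1.0 / mu)          # Eq. (17)
        J = l21_shrink(A + Y2 / mu, lam / mu)           # Eq. (21)
        A = update_A(D, X, L, J, Y1, Y2, mu)            # Eq. (23)
        D = update_D(X, L, A, Y1, mu)                   # Eq. (25), dictionary learning step
        Y1, Y2, mu, N_sq = update_multipliers_and_penalty(
            X, L, D, A, J, Y1, Y2, mu, N_sq)            # Eqs. (26)-(28)
    N = X - L - D @ A
    return dlcmd_scores(X, L, D, A, N)                  # Eq. (13)
```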

4. Experiments

In this section, we conduct several experiments on five HSI datasets to evaluate the performance of the proposed DLcMD detector for hyperspectral target detection. After a brief introduction of the five hyperspectral datasets, we first analyze the impact of the parameter λ on the final detection performance. Then, DLcMD is compared with other classical and advanced detectors. The experiments are implemented in MATLAB on an Intel Quad-Core i5-6200 CPU with 4 GB of RAM.

4.1. Datasets

In this paper, five HSI datasets are employed to evaluate the performance of the proposed DLcMD detector. The pseudocolor images and the corresponding ground truth are shown in Figure 3.
(1)
SanDiego-I: This hyperspectral image was captured by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor over the San Diego airport area. This dataset has a 3.5-m spatial resolution and 10-nm spectral resolution. The size of this image is 100 × 100, and it contains 224 spectral channels in wavelengths ranging from 370 to 2510 nm. A total of 189 bands remain after the removal of bad bands (1–6, 33–35, 97, 107–113, 153–166, and 221–224), which correspond to low signal-to-noise ratio and water absorption regions. Three aircraft in the scene, which consist of 134 pixels in total, are treated as targets.
(2)
SanDiego-II: This dataset was also derived from the AVIRIS sensor. This 100 × 100 image has a 3.5-m spatial resolution with 10-nm spectral resolution. A total of 189 bands remain after bad bands are removed. Three airplanes located at the upper right of the image, which consist of 57 pixels, are treated as targets.
(3)
LosAngeles-I: This dataset was also derived from the AVIRIS sensor with a 7.1-m spatial resolution. The size of the image is 100 × 100 , with 205 spectral bands in wavelengths ranging from 400 to 2500 nm after water vapor absorption bands are removed. There are 87 pixels defined as targets, representing two aircraft.
(4)
LosAngeles-II: This dataset was also derived from the AVIRIS sensor on the airborne platform. This 100 × 100 × 205 image has a 7.1-m spatial resolution and 10-nm spectral resolution. There are 25 human-made objects of different sizes that are treated as targets.
(5)
TexasCoast: This dataset was derived from the AVIRIS sensor on the airborne platform. This image consists of 100 × 100 pixels with 207 bands after the removal of bad bands. The spatial resolution of this dataset is 17.2-m per pixel, and 20 objects corresponding to oil tanks of different sizes are regarded as targets.

4.2. Experimental Settings

For each of these five datasets, we randomly select one pixel from each object to meet the requirement in practice. Thus, we have N t = 3 , N t = 3 , N t = 2 , N t = 25 , and N t = 20 target spectra in each dataset for hyperspectral target detection, respectively.
The detectors utilized here for comparison with DLcMD are: ACE [22], SMF [23], the matched subspace detector (MSD) [48], STD [24], HSSD [30], the background learning based on target suppression constraint (BLTSC) method [49], and the decomposition model with background dictionary learning (DM-BDL) [50]. STD and HSSD require a background dictionary to better distinguish targets from the background, and the dual concentric window strategy is usually utilized to obtain the background spectra. The radius of the outer window $R_{out}$ is selected from {3, 4, 5, 6, 7, 8, 9}, and that of the inner window $R_{in}$ ranges from 1 to $R_{out} - 1$ when $R_{out}$ is fixed. Further, these two detectors adopt SR to linearly represent each pixel, and the sparsity level $K_0$ is a significant parameter in both of them. As there is no generic method to select the optimal value of $K_0$, it is usually searched manually; its range is set as {2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 30}. Moreover, HSSD needs to purify the background dictionary to obtain a promising result; the coefficient $t$ is selected from {1, 2, 3, 4}, following the setting in [30]. In terms of MSD, the eigenvectors corresponding to the first $K$ largest eigenvalues of the covariance matrix formed by the training atoms are used to generate the background dictionary, and the range of $K$ is set the same as that of $K_0$. BLTSC consists of a background selection block, an adversarial autoencoder block, and a target detection block; thus it is necessary to design the network structure (depth of the network $d$ and number of hidden nodes $N_{hid}$) and related parameters (tradeoff parameter $\lambda$ and threshold values $\epsilon$ and $\delta$). DM-BDL requires a background dictionary to represent each pixel; the learning parameter $\gamma$ and the number of atoms $K$ are utilized to train a discriminative background dictionary. For the sake of fairness, the parameters of these target detectors are set to achieve their optimal detection performance.
Two widely used metrics, the receiver operating characteristic (ROC) curve and the area under the curve (AUC), are utilized to evaluate the performance. ROC curves reveal the relationship between the true positive rate $P_{TPR}$ and the false positive rate $P_{FPR}$ under different thresholds. Specifically, $P_{TPR}$ and $P_{FPR}$ are defined as follows.
$$
P_{TPR} = \frac{N_{detected}}{N_{target}}, \qquad P_{FPR} = \frac{N_{false}}{N_{background}}
$$
where $N_{detected}$ and $N_{target}$ are the number of detected target pixels and the total number of target pixels, while $N_{false}$ and $N_{background}$ are the number of pixels mistaken for targets and the total number of background pixels in the image. A detector is considered to perform better when its ROC curve lies above and to the left of those of the other detectors. However, this does not always happen in practice. Thus, the AUC value is introduced to quantitatively evaluate the detection performance, with a higher AUC indicating better detection performance.
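A compact numpy sketch of both metrics; ties between scores are not handled specially here, which is a simplifying assumption compared with a full ROC implementation.

```python
import numpy as np

def roc_auc(scores, labels):
    """TPR/FPR pairs over all thresholds and the AUC by the trapezoidal rule.

    scores : (n,) detector output; labels : (n,) ground truth, 1 for target pixels.
    """
    order = np.argsort(-scores)                   # sort pixels by descending score
    hits = labels[order].astype(bool)
    tpr = np.cumsum(hits) / hits.sum()            # N_detected / N_target
    fpr = np.cumsum(~hits) / (~hits).sum()        # N_false / N_background
    tpr = np.concatenate(([0.0], tpr))
    fpr = np.concatenate(([0.0], fpr))
    return fpr, tpr, np.trapz(tpr, fpr)
```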

4.3. Parameter Analysis

The proposed DLcMD detector involves only one parameter that needs fine-tuning: the positive tradeoff parameter $\lambda$. Here, we discuss the effect of $\lambda$ on the final detection performance of DLcMD. All five hyperspectral datasets are exploited in this subsection to provide convincing evidence.
The tradeoff parameter $\lambda$ determines how much information is retained in the low-rank part and the sparse part, which in turn influences the detection performance. Here, we perform an additional experiment to show how $\lambda$ influences the separation between the background and targets. As shown in Figure 4, when $\lambda$ is set properly, the low-rank part and the sparse part preserve the most valuable information about the background and targets, respectively. However, when $\lambda$ is set improperly, especially when it is very large, target information may be lost from the sparse part, which degrades the detection performance. The reason is that increasing $\lambda$ imposes a stronger penalty on the sparse part and vice versa: a relatively large $\lambda$ forces the $\ell_{2,1}$ norm to be small, so information about the targets is pushed into the low-rank part. Based on this additional experiment, we find that a suitable setting of $\lambda$ provides robust and outstanding performance in distinguishing targets from the background.
It is not surprising that the optimal value of $\lambda$ differs across these hyperspectral datasets. In order to find the optimal value for the five datasets, we set the range of $\lambda$ as $\{10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$. The detection performance is evaluated through the AUC values, and the results with respect to $\lambda$ are shown in Figure 5. For the SanDiego-I dataset, when $\lambda$ is relatively small (i.e., $\lambda \leq 10^{-2}$), the AUC values are basically unchanged. This also holds for all of the other hyperspectral datasets except Los Angeles-I. Note that despite the decrease in AUC values when $\lambda$ is set to $10^{-1}$, DLcMD still yields satisfying performance, with AUC values higher than or close to 0.99. However, things are different for the Los Angeles-I dataset. The AUC values stay unchanged as $\lambda$ increases until it reaches a certain threshold; the best detection result is obtained when $\lambda$ is set to $10^{-2}$, and there is a drop after $\lambda$ increases to $10^{-1}$. The reason may be the complexity of the background. We find that the background is simple in the San Diego-I, San Diego-II, Los Angeles-II, and Texas Coast datasets but more complicated in the Los Angeles-I dataset. HSIs with a simpler background tend to have a more compact low-rank representation, and the separation is less sensitive to the tradeoff parameter $\lambda$. The optimal setting of $\lambda$ encourages better separation between the background and targets. In this experiment, the optimal detection results for the five datasets are obtained when $\lambda$ is set to $10^{-5}$, $10^{-5}$, $10^{-2}$, $10^{-3}$, and $10^{-4}$, respectively. For simplicity, $\lambda$ is set to $10^{-2}$ for all five hyperspectral datasets, and this value is used directly in the next subsection for comparison with the other hyperspectral target detectors.
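The grid search behind Figure 5 can be scripted with the dlcmd() and roc_auc() helpers sketched earlier; the function below is an illustrative assumption about how such a sweep could be organized, and it requires a ground-truth mask, which is only available for benchmark datasets.

```python
import numpy as np

def select_lambda(X, D0, ground_truth,
                  lambdas=(1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1)):
    """Evaluate the AUC over the lambda grid used above and return the best setting."""
    aucs = []
    for lam in lambdas:
        scores = dlcmd(X, D0, lam)                    # sketch from Section 3.3
        _, _, auc = roc_auc(scores, ground_truth)     # sketch from Section 4.2
        aucs.append(auc)
    best = int(np.argmax(aucs))
    return lambdas[best], aucs
```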

4.4. Detection Performance

In this subsection, we will compare our DLcMD detector with the state-of-the-art for hyperspectral target detection. All of the datasets described in Section 4.1 are used in this subsection. The ROC curves and AUC values are used to evaluate the detection performance. Before conducting the experiment, we briefly introduce several detectors and explain why we apply these detectors for comparison.
(1)
ACE: It discards any structured background information and uses a statistical distribution to model the background. The likelihoods are taken as a ratio to yield a GLRT-based detector. ACE is one of the most powerful subpixel target detectors and has been widely used in hyperspectral target detection.
(2)
SMF: Like ACE, the SMF detector assumes that the background and targets share the same covariance matrix but have different mean values. SMF finds a filter that maximizes the signal-to-clutter ratio (SCR) and is also a classical method for target detection in HSIs.
(3)
MSD: In the matched subspace model, a binary hypothesis model is introduced to determine the classification result of each sample. MSD shares the same form with DLcMD but different distributions of Gaussian noise. Further, a numerical solution of the abundance vector is obtained with fully constrained least squares (FCLS) [51].
(4)
STD: This detector sparsely represents each pixel with a union dictionary consisting of the background and target spectra. STD is a matrix decomposition-based detector and is exploited here for comparison with our proposed DLcMD method.
(5)
HSSD: This detector sparsely represents each pixel, but with different dictionaries under the two competing hypotheses. It assumes that the reconstruction residuals under these two hypotheses obey Gaussian distributions with the same covariance structure but different variances, the same as our proposed detector.
(6)
BLTSC: Considering the insufficiency of a priori target spectra, this detector learns the distribution of background samples derived by CEM, and the discrepancy between the reconstructed spectra and the original ones is used to spot the targets. The result is reweighted to suppress the undesired background. BLTSC is a reconstruction-based detector and is used here to show the effect of the GLRT in our proposed method.
(7)
DM-BDL: This detector is based on LRaSMD, and the dictionary learning strategy is also exploited during the iteration, the same as our DLcMD. However, DM-BDL learns the background dictionary while ours focuses on targets. It is compared with our method to illustrate the advantage of DLcMD in dealing with spectral variability in hyperspectral imagery.
Parameters are set to achieve the optimal detection performance for all eight hyperspectral target detectors. From the analysis in Section 4.3, our DLcMD detector is insensitive to the setting of $\lambda$ when the background is not very complicated, and $\lambda$ is set to $10^{-2}$ for all datasets in the follow-up experiments. It is noteworthy that the global ACE and global SMF rather than the local versions are adopted in this experiment, because the latter are labor intensive in searching for the optimal sizes of the concentric dual window, and their optimal results are less satisfying than those of the global versions. The number of background dictionary atoms $K$ for MSD and DM-BDL is set to {3, 30, 4, 10, 30} and {20, 20, 20, 20, 20} on the five datasets, respectively. The learning parameter $\gamma$ is set to 20 for DM-BDL on all five datasets, as suggested in [50]. As for STD and HSSD, the radii of the outer and inner rectangular windows ($R_{out}$, $R_{in}$) are set to {(9, 8), (9, 8), (9, 8), (6, 5), (3, 2)} and {(9, 8), (8, 7), (9, 8), (6, 5), (3, 2)}, respectively. The coefficient $t$ for background purification in HSSD is set to {3, 1, 3, 1, 1} after a labor-intensive search. All of the parameters in BLTSC remain the same as suggested in [49].
The detection maps and ROC curves are illustrated in Figure 6 and Figure 7, and the corresponding AUC values are listed in Table 1. Since the ranges of the detection results obtained from these detectors differ, all results are rescaled to the range of 0 to 1 with min-max normalization.
Qualitatively, as shown in Figure 6, ACE can hardly distinguish targets from the background, especially in the Los Angeles-I and Los Angeles-II datasets. In contrast, SMF, MSD, and BLTSC can better highlight targets, but their detection results are contaminated by noise, which degrades the detection performance. It should be noted that the detection maps obtained by HSSD are almost zero except for several pixels, which are the ones taken as the target dictionary given in advance. Consequently, most of the target pixels are shadowed by pixels that are relatively more similar to the target dictionary, and it is hard to determine a threshold to tell the background and targets apart. STD shows a competitive advantage in highlighting targets when compared with SMF, and its detection map is smoother at the same time. However, the background is also conspicuous, which increases the difficulty of separating targets. In the detection maps obtained by DM-BDL and the proposed DLcMD, the targets show dramatically high contrast with the background and can be clearly observed with the naked eye. However, the responses of several background pixels in DM-BDL are so strong that they may deteriorate the detection performance. In contrast, our DLcMD suppresses the background well. The reason may be that the spectral variability is alleviated by updating the target dictionary in each iteration, so DLcMD can construct a more compact representation. Generally speaking, DLcMD outperforms the other detectors in highlighting targets while suppressing the background.
For the San Diego-I dataset, ACE, SMF, and MSD can detect almost 33% of the targets without misclassifying any background pixels. However, their detection results are surpassed by the other detectors once the FPR reaches a certain value. STD shows the worst performance among the detectors, as its ROC curve is lower than the others most of the time. DM-BDL and HSSD are located to the bottom right of our DLcMD most of the time and have lackluster performance. The TPR of BLTSC increases dramatically when the FPR is greater than $10^{-3}$, which results in a high AUC value. However, DLcMD yields a lower false alarm rate when detecting all of the target pixels, and it thus obtains the highest AUC value among the eight detectors, which is nearly 9% higher than the average value of the other seven detectors, as illustrated in Table 1.
For the San Diego-II dataset, DLcMD can detect almost half of the targets when FPR equals 0, and all of the target pixels are found with the lowest FPR. As the ROC curves of these detectors cross each other, it is difficult to judge which one performs better than the others. The AUC values in Table 1 also confirm that all of these detectors show inspiring detection performances, yet DLcMD still yields a higher value than other detectors.
For the Los Angeles-I dataset, the ROC curve of DLcMD is located to the upper left of the others most of the time, as shown in Figure 7c. HSSD and DM-BDL show competitive performance with DLcMD, whereas they require a higher FPR to find all the target pixels in the scene. The background is complicated in this dataset; thus the AUC values decrease slightly for all of the detectors, including DLcMD. Furthermore, STD, HSSD, and DM-BDL are all based on SR theory, but they show apparent differences in detecting targets. We think the background dictionary purification procedure in HSSD and DM-BDL helps them better overcome this problem.
For the Los Angeles-II dataset, DLcMD yields higher TPR when the FPR is fixed, as shown in Figure 7d. However, ACE and SMF fail to detect the targets in the scene and thus show the worst performances. It is obvious that all the detectors except for DM-BDL and DLcMD can hardly distinguish all the targets from the background, as they require higher FPR when the TPR reaches 1, which means that the targets are covered up by the background.
For the Texas Coast dataset, DLcMD outperforms the other detectors, as its ROC curve is closest to the upper left side, and DLcMD can detect all the targets with a lower false alarm rate, as illustrated in Figure 7e. We can also find that SMF has the worst performance on this dataset, as its ROC curve shows that SMF misses some of the target pixels even at high FPR. Nevertheless, the detectors other than SMF show satisfactory detection performance, as listed in Table 1. Still, the proposed DLcMD yields the highest AUC value among the eight detectors.
In a nutshell, the proposed DLcMD outperforms the other state-of-the-art detectors on these five hyperspectral datasets. The reasons can be summarized as follows. First, LRaSMD is utilized in DLcMD to separate targets from the background. Compared with STD and HSSD, LRaSMD is more powerful in splitting the sparse portion from the redundant background, and it is also less sensitive to the imbalanced amounts of target and background pixels. A GLRT-based detector derived from a hypothesis model specially designed for LRaSMD is introduced to further address this problem. Moreover, dictionary learning is incorporated into LRaSMD. Spectral variability deteriorates the detection performance, which is reflected in the other seven hyperspectral target detectors; by exploiting dictionary learning in DLcMD, we obtain a more compact representation. The detection maps in Figure 6 show that target pixels share almost the same response to the detector and have high contrast with the background, which is beneficial for separating the two parts from each other.

5. Conclusions

In this paper, a matrix decomposition-based detector named DLcMD is proposed for hyperspectral target detection. Aiming to alleviate the negative impacts of spectral variability and the imbalanced amount of training samples, we adopt the following strategies to improve the detection performance. First, LRaSMD rather than SR is used to represent the pixels because it is superior in separating the sparse part from the abundant background. After that, a hypothesis model specially designed for LRaSMD is proposed, and then a GLRT-based detector is derived for hyperspectral target detection. Last but not least, the dictionary learning theory is introduced into DLcMD to alleviate the negative impact of spectral variability and construct a more compact representation to better separate targets from the background.
Extensive experiments are conducted on five widely used hyperspectral datasets to verify the superiority of our DLcMD detector in hyperspectral target detection. There is only one parameter in DLcMD that needs fine-tuning, which is an additional advantage of the proposed target detector. Further, DLcMD is insensitive to this tradeoff parameter when the background is not very complicated, which indicates that DLcMD has the potential to be applied in real scenes. The proposed DLcMD is then compared with seven other hyperspectral target detectors. The detection maps demonstrate the advantage of DLcMD in alleviating the impact of spectral variability. To quantitatively evaluate the performance, the ROC curves and AUC values are also introduced in the experiments. The ROC curves of DLcMD are located on the upper left side most of the time, and the AUC values are nearly 9%, 1%, 8%, 16%, and 8% higher than the average values of the other seven detectors on the five datasets, which reveals the superiority of DLcMD for hyperspectral target detection. However, it should be mentioned that the proposed detector is sensitive to the tradeoff parameter when the background is complicated. How to automatically determine the optimal parameters for the proposed method will be the focus of our future work.

Author Contributions

Conceptualization, Y.Y.; methodology, Y.Y.; software, Y.Y.; data curation, M.W.; writing—original draft preparation, Y.Y.; writing—review and editing, G.F. and W.L.; supervision, M.W.; project administration, Y.M.; funding acquisition, X.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China No. 61903279 and Zhuhai Basic and Applied Basic Research Foundation No. ZH22017003200010PWC.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mei, X.; Ma, Y.; Li, C.; Fan, F.; Huang, J.; Ma, J. Robust GBM hyperspectral image unmixing with superpixel segmentation based low rank and sparse representation. Neurocomputing 2018, 275, 2783–2797. [Google Scholar] [CrossRef]
  2. Jin, Q.; Ma, Y.; Pan, E.; Fan, F.; Huang, J.; Li, H.; Sui, C.; Mei, X. Hyperspectral Unmixing with Gaussian Mixture Model and Spatial Group Sparsity. Remote Sens. 2019, 11, 2434. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Du, B.; Zhang, L.; Liu, T. Joint Sparse Representation and Multitask Learning for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2017, 55, 894–906. [Google Scholar] [CrossRef]
  4. Cao, Z.; Li, X.; Feng, Y.; Chen, S.; Xia, C. ContrastNet: Unsupervised feature learning by autoencoder and prototypical contrastive learning for hyperspectral imagery classification. Neurocomputing 2021, 460, 71–83. [Google Scholar] [CrossRef]
  5. Xu, M.; Liu, H.; Beck, R.; Lekki, J.; Yang, B.; Shu, S.; Liu, Y.; Benko, T.; Anderson, R.; Tokars, R.; et al. Regionally and Locally Adaptive Models for Retrieving Chlorophyll-a Concentration in Inland Waters From Remotely Sensed Multispectral and Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4758–4774. [Google Scholar] [CrossRef]
  6. Li, N.; Huang, X.; Zhao, H.; Qiu, X.; Geng, R.; Jia, X.; Wang, D. Multiparameter Optimization for Mineral Mapping Using Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 1348–1357. [Google Scholar] [CrossRef]
  7. Adao, T.; Hruska, J.; Padua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1100. [Google Scholar] [CrossRef]
  8. Ma, Y.; Fan, G.; Jin, Q.; Huang, J.; Mei, X.; Ma, J. Hyperspectral Anomaly Detection via Integration of Feature Extraction and Background Purification. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1436–1440. [Google Scholar] [CrossRef]
  9. Lei, J.; Li, M.; Xie, W.; Li, Y.; Jia, X. Spectral mapping with adversarial learning for unsupervised hyperspectral change detection. Neurocomputing 2021, 465, 71–83. [Google Scholar] [CrossRef]
10. Li, Y.; Melgani, F.; He, B. CSVM architectures for pixel-wise object detection in high-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6059–6070.
11. Karoui, M.S.; Benhalouche, F.Z.; Deville, Y.; Djerriri, K.; Briottet, X.; Houet, T.; Le Bris, A.; Weber, C. Partial linear NMF-based unmixing methods for detection and area estimation of photovoltaic panels in urban hyperspectral remote sensing data. Remote Sens. 2019, 11, 2164.
12. Li, W.; Du, Q. Collaborative Representation for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1463–1474.
13. Zhang, Y.; Du, B.; Zhang, L.; Wang, S. A Low-Rank and Sparse Matrix Decomposition-Based Mahalanobis Distance Method for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1376–1389.
14. Du, B.; Zhang, L. A Discriminative Metric Learning Based Anomaly Detection Method. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6844–6857.
15. Ma, J.; Tang, L.; Fan, F.; Huang, J.; Mei, X.; Ma, Y. SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer. IEEE/CAA J. Autom. Sin. 2022, 9, 1200–1217.
16. Yang, X.; Dong, M.; Wang, Z.; Gao, L.; Zhang, L.; Xue, J. Data-augmented matched subspace detector for hyperspectral subpixel target detection. Pattern Recognit. 2020, 106, 107464.
17. Rambhatla, S.; Li, X.; Ren, J.; Haupt, J. A Dictionary-Based Generalization of Robust PCA with Applications to Target Localization in Hyperspectral Imaging. IEEE Trans. Signal Process. 2020, 68, 1760–1775.
18. Zhang, G.; Zhao, S.; Li, W.; Du, Q.; Ran, Q.; Tao, R. HTD-Net: A Deep Convolutional Neural Network for Target Detection in Hyperspectral Imagery. Remote Sens. 2020, 12, 1489.
19. Farrand, W.H.; Harsanyi, J.C. Mapping the distribution of mine tailings in the Coeur d’Alene River Valley, Idaho, through the use of a constrained energy minimization technique. Remote Sens. Environ. 1997, 59, 64–76.
20. Zou, Z.; Shi, Z. Hierarchical Suppression Method for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 330–342.
21. Ren, H.; Chang, C.-I. Target-constrained interference-minimized approach to subpixel target detection for hyperspectral images. Opt. Eng. 2000, 39, 3138–3145.
22. Manolakis, D.; Marden, D.; Shaw, G.A. Hyperspectral image processing for automatic target detection applications. J. Linc. Lab. 2003, 14, 79–116.
23. Manolakis, D.; Shaw, G.; Keshava, N. Comparative analysis of hyperspectral adaptive matched filter detectors. In Proceedings of the Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VI, Orlando, FL, USA, 24–26 April 2000; Volume 4049, pp. 2–17.
24. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Sparse representation for target detection in hyperspectral imagery. IEEE J. Sel. Top. Signal Process. 2011, 5, 629–640.
25. Wu, X.; Zhang, X.; Cen, Y. Multi-task Joint Sparse and Low-rank Representation Target Detection for Hyperspectral Image. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1756–1760.
26. Zhao, X.; Li, W.; Zhang, M.; Tao, R.; Ma, P. Adaptive Iterated Shrinkage Thresholding-Based Lp-Norm Sparse Representation for Hyperspectral Imagery Target Detection. Remote Sens. 2020, 12, 3991.
27. Zhu, D.; Du, B.; Zhang, L. Single-Spectrum-Driven Binary-Class Sparse Representation Target Detector for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1487–1500.
28. Zhou, Z.; Li, X.; Wright, J.; Candès, E.J.; Ma, Y. Stable principal component pursuit. In Proceedings of the IEEE ISIT, Austin, TX, USA, 13–18 June 2010; pp. 1518–1522.
29. Schweizer, S.M.; Moura, J.M.F. Efficient detection in hyperspectral imagery. IEEE Trans. Image Process. 2001, 10, 584–597.
30. Du, B.; Zhang, Y.; Zhang, L.; Tao, D. Beyond the Sparsity-Based Target Detector: A Hybrid Sparsity and Statistics-Based Detector for Hyperspectral Images. IEEE Trans. Image Process. 2016, 25, 5345–5357.
31. Zhang, Y.; Du, B.; Zhang, L. A sparse representation-based binary hypothesis model for target detection in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1346–1354.
32. Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J. Is there a best hyperspectral detection algorithm? Proc. SPIE 2009, 7334, 733402.
33. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Sparse Transfer Manifold Embedding for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1030–1043.
34. Zhou, T.; Tao, D. GoDec: Randomized Low-rank and Sparse Matrix Decomposition in Noisy Case. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011; pp. 33–40.
35. Ma, Y.; Li, C.; Mei, X.; Liu, C.; Ma, J. Robust Sparse Hyperspectral Unmixing with ℓ2,1 Norm. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1227–1239.
36. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A.G. Infrared Patch-Image Model for Small Target Detection in a Single Image. IEEE Trans. Image Process. 2013, 22, 4996–5009.
37. Wright, J.; Ganesh, A.; Rao, S.; Peng, Y.; Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 7–10 December 2009; pp. 2080–2088.
38. Cheng, T.; Wang, B. Graph and Total Variation Regularized Low-Rank Representation for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 391–406.
39. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
40. Lin, Z.; Liu, R.; Su, Z. Linearized alternating direction method with adaptive penalty for low-rank representation. In Proceedings of the Advances in Neural Information Processing Systems 24 (NIPS 2011), Granada, Spain, 12–14 December 2011; pp. 612–620.
41. Kelly, E.J. An Adaptive Detection Algorithm. IEEE Trans. Aerosp. Electron. Syst. 1986, 22, 115–127.
42. Reed, I.S.; Yu, X. Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1760–1770.
43. Yang, S.; Shi, Z.; Tang, W. Robust Hyperspectral Image Target Detection Using an Inequality Constraint. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3389–3404.
44. Wang, T.; Du, B.; Zhang, L. An Automatic Robust Iteratively Reweighted Unstructured Detector for Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2367–2382.
45. Niu, Y.; Wang, B. Extracting Target Spectrum for Hyperspectral Target Detection: An Adaptive Weighted Learning Method Using a Self-Completed Background Dictionary. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1604–1617.
46. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982.
47. Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184.
48. Scharf, L.L.; Friedlander, B. Matched subspace detectors. IEEE Trans. Signal Process. 1994, 42, 2146–2157.
49. Xie, W.; Zhang, X.; Li, Y.; Wang, K.; Du, Q. Background Learning Based on Target Suppression Constraint for Hyperspectral Target Detection. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 5887–5897.
50. Cheng, T.; Wang, B. Decomposition Model with Background Dictionary Learning for Hyperspectral Target Detection. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2021, 14, 1872–1884.
51. Heinz, D.C.; Chang, C.-I. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545.
Figure 1. Process of the LRaSMD algorithm. L is the low-rank part corresponding to the background. S is the sparse part corresponding to the target. N stands for the noise matrix.
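For readers who want to experiment with the kind of decomposition pictured in Figure 1, the following is a minimal Python sketch of splitting a bands-by-pixels matrix X into a low-rank part L, a sparse part S, and a noise residual N. The rank bound r, the threshold lam, and the iteration count are illustrative placeholders, and the generic alternating SVD/soft-thresholding scheme is not the authors' exact LRaSMD solver or the dictionary-learning step of DLcMD.

```python
# Minimal sketch of a low-rank + sparse + noise split, X ≈ L + S + N.
# X: bands x pixels matrix; r, lam, n_iter are illustrative, not the paper's settings.
import numpy as np

def lrasmd_sketch(X, r=2, lam=0.1, n_iter=30):
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of X - S via truncated SVD.
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :r] * sv[:r]) @ Vt[:r, :]
        # Sparse step: soft-threshold the residual; the threshold plays the role
        # of a tradeoff parameter deciding how much energy goes to the sparse part.
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    N = X - L - S  # whatever is left over is treated as noise
    return L, S, N
```

In this toy solver, a larger threshold leaves S sparser and pushes more energy toward L and N, which mirrors the behaviour of the tradeoff parameter λ examined in Figures 4 and 5.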
Figure 2. Flowchart of the proposed algorithm.
Figure 3. Pseudocolor images and the corresponding ground truth of the five HSI datasets. (a) San Diego-I. (b) San Diego-II. (c) Los Angeles-I. (d) Los Angeles-II. (e) Texas Coast.
Figure 4. Low-rank part and sparse part with different settings of the tradeoff parameter λ. (a) Results when λ is set properly. (b) Results when λ is set improperly.
Figure 5. AUC values with respect to the positive tradeoff parameter λ. (a) San Diego-I; (b) San Diego-II; (c) Los Angeles-I; (d) Los Angeles-II; (e) Texas Coast.
Figure 6. Hyperspectral target detection maps on the five datasets. (a) San Diego-I. (b) San Diego-II. (c) Los Angeles-I. (d) Los Angeles-II. (e) Texas Coast.
Figure 7. ROC curves of (P_d, P_f) on the five datasets. (a) San Diego-I. (b) San Diego-II. (c) Los Angeles-I. (d) Los Angeles-II. (e) Texas Coast.
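As a companion to Figure 7, the sketch below shows how a (P_d, P_f) ROC curve can be traced by sweeping a threshold over a detection score map against a binary ground-truth mask. The array names and the 200-point threshold grid are illustrative assumptions, not the paper's evaluation code.

```python
# Illustrative ROC sketch: sweep a threshold over detection scores and record
# (P_f, P_d) pairs. `scores` and `truth` are placeholder inputs, not the paper's data.
import numpy as np

def roc_curve(scores, truth, n_thresholds=200):
    s = scores.ravel()
    t = truth.ravel().astype(bool)
    taus = np.linspace(s.max(), s.min(), n_thresholds)
    pd = np.array([np.count_nonzero((s >= tau) & t) for tau in taus]) / np.count_nonzero(t)
    pf = np.array([np.count_nonzero((s >= tau) & ~t) for tau in taus]) / np.count_nonzero(~t)
    return pf, pd  # P_f on the x-axis, P_d on the y-axis, as in Figure 7

# Usage (placeholder arrays): pf, pd = roc_curve(score_map, ground_truth_mask)
```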
Table 1. AUC values of the five datasets.

Algorithms   San Diego-I   San Diego-II   Los Angeles-I   Los Angeles-II   Texas Coast
ACE          0.8501        0.9806         0.8835          0.5341           0.9938
SMF          0.9389        0.9864         0.8213          0.5973           0.6492
MSD          0.9691        0.9926         0.9163          0.9806           0.9954
STD          0.8192        0.9720         0.7930          0.9543           0.9565
HSSD         0.9662        0.9948         0.9618          0.9878           0.9186
BLTSC        0.9865        0.9957         0.9445          0.9704           0.9747
DM-BDL       0.9752        0.9835         0.9600          0.9960           0.9970
DLcMD        0.9892        0.9968         0.9716          0.9966           0.9985
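AUC scores of the kind listed in Table 1 can also be computed without explicitly tracing the ROC curve, via the rank-statistic (Mann–Whitney) equivalence. The sketch below assumes continuous scores (ties are ignored) and placeholder inputs; it is offered only as an illustration, not as the authors' evaluation code.

```python
# Illustrative AUC via the rank-statistic equivalence. `scores` is the flattened
# detection map, `labels` the flattened 0/1 ground truth. Ties are not handled.
import numpy as np

def auc(scores, labels):
    scores = np.asarray(scores, dtype=float).ravel()
    labels = np.asarray(labels).ravel().astype(bool)
    ranks = np.empty(scores.size, dtype=float)
    ranks[np.argsort(scores)] = np.arange(1, scores.size + 1)  # 1-based ascending ranks
    n_pos = np.count_nonzero(labels)
    n_neg = scores.size - n_pos
    # Probability that a random target pixel outscores a random background pixel.
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```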