
CN115439431A - Stack type noise reduction self-encoder based on spectrum loss function and hyperspectral anomaly detection method based on collaborative representation - Google Patents


Info

Publication number
CN115439431A
CN115439431A (application CN202211041079.3A)
Authority
CN
China
Prior art keywords
encoder
spectral
hidden layer
layer
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211041079.3A
Other languages
Chinese (zh)
Inventor
李欢
朱贺隆
宋江鲁奇
周慧鑫
李幸
滕翔
罗云麟
甘长国
张伟鹏
秦翰林
王炳健
梅峻溪
张嘉嘉
鲍蕴昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN202211041079.3A
Publication of CN115439431A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a stacked noise reduction self-encoder based on a spectral loss function and a collaborative-representation hyperspectral anomaly detection method. Aiming at the large data volume and abundant redundant information of hyperspectral images, the method extracts features of the hyperspectral image without degrading the detection accuracy of the algorithm. Compared with various representative algorithms, the method achieves better detection performance.

Description

Stack type noise reduction self-encoder based on spectrum loss function and hyperspectral anomaly detection method based on collaborative representation
Technical Field
The invention relates to the technical field of image detection, in particular to a stacked noise reduction self-encoder based on a spectral loss function and a hyperspectral anomaly detection method based on collaborative representation.
Background
A hyperspectral image adds a spectral dimension to the traditional remote sensing image: it is a three-dimensional data structure containing both the spatial characteristics of targets and their spectral information. Hyperspectral image detection techniques are generally divided into target detection and anomaly detection. Because anomalous targets can be detected without prior knowledge of the target's spectral signature, hyperspectral anomaly detection is widely applied in agriculture, meteorology, the military, and other fields, and research on it is becoming increasingly important.
Hyperspectral image anomaly detection mainly comprises two steps: background modeling and anomaly detection. Background modeling establishes a background measurement standard by finding a way to evaluate the background signal in the hyperspectral image; during anomaly detection, each pixel is evaluated against this standard, and a pixel that does not belong to the background is declared anomalous. The quality of the background model directly affects the final detection result. Meanwhile, a hyperspectral image contains spectral information from hundreds of bands, which causes information redundancy and greatly disturbs background construction. Therefore, how to remove redundant information from the original data, reduce the data dimensionality, and lower the processing complexity of anomaly detection is a key problem in the field of hyperspectral anomaly detection. The present invention makes a series of improvements in view of the above problems.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a stacked noise reduction self-encoder based on a spectrum loss function and a hyperspectral anomaly detection method based on collaborative representation.
In order to achieve the purpose, the invention adopts the technical scheme that:
a hyperspectral anomaly detection method based on a stacked noise reduction self-encoder and collaborative representation of a spectral loss function comprises the following steps:
step 1, constructing a stacked noise reduction self-encoder based on a spectral loss function;
step 2, input the original hyperspectral image X ∈ R^{M×N×B} into the stacked noise reduction self-encoder for network training; M is the number of rows of the hyperspectral image, N the number of columns, and B the number of spectral bands;
step 3, after network training is complete, input the original hyperspectral image into the stacked noise reduction self-encoder network again and select the result of the middle hidden layer as the output X′ ∈ R^{M×N×n}, where n is the number of spectral bands after feature extraction;
step 4, use a collaborative representation algorithm to reconstruct the hidden-layer output hyperspectral image X′ ∈ R^{M×N×n}, obtaining a reconstructed pixel X′(i, j, :) for each pixel X(i, j, :);
step 5, compute the residual between the original pixel X(i, j, :) and the reconstructed pixel X′(i, j, :) using the L2 norm, judge whether the pixel under test is anomalous according to a set threshold, and output the detection result.
Compared with the prior art, the invention has the beneficial effects that:
the spectrum dimension screening of the hyperspectral image is realized by designing the stacked self-encoder with the gradually reduced neuron number of the hidden layer, and the stacked noise reduction self-encoder network with stronger generalization capability is constructed by combining the characteristics of the noise reduction self-encoder. In addition, for spectral feature information in the image, a spectral information divergence function and a spectral angle function are introduced to replace a traditional mean square error function to serve as a loss function of the self-encoder, so that the feature extraction capability of the self-encoder for the spectral information of the hyperspectral image is further improved. And finally, combining a collaborative representation detector to perform anomaly detection on the dimensionality reduction hyperspectral image obtained by extracting the network characteristics of the self-encoder. Experiments on different data sets show that compared with other comparison algorithms, the researched novel stack-type noise reduction self-encoder based on the spectral loss function successfully removes a large amount of redundant and noise interference information in a hyperspectral image, and greatly improves the detection efficiency and the detection result of the collaborative expression abnormal part detection algorithm.
Drawings
Fig. 1 is a pseudo-color image.
FIG. 2 is the reference anomalous-target distribution map.
Fig. 3 is a block flow diagram in an example of the invention.
Fig. 4 is a pseudo-color image of the Pavia bridge after dimension reduction.
FIG. 5 is a diagram of the detection result of the CSA-SDAE-CR algorithm on the Pavia bridge image.
FIG. 6 is a graph showing the detection result of the SID-SDAE-CR algorithm on the Pavia bridge image.
FIG. 7 is a three-dimensional surface diagram of the detection result of the CSA-SDAE-CR algorithm on the Pavia bridge image.
FIG. 8 is a three-dimensional surface diagram of the detection result of the SID-SDAE-CR algorithm on the Pavia bridge image.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
To demonstrate the effectiveness of the method of the present invention, Pavia scene image data was used. The experimental Pavia scene image is urban scene data collected by the German airborne spectral imager ROSIS, with a spatial resolution of 1.3 m and a spectral range of 430 to 860 nm. The image selected for the experiment is 108 × 120 pixels with 102 bands; the anomalous targets are mainly vehicles and street lamps on a bridge. Fig. 1 is a pseudo-color image of the original hyperspectral image and Fig. 2 the reference anomalous-target distribution map; this image is referred to as the Pavia bridge image in the subsequent experiments. Referring to Fig. 3, the basic anomaly detection flow for the Pavia scene image data is as follows:
step 1, constructing a stacked noise reduction self-encoder based on a spectral loss function.
In the present invention, the spectral loss function may be either a spectral information divergence loss function or a spectral angle cosine loss function. This step therefore constructs both a stacked noise reduction self-encoder based on the spectral information divergence loss function (SID-SDAE) and one based on the spectral angle cosine loss function (CSA-SDAE).
For SID-SDAE, the construction process is as follows:
step 1.11, compute the spectral information divergence SID between two spectral vectors A and B as
SID(A,B)=D(A||B)+D(B||A)
where D(A‖B) and D(B‖A) are the relative entropies between spectral vectors A and B and between B and A, respectively, computed as
$$D(A\|B)=\sum_{t=1}^{T}p_{t}\log\frac{p_{t}}{q_{t}},\qquad D(B\|A)=\sum_{t=1}^{T}q_{t}\log\frac{q_{t}}{p_{t}}$$
Step 1.12, substitute the relative entropies between A and B and between B and A into the spectral information divergence SID between A and B:
$$\mathrm{SID}(A,B)=\sum_{t=1}^{T}p_{t}\log\frac{p_{t}}{q_{t}}+\sum_{t=1}^{T}q_{t}\log\frac{q_{t}}{p_{t}}$$
where T is the number of spectral bands, and p_t and q_t are the probabilities of A and B in the t-th band:
$$p_{t}=\frac{A_{t}}{\sum_{i=1}^{T}A_{i}},\qquad q_{t}=\frac{B_{t}}{\sum_{i=1}^{T}B_{i}}$$
where A_t and B_t are the spectral values of the spectral vectors A and B in the t-th band;
step 1.13, substituting p_t and q_t into the spectral information divergence SID between A and B gives
$$\mathrm{SID}(A,B)=\sum_{t=1}^{T}\frac{A_{t}}{\sum_{i=1}^{T}A_{i}}\log\frac{A_{t}\sum_{i=1}^{T}B_{i}}{B_{t}\sum_{i=1}^{T}A_{i}}+\sum_{t=1}^{T}\frac{B_{t}}{\sum_{i=1}^{T}B_{i}}\log\frac{B_{t}\sum_{i=1}^{T}A_{i}}{A_{t}\sum_{i=1}^{T}B_{i}}$$
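The SID computation of steps 1.11 to 1.13 can be sketched in NumPy. This is an illustrative implementation, not the patent's code; the small `eps` guard against division by zero and log(0) is an added assumption:

```python
import numpy as np

def spectral_information_divergence(a, b, eps=1e-12):
    """Symmetric SID between two spectral vectors of T non-negative band values."""
    p = a / (a.sum() + eps)                            # probability mass of A over bands
    q = b / (b.sum() + eps)                            # probability mass of B over bands
    d_ab = np.sum(p * np.log((p + eps) / (q + eps)))   # D(A||B)
    d_ba = np.sum(q * np.log((q + eps) / (p + eps)))   # D(B||A)
    return d_ab + d_ba

x = np.array([0.2, 0.5, 0.3])
print(spectral_information_divergence(x, x))  # identical spectra -> 0.0
```

By construction the measure is symmetric in its arguments and zero only for identical spectral distributions, which is what makes it usable as a reconstruction loss.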
Step 1.14, introduce the spectral information divergence into the loss function of the self-encoder, with A corresponding to the reconstructed spectral vector f(z^{(L)}) output by the self-encoder and B to the input spectral vector y. The spectral information divergence E_SID(f(z^{(L)}), y) between the reconstructed spectral vector f(z^{(L)}) and the input spectral vector y can then be expressed as
$$E_{SID}\!\left(f(z^{(L)}),y\right)=\sum_{k=1}^{T}\frac{f\!\left(z_{k}^{(L)}\right)}{\sum_{d=1}^{T}f\!\left(z_{d}^{(L)}\right)}\log\frac{f\!\left(z_{k}^{(L)}\right)\sum_{d=1}^{T}y_{d}}{y_{k}\sum_{d=1}^{T}f\!\left(z_{d}^{(L)}\right)}+\sum_{k=1}^{T}\frac{y_{k}}{\sum_{d=1}^{T}y_{d}}\log\frac{y_{k}\sum_{d=1}^{T}f\!\left(z_{d}^{(L)}\right)}{f\!\left(z_{k}^{(L)}\right)\sum_{d=1}^{T}y_{d}}$$
where T is the number of spectral bands, L the number of self-encoder layers, k and d band indices, and f(·) the activation function; z_k^{(L)} and z_d^{(L)} denote the values of z^{(L)} in bands k and d, and y_k and y_d the values of the input spectral vector in bands k and d. The z^{(L)} in f(z^{(L)}) is given by
$$z^{(l)}=W^{(l-1)}a^{(l-1)}+b^{(l-1)},\qquad a^{(l)}=f\!\left(z^{(l)}\right),\qquad a^{(1)}=x$$
where a^{(l)} is the spectral vector output by layer l of the self-encoder, x is the original input data, l = L, L−1, L−2, …, 2, and W^{(l−1)} and b^{(l−1)} are the weight matrix and bias matrix of layer l−1 of the self-encoder;
step 1.15, finally introduce a regularization term; the spectral information divergence loss function E_SID(W, b) is expressed as
$$E_{SID}(W,b)=\frac{1}{MN}\sum_{m=1}^{MN}E_{SID}\!\left(f(z^{(L)})^{(m)},y^{(m)}\right)+\frac{\lambda}{2}\sum_{l=1}^{L-1}\sum_{i=1}^{I}\sum_{j=1}^{J}\left(W_{ji}^{(l)}\right)^{2}$$
where MN (from R^{M×N×B}) is the number of pixels, λ the regularization parameter, I and J the numbers of units in layers l and l+1, and W and b the weight and bias matrices;
for CSA-SDAE, the construction process is as follows:
step 1.21, the spectral angle θ_SA is calculated as
$$\theta_{SA}=\arccos\!\left(\frac{\sum_{t=1}^{T}A_{t}B_{t}}{\sqrt{\sum_{t=1}^{T}A_{t}^{2}}\sqrt{\sum_{t=1}^{T}B_{t}^{2}}}\right)$$
where each spectral vector has T bands. For computational convenience, the cosine of the spectral angle, cos θ_SA, is typically used instead; it is calculated as
$$\cos\theta_{SA}=\frac{\sum_{t=1}^{T}A_{t}B_{t}}{\sqrt{\sum_{t=1}^{T}A_{t}^{2}}\sqrt{\sum_{t=1}^{T}B_{t}^{2}}}$$
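For illustration, the spectral angle and its cosine can be sketched as follows (not the patent's code; the clip guard against round-off outside [−1, 1] is an added numerical safeguard):

```python
import numpy as np

def spectral_angle_cosine(a, b):
    """cos(theta_SA) between two T-band spectral vectors a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def spectral_angle(a, b):
    """theta_SA in radians."""
    return float(np.arccos(np.clip(spectral_angle_cosine(a, b), -1.0, 1.0)))

a = np.array([1.0, 2.0, 3.0])
print(spectral_angle_cosine(a, 2 * a))  # parallel spectra -> 1.0
```

Note that the cosine is invariant to scaling of either spectrum, which is why it measures spectral shape rather than brightness.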
Step 1.22, the spectral angle between the reconstructed spectral vector output by the self-encoder network and the input spectral vector is computed and incorporated into the reconstruction loss function, giving
$$E_{SA}\!\left(f(z^{(L)}),y\right)=\arccos\!\left(\frac{\sum_{t=1}^{T}f\!\left(z_{t}^{(L)}\right)y_{t}}{\sqrt{\sum_{t=1}^{T}f\!\left(z_{t}^{(L)}\right)^{2}}\sqrt{\sum_{t=1}^{T}y_{t}^{2}}}\right)$$
step 1.23, including the regularization term gives the reconstruction spectral angle loss function E_SA(W, b):
$$E_{SA}(W,b)=\frac{1}{MN}\sum_{m=1}^{MN}E_{SA}\!\left(f(z^{(L)})^{(m)},y^{(m)}\right)+\frac{\lambda}{2}\sum_{l=1}^{L-1}\sum_{i=1}^{I}\sum_{j=1}^{J}\left(W_{ji}^{(l)}\right)^{2}$$
The stacked noise reduction self-encoder of the invention is constructed by combining a stacked self-encoder containing five hidden layers with a noise reduction self-encoder that adds random Gaussian noise. In the stacked noise reduction self-encoder, the input-layer data is corrupted with random Gaussian noise before being fed to the network for training, and the output layer targets the input-layer data. The numbers of neurons decrease from hidden layer 1 to hidden layer 2 to hidden layer 3; hidden layer 1 and hidden layer 5, and hidden layer 2 and hidden layer 4, are symmetric pairs set to equal numbers of neurons.
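The symmetric layer layout described above can be sketched as a single denoising forward pass with randomly initialized weights. This is an illustrative sketch only: the widths 100/50/30 are the values chosen later in this embodiment, and the initialization scale and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
B, n = 102, 30                       # input bands / bottleneck width (Pavia values)
sizes = [B, 100, 50, n, 50, 100, B]  # symmetric: h1/h5 = 100, h2/h4 = 50, h3 = n

# randomly initialised (W, b) for each of the six layer-to-layer mappings
params = [(rng.standard_normal((o, i)) * 0.1, np.zeros(o))
          for i, o in zip(sizes[:-1], sizes[1:])]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, noise_std=0.1):
    """Denoising pass: corrupt the input, return (bottleneck, reconstruction)."""
    a = x + rng.normal(0.0, noise_std, x.shape)  # random Gaussian corruption
    hidden = None
    for idx, (W, b) in enumerate(params):
        a = sigmoid(W @ a + b)
        if idx == 2:                 # output of hidden layer 3 (the bottleneck)
            hidden = a
    return hidden, a

h, x_rec = forward(rng.random(B))
print(h.shape, x_rec.shape)          # (30,) (102,)
```

The bottleneck output `h` is what step 3 later extracts as the dimension-reduced image.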
Step 2, input the original hyperspectral image X ∈ R^{M×N×B} into the constructed stacked noise reduction self-encoder for network training, where M is the number of rows of the hyperspectral image, N the number of columns, and B the number of spectral bands. In this step, the training process is as follows:
and 2.1, training three self-coders from top to bottom by a back propagation method. Where the first is from the input layer to the hidden layer 1 and hidden layer 5 to the output layer, the second is from hidden layer 1 to hidden layer 2 and hidden layer 4 to hidden layer 5, and the third is the input and output of the most intermediate hidden layer 3.
First, train the parameters of the first self-encoder, from the input layer to hidden layer 1 and from hidden layer 5 to the output layer, to reconstruct the input-layer image corrupted with Gaussian noise; after training, fix the trained parameters W_h1, b_h1, W_h6, b_h6. W_h1, b_h1 are the parameters from the input layer to hidden layer 1, and W_h6, b_h6 the parameters from hidden layer 5 to the output layer.
Step 2.2, train the parameters of the second self-encoder, from hidden layer 1 to hidden layer 2 and from hidden layer 4 to hidden layer 5, to reconstruct the output image of hidden layer 1; after training, fix the trained parameters W_h2, b_h2, W_h5, b_h5. W_h2, b_h2 are the parameters from hidden layer 1 to hidden layer 2, and W_h5, b_h5 the parameters from hidden layer 4 to hidden layer 5.
Step 2.3, train the input and output parameters of the third self-encoder, i.e. the middle hidden layer 3, to reconstruct the output image of hidden layer 2; after training, fix the trained parameters W_h3, b_h3, W_h4, b_h4. W_h3, b_h3 are the input parameters of hidden layer 3, and W_h4, b_h4 its output parameters.
In this step, the numbers of neurons in the input and output layers of the self-encoder are designed to equal the number of spectral bands of the input original hyperspectral image. Since the purpose of the network is data feature extraction, the neuron counts of the 1st to 3rd self-encoders are reduced layer by layer. To avoid too many free variables, the numbers of coding units in the 1st and 2nd self-encoders are fixed at 100 and 50, respectively; that is, hidden layers 1 and 5 have 100 neurons and hidden layers 2 and 4 have 50. Because the neuron count of the middlemost hidden layer 3 directly determines the spectral dimension of the final output image, seven candidate values (5, 10, 15, 20, 25, 30 and 35) are tried as the neuron count of hidden layer 3. The Pavia image is trained with the two self-encoder networks, CSA-SDAE and SID-SDAE, and the value giving the best result is taken as the optimal number of neurons for hidden layer 3.
In addition, when the network is trained, because the calculation formula of the spectral information divergence (SID) contains a log operator, and the log is undefined for values less than or equal to 0, the network uses the Sigmoid activation function. Network parameters with little influence on the final detection result are set to fixed values: the learning rate is set to 10^{-3}, the number of epochs to 100, and the batch size to 400.
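Since the SID loss takes logarithms of the reconstruction, the network outputs must stay strictly positive; the Sigmoid guarantees values in the open interval (0, 1). A minimal illustrative check:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-10, 10, 5)
out = sigmoid(z)
print(out.min() > 0.0, out.max() < 1.0)  # True True: safe inputs for log()
```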
Clearly, the detection performance of the stacked noise reduction self-encoder algorithms under the two loss functions is closely related to the number of neurons in the middle hidden layer. For the Pavia image, when the middle hidden layer has fewer than 20 neurons, the AUC values of both self-encoders are unsatisfactory, indicating that the feature information in the Pavia image needs more neurons to represent. As the neuron count increases, both AUC values rise gradually; at 25 and 30 neurons the curves essentially stabilize and the AUC reaches its maximum. Considering the Pavia image comprehensively, the number of neurons in the middlemost hidden layer of the stacked noise reduction self-encoders of both loss functions is therefore set to 30.
Step 2.4, through the pre-training of steps 2.1 to 2.3 above, every parameter value in the network is initialized to a suitable expected value. The hidden layers with the trained parameters are then integrated and trained together to reconstruct the input-layer image corrupted with Gaussian noise, fine-tuning the previously trained parameters of each layer within a small range, after which training is complete.
Step 3, after network training is complete, input the original hyperspectral image into the stacked noise reduction self-encoder network again and select the result of the middle hidden layer as the output X′ ∈ R^{M×N×n}, where n is the number of spectral bands after feature extraction; n is 30 in this embodiment.
Step 4, use a collaborative representation algorithm to reconstruct the hidden-layer output hyperspectral image X′ ∈ R^{M×N×n}, obtaining a reconstructed pixel X′(i, j, :) for each pixel X(i, j, :). The specific steps are as follows:
step 4.1, for the pixel under test X(i, j, :), solve
$$\min_{\alpha}\ \left\|X(i,j,:)-X_{s}\alpha\right\|_{2}^{2}+\beta\left\|\alpha\right\|_{2}^{2}$$
to obtain the collaborative representation weight vector α minimizing it, where β is the Lagrange multiplier and X_s the background dictionary matrix of the hyperspectral image data.
Step 4.2, since the spectral similarity between the pixel under test and each surrounding pixel may differ, a regularization matrix Γ_y is introduced:
$$\Gamma_{y}=\mathrm{diag}\!\left(\left\|y-x_{1}\right\|_{2},\ \left\|y-x_{2}\right\|_{2},\ \ldots,\ \left\|y-x_{s}\right\|_{2}\right)$$
where x_1, x_2, …, x_s are the column vectors, i.e. the individual background pixels;
step 4.3, adding the regularization matrix Γ_y to the objective of step 4.1 yields the optimization function
$$\min_{\alpha}\ \left\|X(i,j,:)-X_{s}\alpha\right\|_{2}^{2}+\beta\left\|\Gamma_{y}\alpha\right\|_{2}^{2}$$
thereby obtaining the final weight vector
$$\alpha=\left(X_{s}^{T}X_{s}+\beta\,\Gamma_{y}^{T}\Gamma_{y}\right)^{-1}X_{s}^{T}X(i,j,:)$$
Step 4.4, using the background dictionary matrix X_s and the weight vector α, the reconstructed pixel X′(i, j, :) of pixel X(i, j, :) is obtained as
X′(i, j, :) = X_s α
In this step, the total number of pixels of the hyperspectral image data X ∈ R^{MN×B} is denoted D, i.e. D = M × N, so that X ∈ R^{D×B}. The pixels of the spatial neighborhood are determined with a sliding dual-window approach: the sliding double window consists of concentric inner and outer windows, with the outer window of length w_out and the inner window of length w_in. The pixels between the inner and outer windows are selected as background pixels, so the number of background pixels is s = w_out × w_out − w_in × w_in. The background pixels form the background dictionary matrix
$$X_{s}=\left(x_{1},\ x_{2},\ \ldots,\ x_{s}\right)$$
of size B × s, where x_i is the i-th background pixel.
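The collaborative representation reconstruction of steps 4.1 to 4.4 can be sketched in NumPy, assuming the standard distance-weighted closed form α = (X_sᵀX_s + β Γ_yᵀΓ_y)⁻¹ X_sᵀ y. The value of β and the window sizes below are illustrative assumptions, not the patent's settings:

```python
import numpy as np

def cr_reconstruct(y, Xs, beta=1e-2):
    """Collaborative-representation reconstruction of pixel y.

    y  : (n,)   spectral vector of the pixel under test
    Xs : (n, s) background dictionary, one column per dual-window background pixel
    """
    # distance-weighted Tikhonov regulariser Gamma_y = diag(||y - x_i||_2)
    gamma = np.diag(np.linalg.norm(Xs - y[:, None], axis=0))
    alpha = np.linalg.solve(Xs.T @ Xs + beta * gamma.T @ gamma, Xs.T @ y)
    return Xs @ alpha                # reconstructed pixel X'(i, j, :)

# dual-window background count: s = w_out^2 - w_in^2
w_out, w_in = 9, 5
s = w_out * w_out - w_in * w_in      # 56 background pixels
rng = np.random.default_rng(1)
Xs = rng.random((30, s))
y = Xs[:, 0]                         # a pixel drawn from the background itself
print(np.linalg.norm(y - cr_reconstruct(y, Xs)))  # small residual: y is in the dictionary
```

A background pixel is reconstructed almost exactly by its neighbors, while an anomalous pixel is not, which is precisely what the residual in step 5 exploits.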
Step 5, compute the residual between the original pixel X(i, j, :) and the reconstructed pixel X′(i, j, :) using the L2 norm, judge whether the pixel under test is anomalous according to a set threshold, and output the detection result.
Illustratively, the residual r between X(i, j, :) and its reconstructed pixel X′(i, j, :) is expressed as:
$$r=\left\|X(i,j,:)-X'(i,j,:)\right\|_{2}=\left\|X(i,j,:)-X_{s}\alpha\right\|_{2}$$
The pixels under test are classified by setting a threshold: if r is greater than the threshold, the pixel under test is anomalous; otherwise it is background.
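The residual computation and thresholding of step 5 can be sketched as follows (illustrative only; the toy cube and threshold value are invented for the example):

```python
import numpy as np

def anomaly_map(X, X_rec, threshold):
    """Per-pixel L2 residual and thresholded anomaly mask for (M, N, n) cubes."""
    r = np.linalg.norm(X - X_rec, axis=2)  # r = ||X(i,j,:) - X'(i,j,:)||_2
    return r, r > threshold                # True where the pixel is declared anomalous

# toy 2x2 cube with one pixel the reconstruction misses entirely
X = np.zeros((2, 2, 3))
X[0, 0] = [1.0, 1.0, 1.0]                  # the "anomalous" pixel
X_rec = np.zeros((2, 2, 3))
r, mask = anomaly_map(X, X_rec, threshold=0.5)
print(mask)                                # only pixel (0, 0) is flagged
```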
the result is shown in the figure, and fig. 4 is a pseudo color image of the Pavia bridge after dimension reduction; FIG. 5 is a CSA-SDAE-CR algorithm detection result diagram on a Pavia bridge image, FIG. 6 is a SID-SDAE-CR algorithm detection result diagram on the Pavia bridge image, bright spots on the two diagrams represent abnormal positions, and the rest are backgrounds; fig. 7 is a three-dimensional curved surface diagram of a detection result of a CSA-SDAE-CR algorithm on a Pavia bridge image, fig. 8 is a three-dimensional curved surface diagram of a detection result of a SID-SDAE-CR algorithm on the Pavia bridge image, the salient points on the two diagrams represent abnormal positions, and the rest is a background.

Claims (9)

1. A hyperspectral anomaly detection method based on a spectral-loss-function stacked noise reduction self-encoder and collaborative representation, characterized by comprising the following steps:
step 1, constructing a stacked noise reduction self-encoder based on a spectral loss function;
step 2, input the original hyperspectral image X ∈ R^{M×N×B} into the stacked noise reduction self-encoder for network training; M is the number of rows of the hyperspectral image, N the number of columns, and B the number of spectral bands;
step 3, after network training is complete, input the original hyperspectral image into the stacked noise reduction self-encoder network again and select the result of the middle hidden layer as the output X′ ∈ R^{M×N×n}, where n is the number of spectral bands after feature extraction;
step 4, use a collaborative representation algorithm to reconstruct the hidden-layer output hyperspectral image X′ ∈ R^{M×N×n}, obtaining a reconstructed pixel X′(i, j, :) for each pixel X(i, j, :);
step 5, compute the residual between the original pixel X(i, j, :) and the reconstructed pixel X′(i, j, :) using the L2 norm, judge whether the pixel under test is anomalous according to a set threshold, and output the detection result.
2. The stacked noise reduction self-encoder and collaborative representation-based hyperspectral anomaly detection method according to claim 1, wherein the spectral loss function is a spectral information divergence loss function or a spectral angle cosine loss function.
3. The stacked noise reduction self-encoder based on a spectral loss function and collaborative-representation hyperspectral anomaly detection method according to claim 2, wherein the spectral information divergence loss function is obtained as follows:
the spectral information divergence E_SID(f(z^{(L)}), y) between the reconstructed spectral vector f(z^{(L)}) and the input spectral vector y is expressed as:
$$E_{SID}\!\left(f(z^{(L)}),y\right)=\sum_{k=1}^{T}\frac{f\!\left(z_{k}^{(L)}\right)}{\sum_{d=1}^{T}f\!\left(z_{d}^{(L)}\right)}\log\frac{f\!\left(z_{k}^{(L)}\right)\sum_{d=1}^{T}y_{d}}{y_{k}\sum_{d=1}^{T}f\!\left(z_{d}^{(L)}\right)}+\sum_{k=1}^{T}\frac{y_{k}}{\sum_{d=1}^{T}y_{d}}\log\frac{y_{k}\sum_{d=1}^{T}f\!\left(z_{d}^{(L)}\right)}{f\!\left(z_{k}^{(L)}\right)\sum_{d=1}^{T}y_{d}}$$
where T is the number of spectral bands, L the number of self-encoder layers, k and d band indices, and f(·) the activation function; z_k^{(L)} and z_d^{(L)} denote the values of z^{(L)} in bands k and d, and y_k and y_d the values of the input spectral vector in bands k and d. The z^{(L)} in f(z^{(L)}) is given by
$$z^{(l)}=W^{(l-1)}a^{(l-1)}+b^{(l-1)},\qquad a^{(l)}=f\!\left(z^{(l)}\right),\qquad a^{(1)}=x$$
where a^{(l)} is the spectral vector output by layer l of the self-encoder, x is the original input data, l = L, L−1, L−2, …, 2, and W^{(l−1)} and b^{(l−1)} are the weight matrix and bias matrix of layer l−1 of the self-encoder;
introducing a regularization term, the spectral information divergence loss function E_SID(W, b) is expressed as:
$$E_{SID}(W,b)=\frac{1}{MN}\sum_{m=1}^{MN}E_{SID}\!\left(f(z^{(L)})^{(m)},y^{(m)}\right)+\frac{\lambda}{2}\sum_{l=1}^{L-1}\sum_{i=1}^{I}\sum_{j=1}^{J}\left(W_{ji}^{(l)}\right)^{2}$$
where MN (from R^{M×N×B}) is the number of pixels, λ the regularization parameter, I and J the numbers of units in layers l and l+1, and W and b the weight and bias matrices;
the spectral angle cosine loss function is obtained by the following method:
the spectral angle is obtained by calculating the spectral angle between the reconstructed spectral vector output from the encoder network and the input spectral vector and incorporating the spectral angle into a reconstruction loss function
Figure FDA0003820996980000023
including the regularization term gives the reconstruction spectral angle loss function E_SA(W, b):
$$E_{SA}(W,b)=\frac{1}{MN}\sum_{m=1}^{MN}E_{SA}\!\left(f(z^{(L)})^{(m)},y^{(m)}\right)+\frac{\lambda}{2}\sum_{l=1}^{L-1}\sum_{i=1}^{I}\sum_{j=1}^{J}\left(W_{ji}^{(l)}\right)^{2}$$
4. The stacked noise reduction self-encoder based on a spectral loss function and collaborative-representation hyperspectral anomaly detection method according to claim 2 or 3, wherein the stacked noise reduction self-encoder based on a spectral loss function is constructed by combining a stacked self-encoder containing five hidden layers with a noise reduction self-encoder that adds random Gaussian noise.
5. The stacked noise reduction self-encoder based on a spectral loss function and collaborative-representation hyperspectral anomaly detection method according to claim 1, wherein in the stacked noise reduction self-encoder, the input-layer data is corrupted with random Gaussian noise before being fed to the network for training, the output layer targets the input-layer data, the numbers of neurons decrease from hidden layer 1 to hidden layer 2 to hidden layer 3, and hidden layer 1 and hidden layer 5, and hidden layer 2 and hidden layer 4, are symmetric pairs set to equal numbers of neurons.
6. The spectral loss function-based stacked noise reduction self-encoder and co-representation hyperspectral anomaly detection method according to claim 1, wherein the step 2 comprises:
step 2.1, train three self-encoders from top to bottom by back-propagation: the first from the input layer to hidden layer 1 and from hidden layer 5 to the output layer, the second from hidden layer 1 to hidden layer 2 and from hidden layer 4 to hidden layer 5, and the third the input and output of the middlemost hidden layer 3; first, train the parameters of the first self-encoder to reconstruct the input-layer image corrupted with Gaussian noise and, after training, fix the trained parameters W_h1, b_h1, W_h6, b_h6; W_h1, b_h1 are the parameters from the input layer to hidden layer 1, and W_h6, b_h6 the parameters from hidden layer 5 to the output layer;
step 2.2, train the parameters of the second self-encoder to reconstruct the output image of hidden layer 1 and, after training, fix the trained parameters W_h2, b_h2, W_h5, b_h5; W_h2, b_h2 are the parameters from hidden layer 1 to hidden layer 2, and W_h5, b_h5 the parameters from hidden layer 4 to hidden layer 5;
step 2.4, train the parameters of the third self-encoder to reconstruct the output image of hidden layer 2 and, after training, fix the trained parameters W_h3, b_h3, W_h4, b_h4; W_h3, b_h3 are the input parameters of hidden layer 3, and W_h4, b_h4 its output parameters;
step 2.5, through the pre-training of steps 2.2 to 2.4, initialize each parameter value to an expected value; integrate the hidden layers with the parameters trained in steps 2.2 to 2.4 and train them together to reconstruct the input-layer image corrupted with Gaussian noise, fine-tuning the trained parameters of each layer within a small range, after which training is complete.
7. The method for detecting hyperspectral anomaly by using stacked noise reduction self-encoder and collaborative representation based on spectral loss function according to claim 1, wherein the step 4 comprises:
step 4.1, obtaining the collaborative representation weight vector α by solving the following minimization:
α = argmin_α ||X(i,j,:) - X_s α||_2^2 + β||α||_2^2
step 4.2, obtaining the reconstructed pixel X'(i,j,:) of the pixel X(i,j,:) as follows:
X'(i,j,:) = X_s α
where β is the Lagrangian multiplier and X_s is the background dictionary matrix of the hyperspectral image data.
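The minimization in step 4.1 is a ridge regression and has the closed-form solution α = (X_sᵀX_s + βI)⁻¹ X_sᵀ X(i,j,:). A NumPy sketch (the dictionary sizes and β value are illustrative assumptions, not values from the patent):

```python
import numpy as np

def cr_reconstruct(y, Xs, beta):
    """Collaborative representation of one pixel.
    y    : spectrum of the pixel under test, shape (B,)
    Xs   : background dictionary matrix, shape (B, s)
    beta : Lagrangian multiplier of the l2 penalty
    Returns the weight vector alpha and the reconstruction Xs @ alpha."""
    s = Xs.shape[1]
    # closed form of  min_a ||y - Xs a||^2 + beta ||a||^2
    alpha = np.linalg.solve(Xs.T @ Xs + beta * np.eye(s), Xs.T @ y)
    return alpha, Xs @ alpha

rng = np.random.default_rng(0)
Xs = rng.random((20, 16))        # 20 bands, 16 background pixels (assumed)
y_bg = Xs @ rng.random(16)       # a pixel lying in the background span
alpha, y_rec = cr_reconstruct(y_bg, Xs, beta=1e-6)
# a background-like pixel is reconstructed almost exactly
assert np.linalg.norm(y_bg - y_rec) < 1e-3
```

Because the dictionary contains only background pixels, a background pixel yields a small residual while an anomalous pixel cannot be represented well, which is what step 5 exploits.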
8. The hyperspectral anomaly detection method based on the spectral loss function according to claim 7, wherein the total number of pixels contained in the hyperspectral image data X ∈ R^(M×N×B) is denoted D, namely D = M × N, so that X ∈ R^(D×B); the pixels of the spatial neighborhood are determined by a sliding double-window method, the sliding double window consisting of a concentric inner window and outer window, the outer window length being set to w_out and the inner window length to w_in; the pixels between the inner window and the outer window are selected as background pixels, so that the number of background pixels is s = w_out × w_out - w_in × w_in; the background pixels form the background dictionary matrix
X_s = [x_1, x_2, …, x_s]
of size B × s, where x_i denotes the i-th background pixel.
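The dual-window construction of claim 8 can be sketched as follows; for an interior pixel with odd window lengths the dictionary has exactly s = w_out² − w_in² columns, while near the image border fewer neighbors are available. Illustrative code, not the patented implementation:

```python
import numpy as np

def background_dictionary(X, i, j, w_out, w_in):
    """Collect the pixels between the concentric inner and outer windows
    centered on (i, j) as columns of the background dictionary.
    X : hyperspectral cube, shape (M, N, B).  Returns shape (B, s)."""
    M, N, B = X.shape
    r_out, r_in = w_out // 2, w_in // 2
    cols = []
    for di in range(-r_out, r_out + 1):
        for dj in range(-r_out, r_out + 1):
            if max(abs(di), abs(dj)) <= r_in:
                continue  # inside the inner (guard) window: skip
            ii, jj = i + di, j + dj
            if 0 <= ii < M and 0 <= jj < N:
                cols.append(X[ii, jj, :])
    return np.stack(cols, axis=1)  # B x s

rng = np.random.default_rng(0)
X = rng.random((20, 20, 10))                        # toy 20x20 cube, 10 bands
Xs = background_dictionary(X, 10, 10, w_out=5, w_in=3)
# interior pixel: s = 5*5 - 3*3 = 16 background pixels
assert Xs.shape == (10, 16)
```

The inner window acts as a guard region so that an extended anomaly centered on the pixel under test does not contaminate its own background dictionary.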
9. The spectral loss function based stacked noise reduction self-encoder and collaborative representation hyperspectral anomaly detection method according to claim 7, wherein in step 5 the residual r between X(i,j,:) and its reconstructed pixel X'(i,j,:) is expressed as
r = ||X(i,j,:) - X'(i,j,:)||_2 = ||X(i,j,:) - X_s α||_2
and the pixel to be detected is classified by setting a threshold: if r is greater than the threshold, the pixel to be detected is anomalous; otherwise, it is background.
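Given a reconstructed cube, the residual-and-threshold test of claim 9 amounts to a per-pixel l2 norm followed by a comparison. In this sketch the threshold is taken as a percentile of the residual map; that choice is an assumption for illustration, since the claim leaves the threshold rule open:

```python
import numpy as np

def anomaly_mask(X, X_rec, percentile=99.0):
    """Per-pixel l2 residual between each spectrum and its
    collaborative-representation reconstruction, then thresholding.
    X, X_rec : cubes of shape (M, N, B).  Returns (residual map, mask)."""
    r = np.linalg.norm(X - X_rec, axis=-1)  # residual map, shape (M, N)
    tau = np.percentile(r, percentile)      # one possible threshold rule
    return r, r > tau

rng = np.random.default_rng(0)
X = rng.random((20, 20, 10))
X_rec = X.copy()
X_rec[5, 7] += 2.0   # one pixel that reconstructs badly -> anomalous
r, mask = anomaly_mask(X, X_rec, percentile=99.5)
assert mask[5, 7] and mask.sum() == 1
```

Pixels whose residual exceeds the threshold are flagged as anomalies; everything else is treated as background, matching the decision rule of the claim.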
CN202211041079.3A 2022-08-29 2022-08-29 Stack type noise reduction self-encoder based on spectrum loss function and hyperspectral anomaly detection method based on collaborative representation Pending CN115439431A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211041079.3A CN115439431A (en) 2022-08-29 2022-08-29 Stack type noise reduction self-encoder based on spectrum loss function and hyperspectral anomaly detection method based on collaborative representation


Publications (1)

Publication Number Publication Date
CN115439431A true CN115439431A (en) 2022-12-06

Family

ID=84244905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211041079.3A Pending CN115439431A (en) 2022-08-29 2022-08-29 Stack type noise reduction self-encoder based on spectrum loss function and hyperspectral anomaly detection method based on collaborative representation

Country Status (1)

Country Link
CN (1) CN115439431A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117435940A (en) * 2023-12-20 2024-01-23 龙建路桥股份有限公司 Spectrum detection method for winter concrete curing process
CN117435940B (en) * 2023-12-20 2024-03-05 龙建路桥股份有限公司 Spectrum detection method for winter concrete curing process
CN117909650A (en) * 2023-12-21 2024-04-19 北京聚原科技有限公司 Gamma energy spectrum intelligent noise reduction method, system and storage medium

Similar Documents

Publication Publication Date Title
Li et al. Asymmetric feature fusion network for hyperspectral and SAR image classification
Rao et al. Transferable network with Siamese architecture for anomaly detection in hyperspectral images
Zhang et al. A stacked autoencoders-based adaptive subspace model for hyperspectral anomaly detection
CN115439431A (en) Stack type noise reduction self-encoder based on spectrum loss function and hyperspectral anomaly detection method based on collaborative representation
Chen et al. Convolutional neural network for classification of solar radio spectrum
CN111046800A (en) Hyperspectral image abnormal target detection method based on low rank and sparse decomposition
Liu et al. An efficient residual learning neural network for hyperspectral image superresolution
Liu et al. Hyperspectral remote sensing imagery generation from RGB images based on joint discrimination
Zhou et al. RGB-to-HSV: A frequency-spectrum unfolding network for spectral super-resolution of RGB videos
CN113837314A (en) Hyperspectral image classification method based on hybrid convolutional neural network
CN112200123A (en) Hyperspectral open set classification method combining dense connection network and sample distribution
He et al. Two-branch pure transformer for hyperspectral image classification
CN117058558A (en) Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network
Jia et al. Bipartite adversarial autoencoders with structural self-similarity for unsupervised heterogeneous remote sensing image change detection
Zhu et al. HCNNet: A hybrid convolutional neural network for spatiotemporal image fusion
Mei et al. Cascade residual capsule network for hyperspectral image classification
CN105760857A (en) High spectral remote sensing image object detection method
CN112819769A (en) Nonlinear hyperspectral image anomaly detection algorithm based on kernel function and joint dictionary
CN108492283B (en) Hyperspectral image anomaly detection method based on band-constrained sparse representation
Zhang et al. Three-Dimension Spatial-Spectral Attention Transformer for Hyperspectral Image Denoising
CN115115933B (en) Hyperspectral image target detection method based on self-supervision contrast learning
CN114429424B (en) Remote sensing image super-resolution reconstruction method suitable for uncertain degradation modes
CN113947712A (en) Hyperspectral anomaly detection method and system based on capsule differential countermeasure network
CN116433548A (en) Hyperspectral and panchromatic image fusion method based on multistage information extraction
CN116012349A (en) Hyperspectral image unmixing method based on minimum single-body volume constraint and transducer structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination