CN111507454B - Improved cross cortical neural network model for remote sensing image fusion - Google Patents
Improved cross-cortical neural network model for remote sensing image fusion
- Publication number
- CN111507454B (application CN201910090285.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- fusion
- network model
- remote sensing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
Because of limitations inherent to optical sensors, captured multispectral and hyperspectral images must sacrifice spatial resolution in order to obtain high spectral resolution. The invention proposes an improved cross-cortical neural network model that fuses high-spatial-resolution detail information into multispectral and hyperspectral remote sensing images, yielding a fused image with both high spatial and high spectral resolution. Comparative experiments show that the method of the invention outperforms classical remote sensing image fusion methods while exhibiting less spectral distortion and less detail distortion.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a method for fusing multispectral and hyperspectral remote sensing images.
Background Art
Multispectral and hyperspectral remote sensing images are important data sources for remote sensing image classification and interpretation. However, because of limits on the sensor's own signal-to-noise ratio and on the communication downlink, optical remote sensing sensors must trade off spatial resolution against spectral resolution at design time. This compromise makes interpreting and monitoring complex targets with the rich spectral information very difficult and greatly limits the practical application of multispectral and hyperspectral images. Remote sensing image fusion technology is therefore needed to fuse a high-spatial-resolution image with the multispectral/hyperspectral image so that the fusion result has high spatial resolution and high spectral resolution at the same time.
Classical remote sensing image fusion algorithms include the Gram-Schmidt fusion method, the Brovey transform fusion method, the principal component analysis (PCA) fusion method, and the IHS fusion method. The invention proposes an improved cross-cortical neural network model and applies it to the fusion of multispectral and hyperspectral remote sensing images. Compared with traditional fusion methods, the algorithm fuses high-spatial-resolution detail features well while greatly reducing the spectral distortion of the multispectral/hyperspectral image in the fusion result.
Summary of the Invention
To remedy the deficiencies of the prior art, the object of the invention is to provide an improved cross-cortical neural network model that solves the fusion problem for multispectral and hyperspectral remote sensing images. The fused image has high spatial resolution and high spectral resolution at the same time, preserves spatial detail features well, and greatly reduces the spectral distortion introduced during fusion.
To achieve the above object, the invention proposes an improved cross-cortical neural network model whose neurons are mathematically expressed as:
E_ij[n] = g·E_ij[n-1] + h·Y_ij[n-1]
where ij denotes the current neuron, kl a neighboring neuron, and n the current iteration number; W and α are the neighborhood connection-strength matrix and the connection coefficient, respectively; S is the multispectral/hyperspectral image; D is the high-spatial-resolution detail image; g and h are the decay coefficient and the normalization constant, respectively; E is the activity threshold; Y is the output pulse; and F denotes the output fusion result. Once F_ij exceeds the activity threshold E_ij in the current iteration, neuron ij is fired and emits an output pulse Y_ij at iteration n; the final fusion result F is obtained when all neurons in the network have fired.
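One network iteration can be sketched as a vectorized update over the whole image. The threshold update and the firing rule below follow the equation and description quoted above; the feeding update for F is only described in words in this text (its full formula is not reproduced here), so the particular combination of S, D, α, and the neighborhood term used below is an assumption for illustration, not the patented expression, and the function name icm_step is hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def icm_step(F, Y, E, S, D, W, alpha, g=0.65, h=20.0):
    """One iteration of the improved intersecting cortical model (sketch)."""
    link = convolve2d(Y, W, mode="same", boundary="symm")  # neighborhood term: sum_kl W_ijkl * Y_kl
    F = F + S * (1.0 + alpha * link) + D   # ASSUMED feeding form: stimuli S and D modulated by the linking term
    Y = (F > E).astype(F.dtype)            # neuron ij fires when F_ij exceeds the threshold E_ij
    E = g * E + h * Y                      # E_ij[n] = g*E_ij[n-1] + h*Y_ij[n-1] (from the text)
    return F, Y, E
```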
To adapt the model to remote sensing image fusion, each pixel of the image corresponds one-to-one to a neuron in the neural network model of the invention. Before the model processes the multispectral/hyperspectral and high-spatial-resolution images, the inputs are normalized so that their pixel values lie in [0, 1], and histogram matching is applied to the normalized images, yielding the normalized multispectral/hyperspectral image S and the normalized high-spatial-resolution image H. Gaussian smoothing is applied to H to obtain the smoothed image H_L, where the distribution parameter σ of the Gaussian filter is:
where M is the filter length, R is the spatial-scale ratio between the multispectral/hyperspectral image and the high-spatial-resolution image, and G is the modulation transfer function of the multispectral/hyperspectral sensor. The detail image is then obtained as D = H − H_L. After the normalized multispectral/hyperspectral image S and the detail image D are obtained, they are fed to the improved cross-cortical neural network model for iterative computation.
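A minimal preprocessing sketch follows, using standard NumPy/SciPy/scikit-image routines; the function name preprocess is illustrative. The σ passed in should come from the MTF-based expression in M, R, and G given above, which is not reproduced in this text, so any MTF-matched Gaussian width can be substituted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.exposure import match_histograms

def preprocess(band, pan, sigma):
    """Normalize to [0,1], histogram-match H to S, and split off the detail image D."""
    S = (band - band.min()) / (np.ptp(band) + 1e-12)  # normalized MS/HS channel
    H = (pan - pan.min()) / (np.ptp(pan) + 1e-12)     # normalized high-resolution image
    H = match_histograms(H, S)                        # histogram matching to the spectral band
    HL = gaussian_filter(H, sigma=sigma)              # Gaussian smoothing -> H_L
    D = H - HL                                        # detail image D = H - H_L
    return S, D
```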
The initial values of the network variables are set as Y[0] = F[0] = 0, E[0] = 1, n = 1; α is computed as follows:
where Std and Con denote the standard-deviation and covariance computations, respectively.
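The exact expression for α is not reproduced in this text; since it is stated to be built from Std and Con of the inputs, the sketch below substitutes a correlation-style ratio of Con(S, D) to Std(S)·Std(D) as an assumed stand-in, and the function name connection_coefficient is hypothetical.

```python
import numpy as np

def connection_coefficient(S, D):
    """Connection coefficient alpha built from Std and Con (ASSUMED combination)."""
    con = np.cov(S.ravel(), D.ravel())[0, 1]    # Con: covariance of S and D
    return con / (S.std() * D.std() + 1e-12)    # this ratio is an assumption, not the patented formula
```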
Each iteration of the network increments the current iteration count n by one until all neurons have fired and the output F is obtained. Inverse normalization is then applied to F, i.e., the range of pixel values in F is stretched back, giving the fusion result for one multispectral/hyperspectral channel. Letting K be the total number of channels of the multispectral/hyperspectral image, the above fusion is performed on each of the K channels, yielding the final K-channel fusion result of the multispectral/hyperspectral image with the high-spatial-resolution image.
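Putting the pieces together, a per-channel driver might look as follows. It reuses the preprocess, connection_coefficient, and icm_step sketches above (all hypothetical names) and adds an iteration cap as a safety net, which the text itself does not mention.

```python
import numpy as np

def fuse(ms, pan, sigma, W, g=0.65, h=20.0, max_iter=500):
    """Run the improved model independently on each of the K channels (sketch)."""
    fused = np.empty(ms.shape, dtype=float)
    for k in range(ms.shape[-1]):                        # one network per channel
        band = ms[..., k].astype(float)
        S, D = preprocess(band, pan.astype(float), sigma)
        alpha = connection_coefficient(S, D)
        F = np.zeros_like(S)                             # F[0] = 0
        Y = np.zeros_like(S)                             # Y[0] = 0
        E = np.ones_like(S)                              # E[0] = 1
        fired = np.zeros(S.shape, dtype=bool)
        for _ in range(max_iter):                        # n = 1, 2, ...
            F, Y, E = icm_step(F, Y, E, S, D, W, alpha, g, h)
            fired |= Y > 0
            if fired.all():                              # stop once every neuron has fired
                break
        Fn = (F - F.min()) / (np.ptp(F) + 1e-12)         # inverse normalization:
        fused[..., k] = Fn * np.ptp(band) + band.min()   # stretch F back to the band's range
    return fused
```

With the embodiment's settings below, this would be called as, e.g., fuse(ms, pan, sigma, W=np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]]), g=0.65, h=20.0).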
The beneficial effects of the invention are: 1. the traditional cross-cortical neural network model allows only one external stimulus input, whereas the improved model has two external stimulus inputs S and D, which makes the cross-cortical principle easier to apply to image fusion; 2. because the model incorporates a detail-injection operation, it can be applied to the fusion of remote sensing images of differing scales; 3. the model better preserves the detail features of the high-spatial-resolution image while greatly reducing the spectral distortion of the fusion result.
Brief Description of the Drawings
FIG. 1 is a flowchart of the remote sensing image fusion method of the invention.
FIG. 2 is a structural diagram of the improved cross-cortical neural network model of the invention.
FIG. 3 shows the input images and the fusion result of an embodiment of the invention.
Detailed Description
To make the technical means, objects, and effects of the invention easy to understand, the invention is further described below.
The flowchart of the remote sensing image fusion method of the invention is shown in FIG. 1. The overall procedure is as follows: first, the input high-spatial-resolution image and the multispectral/hyperspectral image are normalized to the interval [0, 1]; next, the detail image is extracted from the normalized high-spatial-resolution image, and the detail image D together with the normalized multispectral/hyperspectral image S is fed into the model of the invention as input. The structure of the improved cross-cortical neural network model is shown in FIG. 2, and the network parameters are set to the neighborhood connection-strength matrix W = [0.5, 1, 0.5; 1, 0, 1; 0.5, 1, 0.5], decay coefficient g = 0.65, and normalization constant h = 20.
Neurons ij correspond one-to-one to image pixels; when all neurons in the network have fired, the final fusion result F is obtained. Performing the above operations separately on each of the K channels of the multispectral/hyperspectral image yields the final fusion results for the K independent channels.
The input high-spatial-resolution grayscale image, the input multispectral/hyperspectral image, and the fusion result are shown in FIG. 3: FIG. 3(a) is the input high-spatial-resolution panchromatic grayscale image, FIG. 3(b) is the input multispectral/hyperspectral image, and FIG. 3(c) is the fusion result. The input images were acquired by the QuickBird high-resolution sensor at spatial resolutions of 0.7 m and 2.8 m, respectively. As FIG. 3 shows, the fusion method achieves high spatial and high spectral resolution simultaneously, and both detail and spectral characteristics are well preserved.
Table 1 gives the comparative evaluation of the method of the invention against the classical remote sensing image fusion methods: the Gram-Schmidt method, the Brovey transform, principal component analysis (PCA), and the IHS method. The evaluation uses the spectral angle mapper SAM, the relative global error ERGAS, and the Q index, expressed mathematically as follows:
where <> denotes the inner product, RMSE the root-mean-square error, and σ and μ the covariance and mean of the image, respectively. Among the indices, SAM measures the spectral distortion of the remote sensing image: the smaller its value, the better the fusion. ERGAS measures the detail distortion between the fusion result and the high-spatial-resolution image: the smaller its value, the better the fusion. The Q index is a combined assessment of spectral distortion and spatial-detail preservation in the fused image: the larger its value, the better the fusion quality.
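The three formulas appear only as images in the source; the sketch below therefore uses the standard textbook definitions of SAM, ERGAS, and the universal image quality index Q, which match the verbal descriptions above, with hypothetical function names.

```python
import numpy as np

def sam(ref, fus):
    """Mean spectral angle (radians) between per-pixel spectra of two H x W x K images."""
    num = (ref * fus).sum(axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fus, axis=-1)
    return np.arccos(np.clip(num / (den + 1e-12), -1.0, 1.0)).mean()

def ergas(ref, fus, ratio):
    """Relative dimensionless global error; ratio = high/low resolution pixel-size ratio."""
    rmse = np.sqrt(((ref - fus) ** 2).mean(axis=(0, 1)))   # per-band RMSE
    mu = ref.mean(axis=(0, 1))                             # per-band mean
    return 100.0 * ratio * np.sqrt(((rmse / (mu + 1e-12)) ** 2).mean())

def q_index(x, y):
    """Universal image quality index for one band (Wang & Bovik)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + 1e-12)
```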
The evaluation results in Table 1 show that the Q index of the method of the invention is higher than those of the classical Gram-Schmidt, Brovey transform, PCA, and IHS fusion methods, while its spectral-distortion index SAM and its detail-distortion index ERGAS are both smaller than those of the other classical algorithms. The method of the invention therefore greatly outperforms the classical methods in both spectral fidelity and spatial-detail preservation.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910090285.5A CN111507454B (en) | 2019-01-30 | 2019-01-30 | Improved cross cortical neural network model for remote sensing image fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111507454A CN111507454A (en) | 2020-08-07 |
CN111507454B true CN111507454B (en) | 2022-09-06 |
Family
ID=71863783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910090285.5A Active CN111507454B (en) | 2019-01-30 | 2019-01-30 | Improved cross cortical neural network model for remote sensing image fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111507454B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1489111A (en) * | 2003-08-21 | 2004-04-14 | 上海交通大学 | Remote Sensing Image Fusion Method Based on Local Statistical Characteristics and Color Space Transformation |
CN101577003A (en) * | 2009-06-05 | 2009-11-11 | 北京航空航天大学 | Image segmenting method based on improvement of intersecting visual cortical model |
JP2011090309A (en) * | 2009-10-23 | 2011-05-06 | Ana-Aeroportos De Portugal Sa | Method to generate airport obstruction chart based on data fusion between interferometric data using synthetic aperture radar positioned in spaceborne platform and other types of data acquired by remote sensor |
CN102651132A (en) * | 2012-04-06 | 2012-08-29 | 华中科技大学 | Medical image registration method based on intersecting cortical model |
CN103049898A (en) * | 2013-01-27 | 2013-04-17 | 西安电子科技大学 | Method for fusing multispectral and full-color images with light cloud |
CN103177431A (en) * | 2012-12-26 | 2013-06-26 | 中国科学院遥感与数字地球研究所 | Method of spatial-temporal fusion for multi-source remote sensing data |
CN103295201A (en) * | 2013-05-31 | 2013-09-11 | 中国人民武装警察部队工程大学 | Multi-sensor image fusion method on basis of IICM (improved intersecting cortical model) in NSST (nonsubsampled shearlet transform) domain |
CN103700075A (en) * | 2013-12-25 | 2014-04-02 | 浙江师范大学 | Tetrolet transform-based multichannel satellite cloud picture fusing method |
WO2014183259A1 (en) * | 2013-05-14 | 2014-11-20 | 中国科学院自动化研究所 | Full-color and multi-spectral remote sensing image fusion method |
CN105160647A (en) * | 2015-10-28 | 2015-12-16 | 中国地质大学(武汉) | Panchromatic multi-spectral image fusion method |
CN105913075A (en) * | 2016-04-05 | 2016-08-31 | 浙江工业大学 | Endoscopic image focus identification method based on pulse coupling nerve network |
CN107341501A (en) * | 2017-05-31 | 2017-11-10 | 三峡大学 | A kind of image interfusion method and device based on PCNN and classification focusing technology |
- 2019-01-30: CN application CN201910090285.5A filed; granted as patent CN111507454B (status: Active)
Non-Patent Citations (5)
- Hong Li et al., "Fusion of Multispectral and Panchromatic Images via Local Geometrical Similarity," Technical Gazette, vol. 25, no. 2, 2018.
- Ulf Ekblad et al., "Theoretical foundation of the intersecting cortical model and its use for change detection of aircraft, cars, and nuclear explosion tests," Signal Processing, 2004.
- Xin Jin et al., "Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks," Journal of Applied Remote Sensing, 2016.
- Dai Wenzhan et al., "Medical image fusion method based on an improved cross visual cortex model," Application Research of Computers, vol. 33, no. 9, 2015.
- Wang Mi et al., "Panchromatic and multispectral image fusion combining adaptive Gaussian filtering with the SFIM model," Acta Geodaetica et Cartographica Sinica, vol. 47, no. 1, 2018.
Also Published As
Publication number | Publication date |
---|---|
CN111507454A (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110415199B (en) | Multispectral remote sensing image fusion method and device based on residual learning | |
CN108765280A (en) | A kind of high spectrum image spatial resolution enhancement method | |
Lohit et al. | Unrolled projected gradient descent for multi-spectral image fusion | |
CN111696043A (en) | Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN | |
CN107480701B (en) | Optical image and radar image matching method based on multi-channel convolutional neural network | |
CN112200123B (en) | A Hyperspectral Open Set Classification Method Joint Densely Connected Network and Sample Distribution | |
CN107491793B (en) | Polarized SAR image classification method based on sparse scattering complete convolution | |
CN104867124A (en) | Multispectral image and full-color image fusion method based on dual sparse non-negative matrix factorization | |
CN108614992A (en) | A kind of sorting technique of high-spectrum remote sensing, equipment and storage device | |
CN104851077A (en) | Adaptive remote sensing image panchromatic sharpening method | |
Gao et al. | Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering | |
CN114022364A (en) | Multispectral image spectrum hyper-segmentation method and system based on spectrum library optimization learning | |
CN107316309A (en) | High spectrum image conspicuousness object detection method based on matrix decomposition | |
CN110717485A (en) | A classification method of hyperspectral image sparse representation based on locality-preserving projection | |
CN106157269A (en) | Full-colour image sharpening method based on direction multiple dimensioned group low-rank decomposition | |
CN115984110A (en) | A second-order spectral attention hyperspectral image super-resolution method based on Swin-Transformer | |
CN111915518B (en) | Hyperspectral image denoising method based on triple low-rank model | |
CN109271874A (en) | A kind of high spectrum image feature extracting method merging spatially and spectrally information | |
Han et al. | Spectral library based spectral super-resolution under incomplete spectral coverage conditions | |
CN111507454B (en) | Improved cross cortical neural network model for remote sensing image fusion | |
CN115131258A (en) | A hyperspectral, multispectral and panchromatic image fusion method based on sparse tensor priors | |
CN115100075A (en) | Hyperspectral Panchromatic Sharpening Method Based on Spectral Constraints and Residual Attention Networks | |
Cloninger et al. | The pre-image problem for Laplacian eigenmaps utilizing l1 regularization with applications to data fusion | |
Zhang et al. | Three-Dimension Spatial-Spectral Attention Transformer for Hyperspectral Image Denoising | |
CN111598115A (en) | A SAR Image Fusion Method Based on Cross-cortical Neural Network Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |