
CN103049898B - Method for fusing multispectral and full-color images with light cloud - Google Patents

Method for fusing multispectral and full-color images with light cloud

Info

Publication number
CN103049898B
CN103049898B (application CN201310030819.8A)
Authority
CN
China
Prior art keywords
image
multispectral
full
background image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310030819.8A
Other languages
Chinese (zh)
Other versions
CN103049898A (en)
Inventor
刘芳
石程
李玲玲
郝红侠
戚玉涛
焦李成
郑莹
尚荣华
马文萍
马晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201310030819.8A
Publication of CN103049898A
Application granted
Publication of CN103049898B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a method for fusing multispectral and full-color (panchromatic) images with light (thin) cloud. It mainly addresses the problem that, after such images are fused, the cloud-covered regions remain degraded by the cloud layer. The implementation steps are: sample and filter the cloudy multispectral image to obtain background images of the multispectral and panchromatic images; remove the thin cloud from the multispectral and panchromatic images respectively; apply a PCA transform to the cloud-free multispectral image and a Shearlet decomposition to the transformed first principal component image and the panchromatic image; take the low-frequency coefficients of the first principal component as the low-frequency coefficients of the fused component, and the high-frequency coefficients of the panchromatic image as the high-frequency coefficients of the fused component; and apply the inverse PCA transform to the fused component together with the other PCA components to obtain the fused image. The method yields high definition in the thin-cloud regions of the fused image and good spectral preservation, and can be used for military target recognition, weather and environment monitoring, land use, urban planning, and natural-disaster prevention and mitigation.

Description

Multispectral and Panchromatic Image Fusion Method with Thin Clouds

Technical Field

The invention belongs to the field of intelligent image processing and relates to thin-cloud removal and image fusion methods. It can be applied in many fields, such as military target recognition, meteorological monitoring, environmental monitoring, land use, urban planning, and disaster prevention and mitigation.

Background Art

Since the United States launched its Earth Resources Satellite program in 1972, satellite technology has developed rapidly worldwide. To date the major developed countries and a number of developing countries, including China and India, have launched hundreds of satellites, with operating bands spanning a broad frequency range from visible light to the invisible near-infrared, short-wave infrared, mid-infrared, far-infrared, and microwave. Every day these satellites transmit massive volumes of remote sensing data covering the globe to satellite ground stations and mobile receiving stations around the world. Because satellite remote sensing data are continuous in space and sequential in time, they remain, so far, the only means of providing dynamic observation data on a global scale.

Most widely used satellite remote sensing data today are optical images, which generally offer a large amount of information, high resolution, and image stability. However, because clouds and haze are common in the atmosphere during imaging, remote sensing satellites often record cloud information together with the image data. Cloud occlusion prevents the sensor from acquiring ground-object information in the covered area, seriously degrading the quality of remote sensing optical images. When part of an image is covered by thick cloud, the ground-object information cannot be received by the sensor at all; under relatively thin cloud, the sensor can still receive part of the ground-object information, but this incomplete information seriously affects target recognition and classification in the image as well as the accuracy of ground-object information extraction.

To improve the usability of remote sensing optical images, a variety of methods exist to reduce or remove the influence of thin clouds and haze. The main methods currently in use are the multispectral method, the multi-image superposition method, image-enhancement-based methods, and image-restoration-based methods.

The multispectral method uses a special sensor, or exploits the strong sensitivity of certain bands of a multispectral image to clouds, to detect the cloud information; the cloud information is then subtracted from the original image to obtain a cloud-free image. This method can remove clouds effectively, but it requires adding cloud-sensitive sensors or bands, and the high cost limits its application.

The multi-image superposition method obtains an information image by superimposing images of the same area taken in different seasons and at different times. Because the ground-object information of the same area often changes over time, the interpretation of the superimposed image is seriously affected.

Image-restoration-based cloud removal analyzes the mechanism and process by which thin cloud degrades an image, and finds the corresponding inverse process to recover the original image. Processing images from the restoration perspective, however, requires familiarity with that degradation mechanism and process. Since the degree of degradation depends on the distance between the target and the camera, the processing must incorporate auxiliary information such as the atmospheric extinction coefficient and the target-camera distance; such information is costly to obtain, so the approach is difficult to apply widely.

Image-enhancement-based cloud removal usually takes one of two forms: homomorphic filtering and low-frequency filtering. Homomorphic filtering removes thin cloud by compressing the dynamic range of the image and boosting its high-frequency components; it does not distinguish image details, uses a single filter for enhancement, and adapts poorly. Low-frequency filtering applies Gaussian low-pass filtering to obtain the background image of the image, then subtracts the background image from the original image to remove the thin cloud. This method removes thin cloud and haze to some extent but also weakens the image background; although a compensation step was later introduced as an improvement, the results show that the method still loses information in the cloud-free image.

Because images acquired by a single sensor can hardly meet practical needs in terms of spectral information and resolution, the redundancy among sensor image data must be exploited to fuse the information from different sensors into one image, for example multispectral and panchromatic images: multispectral images have good spectral information but low spatial resolution, while panchromatic images have high spatial resolution but insufficient spectral information, so fusion is needed to obtain a high-resolution multispectral image that describes ground objects more accurately. When cloud information is present in the multispectral and panchromatic images, however, fusion alone cannot remove it. For fusing multispectral and panchromatic images with thin cloud, current practice is to remove the cloud from each of the two images separately and then fuse them. When removing cloud from single-sensor images, image enhancement is the preferred approach from the standpoint of acquisition cost, but the information loss it causes is unavoidable. Therefore, exploiting the redundancy between multispectral and panchromatic images to find a more effective fusion method for thin-cloud images is an urgent problem.

Summary of the Invention

The purpose of the present invention is to address the shortcomings of the prior art in fusing images with thin cloud, and to propose a multispectral and panchromatic image fusion method for thin-cloud images, so as to reduce cloud interference in the thin-cloud regions after fusion, reduce the loss of image information after fusion, improve the clarity of the fused image, and describe ground objects more accurately.

To achieve the above object, the main steps of the present invention are as follows:

1) Up-sample the multispectral image with thin cloud so that the up-sampled multispectral image has the same size as the panchromatic image with thin cloud.

2) Apply down-sampling, Gaussian low-pass filtering, and up-sampling in turn to the sampled multispectral and panchromatic images to obtain the background image B1 of the multispectral image and the background image B2 of the panchromatic image, where the background image of the multispectral image has the same number of bands as the multispectral image.

3) Remove the thin cloud from each band of the multispectral image and from the panchromatic image, obtaining the cloud-free multispectral image I1 and the cloud-free panchromatic image I2.

4) Apply the PCA transform to the cloud-free multispectral image I1 to obtain the component images P1, P2, ..., Pn, where Pi denotes the i-th principal component of the multispectral image after the PCA transform, i = 1, 2, ..., n, and n is the total number of components.

5) Apply the Shearlet transform to the first principal component image P1 obtained from the PCA transform and to the cloud-free panchromatic image I2, decomposing P1 into one low-frequency coefficient x1 and multiple directional subband coefficients y1, y2, ..., ym, and decomposing I2 into one low-frequency coefficient x2 and multiple directional subband coefficients z1, z2, ..., zm.

6) For the background image of each band of the multispectral image obtained in step 2), compute the gray-level mean to obtain the composite background image B′1 of the multispectral image.

7) Build the weight matrix w1 from the composite background image B′1 of the multispectral image, and the weight matrix w2 from the background image B2 of the panchromatic image.

8) Multiply each directional subband coefficient z1, z2, ..., zm of the panchromatic image I2 by the weight matrices w1 and w2 to obtain the directional subband coefficients l1, l2, ..., lm of the fused first component image, and take the low-frequency coefficient x1 of the first principal component image as the low-frequency coefficient k of the fused first principal component image.

9) Apply the inverse Shearlet transform to the low-frequency coefficient k and the directional subband coefficients l1, l2, ..., lm of the fused first principal component image to obtain the fused first principal component image.

10) Form a new data set from the fused first principal component image and the component images other than the first principal component obtained in step 4), and apply the inverse PCA transform to this data set to obtain the fused image I.

Compared with the prior art, the present invention has the following advantages:

(a) The invention obtains the background image from the up-sampled image: an approximation of the original image's background is obtained by sampling the up-sampled image, whose information is very smooth, so the approximate background image closely matches the characteristics of the true background. This overcomes the high computational complexity of the filtering step in traditional algorithms and speeds up thin-cloud removal.

(b) The invention builds weight matrices from the background images to enhance the detail information of the thin-cloud regions of the fused image, exploiting the fact that thin-cloud regions have higher gray values. This overcomes the loss of thin-cloud-region information that blurs traditionally fused images, and thus improves the clarity of the fused image.

Brief Description of the Drawings

Fig. 1 is a flow chart of the multispectral and panchromatic image fusion method with thin cloud of the present invention;

Fig. 2 shows the QuickBird satellite images with thin cloud used by the present invention;

Fig. 3 shows the fused image obtained with the method of the present invention.

Detailed Description

Referring to Fig. 1, the steps of the multispectral and panchromatic image fusion method with thin cloud of the present invention are as follows:

Step 1. Up-sample the multispectral image with thin cloud so that the up-sampled multispectral image has the same size as the panchromatic image with thin cloud, where the original multispectral image, shown in Fig. 2(a), is denoted Y1, and the original panchromatic image, shown in Fig. 2(b), is denoted Y2.

Step 2. Apply sampling and Gaussian low-pass filtering to the up-sampled multispectral image and the original panchromatic image to obtain the background image B1 of the multispectral image with thin cloud and the background image B2 of the panchromatic image.

2.1) Down-sample the up-sampled multispectral image and the original panchromatic image; the down-sampled multispectral and panchromatic images are both 1/3 the size of the original panchromatic image.

2.2) Apply Gaussian low-pass filtering with a Gaussian filter to the down-sampled multispectral image and panchromatic image to obtain the background images of the down-sampled multispectral and panchromatic images; the background image of the down-sampled multispectral image has the same number of bands as the multispectral image.

2.3) Up-sample the down-sampled background images to obtain the background image B1 of the multispectral image and the background image B2 of the panchromatic image, both the same size as the original panchromatic image.
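As a concrete illustration, steps 2.1-2.3 can be sketched in Python with NumPy and SciPy. The factor 3 comes from step 2.1; the Gaussian sigma is an illustrative choice, not a value fixed by the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def background_image(band, factor=3, sigma=2.0):
    """Approximate the background of one 2-D band: down-sample to 1/factor
    of the original size (step 2.1), apply Gaussian low-pass filtering
    (step 2.2), then up-sample back to the original size (step 2.3)."""
    small = zoom(band, 1.0 / factor, order=1)
    smooth = gaussian_filter(small, sigma=sigma)
    h, w = band.shape
    back = zoom(smooth, (h / small.shape[0], w / small.shape[1]), order=1)
    return back[:h, :w]

rng = np.random.default_rng(0)
band = rng.random((90, 90)) * 255.0   # one band of the up-sampled image
B = background_image(band)
```

The multi-band background image B1 is then simply this procedure applied band by band.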

Step 3. Remove the thin cloud from each band of the original multispectral image Y1 and from the original panchromatic image.

3.1) Compute the gray-level mean of each band of the original multispectral image Y1 and the gray-level mean of the original panchromatic image Y2, i.e., sum the gray values of all pixels in the image and divide by the total number of pixels.

3.2) For each band of the original multispectral image Y1, subtract the corresponding band of the background image B1 and add the gray-level mean of that band, obtaining the cloud-free multispectral image I1.

3.3) From the original panchromatic image Y2, subtract its background image B2 and add its gray-level mean, obtaining the cloud-free panchromatic image I2.
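The cloud-removal rule of steps 3.2-3.3 (subtract the background image, add back the global mean) is a one-liner per band; the flat toy background below is only for illustration:

```python
import numpy as np

def remove_thin_cloud(band, background):
    """Steps 3.2/3.3: subtract the band's background image, then add the
    band's gray-level mean so the overall brightness is preserved."""
    return band - background + band.mean()

rng = np.random.default_rng(1)
Y = rng.random((64, 64)) * 200.0       # a band of the original image
B = np.full_like(Y, Y.mean())          # toy flat background equal to the mean
I = remove_thin_cloud(Y, B)            # with this background, I equals Y
```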

Step 4. Apply the PCA transform to the cloud-free multispectral image I1 to obtain the component images P1, P2, ..., Pn, where Pi denotes the i-th principal component of the multispectral image after the PCA transform, i = 1, 2, ..., n, and n is the total number of components.
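The PCA transform over the bands (and the inverse transform needed later in step 11) can be sketched with a plain covariance/eigendecomposition PCA; this is a generic PCA, not necessarily the exact variant used by the authors:

```python
import numpy as np

def pca_forward(ms):
    """PCA transform of a multispectral cube ms with shape (bands, H, W).
    Returns the component images, the band means, and the eigenvector
    matrix needed for the inverse transform."""
    n, h, w = ms.shape
    X = ms.reshape(n, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)        # band-by-band covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]             # sort by decreasing variance
    vecs = vecs[:, order]
    comps = (vecs.T @ Xc).reshape(n, h, w)     # P1 ... Pn
    return comps, mean, vecs

def pca_inverse(comps, mean, vecs):
    """Inverse PCA: reconstruct the multispectral cube from the components."""
    n, h, w = comps.shape
    Xc = vecs @ comps.reshape(n, -1)
    return (Xc + mean).reshape(n, h, w)

rng = np.random.default_rng(2)
ms = rng.random((4, 32, 32))
comps, mean, vecs = pca_forward(ms)
rec = pca_inverse(comps, mean, vecs)
```

Because the eigenvector matrix is orthonormal, the inverse transform reconstructs the input exactly, which is what step 11 relies on.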

Step 5. Apply the Shearlet transform to the first principal component image P1 obtained from the PCA transform and to the cloud-free panchromatic image I2, decomposing P1 into one low-frequency coefficient x1 and m directional subband coefficients y1, y2, ..., ym, and decomposing I2 into one low-frequency coefficient x2 and m directional subband coefficients z1, z2, ..., zm.

The Shearlet transform is one of the new generation of multiscale geometric analysis tools. The decomposition proceeds as follows:

5.1) Decompose each image by scale using the non-subsampled Laplacian pyramid transform, splitting the first principal component image P1 and the panchromatic image I2 each into one low-frequency coefficient and multiple high-frequency coefficients; the number of high-frequency coefficients equals the number of decomposition levels. The number of levels is not restricted; to give the decomposed low-frequency coefficients a higher correlation with the panchromatic image, this example uses 4 levels.

5.2) Use Shear filters to split each high-frequency coefficient into multiple directions. The number of directions is not restricted; weighing computational complexity against the needs of the algorithm, this example splits the 4 high-frequency coefficients, from coarse scale to fine scale, into 6, 6, 10, and 10 directions respectively, giving 22 directional subband coefficients in total.
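A full Shearlet implementation is beyond a short sketch, but the scale decomposition of step 5.1 can be imitated with a non-subsampled Laplacian-style pyramid built from Gaussian low-pass filters at growing scales; the shear (directional) filtering of step 5.2 is omitted. This is a stand-in for illustration, not the transform itself:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nslp_decompose(img, levels=4):
    """Rough stand-in for step 5.1: a non-subsampled Laplacian-style
    pyramid. Each level subtracts a Gaussian low-pass version from the
    current image, leaving the high-frequency coefficients of that scale."""
    highs = []
    current = img.astype(float)
    for lev in range(levels):
        low = gaussian_filter(current, sigma=2.0 ** lev)
        highs.append(current - low)   # high-frequency coefficients at this scale
        current = low
    return current, highs             # (low-frequency coefficients, high-frequency list)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
low, highs = nslp_decompose(img)
recon = low + sum(highs)              # the pyramid is perfectly invertible
```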

Step 6. From the background image B1 of the multispectral image in step 2, compute the gray-level mean over all bands to obtain the composite background image B′1 of the multispectral image.

The composite background image is obtained by the following formula:

B′1(p, q) = ( Σ_{k=1}^{N} B1^k(p, q) ) / N

where B1^k(p, q) denotes the gray value of the k-th band of the background image B1 at coordinates (p, q), B′1(p, q) denotes the gray value of the composite background image B′1 at coordinates (p, q), and N is the number of bands of the multispectral image.

Step 7. Build the weight matrix w1 of the composite background image from the composite background image B′1 of the multispectral image.

7.1) Create a buffer matrix M1 for the composite background image B′1, initialized to zero and the same size as the background image of the multispectral image, and assign the gray value at each position of B′1 directly to the corresponding position of M1.

7.2) Find the maximum and minimum of all values in the buffer matrix M1, denoted M1_max and M1_min respectively.

7.3) Compute the threshold T1 of the composite background image B′1 from the maximum M1_max and minimum M1_min:

T1 = M1_min + 0.5 × (M1_max − M1_min);

7.4) Compare the gray value of each pixel of the composite background image B′1 with the threshold T1; pixels whose gray value is greater than T1 form one class, whose region is denoted S11, and pixels whose gray value is less than T1 form the other class, whose region is denoted S12;

7.5) Compute the mean gray values of all pixels in regions S11 and S12, denoted A_S11 and A_S12 respectively, and compute the threshold R1 that compresses the value range of the buffer matrix M1:

R1 = (A_S11 − A_S12) / (M1_max − M1_min);

7.6) Normalize all values in the buffer matrix M1 to obtain a normalized matrix whose values all lie in the interval [0, 1];

7.7) Multiply each value of the normalized matrix by the compression threshold R1, compressing all values into the range [0, R1]; the compressed matrix is the weight matrix w1.
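Steps 7.1-7.7 (and, identically, steps 8.1-8.7 for w2) amount to a min-max normalization scaled by a contrast ratio. A sketch, assuming min-max normalization is the normalization meant in step 7.6:

```python
import numpy as np

def weight_matrix(background):
    """Build a weight matrix from a background image (steps 7.1-7.7).
    Pixels are split at the mid-range threshold T; R is the normalized
    contrast between the two class means; the background is min-max
    normalized and then scaled into [0, R]."""
    M = background.astype(float)
    m_min, m_max = M.min(), M.max()
    T = m_min + 0.5 * (m_max - m_min)                     # step 7.3
    bright, dark = M[M > T], M[M < T]                     # step 7.4
    R = (bright.mean() - dark.mean()) / (m_max - m_min)   # step 7.5
    norm = (M - m_min) / (m_max - m_min)                  # step 7.6: values in [0, 1]
    return norm * R                                       # step 7.7: values in [0, R]

rng = np.random.default_rng(4)
bg = rng.random((32, 32)) * 100.0     # a composite background image
w1 = weight_matrix(bg)
```

Since both class means lie between the minimum and maximum, R is at most 1, so the weights stay in [0, 1].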

Step 8. Build the second weight matrix w2 from the background image B2 of the panchromatic image.

8.1) Create a buffer matrix M2 for the background image B2 of the panchromatic image, initialized to zero and the same size as that background image, and assign the gray value at each position of B2 directly to the corresponding position of M2.

8.2) Find the maximum and minimum of all values in the buffer matrix M2 of the background image B2, denoted M2_max and M2_min respectively.

8.3) Compute the threshold T2 of the background image B2 of the panchromatic image from the maximum M2_max and minimum M2_min:

T2 = M2_min + 0.5 × (M2_max − M2_min);

8.4) Compare the gray value of each pixel of the background image B2 with the threshold T2; pixels whose gray value is greater than T2 form one class, whose region is denoted S21, and pixels whose gray value is less than T2 form the other class, whose region is denoted S22;

8.5) Compute the mean gray values of all pixels in regions S21 and S22, denoted A_S21 and A_S22 respectively, and compute the threshold R2 that compresses the value range of the buffer matrix M2:

R2 = (A_S21 − A_S22) / (M2_max − M2_min);

8.6) Normalize all values in the buffer matrix M2 of the background image B2 to obtain a normalized matrix whose values all lie in the interval [0, 1];

8.7) Multiply each value of the normalized matrix by the compression threshold R2, compressing all values into the range [0, R2]; the compressed matrix is the weight matrix w2.

Step 9. Multiply each directional subband coefficient z1, z2, ..., zm of the panchromatic image I2 by the weight matrices w1 and w2 to obtain the directional subband coefficients l1, l2, ..., lm of the fused first component image, and assign the low-frequency coefficient x1 of the first principal component image directly to the low-frequency coefficient k of the fused first principal component image.
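Reading step 9 literally (each panchromatic subband multiplied element-wise by both weight matrices, the low-frequency coefficient carried over unchanged), the coefficient fusion looks like this; all arrays below are random placeholders for the actual coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
h, w, m = 32, 32, 22                        # 22 subbands as in step 5.2
w1 = rng.random((h, w))                     # weight matrix from step 7
w2 = rng.random((h, w))                     # weight matrix from step 8
z = [rng.random((h, w)) for _ in range(m)]  # panchromatic subband coefficients
x1 = rng.random((h, w))                     # low-frequency coefficients of P1

k = x1                                      # fused low-frequency coefficients
l = [zi * w1 * w2 for zi in z]              # fused directional subband coefficients
```

The inverse Shearlet transform of step 10 is then applied to k and l to obtain the fused first principal component image.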

Step 10. Apply the inverse Shearlet transform to the low-frequency coefficient k and the directional subband coefficients l1, l2, ..., lm of the fused first principal component image to obtain the fused first principal component image.

Step 11. Form a new data set from the fused first principal component image and the component images other than the first principal component obtained in step 4, and apply the inverse PCA transform to this data set to obtain the fused image I, shown in Fig. 3, where Fig. 3(a) is the fused image enhanced by the weight matrices and Fig. 3(b) is a partial enlargement of Fig. 3(a).

The above is one specific example of the present invention. Clearly, those skilled in the art, after understanding the content and principle of the invention, may make modifications and changes in form and detail without departing from the principle and structure of the invention, but such modifications and changes based on the idea of the invention still fall within the protection scope of the claims of the invention.

Claims (2)

1. A method for fusing multispectral and panchromatic images with thin cloud, comprising the steps of:
1) up-sampling the multispectral image with thin cloud, the size of the up-sampled multispectral image being identical to the size of the panchromatic image with thin cloud;
2) performing down-sampling, Gaussian low-pass filtering and up-sampling in turn on the sampled multispectral and panchromatic images, respectively, to obtain a background image B1 of the multispectral image and a background image B2 of the panchromatic image, wherein the number of bands of the background image of the multispectral image is identical to the number of bands of the multispectral image;
3) removing the thin cloud from each band of the multispectral image and from the panchromatic image, respectively:
3.1) calculating the gray-level mean of each band of the multispectral image and the gray-level mean of the panchromatic image, i.e., summing the gray values of all pixels in the image and dividing by the total number of pixels of the image;
3.2) for each band of the multispectral image, subtracting the corresponding band of the background image B1 and adding the gray-level mean of the corresponding band, to obtain a multispectral image I1 with the thin cloud removed;
3.3) to full-colour image, its background image B is deducted 2, add the average of its gray scale, obtain the full-colour image I removing thin cloud 2;
4) applying a PCA transform to the cloud-free multispectral image I1 to obtain its component images, where the i-th component is the i-th principal component of the multispectral image, i = 1, 2, ..., n, and n is the total number of components;
5) applying a Shearlet decomposition separately to the first principal component image obtained by the PCA transform and to the cloud-free panchromatic image I2: the first principal component image is decomposed into one low-frequency coefficient x1 and the directional subband coefficients y1, y2, ..., ym, and the cloud-free panchromatic image I2 is decomposed into one low-frequency coefficient x2 and the directional subband coefficients z1, z2, ..., zm;
6) for the background image of each band of the multispectral image obtained in step 2), averaging the grey values over the bands to obtain the synthesized background image B'1 of the multispectral image:

B'1(p, q) = (1/N) * sum over k = 1..N of B1^k(p, q)

wherein B1^k(p, q) denotes the grey value of the k-th band of the background image B1 at coordinate (p, q), B'1(p, q) denotes the grey value of the synthesized background image B'1 at coordinate (p, q), and N is the number of bands of the multispectral image;
7) building a weight matrix w1 from the synthesized background image B'1 of the multispectral image:
7.1) creating a buffer matrix M1 for the synthesized background image B'1, initialized to zero and of the same size as the background image of the multispectral image, and copying the grey value of each position of B'1 into the corresponding position of M1;
7.2) finding the maximum and the minimum of all values in the buffer matrix M1;
7.3) computing the threshold T1 of the synthesized background image B'1 from this maximum and minimum;
7.4) comparing the grey value of each pixel of B'1 with the threshold T1: pixels whose grey value is greater than T1 form one class, whose region is denoted S11; pixels whose grey value is less than T1 form the other class, whose region is denoted S12;
7.5) computing the mean of the pixel grey values in region S11 and in region S12, respectively, and from these means computing the threshold R1 that bounds the compressed value range of the buffer matrix M1;
7.6) normalizing all values of the buffer matrix M1 to obtain a normalized matrix M'1 whose values all lie in the interval [0, 1];
7.7) multiplying each value of the normalized matrix M'1 by the range threshold R1, so that all values of M'1 are compressed into the range [0, R1]; the compressed matrix is the weight matrix w1;
8) building a weight matrix w2 from the background image B2 of the panchromatic image:
8.1) creating a buffer matrix M2 for the background image B2 of the panchromatic image, initialized to zero and of the same size as the background image of the panchromatic image, and copying the grey value of each position of B2 into the corresponding position of M2;
8.2) finding the maximum and the minimum of all values in the buffer matrix M2;
8.3) computing the threshold T2 of the background image B2 from this maximum and minimum;
8.4) comparing the grey value of each pixel of B2 with the threshold T2: pixels whose grey value is greater than T2 form one class, whose region is denoted S21; pixels whose grey value is less than T2 form the other class, whose region is denoted S22;
8.5) computing the mean of the pixel grey values in region S21 and in region S22, respectively, and from these means computing the threshold R2 that bounds the compressed value range of the buffer matrix M2;
8.6) normalizing all values of the buffer matrix M2 to obtain a normalized matrix M'2 whose values all lie in the interval [0, 1];
8.7) multiplying each value of the normalized matrix M'2 by the range threshold R2, so that all values of M'2 are compressed into the range [0, R2]; the compressed matrix is the weight matrix w2;
9) multiplying each directional subband coefficient z1, z2, ..., zm of the panchromatic image I2 by the weight matrices w1 and w2, to obtain the directional subband coefficients l1, l2, ..., lm of the fused first principal component image, and taking the low-frequency coefficient x1 of the first principal component image as the low-frequency coefficient k of the fused first principal component image;
10) applying an inverse Shearlet transform to the low-frequency coefficient k and the directional subband coefficients l1, l2, ..., lm of the fused first principal component image, to obtain the fused first principal component image;
11) forming a new data set from the fused first principal component image and the component images other than the first principal component obtained in step 4), and applying an inverse PCA transform to this data set to obtain the fused image I.
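The weight-matrix construction of steps 7) and 8) can be sketched as below. The formula images for the thresholds T and R did not survive the translation, so the midpoint threshold and the normalized gap between the two class means used here are illustrative assumptions, not the patent's formulas:

```python
import numpy as np

def weight_matrix(background):
    """Weight-matrix sketch for steps 7.1-7.7 / 8.1-8.7.

    background : 2-D background image (B'1 or B2).
    Returns a matrix with values compressed into [0, R], where the
    threshold T (assumed: midpoint of the value range) splits the
    pixels into two regions, and R (assumed: class-mean gap divided
    by the value range) bounds the compressed range.
    """
    m = background.astype(np.float64).copy()        # buffer matrix M
    lo, hi = m.min(), m.max()
    t = (lo + hi) / 2.0                             # assumed threshold T
    bright = m[m > t]                               # region S?1 (above T)
    dark = m[m <= t]                                # region S?2 (below T)
    r = abs(bright.mean() - dark.mean()) / (hi - lo + 1e-12)  # assumed R
    norm = (m - lo) / (hi - lo + 1e-12)             # normalize to [0, 1]
    return norm * r                                 # compress to [0, R]
```

For a two-level background (e.g. cloud vs. ground), the bright cloud region receives weights near R and the dark region weights near 0, which is the qualitative behaviour the claims describe.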
2. The method for fusing a multispectral image and a panchromatic image with thin cloud according to claim 1, wherein the Shearlet decomposition of the first principal component image and of the cloud-free panchromatic image I2 in step 5) is carried out as follows:
5.1) performing a scale decomposition of the image by a non-subsampled Laplacian pyramid transform, decomposing the image into one low-frequency coefficient and several high-frequency coefficients, the number of high-frequency coefficients being equal to the number of decomposition levels, which is set to 4 in the present invention;
5.2) decomposing each high-frequency coefficient into several directions by Shear filters; in the present invention the high-frequency coefficients are decomposed, from the coarsest scale to the finest, into 6, 6, 10 and 10 directions, respectively, yielding the directional subband coefficients.
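The background-estimation and cloud-removal pipeline of claim steps 2) and 3) can be sketched for a single band as follows. The down-sampling factor and Gaussian sigma are assumed parameters (the patent does not fix them in the translated claims), and `remove_thin_cloud` is an illustrative name:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def remove_thin_cloud(band, down=4, sigma=2.0):
    """Thin-cloud removal for one band (steps 2-3 sketch):
    estimate the slowly varying cloud 'background' by down-sampling,
    Gaussian low-pass filtering and up-sampling, then subtract it
    and restore the band's mean grey level.
    """
    band = band.astype(np.float64)
    small = band[::down, ::down]                  # down-sample
    blurred = gaussian_filter(small, sigma)       # Gaussian low-pass
    background = zoom(blurred, down, order=1)     # up-sample
    background = background[:band.shape[0], :band.shape[1]]
    cleaned = band - background + band.mean()     # subtract, restore mean
    return cleaned, background
```

Because the background captures only low-frequency content, subtracting it removes the smooth cloud veil while the added mean keeps the band's overall brightness; high-frequency scene detail passes through unchanged.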
CN201310030819.8A 2013-01-27 2013-01-27 Method for fusing multispectral and full-color images with light cloud Active CN103049898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310030819.8A CN103049898B (en) 2013-01-27 2013-01-27 Method for fusing multispectral and full-color images with light cloud

Publications (2)

Publication Number Publication Date
CN103049898A CN103049898A (en) 2013-04-17
CN103049898B true CN103049898B (en) 2015-04-22

Family

ID=48062528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310030819.8A Active CN103049898B (en) 2013-01-27 2013-01-27 Method for fusing multispectral and full-color images with light cloud

Country Status (1)

Country Link
CN (1) CN103049898B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103293168B (en) * 2013-05-28 2015-01-28 陕西科技大学 Fruit surface defect detection method based on visual saliency
CN103295214B (en) * 2013-06-28 2016-01-20 深圳大学 Cloudless MODIS remote sensing images based on color character generate method and system
CN103456018B (en) * 2013-09-08 2017-01-18 西安电子科技大学 Remote sensing image change detection method based on fusion and PCA kernel fuzzy clustering
CN104517264B (en) * 2013-09-30 2018-03-16 华为终端(东莞)有限公司 Image processing method and device
CN103793883B (en) * 2013-12-11 2016-11-09 北京工业大学 A Super-resolution Restoration Method of Imaging Spectral Image Based on Principal Component Analysis
CN104616253B (en) * 2015-01-09 2017-05-10 电子科技大学 Light cloud removing method of optical remote sensing image utilizing independent component analysis technology
CN106127712B (en) * 2016-07-01 2020-03-31 上海联影医疗科技有限公司 Image enhancement method and device
GB2548767B (en) 2015-12-31 2018-06-13 Shanghai United Imaging Healthcare Co Ltd Methods and systems for image processing
CN107622479B (en) * 2017-09-04 2020-11-27 南京理工大学 A Contour Wave Subband Adaptive Detail Injection Method for Panchromatic Sharpening of Multispectral Images
CN107958450B (en) * 2017-12-15 2021-05-04 武汉大学 Panchromatic multispectral image fusion method and system based on adaptive Gaussian filtering
CN109118462A (en) * 2018-07-16 2019-01-01 中国科学院东北地理与农业生态研究所 A kind of remote sensing image fusing method
CN111507454B (en) * 2019-01-30 2022-09-06 兰州交通大学 Improved cross cortical neural network model for remote sensing image fusion
CN112565756B (en) * 2020-11-26 2021-09-03 西安电子科技大学 Cloud-containing remote sensing image compression method based on quantization strategy
CN113436123B (en) * 2021-06-22 2022-02-01 宁波大学 High-resolution SAR and low-resolution multispectral image fusion method based on cloud removal-resolution improvement cooperation
CN113570536B (en) * 2021-07-31 2022-02-01 中国人民解放军61646部队 Panchromatic and multispectral image real-time fusion method based on CPU and GPU cooperative processing

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2008070544A2 (en) * 2006-12-01 2008-06-12 Harris Corporation Structured smoothing for superresolution of multispectral imagery based on registered panchromatic image
CN101359399A (en) * 2008-09-19 2009-02-04 常州工学院 Optical Image Declouding Method
CN102509262A (en) * 2011-10-17 2012-06-20 中煤地航测遥感局有限公司 Method for removing thin cloud of remote sensing image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8737733B1 (en) * 2011-04-22 2014-05-27 Digitalglobe, Inc. Hyperspherical pan sharpening

Similar Documents

Publication Publication Date Title
CN103049898B (en) Method for fusing multispectral and full-color images with light cloud
Shah et al. An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets
CN111104889B (en) A water body remote sensing recognition method based on U-net
CN106251368B (en) The fusion method of SAR image and multispectral image based on BEMD
Bhatnagar et al. An image fusion framework based on human visual system in framelet domain
CN104537678B (en) A kind of method that cloud and mist is removed in the remote sensing images from single width
Yadav et al. Fog removal techniques from images: A comparative review and future directions
CN103116881A (en) Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion
CN107358260A (en) A kind of Classification of Multispectral Images method based on surface wave CNN
Li et al. Multifocus image fusion based on redundant wavelet transform
CN111738916B (en) Remote sensing image generalized shadow spectrum reconstruction method and system based on statistics
CN113191325B (en) Image fusion method, system and application thereof
CN113793289A (en) Fuzzy fusion method of multispectral image and panchromatic image based on CNN and NSCT
CN109961408B (en) Photon counting image denoising method based on NSCT and block matching filtering
Elhabiby et al. Second generation curvelet transforms Vs Wavelet transforms and Canny edge detector for edge detection from worldview-2 data
Yehia et al. Fusion of high-resolution SAR and optical imageries based on a wavelet transform and IHS integrated algorithm
CN113436123B (en) High-resolution SAR and low-resolution multispectral image fusion method based on cloud removal-resolution improvement cooperation
Wang et al. Poissonian blurred hyperspectral imagery denoising based on variable splitting and penalty technique
Li et al. Multiscale spatial-frequency domain dynamic pansharpening of remote sensing images integrated with wavelet transform
Chakravortty et al. Fusion of hyperspectral and multispectral image data for enhancement of spectral and spatial resolution
Jaswanth et al. Change detection of sar images based on convolution neural network with curvelet transform
CN112508829B (en) A pan-sharpening method based on shearlet transform
Ourabia et al. Pansharpening Methods Based on a Redundant Contourlet Transform
Xue et al. A fusion method of multi-spectral image and panchromatic image based on NSCT and IHS transform
Yang et al. Optical Remote Sensing Image Optimized Dehazing Algorithm Based On Hot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant