Background Art
Since the United States launched its Earth Resources Satellite program in 1972, satellite technology has developed rapidly worldwide. To date, the major developed countries and several developing countries, including China and India, have launched hundreds of satellites, with operating bands spanning visible light and the invisible near infrared, short-wave infrared, mid-infrared, far infrared, and microwave frequency domains. Every day these satellites transmit massive volumes of remote sensing data covering the whole globe to ground stations and mobile receiving stations distributed around the world. Because of their spatial continuity and temporal sequentiality, satellite remote sensing data remain, up to the present, the only resource that can provide dynamic observation data on a global scale.
Most of the satellite remote sensing data in widespread use at present are optical images, which generally offer rich information, high resolution, and stable imaging. However, because cloud and haze are commonly present in the atmosphere during imaging, remote sensing satellites often record cloud and haze information together with the image data. Thick cloud and haze block the sensor entirely, so no ground-object information can be obtained for the covered area, which severely degrades the quality of optical remote sensing imagery. Under thin cloud and haze, the sensor can still receive part of the ground-object information, but this incomplete information seriously impairs target identification, classification, and the accuracy of ground-object information extraction.
To effectively raise the utilization of optical remote sensing imagery, various methods exist to reduce or remove the effect of thin cloud and haze on images. The main methods currently in use are the multispectral image method, the multi-image superposition method, image-enhancement-based methods, and image-restoration-based methods.
The multispectral method uses a special sensor, or exploits certain bands of a multispectral image that are especially sensitive to cloud and haze, to detect the cloud and haze information; this information is then subtracted from the original image to obtain a de-clouded image. The method can effectively remove cloud and haze from the image, but it requires adding sensors or bands sensitive to cloud and haze, which raises the cost and limits the application of this kind of method.
The multi-image superposition method superposes images of the same area taken in different seasons and at different times to obtain a composite image. Because the ground objects of the same area usually change over time, the superposed image is seriously degraded for interpretation.
Image-restoration-based cloud and haze removal analyzes the mechanism and process by which thin cloud degrades an image, derives the corresponding inverse process, and thereby recovers the original image. However, treating the problem from the image-restoration angle requires knowledge of the degradation mechanism and process; because the degree of degradation depends on the distance between the object and the camera, relevant auxiliary data such as the atmospheric extinction coefficient and the object-to-camera distance must be incorporated during processing. The cost of acquiring this information is high, so the approach is difficult to apply widely.
Image-enhancement-based cloud and haze removal usually takes one of two forms: homomorphic filtering or low-frequency filtering. Homomorphic filtering compresses the dynamic range of the image and boosts its high-frequency components to remove thin cloud; it treats image detail without differentiation, using a single filter for enhancement, so its adaptivity is poor. Low-frequency filtering obtains a background image by Gaussian low-pass filtering the image and subtracts the background image from the original image to remove thin cloud. This method removes thin cloud and haze to some extent, but it also weakens the background of the image; although compensation methods have later been applied to improve it, the results show that it still loses information in the de-clouded image.
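As an illustrative sketch of the low-frequency filtering approach described above (Gaussian low-pass background estimation followed by subtraction), under assumed parameters; the filter width and the synthetic scene below are not from the source:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowfreq_thin_cloud_removal(img, sigma=15.0):
    """Prior-art low-frequency filtering sketch: estimate the slowly
    varying cloud layer with a Gaussian low-pass filter, subtract it,
    and add back the image mean to preserve overall brightness.
    (sigma is an assumed, scene-dependent parameter.)"""
    img = img.astype(np.float64)
    background = gaussian_filter(img, sigma=sigma)  # low-frequency cloud estimate
    return img - background + img.mean()

# synthetic scene: ground detail plus a smooth "thin cloud" gradient
rng = np.random.default_rng(0)
ground = rng.uniform(0, 50, (64, 64))
cloud = np.linspace(0, 100, 64)[None, :].repeat(64, axis=0)
cleaned = lowfreq_thin_cloud_removal(ground + cloud)
```

As the source notes, the subtraction also attenuates genuine low-frequency ground background, which is the information loss this invention aims to reduce.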
Because the image obtained by a single sensor rarely satisfies practical needs in terms of spectral information and resolution, the redundancy between sensors' image data must be exploited: fusion technology combines the image information of different sensors into one image. For example, a multispectral image has good spectral information but low spatial resolution, while a panchromatic image has high spatial resolution but insufficient spectral information; fusion of the two yields a high-resolution multispectral image that describes ground objects more accurately. When cloud and haze are present in the multispectral and panchromatic images, however, fusion technology cannot remove the cloud and haze information from the image. Current practice for fusing a multispectral and a panchromatic image containing thin cloud is first to remove the cloud and haze from each of the two images separately and then perform fusion. When removing cloud from each single-sensor image, the image-enhancement method should be chosen from the standpoint of information-acquisition cost, but image enhancement inevitably causes loss of image information. Therefore, exploiting the redundancy between the multispectral and panchromatic images to find a more effective fusion method for thin-cloud images is an urgent problem.
Summary of the Invention
The object of the invention is to address the above deficiency of the prior art in fusing thin-cloud images by proposing a fusion method for multispectral and panchromatic images containing thin cloud, so as to reduce the interference of the cloud layer with the information of thin-cloud regions after the multispectral and panchromatic images are fused, reduce the information loss of the fused image, improve the sharpness of the fused image, and describe ground objects more accurately.
To achieve the above object, the main content of the present invention is as follows:
1) Up-sample the multispectral image containing thin cloud so that the up-sampled multispectral image has the same size as the panchromatic image containing thin cloud;
2) Apply down-sampling, Gaussian low-pass filtering, and up-sampling in turn to the sampled multispectral and panchromatic images, obtaining the background image B1 of the multispectral image and the background image B2 of the panchromatic image, where the number of bands of the background image of the multispectral image equals the number of bands of the multispectral image;
3) Perform the thin-cloud removal operation on each band of the multispectral image and on the panchromatic image, obtaining the de-clouded multispectral image I1 and the de-clouded panchromatic image I2;
4) Apply the PCA transform to the de-clouded multispectral image I1 to obtain the component images PC1, PC2, ..., PCn, where PCi denotes the i-th principal component obtained from the PCA transform of the multispectral image, i = 1, 2, ..., n, and n is the total number of components;
5) Apply the Shearlet transform separately to the first principal component image PC1 obtained from the PCA transform and to the de-clouded panchromatic image I2, decomposing PC1 into one low-frequency coefficient x1 and m directional subband coefficients y1, y2, ..., ym, and decomposing I2 into one low-frequency coefficient x2 and m directional subband coefficients z1, z2, ..., zm;
6) Average the gray values of the background images of the bands of the multispectral image obtained in step 2) to obtain the synthetic background image Bs of the multispectral image;
7) Establish the weight matrix w1 from the synthetic background image Bs of the multispectral image, and the weight matrix w2 from the background image B2 of the panchromatic image;
8) Multiply each directional subband coefficient z1, z2, ..., zm of the panchromatic image I2 by the weight matrices w1 and w2 to obtain the directional subband coefficients l1, l2, ..., lm of the fused first-component image, and take the low-frequency coefficient x1 of the first principal component image as the low-frequency coefficient k of the fused first principal component image;
9) Apply the inverse Shearlet transform to the low-frequency coefficient k and the m directional subband coefficients l1, l2, ..., lm of the fused first principal component image, obtaining the fused first principal component image PC1';
10) Form a new data set from the fused first principal component image PC1' and the other component images PC2, ..., PCn obtained in step 4), and apply the inverse PCA transform to this data set to obtain the fused image I.
Compared with the prior art, the present invention has the following advantages:
(a) The present invention obtains the background image from an up-sampled image: by sampling the up-sampled image, an approximate background image of the original image is obtained. Since the information of the up-sampled image is very smooth, the approximate background image obtained by sampling matches the characteristics of the real background well. This overcomes the high computational time complexity of filtering in traditional algorithms and thus raises the speed of thin-cloud removal.
(b) The present invention uses the background images to establish weight matrices that enhance the detail information of the thin-cloud regions of the fused image, exploiting the fact that thin-cloud regions have higher gray values in the image. This overcomes the loss of thin-cloud-region information in traditional fused images and the resulting blurring after fusion, and thus improves the sharpness of the fused image.
Detailed Description of Embodiments
With reference to Fig. 1, the steps of the method of the present invention for fusing a multispectral and a panchromatic image containing thin cloud are as follows:
Step 1: Up-sample the multispectral image containing thin cloud so that the up-sampled multispectral image has the same size as the panchromatic image containing thin cloud, where the original multispectral image, shown in Fig. 2(a), is denoted Y1, and the original panchromatic image, shown in Fig. 2(b), is denoted Y2.
Step 2: Sample and Gaussian low-pass filter the up-sampled multispectral image and the original panchromatic image, obtaining the background image B1 of the thin-cloud multispectral image and the background image B2 of the panchromatic image.
2.1) Down-sample the up-sampled multispectral image and the original panchromatic image; the down-sampled multispectral and panchromatic images are 1/3 the size of the original panchromatic image;
2.2) Apply Gaussian low-pass filtering with a Gaussian filter to the down-sampled multispectral and panchromatic images, obtaining the background images of the down-sampled multispectral and panchromatic images; the number of bands of the down-sampled multispectral background image equals the number of bands of the multispectral image;
2.3) Up-sample the down-sampled background images, obtaining the background image B1 of the multispectral image and the background image B2 of the panchromatic image; the resulting images have the same size as the original panchromatic image.
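The down-sample, Gaussian low-pass, up-sample sequence of step 2 can be sketched as follows. The 1/3 size factor follows the text; the Gaussian sigma, the interpolation order, and the stand-in image are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def estimate_background(img, factor=3, sigma=2.0):
    """Step-2 sketch for a single band: down-sample to 1/factor of the
    size, Gaussian low-pass filter, then up-sample back to exactly the
    original size. (sigma and the interpolation order are assumed.)"""
    small = zoom(img, 1.0 / factor, order=1)        # down-sample
    small_bg = gaussian_filter(small, sigma=sigma)  # Gaussian low-pass
    bg = zoom(small_bg, (img.shape[0] / small_bg.shape[0],
                         img.shape[1] / small_bg.shape[1]), order=1)
    return bg

pan = np.linspace(0, 1, 90 * 90).reshape(90, 90)   # stand-in panchromatic image
B2 = estimate_background(pan)
```

Filtering at 1/3 resolution and re-expanding keeps only the smooth, low-frequency content, which is why the result approximates the background (and the thin cloud layer) while costing far less than filtering at full resolution.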
Step 3: Perform the thin-cloud removal operation on each band of the original multispectral image Y1 and on the original panchromatic image.
3.1) Compute the gray mean of each band image of the original multispectral image Y1 and the gray mean of the original panchromatic image Y2, i.e., sum the gray values of all pixels in the image and divide by the total number of pixels of the image;
3.2) For each band of the original multispectral image Y1, subtract the corresponding band of the background image B1 and add the gray mean of the corresponding band, obtaining the de-clouded multispectral image I1;
3.3) From the original panchromatic image Y2, subtract its background image B2 and add its gray mean, obtaining the de-clouded panchromatic image I2.
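The per-band operation of step 3 (subtract the background, add back the band's gray mean) can be sketched directly; the test band and the Gaussian sigma standing in for step 2's background are assumed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_thin_cloud(band, background):
    """Step-3 sketch: subtract the band's background image and add back
    the band's gray mean so that overall brightness is preserved."""
    band = band.astype(np.float64)
    return band - background + band.mean()

rng = np.random.default_rng(1)
band = rng.uniform(0, 255, (32, 32))
background = gaussian_filter(band, sigma=8.0)   # stand-in for one band of B1
I1_band = remove_thin_cloud(band, background)
```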
Step 4: Apply the PCA transform to the de-clouded multispectral image I1 to obtain the component images PC1, PC2, ..., PCn, where PCi denotes the i-th principal component obtained from the PCA transform of the multispectral image, i = 1, 2, ..., n, and n is the total number of components.
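A minimal PCA sketch consistent with step 4, assuming the usual band-covariance formulation (the source does not fix a particular implementation); the band means and eigenvector matrix are retained because the inverse transform of step 11 needs them:

```python
import numpy as np

def pca_transform(ms):
    """Step-4 sketch: PCA of a multispectral image of shape (n_bands, H, W).
    Returns the principal-component images PC_1..PC_n ordered by variance,
    plus the band means and eigenvector matrix needed for the inverse."""
    n, h, w = ms.shape
    X = ms.reshape(n, -1).astype(np.float64)        # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)             # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                  # descending variance
    vecs = vecs[:, order]
    pcs = (vecs.T @ Xc).reshape(n, h, w)            # pcs[0] is PC_1
    return pcs, mean, vecs

# toy 3-band image with strongly correlated bands
rng = np.random.default_rng(2)
base = rng.normal(size=(16, 16))
ms = np.stack([base * s + rng.normal(scale=0.1, size=(16, 16))
               for s in (1.0, 0.8, 0.6)])
pcs, mean, vecs = pca_transform(ms)
```

PC_1 carries most of the spatial structure shared by the bands, which is why the method injects the panchromatic detail into the first component only.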
Step 5: Apply the Shearlet transform separately to the first principal component image PC1 obtained from the PCA transform and to the de-clouded panchromatic image I2, decomposing PC1 into one low-frequency coefficient x1 and m directional subband coefficients y1, y2, ..., ym, and decomposing I2 into one low-frequency coefficient x2 and m directional subband coefficients z1, z2, ..., zm.
The Shearlet transform is one of the new generation of multiscale geometric analysis tools; the concrete decomposition method is as follows:
5.1) Perform scale decomposition of the images by the non-subsampled Laplacian pyramid transform, decomposing the first principal component image PC1 and the panchromatic image I2 each into one low-frequency coefficient and several high-frequency coefficients, the number of high-frequency coefficients being equal to the number of decomposition levels. The number of scale-decomposition levels is unrestricted; to give the decomposed low-frequency coefficient higher correlation with the panchromatic image, this example uses 4 levels;
5.2) Decompose each high-frequency coefficient into several directions with shear filters; the choice of the number of directions is unrestricted. Considering computational time complexity and the needs of the algorithm, this example decomposes the 4 high-frequency coefficients, from coarse scale to fine scale, into 6, 6, 10, and 10 directions respectively, obtaining 22 directional subband coefficients in total.
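A true Shearlet decomposition requires a dedicated toolbox (ShearLab-style filter banks); as a structural stand-in for step 5.1 only, the following non-subsampled Laplacian-style pyramid shows the scale-decomposition stage, whose coefficients keep full resolution and sum back to the image exactly. The per-level sigma schedule is assumed, and the directional shear filtering of step 5.2 is not implemented here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nonsubsampled_pyramid(img, levels=4):
    """Step-5.1 sketch: non-subsampled Laplacian-style scale decomposition.
    Every level keeps the full image resolution, so the original image is
    recovered by summing the low-frequency coefficient and all
    high-frequency coefficients."""
    highs, current = [], img.astype(np.float64)
    for level in range(levels):
        low = gaussian_filter(current, sigma=2.0 ** level)  # coarser each level
        highs.append(current - low)                         # band-pass detail
        current = low
    return current, highs                                   # x (low), y_1..y_levels

rng = np.random.default_rng(3)
img = rng.uniform(0, 1, (32, 32))
low, highs = nonsubsampled_pyramid(img, levels=4)
```

The non-subsampled property matters for step 9: because every coefficient has the same size as the image, the weight matrices w1 and w2 can be applied to the subbands element-wise.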
Step 6: From the background image B1 of the multispectral image obtained in step 2, compute the gray average over all bands, obtaining the synthetic background image Bs of the multispectral image. The synthetic background image is obtained by the following formula:

Bs(p, q) = (1/N) · Σ (K = 1 to N) B1,K(p, q)

where B1,K(p, q) denotes the gray value of the K-th band of the background image B1 at coordinates (p, q), Bs(p, q) denotes the gray value of the synthetic background image at coordinates (p, q), and N is the number of bands of the multispectral image.
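The band-averaging formula of step 6 is a one-line pixelwise mean over the band axis; the toy 3-band background image is assumed for illustration:

```python
import numpy as np

def synthetic_background(B1):
    """Step-6 sketch: Bs(p, q) = (1/N) * sum over the N bands of B1 at (p, q),
    where B1 has shape (N, H, W)."""
    return B1.mean(axis=0)

B1 = np.stack([np.full((4, 4), 10.0),
               np.full((4, 4), 20.0),
               np.full((4, 4), 60.0)])   # toy 3-band background image
Bs = synthetic_background(B1)
```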
Step 7: From the synthetic background image Bs of the multispectral image, establish the weight matrix w1 of the synthetic background image.
7.1) Create a buffer matrix M1 for the synthetic background image Bs, with all entries initialized to zero and the same size as the background image of the multispectral image; copy the gray value of each position of the synthetic background image Bs into the corresponding position of the buffer matrix M1;
7.2) Find the maximum and minimum of all values in the buffer matrix M1 of the synthetic background image, denoted M1max and M1min respectively;
7.3) From the maximum M1max and the minimum M1min, calculate the threshold T1 of the synthetic background image;
7.4) Compare the gray value of each pixel of the synthetic background image Bs with the threshold T1: pixels whose gray values are greater than T1 form one class, whose region is denoted S11, and pixels whose gray values are less than T1 form the other class, whose region is denoted S12;
7.5) Compute the means of all pixel gray values in regions S11 and S12, denoted u11 and u12 respectively, and from them calculate the threshold R1 of the compression range of the buffer matrix M1;
7.6) Normalize all values in the buffer matrix M1, obtaining a normalized matrix N1 whose values all lie in the interval [0, 1];
7.7) Multiply each value in the normalized matrix N1 by the compression-range threshold R1, so that all values in N1 are compressed into the range [0, R1]; the compressed matrix is the weight matrix w1.
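The exact formulas for the threshold T1 (from the maximum and minimum) and the compression-range threshold R1 (from the two class means) appear only in the source drawings and are not reproduced here, so the sketch below substitutes an assumed midpoint threshold and an assumed class-mean ratio; it illustrates the structure of steps 7.1) to 7.7), not the claimed formulas:

```python
import numpy as np

def weight_matrix(bg, R=None):
    """Step-7 structural sketch. The midpoint threshold and the
    class-mean-ratio compression range are ASSUMED stand-ins for the
    formulas given in the source drawings."""
    M = bg.astype(np.float64).copy()                 # buffer matrix
    T = (M.max() + M.min()) / 2.0                    # assumed threshold rule
    S1, S2 = M[M > T], M[M <= T]                     # bright / dark classes
    if R is None:                                    # assumed compression rule
        R = S2.mean() / S1.mean() if S1.size and S2.size else 1.0
    Mn = (M - M.min()) / (M.max() - M.min())         # normalize to [0, 1]
    return Mn * R                                    # compress to [0, R]

bg = np.arange(16, dtype=float).reshape(4, 4)        # toy background image
w = weight_matrix(bg)
```

Whatever the exact formulas, the shape of the construction is the point: bright (cloudy) background regions map to larger weights, and compressing into [0, R] keeps the weighting from over-amplifying the injected panchromatic detail.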
Step 8: From the background image B2 of the panchromatic image, establish the second weight matrix w2.
8.1) Create a buffer matrix M2 for the background image B2 of the panchromatic image, with all entries initialized to zero and the same size as the background image of the panchromatic image; copy the gray value of each position of B2 into the corresponding position of its buffer matrix M2;
8.2) Find the maximum and minimum of all values in the buffer matrix M2 of the panchromatic background image B2, denoted M2max and M2min respectively;
8.3) From the maximum M2max and the minimum M2min, calculate the threshold T2 of the background image B2 of the panchromatic image;
8.4) Compare the gray value of each pixel of the background image B2 with the threshold T2: pixels whose gray values are greater than T2 form one class, whose region is denoted S21, and pixels whose gray values are less than T2 form the other class, whose region is denoted S22;
8.5) Compute the means of all pixel gray values in regions S21 and S22, denoted u21 and u22 respectively, and from them calculate the threshold R2 of the compression range of the buffer matrix M2;
8.6) Normalize all values in the buffer matrix M2 of the panchromatic background image B2, obtaining a normalized matrix N2 whose values all lie in the interval [0, 1];
8.7) Multiply each value in the normalized matrix N2 by the compression-range threshold R2, so that all values in N2 are compressed into the range [0, R2]; the compressed matrix is the weight matrix w2.
Step 9: Multiply each directional subband coefficient z1, z2, ..., zm of the panchromatic image I2 by the weight matrices w1 and w2, obtaining the directional subband coefficients l1, l2, ..., lm of the fused first-component image, and copy the low-frequency coefficient x1 of the first principal component image to the low-frequency coefficient k of the fused first principal component image.
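The fusion rule of step 9 reduces to an element-wise product per subband, l_i = w1 * w2 * z_i, with the low-frequency coefficient copied unchanged; the toy coefficients below are assumed:

```python
import numpy as np

def fuse_subbands(z_list, w1, w2, x1):
    """Step-9 sketch: weight each panchromatic directional subband z_i
    element-wise by both weight matrices, and copy x1 to k."""
    l_list = [w1 * w2 * z for z in z_list]
    k = x1.copy()
    return k, l_list

rng = np.random.default_rng(4)
z_list = [rng.normal(size=(8, 8)) for _ in range(3)]  # toy subbands
w1 = np.full((8, 8), 0.5)                             # toy weight matrices
w2 = np.full((8, 8), 0.4)
x1 = rng.normal(size=(8, 8))                          # toy low-frequency coeff.
k, l_list = fuse_subbands(z_list, w1, w2, x1)
```

Keeping x1 as k preserves the spectral character of the multispectral first component, while the weighted panchromatic subbands supply the spatial detail, boosted in the bright thin-cloud regions.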
Step 10: Apply the inverse Shearlet transform to the low-frequency coefficient k and the m directional subband coefficients l1, l2, ..., lm of the fused first principal component image, obtaining the fused first principal component image PC1'.
Step 11: Form a new data set from the fused first principal component image PC1' and the other component images PC2, ..., PCn obtained in step 4, and apply the inverse PCA transform to this data set, obtaining the fused image I, as shown in Fig. 3, where Fig. 3(a) is the fused image enhanced by the weight matrices and Fig. 3(b) is a partial enlargement of Fig. 3(a).
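The inverse PCA of step 11 can be sketched as follows, assuming the forward transform kept the band means and the orthogonal eigenvector matrix V, so that X = V · PC + mean; the rotation matrix and toy image below are assumptions for the round-trip demonstration:

```python
import numpy as np

def inverse_pca(pcs, mean, vecs):
    """Step-11 sketch: stack the (fused) first principal component with the
    remaining components and invert the PCA via X = V @ PC + mean, where V
    is the orthogonal eigenvector matrix from the forward transform."""
    n = pcs.shape[0]
    flat = vecs @ pcs.reshape(n, -1) + mean
    return flat.reshape(pcs.shape)

# round-trip demo with a toy 2-band image and an orthogonal basis
theta = 0.3
vecs = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(5)
ms = rng.normal(size=(2, 8, 8))
mean = ms.reshape(2, -1).mean(axis=1, keepdims=True)
pcs = (vecs.T @ (ms.reshape(2, -1) - mean)).reshape(2, 8, 8)
fused = inverse_pca(pcs, mean, vecs)
```

In the method itself, pcs[0] would be replaced by the fused PC1' from step 10 before inversion, so the panchromatic detail and weight enhancement propagate into every band of the fused image I.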
The above description is an example of the present invention. Obviously, those skilled in the art, having understood the content and principle of the present invention, may make corrections and changes in form and detail without departing from the principle and structure of the invention; such corrections and changes based on the inventive concept remain within the protection scope of the claims of the present invention.