CN110472658A - Hierarchical fusion and extraction method for multi-source detection of a moving target - Google Patents
Hierarchical fusion and extraction method for multi-source detection of a moving target
- Publication number: CN110472658A
- Application number: CN201910602605.0A
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06F18/253—Fusion techniques of extracted features
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/70—Denoising; Smoothing
- G06T7/11—Region-based segmentation
- G06T7/13—Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/155—Segmentation; Edge detection involving morphological operators
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
Abstract
The invention belongs to the field of multi-source data hierarchical fusion and extraction based on multi-source sensors, and in particular relates to a hierarchical fusion and extraction method for multi-source detection of a moving target. A visible light image and an infrared image are registered and fused to obtain a first-layer fused image; the first-layer fused image is then registered with a hyperspectral image, and the registered pixels are weakened according to ground-feature classification regions to obtain a second-layer fused image; target detection is performed on the second-layer fused image to obtain the position of the target in the image, the target is perceived to obtain its longitude and latitude in the real environment, and the attitude of the aircraft is adjusted to track the target, realizing continuous detection and perception of the target. The invention combines multiple image sources, effectively fuses their signal characteristics through image fusion, removes redundant duplicated data, increases the accuracy of target detection, and improves detection efficiency.
Description
Technical Field
The invention belongs to the technical field of multi-source data hierarchical fusion and extraction based on a multi-source sensor, and particularly relates to a hierarchical fusion and extraction method for multi-source detection of a moving target.
Background
With the progress and development of science and technology, the mass of payloads has increased markedly; a larger mass means that more sensing devices can be carried, the computing and information-storage capabilities of the payload have also improved significantly, and the computation that can be executed is more complex. A spacecraft payload can carry various detection devices such as a visible light sensor, an infrared sensor, and a hyperspectral sensor, which respectively provide visible light, infrared, and hyperspectral sensing data images.
The precondition for tracking a target in an image is target detection, so establishing a fast, accurate, and effective target detection method is a key problem. Image recognition applies the methods and techniques of pattern recognition to the image domain. Pattern recognition refers to processing and analyzing various forms of information that characterize an object or phenomenon in order to describe, recognize, classify, and interpret it; that is, recognition and classification are carried out by a computer. Applied to images, this idea realizes an intelligence-like cognition of perceived things. The main idea of image recognition is to build an information base of object characteristics, collect features from an unknown image, compare the collected features with the known feature information base, and consider a target found and recognized when the similarity exceeds a certain threshold.
Disclosure of Invention
The invention aims to provide a hierarchical fusion and extraction method for multi-source detection of a moving target.
The purpose of the invention is realized by the following technical scheme: the method comprises the following steps:
step 1: reading an image input by a multi-source image sensor;
step 2: carrying out image registration on the visible light image and the infrared light image, and fusing the registered images to obtain a first-layer fused image;
step 3: carrying out image registration on the first-layer fused image and the hyperspectral image, and weakening the registered image pixels according to the ground-feature classification regions to obtain a second-layer fused image;
step 4: detecting the target in the second-layer fused image to obtain the position information of the target in the image, perceiving the target to obtain its longitude and latitude in the real environment, and adjusting the attitude of the aircraft to track the target, realizing continuous detection and perception of the target.
The present invention may further comprise:
the image registration method in step 2 and step 3 specifically comprises the following steps:
step 2.1: extracting an edge contour of the image to obtain an edge contour image of the original image;
extracting the contour of the image using a phase congruency (phase consistency) algorithm, where the phase congruency function is

$$PC(x)=\max_{\bar{\varphi}(x)\in[0,2\pi)}\frac{\sum_{n}A_{n}\cos\left(\varphi_{n}(x)-\bar{\varphi}(x)\right)}{\sum_{n}A_{n}}$$

where A_n is the amplitude on scale n; φ_n(x) is the phase of the n-th Fourier component at x; and φ̄(x) is the weighted average of the local phase angles of the Fourier components when PC(x) takes its maximum value at x;
step 2.2: establishing a characteristic corner point with scale, position and direction information in the edge contour image, wherein the specific method comprises the following steps:
step 2.2.1: constructing a nonlinear scale space, so that the characteristic angular points have scale information;
Gaussian filtering is applied to the edge contour image, the image gray-level histogram is obtained, and the contrast factor k is derived from it; a set of evolution (computation) times is converted, and all layers of the nonlinear filtered image are then obtained with an additive operator splitting algorithm:

$$L^{i+1}=\left(E-(t_{i+1}-t_{i})\sum_{l=1}^{m}A_{l}(L^{i})\right)^{-1}L^{i}$$

where A_l is the conduction matrix of the image I in dimension l; t_i is the computation time, and only one set of computation times is used to construct the nonlinear scale space at a time; E is the identity matrix;
step 2.2.2: detecting characteristic angular points to obtain characteristic angular point position information;
moving a local window point by point in an edge contour image of a nonlinear scale space, and calculating pixel values in the window to judge whether the window is an angular point;
step 2.2.3: calculating the direction information of the feature corners;
The coordinates of the feature corner p(i) in the image are (x(i), y(i)); two points p(i−k) and p(i+k) are selected in its neighborhood, each at a distance of k points from p(i) along the contour, and T is the tangent at p(i). The principal direction of the feature corner p(i) is the angle θ_feature between the tangent T and the positive x-axis, computed as

$$\theta_{\text{feature}}=\arctan\frac{y(i+k)-y(i-k)}{x(i+k)-x(i-k)}$$
step 2.3: establishing a shape description matrix;
Let the feature point set be P = {p_1, p_2, ..., p_n}, p_i ∈ R². In an r × r neighborhood centered on a feature point p(i), a polar coordinate system is established with p(i) as the origin; 360° is divided equally into 12 sectors, and five concentric circles of increasing radius are drawn in turn, yielding 60 small regions. The number of feature points falling in each cell is counted and the shape histogram h_i of the point p_i is computed; the shape histogram h_i of each feature point is its shape context descriptor. The shape histogram h_i is computed as

$$h_i(k)=\#\{q\neq p_i:(q-p_i)\in \mathrm{bin}(k)\}$$

where # denotes the number of feature points in the k-th statistical region (k = 1, 2, ..., 60);
step 2.4: matching the characteristic angular points of the two images to complete image registration;
The nearest-neighbor and next-nearest-neighbor feature points are searched using the Euclidean distance, defined as

$$D=\sqrt{\sum_{i=0}^{59}\left(a_i-b_i\right)^{2}}$$

where a_i is the i-th element of the shape context descriptor R(a_0, a_1, ..., a_59) of an arbitrary feature point of the reference image, and b_i is the i-th element of the shape context descriptor I(b_0, b_1, ..., b_59) of an arbitrary feature point of the image to be registered;
if p is any feature point in one image, let i and j be the nearest-neighbor and next-nearest-neighbor feature points in the image to be registered, with Euclidean distances D_ip and D_jp to p respectively; the ratio D_ip/D_jp is computed and compared with a threshold, and when the ratio is smaller than the given value, p and i are considered a correctly matched pair of feature points, otherwise the match is rejected.
The method for fusing the registered visible light image and infrared light image in the step 2 specifically comprises the following steps:
step 3.1: performing region segmentation on the registered infrared image and separating its suspected region from its background region; the suspected region is the high-brightness region of the infrared image where infrared radiation is strong;
step 3.2: respectively carrying out dual-tree complex wavelet transformation on the infrared image and the visible light image after registration to obtain low-frequency information and high-frequency information of the image, wherein the basic information of the image corresponds to the low-frequency information of a wavelet transformation result, and the detail information of the image corresponds to the high-frequency information of the wavelet transformation result;
step 3.3: fusing the result of image segmentation and the result of wavelet transformation to respectively obtain a low-frequency fused image and a high-frequency fused image;
step 3.4: performing the inverse dual-tree complex wavelet transform on the low-frequency fused image and the high-frequency fused image to obtain the first-layer fused image.
The method for detecting the target of the second-layer fusion image in the step 4 to obtain the position information of the target in the image specifically comprises the following steps:
step 4.1: filtering the second layer fused image;
A window matrix is established and scanned over the two-dimensional image pixel by pixel, and the value at the center of the matrix is replaced by the average of all point values within the window:

$$g(x,y)=\frac{1}{M}\sum_{(s,t)\in S}f(s,t)$$

where f(x, y) is the second-layer fused image to be processed; g(x, y) is the second-layer fused image after filtering; S is the set of neighborhood coordinate points centered on (x, y); and M is the total number of coordinates in the set;
step 4.2: processing the second-layer fusion image after filtering by using a moving average image threshold method to obtain a binary image;
z_{k+1} denotes the point encountered at step k+1 in the scan order; the moving average gray level at the new point is

$$m(k+1)=m(k)+\frac{1}{n}\left(z_{k+1}-z_{k+1-n}\right)$$

where n denotes the number of points used in computing the moving average, with initial value m(1) = z_1/n;
The moving average is calculated for every point in the image, and the segmentation is then performed using

$$g(x,y)=\begin{cases}1, & f(x,y)>K\,m_{xy}\\ 0, & \text{otherwise}\end{cases}$$

where K is a constant in the range [0,1] and m_xy is the moving average of the input image at (x, y);
step 4.3: deleting images with the area smaller than that of the target from the binary image, and removing interference of irrelevant information;
step 4.4: processing the binary image without the interference of the irrelevant information by using image morphology;
step 4.5: establishing a cutting function, and cutting the target from the full image after the image morphology processing to obtain a target image to be checked;
In the morphologically processed image I the background is black (value 0) and the targets to be inspected are white (value 1). Starting from coordinate (0,0) of the image, the first point with pixel value 1 is found; starting from this point, all points with pixel value 1 connected to it are found and collected into a set T_1. Within T_1, the maximum and minimum abscissa x_1max and x_1min and the maximum and minimum ordinate y_1max and y_1min of the point coordinates are found, and the corresponding target image to be inspected is cut out from the rectangle they define. By analogy, all targets to be inspected are found, yielding the images of all targets to be inspected.
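A minimal sketch of this cutting function, assuming Python with NumPy and SciPy; the function name and the use of scipy.ndimage.label for the connected-point search are illustrative choices, not part of the patent:

```python
import numpy as np
from scipy import ndimage

def crop_candidate_targets(binary):
    """Find each connected set of 1-valued pixels, take the min/max row and column
    of its coordinates, and cut the corresponding sub-image out of the binary image."""
    labels, num = ndimage.label(binary)            # label connected regions T_1, T_2, ...
    crops = []
    for region_id in range(1, num + 1):
        ys, xs = np.nonzero(labels == region_id)
        y_min, y_max = ys.min(), ys.max()          # y_1min .. y_1max of the set
        x_min, x_max = xs.min(), xs.max()          # x_1min .. x_1max of the set
        crops.append(binary[y_min:y_max + 1, x_min:x_max + 1])
    return crops                                   # one sub-image per target to be inspected
```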
Step 4.6: finding out the main symmetry axis of the target image to be inspected by using a principal component analysis method, and obtaining the included angle theta between the main symmetry axis of the target image to be inspected and the x axisTo be tested;
The coordinates of each point in the image information of the target to be checked are two-dimensional, and the points are combined into nTo be testedRow 2 column matrix XTo be testedWherein n isTo be testedCalculating X for the number of points in the target image information to be checkedTo be testedCovariance matrix C ofTo be testedAnd continuously calculating the covariance matrix CTo be testedCharacteristic vector V ofTo be tested=(xv,yv) The included angle theta between the main symmetry axis of the target image to be inspected and the x axisTo be testedComprises the following steps:
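A minimal sketch of this principal-axis computation, assuming NumPy; the function name and the use of an eigendecomposition of the 2 × 2 covariance matrix are illustrative:

```python
import numpy as np

def principal_axis_angle(binary_target):
    """Stack the coordinates of the target pixels into an n x 2 matrix, compute its
    covariance matrix and the eigenvector of the largest eigenvalue (x_v, y_v),
    and return the angle between the main symmetry axis and the x-axis."""
    ys, xs = np.nonzero(binary_target)
    X = np.column_stack((xs, ys)).astype(float)    # n x 2 coordinate matrix X_t
    C = np.cov(X, rowvar=False)                    # 2 x 2 covariance matrix C_t
    eigvals, eigvecs = np.linalg.eigh(C)
    xv, yv = eigvecs[:, np.argmax(eigvals)]        # principal eigenvector V_t = (x_v, y_v)
    return np.degrees(np.arctan2(yv, xv))          # theta_t between main axis and x-axis
```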
step 4.7: performing image orientation normalization, rotating the target image to be inspected by θ_t, and removing the newly generated black borders;
step 4.8: carrying out image size normalization processing, and changing the image size of the target image to be checked after the direction normalization processing into the size of a template;
step 4.9: and matching the target image to be checked after the direction normalization and the size normalization with the images in the template library one by one, setting a similarity threshold T, and identifying the image as a target when the similarity degree exceeds the threshold.
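A minimal sketch of steps 4.7 to 4.9, assuming OpenCV; normalized cross-correlation is used here as the similarity measure and 0.7 as the threshold, both assumptions since the exact metric and the value of T are not fixed in this text:

```python
import numpy as np
import cv2

def match_against_templates(candidate, templates, angle_deg, similarity_thresh=0.7):
    """Rotate the candidate by theta_t, resize it to each template's size, and score it
    with normalized cross-correlation; `templates` is assumed to be a dict mapping a
    label to a grayscale template image."""
    h, w = candidate.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    rotated = cv2.warpAffine(candidate, rot, (w, h))               # direction normalization

    best_label, best_score = None, -1.0
    for label, template in templates.items():
        resized = cv2.resize(rotated, (template.shape[1], template.shape[0]))  # size normalization
        score = cv2.matchTemplate(np.float32(resized), np.float32(template),
                                  cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_label, best_score = label, score

    # recognized as a target only when the best similarity exceeds the threshold
    return (best_label, best_score) if best_score >= similarity_thresh else (None, best_score)
```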
The low-frequency fused image in step 3.3 is obtained as follows: according to the position information of the suspected region and the background region obtained from segmenting the infrared image, the visible light image is divided using the same position information. For the suspected region of the low-frequency part of the infrared and visible light images, a dedicated rule determines the fused low-frequency coefficient c_F^l of layer l from the infrared low-frequency coefficient c_ir^l and the visible-light low-frequency coefficient c_vis^l of that layer.
For the background region of the low-frequency part of the infrared and visible light images, a regional variance method is adopted: the larger the regional variance, the larger the change of the gray values of the pixels in the region, the higher the contrast of the region, and the more information the region carries. Pixels in regions with a large variance are therefore given more weight in image fusion, with the rule

$$c_F^{l}=\omega_{ir}\,c_{ir}^{l}+\omega_{vis}\,c_{vis}^{l}$$

where ω_ir is the infrared image weight and ω_vis is the visible light image weight. The weights are computed as follows:

ω_ir = 1 − ω_vis;

where σ_vis and σ_ir are the regional variances of the visible light image and the infrared image respectively, and r is the regional correlation coefficient. The regional variances σ_vis and σ_ir are computed as

$$\sigma_{vis}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{vis}(i,j)-\bar{I}_{vis}\right)^{2},\qquad \sigma_{ir}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{ir}(i,j)-\bar{I}_{ir}\right)^{2}$$

and the correlation coefficient r is computed as

$$r=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{ir}(i,j)-\bar{I}_{ir}\right)\left(I_{vis}(i,j)-\bar{I}_{vis}\right)}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{ir}(i,j)-\bar{I}_{ir}\right)^{2}\;\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{vis}(i,j)-\bar{I}_{vis}\right)^{2}}}$$

where the image size is M × N, Ī_vis is the mean gray value of the visible light image, Ī_ir is the mean gray value of the infrared image, I_ir(i,j) is the infrared image, and I_vis(i,j) is the visible light image.
The invention has the beneficial effects that:
The invention uses visible light, infrared, and hyperspectral images together, combining the high resolution of visible light images, the high target contrast of infrared images, and the ability of hyperspectral images to distinguish man-made objects from natural objects, so target detection decisions are accurate and the influence of the Earth's atmospheric activity on target detection is effectively reduced. By combining multiple image sources, the signal characteristics of the sources are effectively fused through image fusion and redundant duplicated data are removed, effectively increasing target detection accuracy and improving detection efficiency. The invention can track the target and predict position information such as heading, speed, longitude, and latitude.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention.
Fig. 2 is a schematic flow chart of registration and fusion of an infrared image and a visible light image in the invention.
FIG. 3 is a schematic diagram of a fusion process of a first layer fused image and a hyperspectral image according to the present invention.
FIG. 4 is a schematic diagram of the object detection and sensing process of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention belongs to the technical field of multi-source data hierarchical fusion and extraction based on multi-source sensors. A spacecraft payload often carries various detection devices such as a visible light sensor, an infrared sensor, and a hyperspectral sensor, which respectively provide visible light, infrared, and hyperspectral sensing data images; the invention relates in particular to techniques for fusing these multi-source images, recognizing images, and perceiving targets.
In the invention, the image data come from multiple sensors, so a target at the same position often appears in several images, but the information emphasis of each sensor's image differs, and similar background information among different images causes data redundancy. Therefore, before image processing such as image segmentation, target detection, and target perception is performed, the images are fused, and the large amount of information is integrated and screened so that redundant information is removed and the effective data in the different source images is retained. The image fusion technique comprises image registration and image data fusion.
The invention uses a multi-source image sensor and takes into account image distortion caused by factors such as aircraft attitude maneuvers, spatial attitude disturbance, and gimbal vibration, as well as occlusion of image targets caused by atmospheric motion, sea-surface conditions, and other circumstances. By fully exploiting the differing sensitivity of different sensors to different signal characteristics, multi-layer image registration and fusion make the image meet the requirements of image clarity and target saliency and, to a certain extent, reveal occluded targets. A target detection algorithm is then designed on the fused image to achieve multi-range target detection across multiple image sources and, combined with the attitude and orbit information and attitude maneuvers of the aircraft, to realize target discovery, tracking, and position prediction within a certain time and space.
The invention is realized as follows:
step 1: reading an image input by a multi-source image sensor;
step 2: carrying out image registration on the visible light image and the infrared light image, and fusing the registered images to obtain a first-layer fused image;
step 3: carrying out image registration on the first-layer fused image and the hyperspectral image, and weakening the registered image pixels according to the ground-feature classification regions to obtain a second-layer fused image;
step 4: detecting the target in the second-layer fused image to obtain the position information of the target in the image, perceiving the target to obtain its longitude and latitude in the real environment, and adjusting the attitude of the aircraft to track the target, realizing continuous detection and perception of the target.
1) First-layer fusion: an infrared/visible registration and fusion algorithm is established. Exploiting the characteristics that visible light images have high resolution but are strongly affected by the atmosphere, while infrared images have high target contrast but low environmental detail, the fused image keeps the target prominent while retaining some detail of the surrounding environment.
2) Second-layer fusion: a registration and fusion algorithm for the hyperspectral image and the first-layer fused image is established to highlight the target and weaken the influence of irrelevant information.
3) Image recognition is performed on the second-layer fused image, and a target detection algorithm with practical applicability and feasibility is established.
4) On the basis of the target found in step 3), the target is perceived: combining the attitude and orbit information of the aircraft, the Earth's rotation, the sub-satellite point position vector, the optical-axis direction vector, and other information, the longitude and latitude of the target on the Earth are roughly calibrated, the target trajectory is predicted on this basis, and the target is perceived and discovered.
Through these steps, the payload can independently and autonomously discover, identify, and track the target.
The invention has the following advantages. First, the image source is an aircraft, giving global, all-weather, wide-range, and highly timely coverage. Second, the invention uses visible light, infrared, and hyperspectral images together, combining the high resolution of visible light images, the high target contrast of infrared images, and the ability of hyperspectral images to distinguish man-made from natural objects, which substantially improves the target detection rate and accuracy. Third, the invention fully exploits the advantages of the platform and can, to a certain degree, track the target and predict its position (heading, speed, longitude and latitude).
First, because of aircraft platform vibration, focusing deviation, and similar problems, image registration must be addressed before the infrared and visible light images are fused. Taking the target contours in the infrared and visible light images as the core, a data matching and fusion method based on shape descriptors is chosen. Image registration uses a fast approximate nearest-neighbor algorithm; since this algorithm is already widely applied to images of this kind, the registration has good stability and generality. On this basis, to further enhance the robustness of the algorithm, a random sample consensus (RANSAC) algorithm screens the matched feature points, removes erroneous points, and keeps an optimal set.
The image region segmentation method is based on image saliency, and the image fusion algorithm is also based on the segmentation result. Considering the imaging characteristics of the infrared image, the long-wave infrared image collects the long-wave infrared rays emitted by objects, and the infrared radiation of the target region is usually higher than that of the environment; therefore a saliency-enhancement image processing method is used to enhance the target region of the infrared image and weaken the background information, which also suppresses noise to a certain extent. Combined with a dual-tree complex wavelet algorithm, the high-frequency and low-frequency information in the image can be effectively separated; taking into account the imaging characteristics and image quality of the infrared and optical images, different fusion strategies are adopted for the high and low frequencies. This completes the first-layer image fusion.
Second, hyperspectral images have irreplaceable advantages in the analysis of ground objects. The invention adopts a spectral-similarity classification method to improve the accuracy of ground-object target detection and performs another layer of image fusion on the basis of distinguishing ground objects as man-made or natural, thereby weakening interfering background information such as natural objects and providing a solid basis and effective support for finally distinguishing the targets. This completes the second-layer image fusion.
Finally, on the basis of the final fused image, in which redundant information has been effectively weakened and the target highlighted, an image recognition algorithm based on template matching continuously marks the position and trajectory of the target, predicts the target's motion to a certain extent (longitude and latitude, heading, speed, etc.), guides the attitude change of the aircraft, and ensures autonomous, continuous recognition and tracking of the target.
The heterogeneous sensing data used in the invention come from a visible light sensor and an infrared sensor; the characteristics of the images obtained by these two sensors are shown in Table 1 below. Because the payload is mounted on an aircraft platform, atmospheric motion and environmental reflection, refraction, and scattering must be considered when acquiring images, as must the attitude adjustment of the platform, platform disturbance and vibration, and platform stability, together with differences in image position, orientation, scale, and shape caused by photoelectric platform assembly and adjustment, platform performance, and other factors. The aim is therefore to reduce the influence of external factors on the images as much as possible and to bring out the advantages of the two sensors in data acquisition to the greatest extent.
TABLE 1 characteristics of images collected by visible light sensor and infrared sensor
The invention includes an image registration process. The edge contour of the image is first extracted using a phase congruency (phase consistency) algorithm. The phase congruency function is defined as

$$PC(x)=\max_{\bar{\varphi}(x)\in[0,2\pi)}\frac{\sum_{n}A_{n}\cos\left(\varphi_{n}(x)-\bar{\varphi}(x)\right)}{\sum_{n}A_{n}}$$

where A_n is the amplitude on scale n; φ_n(x) is the phase of the n-th Fourier component at x; and φ̄(x) is the weighted average of the local phase angles of the Fourier components when PC(x) takes its maximum value at x. This yields the edge contour image of the original image, which is used in the subsequent processing.
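A minimal 1-D sketch of the phase congruency computation, assuming NumPy and a small log-Gabor filter bank; the filter parameters are illustrative assumptions, and the 2-D contour extraction used in the patent would apply the same idea with oriented filters:

```python
import numpy as np

def phase_congruency_1d(signal, n_scales=4, min_wavelength=6, mult=2.0, sigma_on_f=0.55, eps=1e-8):
    """PC(x) = sum_n A_n cos(phi_n(x) - phibar(x)) / sum_n A_n, computed with a small
    bank of log-Gabor filters; the resultant of the per-scale complex responses gives
    the numerator at the maximizing mean phase angle phibar(x)."""
    F = np.fft.fft(np.asarray(signal, dtype=float))
    freqs = np.fft.fftfreq(len(signal))
    abs_f = np.abs(freqs) + eps

    sum_e = np.zeros(len(signal))     # sum of even (real) filter responses
    sum_o = np.zeros(len(signal))     # sum of odd (imaginary) filter responses
    sum_a = np.zeros(len(signal))     # sum of amplitudes A_n
    wavelength = min_wavelength
    for _ in range(n_scales):
        f0 = 1.0 / wavelength
        log_gabor = np.exp(-(np.log(abs_f / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
        log_gabor[freqs == 0] = 0.0
        resp = np.fft.ifft(F * log_gabor)            # complex response at this scale
        sum_e += resp.real
        sum_o += resp.imag
        sum_a += np.abs(resp)                        # A_n = |response|
        wavelength *= mult

    energy = np.hypot(sum_e, sum_o)                  # |sum of complex responses|
    return energy / (sum_a + eps)                    # phase congruency in [0, 1]
```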
The invention includes a method for establishing feature points. The nonlinear scale space is constructed by applying Gaussian filtering to the input image, obtaining the image gray-level histogram, and deriving the contrast factor k from it. A set of evolution (computation) times is converted, and all layers of the nonlinear filtered image are then obtained with an additive operator splitting algorithm:

$$L^{i+1}=\left(E-(t_{i+1}-t_{i})\sum_{l=1}^{m}A_{l}(L^{i})\right)^{-1}L^{i}$$

where A_l is the conduction matrix of the image I in dimension l, t_i is the computation time, only one set of computation times is used to construct the nonlinear scale space at a time, and E is the identity matrix.
The corner detection method moves a local window point by point over the image and evaluates the pixel values within the window to decide whether it contains a corner. The gray-level change produced after the local window C is translated by (u, v) is

$$E(u,v)=\sum_{(x,y)\in C}w(x,y)\left[I(x+u,y+v)-I(x,y)\right]^{2}$$

where I(x, y) is the gray value of the image at point (x, y) and w(x, y) is a Gaussian weighting function.
To make E(u, v) as large as possible, the expression is expanded with a first-order Taylor approximation:

$$E(u,v)\approx\sum_{(x,y)\in C}w(x,y)\left[I_{x}u+I_{y}v\right]^{2}$$

and then converted to matrix form:

$$E(u,v)\approx\begin{bmatrix}u & v\end{bmatrix}M\begin{bmatrix}u\\ v\end{bmatrix},\qquad M=\sum_{(x,y)\in C}w(x,y)\begin{bmatrix}I_{x}^{2} & I_{x}I_{y}\\ I_{x}I_{y} & I_{y}^{2}\end{bmatrix}$$

where I_x and I_y are the gradient components of the image gray level in the x and y directions.
The local autocorrelation function E (u, v) can be approximated as an elliptical function:
$$E(u,v)\approx Au^{2}+2Cuv+Bv^{2}$$
Points of equal correlation around a point form an elliptic curve on which every point has the same degree of correlation with the center; the eigenvalues λ1, λ2 of the second-order matrix M correspond to the major and minor axes of the ellipse and represent the rate of gray-level change along those directions. When both eigenvalues λ1 and λ2 are large and of comparable magnitude, the point is a corner; when one is large and the other small, it is an edge; when both are small, it is a flat region. To give the corner detection scale invariance, the corner detection algorithm is embedded in the nonlinear scale space introduced above, so that feature points carry scale and position information simultaneously, and the corner response function is obtained as follows:
where σ_{i,S} is a scale factor and the remaining terms are the second-order differentials and the mixed partial derivative of the gray-level change in the x and y directions; points satisfying the corner response function are the corners.
An appropriate direction is added to each feature corner. Given the coordinates (x(i), y(i)) of the feature corner p(i) in the image, two points p(i−k) and p(i+k) are selected in its neighborhood, each at a distance of k points from p(i) along the contour, and T is the tangent at p(i). The principal direction of the feature corner p(i) is the angle θ_feature between the tangent T and the positive x-axis, computed as

$$\theta_{\text{feature}}=\arctan\frac{y(i+k)-y(i-k)}{x(i+k)-x(i-k)}$$

Determining a principal direction for each feature corner in this way makes it rotation invariant. The method works well for matching feature corners between the infrared and visible light images; these corners are hereinafter referred to simply as feature points.
The invention includes a shape descriptor generation algorithm. Let the feature point set be P = {p_1, p_2, ..., p_n}, p_i ∈ R². In an r × r neighborhood centered on a feature point p(i), a polar coordinate system is established with p(i) as the origin; 360° is divided equally into 12 sectors, and five concentric circles of increasing radius are drawn in turn, yielding 60 small regions. The number of feature points falling in each cell is counted and the shape histogram h_i of the point p_i is computed, defined as

$$h_i(k)=\#\{q\neq p_i:(q-p_i)\in \mathrm{bin}(k)\}$$

where # denotes the number of feature points in the k-th region (k = 1, 2, ..., 60). The shape histogram of each feature point is the shape context descriptor of that feature point.
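A minimal sketch of the 60-bin shape histogram, assuming NumPy and equally spaced ring radii (the exact radii are not reproduced in this text, so the spacing is an assumption):

```python
import numpy as np

def shape_context_histogram(points, center, radius, n_angle=12, n_radius=5):
    """Count, for the feature point `center`, how many other feature points fall in
    each of the 12 angular x 5 radial = 60 cells of its r x r neighbourhood."""
    pts = np.asarray(points, dtype=float)
    d = pts - np.asarray(center, dtype=float)
    dist = np.hypot(d[:, 0], d[:, 1])
    mask = (dist > 1e-9) & (dist <= radius)        # neighbours other than the centre itself
    d, dist = d[mask], dist[mask]

    angle = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    angle_bin = np.minimum((angle / (2 * np.pi) * n_angle).astype(int), n_angle - 1)
    radius_bin = np.minimum((dist / radius * n_radius).astype(int), n_radius - 1)

    hist = np.zeros((n_angle, n_radius), dtype=int)
    np.add.at(hist, (angle_bin, radius_bin), 1)
    return hist.ravel()                            # 60-dimensional descriptor h_i
```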
The invention includes a matching method for the feature point set. For each feature point, a fast approximate nearest-neighbor algorithm searches for its nearest-neighbor and next-nearest-neighbor feature points using the Euclidean distance, defined as

$$D=\sqrt{\sum_{i=0}^{59}\left(a_i-b_i\right)^{2}}$$

where a_i is the i-th element of the shape context descriptor R(a_0, a_1, ..., a_59) of an arbitrary feature point of the reference image, and b_i is the i-th element of the shape context descriptor I(b_0, b_1, ..., b_59) of an arbitrary feature point of the image to be registered. Specifically, if p is any feature point in the infrared image, let i and j be the nearest-neighbor and next-nearest-neighbor feature points to be registered with p, with Euclidean distances D_ip and D_jp to p respectively; the ratio D_ip/D_jp is computed and compared with a threshold, and when the ratio is smaller than the given value, p and i are considered a correctly matched pair of feature points, otherwise the match is rejected.
To further enhance the robustness of the algorithm, a random sample consensus (RANSAC) algorithm is selected to screen the matched feature points, remove erroneous points, and keep an optimal set. The algorithm substitutes the position parameters of all best-matching feature point pairs into an image-space projective transformation model and obtains the projective transformation of the image through a direct linear transformation algorithm; the registration parameters are the affine transformation between the infrared and visible light images. At this point the registration between the infrared image and the visible light image is complete.
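A minimal sketch of the ratio-test matching and RANSAC screening, assuming OpenCV; BFMatcher and findHomography are used here as stand-ins for the fast approximate nearest-neighbor search and the projective transformation model:

```python
import numpy as np
import cv2

def match_and_register(desc_ir, desc_vis, pts_ir, pts_vis, ratio=0.8):
    """Ratio-test matching plus RANSAC screening (illustrative sketch).
    desc_* are (n, 60) shape-context descriptors, pts_* the corner coordinates."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(np.float32(desc_ir), np.float32(desc_vis), k=2)

    good = []
    for pair in knn:
        # D_ip / D_jp ratio test between nearest and next-nearest neighbour
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return None, []

    src = np.float32([pts_ir[m.queryIdx] for m in good]).reshape(-1, 1, 2)
    dst = np.float32([pts_vis[m.trainIdx] for m in good]).reshape(-1, 1, 2)
    # RANSAC screens the matched pairs and keeps a consistent (inlier) set
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return None, []
    inliers = [g for g, keep in zip(good, inlier_mask.ravel()) if keep]
    return H, inliers
```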
The invention includes an image fusion process. In outline, the infrared image is first segmented into regions, separating its highlight region from its background region, and the visible light image is then mapped correspondingly according to the result of this segmentation. The infrared and visible light images are each decomposed with the dual-tree complex wavelet transform, yielding low-frequency and high-frequency information: the basic information of the image corresponds to the low-frequency part of the wavelet result and the detail information to the high-frequency part. The image segmentation result and the wavelet transform result are then combined. When low-frequency information is processed, the highlight and background regions are treated with different fusion strategies according to the differences in the information they carry and the task requirements. When high-frequency information is processed, which mainly reflects the detail features of the image, each region is assigned a weight according to the richness of its detail information, and the fusion strategy is designed from these weights. The specific steps of the process are as follows:
step 3.1: performing region segmentation on the registered infrared image and separating its suspected region from its background region; the suspected region is the high-brightness region of the infrared image where infrared radiation is strong;
step 3.2: respectively carrying out dual-tree complex wavelet transformation on the infrared image and the visible light image after registration to obtain low-frequency information and high-frequency information of the image, wherein the basic information of the image corresponds to the low-frequency information of a wavelet transformation result, and the detail information of the image corresponds to the high-frequency information of the wavelet transformation result;
step 3.3: and fusing the image segmentation result and the wavelet transformation result to respectively obtain a low-frequency fusion image and a high-frequency fusion image.
To select the highlight region of the infrared image, saliency-enhancement processing is first applied to the registered infrared image: the hot-target information is enhanced and the background information is blurred, increasing the contrast of the whole infrared image. The saliency-enhancement algorithm is mainly based on the image histogram. The saliency of pixel I_c in image I is defined as

$$S(I_c)=\sum_{I_i\in I}\mathrm{Dis}(I_c,I_i)$$

where Dis(I_c, I_i) = ||I_c − I_i|| is the color distance, representing the difference in color between I_c and I_i. The formula can be rewritten as

$$S(I_c)=S(a_c)=\sum_{j=1}^{N}f_j\,\mathrm{Dis}(a_c,a_j)$$

where a_c is the color (gray) value of pixel I_c, N is the total number of gray levels contained in the image, and f_j is the probability of a_j occurring in the image. Computing the saliency of every pixel yields the saliency map I_sal.
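A minimal sketch of this histogram-based saliency map, assuming an 8-bit grayscale image and NumPy:

```python
import numpy as np

def histogram_contrast_saliency(gray):
    """S(a_c) = sum_j f_j * |a_c - a_j| for an 8-bit grayscale image, evaluated once
    per gray level and mapped back to pixels to give the saliency map I_sal."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    f = hist / hist.sum()                          # probability f_j of each gray level a_j
    levels = np.arange(256, dtype=float)
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ f
    sal = sal_per_level[gray]                      # per-pixel saliency map I_sal
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```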
The invention includes a region segmentation algorithm. Each pixel in the image is represented by one of K Gaussian mixture model components, k ∈ {1, 2, ..., K}; a pixel corresponds either to the target Gaussian mixture model or to the background Gaussian mixture model. The Gibbs energy function of the region segmentation algorithm is:
E(α,k,θ,z)=U(α,k,θ,z)+V(α,z)
where z is a pixel value; α ∈ {0,1}, the pixel belonging to the background when α = 0 and to the target when α = 1; U is the region term and V is the boundary term. The region term U is computed from the Gaussian mixture model parameters

θ = {π(α,k), μ(α,k), Σ(α,k)}, α = 0,1, k = 1...K
The region term distinguishes pixels in the target region from pixels in the background region; once the parameter θ has been learned, the Gibbs region energy term is determined.
The boundary term V is computed as

$$V(\alpha,z)=\gamma\sum_{(m,n)\in C}\left[\alpha_n\neq\alpha_m\right]\exp\left(-\beta\left\|z_m-z_n\right\|^{2}\right)$$

where γ is an empirical constant obtained from training, C is the set of pairs of adjacent pixel points, and the indicator [α_n ≠ α_m] takes only the values 1 or 0: it equals 1 when α_n ≠ α_m and 0 when α_n = α_m.
β = (2⟨||z_m − z_n||²⟩)⁻¹, where ⟨·⟩ denotes the mathematical expectation over the sample; β corresponds to the contrast of the image and adapts the boundary term to high or low contrast. The image is segmented with a max-flow/min-cut algorithm; after segmentation the parameters of the Gaussian mixture models are re-optimized, the iteration is repeated several times, and image segmentation finishes when the energy function is minimized. When this segmentation method is applied in the invention, the saliency map I_sal described above is used as the initialization of the segmentation algorithm to calibrate the highlight region and background region of the image, and iterative segmentation based on the calibrated regions yields the segmentation result.
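A minimal sketch of this region segmentation initialized from the saliency map, assuming OpenCV; grabCut is used here as a stand-in for the GMM + Gibbs-energy + min-cut segmentation described above, and the saliency threshold is an assumed parameter:

```python
import numpy as np
import cv2

def segment_highlight_region(infrared, sal_map, sal_thresh=0.6, iters=5):
    """Initialise a GMM-based min-cut segmentation from the saliency map I_sal and
    return a binary mask of the suspected (highlight) region."""
    mask = np.full(infrared.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)   # probable background
    mask[sal_map >= sal_thresh] = cv2.GC_PR_FGD                         # probable target

    bgd_model = np.zeros((1, 65), dtype=np.float64)                     # GMM parameter buffers
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    img = cv2.cvtColor(infrared, cv2.COLOR_GRAY2BGR) if infrared.ndim == 2 else infrared

    cv2.grabCut(img, mask, None, bgd_model, fgd_model, iters, cv2.GC_INIT_WITH_MASK)
    target = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)  # 1 = suspected region
    return target
```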
The invention includes an image fusion rule based on the dual-tree complex wavelet transform (DT-CWT); the dual-tree complex wavelet function is defined as

$$\psi(x)=\psi_h(x)+j\psi_g(x)$$

where ψ_h(x) and ψ_g(x) are both real wavelets. After the two-dimensional DT-CWT, the image decomposition yields two low-frequency wavelet coefficients and high-frequency coefficients in six directions (±15°, ±45°, ±75°).
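A minimal decomposition sketch, assuming PyWavelets; a standard 2-D discrete wavelet transform is used here only as a stand-in to illustrate the low-/high-frequency split, not the dual-tree complex wavelet transform itself:

```python
import pywt

def wavelet_decompose(image, levels=3, wavelet="db4"):
    """Split an image into its approximation (low-frequency, basic information) and
    detail (high-frequency) subbands with a standard 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    lowpass = coeffs[0]            # approximation = low-frequency information
    highpass = coeffs[1:]          # per-level (cH, cV, cD) detail = high-frequency information
    return lowpass, highpass
```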
For the low-frequency part, the infrared image is first divided by the region segmentation method described above into a suspected region and a background (excluded) region and the position information is recorded; the visible light image is then divided according to the same position information, and if the resolutions differ, the position information is normalized before being applied to the coefficients.
According to the position information of the suspected region and the background region obtained from the infrared image, the visible light image is divided using the same position information. For the suspected region of the low-frequency part of the infrared and visible light images, a dedicated rule determines the fused low-frequency coefficient c_F^l of layer l from the infrared low-frequency coefficient c_ir^l and the visible-light low-frequency coefficient c_vis^l of that layer.
For the background region of the low-frequency part of the infrared and visible light images, a regional variance method is adopted: the larger the regional variance, the larger the change of the gray values of the pixels in the region, the higher the contrast of the region, and the more information the region can be considered to carry. Pixels in regions with a large variance are therefore given more weight in image fusion, with the rule

$$c_F^{l}=\omega_{ir}\,c_{ir}^{l}+\omega_{vis}\,c_{vis}^{l}$$
where ω_ir is the infrared image weight and ω_vis is the visible light image weight. The weights are computed as follows:

ω_ir = 1 − ω_vis;

where σ_vis and σ_ir are the regional variances of the visible light image and the infrared image respectively, and r is the regional correlation coefficient. The regional variances σ_vis and σ_ir are computed as

$$\sigma_{vis}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{vis}(i,j)-\bar{I}_{vis}\right)^{2},\qquad \sigma_{ir}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{ir}(i,j)-\bar{I}_{ir}\right)^{2}$$

and the correlation coefficient r is computed as

$$r=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{ir}(i,j)-\bar{I}_{ir}\right)\left(I_{vis}(i,j)-\bar{I}_{vis}\right)}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{ir}(i,j)-\bar{I}_{ir}\right)^{2}\;\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_{vis}(i,j)-\bar{I}_{vis}\right)^{2}}}$$

where the image size is M × N, Ī_vis is the mean gray value of the visible light image, Ī_ir is the mean gray value of the infrared image, I_ir(i,j) is the infrared image, and I_vis(i,j) is the visible light image.
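A minimal sketch of the regional statistics used for the background-region weighting, assuming NumPy; since the exact formula for ω_vis in terms of σ_vis, σ_ir, and r is not reproduced in this text, a simple variance-proportional weight is assumed here for illustration only:

```python
import numpy as np

def low_frequency_weights(lf_vis, lf_ir):
    """Compute regional variances, the regional correlation coefficient, and a pair of
    fusion weights (w_ir = 1 - w_vis) for the background region of the low-frequency subbands."""
    m_vis, m_ir = lf_vis.mean(), lf_ir.mean()
    sigma_vis = np.mean((lf_vis - m_vis) ** 2)               # regional variance, visible
    sigma_ir = np.mean((lf_ir - m_ir) ** 2)                  # regional variance, infrared
    r = np.sum((lf_ir - m_ir) * (lf_vis - m_vis)) / (
        np.sqrt(np.sum((lf_ir - m_ir) ** 2) * np.sum((lf_vis - m_vis) ** 2)) + 1e-12
    )                                                        # regional correlation coefficient
    w_vis = sigma_vis / (sigma_vis + sigma_ir + 1e-12)       # assumed weighting form
    w_ir = 1.0 - w_vis
    return w_ir, w_vis, r

# usage (background region only): fused_lf = w_ir * lf_ir + w_vis * lf_vis
```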
For the high-frequency part, the image is divided into n regions by the segmentation method, denoted A = {a_1, a_2, ..., a_n}. A region weight is set for each region, and the invention gives the following weighted high-frequency fusion rule.
A value C_{l,θ} > 1 is set to amplify the high-frequency coefficients in order to highlight the contrast of detailed parts of the image. Since this would also amplify noise in the image, a binary matrix M_{l,θ} is added: M_{l,θ} = 1 where the coefficient condition is met, and isolated points with value 1 are removed, so that only connected patches of high-frequency coefficient pixels are amplified and noise is removed; a shrinkage function is used to further reduce the effect of noise on the high-frequency information. In the actual fusion process, concave-convex changes of edges may distort the fusion result, so a unit vector is computed from the high-frequency coefficients obtained by the DT-CWT of the infrared and visible light images and the original high-frequency coefficients are improved; the high-frequency coefficient of the fused image is then rewritten as follows:
where the fusion rule f takes, for each region r_i, the image (infrared or visible) whose region weight is larger as the reference and uses the mean of its high-frequency coefficients as the corresponding high-frequency coefficient of the fused image. S_{l,θ} is obtained by dilating the segmentation result of the highlight region of the image and then applying 2-D mean filtering; its purpose is to make the detail information of each small region of the fused image more evident. The region weight is defined as follows:
where H_{l,θ}(x, y) is the high-frequency coefficient, l is the decomposition level, θ is the directional subband, and |r_i^θ| is the size of the region r_i^θ.
The invention includes a method for extracting ground-object features at the level of the hyperspectral image source and fusing them into the main image, as shown in FIG. 3.
First, hyperspectral remote sensing is generally taken to mean remote sensing with a spectral resolution on the order of 10⁻²λ. A hyperspectral image has many bands, each with a narrow spectral range, and a continuous spectrum; a single pixel contains dozens or even hundreds of bands, each spanning less than 10 nm. The remote sensing information is therefore analyzed in the spectral dimension: the reflectance spectra of different ground objects are analyzed and calibrated, an information base is established, and the spectral data of the target are matched and identified against this base, so that labels are attached to image targets and ground-object recognition is achieved.
Ground objects are distinguished from the frequency-domain point of view: the complete spectrum corresponding to each pixel of the hyperspectral image is treated as a sequence signal, and the images of the test area are classified with a spectral-similarity classification method (FSSM). Because hyperspectral data are discrete, they can be analyzed with the discrete Fourier transform (DFT); the DFT compresses the signal and suppresses noise and the Hughes phenomenon to a large extent, and the resulting signal spectrum effectively extracts the frequency content of the main peaks and troughs at different wavelength positions of the spectral curves of different objects while retaining the effective information of the spectral curves.
First, a one-dimensional discrete Fourier transform converts the spectral signal into the frequency domain to obtain its frequency spectrum. Treating the spectral sequence of each pixel in the hyperspectral image as a one-dimensional discrete signal f(n), the DFT is defined as

$$F(k)=\sum_{n=0}^{N-1}f(n)\,e^{-j2\pi kn/N},\qquad k=0,1,\ldots,N-1$$

with

$$|F(k)|=\sqrt{R^{2}(k)+I^{2}(k)},\qquad P(k)=R^{2}(k)+I^{2}(k),\qquad F_{\text{phase}}=\arctan\left(I(k)/R(k)\right)$$

where |F(k)|, P(k), and F_phase are respectively the amplitude spectrum, energy spectrum, and phase spectrum of the pixel's spectral sequence; R(k) and I(k) are the real and imaginary parts of F(k); k is the index of the DFT; N is the length of the discrete sampled data; n is the discrete sample index, i.e., the band number of the hyperspectral data; and f(n) is the reflectance value of the pixel in each band, i.e., the ground spectral reflectance.
Then the difference between the target spectrum and the reference spectrum is calculated using the Laplace distance, which measures the similarity of the spectra for classification; the calculation formula is as follows:
where F_tar(i) and F_ref(i) are the frequency spectra of the target and reference spectral curves respectively, and N_s is the number of lower-order harmonics taking part in the calculation. The reference spectrum can be a laboratory spectrum, a field-measured spectrum, or a pixel spectrum extracted from the image. When a field-measured spectrum is used as the reference, the remote sensing image must first be atmospherically corrected to eliminate the influence of the atmosphere on the spectrum.
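A minimal sketch of this frequency-domain spectral comparison, assuming NumPy; a Euclidean distance between the low-order amplitude spectra is assumed here, since the exact distance formula is not reproduced in this text:

```python
import numpy as np

def spectral_similarity(pixel_spectrum, ref_spectrum, n_harmonics=20):
    """Take the DFT of the target and reference spectral curves and compare their
    first N_s low-order harmonics; a smaller value means more similar spectra."""
    f_tar = np.fft.fft(np.asarray(pixel_spectrum, dtype=float))
    f_ref = np.fft.fft(np.asarray(ref_spectrum, dtype=float))
    a_tar = np.abs(f_tar[:n_harmonics])        # amplitude spectrum |F(k)| of the target
    a_ref = np.abs(f_ref[:n_harmonics])        # amplitude spectrum |F(k)| of the reference
    return np.sqrt(np.sum((a_tar - a_ref) ** 2))

# A pixel is assigned the ground-object class of the reference spectrum giving the
# smallest distance.
```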
The regions segmented during the first-layer fusion are considered, keeping the method simple and efficient so as to minimize the algorithm's demands on hardware computing and storage resources. Man-made targets differ in material from natural objects such as water, rock, and plants, and in a target scene the natural objects form the background. Therefore only the natural background identified in the hyperspectral image needs to be weakened in the first-layer image by a weighting method, reducing the influence of non-target information to a minimum. The image registration method is the same as in the first-layer fusion.
The invention includes a moving object detection and tracking algorithm, as shown in FIG. 4.
Target detection function:
First, the fused image is mean-filtered to remove noise. A window matrix is scanned over the two-dimensional image pixel by pixel, and the value at the centre of the window is replaced by the average of the values of the points inside the window, which can be expressed as
g(x, y) = (1/M) Σ_{(i, j)∈S} f(i, j);
wherein f(x, y) is the second-layer fused image to be processed; g(x, y) is the second-layer fused image after filtering; S is the set of neighbourhood coordinate points centred on the point (x, y), and M is the total number of coordinates in the set.
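A minimal sketch of this mean filter, assuming a square window and edge padding (both assumptions):

```python
import numpy as np

def mean_filter(img, size=3):
    """Replace each pixel by the average of the size x size window around it."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)               # M = size*size points in the window S
```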
The grey-level image is then converted into a binary image with a moving-average thresholding method. The basic idea is to compute a moving average along the scan lines of the image. The scan is performed line by line in a zigzag pattern, which reduces illumination bias. Let z_{k+1} denote the grey value of the point encountered at step k + 1 of the scan. The moving-average grey level at this new point is given by:
m(k + 1) = m(k) + (z_{k+1} − z_{k+1−n})/n;
where n is the number of points used in computing the average grey level, and the initial value is m(1) = z_1/n. The moving average is computed for every point in the image, and the segmentation is then performed with:
g(x, y) = 1 if f(x, y) > K·m_xy, otherwise 0;
where K is a constant in the range [0, 1] and m_xy is the moving average of the input image at (x, y).
Typically n is taken as 5 times the target width and K as 0.5. This way of choosing the threshold effectively avoids the influence of uneven illumination and shadow on the binarization and helps to extract the target image.
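A sketch of this moving-average binarization, assuming the zigzag scan and the update m(k+1) = m(k) + (z_{k+1} − z_{k+1−n})/n given above; the default n and K are placeholder values:

```python
import numpy as np

def moving_average_threshold(img, n=20, K=0.5):
    """Binarize img by comparing each pixel with K times a running average
    computed along a zigzag (boustrophedon) scan of the rows."""
    h, w = img.shape
    scan = img.astype(float).copy()
    scan[1::2] = scan[1::2, ::-1]             # reverse odd rows -> zigzag order
    z = scan.ravel()
    m = np.zeros_like(z)
    m[0] = z[0] / n
    for k in range(1, z.size):
        z_old = z[k - n] if k >= n else 0.0   # value leaving the n-point window
        m[k] = m[k - 1] + (z[k] - z_old) / n
    out = (z > K * m).astype(np.uint8).reshape(h, w)
    out[1::2] = out[1::2, ::-1]               # undo the zigzag reordering
    return out
```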
Taking into account the hardware computing power and the allowable single-frame processing time, the image can be scaled down appropriately to increase the computation speed. Only images that are too large are reduced, so magnification distortion is not an issue; simplicity is the main requirement, and nearest-neighbour interpolation is therefore selected in the invention.
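A sketch of nearest-neighbour down-scaling; the function name and the index-clipping scheme are assumptions:

```python
import numpy as np

def nearest_neighbor_resize(img, scale):
    """Shrink img by a factor 0 < scale <= 1 using nearest-neighbour sampling."""
    h, w = img.shape[:2]
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return img[rows[:, None], cols]
```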
Small regions unrelated to the target are then deleted: any region in the binary image whose area is clearly smaller than that of the target is removed, so that the interference of irrelevant information is eliminated.
The image is then processed with image morphology so that the most essential shape features of the target remain after processing. Two basic morphological operations are used. The erosion operation is defined as:
A ⊖ S = { z | (S)_z ⊆ A };
wherein A ⊖ S denotes the erosion of A by S. Concretely, the structuring element S is moved over the image plane of A; if S is completely contained in A when its origin is translated to a point z, then the set of all such points z is the erosion of A by S. Erosion shrinks object boundaries and breaks thin connections in the image target.
The dilation operation is defined as:
A ⊕ S = { z | (Ŝ)_z ∩ A ≠ ∅ };
wherein A ⊕ S denotes the dilation of A by S. The structuring element S is moved over the image plane of A; if the reflection Ŝ of S about its own origin, translated to a point z, has at least one pixel in common with A, then the set of all such points z is the dilation of A by S. Dilation expands object boundaries and can bridge broken gaps.
The opening operation performs erosion and then dilation of the image A with the structuring element S, expressed as:
A ∘ S = (A ⊖ S) ⊕ S;
The opening operation removes small objects, separates objects at thin connections, and smooths the boundaries of large objects without changing their area appreciably.
The closing operation performs dilation and then erosion of the image A with the structuring element S, expressed as:
A • S = (A ⊕ S) ⊖ S;
The closing operation fills small holes inside an object, connects neighbouring objects, and smooths their boundaries without significantly changing the object's area or shape.
A suitable morphological operation is selected according to the actual situation, so that the suspected targets are finally found.
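As an illustration, the opening and closing operations can be applied with SciPy's standard binary morphology; the toy mask and the 3 × 3 structuring element are assumptions:

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((60, 60), dtype=bool)
mask[20:40, 20:40] = True                     # a 20x20 "target"
mask[5, 5] = True                             # isolated speck of noise
mask[30, 45:50] = True                        # thin stray line

S = np.ones((3, 3), dtype=bool)               # structuring element S

opened = ndimage.binary_opening(mask, structure=S)    # (A erode S) dilate S: removes small objects
closed = ndimage.binary_closing(opened, structure=S)  # (A dilate S) erode S: fills small holes

print(opened.sum(), closed.sum())             # the 20x20 target survives, the noise does not
```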
A cropping function is then established to cut each target out of the full image and obtain the target images to be checked. After the preceding processing, the background of image I has the black value 0 and the target parts to be checked have the white value 1. Starting from coordinate (0, 0) of the image, the first point with pixel value 1 is found; starting from that point, all points with pixel value 1 connected to it are found and collected into a set T1. The maximum and minimum abscissae x1max and x1min and the maximum and minimum ordinates y1max and y1min of the points in T1 are then found, and the target image to be checked is cropped as x1min < x < x1max, y1min < y < y1max. Proceeding in the same way, all targets to be checked are found and their images obtained.
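A sketch of such a cropping function, assuming SciPy's connected-component labelling in place of the explicit point-by-point search from (0, 0):

```python
import numpy as np
from scipy import ndimage

def crop_candidates(binary):
    """Return a sub-image for every connected white region (value 1) in binary."""
    labels, count = ndimage.label(binary)
    crops = []
    for t in range(1, count + 1):
        ys, xs = np.nonzero(labels == t)      # the point set T_t
        y_min, y_max = ys.min(), ys.max()
        x_min, x_max = xs.min(), xs.max()
        crops.append(binary[y_min:y_max + 1, x_min:x_max + 1])
    return crops
```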
Since the artificial targets to be identified are generally symmetric, principal component analysis is used to find the principal symmetry axis of each image to be checked and the angle θ between that axis and the x-axis. Principal component analysis finds the direction of maximum spread in N-dimensional data. The coordinates of the points in a target image to be checked are two-dimensional; they are assembled into an n × 2 matrix X, where n is the number of points in the target image. The covariance matrix C of X is computed, its eigenvector V = (x_v, y_v) is obtained, and the angle θ between the principal symmetry axis of the target image and the x-axis is:
θ = arctan(y_v / x_v).
Then image orientation normalization is performed: the image is rotated by the angle θ and the newly generated black borders are removed, i.e. the image is cropped again.
Then image size normalization is performed to bring the image to the template size. A template library is established, and the template size is specified as M × N.
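A combined sketch of these steps — PCA axis, rotation and size normalization — under the assumption that SciPy's rotate/zoom are acceptable stand-ins and that the template size is 32 × 32:

```python
import numpy as np
from scipy import ndimage

def normalize_candidate(binary_crop, template_shape=(32, 32)):
    """Rotate a candidate so its principal axis aligns with x, then resize to the template size."""
    ys, xs = np.nonzero(binary_crop)
    X = np.stack([xs, ys], axis=1).astype(float)      # n x 2 coordinate matrix
    C = np.cov(X, rowvar=False)                       # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(C)
    xv, yv = vecs[:, np.argmax(vals)]                 # principal eigenvector V = (xv, yv)
    theta = np.degrees(np.arctan2(yv, xv))            # angle between the principal axis and x
    # sign/axis convention may need flipping depending on the image coordinate system
    rotated = ndimage.rotate(binary_crop.astype(float), theta, reshape=True, order=0)
    ys, xs = np.nonzero(rotated > 0.5)                # re-crop to remove the new black border
    rotated = rotated[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    zoom = (template_shape[0] / rotated.shape[0], template_shape[1] / rotated.shape[1])
    return (ndimage.zoom(rotated, zoom, order=0) > 0.5).astype(np.uint8)
```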
Establishing a target decision function:
The processed images to be checked are matched one by one against the images in the template library, and a similarity threshold T is set. When the similarity exceeds this threshold, the image is identified as a target. The specific matching steps are as follows (a short sketch is given after the three steps):
1) Set a similarity threshold T.
2) Let A be the image to be checked and B the template image, and compute H as follows: judge whether the pixel value A(x, y) of the image to be checked equals the pixel value B(x, y) of the template image; if they are equal, H = H + 1; if not, move on to the next point, with x ∈ (0, M) and y ∈ (0, N). The final value of H is thus obtained.
3) If H > T, it is judged that a target has been found; if H < T, the next template is tried and step 2) is repeated; when all templates have been exhausted, it is judged that no target has been found.
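A sketch of this matching loop, assuming the candidate and all templates have already been normalized to the same M × N size:

```python
import numpy as np

def match_templates(candidate, templates, T):
    """Count pixel agreements H against each template; declare a target when H > T."""
    for name, tmpl in templates.items():
        H = int(np.sum(candidate == tmpl))    # H incremented for every equal pixel
        if H > T:
            return name                       # target found
    return None                               # all templates exhausted, no target found
```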
Target perception:
The target position, i.e. its coordinates in the image, is obtained by applying the target detection function.
(1) The longitude and latitude of the target position are determined from the position of the payload.
(2) The attitude adjustment angle is determined from the position coordinates of the target in the image, so that the target is kept in the central area of the image.
These two steps achieve target perception.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (7)
1. A hierarchical fusion and extraction method for multi-source detection of moving targets, characterized in that it comprises the following steps:
step 1: reading an image input by a multi-source image sensor;
step 2: carrying out image registration on the visible light image and the infrared light image, and fusing the registered images to obtain a first-layer fused image;
step 3: carrying out image registration on the first layer of fused image and the hyperspectral image, and weakening the registered image pixels according to the ground feature classification area to obtain a second layer of fused image;
step 4: detecting the target of the second layer fused image to obtain the position information of the target in the image, sensing the target to obtain the longitude and latitude of the target in the real environment, adjusting the attitude of the aircraft to track the target, and realizing continuous detection and sensing of the target.
2. The hierarchical fusion and extraction method for multi-source detection of the moving object according to claim 1, characterized in that: the image registration method in step 2 and step 3 specifically comprises the following steps:
step 2.1: extracting an edge contour of the image to obtain an edge contour image of the original image;
extracting the contour of the image by using a phase consistency algorithm, wherein the phase consistency function is:
PC(x) = max_{φ̄(x)} Σ_n A_n·cos(φ_n(x) − φ̄(x)) / Σ_n A_n;
wherein A_n is the amplitude on scale n; φ_n(x) is the phase of the nth Fourier component at x; and φ̄(x) is the amplitude-weighted mean of the local phase angles of the Fourier components when PC(x) takes its maximum value at x;
step 2.2: establishing a characteristic corner point with scale, position and direction information in the edge contour image, wherein the specific method comprises the following steps:
step 2.2.1: constructing a nonlinear scale space, so that the characteristic angular points have scale information;
carrying out Gaussian filtering on the edge contour image to obtain the image grey-level histogram and the contrast factor k; a set of computation (evolution) times is then determined, and all information layers of the nonlinear filtered image are obtained with an additive operator splitting algorithm:
L^(i+1) = (E − (t_{i+1} − t_i)·Σ_l A_l(L^i))^(−1)·L^i;
wherein A_l is the conduction matrix of the image I in dimension l; t_i is the computation time, and only one set of computation times is used each time the nonlinear scale space is constructed; E is the identity matrix;
step 2.2.2: detecting characteristic angular points to obtain characteristic angular point position information;
moving a local window point by point in an edge contour image of a nonlinear scale space, and calculating pixel values in the window to judge whether the window is an angular point;
step 2.2.3: calculating direction information of characteristic angular points
The coordinates of the feature corner p(i) in the image are (x(i), y(i)). Two points p(i − k) and p(i + k), each at a distance k from p(i), are selected in its neighbourhood; T is the tangent at p(i), and the principal direction of the feature corner p(i) is the angle θ_feature between the tangent T and the positive x-axis, calculated as follows:
step 2.3: establishing a shape description matrix;
let the feature point set be P = {p_1, p_2, …, p_n}, p_i ∈ R²; in an r × r neighbourhood centred on a feature point p(i), a polar coordinate system with p(i) as origin is established; 360° is divided equally into 12 sectors, and five concentric circles are drawn in order of increasing radius, giving 60 small regions; the number of feature points falling in each cell is counted and the shape histogram h_i of the point p_i is computed, the shape histogram h_i of each feature point being its shape context descriptor; the shape histogram h_i of each feature point is computed as:
h_i(k) = #{ q ≠ p_i : (q − p_i) ∈ bin(k) }
wherein # denotes the number of feature points falling in the kth statistical region, k = 1, 2, …, 60;
step 2.4: matching the characteristic angular points of the two images to complete image registration;
searching for the nearest-neighbour and second-nearest-neighbour feature points using the Euclidean distance:
d(a, b) = √( Σ_{i=0}^{59} (a_i − b_i)² );
wherein a_i is the ith element of the shape context descriptor R(a_0, a_1, …, a_59) of an arbitrary feature point of the reference image, and b_i is the ith element of the shape context descriptor I(b_0, b_1, …, b_59) of an arbitrary feature point of the image to be registered;
if p is any feature point in one image, let i and j be the nearest-neighbour and second-nearest-neighbour feature points to be registered with p, whose Euclidean distances to p are D_ip and D_jp respectively; the ratio D_ip/D_jp is computed and compared with a threshold; when this ratio is smaller than the threshold, p and i are considered a correctly paired set of feature points, otherwise the matching fails; a sketch of this matching rule is given after this claim.
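Purely as an illustration of the descriptor and the ratio test described in the claim above (not the patented implementation), a sketch assuming 60-bin histograms and a placeholder ratio threshold:

```python
import numpy as np

def shape_context(points, center, r, n_angles=12, n_rings=5):
    """60-bin shape histogram of `points` around `center` within radius r."""
    d = points - center
    dist = np.hypot(d[:, 0], d[:, 1])
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    keep = (dist > 0) & (dist <= r)
    ring = np.minimum((dist[keep] / r * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((ang[keep] / (2 * np.pi) * n_angles).astype(int), n_angles - 1)
    hist = np.zeros(n_angles * n_rings)
    np.add.at(hist, sector * n_rings + ring, 1)   # h_i(k) = #{q != p_i : (q - p_i) in bin(k)}
    return hist

def ratio_match(desc_p, candidate_descs, ratio_thresh=0.8):
    """Nearest/next-nearest Euclidean distance ratio test; needs >= 2 candidates."""
    dists = np.linalg.norm(candidate_descs - desc_p, axis=1)
    i, j = np.argsort(dists)[:2]
    return int(i) if dists[i] / (dists[j] + 1e-12) < ratio_thresh else None
```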
3. The hierarchical fusion and extraction method for multi-source detection of the moving object according to claim 1 or 2, characterized in that: the method for fusing the registered visible light image and infrared light image in the step 2 specifically comprises the following steps:
step 3.1: performing region segmentation on the registered infrared image, and separating the suspected region and the background region of the infrared image; the suspected region is a high-brightness region where the infrared radiation is strong;
step 3.2: respectively carrying out dual-tree complex wavelet transformation on the infrared image and the visible light image after registration to obtain low-frequency information and high-frequency information of the image, wherein the basic information of the image corresponds to the low-frequency information of a wavelet transformation result, and the detail information of the image corresponds to the high-frequency information of the wavelet transformation result;
step 3.3: fusing the result of image segmentation and the result of wavelet transformation to respectively obtain a low-frequency fused image and a high-frequency fused image;
step 3.4: and performing dual-tree complex wavelet inverse transformation on the low-frequency fusion image and the high-frequency fusion image to obtain a first fusion image.
4. The hierarchical fusion and extraction method for multi-source detection of the moving object according to claim 1 or 2, characterized in that: the method for detecting the target of the second-layer fusion image in the step 4 to obtain the position information of the target in the image specifically comprises the following steps:
step 4.1: filtering the second layer fused image;
establishing a window matrix to scan pixels one by one on the two-dimensional image, wherein the value of the central position of the matrix is replaced by the average value of all point values in the window matrix, and the expression is as follows:
wherein: f(x, y) is the second layer fused image to be processed; g(x, y) is the second layer fused image after filtering processing; S is the set of neighbourhood coordinate points centred on the point (x, y), and M is the total number of coordinates in the set;
step 4.2: processing the second-layer fusion image after filtering by using a moving average image threshold method to obtain a binary image;
z_{k+1} denotes the point encountered at step k + 1 in the scan order; the moving-average grey level at this new point is:
m(k + 1) = m(k) + (z_{k+1} − z_{k+1−n})/n;
wherein n is the number of points used in computing the average grey level, and the initial value is m(1) = z_1/n;
the moving average is calculated for each point in the image, and the segmentation is then performed using the following equation:
g(x, y) = 1 if f(x, y) > K·m_xy, otherwise 0;
wherein K is a constant in the range [0, 1] and m_xy is the moving average of the input image at (x, y);
step 4.3: deleting images with the area smaller than that of the target from the binary image, and removing interference of irrelevant information;
step 4.4: processing the binary image without the interference of the irrelevant information by using image morphology;
step 4.5: establishing a cutting function, and cutting the target from the full image after the image morphology processing to obtain a target image to be checked;
the background part in the image I after the morphological processing has the black value 0, and the target parts to be checked have the white value 1; starting from coordinate (0, 0) of the image, the first point with pixel value 1 is found; starting from that point, all points with pixel value 1 connected to it are found and collected into a set T1; the maximum and minimum abscissae x1max and x1min and the maximum and minimum ordinates y1max and y1min of the points in T1 are found, and the target image to be checked is cropped as x1min < x < x1max, y1min < y < y1max; proceeding in the same way, all targets to be checked are found and the images of all targets to be checked are obtained;
step 4.6: finding the principal symmetry axis of the target image to be checked by principal component analysis, and obtaining the angle θ between the principal symmetry axis of the target image to be checked and the x-axis;
the coordinates of each point in the target image to be checked are two-dimensional; the points are assembled into an n × 2 matrix X, wherein n is the number of points in the target image to be checked; the covariance matrix C of X is computed, the eigenvector V = (x_v, y_v) of the covariance matrix C is then computed, and the angle θ between the principal symmetry axis of the target image to be checked and the x-axis is:
θ = arctan(y_v / x_v);
step 4.7: performing image orientation normalization, rotating the target image to be checked by the angle θ, and removing the newly generated black borders;
step 4.8: carrying out image size normalization processing, and changing the image size of the target image to be checked after the direction normalization processing into the size of a template;
step 4.9: and matching the target image to be checked after the direction normalization and the size normalization with the images in the template library one by one, setting a similarity threshold T, and identifying the image as a target when the similarity degree exceeds the threshold.
5. The hierarchical fusion and extraction method for multi-source detection of moving targets according to claim 3, characterized in that: the method for detecting the target of the second-layer fusion image in the step 4 to obtain the position information of the target in the image specifically comprises the following steps:
step 4.1: filtering the second layer fused image;
establishing a window matrix to scan pixels one by one on the two-dimensional image, wherein the value of the central position of the matrix is replaced by the average value of all point values in the window matrix, and the expression is as follows:
wherein: f(x, y) is the second layer fused image to be processed; g(x, y) is the second layer fused image after filtering processing; S is the set of neighbourhood coordinate points centred on the point (x, y), and M is the total number of coordinates in the set;
step 4.2: processing the second-layer fusion image after filtering by using a moving average image threshold method to obtain a binary image;
z_{k+1} denotes the point encountered at step k + 1 in the scan order; the moving-average grey level at this new point is:
m(k + 1) = m(k) + (z_{k+1} − z_{k+1−n})/n;
wherein n is the number of points used in computing the average grey level, and the initial value is m(1) = z_1/n;
the moving average is calculated for each point in the image, and the segmentation is then performed using the following equation:
g(x, y) = 1 if f(x, y) > K·m_xy, otherwise 0;
wherein K is a constant in the range [0, 1] and m_xy is the moving average of the input image at (x, y);
step 4.3: deleting images with the area smaller than that of the target from the binary image, and removing interference of irrelevant information;
step 4.4: processing the binary image without the interference of the irrelevant information by using image morphology;
step 4.5: establishing a cutting function, and cutting the target from the full image after the image morphology processing to obtain a target image to be checked;
the background part in the image I after the morphological processing has the black value 0, and the target parts to be checked have the white value 1; starting from coordinate (0, 0) of the image, the first point with pixel value 1 is found; starting from that point, all points with pixel value 1 connected to it are found and collected into a set T1; the maximum and minimum abscissae x1max and x1min and the maximum and minimum ordinates y1max and y1min of the points in T1 are found, and the target image to be checked is cropped as x1min < x < x1max, y1min < y < y1max; proceeding in the same way, all targets to be checked are found and the images of all targets to be checked are obtained;
step 4.6: finding the principal symmetry axis of the target image to be checked by principal component analysis, and obtaining the angle θ between the principal symmetry axis of the target image to be checked and the x-axis;
the coordinates of each point in the target image to be checked are two-dimensional; the points are assembled into an n × 2 matrix X, wherein n is the number of points in the target image to be checked; the covariance matrix C of X is computed, the eigenvector V = (x_v, y_v) of the covariance matrix C is then computed, and the angle θ between the principal symmetry axis of the target image to be checked and the x-axis is:
θ = arctan(y_v / x_v);
step 4.7: performing image orientation normalization, rotating the target image to be checked by the angle θ, and removing the newly generated black borders;
step 4.8: carrying out image size normalization processing, and changing the image size of the target image to be checked after the direction normalization processing into the size of a template;
step 4.9: and matching the target image to be checked after the direction normalization and the size normalization with the images in the template library one by one, setting a similarity threshold T, and identifying the image as a target when the similarity degree exceeds the threshold.
6. The hierarchical fusion and extraction method for multi-source detection of moving targets according to claim 3, characterized in that: the method for obtaining the low-frequency fusion image in the step 3.3 comprises the following steps: according to the position information of the infrared image divided into the suspected area and the background area, the visible light image is divided according to the same position information; for the suspected area of the low-frequency part of the infrared image and the visible light image, the following rule is adopted:
wherein the three coefficients are, respectively, the fused low-frequency coefficient of the lth layer, the low-frequency coefficient of the infrared image of the lth layer, and the low-frequency coefficient of the visible light image of the lth layer;
for a background area of the low-frequency part of the infrared image and the visible light image, adopting an area variance method, wherein the larger the area variance is, the larger the change of the gray value corresponding to each pixel in the area is, the higher the contrast of the pixel in the area is, and the more the information corresponding to the area is; and adding weight to the pixel points with large regional variance values in image fusion, wherein the rule is as follows:
wherein ω_ir is the infrared image weight and ω_vis is the visible light image weight; the infrared image weight ω_ir and the visible light image weight ω_vis are calculated as follows:
ω_ir = 1 − ω_vis;
wherein σ_vis and σ_ir are the regional variances of the visible light image and the infrared image respectively, and r is the regional correlation coefficient; the regional variance σ_vis of the visible light image and the regional variance σ_ir of the infrared image are calculated as follows:
the calculation method of the correlation coefficient r comprises the following steps:
in which the size of the image is M × N, the two mean values are respectively the average grey value of the visible light image and the average grey value of the infrared image, I_ir(i, j) denotes the infrared image, and I_vis(i, j) denotes the visible light image; a sketch of this weighting rule is given after this claim.
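As an illustration of the background-region weighting described in the claim above, a sketch in which the exact ω_vis formula (not reproduced in the text) is replaced by an assumed variance-proportional rule:

```python
import numpy as np

def region_weights(ir_patch, vis_patch):
    """Regional variances, correlation coefficient and fusion weights for one background region."""
    ir = ir_patch.astype(float)
    vis = vis_patch.astype(float)
    sigma_ir = ir.var()
    sigma_vis = vis.var()
    r = np.corrcoef(ir.ravel(), vis.ravel())[0, 1]        # regional correlation coefficient
    w_vis = sigma_vis / (sigma_vis + sigma_ir + 1e-12)    # assumed variance-proportional rule
    w_ir = 1.0 - w_vis                                     # omega_ir = 1 - omega_vis
    return w_ir, w_vis, r

def fuse_background(ir_low, vis_low):
    """Weighted combination of the low-frequency coefficients of one background region."""
    w_ir, w_vis, _ = region_weights(ir_low, vis_low)
    return w_ir * ir_low + w_vis * vis_low
```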
7. The hierarchical fusion and extraction method for multi-source detection of moving targets according to claim 5, characterized in that: the method for obtaining the low-frequency fusion image in the step 3.3 comprises the following steps: according to the position information of the infrared image divided into the suspected area and the background area, the visible light image is divided according to the same position information; for the suspected area of the low-frequency part of the infrared image and the visible light image, the following rule is adopted:
wherein the three coefficients are, respectively, the fused low-frequency coefficient of the lth layer, the low-frequency coefficient of the infrared image of the lth layer, and the low-frequency coefficient of the visible light image of the lth layer;
for a background area of the low-frequency part of the infrared image and the visible light image, adopting an area variance method, wherein the larger the area variance is, the larger the change of the gray value corresponding to each pixel in the area is, the higher the contrast of the pixel in the area is, and the more the information corresponding to the area is; and adding weight to the pixel points with large regional variance values in image fusion, wherein the rule is as follows:
wherein ω_ir is the infrared image weight and ω_vis is the visible light image weight; the infrared image weight ω_ir and the visible light image weight ω_vis are calculated as follows:
ω_ir = 1 − ω_vis;
wherein σ_vis and σ_ir are the regional variances of the visible light image and the infrared image respectively, and r is the regional correlation coefficient; the regional variance σ_vis of the visible light image and the regional variance σ_ir of the infrared image are calculated as follows:
the calculation method of the correlation coefficient r comprises the following steps:
in which the size of the image is M × N, the two mean values are respectively the average grey value of the visible light image and the average grey value of the infrared image, I_ir(i, j) denotes the infrared image, and I_vis(i, j) denotes the visible light image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910602605.0A CN110472658B (en) | 2019-07-05 | 2019-07-05 | Hierarchical fusion and extraction method for multi-source detection of moving target |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110472658A true CN110472658A (en) | 2019-11-19 |
CN110472658B CN110472658B (en) | 2023-02-14 |
Family
ID=68506839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910602605.0A Active CN110472658B (en) | 2019-07-05 | 2019-07-05 | Hierarchical fusion and extraction method for multi-source detection of moving target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472658B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1932882A (en) * | 2006-10-19 | 2007-03-21 | 上海交通大学 | Infared and visible light sequential image feature level fusing method based on target detection |
US20090147238A1 (en) * | 2007-03-27 | 2009-06-11 | Markov Vladimir B | Integrated multi-sensor survailance and tracking system |
CN101546428A (en) * | 2009-05-07 | 2009-09-30 | 西北工业大学 | Image fusion of sequence infrared and visible light based on region segmentation |
CN105321172A (en) * | 2015-08-31 | 2016-02-10 | 哈尔滨工业大学 | SAR, infrared and visible light image fusion method |
CN106485740A (en) * | 2016-10-12 | 2017-03-08 | 武汉大学 | A kind of combination point of safes and the multidate SAR image registration method of characteristic point |
CN108198157A (en) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | Heterologous image interfusion method based on well-marked target extracted region and NSST |
CN109558848A (en) * | 2018-11-30 | 2019-04-02 | 湖南华诺星空电子技术有限公司 | A kind of unmanned plane life detection method based on Multi-source Information Fusion |
Non-Patent Citations (4)
Title |
---|
ZENG XIANGJIN等: "Fusion research of visible and infrared images based on IHS transform and regional variance wavelet transform", 《2018 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS》 * |
张文娜: "多源图像融合技术研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 * |
张筱晗等: "高光谱图像融合算法研究与进展", 《舰船电子工程》 * |
郭庆乐: "多时相遥感图像变化检测及趋势分析", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021098081A1 (en) * | 2019-11-22 | 2021-05-27 | 大连理工大学 | Trajectory feature alignment-based multispectral stereo camera self-calibration algorithm |
US11575873B2 (en) | 2019-11-22 | 2023-02-07 | Dalian University Of Technology | Multispectral stereo camera self-calibration algorithm based on track feature registration |
CN111626230A (en) * | 2020-05-29 | 2020-09-04 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111626230B (en) * | 2020-05-29 | 2023-04-14 | 合肥工业大学 | Vehicle logo identification method and system based on feature enhancement |
CN111667517A (en) * | 2020-06-05 | 2020-09-15 | 北京环境特性研究所 | Infrared polarization information fusion method and device based on wavelet packet transformation |
CN111815689A (en) * | 2020-06-30 | 2020-10-23 | 杭州科度科技有限公司 | Semi-automatic labeling method, equipment, medium and device |
CN111815689B (en) * | 2020-06-30 | 2024-06-04 | 杭州科度科技有限公司 | Semi-automatic labeling method, equipment, medium and device |
CN112669360A (en) * | 2020-11-30 | 2021-04-16 | 西安电子科技大学 | Multi-source image registration method based on non-closed multi-dimensional contour feature sequence |
CN112669360B (en) * | 2020-11-30 | 2023-03-10 | 西安电子科技大学 | Multi-source image registration method based on non-closed multi-dimensional contour feature sequence |
CN113191965B (en) * | 2021-04-14 | 2022-08-09 | 浙江大华技术股份有限公司 | Image noise reduction method, device and computer storage medium |
CN113191965A (en) * | 2021-04-14 | 2021-07-30 | 浙江大华技术股份有限公司 | Image noise reduction method, device and computer storage medium |
CN113303905A (en) * | 2021-05-26 | 2021-08-27 | 中南大学湘雅二医院 | Interventional operation simulation method based on video image feedback |
CN113303905B (en) * | 2021-05-26 | 2022-07-01 | 中南大学湘雅二医院 | Interventional operation simulation method based on video image feedback |
CN113781315A (en) * | 2021-07-21 | 2021-12-10 | 武汉市异方体科技有限公司 | Multi-view-angle-based homologous sensor data fusion filtering method |
CN113902660A (en) * | 2021-09-23 | 2022-01-07 | Oppo广东移动通信有限公司 | Image processing method and device, electronic device and storage medium |
CN114153001B (en) * | 2021-12-30 | 2024-02-06 | 同方威视技术股份有限公司 | Inspection system and inspection method for inspecting frozen products in goods |
CN114153001A (en) * | 2021-12-30 | 2022-03-08 | 同方威视技术股份有限公司 | Inspection system and inspection method for inspecting frozen goods in goods |
CN115937700A (en) * | 2022-11-10 | 2023-04-07 | 哈尔滨工业大学 | Multi-source collaborative moving target online detection and identification method |
CN116503756A (en) * | 2023-05-25 | 2023-07-28 | 数字太空(北京)科技股份公司 | Method for establishing surface texture reference surface based on ground control point database |
CN116503756B (en) * | 2023-05-25 | 2024-01-12 | 数字太空(北京)科技股份公司 | Method for establishing surface texture reference surface based on ground control point database |
CN116862916B (en) * | 2023-09-05 | 2023-11-07 | 常熟理工学院 | Production detection method and system based on image processing |
CN116862916A (en) * | 2023-09-05 | 2023-10-10 | 常熟理工学院 | Production detection method and system based on image processing |
CN117994624A (en) * | 2024-04-03 | 2024-05-07 | 聊城大学 | Target identification method based on visible light and hyperspectral image information fusion |
CN117994624B (en) * | 2024-04-03 | 2024-06-11 | 聊城大学 | Target identification method based on visible light and hyperspectral image information fusion |
CN118190019A (en) * | 2024-05-17 | 2024-06-14 | 中国科学院空天信息创新研究院 | Air moving target flight parameter calculation method based on push-broom mode multispectral remote sensing image |
Also Published As
Publication number | Publication date |
---|---|
CN110472658B (en) | 2023-02-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||