
CN115588033A - Synthetic aperture radar and optical image registration system and method based on structure extraction - Google Patents


Info

Publication number
CN115588033A
CN115588033A
Authority
CN
China
Prior art keywords
image
layer
scale
synthetic aperture
aperture radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211083601.4A
Other languages
Chinese (zh)
Inventor
梁继民
庞思琪
郭开泰
郑洋
胡海虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN202211083601.4A
Publication of CN115588033A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G06T7/41 - Analysis of texture based on statistical description of texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/10044 - Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a synthetic aperture radar (SAR) and optical image registration system and method based on structure extraction. The system comprises a structure extraction module, a feature point extraction module, a feature point description module, a feature point matching module, and a registration result display module. The method comprises the following steps: performing structure extraction on the SAR and optical images to obtain multi-layer structure maps; extracting feature points in each layer of the structure map with a multi-scale Harris corner extractor; constructing an enhanced structure descriptor by combining the Sobel operator with the maximum moment of phase congruency; performing coarse-to-fine registration using the position and angle information of the feature points; and displaying the registered images in a checkerboard format. By extracting feature points on texture-removed structure images and constructing enhanced descriptors, the invention overcomes the influence of speckle noise in SAR images and of textured regions in optical images on registration, encourages feature points to fall on structures with salient features, yields accurate and discriminative descriptors, and effectively improves the registration accuracy and robustness of the system.

Description

Synthetic aperture radar and optical image registration system and method based on structure extraction
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a registration method of a synthetic aperture radar image and an optical image based on structure extraction.
Background
With the development of aerospace technology, more and more artificial satellites carrying various imaging sensors have been launched into space, providing a large amount of remote sensing data; each sensor has different characteristics and provides richer Earth observation information. The SAR image generated by synthetic aperture radar (SAR) and the optical image generated by an optical camera are two of the most important types of remote sensing images, and the two can provide highly complementary Earth observation information. Registration of SAR and optical images is a key step in remote sensing image processing and understanding, and a prerequisite for subsequent tasks such as remote sensing image fusion, change detection, and target detection.
Registration of SAR and optical images has long been a difficult problem in computer vision: the large radiometric differences, geometric differences, nonlinear transformation of imaged gray values, and inherent multiplicative speckle noise of SAR images are the main difficulties facing the development of SAR-optical registration technology.
SAR-optical image registration is the technique of aligning the same ground positions on both images through an optimal mapping transformation. Current SAR-optical registration methods fall mainly into intensity-based and feature-based approaches. Intensity-based registration algorithms mostly use similarity measures such as mutual information (MI) or cross-cumulative residual entropy for template matching. However, these methods are prone to failure at local optima, and their iterative solution makes them computationally expensive. Since the beginning of the 21st century, image registration has mainly used feature-based methods, whose main steps are feature point extraction, feature description, feature matching, and model parameter estimation. Existing methods, however, often fail to solve the following problems: because of the inherent multiplicative speckle noise of SAR images, noise is often selected as key points during feature point extraction; feature points falling in texture regions cannot obtain unique, discriminative descriptors because their neighborhood structures are similar; and the nonlinear gray-value transformation between SAR and optical images makes feature descriptors of the same area in the two images unrepeatable, causing matching to fail.
Disclosure of Invention
The invention aims to provide a system and method for registering synthetic aperture radar and optical images based on structure extraction, which overcome the speckle noise of SAR images, extract feature points on salient structures, and obtain unique, repeatable descriptors.
The technical scheme adopted by the invention is a synthetic aperture radar and optical image registration system based on structure extraction, comprising:
a structure extraction module, connected to the input image end, for smoothing the texture information of the input image at multiple scales, extracting structure information, refining edges, and outputting multiple layers of structure images whose texture is smoothed layer by layer;
a feature point extraction module, connected to the structure extraction module, for extracting the feature points in each layer of structure image together with the layer number, position, and scale information of each feature point;
a feature point description module, connected to the feature point extraction module and the structure extraction module, for assigning a principal direction to each feature point and constructing feature point descriptors;
a feature point matching module, connected to the feature point extraction and description modules: coarse registration is achieved with the nearest-neighbor distance ratio and a fast sample consensus algorithm, after which fine registration combines the position and direction information of the feature points to filter out mismatched points;
a registration result display module, connected to the feature point matching module and the input image end, displaying the registration result in a checkerboard format.
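The module wiring described above can be sketched as a small pipeline. This is an illustrative sketch only: the class name, callable signatures, and stub data below are not from the patent; each callable stands in for one module.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RegistrationPipeline:
    """Wires the five modules in the order described above."""
    extract_structure: Callable  # image -> multi-layer structure images
    extract_points: Callable     # layers -> feature points (layer, x, y, scale)
    describe_points: Callable    # (layers, points) -> descriptors
    match_points: Callable       # (ptsA, descA, ptsB, descB) -> transform H_2
    display: Callable            # (reference, sensed, H_2) -> checkerboard view

    def register(self, reference, sensed):
        la = self.extract_structure(reference)
        lb = self.extract_structure(sensed)
        pa, pb = self.extract_points(la), self.extract_points(lb)
        da, db = self.describe_points(la, pa), self.describe_points(lb, pb)
        h2 = self.match_points(pa, da, pb, db)
        return self.display(reference, sensed, h2)
```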
The invention is also characterized in that:
the structure extraction module is specifically configured to: and adopting a window relative total transformation iterative optimization algorithm to carry out multi-scale smoothing on textures in the synthetic aperture radar image and the optical image, wherein each iterative optimization is carried out on the image obtained by the last iterative optimization, each layer of image is stored, and a structural image with 6 layers of texture information smoothed layer by layer is output.
In the multi-scale smoothing process of the texture in the synthetic aperture radar image and the optical image by adopting a window relative total transformation iterative optimization algorithm: the initial smooth window variance scale of the synthetic aperture radar image is 4 pixels, the initial window scale of the optical image is 2 pixels, the scale window is reduced by 0.5 times along with the iteration number, and the iteration number is 6.
The feature point extraction module includes:
a multi-scale Harris function calculation unit, which computes the Harris function value of each layer of the multi-layer structure image output by the structure extraction module using 6 shrinking Harris corner detection windows; the initial detection window scale is 6 pixels, and the window shrinks layer by layer by a fixed factor (the formula image giving the factor is not reproduced in the source), yielding multi-scale Harris function values;
a feature point information unit, connected to the multi-scale Harris function calculation unit, which searches for maxima in the neighborhood of the multi-scale Harris function values; points whose maxima exceed a threshold are the feature points, and their layer number, position, and scale information are recorded.
The feature point description module includes:
an enhanced phase-congruency edge construction unit, which on each layer of the structure image combines the Sobel edge, carrying gradient strength, with the phase-congruency edge to obtain an enhanced phase-congruency edge;
a descriptor construction unit, connected to the enhanced-edge construction unit and the feature point extraction module, which combines the position, scale, and direction information of each feature point, computes the gradient and orientation distribution of the pixels in the 17 sector-ring regions of the circular feature point neighborhood on the enhanced phase-congruency edge, and creates the feature descriptor.
The feature point matching module includes:
a coarse matching unit, which performs coarse registration with the nearest-neighbor distance ratio and the fast sample consensus algorithm to obtain a coarse registration transformation matrix H_1;
a fine matching unit, connected to the coarse matching unit, which adjusts the Euclidean distance between descriptors by combining the position and direction information of the feature points, filters mismatched pairs with a second nearest-neighbor-distance-ratio and fast-sample-consensus pass, and obtains the fine-registration perspective transformation matrix H_2.
The synthetic aperture radar and optical image registration method based on structure extraction is implemented according to the following steps:
Step 1, extract the structure of the synthetic aperture radar and optical images with a relative-total-variation structure extraction algorithm, obtaining multi-layer structure images whose texture is smoothed layer by layer;
Step 2, on the multi-layer structure images, extract the SAR and optical feature points with a multi-scale Harris corner detection algorithm whose detection window shrinks layer by layer, obtaining the layer number, position, and scale information of the feature points;
Step 3, construct an enhanced phase-congruency edge on each layer of the structure image, and build descriptors from the edge features;
Step 4, perform primary matching with the nearest-neighbor distance ratio, fit a coarse transformation matrix H_1 with fast sample consensus, then adjust the distances between descriptors by combining the positional and angular differences of the feature points in the reference-image pixel coordinate system after the primary transformation, and perform secondary matching to obtain an accurate perspective transformation matrix H_2;
Step 5, transform the image to be registered by H_2 into the pixel coordinate system of the reference image, and display the two modality images in checkerboard form.
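The checkerboard display of step 5 can be sketched as below; the tile size of 64 pixels is an arbitrary illustrative choice, not specified in the source, and the function assumes both images are already in the same pixel coordinate system.

```python
import numpy as np

def checkerboard(ref, warped, tile=64):
    """Alternate square tiles taken from the reference image and the
    registered (warped) image, so alignment quality is visible at tile
    boundaries."""
    h, w = ref.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Tiles where (row block + column block) is odd come from the warped image.
    mask = ((ys // tile + xs // tile) % 2).astype(bool)
    out = ref.copy()
    out[mask] = warped[mask]
    return out
```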
The specific process of extracting the synthetic aperture radar and optical feature points is as follows:
on the multi-layer structure image, with an initial window scale of 6 pixels, compute the Harris function value of each layer using a Harris corner detection window that shrinks layer by layer by a fixed factor (the formula image giving the factor is not reproduced in the source), obtaining multi-scale Harris function values;
search for maxima in the neighborhood of the multi-scale Harris function values; points whose maxima exceed the threshold are feature points, and their layer number, position, and scale information are recorded, where the threshold τ is user-defined.
The specific process of step 3 is as follows:
Step 3.1, compute a phase-congruency edge and a Sobel edge on each layer of the structure image; AND the two edge maps at corresponding pixel positions to obtain an intersection mask; multiply the mask element-wise with the Sobel edge map and apply one pass of spatial-domain filtering; then take the larger value of the filtered Sobel edge and the phase-congruency edge at each position as the enhanced phase-congruency edge map M_en.
Step 3.2, on the enhanced phase-congruency edge, compute a histogram of gradient orientations at the feature point position and assign a principal direction to the feature point; describe the features with the SIFT feature descriptor method in a circular neighborhood whose size is positively correlated with the feature point scale and which is divided into 17 sector-ring regions, finally obtaining a 136-dimensional descriptor for each feature point.
The specific process of step 4 is as follows:
Step 4.1, primary matching with the nearest-neighbor distance ratio: compare the Euclidean distances between descriptors on the reference image and on the image to be registered, select feature point matching pairs in the two images, and, after de-duplication, filter mismatched pairs with the fast sample consensus algorithm and fit a coarse transformation matrix H_1.
Step 4.2, transform the feature points of the image to be registered into the reference-image pixel coordinate system with H_1; for each feature point, compute the positional distance difference e_p(p_i, p'_j) and the principal-direction difference e_o(p_i, p'_j) with respect to every feature point on the other image; add the constant 1 to each difference to form a magnification factor, magnify the Euclidean distance between the corresponding descriptors accordingly, and repeat the same procedure as the primary matching to obtain the accurate perspective transformation matrix H_2.
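The distance-magnification rule of step 4.2 can be sketched as follows. The reading adopted here is d * (1 + e_p) * (1 + e_o), matching "add a constant 1 to each difference to form a magnification factor"; the pos_scale and ang_scale normalizers are assumptions, since the source gives no units for the two differences.

```python
import numpy as np

def adjusted_distances(desc_a, desc_b, pts_a_h1, pts_b, ang_a, ang_b,
                       pos_scale=1.0, ang_scale=1.0):
    """Euclidean descriptor distances inflated by (1 + e_p) * (1 + e_o),
    where e_p is the positional gap after applying the coarse transform
    H_1 and e_o is the principal-direction gap. pos_scale / ang_scale
    normalize the two gaps; their values are assumed, not from the source."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    e_p = np.linalg.norm(pts_a_h1[:, None, :] - pts_b[None, :, :], axis=2) * pos_scale
    e_o = np.abs(ang_a[:, None] - ang_b[None, :]) * ang_scale
    return d * (1.0 + e_p) * (1.0 + e_o)
```

Pairs that are geometrically consistent under H_1 keep their original descriptor distance, while inconsistent pairs are pushed apart, so the second nearest-neighbor-ratio pass discards them.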
The invention has the following beneficial effects:
The structure extraction module can treat coherent speckle noise as a texture feature, effectively remove texture, and prevent noise from affecting feature point extraction. Extracting feature points after structure extraction prevents them from falling into texture regions whose features are hard to distinguish, ensures that the feature points extracted from the synthetic aperture radar and optical images lie mainly on structure edges with salient features, and improves both the spatial repetition rate of the feature points extracted from the two images and the uniqueness of the descriptors. In addition, the system does not down-sample the images, and the edges become progressively finer during structure extraction, giving the feature points accurate localization. When describing the feature points, an enhanced phase-congruency edge is first constructed on each layer of the structure image, yielding clear, accurate, noise-free edges; the feature points are then described with the edge information, which overcomes the nonlinear gray-value transformation between the synthetic aperture radar and optical images and yields discriminative descriptors that are similar for corresponding feature points. Because the whole system extracts and describes feature points on structure images whose texture is suppressed layer by layer, feature points on salient structures are extracted, while feature points on weaker-contrast edges can still be extracted from the shallow structure images, so the extracted feature points are more numerous and more uniformly distributed.
The invention improves both the robustness and the accuracy of synthetic aperture radar and optical image registration. It provides a simple and efficient method for aligning synthetic aperture radar and optical images, and has important application value.
Drawings
FIG. 1 is a flow chart of the feature-point-based synthetic aperture radar and optical image registration system;
FIG. 2 is a schematic diagram of the feature point extraction module;
FIG. 3 is a schematic diagram of the feature point description module;
FIG. 4 shows example outputs of the structure extraction module;
FIG. 5 is a schematic diagram of the output of the registration result display module;
FIG. 6 shows the registered images of the two modalities in checkerboard format.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The invention relates to a synthetic aperture radar and optical image registration system based on structure extraction, which comprises:
a structure extraction module, connected to the input image end, for smoothing the texture information of the input image at multiple scales, extracting structure information, refining edges, and outputting multiple layers of structure images whose texture is smoothed layer by layer;
the structure extraction module is specifically configured to: and adopting a window relative total transformation iterative optimization algorithm to carry out multi-scale smoothing on textures in the synthetic aperture radar image and the optical image, wherein each iterative optimization is carried out on the image obtained by the last iterative optimization, each layer of image is stored, and a structural image with 6 layers of texture information smoothed layer by layer is output.
In the multi-scale smoothing process of the texture in the synthetic aperture radar image and the optical image by adopting a window relative total transformation iterative optimization algorithm: the initial smooth window variance scale of the synthetic aperture radar image is 4 pixels, the initial window scale of the optical image is 2 pixels, the scale window is reduced by 0.5 times along with the iteration number, and the iteration number is 6.
a feature point extraction module, connected to the structure extraction module, for extracting the feature points in each layer of structure image together with the layer number, position, and scale information of each feature point;
As shown in FIG. 2, the feature point extraction module includes:
a multi-scale Harris function calculation unit, which computes the Harris function value of each layer of the multi-layer structure image output by the structure extraction module using 6 shrinking Harris corner detection windows; the initial detection window scale is 6 pixels, and the window shrinks layer by layer by a fixed factor (the formula image giving the factor is not reproduced in the source), yielding multi-scale Harris function values;
a feature point information unit, connected to the multi-scale Harris function calculation unit, which searches for maxima in the neighborhood of the multi-scale Harris function values; points whose maxima exceed a threshold are the feature points, with their layer number, position, and scale information; the threshold is set to 0.1 by default in this method.
a feature point description module, connected to the feature point extraction module and the structure extraction module, for assigning a principal direction to each feature point and constructing feature point descriptors;
As shown in FIG. 3, the feature point description module includes:
an enhanced phase-congruency edge construction unit, which on each layer of the structure image combines the Sobel edge, carrying gradient strength, with the phase-congruency edge to obtain an enhanced phase-congruency edge;
a descriptor construction unit, connected to the enhanced-edge construction unit and the feature point extraction module, which combines the position, scale, and direction information of each feature point, computes the gradient and orientation distribution of the pixels in the 17 sector-ring regions of the circular feature point neighborhood on the enhanced phase-congruency edge, and creates the feature descriptor.
a feature point matching module, connected to the feature point extraction and description modules: coarse registration is achieved with the nearest-neighbor distance ratio and a fast sample consensus algorithm, after which fine registration combines the position and direction information of the feature points to filter out mismatched points;
the feature point matching module includes:
a coarse matching unit for coarse registration by adopting nearest neighbor distance ratio and rapid sampling consistency algorithm to obtain a coarse registration transformation matrix H 1
A fine matching unit connected with the coarse matching unit adjusts Euclidean distance between descriptors by combining position information and direction information of the characteristic points, and filters error matching pairs by using a second nearest neighbor distance ratio and a rapid sampling consistency algorithm to obtain a fine registration perspective transformation matrix H 2
a registration result display module, connected to the feature point matching module and the input image end, displaying the registration result in a checkerboard format.
The synthetic aperture radar and optical image registration method based on structure extraction is implemented according to the following steps:
Step 1, after the program runs, open the addresses of the synthetic aperture radar and optical images to be registered, select a pair of synthetic aperture radar and optical images, and extract their structure with the relative-total-variation structure extraction algorithm, obtaining a structure image with 6 layers of texture smoothed layer by layer;
The image structure is extracted with the relative total variation (RTV) algorithm, which solves the following minimization problem iteratively:

S = argmin_S Σ_p [ (S_p - I_p)^2 + λ · ( D_x(p) / (L_x(p) + ε) + D_y(p) / (L_y(p) + ε) ) ]

where

D_x(p) = Σ_{q∈R(p)} g_{p,q} |(∂_x S)_q|,  D_y(p) = Σ_{q∈R(p)} g_{p,q} |(∂_y S)_q|

are the windowed total variations (WTV) in the x and y directions,

L_x(p) = | Σ_{q∈R(p)} g_{p,q} (∂_x S)_q |,  L_y(p) = | Σ_{q∈R(p)} g_{p,q} (∂_y S)_q |

are the windowed inherent variations (WIV) in the x and y directions, and the ratio D/(L + ε) is the windowed relative total variation (RTV), used as the penalty term of the structure extraction problem.
The weights g_{p,q} are given by a two-dimensional Gaussian kernel G_σ with variance σ centered at p over the window R(p), and the convolution window size is typically 5σ. S is the estimated structure image, ∂_x and ∂_y are the partial derivatives in the x and y directions, I is the original image, p and q denote pixels in the image, ε is a very small constant that prevents the denominator from being 0, and λ is a weight coefficient.
the initial calculation transformation window scale sigma of the synthetic aperture radar image from the high score 3 is 4 (pixel), the initial window scale sigma of the optical image is 2 (pixel), and the method is used for overcoming the influence of speckle noise on the SAR image and texture areas on the optical image on feature point extraction and description; the scale window is reduced by 0.5 times along with the iteration times and is used for thinning the structural edge of the image; and iterating for 6 times to obtain a structural image with 6 layers of texture information and smooth layer by layer.
The initial calculation transformation window scale aiming at the synthetic aperture radar image is 0.5 (pixel), the initial window scale sigma aiming at the optical image is 1 (pixel), and the method is used for overcoming the influence of speckle noise on the synthetic aperture radar image and texture areas on the optical image on feature point extraction and description; the scale window is reduced by 0.5 times along with the iteration times and is used for thinning the structural edge of the image; and iterating for 6 times to obtain a structural image with 6 layers of texture information, which is smoothed layer by layer and the edges of which are refined layer by layer.
The initial relative total transform window scale for synthetic aperture radar images is 4 (pixel), the initial relative total transform window scale for optical images is 2 (pixel), and the typical window scale is 5 σ. The scale window is reduced by 0.5 times along with the iteration times, and 6 times of iteration is carried out to obtain a structural image with 6 layers of gradually smooth textures and gradually fine edge positioning. The effect of the multilayer structure diagram is shown in fig. 4, where the upper and lower rows are the structure diagrams of the synthetic aperture radar and the optical image respectively when iterating for 0 time, 3 times and 6 times.
The initial window scale, the reduction multiple and the iteration times are all empirical values, and the optimal values are obtained through experimental verification.
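The layer-by-layer smoothing schedule above can be sketched as follows. A Gaussian filter stands in for the per-iteration RTV optimization (an assumption made to keep the sketch short; the real module solves the RTV problem at each iteration), but the control flow is as described: each layer is computed from the previous one with a window scale that shrinks by a factor of 0.5, and all layers are kept.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_pyramid(image, sigma0, n_layers=6, shrink=0.5):
    """Multi-layer structure images: sigma0 = 4 for SAR, 2 for optical;
    6 layers with the smoothing scale halved at each iteration."""
    layers = []
    current = np.asarray(image, dtype=float)
    sigma = sigma0
    for _ in range(n_layers):
        current = gaussian_filter(current, sigma)  # stand-in for one RTV solve
        layers.append(current)
        sigma *= shrink
    return layers
```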
Step 2, on the multi-layer structure images, extract the synthetic aperture radar and optical feature points with a multi-scale Harris corner detection algorithm whose detection window shrinks layer by layer, obtaining the layer number, position, and scale information of the feature points;
the specific process of extracting the characteristic points of the synthetic aperture radar and the optical image is as follows:
on the multi-layer structural image, a Harris window with an initial window scale of 6 pixels is used for calculating a function value, and the function value is increased along with the number of layers of the structural image
Figure BDA0003834481030000101
Calculating to obtain a Harris function value of each layer of image through a multiple decreasing Harris corner detection window;
searching a maximum value in the neighborhood of the multi-scale Harris function value, wherein the point, the searched maximum value of which is larger than a threshold value, is a feature point, and recording the number of layers, the position and scale information of the feature point; wherein the threshold tau is a self-defined threshold, and the method is set to be 0.1 by default.
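A minimal sketch of the multi-scale Harris extraction described above. Two details are assumptions: the per-layer shrink factor (illustrated with 1/sqrt(2), since the formula image is missing from the source), and the normalization of the response so that the default threshold τ = 0.1 is comparable across layers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, sobel

def harris_response(img, sigma, k=0.04):
    """Harris corner response with a Gaussian integration window of scale
    sigma, which plays the role of the per-layer detection window."""
    ix = sobel(img, axis=1)
    iy = sobel(img, axis=0)
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def multiscale_harris(layers, sigma0=6.0, shrink=1 / np.sqrt(2), tau=0.1):
    """Shrinking detection window over the structure layers; keeps local
    maxima above tau and records (layer, x, y, scale) per feature point."""
    points = []
    sigma = sigma0
    for n, img in enumerate(layers):
        r = harris_response(np.asarray(img, dtype=float), sigma)
        r = r / (np.abs(r).max() + 1e-12)  # assumed normalization for tau
        peaks = (r == maximum_filter(r, size=3)) & (r > tau)
        for y, x in zip(*np.nonzero(peaks)):
            points.append((n, int(x), int(y), sigma))
        sigma *= shrink
    return points
```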
Step 3, constructing an enhanced phase consistency edge on each layer of structural image, and constructing a descriptor by using the edge characteristics; the specific process is as follows:
step 3.1, a phase consistency edge and a Sobel edge are computed on each layer of the structure image; an AND operation on the Sobel edge and the phase consistency edge at corresponding pixel positions yields an intersection mask; the mask is multiplied element-wise with the Sobel edge map, one pass of spatial-domain filtering is applied, and the larger value at each position of the filtered Sobel edge and the phase consistency edge is taken as the enhanced phase consistency edge map M_en.
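The fusion in step 3.1 (intersection mask, masked Sobel map, one filtering pass, element-wise maximum) might look like the following. A gradient-magnitude map stands in for a true phase congruency edge, and the mean-based binarization thresholds are assumptions not stated in the text.

```python
import numpy as np

def xcorr2(img, ker):
    # small 2-D cross-correlation with edge padding
    kh, kw = ker.shape
    ph, pw = kh // 2, kw // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += ker[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_mag(img):
    # Sobel gradient magnitude (sign-symmetric, so correlation suffices)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    return np.hypot(xcorr2(img, kx), xcorr2(img, kx.T))

def enhanced_edge(sobel, pc, blur=3):
    # AND the two edge maps into an intersection mask (the mean-based
    # thresholds are assumptions), re-weight Sobel inside it, apply one
    # spatial filtering pass, then fuse with the PC edge by maximum
    mask = (sobel > sobel.mean()) & (pc > pc.mean())
    filtered = xcorr2(sobel * mask, np.ones((blur, blur)) / blur ** 2)
    return np.maximum(filtered, pc)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                             # a vertical step edge
sob = sobel_mag(img)
gy, gx = np.gradient(img)
pc_stand_in = np.hypot(gx, gy)               # placeholder for a real PC map
m_en = enhanced_edge(sob, pc_stand_in)
```

Taking the maximum at the last step guarantees that the enhanced map never loses edge evidence that the phase consistency map already carried.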
Step 3.2, on the enhanced phase consistency edge, a gradient orientation histogram is computed at the feature-point position to assign a main direction to each feature point; in a circular neighborhood whose radius is positively correlated with the feature-point scale, divided into 17 fan-ring sub-regions following the GLOH algorithm, the features are described using the SIFT feature-descriptor method, finally yielding a 136-dimensional descriptor for each feature point.
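The 17-sub-region layout referenced here can be illustrated by a bin-index helper. The function and its radii are illustrative choices, but the bin count matches the text: 17 log-polar sub-regions times 8 orientation bins gives the stated 136-dimensional descriptor.

```python
import numpy as np

def gloh_bin(dx, dy, r1=6.0, r2=11.0, r3=15.0):
    # GLOH-style layout: one central disc plus two rings of 8 angular
    # sectors each = 17 sub-regions; the radii r1 < r2 < r3 are
    # illustrative values, not taken from the patent
    r = np.hypot(dx, dy)
    if r > r3:
        return None                          # outside the descriptor support
    if r <= r1:
        return 0                             # central bin
    sector = int((np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * 8) % 8
    ring = 0 if r <= r2 else 1
    return 1 + ring * 8 + sector             # bins 1..16
```

Accumulating an 8-bin gradient orientation histogram inside each of the 17 sub-regions, relative to the assigned main direction, produces the 17 × 8 = 136-dimensional vector.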
Step 4, primary matching is performed with the nearest-neighbor distance ratio (set to 0.9 in this method), and fast sample consensus fitting yields a coarse transformation matrix H_1; after the primary transformation, the distances between descriptors are adjusted according to the position and angle information of the feature points in the reference-image pixel coordinate system, and secondary matching yields an accurate perspective transformation matrix H_2; the specific process is as follows:
step 4.1, primary matching with the nearest-neighbor distance ratio: the Euclidean distances between the descriptors on the reference image and those on the image to be registered are compared; matching pairs whose minimum distance is less than 0.9 times the second-minimum distance are selected; duplicate matching point pairs across the multi-layer structure image are filtered out; and the fast sample consensus algorithm removes wrong matching pairs and fits the coarse transformation matrix H_1.
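The nearest-neighbor distance-ratio test of step 4.1 reduces to a few lines. The fast-sample-consensus fit of H_1 that follows it is omitted here, and the helper name is invented for the sketch; only the 0.9 ratio comes from the text.

```python
import numpy as np

def nndr_match(desc_a, desc_b, ratio=0.9):
    # accept a's nearest neighbour in b only if it beats the
    # second-nearest by the given distance ratio (0.9 in the method)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]])
matches = nndr_match(desc_a, desc_b)
```

The surviving pairs would then be de-duplicated and passed to a robust estimator (FSC in the patent; RANSAC-style consensus in general) to fit H_1.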
Step 4.2, secondary matching: the feature points on the image to be registered are mapped by the transformation matrix H_1 into the reference-image pixel coordinate system, and for each feature point the position distance difference e_p(p_i, p'_j) and the main-direction difference e_o(p_i, p'_j) with respect to all feature points on the other image are computed; the Euclidean distance between the descriptors of the two images becomes:
POED(p_i, p'_j) = (1 + e_p(p_i, p'_j)) (1 + e_o(p_i, p'_j)) ED(p_i, p_j)
where p_i is a feature point on the reference image, p_j is the candidate match of p_i on the image to be registered, p'_j is the point p_j transformed by H_1, and ED(p_i, p_j) is the Euclidean distance between the descriptors of p_i and p_j. The descriptors are then re-matched on the adjusted distances using the nearest-neighbor distance ratio (NNDR) method, duplicate matching point pairs are filtered out, and the fast sample consensus (FSC) algorithm fits the accurate transformation matrix H_2.
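The POED adjustment can be written directly from the formula; the angle-difference handling (angles assumed pre-wrapped to a common range) is an assumption of this sketch.

```python
import numpy as np

def poed(pos_i, ang_i, desc_i, pos_j_warped, ang_j, desc_j):
    # position-and-orientation enhanced distance:
    # POED = (1 + e_p) * (1 + e_o) * ED, computed after p_j has been
    # mapped into the reference pixel frame by the coarse matrix H1
    e_p = float(np.linalg.norm(np.asarray(pos_i, float) - np.asarray(pos_j_warped, float)))
    e_o = abs(ang_i - ang_j)                 # assumes pre-wrapped angles
    ed = float(np.linalg.norm(np.asarray(desc_i, float) - np.asarray(desc_j, float)))
    return (1.0 + e_p) * (1.0 + e_o) * ed
```

Both penalty terms are multiplicative and bounded below by 1, so a pair whose positions and directions already agree under H_1 keeps its plain descriptor distance, while geometrically inconsistent pairs are pushed apart before the second NNDR pass.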
Step 5, fig. 5 shows the correctly matched feature-point pairs on the two modality images obtained by the method, and fig. 6 shows the registered two-modality images displayed in checkerboard form; it can be seen that the same geographic position in the two images is aligned at pixel level.
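A checkerboard display like the one in fig. 6 is straightforward to produce once the two images share a pixel grid; this helper (an illustration, not the patent's implementation) simply alternates tiles of the two registered images.

```python
import numpy as np

def checkerboard(img_a, img_b, tile=32):
    # alternate tile x tile blocks of the reference image and the warped
    # image so that misalignment shows up as broken edges at block seams
    out = img_a.copy()
    h, w = img_a.shape[:2]
    for by in range(0, h, tile):
        for bx in range(0, w, tile):
            if ((by // tile) + (bx // tile)) % 2 == 1:
                out[by:by + tile, bx:bx + tile] = img_b[by:by + tile, bx:bx + tile]
    return out

mosaic = checkerboard(np.zeros((4, 4)), np.ones((4, 4)), tile=2)
```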
In the synthetic aperture radar and optical image registration system based on structure extraction, the structure extraction module treats coherent speckle noise as a texture feature and effectively erases it, preventing noise from affecting feature-point extraction. Because the invention extracts feature points only after extracting the image structure, the feature points are kept out of texture areas whose features are indistinguishable and are instead concentrated on structural edges with more salient features, which improves both the spatial repetition rate of the feature points extracted from the two images and the distinctiveness of the descriptors. In addition, the system does not downsample the image, and the edges become progressively finer as the structure is extracted, providing accurate localization for the feature points. When describing the feature points, an enhanced phase consistency edge is first constructed on each layer of the structure image, giving clear, accurate, noise-free edges; describing the feature points with this edge information overcomes the nonlinear gray-value transformation between synthetic aperture radar and optical images and yields distinguishable descriptors that remain similar for feature points at the same geographic position. Because the whole system extracts and describes feature points on a multi-layer structure image whose texture information is suppressed layer by layer, feature points on salient structures are extracted, and feature points on weak-contrast edges can still be extracted on the shallow structure layers, so the extracted feature points are more numerous and more uniformly distributed.
The invention improves both the robustness and the accuracy of synthetic aperture radar and optical image registration. The registration method is simple, efficient, and of significant application value.

Claims (10)

1. Synthetic aperture radar and optical image registration system based on structure extraction is characterized by comprising:
the structure extraction module, connected to the input image end, is used to smooth the texture information of the input image at multiple scales, extract structure information, refine edges, and output a structure image whose texture information is smoothed layer by layer over multiple layers;
the feature point extraction module, connected to the structure extraction module, is used to extract the feature points in each layer of the structure image, together with their layer index, position, and scale information;
the feature point description module, connected to the feature point extraction module and the structure extraction module, is used to assign the main direction of the feature points and construct feature-point descriptors;
the feature point matching module, connected to the feature point extraction module and the feature point description module: coarse registration is achieved with the nearest-neighbor distance ratio and the fast sample consensus algorithm; after coarse registration, fine registration using the position and direction information of the feature points filters out wrong matching points;
a registration result display module, connected to the feature point matching module and the input image end: the registration effect is shown in checkerboard format.
2. The system of claim 1, wherein the structure extraction module is specifically configured to: smooth the texture in the synthetic aperture radar image and the optical image at multiple scales using a windowed relative total variation iterative optimization algorithm, wherein each iterative optimization is performed on the image obtained by the previous iteration, each layer image is stored, and a structure image with 6 layers of texture information smoothed layer by layer is output.
3. The system according to claim 2, wherein in the multi-scale smoothing of the texture in the synthetic aperture radar image and the optical image by the iterative relative total variation optimization algorithm: the initial smoothing-window variance scale of the synthetic aperture radar image is 4 pixels, the initial window scale of the optical image is 2 pixels, the window scale is halved at each iteration, and the number of iterations is 6.
4. The structure extraction-based synthetic aperture radar and optical image registration system of claim 1, wherein the feature point extraction module comprises:
a multi-scale Harris function calculating unit, configured to compute, for the multi-layer structure image output by the structure extraction module, the Harris function value of each layer of image using 6 Harris corner detection windows whose scale starts at 6 pixels and decreases by a fixed multiple (given by the formula of Figure FDA0003834481020000021, not reproduced here) as the layer index of the structure image increases, obtaining multi-scale Harris function values;
and a feature point information obtaining unit, connected to the multi-scale Harris function calculating unit, configured to search for local maxima in the neighborhood of the multi-scale Harris function values, wherein the points among the local maxima that exceed a threshold are the feature points, and to obtain the layer index, position, and scale information of the feature points.
5. The structure extraction based synthetic aperture radar and optical image registration system of claim 1, wherein the feature point description module comprises:
an enhanced phase consistency edge construction unit, configured to combine, on each layer of the structure image, a Sobel edge carrying gradient strength with the phase consistency edge to obtain the enhanced phase consistency edge;
and a descriptor construction unit, connected to the enhanced phase consistency edge construction unit and the feature point extraction module: combining the position, scale, and direction information of the feature points, the gradient magnitude and orientation distribution of the pixels in the 17 fan-ring sub-regions of the circular neighborhood of each feature point are computed on the enhanced phase consistency edge to create the feature descriptor.
6. The structure extraction based synthetic aperture radar and optical image registration system of claim 1, wherein the feature point matching module comprises:
a coarse matching unit, configured for coarse registration using the nearest-neighbor distance ratio and the fast sample consensus algorithm to obtain a coarse registration transformation matrix H_1;
and a fine matching unit, connected to the coarse matching unit, configured to adjust the Euclidean distance between descriptors by combining the position and direction information of the feature points, and to filter out wrong matching pairs using a second nearest-neighbor distance ratio pass and the fast sample consensus algorithm, obtaining a fine registration perspective transformation matrix H_2.
7. A synthetic aperture radar and optical image registration method based on structure extraction, characterized by being implemented according to the following steps:
step 1, extracting the structures of the synthetic aperture radar and optical images with a relative total variation structure extraction algorithm to obtain a multi-layer structure image with texture information smoothed layer by layer;
step 2, on the multi-layer structure image, extracting the feature points of the synthetic aperture radar and optical images with a multi-scale Harris corner detection algorithm whose detection window decreases layer by layer with the structure image, obtaining the layer index, position, and scale information of the feature points;
step 3, constructing an enhanced phase consistency edge on each layer of the structure image and constructing descriptors from the edge features;
step 4, performing primary matching with the nearest-neighbor distance ratio and fitting by fast sample consensus to obtain a coarse transformation matrix H_1; after the primary transformation, adjusting the distances between descriptors according to the position and angle information of the feature points in the reference-image pixel coordinate system, and performing secondary matching to obtain an accurate perspective transformation matrix H_2;
step 5, mapping the image to be registered by the transformation matrix H_2 into the pixel coordinate system of the reference image, and displaying the two modality images in checkerboard form.
8. The method for registering a synthetic aperture radar and an optical image based on structure extraction as claimed in claim 7, wherein the specific process for extracting the feature points of the synthetic aperture radar and optical images is as follows:
on the multi-layer structure image, Harris corner detection windows with an initial scale of 6 pixels, decreasing by a fixed multiple (given by the formula of Figure FDA0003834481020000031, not reproduced here) as the layer index of the structure image increases, are used to compute the Harris function value of each layer of image;
searching for local maxima in the neighborhood of the multi-scale Harris function values, wherein the points whose local maximum exceeds a threshold are the feature points, and recording the layer index, position, and scale information of the feature points, the threshold τ being a user-defined threshold.
9. The method for registering a synthetic aperture radar and an optical image based on structure extraction as claimed in claim 7, wherein the specific process of step 3 is as follows:
step 3.1, computing a phase consistency edge and a Sobel edge on each layer of the structure image, performing an AND operation on the Sobel edge and the phase consistency edge at corresponding pixel positions to obtain an intersection mask, multiplying the intersection mask element-wise with the Sobel edge map, applying one pass of spatial-domain filtering, and taking the larger value at each position of the filtered Sobel edge and the phase consistency edge as the enhanced phase consistency edge map M_en;
step 3.2, on the enhanced phase consistency edge, computing a gradient orientation histogram at the feature-point position to assign a main direction to each feature point, describing the features with the SIFT feature-descriptor method in a circular neighborhood whose radius is positively correlated with the feature-point scale and which is divided into 17 fan-ring sub-regions, and finally obtaining a 136-dimensional descriptor for each feature point.
10. The method for registering a synthetic aperture radar and an optical image based on structure extraction as claimed in claim 7, wherein the specific process of step 4 is as follows:
step 4.1, primary matching with the nearest-neighbor distance ratio: comparing the Euclidean distances between descriptors on the reference image and on the image to be registered using the nearest-neighbor distance ratio, selecting feature-point matching pairs in the two images, removing duplicates, filtering wrong matching pairs with the fast sample consensus algorithm, and fitting a coarse transformation matrix H_1;
step 4.2, mapping the feature points on the image to be registered by the transformation matrix H_1 into the pixel coordinate system of the reference image, computing the position distance difference e_p(p_i, p'_j) and the main-direction difference e_o(p_i, p'_j) between each feature point and all feature points on the other image, adding the constant 1 to each difference to form magnification factors, magnifying the Euclidean distance between the corresponding descriptors, and repeating the same steps as the primary matching to obtain the accurate perspective transformation matrix H_2.
CN202211083601.4A 2022-09-06 2022-09-06 Synthetic aperture radar and optical image registration system and method based on structure extraction Pending CN115588033A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211083601.4A CN115588033A (en) 2022-09-06 2022-09-06 Synthetic aperture radar and optical image registration system and method based on structure extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211083601.4A CN115588033A (en) 2022-09-06 2022-09-06 Synthetic aperture radar and optical image registration system and method based on structure extraction

Publications (1)

Publication Number Publication Date
CN115588033A true CN115588033A (en) 2023-01-10

Family

ID=84772449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211083601.4A Pending CN115588033A (en) 2022-09-06 2022-09-06 Synthetic aperture radar and optical image registration system and method based on structure extraction

Country Status (1)

Country Link
CN (1) CN115588033A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118015004A (en) * 2024-04-10 2024-05-10 宝鸡康盛精工精密制造有限公司 Laser cutting scanning system and method
CN118015004B (en) * 2024-04-10 2024-07-05 宝鸡康盛精工精密制造有限公司 Laser cutting scanning system and method
CN118735975A (en) * 2024-09-03 2024-10-01 中国科学院自动化研究所 Registration method, apparatus, electronic device, storage medium and computer program product for optical image and synthetic aperture radar image

Similar Documents

Publication Publication Date Title
CN107067415B (en) A kind of object localization method based on images match
Chen et al. Building change detection with RGB-D map generated from UAV images
Xiong et al. A critical review of image registration methods
WO2019042232A1 (en) Fast and robust multimodal remote sensing image matching method and system
CN110232387B (en) Different-source image matching method based on KAZE-HOG algorithm
CN108765476B (en) Polarized image registration method
Li et al. RIFT: Multi-modal image matching based on radiation-invariant feature transform
EP3420532B1 (en) Systems and methods for estimating pose of textureless objects
Chen et al. Robust affine-invariant line matching for high resolution remote sensing images
CN106981077A (en) Infrared image and visible light image registration method based on DCE and LSS
Zhu et al. Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features
Misra et al. Feature based remote sensing image registration techniques: a comprehensive and comparative review
CN110147162B (en) Fingertip characteristic-based enhanced assembly teaching system and control method thereof
CN111462198B (en) Multi-mode image registration method with scale, rotation and radiation invariance
CN108550165A (en) A kind of image matching method based on local invariant feature
Xiang et al. A robust two-stage registration algorithm for large optical and SAR images
Lee et al. Accurate registration using adaptive block processing for multispectral images
Liu et al. Multi-sensor image registration by combining local self-similarity matching and mutual information
CN116612165A (en) Registration method for large-view-angle difference SAR image
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN115588033A (en) Synthetic aperture radar and optical image registration system and method based on structure extraction
CN115861792A (en) Multi-mode remote sensing image matching method for weighted phase orientation description
Parmehr et al. Automatic parameter selection for intensity-based registration of imagery to LiDAR data
Xie et al. SMRD: A local feature descriptor for multi-modal image registration
Cui et al. Multi-modal remote sensing image registration based on multi-scale phase congruency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination