CN108333562A - A terrain elevation adaptive dimensionality reduction image registration method - Google Patents
A terrain elevation adaptive dimensionality reduction image registration method
- Publication number: CN108333562A
- Application number: CN201810090998.7A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
Abstract
The invention discloses a terrain elevation adaptive dimensionality reduction image registration method, belonging to the field of signal processing and suitable for the registration of interferometric synthetic aperture radar (InSAR) images. The main idea is: determine a main image and an auxiliary image, then determine the registration offset between them; establish a registration offset fitting model from this offset, yielding 8 fitting coefficients to be determined; solve the 8 fitting coefficients to obtain the final registration offset fitting model; and, according to the final model, obtain the registration offset of each pixel in the auxiliary image relative to the main image, then resample the auxiliary image to obtain an accurately registered image.
Description
Technical Field
The invention belongs to the technical field of signal processing, and particularly relates to a terrain elevation adaptive dimensionality reduction image registration method suitable for the registration of interferometric synthetic aperture radar (InSAR) images.
Background
Interferometric synthetic aperture radar (InSAR) extends the capability of conventional SAR by interferometrically processing echo data acquired either with multiple receiving antennas or through repeated observations with a single antenna. Because InSAR enhances the information acquisition capability of SAR, it has broad application prospects in many military and civilian fields and has therefore attracted wide attention. In spaceborne InSAR elevation measurement, a single-pass mode is difficult to adopt because the distance between the flight path and the scene is large, so a repeat-pass (two-pass) working mode is widely used at present: a radar satellite passes over the same area twice in the same direction along different tracks, and the echo data and system parameters of the two imaging passes are recorded for interferometric processing. When measuring terrain elevation by SAR interferometry, two SAR images with high coherence must be obtained for the same scene, which requires an image registration step: pixels at the same position in the registered images must correspond to the same small area on the ground to ensure the coherence of the two images. The registration accuracy directly affects this coherence.
Registration takes each pixel of the image from one pass (called the main image) as a reference, finds the position of the corresponding ground sample point in the image from the other pass (called the auxiliary image), expresses that position as an offset, and shifts the auxiliary-image pixels by the offset so that pixels at the same position in the two images correspond to the same point on the ground. Registration is divided into coarse registration and fine registration: coarse registration shifts the auxiliary image as a whole without considering local variation, while fine registration accounts for local variation and interpolates the auxiliary image to achieve high registration accuracy.
Image registration first determines the position of each pixel of one image in the other image, i.e. the two-dimensional registration offset. Under a short-baseline condition, for a small and flat imaging scene, the corresponding pixels of the two images can be registered by a simple two-dimensional translation of one image. Under a longer baseline or severe terrain relief, pixels at different azimuth and range positions require different translations, which greatly increases the complexity of the registration process; registration can then no longer be achieved by simple image translation and interpolation.
Existing InSAR image registration methods fall mainly into two classes. The first class estimates the registration offsets at control points using a method based on image data or image features, and then computes the registration offsets at the other pixels from a two-dimensional bivariate low-order polynomial model. This class ignores the influence of terrain elevation on the registration offset: when the baseline is long or the terrain in the observed scene is complex, the accuracy of fitting the registration offset with a bivariate low-order polynomial model is low, and when the polynomial coefficients are estimated from the offsets at the control points, the number and spatial distribution of the control points strongly affect the coefficient estimation accuracy. The second class first computes an initial registration offset for every pixel by geometric registration, and then corrects it with the high-precision offsets at control points according to a two-dimensional bivariate first-order polynomial model to obtain full-scene high-precision offsets; this class requires pixel-by-pixel geometric registration and therefore has low computational efficiency.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a terrain elevation adaptive dimensionality reduction image registration method that tracks the variation of the registration offset between the main and auxiliary images with terrain elevation, thereby improving the fitting accuracy of the registration offset.
To achieve this technical purpose, the invention adopts the following technical scheme.
A terrain elevation adaptive dimensionality reduction image registration method comprises the following steps:
step 1, determining a main image and an auxiliary image, and then determining a registration offset between the main image and the auxiliary image;
step 2, establishing a registration offset fitting model according to the registration offset between the main image and the auxiliary image to obtain 8 fitting coefficients to be determined in the registration offset fitting model;
step 3, solving 8 fitting coefficients to be determined in the registration offset fitting model, and further determining a final registration offset fitting model;
step 4, according to the final registration offset fitting model, obtaining the registration offset of each pixel point in the auxiliary image relative to the main image, and then resampling the auxiliary image to obtain an accurately registered image.
Compared with the prior art, the invention has the following advantages:
First, the influence of terrain elevation on the registration offset is considered, which improves the fitting accuracy of the registration offset. In practical applications, the spaceborne repeat-pass mode faces complex conditions: the effective baseline is long, and the two flight tracks are not perfectly parallel but form a very small included angle. Long baselines, severe terrain relief and non-parallel tracks are thus non-ideal factors that frequently arise in image registration. The two-dimensional trivariate first-order polynomial model containing a terrain elevation term effectively accounts for the influence of terrain elevation on the registration offset between the main and auxiliary images, thereby improving the fitting accuracy of the registration offset.
Second, the invention does not require pixel-by-pixel geometric registration, which improves computational efficiency. The traditional geometric registration method must compute an initial registration offset for every pixel and then correct it with the high-precision offsets at control points according to a two-dimensional bivariate first-order polynomial model to obtain full-scene high-precision offsets. With the two-dimensional trivariate first-order polynomial model proposed here, the unknown coefficients can be determined by computing the registration offset at only a small number of pixels, and all other pixels obtain their registration offsets directly from the determined model, which reduces the computational complexity.
Third, the dimension-reduction processing reduces the polynomial coefficients that must be estimated from the registration offsets at control points to two constant-term coefficients, which improves the robustness of the coefficient estimation. In previous methods that estimate the registration offsets at control points from image data or image features and then compute the offsets at other pixels from a two-dimensional bivariate low-order polynomial model, the number and spatial distribution of the control points strongly affect the coefficient estimation accuracy. The two-dimensional trivariate first-order polynomial model proposed here needs only the two constant-term coefficients to be determined by the conventional control-point method, while the other coefficients are computed analytically; the choice of control points therefore has little influence on the model, improving the robustness of the coefficient estimation.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flowchart illustrating an implementation of a terrain elevation adaptive dimension reduction image registration method according to the present invention;
FIG. 2(a) is a SAR amplitude map;
FIG. 2(b) is a schematic diagram of the 12 distinctive points, selected within the region indicated by the box in FIG. 2(a), used for quantitative evaluation of the registration accuracy;
FIG. 2(c) is a schematic diagram of an SRTM DEM corresponding to an observation scene;
FIG. 3 is an interference phase diagram between the primary and secondary images after registration by the method of the present invention;
FIG. 4 is a graph of the registration error statistics of the second-order polynomial method and of the method of the present invention;
FIG. 5(a) is a schematic diagram of the azimuthal registration error of the second order polynomial method;
FIG. 5(b) is a schematic diagram of range-wise registration error for a second-order polynomial approach;
FIG. 5(c) is a schematic view of the azimuthal registration error of the method of the present invention;
fig. 5(d) is a schematic diagram of the distance-wise registration error of the method of the present invention.
Detailed Description
Referring to fig. 1, which is a flowchart of the implementation of the terrain elevation adaptive dimensionality reduction image registration method of the present invention, the method comprises the following steps:
step 1, determining a satellite-borne radar, wherein a target exists in the detection area of the satellite-borne radar, and two observation images are obtained after the satellite-borne radar observes the detection area twice; because of the time interval between the two observations, the two observation images exhibit pixel offsets in both the range and azimuth directions.
Randomly selecting one observation image from the two observation images as a main image, taking the other observation image as an auxiliary image, wherein the main image and the auxiliary image respectively comprise H pixel points; h is a positive integer greater than 0.
Select any pixel point in the main image, denoted the pixel point in the main image, with coordinates (a_m, r_m). The three-dimensional position T = (T_x, T_y, T_z)^T of the target on the earth surface in the WGS84 coordinate system is calculated from equations (1) to (3):

R = |P - T|    (1)

f_d = -(2/(λR))·(P - T)^T·V    (2)

(T_x^2 + T_y^2)/(R_e + h)^2 + T_z^2/(R_p + h)^2 = 1    (3)

where R denotes the slant range from the satellite-borne radar to the target; t denotes the azimuth time (P and V are evaluated at t); P denotes the position vector of the satellite-borne radar in the WGS84 coordinate system, P = (P_x, P_y, P_z)^T, with P_x, P_y and P_z the x-, y- and z-axis components of the radar position; T denotes the three-dimensional position of the target on the earth surface in the WGS84 coordinate system, T = (T_x, T_y, T_z)^T, with T_x, T_y and T_z the x-, y- and z-axis components of the target position; f_d denotes the Doppler frequency; V denotes the velocity vector of the satellite-borne radar in the WGS84 coordinate system, V = (V_x, V_y, V_z)^T, with V_x, V_y and V_z the x-, y- and z-axis components of the radar velocity; (·)^T denotes transposition; λ denotes the wavelength of the signal transmitted by the satellite-borne radar; h denotes the target elevation, obtainable from a DEM (digital elevation model); R_e denotes the equatorial radius of the earth and R_p the polar radius; the WGS84 coordinate system is the internationally adopted geocentric coordinate system.
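As an illustration, the forward positioning step can be sketched as a small nonlinear solve. The sketch below is a minimal, assumption-laden example: scipy's generic root finder stands in for whatever solver an operational processor would use, the Doppler sign convention is assumed, and T0 is an assumed initial guess (e.g. a coarse scene-centre position).

```python
# A minimal sketch of forward positioning (equations (1)-(3)), assuming the
# Doppler convention f_d = -2/(lam*R) * (P - T)·V; this convention and the
# initial guess T0 are assumptions of the sketch, not taken from the patent.
import numpy as np
from scipy.optimize import fsolve

R_E = 6378137.0    # WGS84 equatorial radius R_e [m]
R_P = 6356752.3    # WGS84 polar radius R_p [m]

def forward_position(P, V, R, f_d, lam, h, T0):
    """Solve equations (1)-(3) for the target position T (WGS84, metres)."""
    def equations(T):
        d = P - T
        range_eq = np.linalg.norm(d) - R                        # equation (1)
        doppler_eq = f_d + 2.0 / (lam * R) * np.dot(V, d)       # equation (2)
        earth_eq = ((T[0]**2 + T[1]**2) / (R_E + h)**2
                    + T[2]**2 / (R_P + h)**2 - 1.0)             # equation (3)
        return [range_eq, doppler_eq, earth_eq]
    return fsolve(equations, T0)
```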
Then, from the three-dimensional position T of the target in the WGS84 coordinate system, the coordinates (a_s, r_s) of the pixel point in the auxiliary image corresponding to the main-image pixel coordinates (a_m, r_m) are calculated as:
a_s = (t - t_0)·f_p    (4)

r_s = 2·f_s′·(R - R_0)/c    (5)

where t_0 denotes the azimuth time corresponding to the first azimuth pixel of the auxiliary image, f_p denotes the pulse repetition frequency, R_0 denotes the slant range corresponding to the first range pixel of the auxiliary image, f_s′ denotes the pulse sampling frequency, c denotes the speed of light, R denotes the slant range from the satellite-borne radar to the target, and t denotes the azimuth time.
The difference between the coordinates (a_m, r_m) of the pixel point in the main image and the coordinates (a_s, r_s) of the corresponding pixel point in the auxiliary image is the registration offset between the main and auxiliary images.
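A hedged sketch of the reverse positioning step (equations (4)-(5)) follows: find the azimuth time at which the auxiliary-image Doppler condition holds, then convert azimuth time and slant range to pixel coordinates. The orbit interpolators pos_of_t / vel_of_t and the bracketing interval [t_lo, t_hi] are illustrative placeholders, and equation (5) is used in the reconstructed form 2·f_s′·(R - R_0)/c.

```python
# Hedged sketch of reverse positioning: the Doppler residual reuses the sign
# convention assumed in the forward-positioning sketch above.
import numpy as np
from scipy.optimize import brentq

def reverse_position(T, pos_of_t, vel_of_t, f_d, lam,
                     t0, f_p, R0, f_s, t_lo, t_hi, c=299792458.0):
    """Return the auxiliary-image pixel coordinates (a_s, r_s) of target T."""
    def doppler_residual(t):
        P, V = pos_of_t(t), vel_of_t(t)
        d = P - T
        return f_d + 2.0 / (lam * np.linalg.norm(d)) * np.dot(V, d)
    t = brentq(doppler_residual, t_lo, t_hi)   # azimuth time matching f_d
    R = np.linalg.norm(pos_of_t(t) - T)        # slant range at that time
    a_s = (t - t0) * f_p                       # equation (4)
    r_s = 2.0 * f_s * (R - R0) / c             # equation (5), reconstructed
    return a_s, r_s
```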
Specifically, the registration offset calculation model of step 1 requires the corresponding coordinates of a given pixel point in the main and auxiliary images, obtained as follows:

(1a) the three-dimensional target position corresponding to a main-image pixel is obtained through forward positioning: the imaging parameters (such as the Doppler frequency) and the orbit data (such as the satellite position) are substituted into the range-Doppler model to solve for the three-dimensional position of the scene target on the earth; this process of solving for the target's three-dimensional position from an image pixel is called forward positioning;

(1b) the coordinates of the target in the auxiliary image are obtained through reverse positioning: the two-dimensional pixel coordinates of the target point in the image are computed from the target's three-dimensional position, the imaging parameters and the orbit data.
Step 2, determining a registration offset fitting model from the registration offset between the main and auxiliary images: when the baseline between the main and auxiliary images is long or the terrain in the observed scene is complex, the influence of terrain elevation on the registration offset must be considered. A two-dimensional trivariate first-order polynomial model containing a terrain elevation term is used to fit the coordinate relationship between the main and auxiliary images in equations (6) and (7), yielding the registration offset fitting model with 8 fitting coefficients to be determined: the first to fourth fitting coefficients d_0, d_1, d_2 and d_3 and the fifth to eighth fitting coefficients g_0, g_1, g_2 and g_3. The registration offset fitting model is:
a_s = d_0 + d_1·a_m + d_2·r_m + d_3·h    (6)

r_s = g_0 + g_1·a_m + g_2·r_m + g_3·h    (7)
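As a minimal illustration, the fitting model of equations (6) and (7) can be written as two plain functions; the coefficient vectors d = [d_0, d_1, d_2, d_3] and g = [g_0, g_1, g_2, g_3] are as defined above, and this sketch is illustrative rather than part of the disclosed method.

```python
# The registration offset fitting model of equations (6)-(7) as functions.
def fit_azimuth(d, a_m, r_m, h):
    return d[0] + d[1] * a_m + d[2] * r_m + d[3] * h   # equation (6): a_s

def fit_range(g, a_m, r_m, h):
    return g[0] + g[1] * a_m + g[2] * r_m + g[3] * h   # equation (7): r_s
```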
specifically, the traditional registration method assumes that the influence of the target elevation on the registration offset is negligible, and a two-dimensional binary low-order polynomial model containing an azimuth coordinate item and a distance coordinate item is used for fitting the registration offset along with the change of the azimuth coordinate and the distance coordinate of the main image.
Step 3, estimating the 8 fitting coefficients to be determined in the registration offset fitting model: the 6 first-order coefficients, namely the second to fourth fitting coefficients d_1, d_2 and d_3 and the sixth to eighth fitting coefficients g_1, g_2 and g_3, are calculated from the imaging parameters and orbit data of the main and auxiliary images.
Specifically, the first-order partial derivatives of the target's three-dimensional position with respect to the azimuth time, radar slant range and target elevation of the main image are derived first, by taking the first-order partial derivatives of equations (1) to (3) with respect to azimuth time, slant range and elevation; the first-order partial derivatives of the auxiliary-image azimuth time and radar slant range with respect to the target's three-dimensional position are then derived by taking the first-order partial derivatives of equations (1) and (2) with respect to the target position (equations (8) to (14) of the original disclosure; their explicit expressions are not reproduced here). In these expressions, the subscript m denotes the main image and the subscript s the auxiliary image; A_m denotes the acceleration vector of the satellite-borne radar for the main image, t_m the azimuth time of the main image, R_m the slant range from the satellite-borne radar to the target when the main image was acquired, P_m = (P_m,x, P_m,y, P_m,z)^T the radar position vector for the main image in the WGS84 coordinate system, and V_m = (V_m,x, V_m,y, V_m,z)^T the radar velocity vector for the main image; T = (T_x, T_y, T_z)^T denotes the three-dimensional position of the target on the earth surface in the WGS84 coordinate system, h the target elevation, R_p and R_e the polar and equatorial radii of the earth, (·)^T transposition, λ the wavelength of the transmitted signal, and f_d,m the Doppler frequency when the main image was acquired; A_s, t_s, P_s, V_s and R_s denote the corresponding acceleration vector, azimuth time, position vector, velocity vector and slant range for the auxiliary image, and f_d,s the Doppler frequency when the auxiliary image was acquired.

Combining these derivatives with equations (4) and (5) yields the first-order partial derivatives of the target's two-dimensional coordinates in the auxiliary image with respect to its two-dimensional coordinates and elevation in the main image, i.e. the 6 first-order coefficient estimates: the second fitting coefficient estimate d̂_1, the third d̂_2, the fourth d̂_3, the sixth ĝ_1, the seventh ĝ_2 and the eighth ĝ_3 (the closed-form expressions are equations (15) to (20) of the original disclosure). The estimates d̂_1, d̂_2, ĝ_1 and ĝ_2 are dimensionless, while d̂_3 and ĝ_3 are in pixels/m. Here f_p,m and f_p,s denote the pulse repetition frequencies when the main and auxiliary images were acquired, f_s′,m and f_s′,s the corresponding pulse sampling frequencies, c the speed of light, (a_m, r_m) the pixel coordinates in the main image and (a_s, r_s) the corresponding pixel coordinates in the auxiliary image.
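Since the closed-form coefficient expressions (15)-(20) are not reproduced above, a hedged numerical alternative is sketched here for illustration: the six first-order coefficients can be approximated by central differences of the composed forward/reverse positioning map at the scene centre. The wrapper main_pixel_to_secondary is an assumed helper mapping (a_m, r_m, h) to (a_s, r_s) (e.g. chained from the two positioning sketches above), not a function of the original disclosure.

```python
# Hedged finite-difference approximation of d1..d3 and g1..g3.
import numpy as np

def first_order_coeffs(main_pixel_to_secondary, a_m, r_m, h,
                       da=1.0, dr=1.0, dh=1.0):
    """Estimate (d1, d2, d3) and (g1, g2, g3) at the point (a_m, r_m, h)."""
    def partials(idx, delta):
        x = np.array([a_m, r_m, h], dtype=float)
        xp, xn = x.copy(), x.copy()
        xp[idx] += delta
        xn[idx] -= delta
        a_p, r_p = main_pixel_to_secondary(*xp)
        a_n, r_n = main_pixel_to_secondary(*xn)
        return (a_p - a_n) / (2 * delta), (r_p - r_n) / (2 * delta)
    d1, g1 = partials(0, da)   # w.r.t. main-image azimuth coordinate
    d2, g2 = partials(1, dr)   # w.r.t. main-image range coordinate
    d3, g3 = partials(2, dh)   # w.r.t. target elevation [pixel/m]
    return (d1, d2, d3), (g1, g2, g3)
```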
Then L pixel points (0 < L < H) are randomly selected in the main image as control points, and the registration offsets of the L control points relative to the main image are obtained with a method based on image data or image features. Since the main-image pixel coordinates (a_m, r_m) are known, adding the registration offsets of the L control points to them gives the auxiliary-image pixel coordinates corresponding to the L control points. Substituting the auxiliary-image pixel coordinates and the main-image pixel coordinates (a_m, r_m) of the L control points into equations (6) and (7) yields L groups of coefficient estimates, each group containing an estimate of the first fitting coefficient to be determined and of the fifth fitting coefficient to be determined; the L groups are numerically similar. The median of the L values of each coefficient is taken, respectively, as the first fitting coefficient estimate d̂_0 and the fifth fitting coefficient estimate ĝ_0.
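A minimal sketch of this constant-term estimation, assuming the six first-order coefficients have already been computed and that each control point carries its measured main- and auxiliary-image coordinates; the median mirrors the intermediate-value selection described above.

```python
# Compensate the first-order component of the measured offsets at the L
# control points and take the median of the residuals as d0-hat and g0-hat.
import numpy as np

def constant_terms(ctrl_points, d1, d2, d3, g1, g2, g3):
    """ctrl_points: iterable of (a_m, r_m, h, a_s, r_s) at the control points."""
    d0_samples, g0_samples = [], []
    for a_m, r_m, h, a_s, r_s in ctrl_points:
        d0_samples.append(a_s - (d1 * a_m + d2 * r_m + d3 * h))
        g0_samples.append(r_s - (g1 * a_m + g2 * r_m + g3 * h))
    return float(np.median(d0_samples)), float(np.median(g0_samples))
```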
Specifically, the estimation of the unknown coefficients of the two-dimensional trivariate first-order polynomial model containing the terrain elevation term in step 3 comprises the following steps:

(3a) according to the forward positioning model, derive the first-order partial derivatives of the target's three-dimensional position with respect to the azimuth time, radar slant range and target elevation of the main image;

(3b) according to the reverse positioning model, derive the first-order partial derivatives of the auxiliary-image azimuth time and radar slant range with respect to the target's three-dimensional position;

(3c) combine the results of the two preceding steps to obtain the first-order partial derivatives of the target's two-dimensional coordinates in the auxiliary image with respect to its two-dimensional coordinates and elevation in the main image, giving the coefficient estimates d̂_1, d̂_2, d̂_3, ĝ_1, ĝ_2 and ĝ_3;

(3d) select control points in the main image and estimate the registration offsets at the control points with a method based on image data or image features;

(3e) compensate the first-order component of the registration offsets at the control points and obtain the estimates d̂_0 and ĝ_0 from the residual registration offsets;

(3f) compute the registration offsets at the other pixels to obtain the final registration offset fitting model, so that corresponding main- and auxiliary-image pixels refer to the same ground imaging unit.
The final registration offset fitting model is:

a_s = d̂_0 + d̂_1·a_m + d̂_2·r_m + d̂_3·h

r_s = ĝ_0 + ĝ_1·a_m + ĝ_2·r_m + ĝ_3·h
In order to reduce the influence of the control points on the coefficient estimation and improve its robustness, the invention provides formulas for computing the first-order coefficients from the imaging parameters and orbit data of the main and auxiliary images; in processing real measured data, d_1-d_3 and g_1-g_3 are computed from the imaging parameters and orbit data of the main and auxiliary images corresponding to the target at the centre of the observed scene.
Step 4, determining the relative offsets of the remaining H-L pixel points of the main and auxiliary images from the final registration offset fitting model obtained in step 3: the region containing the remaining H-L pixel points of the main image is divided uniformly into M control blocks; the centre-pixel coordinates of each control block are substituted into the final registration offset fitting model to obtain the coordinates of the corresponding pixel point in the auxiliary image, the coordinate difference between the main and auxiliary images being the corresponding relative offset; this gives M relative offsets, with M a positive integer greater than 0.
Then cubic spline interpolation (typically 10-times interpolation) is applied to the M relative offsets to obtain the registration offset of each of the remaining H-L auxiliary-image pixel points relative to the main image. Finally, the auxiliary image is resampled according to the registration offsets of the remaining H-L pixel points and of the L control points relative to the main image; the resampled result is the accurately registered image, completing the registration between the main and auxiliary images.
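A hedged sketch of this block-wise densification and resampling, assuming the block-centre offsets have been evaluated on a regular grid with the final fitting model; the scipy routines used here stand in for the cubic spline interpolation and resampling described above, and operational resampling of complex SAR data would normally use a dedicated interpolation kernel.

```python
# Densify M block-centre offsets to per-pixel offsets with a cubic spline,
# then resample the auxiliary image at the shifted coordinates.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.ndimage import map_coordinates

def resample_auxiliary(aux, az_centres, rg_centres, da_blocks, dr_blocks):
    """aux: 2-D auxiliary image; az_centres/rg_centres: block-centre pixel
    coordinates; da_blocks/dr_blocks: azimuth/range offsets at the centres."""
    n_az, n_rg = aux.shape
    az, rg = np.arange(n_az), np.arange(n_rg)
    # cubic spline (default kx=ky=3) densification of the block offsets
    da = RectBivariateSpline(az_centres, rg_centres, da_blocks)(az, rg)
    dr = RectBivariateSpline(az_centres, rg_centres, dr_blocks)(az, rg)
    A, RG = np.meshgrid(az, rg, indexing="ij")
    # sample the auxiliary image at the offset positions -> registered image
    return map_coordinates(aux, [A + da, RG + dr], order=3, mode="nearest")
```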
The method can improve the fitting precision of the registration offset, improve the estimation robustness, complete the registration between the main image and the auxiliary image, and can be used for the image registration processing of the satellite-borne InSAR.
The following provides an example of a practical application for further illustrating the present invention:
the method is verified by two SAR complex images acquired by Terras SAR-X when repeatedly navigating and observing tasks are carried out on the Lanzhou region at 8 days 3 and 19 days 3 and 2010 respectively, and main system parameters are shown in Table 1. And taking the SAR complex image acquired on day 3 and 8 in 2010 as a main image, and taking the SAR complex image acquired on day 3 and 19 in 2010 as an auxiliary image.
The SAR amplitude map is shown in fig. 2(a); 12 distinctive points are selected in the boxed area of fig. 2(a) to quantitatively evaluate the registration accuracy, as shown in fig. 2(b). Fig. 2(c) is the SRTM (Shuttle Radar Topography Mission) DEM corresponding to the observed scene, projected to the main-image slant-range plane; the observed scene is 15 km × 15 km, with a target elevation variation of 695 m within the scene. The main and auxiliary images are registered with the processing flow shown in fig. 1, and the validity of the method is verified by the finally obtained interferometric phase map.
Table 1. TerraSAR-X main system parameters (the table itself is not reproduced here).

The main parameters of the TerraSAR-X radar satellite listed in Table 1 include a radar wavelength of 0.031 m; the pulse repetition frequency is the azimuth-direction parameter and the pulse sampling frequency is the range-direction parameter, both being actual imaging parameters.
Specifically, the main and auxiliary image registration implementation steps are as follows:
(1) For a given pixel (a_m, r_m) in the main image, substitute the Doppler frequency, the azimuth-direction parameter, the range-direction parameter and the real-time position and velocity of the TerraSAR-X radar satellite into the range-Doppler model of equations (1) to (3), solve for the three-dimensional target position, and compute the corresponding two-dimensional coordinates (a_s, r_s) in the auxiliary image.
(2) Set up the two-dimensional trivariate first-order polynomial model containing the terrain elevation term, and compute d_1-d_3 and g_1-g_3 according to expressions (15) to (20) using the imaging parameters of the main and auxiliary images corresponding to the target at the scene centre and the TerraSAR-X orbit data.
(3) Uniformly select 100 control points in the main image, estimate their registration offsets with the real correlation function method, and estimate d_0 and g_0 from the residual registration offsets.
(4) Compute the relative offset of any scene target position between the two images with the fitting model, and interpolate and resample the entire auxiliary image accordingly (using 10-times interpolation) to obtain the accurately registered image, completing the registration between the main and auxiliary images.
The interferometric phase map between the registered main and auxiliary images is shown in fig. 3; a clear interferometric phase map is obtained from the images registered by the method, verifying its effectiveness.
To verify the accuracy and robustness of the method, Monte Carlo experiments are used to compare the registration performance of the method with that of a second-order polynomial method (which fits the registration offset with a two-dimensional bivariate second-order polynomial model and estimates the polynomial coefficients from the registration offsets at the control points) for different numbers of control points.
The specific comparison process is as follows:
(1) the number of Monte Carlo trials is 100; 900 control points are uniformly selected in the main image, and the registration offsets at the control points are estimated with the real correlation function method;
(2) in each trial, 100 control points are randomly selected from the 900 to estimate the polynomial fitting coefficients;
(3) the registration error is estimated at the 12 distinctive points shown in fig. 2(b): the homonymous feature points in the registered main and auxiliary images are interpolated 200 times, and the range and azimuth offsets between the correlation peaks are taken as the registration errors at the feature points. Fig. 4 shows the variation of the RMS registration error with the number of control points for the second-order polynomial method and for the method of the present invention.
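A hedged sketch of this error-measurement step: the residual sub-pixel offset at a distinctive point is read from the oversampled cross-correlation peak of the two registered patches. scikit-image's phase_cross_correlation is used here as a stand-in for the 200-times interpolation of the correlation surface described above.

```python
# Read the residual sub-pixel offset from the oversampled correlation peak.
from skimage.registration import phase_cross_correlation

def residual_offset(patch_main, patch_aux, ovs=200):
    """Return the (azimuth, range) registration error in fractions of a pixel."""
    shift, _, _ = phase_cross_correlation(patch_main, patch_aux,
                                          upsample_factor=ovs)
    return shift  # ~[0, 0] for a well-registered pair
```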
As is clear from fig. 4, the registration error of the second-order polynomial method varies greatly with the number of control points, while that of the method described herein varies little. As shown in Table 1, the perpendicular baseline between the main and auxiliary images is larger than the horizontal baseline, so the target elevation affects the range registration offset more than the azimuth offset; consequently, the range registration error of the second-order polynomial method is larger than its azimuth error. The proposed method tracks the variation of the registration offset with target elevation, so its azimuth and range registration errors are approximately equal and meet the 0.1-pixel registration accuracy requirement. These results demonstrate the accuracy and robustness of the method.
To demonstrate the accuracy and robustness of the method relative to the second-order polynomial method more intuitively, figs. 5(a)-5(d) show the registration error distributions at the 12 distinctive points over 100 trials with 10 control points: figs. 5(a) and 5(b) are the azimuth and range registration errors of the second-order polynomial method, and figs. 5(c) and 5(d) those of the method of the present invention.
As can be seen from figs. 5(a)-5(d), the maximum azimuth and range registration errors of the second-order polynomial method are 3.47 and 3.45 pixels, respectively, while those of the method of the present invention are 0.17 and 0.30 pixels.
In summary, the invention discloses a terrain elevation adaptive dimensionality reduction image registration method that solves the registration problem of long-baseline or complex-scene spaceborne InSAR images. The implementation steps are as follows:
construct the registration offset fitting model and compute the 6 unknown first-order coefficients from the imaging parameters and orbit data of the main and auxiliary images corresponding to the target at the scene centre; select a certain number of control points in the main image and estimate the registration offsets at the control points with a method based on image data or image features; estimate the remaining 2 constant-term fitting coefficients; compute the registration offsets at the other pixels, and resample the entire auxiliary image with these offsets to complete the main-auxiliary image registration. The method not only fits the registration offset between the main and auxiliary images accurately, but also, through dimension-reduction processing, reduces the polynomial coefficients estimated from the control-point offsets to two constant-term coefficients, improving the robustness of the coefficient estimation.
In conclusion, the simulation experiment verifies the correctness, the effectiveness and the reliability of the method.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention; thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (6)
1. A terrain elevation adaptive dimensionality reduction image registration method, characterized by comprising the following steps:
step 1, determining a main image and an auxiliary image, and then determining a registration offset between the main image and the auxiliary image;
step 2, establishing a registration offset fitting model according to the registration offset between the main image and the auxiliary image to obtain 8 fitting coefficients to be determined in the registration offset fitting model;
step 3, solving 8 fitting coefficients to be determined in the registration offset fitting model, and further determining a final registration offset fitting model;
step 4, according to the final registration offset fitting model, obtaining the registration offset of each pixel point in the auxiliary image relative to the main image, and then resampling the auxiliary image to obtain an accurately registered image.
2. The terrain elevation adaptive dimensionality reduction image registration method according to claim 1, wherein in step 1 the main image and the auxiliary image are determined as follows:
determining a satellite-borne radar, wherein a target exists in a detection area of the satellite-borne radar, and two observation images are obtained after the satellite-borne radar observes the detection area twice; randomly selecting one observation image from the two observation images as a main image, taking the other observation image as an auxiliary image, wherein the main image and the auxiliary image respectively comprise H pixel points; h is a positive integer greater than 0;
the registration offset between the main image and the auxiliary image is determined as follows:
selecting any pixel point in the main image, denoted the pixel point in the main image, with coordinates (a_m, r_m), and calculating the three-dimensional position T of the target on the earth surface in the WGS84 coordinate system from:

R = |P - T|

f_d = -(2/(λR))·(P - T)^T·V

(T_x^2 + T_y^2)/(R_e + h)^2 + T_z^2/(R_p + h)^2 = 1

wherein R denotes the slant range from the satellite-borne radar to the target, P denotes the position vector of the satellite-borne radar in the WGS84 coordinate system, T = (T_x, T_y, T_z)^T with T_x, T_y and T_z the x-, y- and z-axis components of the target position on the earth surface in the WGS84 coordinate system, (·)^T denotes transposition, f_d denotes the Doppler frequency, V denotes the velocity vector of the satellite-borne radar in the WGS84 coordinate system, λ denotes the wavelength of the signal transmitted by the satellite-borne radar, h denotes the target elevation, and R_e and R_p denote the equatorial and polar radii of the earth;
then, from the three-dimensional position T of the target in the WGS84 coordinate system, calculating the coordinates (a_s, r_s) of the pixel point in the auxiliary image corresponding to the main-image pixel coordinates (a_m, r_m) as:
a_s = (t - t_0)·f_p

r_s = 2·f_s′·(R - R_0)/c

wherein the subscript m denotes the main image and the subscript s the auxiliary image, t_0 denotes the azimuth time corresponding to the first azimuth pixel of the auxiliary image, f_p denotes the pulse repetition frequency, R_0 denotes the slant range corresponding to the first range pixel of the auxiliary image, f_s′ denotes the pulse sampling frequency, c denotes the speed of light, R denotes the slant range from the satellite-borne radar to the target, and t denotes the azimuth time;
the difference between the coordinates (a_m, r_m) of the pixel point in the main image and the coordinates (a_s, r_s) of the pixel point in the auxiliary image is the registration offset between the main and auxiliary images.
3. The terrain elevation adaptive dimensionality reduction image registration method according to claim 2, wherein in step 2 the registration offset fitting model is specifically:
a_s = d_0 + d_1·a_m + d_2·r_m + d_3·h

r_s = g_0 + g_1·a_m + g_2·r_m + g_3·h

wherein the 8 fitting coefficients to be determined of the registration offset fitting model are the first fitting coefficient d_0, the second d_1, the third d_2, the fourth d_3, the fifth g_0, the sixth g_1, the seventh g_2 and the eighth g_3.
4. The terrain elevation adaptive dimensionality reduction image registration method according to claim 3, wherein in step 3 the 8 fitting coefficients to be determined in the registration offset fitting model are solved as follows:

first-order partial derivatives of the target's three-dimensional position with respect to the azimuth time, radar slant range and target elevation of the main image, and of the auxiliary-image azimuth time and radar slant range with respect to the target's three-dimensional position, are derived (explicit expressions not reproduced here), wherein the subscript m denotes the main image and the subscript s the auxiliary image, A_m denotes the acceleration vector of the satellite-borne radar for the main image, t_m the azimuth time of the main image, R_m the slant range from the satellite-borne radar to the target when the main image was acquired, P_m = (P_m,x, P_m,y, P_m,z)^T the radar position vector for the main image in the WGS84 coordinate system, V_m = (V_m,x, V_m,y, V_m,z)^T the radar velocity vector for the main image, T = (T_x, T_y, T_z)^T the three-dimensional position of the target on the earth surface in the WGS84 coordinate system, h the target elevation, R_p and R_e the polar and equatorial radii of the earth, (·)^T transposition, λ the wavelength of the transmitted signal, f_d,m the Doppler frequency when the main image was acquired, A_s, t_s, P_s, V_s and R_s the corresponding acceleration vector, azimuth time, position vector, velocity vector and slant range for the auxiliary image, and f_d,s the Doppler frequency when the auxiliary image was acquired;

combining the above derivatives yields the 6 first-order coefficient estimates d̂_1, d̂_2, d̂_3, ĝ_1, ĝ_2 and ĝ_3, wherein f_p,m and f_p,s denote the pulse repetition frequencies when the main and auxiliary images were acquired, f_s′,m and f_s′,s the corresponding pulse sampling frequencies, c the speed of light, (a_m, r_m) the pixel coordinates in the main image and (a_s, r_s) the corresponding pixel coordinates in the auxiliary image;

then L pixel points (0 < L < H) are randomly selected in the main image as control points, and the registration offsets of the L control points relative to the main image are obtained with a method based on image data or image features; adding these offsets to the main-image pixel coordinates (a_m, r_m) gives the auxiliary-image pixel coordinates corresponding to the L control points; substituting the auxiliary-image pixel coordinates and the main-image pixel coordinates (a_m, r_m) of the L control points into the registration offset fitting model yields L groups of coefficient estimates, each group containing an estimate of the first fitting coefficient to be determined and of the fifth fitting coefficient to be determined; the median of the L values of each coefficient is taken, respectively, as the first fitting coefficient estimate d̂_0 and the fifth fitting coefficient estimate ĝ_0.
5. The terrain elevation adaptive dimensionality reduction image registration method according to claim 4, wherein in step 3 the final registration offset fitting model is:

a_s = d̂_0 + d̂_1·a_m + d̂_2·r_m + d̂_3·h

r_s = ĝ_0 + ĝ_1·a_m + ĝ_2·r_m + ĝ_3·h
6. The method according to claim 5, wherein in step 4 the registration offset of each pixel point in the auxiliary image relative to the main image is obtained as follows:
uniformly dividing the region containing the remaining H-L pixel points of the main image into M control blocks, and substituting the centre-pixel coordinates of each control block into the final registration offset fitting model to obtain the coordinates of the corresponding pixel point in the auxiliary image, the coordinate difference between the main and auxiliary images being the corresponding relative offset, thereby obtaining M relative offsets, M being a positive integer greater than 0;
then performing cubic spline interpolation on the M relative offsets to obtain the registration offset of each of the remaining H-L auxiliary-image pixel points relative to the main image; and finally resampling the auxiliary image according to the registration offsets of the remaining H-L pixel points of the auxiliary image relative to the main image and the registration offsets of the L control points relative to the main image, the resampled result being the accurately registered image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810090998.7A CN108333562A (en) | 2018-01-30 | 2018-01-30 | A kind of dimensionality reduction method for registering images that landform altitude is adaptive |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108333562A true CN108333562A (en) | 2018-07-27 |
Family
ID=62926840
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810090998.7A Pending CN108333562A (en) | 2018-01-30 | 2018-01-30 | A kind of dimensionality reduction method for registering images that landform altitude is adaptive |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108333562A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103308923A (en) * | 2012-03-15 | 2013-09-18 | 通用汽车环球科技运作有限责任公司 | Method for registration of range images from multiple LiDARS |
CN102654576A (en) * | 2012-05-16 | 2012-09-05 | 西安电子科技大学 | Image registration method based on synthetic aperture radar (SAR) image and digital elevation model (DEM) data |
CN105425216A (en) * | 2015-11-24 | 2016-03-23 | 西安电子科技大学 | Image-segmentation-based registration method of polarized InSAR image in repeated passing |
KR20170115257A (en) * | 2016-04-07 | 2017-10-17 | 국방과학연구소 | Clutter Rejection with Multimodal Image Registration and Target Detection Method Based on Doppler Beam Sharpening |
US20170315228A1 (en) * | 2016-04-28 | 2017-11-02 | Fluke Corporation | Rf in-wall image registration using position indicating markers |
CN106249236A (en) * | 2016-07-12 | 2016-12-21 | 北京航空航天大学 | A kind of spaceborne InSAR long-short baselines image associating method for registering |
Non-Patent Citations (2)
Title |
---|
ZHANG Jinqiang et al.: "Terrain elevation adaptive spaceborne InSAR image registration method" (地形高程自适应的星载InSAR图像配准方法), Journal of Xidian University (西安电子科技大学学报) *
LONG Sichun: "Improved DInSAR Techniques and Their Application in Subsidence Monitoring" (DInSAR改进技术及其在沉降监测中的应用), Beijing: Surveying and Mapping Press, 31 July 2012 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110865372A (en) * | 2018-08-27 | 2020-03-06 | 中国人民解放军61646部队 | Target height information extraction method based on synthetic aperture radar multi-azimuth observation |
CN110865372B (en) * | 2018-08-27 | 2021-07-20 | 中国人民解放军61646部队 | Target height information extraction method based on synthetic aperture radar multi-azimuth observation |
CN112179314A (en) * | 2020-09-25 | 2021-01-05 | 北京空间飞行器总体设计部 | Multi-angle SAR elevation measurement method and system based on three-dimensional grid projection |
CN112179314B (en) * | 2020-09-25 | 2022-07-29 | 北京空间飞行器总体设计部 | Multi-angle SAR elevation measurement method and system based on three-dimensional grid projection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180727 |