SINGLE-PASS INTERFEROMETRIC SYNTHETIC APERTURE RADAR
RELATED APPLICATIONS

This application claims priority under 35 U.S.C. §119 to prior U.S. Provisional Patent Application Serial Number 60/123,269, filed March 8, 1999, the entirety of which is hereby incorporated by reference.
FIELD OF THE INVENTION

The present invention relates to the application of interferometric principles to determine terrain height from data collected by a synthetic aperture radar (SAR). It discloses a method of performing interferometric height estimation from data that may be collected on a single orbital pass of a radar-carrying satellite with a single transmit-receive antenna. The invention may be implemented utilizing the techniques disclosed herein together with aspects of known imaging techniques, such as disclosed in C. Jakowatz, Jr., D. Wahl, P. Eichel, D. Ghiglia and P. Thompson, SPOTLIGHT-MODE SYNTHETIC APERTURE RADAR: A Signal Processing Approach (1996).
BACKGROUND OF THE INVENTION
The principles of interferometry are currently being used to extract terrain-height information from synthetic aperture radar images. This is done primarily using aircraft, but demonstrations with orbital SAR have also been somewhat successful. See e.g., Li, Fuk and R.M. Goldstein, "Studies of Multibaseline Spaceborne Interferometric Synthetic Aperture Radars", IEEE Transactions on Geoscience and Remote Sensing, Vol. 28, No. 1, January, 1990, pp. 88-97, and Goldstein, R.M., Zebker, H.A., and C.L. Werner, "Satellite Radar Interferometry: Two-Dimensional Phase Unwrapping", Radio Science, Vol. 23, No. 4, July-August, 1988, pp. 713-720. Aircraft practice implements the interferometric baseline by mounting two or more spatially separated radars, or one radar and extra receive antennas, on the wing tips and in the fuselage, for example.
To obtain terrain height for large, remote areas, aircraft interferometric synthetic aperture radar (IFSAR) systems have proven to be expensive and potentially vulnerable. As a result, attention has turned to spaceborne platforms, but here difficulties arise. Optimal baselines from space are quite long, see e.g., Li et al. supra, so that physically separated antennas on a single satellite are impractical if height accuracies
better than a few meters are needed. Multiple imaging passes from one or more radar-carrying vehicles are feasible, but because of uncertainties in vehicle ephemeris, numerous known elevation tie points are needed. A more severe problem is temporal terrain decorrelation between imaging passes. Ideally, one would like no more than a few seconds between collections. However, this is impractical with multiple passes, and often with multiple satellites.
As a consequence of these problems, even though IFSAR is the primary technique which holds the promise of accurate, large area topographic mapping from space, a practical, cost-effective collection system has yet to be proposed.
SUMMARY OF THE INVENTION

Overview
The present invention provides a solution to the above-noted problems, and is referred to herein as "SPINSAR" imaging. The SPINSAR imaging approach uses two complex images constructed from overlapping synthetic apertures. Of note, the images may be formed on a single orbital satellite pass using a single antenna. In a primary application, an interferometric baseline is created by the orbital curvature associated with the along-track separation of the synthetic aperture centers. The height estimates are formed using coherent data collected during a contiguous time interval. In addition to the important advantages associated with a single pass and single radar approach (e.g., the temporal decorrelation interval is a maximum of a few seconds and height errors caused by differential vehicle ephemeris errors are substantially avoided), the SPINSAR invention offers additional benefits. For example, no phase unwrapping and no terrain tie points are needed. Further, height estimation accuracy and elevation post spacing are comparable or superior to those obtainable with other orbital interferometric schemes. Moreover, computational burden is comparable to that of other interferometric techniques.
Characterization

As indicated, the present invention provides a method for determining height information for a planetary surface region of interest, and includes the step of forming at
least two complex images of the planetary surface region in different slant planes from synthetic aperture radar (SAR) image data, wherein the two images comprise partially overlapping and partially non-overlapping data content (i.e., due to the partially overlapping synthetic apertures corresponding with the two images). More particularly, the two SAR images are formed so as to define a convergence angle θc between the mid-aperture slant plane normal vectors of the two images (e.g., θc may be established between about 0.05° and 1° in typical applications). Further, each of the images should be obtained in spotlight-mode and utilize a common ground reference point (GRP) and focal-plane normal. Preferably, the overlapping data content should comprise at least about three percent (3%) of the image data content of each, and more preferably, the overlapping data content should be from about five percent (5%) to fifty percent (50%) of the total image data content of each.
The inventive method further includes the steps of utilizing the at least two SAR images to obtain spatially variant differential layover values and employing the at least two SAR images to obtain interferometric phase values. In turn, the spatially variant differential layover values and interferometric phase values may be utilized in an estimating step to yield height values for the planetary surface region of interest.
In conjunction with forming the at least two SAR images, and as noted above, the inventive method may include the step of acquiring the image data employed to form the images utilizing a single synthetic aperture radar antenna. Such antenna may be located on a satellite traveling along an orbital path relative to the planetary surface region of interest. In turn, the image data may be acquired during a single orbital pass of the satellite over the planetary surface region of interest. Further, the image data used to form the SAR images may be acquired during a continuous time interval, e.g., less than about 60 seconds, wherein temporal decorrelation is substantially avoided.
It should be noted that, in both the utilizing and employing steps identified above (i.e., to obtain spatially variant differential layover values and interferometric phase values, respectively), both overlapping and non-overlapping image data content portions of the at least two SAR images should be utilized. Further in this regard, it has been found that substantially all of the overlapping and non-overlapping data content portions may be advantageously employed. In this regard, and as will be further described, it has
been recognized that the restriction or total avoidance of aperture trimming in the present invention allows differential layover values, from which height can be estimated, to be obtained while preserving sufficient phase correlation for purposes of interferometric processing. In a further aspect of the present invention, the inventive method may comprise the step of projecting the at least two slant plane SAR images into corresponding focal plane images. By way of example, sampled pixel values for each slant plane SAR image may be utilized to obtain corresponding projected focal plane image pixel values, wherein projected focal plane image pixel values may then be obtained at each pixel location via interpolation. After forming the at least two focal plane images (i.e., corresponding with the at least two slant plane SAR images), the method may further include the step of rotating a first of the focal plane images (i.e., a "slave" image) to match a focal plane grid of a second of the focal plane images (i.e., a "master" image). Thereafter, the method may include the step of removing positional phase differences between the rotated slave image and the master image at each pixel location within the master image to obtain an adjusted master plane image.
The rotated slave image and adjusted master image may be processed to produce corresponding amplitude-only images. In turn, the amplitude-only images may be utilized for registration of the rotated slave image and adjusted master image. In this regard, the method may further include the step of decomposing the image data of each of the amplitude-only images into a corresponding plurality of sub-images, and correlating the sub-images corresponding with the rotated slave image and adjusted master image to obtain corresponding spatially variant differential layover values (i.e., resulting from the surface height of the region of interest). The sub-image, spatially variant differential layover values may be interpolated to obtain values for each pixel location.
After registration, and in another aspect of the inventive method, each of the rotated slave and adjusted master images may be decomposed into a corresponding plurality of image blocks. Then, each complex pixel in each image block corresponding with the adjusted master image may be multiplied by a complex conjugate of the corresponding pixel of the corresponding image block of the related slave image, wherein
a plurality of complex-difference image blocks are obtained. In turn, such plurality of complex-difference image blocks may be utilized to form an interferogram. In conjunction with this aspect, interferogram formation may be achieved by first removing interferometric-phase equivalents of the previously determined spatially variant differential layover values for each pixel location of the complex-difference image blocks. Then, a phase value at each pixel location may be extracted via conventional arctangent operation.
Following interferogram formation, the interferogram may be conditioned (e.g., via filtering techniques) and transformed to height values. More particularly, a phase-to-height conversion factor may be applied to the interferogram to obtain the height values. Similarly, layover-to-height conversion factors may be applied to the differential layover values noted hereinabove. Then, statistical processing techniques may be utilized to obtain best fit planes for the interferometric height estimates and for the SAR height estimates. The best fit plane for the interferometric height estimates may then be employed with the interferometric height values to obtain residual interferometric heights which are adjusted to fit the SAR height estimate plane. The resulting adjusted values reflect the final height estimates for the planetary surface region of interest.
Additional aspects and extensions of the present invention will be apparent to those skilled in the art upon further consideration of the descriptions that follow.
Theoretical Basis
Both IFSAR and its relative SAR stereo are based on the differential layover of scattering centers which lie out of the focal plane. The differential layover is induced by differences in the intersections of the iso-range spheres and iso-Doppler cones with the focal planes of an image pair. The intersections are different, thus generating differential layover proportional to out-of-plane height, provided the scattering center is "viewed" from suitably different aspects by the radar. When the images are formed with aperture centers separated along a single orbital pass, as per the SPINSAR embodiment herein, the curvature of the orbit produces differential layover proportional to out-of-plane height. The SAR stereo principle estimates height by transforming differential layover directly to height. Interferometry, on the other hand, measures the phase equivalent of
the slant-range difference between laid-over scattering centers on the two images and then compares this to the phase difference one would have had if the scattering centers had been in the focal planes.
As one increases the along-orbit distance between the two synthetic aperture centers, the height-measurement sensitivity improves. Unfortunately, as the distance increases, the scattering-center signatures begin to decorrelate in amplitude, making it increasingly difficult to register them for purposes of measuring interferometric phase. Moreover, the phase between corresponding scattering centers also decorrelates with increased viewing-angle difference, introducing an interferometric error source called "baseline decorrelation". So, as the along-orbit distance between aperture centers increases, estimate sensitivity improves but at the expense of degraded pixel registration and phase correlation. The result is a "bathtub" performance curve.
Baseline decorrelation is somewhat tractable in the case of sidelooking aircraft systems. Here the angle diversity needed to generate differential layover is achieved by the grazing-angle difference between the boresight directions of the two antennas. In this situation, the line-of-sight wedges formed by the center of the imaged scene and the antenna positions at the beginning and end of each data collection interval are coincident in the along-track direction. Hence, baseline decorrelation is caused only by the small grazing-angle difference in viewing the scene. Contrast this situation with that of two-pass satellite interferometry where the collection wedges may not be coincident in the along-track direction. They may even be disjoint. According to current theory, and depending on how one models "diffuse" terrain, only the overlapping portion of the collection wedges is useful for interferometry. See e.g., Jakowatz, C.V., Wahl, D.E., Ghiglia, D.C., and P.A. Thompson, Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach, Kluwer Academic Publishers,
1996, Boston, pp. 283, 285, and Marechal, N., "Tomographic Formulation of Interferometric SAR for Terrain Elevation Mapping", IEEE Transactions on Geoscience and Remote Sensing, Vol. 33, No. 3, May 1995, pp. 726-739. The non-overlapping data does not correlate and is believed to contribute only noise. Accordingly, standard interferometric processing practice is to discard the non-overlapping data, a process called "aperture trimming" or "wavenumber filtering". In the extreme case where the
collection wedges are disjoint, i.e. the synthetic apertures do not overlap, none of the data correlates, and interferometry presumably cannot be done.
In view of the foregoing, it may appear that baseline decorrelation is problematic for single-pass orbital interferometry for the following reasons: (1) if one forms two images with different along-track aperture centers and the collection regions are disjoint, then baseline decorrelation is total and the attempt fails; (2) if, on the other hand, the apertures overlap and are trimmed to eliminate the data which does not correlate, then the overlapping regions have the same grazing angle so that no differential layover is generated, and again interferometry fails. Fortunately, the consequences of baseline decorrelation are not so dire. To be sure, correlation does decay as one increases the along-track distance between aperture centers, and decorrelation is more or less total, depending on terrain content, when the apertures become disjoint. See e.g., Zebker, H.A., "Decorrelation in Interferometric Radar Echoes", IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 5, September 1992, pp. 950-959. Nevertheless, enough correlation remains to support excellent IFSAR imaging results from single-pass collections.
For example, successful single-image height estimation algorithms based on the proportionality between residual dispersed-domain quadratic phase and out-of-plane object height often use the so-called map-drift algorithm which depends on the residual amplitude correlation of disjoint sub-aperture images. Single-pass orbital SAR stereo also depends on amplitude correlation of disjoint collections with even larger angle diversity, and height accuracies of a few meters have been achieved. On the other hand, both theory and empirical evidence suggest that phase correlation is more sensitive than amplitude correlation to viewing-angle difference. As a result, the present inventors have recognized that SAR stereo imaging apertures must have at least a few percent of overlap to support single-pass IFSAR.
More particularly, the present inventors have determined that if one uses partially overlapped apertures to form two images from data collected on a single orbital pass, then sufficient phase correlation can be achieved for interferometry and the orbital curvature induces differential layover from which height can be estimated by interferometry. In this regard, the inventors have further recognized that in order to utilize the differential
layover, aperture trimming should be restricted or omitted so that sufficient differential layover associated with the non-overlapping image regions is obtained, even at the price of reduced correlation. By using this approach SPINSAR has achieved height accuracies and elevation post spacings of a couple of meters from authentic SAR data.
DESCRIPTION OF THE DRAWINGS

Figs. 1A, 1B and 1C illustrate process steps of one embodiment of the inventive method of the present invention.
Fig. 2 illustrates sub-steps corresponding with process step 40 of the embodiment of Figs. 1A-1C.
Fig. 3 illustrates sub-steps corresponding with process step 50 of the embodiment of Figs. 1A-1C.
DETAILED DESCRIPTION

Referring to Figs. 1-3, one embodiment of the present invention will be described. That description will be followed by a review of underlying algorithms which support the described embodiment. Additional embodiments and adaptations will be apparent to those skilled in the art.

Process Description

The illustrated SPINSAR embodiment of Figs. 1-3 begins with the formation of at least two SAR images, in different slant planes, for a planetary surface region of interest (step 10). In this regard, the two images are formed so as to define a convergence angle θc between the mid-aperture slant plane normal vectors of the two images. Each of the images should be obtained in spotlight-mode and utilize a common ground reference point (GRP) and focal-plane normal. Of note, the two SAR images are formed so as to have partially overlapping image data content (i.e., via partially overlapping apertures). Preferably, the overlapping data content will be at least about three percent (3%) and most preferably between about five percent (5%) and fifty percent (50%) of the data content of each. In a primary application, the two SAR images may be formed using image data acquired via a single transmit/receive SAR antenna. The image data is acquired by
transmitting microwave energy pulses at a predetermined frequency toward the surface region of interest, and receiving resultant microwave energy reflected from the region. After image data acquisition and storage, an image-formation processor may be employed to complete the processing steps contemplated by the embodiment shown in Fig. 1. Such processing may be completed either on-board an imaging spacecraft, at a ground-based location, or at some other location remote from the imaging spacecraft.
In earth imaging applications, the antenna may be located on a satellite orbiting the earth, wherein the two images are obtained during a single-pass over the region of interest. By way of example, the two complex images may each comprise a matrix of data samples, or pixels, of at least about 1000 x 1000. As will be appreciated, image data acquired by the transmit/receive antenna may be readily processed to define the two images, wherein data obtained during a first time segment is used to create the first image and data obtained during a second time segment is used to form the second image, and wherein the first and second time segments partially overlap so as to define the degree of overlapping image data content. The first and second time segments may comprise a single continuous image acquisition time frame during which imaging data is received by the transmit/receive antenna. By way of example, such acquisition time frame may be less than about 60 seconds.
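By way of illustration only (not part of the claimed method), the partitioning of a single continuous acquisition into two partially overlapping sub-apertures described above can be sketched as follows; the function name and the pulse-index bookkeeping are hypothetical:

```python
def split_overlapping_apertures(num_pulses, overlap_fraction):
    """Split a contiguous pulse stream into two equal sub-apertures
    whose pulse-index spans overlap by roughly the given fraction.

    Each sub-aperture spans L pulses; the two spans together cover all
    num_pulses while overlapping by overlap_fraction * L pulses, i.e.
    2*L - overlap_fraction*L = num_pulses.
    """
    L = int(round(num_pulses / (2.0 - overlap_fraction)))
    first = (0, L)                           # pulse indices of image 1
    second = (num_pulses - L, num_pulses)    # pulse indices of image 2
    return first, second

# Example: 1000 pulses with ~30% aperture overlap.
first, second = split_overlapping_apertures(1000, 0.30)
```

Here the overlap fraction would be chosen within the preferred range (about 5% to 50%) noted above.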
After the two slant plane images are formed (step 10), the corresponding image data may be used to yield two corresponding, projected focal plane images (step 20). By way of example, the slant plane image data for each of the SAR images may be sampled and the sample values may be employed to obtain corresponding, projected focal plane values. In turn, the projected focal plane values may be utilized to obtain image values at each pixel location. For purposes of further description, one of the projected focal plane images may be referred to as a "master" while the other projected focal plane image may be referred to as "slave". The slave image data may then be rotated so as to match the focal plane grid of the master image data (step 22). At this point, the rotated slave image and the master image are coincident except for differential layover induced by the height of objects within the imaged region of interest that lie outside of the focal plane. As will be described, such differential layover can be extracted in further processing and used for both SAR height estimation and in interferometric processing.
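The rotation of the slave image onto the master focal-plane grid (step 22) amounts to resampling one complex image on a rotated coordinate grid. A minimal sketch follows, assuming square pixel-centered grids sharing the GRP at the array center and simple bilinear interpolation (a real implementation would use a higher-order interpolator):

```python
import numpy as np

def rotate_to_master_grid(slave, theta_rot):
    """Resample a complex slave focal-plane image onto a grid rotated
    by theta_rot radians about the image center (the GRP), using
    bilinear interpolation. Hypothetical helper for illustration."""
    n = slave.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # Rotate master-grid coordinates into the slave frame.
    x = (xs - c) * np.cos(theta_rot) - (ys - c) * np.sin(theta_rot) + c
    y = (xs - c) * np.sin(theta_rot) + (ys - c) * np.cos(theta_rot) + c
    x0 = np.clip(np.floor(x).astype(int), 0, n - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, n - 2)
    fx, fy = x - x0, y - y0
    # Bilinear blend of the four neighboring complex samples.
    return (slave[y0, x0] * (1 - fx) * (1 - fy)
            + slave[y0, x0 + 1] * fx * (1 - fy)
            + slave[y0 + 1, x0] * (1 - fx) * fy
            + slave[y0 + 1, x0 + 1] * fx * fy)
```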
In the latter regard, and in order to facilitate interferometric phase extraction, residual position phase differences between the master and rotated slave images may be determined and removed from the master image (step 30). In this regard, it will be understood that an image pixel at any given location in the images will generally have a residual position phase, i.e., the differential phase which would have been present if an object present at such location had been in the corresponding focal plane. For spotlight-mode imaging using a polar algorithm, the position phase is the phase equivalent of the slant-range difference between a given pixel location and the common ground reference point (GRP). As such, removal of the position phase differences between the master and related slave images may be achieved via algorithmic processing.
As indicated in Fig. 1A, the described process embodiment also provides for image registration and spatially variant differential layover determination (step 40). In this regard, the adjusted master image data from step 30 and rotated slave image data may be detected, or processed, to produce amplitude-only pixel values for each image, which data may be processed for correlation purposes and to obtain spatially variant differential layover values (step 40). More particularly, and referring to Fig. 2, each of the adjusted master and rotated slave images may be processed to produce amplitude-only pixel values (step 41), and the amplitude-only images may be decomposed into "sub-images" of a predetermined size (step 43). By way of example, each image may be decomposed into a plurality of sub-images each having dimensions on the order of 100 x 100 pixels. Such sub-images corresponding with each of the master and slave images may then be correlated to determine spatially variant differential layover values for each sub-image occasioned by the height of the imaged regions of interest (step 45). As will be appreciated by those skilled in the art, these steps may be completed in accordance with basic SAR stereo image processing principles. The layover values for each of the sub-images may be median filtered to remove outliers and smoothed with a low-pass filter. Then, the layover values may be interpolated so as to provide a differential layover value for each of the pixel locations within each of the sub-images (step 47). Additionally, in order to enhance precision, steps 45 and 47 may be repeated after further decomposing the master and slave images into corresponding, refined sub-images of an even lesser dimension than noted above (e.g., sub-images comprising 40 x 40 pixels), wherein the
initially-determined differential layover values are employed in obtaining refined layover values.
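The sub-image correlation of step 45 can be sketched as a cross-correlation peak search on amplitude-only sub-images. The following is an illustration under stated assumptions (integer-pixel shifts, FFT-based circular correlation); production code would add sub-pixel interpolation, median filtering, and outlier rejection as described above:

```python
import numpy as np

def subimage_shift(master_amp, slave_amp):
    """Estimate the integer-pixel differential layover (shift) between
    two amplitude sub-images via FFT cross-correlation. Returns the
    (dy, dx) shift to apply to the slave to align it with the master.
    Hypothetical helper for illustration only."""
    n, m = master_amp.shape
    # Zero-mean to suppress the DC correlation peak.
    a = master_amp - master_amp.mean()
    b = slave_amp - slave_amp.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    iy, ix = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap the circular-correlation peak into a signed shift.
    dy = iy if iy <= n // 2 else iy - n
    dx = ix if ix <= m // 2 else ix - m
    return dy, dx
```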
Referring now to Fig. 1B, further steps relating to the interferometric processing aspects of the described embodiment will be reviewed. Specifically, after registration step 40, further processing may be carried out with respect to the adjusted master and rotated slave images. Such complex images may be decomposed into smaller corresponding "squares", or image blocks, wherein complex image values may be utilized with the spatially variant differential layover values obtained in step 40 to obtain complex-difference squares (step 50). More particularly, and with reference now to Fig. 3, the adjusted master image and rotated slave image may be decomposed into squares (step 51) having dimensions which in effect determine the spacing of the independent elevation posts that may be employed. By way of example, image squares on the order of 7 x 7 pixels and 15 x 15 pixels are typical. Then, for each adjusted master image square, each complex pixel value may be multiplied by the complex conjugate of the associated pixel of the corresponding rotated slave image square, taking into account the corresponding spatially variant differential layover values for the given pixel obtained in step 40. The complex-difference squares obtained in step 50 may then be smoothed by averaging the in-phase and quadrature components (step 60).
Next, an interferogram may be formed from the smoothed complex-difference squares (step 70). More particularly, such formation may be achieved by first removing the interferometric-phase equivalent of the spatially variant differential layover at each pixel location (i.e., as determined in step 40), followed by phase extraction via the conventional arctangent operation.
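The conjugate multiplication, layover-phase removal, and arctangent extraction of steps 50-70 can be sketched as below. This is a minimal illustration assuming pre-registered complex arrays and a per-pixel array of layover phase equivalents (in radians); the function name is hypothetical:

```python
import numpy as np

def form_interferogram(master, slave, layover_phase):
    """Form interferometric phase from registered complex images:
    conjugate-multiply, remove the phase equivalent of the previously
    estimated differential layover, and extract the residual phase via
    the four-quadrant arctangent (np.angle)."""
    complex_diff = master * np.conj(slave)
    compensated = complex_diff * np.exp(-1j * layover_phase)
    return np.angle(compensated)
```

Because the coarse layover phase is removed before the arctangent, the extracted phase represents only the small "delta" discussed below, which is why no phase unwrapping is needed.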
As will be appreciated, removal of the phase equivalent of the differential layover values avoids a need for phase unwrapping since the interferogram contains only the
"delta" between the true height and the coarse height values determinable by the SAR stereo algorithm referred to in step 40 hereinabove. Further, it should be noted that, as a consequence of the short baseline utilized in the described embodiment, the phase equivalent of the deltas will be less than 360°. Further algorithmic explanation of the described phase equivalent removal is provided hereinbelow.
Following formation, the interferogram may be median filtered to remove outliers and smoothed with a low-pass filter (step 80). Then, the conditioned interferogram may be transformed to height values utilizing a phase-to-height conversion factor (step 80).
The phase-to-height conversion factor may be constructed via simulation using collection parameters determined for each imaging opportunity.
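The interferogram conditioning and phase-to-height conversion of step 80 can be sketched as follows, assuming for illustration a single scalar conversion factor (meters per radian) and a simple sliding-window median filter; the real factor is spatially varying and simulation-derived as stated above:

```python
import numpy as np

def condition_and_convert(interferogram, phase_to_height, kernel=3):
    """Median-filter an interferogram to remove outliers, then apply a
    (hypothetical) scalar phase-to-height conversion factor. A real
    implementation would also low-pass smooth the result."""
    pad = kernel // 2
    padded = np.pad(interferogram, pad, mode='edge')
    ny, nx = interferogram.shape
    # Sliding-window median built from stacked shifted copies.
    windows = [padded[i:i + ny, j:j + nx]
               for i in range(kernel) for j in range(kernel)]
    filtered = np.median(np.stack(windows), axis=0)
    return filtered * phase_to_height
```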
In conjunction with the described embodiment, the above-described SAR stereo differential layover values obtained in step 40 may also be converted to height for each pixel location utilizing a layover-to-height conversion factor (step 82). Again, the layover-to-height conversion factor may be developed via simulation of the relevant imaging geometry.
Next, the interferometric height estimates (i.e., from step 80) and SAR stereo height estimates (i.e., from step 82) may each be statistically processed to determine "best fit" planes for each (step 90). Such processing may utilize least-squares processing techniques. Then, the best fit plane for the interferometric estimates may be utilized with each of the interferometric estimated height values to obtain residual interferometric height values (step 92). In turn, such residual values may be adjusted to the best fit plane for the SAR estimated height values determined in step 90, thereby yielding final height estimates.
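The best-fit plane and residual adjustment of steps 90-92 can be sketched with ordinary least squares. A minimal illustration, assuming gridded height arrays; function names are hypothetical:

```python
import numpy as np

def fit_plane(heights):
    """Least-squares best-fit plane z = a*x + b*y + c to a height grid,
    returned evaluated on the same grid (step 90)."""
    ny, nx = heights.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(nx * ny)])
    coef, *_ = np.linalg.lstsq(A, heights.ravel(), rcond=None)
    return (A @ coef).reshape(ny, nx)

def adjust_heights(ifsar_heights, stereo_heights):
    """Steps 90-92 sketched: subtract the interferometric best-fit
    plane to obtain residuals, then add the SAR-stereo best-fit plane
    to yield the final height estimates."""
    residual = ifsar_heights - fit_plane(ifsar_heights)
    return residual + fit_plane(stereo_heights)
```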
Algorithmic Support
1. Aperture Overlap and Baseline Decorrelation
As indicated above, the SAR images referred to in step 10 above should be obtained via partially overlapped apertures in order to provide for the phase correlation needed for interferometric processing. In this regard, assume the imaged terrain consists of uniformly distributed, uncorrelated point scattering centers. This intuitively satisfying model has been used successfully in other SAR applications and is a worst case with respect to complex correlation. Assume further that the terrain is viewed from along-track aperture centers which differ by the slant-plane angle dφ. It has been shown, see e.g., Zebker et al. supra, that the complex correlation coefficient for such terrain is given by:
(1) ρ = 1 − (2 · Wa / λ) · dφ ,

where Wa is the azimuth resolution of the SAR and λ is the radar wavelength. The correlation coefficient is zero when:

(2) dφ = λ / (2 · Wa) .
Recall, however, that in order to obtain clutter cells with uniformly weighted impulse response width Wa, the synthetic aperture must subtend a slant-plane integration angle θi given by:
(3) θi = 0.886 · λ / (2 · Wa) .

Substituting Eq. (3) into Eq. (2) yields:

(4) dφ = θi / 0.886 .
This expression tells us that the correlation coefficient diminishes to zero when the synthetic aperture centers are separated approximately by the integration angle. Thus, the correlation becomes zero at the point where the apertures become disjoint. Accordingly, to preserve the correlation necessary for interferometry the coherent data collection interval must be decomposed such that the two apertures have some overlap.
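By way of illustration only, Eqs. (1)-(4) can be exercised numerically as follows; the function names and the example radar numbers (a notional 3 cm wavelength and 1 m azimuth resolution) are assumptions, not parameters of the claimed system:

```python
import math

def correlation_coefficient(d_phi, wa, wavelength):
    """Eq. (1): correlation vs. aperture-center separation d_phi
    (radians), for azimuth resolution wa and wavelength in the same
    length units. Clipped at zero, where the apertures are disjoint."""
    return max(0.0, 1.0 - (2.0 * wa / wavelength) * d_phi)

def integration_angle(wa, wavelength):
    """Eq. (3): slant-plane integration angle subtended to obtain a
    uniformly weighted impulse response of width wa."""
    return 0.886 * wavelength / (2.0 * wa)

wl, wa = 0.03, 1.0                     # notional example values
theta_i = integration_angle(wa, wl)
d_phi_zero = theta_i / 0.886           # Eq. (4): correlation reaches zero
```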
The interferometric baseline for the described SPINSAR embodiment is generated by the angle θc between mid-aperture slant plane normal vectors of the two images formed in step 10 above. In the SAR-stereo community θc is called the convergence angle. The relationship between interferometric phase difference, φint, and out-of-plane height, h, has the form:
(5) h = f(θc, x, y) · φint ,
where f(.) decreases monotonically with θc, which in turn increases with dφ. The error standard deviations are related by:
(6) σh = f(θc, x, y) · σφint .
To minimize σh we need a large convergence angle, and hence large dφ; but this conflicts with the need for correlation. The result is a bathtub response for σh versus dφ. The only parameter which improves both convergence and correlation simultaneously is the azimuth resolution Wa, the finer the better. These comments are as far as one can go with theory in the general case. This is because ρ is unpredictable for authentic terrain (although the model used here is thought to be worst case). Moreover, f(θc, x, y) is a complicated function of radar parameters and imaging geometry which must be computed by simulation for each imaging opportunity.
2. Position Phase
As indicated above, residual phase differences between the master and slave images are removed from the master image in step 30 prior to the interferometric processing steps 50-70 of the described embodiment.
In this regard, suppose a scattering center lies at location M(xm,ym) in the master image and at S(xs,ys) in the slave. Then the phase difference is given by:
θdif(xm, ym) = Arg( M(xm, ym) · S̄(xs, ys) ) ,
where Arg(.) is the argument and the overbar denotes the complex conjugate. To get the interferometric phase, one must subtract from θdif the phase difference one would have observed at M(xm,ym) if it corresponded to an in-plane scattering center. This is called the position phase θpos(xm,ym). It is given by:
θpos(xm, ym) = (4π/λ) · [ (RM(xm, ym) − RmG) − (RQ(xq, yq) − RsG) ] ,
where:
RmG = slant range from mid-aperture of master image to the GRP,
RsG = slant range from mid-aperture of slave image to the GRP,
RM(xm, ym) = slant range from mid-aperture of master image to point M(xm, ym), and
RQ(xq, yq) = slant range from mid-aperture of slave image to point Q(xq, yq), where Q(xq, yq) is the location of M(xm, ym) in the coordinate system of the slave image, i.e., the location of the scattering center at M(xm, ym) if the differential layover had been zero.
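The phase-difference extraction itself can be sketched as follows; the complex pixel values are synthetic stand-ins for co-registered master/slave SAR imagery with a known phase offset:

```python
import numpy as np

# Sketch of theta_dif = Arg(M * conj(S)) for co-registered pixels; the
# values are synthetic stand-ins for real SAR imagery.
rng = np.random.default_rng(0)
amp = rng.uniform(0.5, 2.0, size=(4, 4))            # common magnitudes
phase_m = rng.uniform(-np.pi, np.pi, size=(4, 4))   # master phases
true_dif = 0.3                                      # known offset (rad)
M = amp * np.exp(1j * phase_m)                      # master pixels
S = amp * np.exp(1j * (phase_m - true_dif))         # slave lags by true_dif
theta_dif = np.angle(M * np.conj(S))                # Arg(M * conj(S))
assert np.allclose(theta_dif, true_dif)
```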
Let (a, r) be azimuth and range measured from the GRP in the range-Doppler slant plane. Then the first two terms of a binomial expansion of the Pythagorean theorem give a good approximation for the difference between (1) the range from the mid-aperture vehicle location to P(a, r) and (2) the range from the mid-aperture vehicle location to the GRP. The result is:

dR = r + (a² + r²) / (2·RG)
where RG is the range from mid-aperture to the GRP. The associated slant-plane position phase is:

θpos(a, r) = (4π/λ) · [ r + (a² + r²) / (2·RG) ]

For typical imaging geometry RG is much larger than the dimensions of the image, in which case we can further approximate:

θpos(a, r) = (4π/λ) · r
But slant range, r, from the GRP is related to focal plane distance, y, from the GRP by:
r = y · cos(φg) ,

where φg is the mid-aperture grazing angle between the slant and focal planes. Hence:

θpos(x, y) = (4π·cos(φg)/λ) · y
Thus, we can write the expression for position-phase difference between master and slave images as:
θpos(xm, ym) = (4π/λ) · [ ym·cos(φgm) − yq·cos(φgs) ]
Let θrot be the angle between master and slave focal-plane y-axes. Then:
yq = xm·sin(θrot) + ym·cos(θrot)
and:
θpos(xm, ym) = (4π/λ) · [ ym·(cos(φgm) − cos(θrot)·cos(φgs)) − xm·sin(θrot)·cos(φgs) ]
This expression shows that the position phase in the complex-difference image, which is in the focal plane of the master image, is a plane passing through zero phase at the GRP, where xm = ym = 0, and having slopes:

∂θpos/∂xm = −(4π/λ)·sin(θrot)·cos(φgs)    and    ∂θpos/∂ym = (4π/λ)·(cos(φgm) − cos(θrot)·cos(φgs))
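A quick numerical check of this planar behavior; the wavelength, grazing angles, and focal-plane rotation below are illustrative values only:

```python
import numpy as np

# Check that theta_pos(xm, ym) is a plane through zero at the GRP.  All
# radar parameters here are illustrative, not from any real system.
lam = 0.03                                   # assumed 3 cm wavelength
phi_gm, phi_gs = np.radians(35.0), np.radians(35.4)
th_rot = np.radians(0.2)                     # small focal-plane rotation

def theta_pos(xm, ym):
    yq = xm * np.sin(th_rot) + ym * np.cos(th_rot)
    return (4 * np.pi / lam) * (ym * np.cos(phi_gm) - yq * np.cos(phi_gs))

# Slopes predicted by the closed-form expression:
kx = -(4 * np.pi / lam) * np.sin(th_rot) * np.cos(phi_gs)
ky = (4 * np.pi / lam) * (np.cos(phi_gm) - np.cos(th_rot) * np.cos(phi_gs))

assert theta_pos(0.0, 0.0) == 0.0            # zero phase at the GRP
xg, yg = np.meshgrid(np.linspace(-500, 500, 5), np.linspace(-500, 500, 5))
assert np.allclose(theta_pos(xg, yg), kx * xg + ky * yg)   # planar
```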
3. Conversion Factors
As indicated above, the described embodiment requires a conversion factor from interferometric phase difference to out-of-plane height (step 80). In addition, since we compensate the complex-difference image (i.e., obtained in step 50) with the phase equivalent of differential layover to avoid phase unwrapping (step 70), we also need the conversion from differential layover to equivalent interferometric phase difference. Neither of these conversion factors can be expressed as simple formulas for arbitrary radar parameters and imaging geometries. Instead they are developed by simulation for each imaging opportunity. The method is as follows:
Except where noted, all vector quantities are given in Earth-centered fixed (ECF) axes in which the rotation of the Earth has been referred to the radar-carrying satellite.
Spotlight-mode imaging is assumed, with image formation based on the polar algorithm.
Simulation inputs:
Pm,Ps Mid-aperture Earth-center-to-satellite position vectors for the master and slave images.
G Earth-center-to-GRP vector.
Vm,Vs Mid-aperture master and slave satellite velocity vectors.
λ Transmit wavelength.
M(xm,ym) Master image focal-plane locations at which conversion factors are desired.
Coordinate transformations and unit vectors:
Mid-aperture slant ranges and slant-plane unit vectors:

Rm = G − Pm    Rs = G − Ps
uRm = Rm / |Rm|    uRs = Rs / |Rs|
Slant-plane normal vectors:

Nsm = (Vm × Rm) / |Vm × Rm|    Nss = (Vs × Rs) / |Vs × Rs|
Slant-plane to ECF transformations:

Tsxm = [ uRm × Nsm   uRm   Nsm ]    Tsxs = [ uRs × Nss   uRs   Nss ]
Txsm = Tsxm⁻¹    Txss = Tsxs⁻¹
Focal-plane x-axis (cross-range) vectors:

ufxm = (Rm × Nf) / |Rm × Nf|    ufxs = (Rs × Nf) / |Rs × Nf|

where Nf is the unit normal to the (common) focal plane. Focal-plane to ECF transformations:

Tfxm = [ ufxm   Nf × ufxm   Nf ]    Tfxs = [ ufxs   Nf × ufxs   Nf ]
Focal-to-slant-plane transformations:

Tfsm = Txsm · Tfxm    Tfss = Txss · Tfxs
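The chain of unit vectors and transformations above can be sketched as follows. The ECF vectors are illustrative, not drawn from any real orbit, and Nf is taken here as the local vertical at the GRP (an assumption; the text only requires a common focal-plane normal):

```python
import numpy as np

# Sketch of the frame construction: slant-plane and focal-plane axes
# from mid-aperture vectors.  Inputs are illustrative, not a real orbit.
G = np.array([6378e3, 0.0, 0.0])              # Earth-center-to-GRP
Pm = np.array([7000e3, 50e3, 300e3])          # master satellite position
Vm = np.array([0.0, 7.5e3, 0.2e3])            # master satellite velocity

Rm = G - Pm                                   # satellite-to-GRP vector
uRm = Rm / np.linalg.norm(Rm)                 # slant-range unit vector
Nsm = np.cross(Vm, Rm)
Nsm = Nsm / np.linalg.norm(Nsm)               # slant-plane normal

# Slant-plane -> ECF: columns are cross-range, range, and normal axes.
Tsxm = np.column_stack([np.cross(uRm, Nsm), uRm, Nsm])
Txsm = Tsxm.T                                 # inverse of a rotation

Nf = G / np.linalg.norm(G)                    # focal-plane normal (assumed)
ufxm = np.cross(Rm, Nf)
ufxm = ufxm / np.linalg.norm(ufxm)            # focal-plane x-axis
Tfxm = np.column_stack([ufxm, np.cross(Nf, ufxm), Nf])   # focal -> ECF
Tfsm = Txsm @ Tfxm                            # focal -> slant plane

# Every frame is orthonormal, so each transform is a pure rotation:
for T in (Tsxm, Tfxm, Tfsm):
    assert np.allclose(T.T @ T, np.eye(3))
```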
Assume a scattering center elevated by h units at location (xm, ym, h) in the focal plane of the master image. Let (xLm, yLm, 0) be the layover point associated with the elevated point. Both of these points project to the same (a, r) point in the slant plane; thus, with Tfsmi,j denoting the (i, j) element of Tfsm:

(a, r)ᵀ = [first two rows of Tfsm] · (xm, ym, h)ᵀ = [first two rows of Tfsm] · (xLm, yLm, 0)ᵀ

Equating (a, r) in these expressions gives the layover point in terms of the elevated point as follows:

| xLm |   | Tfsm0,0  Tfsm0,1 |⁻¹   | Tfsm0,0  Tfsm0,1  Tfsm0,2 |   | xm |
| yLm | = | Tfsm1,0  Tfsm1,1 |      | Tfsm1,0  Tfsm1,1  Tfsm1,2 | · | ym |
                                                                    | h  |
where we have used projections rather than iso-range spheres and iso-Doppler cones with negligible error for realistic imaging geometries.
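The layover solution can be verified numerically; the focal-to-slant rotation Tfs below is an arbitrary illustrative rotation standing in for one computed from the geometry above:

```python
import numpy as np

# Verify that the layover point (xLm, yLm, 0) projects to the same
# slant-plane (a, r) as the elevated point (xm, ym, h).  Tfs is an
# illustrative stand-in for the true focal-to-slant transform Tfsm.
cg, sg = np.cos(0.6), np.sin(0.6)       # grazing-like tilt (illustrative)
cr, sr = np.cos(0.1), np.sin(0.1)       # small in-plane rotation
Tfs = (np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
       @ np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]]))

xm, ym, h = 120.0, -80.0, 25.0          # elevated scattering center
a_r = (Tfs @ np.array([xm, ym, h]))[:2] # its slant-plane (a, r)

# Invert the upper-left 2x2 block against the full upper 2x3 block:
xLm, yLm = np.linalg.solve(Tfs[:2, :2], Tfs[:2, :] @ np.array([xm, ym, h]))

# The in-plane layover point lands on the same (a, r):
assert np.allclose((Tfs @ np.array([xLm, yLm, 0.0]))[:2], a_r)
```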
The Interferometric Phase Difference and Conversion Factor (i.e., for step 80). The slant-range differences from aperture center to the elevated and layover points for the master and slave images are as follows:

dRhm = |Eh − Pm| − RmG    dRhs = |Eh − Ps| − RsG
dRLm = |EL − Pm| − RmG    dRLs = |EL − Ps| − RsG

where Eh = G + Tfxm·(xm, ym, h)ᵀ and EL = G + Tfxm·(xLm, yLm, 0)ᵀ are the ECF locations of the elevated and layover points, and the vertical bars denote the magnitude of the enclosed vector quantity. The interferometric phase difference is then the master-slave difference for the elevated point minus the master-slave difference for the layover point. The latter difference is the position phase which would have been observed if the scattering center had truly been at the layover point; thus,

φint = (4π/λ) · [ (dRhm − dRhs) − (dRLm − dRLs) ]
The phase-to-height conversion factor Kφh is given by:

Kφh = h / φint

For typical image sizes and geometries the conversion factor has a planar variation with (xm, ym); hence, it need only be evaluated at three or more points, fit to a plane, and interpolated at every interferogram point.
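The evaluate, fit, and interpolate procedure can be sketched as follows; the plane coefficients below are hypothetical stand-ins for values a geometry simulation would produce:

```python
import numpy as np

# Sketch of the evaluate / fit / interpolate procedure for the
# phase-to-height factor.  The plane coefficients are hypothetical.
k0, kx, ky = 2.4, 1.5e-4, -3.0e-5             # hypothetical plane
pts = np.array([[-1000.0, -1000.0], [1000.0, -800.0],
                [0.0, 1200.0], [800.0, 500.0]])   # evaluation points
K = k0 + kx * pts[:, 0] + ky * pts[:, 1]      # "simulated" factor values

# Least-squares plane fit K ~ c0 + cx*x + cy*y:
A = np.column_stack([np.ones(len(pts)), pts])
c0, cx, cy = np.linalg.lstsq(A, K, rcond=None)[0]
assert np.allclose([c0, cx, cy], [k0, kx, ky])

# Interpolate the factor at an arbitrary interferogram point:
x, y = 250.0, -400.0
assert np.allclose(c0 + cx * x + cy * y, k0 + kx * x + ky * y)
```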
Conversion From Differential Layover to Interferometric Phase Difference (i.e., for step 70). The differential layover found by a spatially variant correlation of small regions in the master and slave images (i.e., via step 40) is transformed to equivalent interferometric
phase difference and used to compensate the complex difference image before the phase is extracted (step 70). The focal plane location of the elevated point in the slave image is denoted as (xs,ys,hs). The transformation from master to slave focal plane is:
Tms = Txfs · Tfxm , where Txfs = Tfxs⁻¹. Then:

(xs, ys, hs)ᵀ = Tms · (xm, ym, h)ᵀ
The out-of-plane elevations are identical in the master and slave images since the focal planes have the same normal vector and differ by only a rotation. The layover point (xLs,yLs) in the focal plane of the slave image is then:
| xLs |   | Tfss0,0  Tfss0,1 |⁻¹   | Tfss0,0  Tfss0,1  Tfss0,2 |   | xs |
| yLs | = | Tfss1,0  Tfss1,1 |      | Tfss1,0  Tfss1,1  Tfss1,2 | · | ys |
                                                                    | h  |
Before we can compute the differential layover, this point must be referred back to the master image. Denote this referred point by (xLsm, yLsm); then,

(xLsm, yLsm, 0)ᵀ = Tsm · (xLs, yLs, 0)ᵀ , where Tsm = Tms⁻¹.
The differential layover is therefore:

dLxm = xLsm − xLm    dLym = yLsm − yLm
Three conversions from differential layover to height are available:

KxLh = dLxm / h    KyLh = dLym / h    KepiLh = √(dLxm² + dLym²) / h
These conversions have negligible variation with (xm, ym). In practice, it is computationally efficient to rotate both the master and slave images by the epipolar angle (the angle which makes KxLh or KyLh zero), in which case the layover is entirely in the epipolar direction. In this situation the correlation needed to determine the differential layover is one-dimensional rather than two-dimensional. When epipolar correlation is used, one obtains the layover associated with the numerator of KepiLh and uses that expression for the conversion. (The correct sign must be applied to the square root.) Finally, the conversion from layover to equivalent interferometric phase is obtained by combining the conversion factors:
KLφ = 1 / ( KepiLh · Kφh(xm, ym) )
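The one-dimensional epipolar correlation can be sketched as follows; the profiles are synthetic, with a known integer shift standing in for the epipolar layover:

```python
import numpy as np

# After the epipolar rotation the layover is one-dimensional, so a 1-D
# correlation of matching rows recovers it.  Profiles are synthetic.
rng = np.random.default_rng(1)
master = rng.uniform(size=256)              # a master-image row
true_shift = 7                              # layover, in pixels
slave = np.roll(master, true_shift)         # slave row, shifted copy

# Normalized cross-correlation over candidate integer lags:
lags = np.arange(-20, 21)
m0 = master - master.mean()
scores = []
for k in lags:
    s = np.roll(slave, -k) - slave.mean()
    scores.append((m0 * s).sum() /
                  np.sqrt((m0**2).sum() * (s**2).sum()))
best = int(lags[int(np.argmax(scores))])
assert best == true_shift                   # correlation peak at the layover
```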
* * *
The above-described embodiment is not intended to limit the scope of the present invention. Other embodiments, extensions and modifications will be apparent to those skilled in the art. For example, the invention may be employed in conjunction with three or more SAR images to yield height estimates for a region of interest. All such extensions are intended to be within the scope of the present invention as defined by the claims that follow.